Oct  7 08:16:57 np0005473739 kernel: Linux version 5.14.0-620.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-11), GNU ld version 2.35.2-67.el9) #1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025
Oct  7 08:16:57 np0005473739 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Oct  7 08:16:57 np0005473739 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  7 08:16:57 np0005473739 kernel: BIOS-provided physical RAM map:
Oct  7 08:16:57 np0005473739 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Oct  7 08:16:57 np0005473739 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Oct  7 08:16:57 np0005473739 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Oct  7 08:16:57 np0005473739 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Oct  7 08:16:57 np0005473739 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Oct  7 08:16:57 np0005473739 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Oct  7 08:16:57 np0005473739 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Oct  7 08:16:57 np0005473739 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Oct  7 08:16:57 np0005473739 kernel: NX (Execute Disable) protection: active
Oct  7 08:16:57 np0005473739 kernel: APIC: Static calls initialized
Oct  7 08:16:57 np0005473739 kernel: SMBIOS 2.8 present.
Oct  7 08:16:57 np0005473739 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Oct  7 08:16:57 np0005473739 kernel: Hypervisor detected: KVM
Oct  7 08:16:57 np0005473739 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct  7 08:16:57 np0005473739 kernel: kvm-clock: using sched offset of 8909329230 cycles
Oct  7 08:16:57 np0005473739 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct  7 08:16:57 np0005473739 kernel: tsc: Detected 2799.998 MHz processor
Oct  7 08:16:57 np0005473739 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Oct  7 08:16:57 np0005473739 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Oct  7 08:16:57 np0005473739 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Oct  7 08:16:57 np0005473739 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Oct  7 08:16:57 np0005473739 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Oct  7 08:16:57 np0005473739 kernel: Using GB pages for direct mapping
Oct  7 08:16:57 np0005473739 kernel: RAMDISK: [mem 0x2d7c4000-0x32bd9fff]
Oct  7 08:16:57 np0005473739 kernel: ACPI: Early table checksum verification disabled
Oct  7 08:16:57 np0005473739 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Oct  7 08:16:57 np0005473739 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  7 08:16:57 np0005473739 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  7 08:16:57 np0005473739 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  7 08:16:57 np0005473739 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Oct  7 08:16:57 np0005473739 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  7 08:16:57 np0005473739 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Oct  7 08:16:57 np0005473739 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Oct  7 08:16:57 np0005473739 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Oct  7 08:16:57 np0005473739 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Oct  7 08:16:57 np0005473739 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Oct  7 08:16:57 np0005473739 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Oct  7 08:16:57 np0005473739 kernel: No NUMA configuration found
Oct  7 08:16:57 np0005473739 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Oct  7 08:16:57 np0005473739 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Oct  7 08:16:57 np0005473739 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Oct  7 08:16:57 np0005473739 kernel: Zone ranges:
Oct  7 08:16:57 np0005473739 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Oct  7 08:16:57 np0005473739 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Oct  7 08:16:57 np0005473739 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Oct  7 08:16:57 np0005473739 kernel:  Device   empty
Oct  7 08:16:57 np0005473739 kernel: Movable zone start for each node
Oct  7 08:16:57 np0005473739 kernel: Early memory node ranges
Oct  7 08:16:57 np0005473739 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Oct  7 08:16:57 np0005473739 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Oct  7 08:16:57 np0005473739 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Oct  7 08:16:57 np0005473739 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Oct  7 08:16:57 np0005473739 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct  7 08:16:57 np0005473739 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Oct  7 08:16:57 np0005473739 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Oct  7 08:16:57 np0005473739 kernel: ACPI: PM-Timer IO Port: 0x608
Oct  7 08:16:57 np0005473739 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct  7 08:16:57 np0005473739 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct  7 08:16:57 np0005473739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Oct  7 08:16:57 np0005473739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct  7 08:16:57 np0005473739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct  7 08:16:57 np0005473739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct  7 08:16:57 np0005473739 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct  7 08:16:57 np0005473739 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct  7 08:16:57 np0005473739 kernel: TSC deadline timer available
Oct  7 08:16:57 np0005473739 kernel: CPU topo: Max. logical packages:   8
Oct  7 08:16:57 np0005473739 kernel: CPU topo: Max. logical dies:       8
Oct  7 08:16:57 np0005473739 kernel: CPU topo: Max. dies per package:   1
Oct  7 08:16:57 np0005473739 kernel: CPU topo: Max. threads per core:   1
Oct  7 08:16:57 np0005473739 kernel: CPU topo: Num. cores per package:     1
Oct  7 08:16:57 np0005473739 kernel: CPU topo: Num. threads per package:   1
Oct  7 08:16:57 np0005473739 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Oct  7 08:16:57 np0005473739 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Oct  7 08:16:57 np0005473739 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Oct  7 08:16:57 np0005473739 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Oct  7 08:16:57 np0005473739 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Oct  7 08:16:57 np0005473739 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Oct  7 08:16:57 np0005473739 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Oct  7 08:16:57 np0005473739 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Oct  7 08:16:57 np0005473739 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Oct  7 08:16:57 np0005473739 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Oct  7 08:16:57 np0005473739 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Oct  7 08:16:57 np0005473739 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Oct  7 08:16:57 np0005473739 kernel: Booting paravirtualized kernel on KVM
Oct  7 08:16:57 np0005473739 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct  7 08:16:57 np0005473739 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Oct  7 08:16:57 np0005473739 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Oct  7 08:16:57 np0005473739 kernel: kvm-guest: PV spinlocks disabled, no host support
Oct  7 08:16:57 np0005473739 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  7 08:16:57 np0005473739 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64", will be passed to user space.
Oct  7 08:16:57 np0005473739 kernel: random: crng init done
Oct  7 08:16:57 np0005473739 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: Fallback order for Node 0: 0 
Oct  7 08:16:57 np0005473739 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Oct  7 08:16:57 np0005473739 kernel: Policy zone: Normal
Oct  7 08:16:57 np0005473739 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct  7 08:16:57 np0005473739 kernel: software IO TLB: area num 8.
Oct  7 08:16:57 np0005473739 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Oct  7 08:16:57 np0005473739 kernel: ftrace: allocating 49370 entries in 193 pages
Oct  7 08:16:57 np0005473739 kernel: ftrace: allocated 193 pages with 3 groups
Oct  7 08:16:57 np0005473739 kernel: Dynamic Preempt: voluntary
Oct  7 08:16:57 np0005473739 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct  7 08:16:57 np0005473739 kernel: rcu: 	RCU event tracing is enabled.
Oct  7 08:16:57 np0005473739 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Oct  7 08:16:57 np0005473739 kernel: 	Trampoline variant of Tasks RCU enabled.
Oct  7 08:16:57 np0005473739 kernel: 	Rude variant of Tasks RCU enabled.
Oct  7 08:16:57 np0005473739 kernel: 	Tracing variant of Tasks RCU enabled.
Oct  7 08:16:57 np0005473739 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct  7 08:16:57 np0005473739 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Oct  7 08:16:57 np0005473739 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  7 08:16:57 np0005473739 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  7 08:16:57 np0005473739 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Oct  7 08:16:57 np0005473739 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Oct  7 08:16:57 np0005473739 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Oct  7 08:16:57 np0005473739 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Oct  7 08:16:57 np0005473739 kernel: Console: colour VGA+ 80x25
Oct  7 08:16:57 np0005473739 kernel: printk: console [ttyS0] enabled
Oct  7 08:16:57 np0005473739 kernel: ACPI: Core revision 20230331
Oct  7 08:16:57 np0005473739 kernel: APIC: Switch to symmetric I/O mode setup
Oct  7 08:16:57 np0005473739 kernel: x2apic enabled
Oct  7 08:16:57 np0005473739 kernel: APIC: Switched APIC routing to: physical x2apic
Oct  7 08:16:57 np0005473739 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Oct  7 08:16:57 np0005473739 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Oct  7 08:16:57 np0005473739 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Oct  7 08:16:57 np0005473739 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Oct  7 08:16:57 np0005473739 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Oct  7 08:16:57 np0005473739 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct  7 08:16:57 np0005473739 kernel: Spectre V2 : Mitigation: Retpolines
Oct  7 08:16:57 np0005473739 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Oct  7 08:16:57 np0005473739 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Oct  7 08:16:57 np0005473739 kernel: RETBleed: Mitigation: untrained return thunk
Oct  7 08:16:57 np0005473739 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct  7 08:16:57 np0005473739 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Oct  7 08:16:57 np0005473739 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Oct  7 08:16:57 np0005473739 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Oct  7 08:16:57 np0005473739 kernel: x86/bugs: return thunk changed
Oct  7 08:16:57 np0005473739 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Oct  7 08:16:57 np0005473739 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct  7 08:16:57 np0005473739 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct  7 08:16:57 np0005473739 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct  7 08:16:57 np0005473739 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Oct  7 08:16:57 np0005473739 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Oct  7 08:16:57 np0005473739 kernel: Freeing SMP alternatives memory: 40K
Oct  7 08:16:57 np0005473739 kernel: pid_max: default: 32768 minimum: 301
Oct  7 08:16:57 np0005473739 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Oct  7 08:16:57 np0005473739 kernel: landlock: Up and running.
Oct  7 08:16:57 np0005473739 kernel: Yama: becoming mindful.
Oct  7 08:16:57 np0005473739 kernel: SELinux:  Initializing.
Oct  7 08:16:57 np0005473739 kernel: LSM support for eBPF active
Oct  7 08:16:57 np0005473739 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Oct  7 08:16:57 np0005473739 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Oct  7 08:16:57 np0005473739 kernel: ... version:                0
Oct  7 08:16:57 np0005473739 kernel: ... bit width:              48
Oct  7 08:16:57 np0005473739 kernel: ... generic registers:      6
Oct  7 08:16:57 np0005473739 kernel: ... value mask:             0000ffffffffffff
Oct  7 08:16:57 np0005473739 kernel: ... max period:             00007fffffffffff
Oct  7 08:16:57 np0005473739 kernel: ... fixed-purpose events:   0
Oct  7 08:16:57 np0005473739 kernel: ... event mask:             000000000000003f
Oct  7 08:16:57 np0005473739 kernel: signal: max sigframe size: 1776
Oct  7 08:16:57 np0005473739 kernel: rcu: Hierarchical SRCU implementation.
Oct  7 08:16:57 np0005473739 kernel: rcu: 	Max phase no-delay instances is 400.
Oct  7 08:16:57 np0005473739 kernel: smp: Bringing up secondary CPUs ...
Oct  7 08:16:57 np0005473739 kernel: smpboot: x86: Booting SMP configuration:
Oct  7 08:16:57 np0005473739 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Oct  7 08:16:57 np0005473739 kernel: smp: Brought up 1 node, 8 CPUs
Oct  7 08:16:57 np0005473739 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Oct  7 08:16:57 np0005473739 kernel: node 0 deferred pages initialised in 18ms
Oct  7 08:16:57 np0005473739 kernel: Memory: 7765356K/8388068K available (16384K kernel code, 5784K rwdata, 13996K rodata, 4068K init, 7304K bss, 616500K reserved, 0K cma-reserved)
Oct  7 08:16:57 np0005473739 kernel: devtmpfs: initialized
Oct  7 08:16:57 np0005473739 kernel: x86/mm: Memory block size: 128MB
Oct  7 08:16:57 np0005473739 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct  7 08:16:57 np0005473739 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: pinctrl core: initialized pinctrl subsystem
Oct  7 08:16:57 np0005473739 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct  7 08:16:57 np0005473739 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Oct  7 08:16:57 np0005473739 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct  7 08:16:57 np0005473739 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct  7 08:16:57 np0005473739 kernel: audit: initializing netlink subsys (disabled)
Oct  7 08:16:57 np0005473739 kernel: audit: type=2000 audit(1759839415.993:1): state=initialized audit_enabled=0 res=1
Oct  7 08:16:57 np0005473739 kernel: thermal_sys: Registered thermal governor 'fair_share'
Oct  7 08:16:57 np0005473739 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct  7 08:16:57 np0005473739 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct  7 08:16:57 np0005473739 kernel: cpuidle: using governor menu
Oct  7 08:16:57 np0005473739 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct  7 08:16:57 np0005473739 kernel: PCI: Using configuration type 1 for base access
Oct  7 08:16:57 np0005473739 kernel: PCI: Using configuration type 1 for extended access
Oct  7 08:16:57 np0005473739 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct  7 08:16:57 np0005473739 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Oct  7 08:16:57 np0005473739 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Oct  7 08:16:57 np0005473739 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Oct  7 08:16:57 np0005473739 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Oct  7 08:16:57 np0005473739 kernel: Demotion targets for Node 0: null
Oct  7 08:16:57 np0005473739 kernel: cryptd: max_cpu_qlen set to 1000
Oct  7 08:16:57 np0005473739 kernel: ACPI: Added _OSI(Module Device)
Oct  7 08:16:57 np0005473739 kernel: ACPI: Added _OSI(Processor Device)
Oct  7 08:16:57 np0005473739 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct  7 08:16:57 np0005473739 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct  7 08:16:57 np0005473739 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct  7 08:16:57 np0005473739 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Oct  7 08:16:57 np0005473739 kernel: ACPI: Interpreter enabled
Oct  7 08:16:57 np0005473739 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Oct  7 08:16:57 np0005473739 kernel: ACPI: Using IOAPIC for interrupt routing
Oct  7 08:16:57 np0005473739 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct  7 08:16:57 np0005473739 kernel: PCI: Using E820 reservations for host bridge windows
Oct  7 08:16:57 np0005473739 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Oct  7 08:16:57 np0005473739 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct  7 08:16:57 np0005473739 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [3] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [4] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [5] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [6] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [7] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [8] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [9] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [10] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [11] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [12] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [13] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [14] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [15] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [16] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [17] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [18] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [19] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [20] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [21] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [22] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [23] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [24] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [25] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [26] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [27] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [28] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [29] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [30] registered
Oct  7 08:16:57 np0005473739 kernel: acpiphp: Slot [31] registered
Oct  7 08:16:57 np0005473739 kernel: PCI host bridge to bus 0000:00
Oct  7 08:16:57 np0005473739 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Oct  7 08:16:57 np0005473739 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Oct  7 08:16:57 np0005473739 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct  7 08:16:57 np0005473739 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Oct  7 08:16:57 np0005473739 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Oct  7 08:16:57 np0005473739 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Oct  7 08:16:57 np0005473739 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct  7 08:16:57 np0005473739 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct  7 08:16:57 np0005473739 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct  7 08:16:57 np0005473739 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct  7 08:16:57 np0005473739 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct  7 08:16:57 np0005473739 kernel: iommu: Default domain type: Translated
Oct  7 08:16:57 np0005473739 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct  7 08:16:57 np0005473739 kernel: SCSI subsystem initialized
Oct  7 08:16:57 np0005473739 kernel: ACPI: bus type USB registered
Oct  7 08:16:57 np0005473739 kernel: usbcore: registered new interface driver usbfs
Oct  7 08:16:57 np0005473739 kernel: usbcore: registered new interface driver hub
Oct  7 08:16:57 np0005473739 kernel: usbcore: registered new device driver usb
Oct  7 08:16:57 np0005473739 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct  7 08:16:57 np0005473739 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Oct  7 08:16:57 np0005473739 kernel: PTP clock support registered
Oct  7 08:16:57 np0005473739 kernel: EDAC MC: Ver: 3.0.0
Oct  7 08:16:57 np0005473739 kernel: NetLabel: Initializing
Oct  7 08:16:57 np0005473739 kernel: NetLabel:  domain hash size = 128
Oct  7 08:16:57 np0005473739 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Oct  7 08:16:57 np0005473739 kernel: NetLabel:  unlabeled traffic allowed by default
Oct  7 08:16:57 np0005473739 kernel: PCI: Using ACPI for IRQ routing
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Oct  7 08:16:57 np0005473739 kernel: vgaarb: loaded
Oct  7 08:16:57 np0005473739 kernel: clocksource: Switched to clocksource kvm-clock
Oct  7 08:16:57 np0005473739 kernel: VFS: Disk quotas dquot_6.6.0
Oct  7 08:16:57 np0005473739 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct  7 08:16:57 np0005473739 kernel: pnp: PnP ACPI init
Oct  7 08:16:57 np0005473739 kernel: pnp: PnP ACPI: found 5 devices
Oct  7 08:16:57 np0005473739 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct  7 08:16:57 np0005473739 kernel: NET: Registered PF_INET protocol family
Oct  7 08:16:57 np0005473739 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct  7 08:16:57 np0005473739 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct  7 08:16:57 np0005473739 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct  7 08:16:57 np0005473739 kernel: NET: Registered PF_XDP protocol family
Oct  7 08:16:57 np0005473739 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Oct  7 08:16:57 np0005473739 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Oct  7 08:16:57 np0005473739 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct  7 08:16:57 np0005473739 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Oct  7 08:16:57 np0005473739 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct  7 08:16:57 np0005473739 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Oct  7 08:16:57 np0005473739 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x140 took 72002 usecs
Oct  7 08:16:57 np0005473739 kernel: PCI: CLS 0 bytes, default 64
Oct  7 08:16:57 np0005473739 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct  7 08:16:57 np0005473739 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Oct  7 08:16:57 np0005473739 kernel: Trying to unpack rootfs image as initramfs...
Oct  7 08:16:57 np0005473739 kernel: ACPI: bus type thunderbolt registered
Oct  7 08:16:57 np0005473739 kernel: Initialise system trusted keyrings
Oct  7 08:16:57 np0005473739 kernel: Key type blacklist registered
Oct  7 08:16:57 np0005473739 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Oct  7 08:16:57 np0005473739 kernel: zbud: loaded
Oct  7 08:16:57 np0005473739 kernel: integrity: Platform Keyring initialized
Oct  7 08:16:57 np0005473739 kernel: integrity: Machine keyring initialized
Oct  7 08:16:57 np0005473739 kernel: Freeing initrd memory: 86104K
Oct  7 08:16:57 np0005473739 kernel: NET: Registered PF_ALG protocol family
Oct  7 08:16:57 np0005473739 kernel: xor: automatically using best checksumming function   avx
Oct  7 08:16:57 np0005473739 kernel: Key type asymmetric registered
Oct  7 08:16:57 np0005473739 kernel: Asymmetric key parser 'x509' registered
Oct  7 08:16:57 np0005473739 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Oct  7 08:16:57 np0005473739 kernel: io scheduler mq-deadline registered
Oct  7 08:16:57 np0005473739 kernel: io scheduler kyber registered
Oct  7 08:16:57 np0005473739 kernel: io scheduler bfq registered
Oct  7 08:16:57 np0005473739 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Oct  7 08:16:57 np0005473739 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Oct  7 08:16:57 np0005473739 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Oct  7 08:16:57 np0005473739 kernel: ACPI: button: Power Button [PWRF]
Oct  7 08:16:57 np0005473739 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Oct  7 08:16:57 np0005473739 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct  7 08:16:57 np0005473739 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct  7 08:16:57 np0005473739 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct  7 08:16:57 np0005473739 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct  7 08:16:57 np0005473739 kernel: Non-volatile memory driver v1.3
Oct  7 08:16:57 np0005473739 kernel: rdac: device handler registered
Oct  7 08:16:57 np0005473739 kernel: hp_sw: device handler registered
Oct  7 08:16:57 np0005473739 kernel: emc: device handler registered
Oct  7 08:16:57 np0005473739 kernel: alua: device handler registered
Oct  7 08:16:57 np0005473739 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Oct  7 08:16:57 np0005473739 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Oct  7 08:16:57 np0005473739 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Oct  7 08:16:57 np0005473739 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Oct  7 08:16:57 np0005473739 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Oct  7 08:16:57 np0005473739 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Oct  7 08:16:57 np0005473739 kernel: usb usb1: Product: UHCI Host Controller
Oct  7 08:16:57 np0005473739 kernel: usb usb1: Manufacturer: Linux 5.14.0-620.el9.x86_64 uhci_hcd
Oct  7 08:16:57 np0005473739 kernel: usb usb1: SerialNumber: 0000:00:01.2
Oct  7 08:16:57 np0005473739 kernel: hub 1-0:1.0: USB hub found
Oct  7 08:16:57 np0005473739 kernel: hub 1-0:1.0: 2 ports detected
Oct  7 08:16:57 np0005473739 kernel: usbcore: registered new interface driver usbserial_generic
Oct  7 08:16:57 np0005473739 kernel: usbserial: USB Serial support registered for generic
Oct  7 08:16:57 np0005473739 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct  7 08:16:57 np0005473739 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct  7 08:16:57 np0005473739 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct  7 08:16:57 np0005473739 kernel: mousedev: PS/2 mouse device common for all mice
Oct  7 08:16:57 np0005473739 kernel: rtc_cmos 00:04: RTC can wake from S4
Oct  7 08:16:57 np0005473739 kernel: rtc_cmos 00:04: registered as rtc0
Oct  7 08:16:57 np0005473739 kernel: rtc_cmos 00:04: setting system clock to 2025-10-07T12:16:56 UTC (1759839416)
Oct  7 08:16:57 np0005473739 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Oct  7 08:16:57 np0005473739 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Oct  7 08:16:57 np0005473739 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Oct  7 08:16:57 np0005473739 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Oct  7 08:16:57 np0005473739 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Oct  7 08:16:57 np0005473739 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct  7 08:16:57 np0005473739 kernel: usbcore: registered new interface driver usbhid
Oct  7 08:16:57 np0005473739 kernel: usbhid: USB HID core driver
Oct  7 08:16:57 np0005473739 kernel: drop_monitor: Initializing network drop monitor service
Oct  7 08:16:57 np0005473739 kernel: Initializing XFRM netlink socket
Oct  7 08:16:57 np0005473739 kernel: NET: Registered PF_INET6 protocol family
Oct  7 08:16:57 np0005473739 kernel: Segment Routing with IPv6
Oct  7 08:16:57 np0005473739 kernel: NET: Registered PF_PACKET protocol family
Oct  7 08:16:57 np0005473739 kernel: mpls_gso: MPLS GSO support
Oct  7 08:16:57 np0005473739 kernel: IPI shorthand broadcast: enabled
Oct  7 08:16:57 np0005473739 kernel: AVX2 version of gcm_enc/dec engaged.
Oct  7 08:16:57 np0005473739 kernel: AES CTR mode by8 optimization enabled
Oct  7 08:16:57 np0005473739 kernel: sched_clock: Marking stable (1234001987, 149914077)->(1568214780, -184298716)
Oct  7 08:16:57 np0005473739 kernel: registered taskstats version 1
Oct  7 08:16:57 np0005473739 kernel: Loading compiled-in X.509 certificates
Oct  7 08:16:57 np0005473739 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  7 08:16:57 np0005473739 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Oct  7 08:16:57 np0005473739 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Oct  7 08:16:57 np0005473739 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Oct  7 08:16:57 np0005473739 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Oct  7 08:16:57 np0005473739 kernel: Demotion targets for Node 0: null
Oct  7 08:16:57 np0005473739 kernel: page_owner is disabled
Oct  7 08:16:57 np0005473739 kernel: Key type .fscrypt registered
Oct  7 08:16:57 np0005473739 kernel: Key type fscrypt-provisioning registered
Oct  7 08:16:57 np0005473739 kernel: Key type big_key registered
Oct  7 08:16:57 np0005473739 kernel: Key type encrypted registered
Oct  7 08:16:57 np0005473739 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct  7 08:16:57 np0005473739 kernel: Loading compiled-in module X.509 certificates
Oct  7 08:16:57 np0005473739 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 4ff821c4997fbb659836adb05f5bc400c914e148'
Oct  7 08:16:57 np0005473739 kernel: ima: Allocated hash algorithm: sha256
Oct  7 08:16:57 np0005473739 kernel: ima: No architecture policies found
Oct  7 08:16:57 np0005473739 kernel: evm: Initialising EVM extended attributes:
Oct  7 08:16:57 np0005473739 kernel: evm: security.selinux
Oct  7 08:16:57 np0005473739 kernel: evm: security.SMACK64 (disabled)
Oct  7 08:16:57 np0005473739 kernel: evm: security.SMACK64EXEC (disabled)
Oct  7 08:16:57 np0005473739 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Oct  7 08:16:57 np0005473739 kernel: evm: security.SMACK64MMAP (disabled)
Oct  7 08:16:57 np0005473739 kernel: evm: security.apparmor (disabled)
Oct  7 08:16:57 np0005473739 kernel: evm: security.ima
Oct  7 08:16:57 np0005473739 kernel: evm: security.capability
Oct  7 08:16:57 np0005473739 kernel: evm: HMAC attrs: 0x1
Oct  7 08:16:57 np0005473739 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Oct  7 08:16:57 np0005473739 kernel: Running certificate verification RSA selftest
Oct  7 08:16:57 np0005473739 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Oct  7 08:16:57 np0005473739 kernel: Running certificate verification ECDSA selftest
Oct  7 08:16:57 np0005473739 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Oct  7 08:16:57 np0005473739 kernel: clk: Disabling unused clocks
Oct  7 08:16:57 np0005473739 kernel: Freeing unused decrypted memory: 2028K
Oct  7 08:16:57 np0005473739 kernel: Freeing unused kernel image (initmem) memory: 4068K
Oct  7 08:16:57 np0005473739 kernel: Write protecting the kernel read-only data: 30720k
Oct  7 08:16:57 np0005473739 kernel: Freeing unused kernel image (rodata/data gap) memory: 340K
Oct  7 08:16:57 np0005473739 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Oct  7 08:16:57 np0005473739 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Oct  7 08:16:57 np0005473739 kernel: usb 1-1: Product: QEMU USB Tablet
Oct  7 08:16:57 np0005473739 kernel: usb 1-1: Manufacturer: QEMU
Oct  7 08:16:57 np0005473739 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Oct  7 08:16:57 np0005473739 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Oct  7 08:16:57 np0005473739 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Oct  7 08:16:57 np0005473739 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Oct  7 08:16:57 np0005473739 kernel: Run /init as init process
Oct  7 08:16:57 np0005473739 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  7 08:16:57 np0005473739 systemd: Detected virtualization kvm.
Oct  7 08:16:57 np0005473739 systemd: Detected architecture x86-64.
Oct  7 08:16:57 np0005473739 systemd: Running in initrd.
Oct  7 08:16:57 np0005473739 systemd: No hostname configured, using default hostname.
Oct  7 08:16:57 np0005473739 systemd: Hostname set to <localhost>.
Oct  7 08:16:57 np0005473739 systemd: Initializing machine ID from VM UUID.
Oct  7 08:16:57 np0005473739 systemd: Queued start job for default target Initrd Default Target.
Oct  7 08:16:57 np0005473739 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  7 08:16:57 np0005473739 systemd: Reached target Local Encrypted Volumes.
Oct  7 08:16:57 np0005473739 systemd: Reached target Initrd /usr File System.
Oct  7 08:16:57 np0005473739 systemd: Reached target Local File Systems.
Oct  7 08:16:57 np0005473739 systemd: Reached target Path Units.
Oct  7 08:16:57 np0005473739 systemd: Reached target Slice Units.
Oct  7 08:16:57 np0005473739 systemd: Reached target Swaps.
Oct  7 08:16:57 np0005473739 systemd: Reached target Timer Units.
Oct  7 08:16:57 np0005473739 systemd: Listening on D-Bus System Message Bus Socket.
Oct  7 08:16:57 np0005473739 systemd: Listening on Journal Socket (/dev/log).
Oct  7 08:16:57 np0005473739 systemd: Listening on Journal Socket.
Oct  7 08:16:57 np0005473739 systemd: Listening on udev Control Socket.
Oct  7 08:16:57 np0005473739 systemd: Listening on udev Kernel Socket.
Oct  7 08:16:57 np0005473739 systemd: Reached target Socket Units.
Oct  7 08:16:57 np0005473739 systemd: Starting Create List of Static Device Nodes...
Oct  7 08:16:57 np0005473739 systemd: Starting Journal Service...
Oct  7 08:16:57 np0005473739 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  7 08:16:57 np0005473739 systemd: Starting Apply Kernel Variables...
Oct  7 08:16:57 np0005473739 systemd: Starting Create System Users...
Oct  7 08:16:57 np0005473739 systemd: Starting Setup Virtual Console...
Oct  7 08:16:57 np0005473739 systemd: Finished Create List of Static Device Nodes.
Oct  7 08:16:57 np0005473739 systemd: Finished Apply Kernel Variables.
Oct  7 08:16:57 np0005473739 systemd: Finished Create System Users.
Oct  7 08:16:57 np0005473739 systemd-journald[310]: Journal started
Oct  7 08:16:57 np0005473739 systemd-journald[310]: Runtime Journal (/run/log/journal/955d3c0d1dc54415a876b62089e34180) is 8.0M, max 153.5M, 145.5M free.
Oct  7 08:16:57 np0005473739 systemd-sysusers[313]: Creating group 'users' with GID 100.
Oct  7 08:16:57 np0005473739 systemd-sysusers[313]: Creating group 'dbus' with GID 81.
Oct  7 08:16:57 np0005473739 systemd-sysusers[313]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Oct  7 08:16:57 np0005473739 systemd: Started Journal Service.
Oct  7 08:16:57 np0005473739 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  7 08:16:57 np0005473739 systemd[1]: Starting Create Volatile Files and Directories...
Oct  7 08:16:57 np0005473739 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  7 08:16:57 np0005473739 systemd[1]: Finished Create Volatile Files and Directories.
Oct  7 08:16:58 np0005473739 systemd[1]: Finished Setup Virtual Console.
Oct  7 08:16:58 np0005473739 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Oct  7 08:16:58 np0005473739 systemd[1]: Starting dracut cmdline hook...
Oct  7 08:16:58 np0005473739 dracut-cmdline[330]: dracut-9 dracut-057-102.git20250818.el9
Oct  7 08:16:58 np0005473739 dracut-cmdline[330]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-620.el9.x86_64 root=UUID=1631a6ad-43b8-436d-ae76-16fa14b94458 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Oct  7 08:16:58 np0005473739 systemd[1]: Finished dracut cmdline hook.
Oct  7 08:16:58 np0005473739 systemd[1]: Starting dracut pre-udev hook...
Oct  7 08:16:58 np0005473739 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct  7 08:16:58 np0005473739 kernel: device-mapper: uevent: version 1.0.3
Oct  7 08:16:58 np0005473739 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Oct  7 08:16:58 np0005473739 kernel: RPC: Registered named UNIX socket transport module.
Oct  7 08:16:58 np0005473739 kernel: RPC: Registered udp transport module.
Oct  7 08:16:58 np0005473739 kernel: RPC: Registered tcp transport module.
Oct  7 08:16:58 np0005473739 kernel: RPC: Registered tcp-with-tls transport module.
Oct  7 08:16:58 np0005473739 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Oct  7 08:16:58 np0005473739 rpc.statd[448]: Version 2.5.4 starting
Oct  7 08:16:58 np0005473739 rpc.statd[448]: Initializing NSM state
Oct  7 08:16:58 np0005473739 rpc.idmapd[453]: Setting log level to 0
Oct  7 08:16:58 np0005473739 systemd[1]: Finished dracut pre-udev hook.
Oct  7 08:16:58 np0005473739 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  7 08:16:58 np0005473739 systemd-udevd[466]: Using default interface naming scheme 'rhel-9.0'.
Oct  7 08:16:58 np0005473739 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  7 08:16:58 np0005473739 systemd[1]: Starting dracut pre-trigger hook...
Oct  7 08:16:58 np0005473739 systemd[1]: Finished dracut pre-trigger hook.
Oct  7 08:16:58 np0005473739 systemd[1]: Starting Coldplug All udev Devices...
Oct  7 08:16:58 np0005473739 systemd[1]: Created slice Slice /system/modprobe.
Oct  7 08:16:58 np0005473739 systemd[1]: Starting Load Kernel Module configfs...
Oct  7 08:16:58 np0005473739 systemd[1]: Finished Coldplug All udev Devices.
Oct  7 08:16:58 np0005473739 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  7 08:16:58 np0005473739 systemd[1]: Finished Load Kernel Module configfs.
Oct  7 08:16:58 np0005473739 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  7 08:16:58 np0005473739 systemd[1]: Reached target Network.
Oct  7 08:16:58 np0005473739 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Oct  7 08:16:58 np0005473739 systemd[1]: Starting dracut initqueue hook...
Oct  7 08:16:58 np0005473739 systemd[1]: Mounting Kernel Configuration File System...
Oct  7 08:16:58 np0005473739 systemd[1]: Mounted Kernel Configuration File System.
Oct  7 08:16:58 np0005473739 systemd[1]: Reached target System Initialization.
Oct  7 08:16:58 np0005473739 systemd[1]: Reached target Basic System.
Oct  7 08:16:58 np0005473739 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Oct  7 08:16:58 np0005473739 kernel: scsi host0: ata_piix
Oct  7 08:16:58 np0005473739 kernel: scsi host1: ata_piix
Oct  7 08:16:58 np0005473739 systemd-udevd[470]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 08:16:58 np0005473739 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Oct  7 08:16:58 np0005473739 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Oct  7 08:16:58 np0005473739 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Oct  7 08:16:58 np0005473739 kernel: vda: vda1
Oct  7 08:16:59 np0005473739 kernel: ata1: found unknown device (class 0)
Oct  7 08:16:59 np0005473739 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Oct  7 08:16:59 np0005473739 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Oct  7 08:16:59 np0005473739 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Oct  7 08:16:59 np0005473739 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Oct  7 08:16:59 np0005473739 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Oct  7 08:16:59 np0005473739 systemd[1]: Found device /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  7 08:16:59 np0005473739 systemd[1]: Reached target Initrd Root Device.
Oct  7 08:16:59 np0005473739 systemd[1]: Finished dracut initqueue hook.
Oct  7 08:16:59 np0005473739 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  7 08:16:59 np0005473739 systemd[1]: Reached target Remote Encrypted Volumes.
Oct  7 08:16:59 np0005473739 systemd[1]: Reached target Remote File Systems.
Oct  7 08:16:59 np0005473739 systemd[1]: Starting dracut pre-mount hook...
Oct  7 08:16:59 np0005473739 systemd[1]: Finished dracut pre-mount hook.
Oct  7 08:16:59 np0005473739 systemd[1]: Starting File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458...
Oct  7 08:16:59 np0005473739 systemd-fsck[558]: /usr/sbin/fsck.xfs: XFS file system.
Oct  7 08:16:59 np0005473739 systemd[1]: Finished File System Check on /dev/disk/by-uuid/1631a6ad-43b8-436d-ae76-16fa14b94458.
Oct  7 08:16:59 np0005473739 systemd[1]: Mounting /sysroot...
Oct  7 08:16:59 np0005473739 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Oct  7 08:16:59 np0005473739 kernel: XFS (vda1): Mounting V5 Filesystem 1631a6ad-43b8-436d-ae76-16fa14b94458
Oct  7 08:16:59 np0005473739 kernel: XFS (vda1): Ending clean mount
Oct  7 08:17:00 np0005473739 systemd[1]: Mounted /sysroot.
Oct  7 08:17:00 np0005473739 systemd[1]: Reached target Initrd Root File System.
Oct  7 08:17:00 np0005473739 systemd[1]: Starting Mountpoints Configured in the Real Root...
Oct  7 08:17:00 np0005473739 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Finished Mountpoints Configured in the Real Root.
Oct  7 08:17:00 np0005473739 systemd[1]: Reached target Initrd File Systems.
Oct  7 08:17:00 np0005473739 systemd[1]: Reached target Initrd Default Target.
Oct  7 08:17:00 np0005473739 systemd[1]: Starting dracut mount hook...
Oct  7 08:17:00 np0005473739 systemd[1]: Finished dracut mount hook.
Oct  7 08:17:00 np0005473739 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Oct  7 08:17:00 np0005473739 rpc.idmapd[453]: exiting on signal 15
Oct  7 08:17:00 np0005473739 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Oct  7 08:17:00 np0005473739 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Network.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Remote Encrypted Volumes.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Timer Units.
Oct  7 08:17:00 np0005473739 systemd[1]: dbus.socket: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Closed D-Bus System Message Bus Socket.
Oct  7 08:17:00 np0005473739 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Initrd Default Target.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Basic System.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Initrd Root Device.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Initrd /usr File System.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Path Units.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Remote File Systems.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Preparation for Remote File Systems.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Slice Units.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Socket Units.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target System Initialization.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Local File Systems.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Swaps.
Oct  7 08:17:00 np0005473739 systemd[1]: dracut-mount.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped dracut mount hook.
Oct  7 08:17:00 np0005473739 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped dracut pre-mount hook.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped target Local Encrypted Volumes.
Oct  7 08:17:00 np0005473739 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Oct  7 08:17:00 np0005473739 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped dracut initqueue hook.
Oct  7 08:17:00 np0005473739 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped Apply Kernel Variables.
Oct  7 08:17:00 np0005473739 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped Create Volatile Files and Directories.
Oct  7 08:17:00 np0005473739 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped Coldplug All udev Devices.
Oct  7 08:17:00 np0005473739 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped dracut pre-trigger hook.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Oct  7 08:17:00 np0005473739 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped Setup Virtual Console.
Oct  7 08:17:00 np0005473739 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Oct  7 08:17:00 np0005473739 systemd[1]: systemd-udevd.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Oct  7 08:17:00 np0005473739 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Closed udev Control Socket.
Oct  7 08:17:00 np0005473739 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Closed udev Kernel Socket.
Oct  7 08:17:00 np0005473739 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped dracut pre-udev hook.
Oct  7 08:17:00 np0005473739 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped dracut cmdline hook.
Oct  7 08:17:00 np0005473739 systemd[1]: Starting Cleanup udev Database...
Oct  7 08:17:00 np0005473739 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped Create Static Device Nodes in /dev.
Oct  7 08:17:00 np0005473739 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped Create List of Static Device Nodes.
Oct  7 08:17:00 np0005473739 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Stopped Create System Users.
Oct  7 08:17:00 np0005473739 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Oct  7 08:17:00 np0005473739 systemd[1]: Finished Cleanup udev Database.
Oct  7 08:17:00 np0005473739 systemd[1]: Reached target Switch Root.
Oct  7 08:17:00 np0005473739 systemd[1]: Starting Switch Root...
Oct  7 08:17:00 np0005473739 systemd[1]: Switching root.
Oct  7 08:17:00 np0005473739 systemd-journald[310]: Journal stopped
Oct  7 08:17:01 np0005473739 systemd-journald: Received SIGTERM from PID 1 (systemd).
Oct  7 08:17:01 np0005473739 kernel: audit: type=1404 audit(1759839420.699:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Oct  7 08:17:01 np0005473739 kernel: SELinux:  policy capability network_peer_controls=1
Oct  7 08:17:01 np0005473739 kernel: SELinux:  policy capability open_perms=1
Oct  7 08:17:01 np0005473739 kernel: SELinux:  policy capability extended_socket_class=1
Oct  7 08:17:01 np0005473739 kernel: SELinux:  policy capability always_check_network=0
Oct  7 08:17:01 np0005473739 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  7 08:17:01 np0005473739 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  7 08:17:01 np0005473739 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  7 08:17:01 np0005473739 kernel: audit: type=1403 audit(1759839420.872:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Oct  7 08:17:01 np0005473739 systemd: Successfully loaded SELinux policy in 178.698ms.
Oct  7 08:17:01 np0005473739 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 32.604ms.
Oct  7 08:17:01 np0005473739 systemd: systemd 252-55.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct  7 08:17:01 np0005473739 systemd: Detected virtualization kvm.
Oct  7 08:17:01 np0005473739 systemd: Detected architecture x86-64.
Oct  7 08:17:01 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 08:17:01 np0005473739 systemd: initrd-switch-root.service: Deactivated successfully.
Oct  7 08:17:01 np0005473739 systemd: Stopped Switch Root.
Oct  7 08:17:01 np0005473739 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Oct  7 08:17:01 np0005473739 systemd: Created slice Slice /system/getty.
Oct  7 08:17:01 np0005473739 systemd: Created slice Slice /system/serial-getty.
Oct  7 08:17:01 np0005473739 systemd: Created slice Slice /system/sshd-keygen.
Oct  7 08:17:01 np0005473739 systemd: Created slice User and Session Slice.
Oct  7 08:17:01 np0005473739 systemd: Started Dispatch Password Requests to Console Directory Watch.
Oct  7 08:17:01 np0005473739 systemd: Started Forward Password Requests to Wall Directory Watch.
Oct  7 08:17:01 np0005473739 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Oct  7 08:17:01 np0005473739 systemd: Reached target Local Encrypted Volumes.
Oct  7 08:17:01 np0005473739 systemd: Stopped target Switch Root.
Oct  7 08:17:01 np0005473739 systemd: Stopped target Initrd File Systems.
Oct  7 08:17:01 np0005473739 systemd: Stopped target Initrd Root File System.
Oct  7 08:17:01 np0005473739 systemd: Reached target Local Integrity Protected Volumes.
Oct  7 08:17:01 np0005473739 systemd: Reached target Path Units.
Oct  7 08:17:01 np0005473739 systemd: Reached target rpc_pipefs.target.
Oct  7 08:17:01 np0005473739 systemd: Reached target Slice Units.
Oct  7 08:17:01 np0005473739 systemd: Reached target Swaps.
Oct  7 08:17:01 np0005473739 systemd: Reached target Local Verity Protected Volumes.
Oct  7 08:17:01 np0005473739 systemd: Listening on RPCbind Server Activation Socket.
Oct  7 08:17:01 np0005473739 systemd: Reached target RPC Port Mapper.
Oct  7 08:17:01 np0005473739 systemd: Listening on Process Core Dump Socket.
Oct  7 08:17:01 np0005473739 systemd: Listening on initctl Compatibility Named Pipe.
Oct  7 08:17:01 np0005473739 systemd: Listening on udev Control Socket.
Oct  7 08:17:01 np0005473739 systemd: Listening on udev Kernel Socket.
Oct  7 08:17:01 np0005473739 systemd: Mounting Huge Pages File System...
Oct  7 08:17:01 np0005473739 systemd: Mounting POSIX Message Queue File System...
Oct  7 08:17:01 np0005473739 systemd: Mounting Kernel Debug File System...
Oct  7 08:17:01 np0005473739 systemd: Mounting Kernel Trace File System...
Oct  7 08:17:01 np0005473739 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  7 08:17:01 np0005473739 systemd: Starting Create List of Static Device Nodes...
Oct  7 08:17:01 np0005473739 systemd: Starting Load Kernel Module configfs...
Oct  7 08:17:01 np0005473739 systemd: Starting Load Kernel Module drm...
Oct  7 08:17:01 np0005473739 systemd: Starting Load Kernel Module efi_pstore...
Oct  7 08:17:01 np0005473739 systemd: Starting Load Kernel Module fuse...
Oct  7 08:17:01 np0005473739 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Oct  7 08:17:01 np0005473739 systemd: systemd-fsck-root.service: Deactivated successfully.
Oct  7 08:17:01 np0005473739 systemd: Stopped File System Check on Root Device.
Oct  7 08:17:01 np0005473739 systemd: Stopped Journal Service.
Oct  7 08:17:01 np0005473739 systemd: Starting Journal Service...
Oct  7 08:17:01 np0005473739 kernel: fuse: init (API version 7.37)
Oct  7 08:17:01 np0005473739 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Oct  7 08:17:01 np0005473739 systemd: Starting Generate network units from Kernel command line...
Oct  7 08:17:01 np0005473739 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  7 08:17:01 np0005473739 systemd: Starting Remount Root and Kernel File Systems...
Oct  7 08:17:01 np0005473739 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Oct  7 08:17:01 np0005473739 systemd: Starting Apply Kernel Variables...
Oct  7 08:17:01 np0005473739 systemd: Starting Coldplug All udev Devices...
Oct  7 08:17:01 np0005473739 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Oct  7 08:17:01 np0005473739 systemd-journald[678]: Journal started
Oct  7 08:17:01 np0005473739 systemd-journald[678]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  7 08:17:01 np0005473739 systemd[1]: Queued start job for default target Multi-User System.
Oct  7 08:17:01 np0005473739 systemd[1]: systemd-journald.service: Deactivated successfully.
Oct  7 08:17:01 np0005473739 kernel: ACPI: bus type drm_connector registered
Oct  7 08:17:01 np0005473739 systemd: Started Journal Service.
Oct  7 08:17:01 np0005473739 systemd[1]: Mounted Huge Pages File System.
Oct  7 08:17:01 np0005473739 systemd[1]: Mounted POSIX Message Queue File System.
Oct  7 08:17:01 np0005473739 systemd[1]: Mounted Kernel Debug File System.
Oct  7 08:17:01 np0005473739 systemd[1]: Mounted Kernel Trace File System.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Create List of Static Device Nodes.
Oct  7 08:17:01 np0005473739 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Load Kernel Module configfs.
Oct  7 08:17:01 np0005473739 systemd[1]: modprobe@drm.service: Deactivated successfully.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Load Kernel Module drm.
Oct  7 08:17:01 np0005473739 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Load Kernel Module efi_pstore.
Oct  7 08:17:01 np0005473739 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Load Kernel Module fuse.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Generate network units from Kernel command line.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Remount Root and Kernel File Systems.
Oct  7 08:17:01 np0005473739 systemd[1]: Mounting FUSE Control File System...
Oct  7 08:17:01 np0005473739 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  7 08:17:01 np0005473739 systemd[1]: Starting Rebuild Hardware Database...
Oct  7 08:17:01 np0005473739 systemd[1]: Starting Flush Journal to Persistent Storage...
Oct  7 08:17:01 np0005473739 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Oct  7 08:17:01 np0005473739 systemd[1]: Starting Load/Save OS Random Seed...
Oct  7 08:17:01 np0005473739 systemd[1]: Starting Create System Users...
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Apply Kernel Variables.
Oct  7 08:17:01 np0005473739 systemd-journald[678]: Runtime Journal (/run/log/journal/42833e1b511a402df82cb9cb2fc36491) is 8.0M, max 153.5M, 145.5M free.
Oct  7 08:17:01 np0005473739 systemd-journald[678]: Received client request to flush runtime journal.
Oct  7 08:17:01 np0005473739 systemd[1]: Mounted FUSE Control File System.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Flush Journal to Persistent Storage.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Load/Save OS Random Seed.
Oct  7 08:17:01 np0005473739 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Create System Users.
Oct  7 08:17:01 np0005473739 systemd[1]: Starting Create Static Device Nodes in /dev...
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Coldplug All udev Devices.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Create Static Device Nodes in /dev.
Oct  7 08:17:01 np0005473739 systemd[1]: Reached target Preparation for Local File Systems.
Oct  7 08:17:01 np0005473739 systemd[1]: Reached target Local File Systems.
Oct  7 08:17:01 np0005473739 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Oct  7 08:17:01 np0005473739 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Oct  7 08:17:01 np0005473739 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Oct  7 08:17:01 np0005473739 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Oct  7 08:17:01 np0005473739 systemd[1]: Starting Automatic Boot Loader Update...
Oct  7 08:17:01 np0005473739 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Oct  7 08:17:01 np0005473739 systemd[1]: Starting Create Volatile Files and Directories...
Oct  7 08:17:01 np0005473739 bootctl[695]: Couldn't find EFI system partition, skipping.
Oct  7 08:17:01 np0005473739 systemd[1]: Finished Automatic Boot Loader Update.
Oct  7 08:17:02 np0005473739 systemd[1]: Finished Create Volatile Files and Directories.
Oct  7 08:17:02 np0005473739 systemd[1]: Starting Security Auditing Service...
Oct  7 08:17:02 np0005473739 systemd[1]: Starting RPC Bind...
Oct  7 08:17:02 np0005473739 systemd[1]: Starting Rebuild Journal Catalog...
Oct  7 08:17:02 np0005473739 auditd[701]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Oct  7 08:17:02 np0005473739 auditd[701]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Oct  7 08:17:02 np0005473739 systemd[1]: Finished Rebuild Journal Catalog.
Oct  7 08:17:02 np0005473739 systemd[1]: Started RPC Bind.
Oct  7 08:17:02 np0005473739 augenrules[706]: /sbin/augenrules: No change
Oct  7 08:17:02 np0005473739 augenrules[722]: No rules
Oct  7 08:17:02 np0005473739 augenrules[722]: enabled 1
Oct  7 08:17:02 np0005473739 augenrules[722]: failure 1
Oct  7 08:17:02 np0005473739 augenrules[722]: pid 701
Oct  7 08:17:02 np0005473739 augenrules[722]: rate_limit 0
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog_limit 8192
Oct  7 08:17:02 np0005473739 augenrules[722]: lost 0
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog 2
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog_wait_time 60000
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog_wait_time_actual 0
Oct  7 08:17:02 np0005473739 augenrules[722]: enabled 1
Oct  7 08:17:02 np0005473739 augenrules[722]: failure 1
Oct  7 08:17:02 np0005473739 augenrules[722]: pid 701
Oct  7 08:17:02 np0005473739 augenrules[722]: rate_limit 0
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog_limit 8192
Oct  7 08:17:02 np0005473739 augenrules[722]: lost 0
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog 0
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog_wait_time 60000
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog_wait_time_actual 0
Oct  7 08:17:02 np0005473739 augenrules[722]: enabled 1
Oct  7 08:17:02 np0005473739 augenrules[722]: failure 1
Oct  7 08:17:02 np0005473739 augenrules[722]: pid 701
Oct  7 08:17:02 np0005473739 augenrules[722]: rate_limit 0
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog_limit 8192
Oct  7 08:17:02 np0005473739 augenrules[722]: lost 0
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog 0
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog_wait_time 60000
Oct  7 08:17:02 np0005473739 augenrules[722]: backlog_wait_time_actual 0
Oct  7 08:17:02 np0005473739 systemd[1]: Started Security Auditing Service.
Oct  7 08:17:02 np0005473739 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Oct  7 08:17:02 np0005473739 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Oct  7 08:17:02 np0005473739 systemd[1]: Finished Rebuild Hardware Database.
Oct  7 08:17:02 np0005473739 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Oct  7 08:17:02 np0005473739 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Oct  7 08:17:02 np0005473739 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Oct  7 08:17:02 np0005473739 systemd[1]: Starting Update is Completed...
Oct  7 08:17:02 np0005473739 systemd[1]: Finished Update is Completed.
Oct  7 08:17:02 np0005473739 systemd[1]: Started Rule-based Manager for Device Events and Files.
Oct  7 08:17:02 np0005473739 systemd[1]: Reached target System Initialization.
Oct  7 08:17:02 np0005473739 systemd[1]: Started dnf makecache --timer.
Oct  7 08:17:02 np0005473739 systemd[1]: Started Daily rotation of log files.
Oct  7 08:17:02 np0005473739 systemd[1]: Started Daily Cleanup of Temporary Directories.
Oct  7 08:17:02 np0005473739 systemd[1]: Reached target Timer Units.
Oct  7 08:17:02 np0005473739 systemd[1]: Listening on D-Bus System Message Bus Socket.
Oct  7 08:17:02 np0005473739 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Oct  7 08:17:02 np0005473739 systemd[1]: Reached target Socket Units.
Oct  7 08:17:02 np0005473739 systemd[1]: Starting D-Bus System Message Bus...
Oct  7 08:17:02 np0005473739 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  7 08:17:02 np0005473739 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Oct  7 08:17:02 np0005473739 systemd[1]: Starting Load Kernel Module configfs...
Oct  7 08:17:02 np0005473739 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Oct  7 08:17:02 np0005473739 systemd[1]: Finished Load Kernel Module configfs.
Oct  7 08:17:02 np0005473739 systemd-udevd[748]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 08:17:02 np0005473739 systemd[1]: Started D-Bus System Message Bus.
Oct  7 08:17:02 np0005473739 systemd[1]: Reached target Basic System.
Oct  7 08:17:02 np0005473739 dbus-broker-lau[762]: Ready
Oct  7 08:17:02 np0005473739 systemd[1]: Starting NTP client/server...
Oct  7 08:17:02 np0005473739 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Oct  7 08:17:02 np0005473739 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Oct  7 08:17:02 np0005473739 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Oct  7 08:17:02 np0005473739 chronyd[780]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  7 08:17:02 np0005473739 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Oct  7 08:17:02 np0005473739 chronyd[780]: Loaded 0 symmetric keys
Oct  7 08:17:02 np0005473739 chronyd[780]: Using right/UTC timezone to obtain leap second data
Oct  7 08:17:02 np0005473739 chronyd[780]: Loaded seccomp filter (level 2)
Oct  7 08:17:02 np0005473739 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Oct  7 08:17:02 np0005473739 systemd[1]: Starting Restore /run/initramfs on shutdown...
Oct  7 08:17:03 np0005473739 kernel: kvm_amd: TSC scaling supported
Oct  7 08:17:03 np0005473739 kernel: kvm_amd: Nested Virtualization enabled
Oct  7 08:17:03 np0005473739 kernel: kvm_amd: Nested Paging enabled
Oct  7 08:17:03 np0005473739 kernel: kvm_amd: LBR virtualization supported
Oct  7 08:17:03 np0005473739 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Oct  7 08:17:03 np0005473739 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Oct  7 08:17:03 np0005473739 kernel: Console: switching to colour dummy device 80x25
Oct  7 08:17:03 np0005473739 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Oct  7 08:17:03 np0005473739 kernel: [drm] features: -context_init
Oct  7 08:17:03 np0005473739 kernel: [drm] number of scanouts: 1
Oct  7 08:17:03 np0005473739 kernel: [drm] number of cap sets: 0
Oct  7 08:17:03 np0005473739 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Oct  7 08:17:03 np0005473739 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Oct  7 08:17:03 np0005473739 kernel: Console: switching to colour frame buffer device 128x48
Oct  7 08:17:03 np0005473739 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Oct  7 08:17:03 np0005473739 systemd[1]: Starting IPv4 firewall with iptables...
Oct  7 08:17:03 np0005473739 systemd[1]: Started irqbalance daemon.
Oct  7 08:17:03 np0005473739 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Oct  7 08:17:03 np0005473739 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  7 08:17:03 np0005473739 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  7 08:17:03 np0005473739 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  7 08:17:03 np0005473739 systemd[1]: Reached target sshd-keygen.target.
Oct  7 08:17:03 np0005473739 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Oct  7 08:17:03 np0005473739 systemd[1]: Reached target User and Group Name Lookups.
Oct  7 08:17:03 np0005473739 systemd[1]: Starting User Login Management...
Oct  7 08:17:03 np0005473739 systemd[1]: Started NTP client/server.
Oct  7 08:17:03 np0005473739 systemd[1]: Finished Restore /run/initramfs on shutdown.
Oct  7 08:17:03 np0005473739 systemd-logind[801]: New seat seat0.
Oct  7 08:17:03 np0005473739 systemd-logind[801]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  7 08:17:03 np0005473739 systemd-logind[801]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  7 08:17:03 np0005473739 systemd[1]: Started User Login Management.
Oct  7 08:17:03 np0005473739 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Oct  7 08:17:03 np0005473739 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Oct  7 08:17:03 np0005473739 iptables.init[791]: iptables: Applying firewall rules: [  OK  ]
Oct  7 08:17:03 np0005473739 systemd[1]: Finished IPv4 firewall with iptables.
Oct  7 08:17:03 np0005473739 cloud-init[838]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 07 Oct 2025 12:17:03 +0000. Up 8.33 seconds.
Oct  7 08:17:03 np0005473739 systemd[1]: run-cloud\x2dinit-tmp-tmp2xhb2o13.mount: Deactivated successfully.
Oct  7 08:17:03 np0005473739 systemd[1]: Starting Hostname Service...
Oct  7 08:17:04 np0005473739 systemd[1]: Started Hostname Service.
Oct  7 08:17:04 np0005473739 systemd-hostnamed[852]: Hostname set to <np0005473739.novalocal> (static)
Oct  7 08:17:04 np0005473739 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Oct  7 08:17:04 np0005473739 systemd[1]: Reached target Preparation for Network.
Oct  7 08:17:04 np0005473739 systemd[1]: Starting Network Manager...
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.3375] NetworkManager (version 1.54.1-1.el9) is starting... (boot:e0a9c9d9-31fa-465e-9260-63a0cf6f9771)
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.3380] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.3596] manager[0x55d6481d8080]: monitoring kernel firmware directory '/lib/firmware'.
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.3645] hostname: hostname: using hostnamed
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.3646] hostname: static hostname changed from (none) to "np0005473739.novalocal"
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.3651] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4126] manager[0x55d6481d8080]: rfkill: Wi-Fi hardware radio set enabled
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4127] manager[0x55d6481d8080]: rfkill: WWAN hardware radio set enabled
Oct  7 08:17:04 np0005473739 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4229] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4230] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4231] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4231] manager: Networking is enabled by state file
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4234] settings: Loaded settings plugin: keyfile (internal)
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4263] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4284] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4308] dhcp: init: Using DHCP client 'internal'
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4310] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4322] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4333] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4340] device (lo): Activation: starting connection 'lo' (77c9c6e3-c233-4fb7-ae8f-731a952ee065)
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4348] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4350] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4373] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4377] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4379] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4380] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4382] device (eth0): carrier: link connected
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4384] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4390] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4394] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4397] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4398] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4399] manager: NetworkManager state is now CONNECTING
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4401] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4405] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4407] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  7 08:17:04 np0005473739 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  7 08:17:04 np0005473739 systemd[1]: Started Network Manager.
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4471] dhcp4 (eth0): state changed new lease, address=38.102.83.153
Oct  7 08:17:04 np0005473739 systemd[1]: Reached target Network.
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4488] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4506] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 08:17:04 np0005473739 systemd[1]: Starting Network Manager Wait Online...
Oct  7 08:17:04 np0005473739 systemd[1]: Starting GSSAPI Proxy Daemon...
Oct  7 08:17:04 np0005473739 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4819] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4825] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4838] device (lo): Activation: successful, device activated.
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4848] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4867] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4874] manager: NetworkManager state is now CONNECTED_SITE
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4880] device (eth0): Activation: successful, device activated.
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4889] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  7 08:17:04 np0005473739 NetworkManager[856]: <info>  [1759839424.4893] manager: startup complete
Oct  7 08:17:04 np0005473739 systemd[1]: Started GSSAPI Proxy Daemon.
Oct  7 08:17:04 np0005473739 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Oct  7 08:17:04 np0005473739 systemd[1]: Reached target NFS client services.
Oct  7 08:17:04 np0005473739 systemd[1]: Reached target Preparation for Remote File Systems.
Oct  7 08:17:04 np0005473739 systemd[1]: Reached target Remote File Systems.
Oct  7 08:17:04 np0005473739 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Oct  7 08:17:04 np0005473739 systemd[1]: Finished Network Manager Wait Online.
Oct  7 08:17:04 np0005473739 systemd[1]: Starting Cloud-init: Network Stage...
Oct  7 08:17:04 np0005473739 cloud-init[920]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 07 Oct 2025 12:17:04 +0000. Up 9.49 seconds.
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: |  eth0  | True |        38.102.83.153         | 255.255.255.0 | global | fa:16:3e:39:a5:a8 |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: |  eth0  | True | fe80::f816:3eff:fe39:a5a8/64 |       .       |  link  | fa:16:3e:39:a5:a8 |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Oct  7 08:17:04 np0005473739 cloud-init[920]: ci-info: +-------+-------------+---------+-----------+-------+
Oct  7 08:17:10 np0005473739 chronyd[780]: Selected source 206.108.0.133 (2.centos.pool.ntp.org)
Oct  7 08:17:10 np0005473739 chronyd[780]: System clock wrong by 1.068328 seconds
Oct  7 08:17:10 np0005473739 chronyd[780]: System clock was stepped by 1.068328 seconds
Oct  7 08:17:10 np0005473739 chronyd[780]: System clock TAI offset set to 37 seconds
Oct  7 08:17:10 np0005473739 cloud-init[920]: Generating public/private rsa key pair.
Oct  7 08:17:10 np0005473739 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Oct  7 08:17:10 np0005473739 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Oct  7 08:17:10 np0005473739 cloud-init[920]: The key fingerprint is:
Oct  7 08:17:10 np0005473739 cloud-init[920]: SHA256:M5P7AfLDC0nL6kFWBbjqyfmkF2qqBLxt06PG1NzWUK8 root@np0005473739.novalocal
Oct  7 08:17:10 np0005473739 cloud-init[920]: The key's randomart image is:
Oct  7 08:17:10 np0005473739 cloud-init[920]: +---[RSA 3072]----+
Oct  7 08:17:10 np0005473739 cloud-init[920]: |     ....        |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |    .  . .       |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |     .. . .      |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |.   .. . . .     |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |.. .= + S .      |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |. ++o= O E       |
Oct  7 08:17:10 np0005473739 cloud-init[920]: | =oBo+* = .      |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |. O=+o.. + .     |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |+oo=+   . .      |
Oct  7 08:17:10 np0005473739 cloud-init[920]: +----[SHA256]-----+
Oct  7 08:17:10 np0005473739 cloud-init[920]: Generating public/private ecdsa key pair.
Oct  7 08:17:10 np0005473739 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Oct  7 08:17:10 np0005473739 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Oct  7 08:17:10 np0005473739 cloud-init[920]: The key fingerprint is:
Oct  7 08:17:10 np0005473739 cloud-init[920]: SHA256:3ZNKPjBkmpT9AF755KOYcWUTVAtl3mY8Z02StBsMafE root@np0005473739.novalocal
Oct  7 08:17:10 np0005473739 cloud-init[920]: The key's randomart image is:
Oct  7 08:17:10 np0005473739 cloud-init[920]: +---[ECDSA 256]---+
Oct  7 08:17:10 np0005473739 cloud-init[920]: |      . .o++*+o..|
Oct  7 08:17:10 np0005473739 cloud-init[920]: |     . =. =+oB.+.|
Oct  7 08:17:10 np0005473739 cloud-init[920]: |      + =* oo E +|
Oct  7 08:17:10 np0005473739 cloud-init[920]: |     ..=.++. + * |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |      o=S.+.+ .  |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |      o .= . .   |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |          +      |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |           .     |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |                 |
Oct  7 08:17:10 np0005473739 cloud-init[920]: +----[SHA256]-----+
Oct  7 08:17:10 np0005473739 cloud-init[920]: Generating public/private ed25519 key pair.
Oct  7 08:17:10 np0005473739 cloud-init[920]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Oct  7 08:17:10 np0005473739 cloud-init[920]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Oct  7 08:17:10 np0005473739 cloud-init[920]: The key fingerprint is:
Oct  7 08:17:10 np0005473739 cloud-init[920]: SHA256:1sTYLmasnYPBN2zv22yFPhH1P2Oll3dMPjskXtvgcyk root@np0005473739.novalocal
Oct  7 08:17:10 np0005473739 cloud-init[920]: The key's randomart image is:
Oct  7 08:17:10 np0005473739 cloud-init[920]: +--[ED25519 256]--+
Oct  7 08:17:10 np0005473739 cloud-init[920]: |                 |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |         +    .  |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |        . +  . . |
Oct  7 08:17:10 np0005473739 cloud-init[920]: |     . o +  .   +|
Oct  7 08:17:10 np0005473739 cloud-init[920]: |      o S o  o =+|
Oct  7 08:17:10 np0005473739 cloud-init[920]: |       @ =  o.+OB|
Oct  7 08:17:10 np0005473739 cloud-init[920]: |      o + ...+=o@|
Oct  7 08:17:10 np0005473739 cloud-init[920]: |         o o+Eo=o|
Oct  7 08:17:10 np0005473739 cloud-init[920]: |          ooo..o.|
Oct  7 08:17:10 np0005473739 cloud-init[920]: +----[SHA256]-----+
Oct  7 08:17:10 np0005473739 systemd[1]: Finished Cloud-init: Network Stage.
Oct  7 08:17:10 np0005473739 systemd[1]: Reached target Cloud-config availability.
Oct  7 08:17:10 np0005473739 systemd[1]: Reached target Network is Online.
Oct  7 08:17:10 np0005473739 systemd[1]: Starting Cloud-init: Config Stage...
Oct  7 08:17:10 np0005473739 systemd[1]: Starting Notify NFS peers of a restart...
Oct  7 08:17:10 np0005473739 systemd[1]: Starting System Logging Service...
Oct  7 08:17:10 np0005473739 sm-notify[1003]: Version 2.5.4 starting
Oct  7 08:17:10 np0005473739 systemd[1]: Starting OpenSSH server daemon...
Oct  7 08:17:10 np0005473739 systemd[1]: Starting Permit User Sessions...
Oct  7 08:17:10 np0005473739 systemd[1]: Started Notify NFS peers of a restart.
Oct  7 08:17:10 np0005473739 systemd[1]: Finished Permit User Sessions.
Oct  7 08:17:10 np0005473739 systemd[1]: Started OpenSSH server daemon.
Oct  7 08:17:10 np0005473739 systemd[1]: Started Command Scheduler.
Oct  7 08:17:10 np0005473739 systemd[1]: Started Getty on tty1.
Oct  7 08:17:10 np0005473739 systemd[1]: Started Serial Getty on ttyS0.
Oct  7 08:17:10 np0005473739 systemd[1]: Reached target Login Prompts.
Oct  7 08:17:10 np0005473739 rsyslogd[1004]: [origin software="rsyslogd" swVersion="8.2506.0-2.el9" x-pid="1004" x-info="https://www.rsyslog.com"] start
Oct  7 08:17:10 np0005473739 systemd[1]: Started System Logging Service.
Oct  7 08:17:10 np0005473739 rsyslogd[1004]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Oct  7 08:17:10 np0005473739 systemd[1]: Reached target Multi-User System.
Oct  7 08:17:10 np0005473739 systemd[1]: Starting Record Runlevel Change in UTMP...
Oct  7 08:17:10 np0005473739 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Oct  7 08:17:10 np0005473739 systemd[1]: Finished Record Runlevel Change in UTMP.
Oct  7 08:17:10 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 08:17:11 np0005473739 cloud-init[1034]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 07 Oct 2025 12:17:11 +0000. Up 14.73 seconds.
Oct  7 08:17:11 np0005473739 systemd[1]: Finished Cloud-init: Config Stage.
Oct  7 08:17:11 np0005473739 systemd[1]: Starting Cloud-init: Final Stage...
Oct  7 08:17:11 np0005473739 cloud-init[1038]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 07 Oct 2025 12:17:11 +0000. Up 15.17 seconds.
Oct  7 08:17:11 np0005473739 cloud-init[1040]: #############################################################
Oct  7 08:17:11 np0005473739 cloud-init[1041]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Oct  7 08:17:11 np0005473739 cloud-init[1043]: 256 SHA256:3ZNKPjBkmpT9AF755KOYcWUTVAtl3mY8Z02StBsMafE root@np0005473739.novalocal (ECDSA)
Oct  7 08:17:11 np0005473739 cloud-init[1045]: 256 SHA256:1sTYLmasnYPBN2zv22yFPhH1P2Oll3dMPjskXtvgcyk root@np0005473739.novalocal (ED25519)
Oct  7 08:17:11 np0005473739 cloud-init[1047]: 3072 SHA256:M5P7AfLDC0nL6kFWBbjqyfmkF2qqBLxt06PG1NzWUK8 root@np0005473739.novalocal (RSA)
Oct  7 08:17:11 np0005473739 cloud-init[1048]: -----END SSH HOST KEY FINGERPRINTS-----
Oct  7 08:17:11 np0005473739 cloud-init[1049]: #############################################################
Oct  7 08:17:11 np0005473739 cloud-init[1038]: Cloud-init v. 24.4-7.el9 finished at Tue, 07 Oct 2025 12:17:11 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 15.34 seconds
Oct  7 08:17:11 np0005473739 systemd[1]: Finished Cloud-init: Final Stage.
Oct  7 08:17:11 np0005473739 systemd[1]: Reached target Cloud-init target.
Oct  7 08:17:11 np0005473739 systemd[1]: Startup finished in 1.611s (kernel) + 3.763s (initrd) + 10.043s (userspace) = 15.418s.
Oct  7 08:17:14 np0005473739 irqbalance[796]: Cannot change IRQ 35 affinity: Operation not permitted
Oct  7 08:17:14 np0005473739 irqbalance[796]: IRQ 35 affinity is now unmanaged
Oct  7 08:17:14 np0005473739 irqbalance[796]: Cannot change IRQ 33 affinity: Operation not permitted
Oct  7 08:17:14 np0005473739 irqbalance[796]: IRQ 33 affinity is now unmanaged
Oct  7 08:17:14 np0005473739 irqbalance[796]: Cannot change IRQ 31 affinity: Operation not permitted
Oct  7 08:17:14 np0005473739 irqbalance[796]: IRQ 31 affinity is now unmanaged
Oct  7 08:17:14 np0005473739 irqbalance[796]: Cannot change IRQ 28 affinity: Operation not permitted
Oct  7 08:17:14 np0005473739 irqbalance[796]: IRQ 28 affinity is now unmanaged
Oct  7 08:17:14 np0005473739 irqbalance[796]: Cannot change IRQ 34 affinity: Operation not permitted
Oct  7 08:17:14 np0005473739 irqbalance[796]: IRQ 34 affinity is now unmanaged
Oct  7 08:17:14 np0005473739 irqbalance[796]: Cannot change IRQ 32 affinity: Operation not permitted
Oct  7 08:17:14 np0005473739 irqbalance[796]: IRQ 32 affinity is now unmanaged
Oct  7 08:17:14 np0005473739 irqbalance[796]: Cannot change IRQ 30 affinity: Operation not permitted
Oct  7 08:17:14 np0005473739 irqbalance[796]: IRQ 30 affinity is now unmanaged
Oct  7 08:17:14 np0005473739 irqbalance[796]: Cannot change IRQ 29 affinity: Operation not permitted
Oct  7 08:17:14 np0005473739 irqbalance[796]: IRQ 29 affinity is now unmanaged
Oct  7 08:17:15 np0005473739 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  7 08:17:35 np0005473739 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  7 08:32:26 np0005473739 systemd[1]: Starting Cleanup of Temporary Directories...
Oct  7 08:32:27 np0005473739 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Oct  7 08:32:27 np0005473739 systemd[1]: Finished Cleanup of Temporary Directories.
Oct  7 08:32:27 np0005473739 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Oct  7 08:34:16 np0005473739 systemd[1]: Starting dnf makecache...
Oct  7 08:34:17 np0005473739 dnf[1067]: Failed determining last makecache time.
Oct  7 08:34:17 np0005473739 dnf[1067]: CentOS Stream 9 - BaseOS                         54 kB/s | 6.7 kB     00:00
Oct  7 08:34:17 np0005473739 dnf[1067]: CentOS Stream 9 - AppStream                      67 kB/s | 6.8 kB     00:00
Oct  7 08:34:18 np0005473739 dnf[1067]: CentOS Stream 9 - CRB                            57 kB/s | 6.6 kB     00:00
Oct  7 08:34:18 np0005473739 dnf[1067]: CentOS Stream 9 - Extras packages                28 kB/s | 8.0 kB     00:00
Oct  7 08:34:18 np0005473739 dnf[1067]: Metadata cache created.
Oct  7 08:34:18 np0005473739 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  7 08:34:18 np0005473739 systemd[1]: Finished dnf makecache.
Oct  7 08:56:14 np0005473739 systemd[1]: Created slice User Slice of UID 1000.
Oct  7 08:56:14 np0005473739 systemd[1]: Starting User Runtime Directory /run/user/1000...
Oct  7 08:56:14 np0005473739 systemd-logind[801]: New session 1 of user zuul.
Oct  7 08:56:14 np0005473739 systemd[1]: Finished User Runtime Directory /run/user/1000.
Oct  7 08:56:14 np0005473739 systemd[1]: Starting User Manager for UID 1000...
Oct  7 08:56:14 np0005473739 systemd[1088]: Queued start job for default target Main User Target.
Oct  7 08:56:14 np0005473739 systemd[1088]: Created slice User Application Slice.
Oct  7 08:56:14 np0005473739 systemd[1088]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  7 08:56:14 np0005473739 systemd[1088]: Started Daily Cleanup of User's Temporary Directories.
Oct  7 08:56:14 np0005473739 systemd[1088]: Reached target Paths.
Oct  7 08:56:14 np0005473739 systemd[1088]: Reached target Timers.
Oct  7 08:56:14 np0005473739 systemd[1088]: Starting D-Bus User Message Bus Socket...
Oct  7 08:56:14 np0005473739 systemd[1088]: Starting Create User's Volatile Files and Directories...
Oct  7 08:56:14 np0005473739 systemd[1088]: Finished Create User's Volatile Files and Directories.
Oct  7 08:56:14 np0005473739 systemd[1088]: Listening on D-Bus User Message Bus Socket.
Oct  7 08:56:14 np0005473739 systemd[1088]: Reached target Sockets.
Oct  7 08:56:14 np0005473739 systemd[1088]: Reached target Basic System.
Oct  7 08:56:14 np0005473739 systemd[1088]: Reached target Main User Target.
Oct  7 08:56:14 np0005473739 systemd[1088]: Startup finished in 104ms.
Oct  7 08:56:14 np0005473739 systemd[1]: Started User Manager for UID 1000.
Oct  7 08:56:14 np0005473739 systemd[1]: Started Session 1 of User zuul.
Oct  7 08:56:14 np0005473739 python3[1173]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 08:56:17 np0005473739 python3[1201]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 08:56:23 np0005473739 python3[1259]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 08:56:24 np0005473739 python3[1299]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Oct  7 08:56:25 np0005473739 python3[1325]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDNPNXcbrch3fUHfCW0yVm3wTWQnc8XtUQ8gv6mG5+aA+yIMuCdPt9p0d/akgpTmmZOocK5z+07iMRX48o3AGxjAWINu/yNqLuApVbsokZnXIHneeVSVnzQbXusA5FP829tFcK86llqF8QjrxoC5Xo+eULvlJmYD/ZRgn7rr874/eezTviIxKj3Iy7lZhG12QT+5d+2n9PdknhzTZHPu1rPHRiyUtSaLlW/v/EOZQPw5mN6WTnLydV+88qFDmQN6iQOVIf/o2htXm7EiXoIj81z3miPwP/kjAnG/qtNDjNQkgUvh60hB2JEP1ilV/UntAZjbdG1HvElClfi8mMj6QTETHvLjg1z46I3QWmV85jL7ixm9UD9ox+ewi4XnHzE7xWYv3hbxpLeijmDgaUrsoYPfFccytH7ARPRbQpqHaBVkRohg1MjhkNafUAMNXqXdV5MSpKrUKytSxXDvcBQnI087CokMlSsrhbV5lMHTZolJ0IrMWgYIvKF8d8gpc4kgdc= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:26 np0005473739 python3[1349]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:26 np0005473739 python3[1448]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 08:56:26 np0005473739 python3[1519]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759841786.3307803-207-20250914063342/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=968b7231667f43f2a157c035d169d454_id_rsa follow=False checksum=bbcea50df70db3ed8f4854cebf8aaeb1d96c2ddf backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:27 np0005473739 python3[1642]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 08:56:27 np0005473739 python3[1713]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759841787.1580443-240-169051314516967/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=968b7231667f43f2a157c035d169d454_id_rsa.pub follow=False checksum=8c89d97d21c126352f1c3827ab0bf68d46737ee7 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:29 np0005473739 python3[1761]: ansible-ping Invoked with data=pong
Oct  7 08:56:29 np0005473739 python3[1785]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 08:56:31 np0005473739 python3[1843]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Oct  7 08:56:32 np0005473739 python3[1875]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:32 np0005473739 python3[1899]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:32 np0005473739 python3[1923]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:33 np0005473739 python3[1947]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:33 np0005473739 python3[1971]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:33 np0005473739 python3[1995]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:35 np0005473739 python3[2021]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:35 np0005473739 python3[2099]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 08:56:36 np0005473739 python3[2172]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1759841795.5090153-21-150824669925921/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:37 np0005473739 python3[2220]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:37 np0005473739 python3[2244]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:37 np0005473739 python3[2268]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:37 np0005473739 python3[2292]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:38 np0005473739 python3[2316]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:38 np0005473739 python3[2340]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:38 np0005473739 python3[2364]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:39 np0005473739 python3[2388]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:39 np0005473739 python3[2412]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:39 np0005473739 python3[2436]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:39 np0005473739 python3[2460]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:40 np0005473739 python3[2484]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:40 np0005473739 python3[2508]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:40 np0005473739 python3[2532]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:40 np0005473739 python3[2556]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:41 np0005473739 python3[2580]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:41 np0005473739 python3[2604]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:41 np0005473739 python3[2628]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:41 np0005473739 python3[2652]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:42 np0005473739 python3[2676]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:42 np0005473739 python3[2700]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:42 np0005473739 python3[2724]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:42 np0005473739 python3[2748]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:43 np0005473739 python3[2772]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:43 np0005473739 python3[2796]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:43 np0005473739 python3[2820]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 08:56:46 np0005473739 python3[2846]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  7 08:56:46 np0005473739 systemd[1]: Starting Time & Date Service...
Oct  7 08:56:46 np0005473739 systemd[1]: Started Time & Date Service.
Oct  7 08:56:46 np0005473739 systemd-timedated[2848]: Changed time zone to 'UTC' (UTC).
Oct  7 08:56:47 np0005473739 python3[2877]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:48 np0005473739 python3[2953]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 08:56:48 np0005473739 python3[3024]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1759841808.1713073-153-143517241687879/source _original_basename=tmphft9l2pm follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:49 np0005473739 python3[3124]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 08:56:49 np0005473739 python3[3195]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759841809.030815-183-232727609010491/source _original_basename=tmpvqwi7n95 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:50 np0005473739 python3[3297]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 08:56:50 np0005473739 python3[3370]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1759841810.2522798-231-241750673813431/source _original_basename=tmp5xyz005u follow=False checksum=d5406dec095836dfbf4a1234607f75e98d61ac99 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:51 np0005473739 python3[3418]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 08:56:51 np0005473739 python3[3444]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 08:56:52 np0005473739 python3[3524]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 08:56:52 np0005473739 python3[3597]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1759841811.8898115-273-263022167839162/source _original_basename=tmpc4o0j2j0 follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:56:53 np0005473739 python3[3648]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-0010-eea9-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 08:56:53 np0005473739 python3[3676]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-0010-eea9-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Oct  7 08:56:55 np0005473739 python3[3705]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:57:15 np0005473739 python3[3731]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:57:16 np0005473739 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  7 08:57:58 np0005473739 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Oct  7 08:57:58 np0005473739 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Oct  7 08:57:58 np0005473739 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Oct  7 08:57:58 np0005473739 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Oct  7 08:57:58 np0005473739 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Oct  7 08:57:58 np0005473739 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Oct  7 08:57:58 np0005473739 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Oct  7 08:57:58 np0005473739 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Oct  7 08:57:58 np0005473739 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Oct  7 08:57:58 np0005473739 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Oct  7 08:57:58 np0005473739 NetworkManager[856]: <info>  [1759841878.5734] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  7 08:57:58 np0005473739 systemd-udevd[3736]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 08:57:58 np0005473739 NetworkManager[856]: <info>  [1759841878.5887] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 08:57:58 np0005473739 NetworkManager[856]: <info>  [1759841878.5910] settings: (eth1): created default wired connection 'Wired connection 1'
Oct  7 08:57:58 np0005473739 NetworkManager[856]: <info>  [1759841878.5913] device (eth1): carrier: link connected
Oct  7 08:57:58 np0005473739 NetworkManager[856]: <info>  [1759841878.5915] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Oct  7 08:57:58 np0005473739 NetworkManager[856]: <info>  [1759841878.5921] policy: auto-activating connection 'Wired connection 1' (8162f4e3-3ec3-3b68-a641-59e6a34c7445)
Oct  7 08:57:58 np0005473739 NetworkManager[856]: <info>  [1759841878.5924] device (eth1): Activation: starting connection 'Wired connection 1' (8162f4e3-3ec3-3b68-a641-59e6a34c7445)
Oct  7 08:57:58 np0005473739 NetworkManager[856]: <info>  [1759841878.5925] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 08:57:58 np0005473739 NetworkManager[856]: <info>  [1759841878.5926] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 08:57:58 np0005473739 NetworkManager[856]: <info>  [1759841878.5930] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 08:57:58 np0005473739 NetworkManager[856]: <info>  [1759841878.5934] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  7 08:57:59 np0005473739 python3[3762]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-1d51-f927-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 08:58:09 np0005473739 python3[3842]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 08:58:09 np0005473739 python3[3915]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759841889.05883-102-262505289225068/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=da21fde539f81fbb09ca9a78382f7880fbe6a55e backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:58:10 np0005473739 python3[3965]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 08:58:10 np0005473739 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  7 08:58:10 np0005473739 systemd[1]: Stopped Network Manager Wait Online.
Oct  7 08:58:10 np0005473739 systemd[1]: Stopping Network Manager Wait Online...
Oct  7 08:58:10 np0005473739 systemd[1]: Stopping Network Manager...
Oct  7 08:58:10 np0005473739 NetworkManager[856]: <info>  [1759841890.6318] caught SIGTERM, shutting down normally.
Oct  7 08:58:10 np0005473739 NetworkManager[856]: <info>  [1759841890.6327] dhcp4 (eth0): canceled DHCP transaction
Oct  7 08:58:10 np0005473739 NetworkManager[856]: <info>  [1759841890.6327] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  7 08:58:10 np0005473739 NetworkManager[856]: <info>  [1759841890.6328] dhcp4 (eth0): state changed no lease
Oct  7 08:58:10 np0005473739 NetworkManager[856]: <info>  [1759841890.6329] manager: NetworkManager state is now CONNECTING
Oct  7 08:58:10 np0005473739 NetworkManager[856]: <info>  [1759841890.6419] dhcp4 (eth1): canceled DHCP transaction
Oct  7 08:58:10 np0005473739 NetworkManager[856]: <info>  [1759841890.6420] dhcp4 (eth1): state changed no lease
Oct  7 08:58:10 np0005473739 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  7 08:58:10 np0005473739 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  7 08:58:10 np0005473739 NetworkManager[856]: <info>  [1759841890.7165] exiting (success)
Oct  7 08:58:10 np0005473739 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  7 08:58:10 np0005473739 systemd[1]: Stopped Network Manager.
Oct  7 08:58:10 np0005473739 systemd[1]: NetworkManager.service: Consumed 11.457s CPU time, 10.2M memory peak.
Oct  7 08:58:10 np0005473739 systemd[1]: Starting Network Manager...
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.7687] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e0a9c9d9-31fa-465e-9260-63a0cf6f9771)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.7687] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.7733] manager[0x56550942d070]: monitoring kernel firmware directory '/lib/firmware'.
Oct  7 08:58:10 np0005473739 systemd[1]: Starting Hostname Service...
Oct  7 08:58:10 np0005473739 systemd[1]: Started Hostname Service.
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8602] hostname: hostname: using hostnamed
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8603] hostname: static hostname changed from (none) to "np0005473739.novalocal"
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8612] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8620] manager[0x56550942d070]: rfkill: Wi-Fi hardware radio set enabled
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8620] manager[0x56550942d070]: rfkill: WWAN hardware radio set enabled
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8664] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8664] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8666] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8667] manager: Networking is enabled by state file
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8671] settings: Loaded settings plugin: keyfile (internal)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8677] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8725] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8746] dhcp: init: Using DHCP client 'internal'
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8753] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8762] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8773] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8785] device (lo): Activation: starting connection 'lo' (77c9c6e3-c233-4fb7-ae8f-731a952ee065)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8796] device (eth0): carrier: link connected
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8805] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8813] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8815] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8826] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8837] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8847] device (eth1): carrier: link connected
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8857] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8867] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (8162f4e3-3ec3-3b68-a641-59e6a34c7445) (indicated)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8870] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8882] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8897] device (eth1): Activation: starting connection 'Wired connection 1' (8162f4e3-3ec3-3b68-a641-59e6a34c7445)
Oct  7 08:58:10 np0005473739 systemd[1]: Started Network Manager.
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8909] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8916] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8921] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8926] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8929] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8935] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8939] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8943] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8948] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8960] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8964] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8977] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.8982] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.9009] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.9018] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.9027] device (lo): Activation: successful, device activated.
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.9038] dhcp4 (eth0): state changed new lease, address=38.102.83.153
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.9048] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  7 08:58:10 np0005473739 systemd[1]: Starting Network Manager Wait Online...
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.9440] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.9492] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.9494] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.9496] manager: NetworkManager state is now CONNECTED_SITE
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.9498] device (eth0): Activation: successful, device activated.
Oct  7 08:58:10 np0005473739 NetworkManager[3983]: <info>  [1759841890.9503] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  7 08:58:11 np0005473739 python3[4051]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-1d51-f927-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 08:58:21 np0005473739 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  7 08:58:26 np0005473739 systemd[1088]: Starting Mark boot as successful...
Oct  7 08:58:26 np0005473739 systemd[1088]: Finished Mark boot as successful.
Oct  7 08:58:40 np0005473739 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.3767] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  7 08:58:56 np0005473739 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  7 08:58:56 np0005473739 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4013] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4016] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4025] device (eth1): Activation: successful, device activated.
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4033] manager: startup complete
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4035] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <warn>  [1759841936.4048] device (eth1): Activation: failed for connection 'Wired connection 1'
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4054] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Oct  7 08:58:56 np0005473739 systemd[1]: Finished Network Manager Wait Online.
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4170] dhcp4 (eth1): canceled DHCP transaction
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4171] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4171] dhcp4 (eth1): state changed no lease
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4203] policy: auto-activating connection 'ci-private-network' (033f9b1d-b888-5181-86df-f7fa46fd3740)
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4209] device (eth1): Activation: starting connection 'ci-private-network' (033f9b1d-b888-5181-86df-f7fa46fd3740)
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4211] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4215] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4227] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4241] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4285] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4287] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 08:58:56 np0005473739 NetworkManager[3983]: <info>  [1759841936.4295] device (eth1): Activation: successful, device activated.
Oct  7 08:59:06 np0005473739 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  7 08:59:11 np0005473739 systemd-logind[801]: Session 1 logged out. Waiting for processes to exit.
Oct  7 08:59:13 np0005473739 systemd-logind[801]: New session 3 of user zuul.
Oct  7 08:59:13 np0005473739 systemd[1]: Started Session 3 of User zuul.
Oct  7 08:59:13 np0005473739 python3[4164]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 08:59:13 np0005473739 python3[4237]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759841953.3774028-267-125479707620659/source _original_basename=tmpnedzvtk2 follow=False checksum=cdd226cdc2af0c5a4723181955eed40805977959 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 08:59:16 np0005473739 systemd[1]: session-3.scope: Deactivated successfully.
Oct  7 08:59:16 np0005473739 systemd-logind[801]: Session 3 logged out. Waiting for processes to exit.
Oct  7 08:59:16 np0005473739 systemd-logind[801]: Removed session 3.
Oct  7 09:01:26 np0005473739 systemd[1088]: Created slice User Background Tasks Slice.
Oct  7 09:01:27 np0005473739 systemd[1088]: Starting Cleanup of User's Temporary Files and Directories...
Oct  7 09:01:27 np0005473739 systemd[1088]: Finished Cleanup of User's Temporary Files and Directories.
Oct  7 09:05:37 np0005473739 systemd-logind[801]: New session 4 of user zuul.
Oct  7 09:05:37 np0005473739 systemd[1]: Started Session 4 of User zuul.
Oct  7 09:05:37 np0005473739 python3[4310]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-398d-2e0a-000000001ce8-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:05:38 np0005473739 python3[4339]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:05:38 np0005473739 python3[4365]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:05:38 np0005473739 python3[4391]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:05:38 np0005473739 python3[4417]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:05:39 np0005473739 python3[4443]: ansible-ansible.builtin.lineinfile Invoked with path=/etc/systemd/system.conf regexp=^#DefaultIOAccounting=no line=DefaultIOAccounting=yes state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:05:39 np0005473739 python3[4443]: ansible-ansible.builtin.lineinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Oct  7 09:05:40 np0005473739 python3[4469]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  7 09:05:40 np0005473739 systemd[1]: Reloading.
Oct  7 09:05:40 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:05:42 np0005473739 python3[4525]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Oct  7 09:05:42 np0005473739 python3[4551]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:05:42 np0005473739 python3[4579]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:05:43 np0005473739 python3[4607]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:05:43 np0005473739 python3[4635]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:05:43 np0005473739 python3[4662]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-398d-2e0a-000000001cee-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:05:44 np0005473739 python3[4692]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:05:46 np0005473739 systemd[1]: session-4.scope: Deactivated successfully.
Oct  7 09:05:46 np0005473739 systemd[1]: session-4.scope: Consumed 3.151s CPU time.
Oct  7 09:05:46 np0005473739 systemd-logind[801]: Session 4 logged out. Waiting for processes to exit.
Oct  7 09:05:46 np0005473739 systemd-logind[801]: Removed session 4.
Oct  7 09:05:48 np0005473739 systemd-logind[801]: New session 5 of user zuul.
Oct  7 09:05:48 np0005473739 systemd[1]: Started Session 5 of User zuul.
Oct  7 09:05:48 np0005473739 python3[4728]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  7 09:06:01 np0005473739 kernel: SELinux:  Converting 365 SID table entries...
Oct  7 09:06:01 np0005473739 kernel: SELinux:  policy capability network_peer_controls=1
Oct  7 09:06:01 np0005473739 kernel: SELinux:  policy capability open_perms=1
Oct  7 09:06:01 np0005473739 kernel: SELinux:  policy capability extended_socket_class=1
Oct  7 09:06:01 np0005473739 kernel: SELinux:  policy capability always_check_network=0
Oct  7 09:06:01 np0005473739 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  7 09:06:01 np0005473739 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  7 09:06:01 np0005473739 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  7 09:06:10 np0005473739 kernel: SELinux:  Converting 365 SID table entries...
Oct  7 09:06:10 np0005473739 kernel: SELinux:  policy capability network_peer_controls=1
Oct  7 09:06:10 np0005473739 kernel: SELinux:  policy capability open_perms=1
Oct  7 09:06:10 np0005473739 kernel: SELinux:  policy capability extended_socket_class=1
Oct  7 09:06:10 np0005473739 kernel: SELinux:  policy capability always_check_network=0
Oct  7 09:06:10 np0005473739 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  7 09:06:10 np0005473739 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  7 09:06:10 np0005473739 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  7 09:06:19 np0005473739 kernel: SELinux:  Converting 365 SID table entries...
Oct  7 09:06:19 np0005473739 kernel: SELinux:  policy capability network_peer_controls=1
Oct  7 09:06:19 np0005473739 kernel: SELinux:  policy capability open_perms=1
Oct  7 09:06:19 np0005473739 kernel: SELinux:  policy capability extended_socket_class=1
Oct  7 09:06:19 np0005473739 kernel: SELinux:  policy capability always_check_network=0
Oct  7 09:06:19 np0005473739 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  7 09:06:19 np0005473739 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  7 09:06:19 np0005473739 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  7 09:06:20 np0005473739 setsebool[4790]: The virt_use_nfs policy boolean was changed to 1 by root
Oct  7 09:06:20 np0005473739 setsebool[4790]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Oct  7 09:06:31 np0005473739 kernel: SELinux:  Converting 368 SID table entries...
Oct  7 09:06:31 np0005473739 kernel: SELinux:  policy capability network_peer_controls=1
Oct  7 09:06:31 np0005473739 kernel: SELinux:  policy capability open_perms=1
Oct  7 09:06:31 np0005473739 kernel: SELinux:  policy capability extended_socket_class=1
Oct  7 09:06:31 np0005473739 kernel: SELinux:  policy capability always_check_network=0
Oct  7 09:06:31 np0005473739 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  7 09:06:31 np0005473739 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  7 09:06:31 np0005473739 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  7 09:06:49 np0005473739 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  7 09:06:49 np0005473739 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  7 09:06:49 np0005473739 systemd[1]: Starting man-db-cache-update.service...
Oct  7 09:06:49 np0005473739 systemd[1]: Reloading.
Oct  7 09:06:49 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:06:49 np0005473739 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  7 09:06:50 np0005473739 systemd[1]: Starting PackageKit Daemon...
Oct  7 09:06:50 np0005473739 systemd[1]: Starting Authorization Manager...
Oct  7 09:06:50 np0005473739 polkitd[6444]: Started polkitd version 0.117
Oct  7 09:06:50 np0005473739 systemd[1]: Started Authorization Manager.
Oct  7 09:06:50 np0005473739 systemd[1]: Started PackageKit Daemon.
Oct  7 09:06:58 np0005473739 python3[12074]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-2a04-6ea6-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:06:59 np0005473739 kernel: evm: overlay not supported
Oct  7 09:06:59 np0005473739 systemd[1088]: Starting D-Bus User Message Bus...
Oct  7 09:06:59 np0005473739 dbus-broker-launch[12663]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Oct  7 09:06:59 np0005473739 dbus-broker-launch[12663]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Oct  7 09:06:59 np0005473739 systemd[1088]: Started D-Bus User Message Bus.
Oct  7 09:06:59 np0005473739 dbus-broker-lau[12663]: Ready
Oct  7 09:06:59 np0005473739 systemd[1088]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Oct  7 09:06:59 np0005473739 systemd[1088]: Created slice Slice /user.
Oct  7 09:06:59 np0005473739 systemd[1088]: podman-12585.scope: unit configures an IP firewall, but not running as root.
Oct  7 09:06:59 np0005473739 systemd[1088]: (This warning is only shown for the first unit using IP firewalling.)
Oct  7 09:06:59 np0005473739 systemd[1088]: Started podman-12585.scope.
Oct  7 09:07:00 np0005473739 systemd[1088]: Started podman-pause-303e6247.scope.
Oct  7 09:07:00 np0005473739 python3[13080]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.214:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.214:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:07:01 np0005473739 systemd[1]: session-5.scope: Deactivated successfully.
Oct  7 09:07:01 np0005473739 systemd[1]: session-5.scope: Consumed 58.349s CPU time.
Oct  7 09:07:01 np0005473739 systemd-logind[801]: Session 5 logged out. Waiting for processes to exit.
Oct  7 09:07:01 np0005473739 systemd-logind[801]: Removed session 5.
Oct  7 09:07:24 np0005473739 systemd-logind[801]: New session 6 of user zuul.
Oct  7 09:07:24 np0005473739 systemd[1]: Started Session 6 of User zuul.
Oct  7 09:07:24 np0005473739 python3[23935]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI2BLUsAFNiztPibULGGLRMsdOOvnqCQTe0YvjUb/dV7Y0ZhnhLxkNjRC8G9hVU644Hf/Tu5b0Au+uyGH93Fp4o= zuul@np0005473738.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 09:07:25 np0005473739 python3[24167]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI2BLUsAFNiztPibULGGLRMsdOOvnqCQTe0YvjUb/dV7Y0ZhnhLxkNjRC8G9hVU644Hf/Tu5b0Au+uyGH93Fp4o= zuul@np0005473738.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 09:07:26 np0005473739 python3[24542]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005473739.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Oct  7 09:07:26 np0005473739 python3[24719]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBI2BLUsAFNiztPibULGGLRMsdOOvnqCQTe0YvjUb/dV7Y0ZhnhLxkNjRC8G9hVU644Hf/Tu5b0Au+uyGH93Fp4o= zuul@np0005473738.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Oct  7 09:07:27 np0005473739 python3[24964]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:07:27 np0005473739 python3[25260]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759842446.7052393-135-141368822900073/source _original_basename=tmpafwn2_hz follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:07:28 np0005473739 python3[25552]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Oct  7 09:07:28 np0005473739 systemd[1]: Starting Hostname Service...
Oct  7 09:07:28 np0005473739 systemd[1]: Started Hostname Service.
Oct  7 09:07:28 np0005473739 systemd-hostnamed[25678]: Changed pretty hostname to 'compute-0'
Oct  7 09:07:28 np0005473739 systemd-hostnamed[25678]: Hostname set to <compute-0> (static)
Oct  7 09:07:28 np0005473739 NetworkManager[3983]: <info>  [1759842448.4551] hostname: static hostname changed from "np0005473739.novalocal" to "compute-0"
Oct  7 09:07:28 np0005473739 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  7 09:07:28 np0005473739 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  7 09:07:28 np0005473739 systemd[1]: session-6.scope: Deactivated successfully.
Oct  7 09:07:28 np0005473739 systemd[1]: session-6.scope: Consumed 2.181s CPU time.
Oct  7 09:07:28 np0005473739 systemd-logind[801]: Session 6 logged out. Waiting for processes to exit.
Oct  7 09:07:28 np0005473739 systemd-logind[801]: Removed session 6.
Oct  7 09:07:31 np0005473739 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  7 09:07:31 np0005473739 systemd[1]: Finished man-db-cache-update.service.
Oct  7 09:07:31 np0005473739 systemd[1]: man-db-cache-update.service: Consumed 50.069s CPU time.
Oct  7 09:07:31 np0005473739 systemd[1]: run-r6bd8e339c82344d4a344f5c7a4002acd.service: Deactivated successfully.
Oct  7 09:07:38 np0005473739 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  7 09:07:58 np0005473739 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  7 09:11:07 np0005473739 systemd-logind[801]: New session 7 of user zuul.
Oct  7 09:11:07 np0005473739 systemd[1]: Started Session 7 of User zuul.
Oct  7 09:11:07 np0005473739 python3[26693]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:11:09 np0005473739 python3[26809]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:11:09 np0005473739 python3[26882]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759842668.779976-30209-76640598447270/source mode=0755 _original_basename=delorean.repo follow=False checksum=c02c26d38f431b15f6463fc53c3d93ed5138ff07 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:11:09 np0005473739 python3[26908]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:11:10 np0005473739 python3[26981]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759842668.779976-30209-76640598447270/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:11:10 np0005473739 python3[27007]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:11:10 np0005473739 python3[27080]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759842668.779976-30209-76640598447270/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:11:10 np0005473739 python3[27106]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:11:11 np0005473739 python3[27179]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759842668.779976-30209-76640598447270/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:11:11 np0005473739 python3[27205]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:11:11 np0005473739 python3[27278]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759842668.779976-30209-76640598447270/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:11:12 np0005473739 python3[27304]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:11:12 np0005473739 python3[27377]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759842668.779976-30209-76640598447270/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:11:12 np0005473739 python3[27403]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:11:12 np0005473739 python3[27476]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1759842668.779976-30209-76640598447270/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=75ca8f9fe9a538824fd094f239c30e8ce8652e8a backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:11:27 np0005473739 python3[27534]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:11:55 np0005473739 systemd[1]: packagekit.service: Deactivated successfully.
Oct  7 09:16:27 np0005473739 systemd[1]: session-7.scope: Deactivated successfully.
Oct  7 09:16:27 np0005473739 systemd[1]: session-7.scope: Consumed 4.569s CPU time.
Oct  7 09:16:27 np0005473739 systemd-logind[801]: Session 7 logged out. Waiting for processes to exit.
Oct  7 09:16:27 np0005473739 systemd-logind[801]: Removed session 7.
Oct  7 09:21:24 np0005473739 systemd-logind[801]: New session 8 of user zuul.
Oct  7 09:21:24 np0005473739 systemd[1]: Started Session 8 of User zuul.
Oct  7 09:21:25 np0005473739 python3.9[27693]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:21:26 np0005473739 python3.9[27874]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:21:34 np0005473739 systemd-logind[801]: Session 8 logged out. Waiting for processes to exit.
Oct  7 09:21:34 np0005473739 systemd[1]: session-8.scope: Deactivated successfully.
Oct  7 09:21:34 np0005473739 systemd[1]: session-8.scope: Consumed 7.489s CPU time.
Oct  7 09:21:34 np0005473739 systemd-logind[801]: Removed session 8.
Oct  7 09:21:50 np0005473739 systemd-logind[801]: New session 9 of user zuul.
Oct  7 09:21:50 np0005473739 systemd[1]: Started Session 9 of User zuul.
Oct  7 09:21:51 np0005473739 python3.9[28085]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  7 09:21:52 np0005473739 python3.9[28259]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:21:52 np0005473739 python3.9[28411]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:21:53 np0005473739 python3.9[28564]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:21:54 np0005473739 python3.9[28716]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:21:55 np0005473739 python3.9[28868]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:21:55 np0005473739 python3.9[28991]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759843314.62557-73-78856559064177/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:21:56 np0005473739 python3.9[29143]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:21:57 np0005473739 python3.9[29299]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:21:57 np0005473739 python3.9[29449]: ansible-ansible.builtin.service_facts Invoked
Oct  7 09:22:01 np0005473739 python3.9[29704]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:22:02 np0005473739 python3.9[29854]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:22:03 np0005473739 python3.9[30008]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:22:04 np0005473739 python3.9[30166]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:22:05 np0005473739 python3.9[30250]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:22:54 np0005473739 systemd[1]: Reloading.
Oct  7 09:22:54 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:22:55 np0005473739 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Oct  7 09:22:55 np0005473739 systemd[1]: Reloading.
Oct  7 09:22:55 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:22:55 np0005473739 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Oct  7 09:22:55 np0005473739 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Oct  7 09:22:55 np0005473739 systemd[1]: Reloading.
Oct  7 09:22:55 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:22:55 np0005473739 systemd[1]: Listening on LVM2 poll daemon socket.
Oct  7 09:22:55 np0005473739 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Oct  7 09:22:55 np0005473739 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Oct  7 09:22:55 np0005473739 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Oct  7 09:24:06 np0005473739 kernel: SELinux:  Converting 2713 SID table entries...
Oct  7 09:24:06 np0005473739 kernel: SELinux:  policy capability network_peer_controls=1
Oct  7 09:24:06 np0005473739 kernel: SELinux:  policy capability open_perms=1
Oct  7 09:24:06 np0005473739 kernel: SELinux:  policy capability extended_socket_class=1
Oct  7 09:24:06 np0005473739 kernel: SELinux:  policy capability always_check_network=0
Oct  7 09:24:06 np0005473739 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  7 09:24:06 np0005473739 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  7 09:24:06 np0005473739 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  7 09:24:06 np0005473739 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Oct  7 09:24:07 np0005473739 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  7 09:24:07 np0005473739 systemd[1]: Starting man-db-cache-update.service...
Oct  7 09:24:07 np0005473739 systemd[1]: Reloading.
Oct  7 09:24:07 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:24:07 np0005473739 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  7 09:24:08 np0005473739 systemd[1]: Starting PackageKit Daemon...
Oct  7 09:24:08 np0005473739 systemd[1]: Started PackageKit Daemon.
Oct  7 09:24:09 np0005473739 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  7 09:24:09 np0005473739 systemd[1]: Finished man-db-cache-update.service.
Oct  7 09:24:09 np0005473739 systemd[1]: man-db-cache-update.service: Consumed 1.221s CPU time.
Oct  7 09:24:09 np0005473739 systemd[1]: run-rae7db8b3abdc45dd8ce6117f74dd7fdb.service: Deactivated successfully.
Oct  7 09:24:09 np0005473739 python3.9[31754]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:24:12 np0005473739 python3.9[32035]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  7 09:24:13 np0005473739 python3.9[32187]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  7 09:24:15 np0005473739 python3.9[32341]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:24:16 np0005473739 python3.9[32493]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  7 09:24:17 np0005473739 python3.9[32645]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:24:17 np0005473739 python3.9[32797]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:24:18 np0005473739 python3.9[32920]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843457.2544358-227-40424139354424/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=21e54520d72c08ac2f6c098097c6f54e30e64e0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:24:20 np0005473739 python3.9[33072]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  7 09:24:22 np0005473739 python3.9[33226]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  7 09:24:22 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 09:24:22 np0005473739 python3.9[33385]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  7 09:24:23 np0005473739 python3.9[33545]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  7 09:24:24 np0005473739 python3.9[33698]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  7 09:24:25 np0005473739 python3.9[33856]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  7 09:24:25 np0005473739 python3.9[34008]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:24:27 np0005473739 python3.9[34161]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:24:28 np0005473739 python3.9[34313]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:24:28 np0005473739 python3.9[34436]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759843468.0435338-322-189670911362855/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:24:30 np0005473739 python3.9[34588]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:24:30 np0005473739 systemd[1]: Starting Load Kernel Modules...
Oct  7 09:24:30 np0005473739 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct  7 09:24:30 np0005473739 kernel: Bridge firewalling registered
Oct  7 09:24:30 np0005473739 systemd-modules-load[34592]: Inserted module 'br_netfilter'
Oct  7 09:24:30 np0005473739 systemd[1]: Finished Load Kernel Modules.
Oct  7 09:24:30 np0005473739 python3.9[34748]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:24:31 np0005473739 python3.9[34871]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759843470.3599744-345-134524952129471/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:24:32 np0005473739 python3.9[35023]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:24:35 np0005473739 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Oct  7 09:24:35 np0005473739 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Oct  7 09:24:35 np0005473739 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  7 09:24:35 np0005473739 systemd[1]: Starting man-db-cache-update.service...
Oct  7 09:24:35 np0005473739 systemd[1]: Reloading.
Oct  7 09:24:35 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:24:35 np0005473739 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  7 09:24:37 np0005473739 python3.9[36146]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:24:37 np0005473739 python3.9[37069]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  7 09:24:38 np0005473739 python3.9[37743]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:24:39 np0005473739 python3.9[38613]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:24:39 np0005473739 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  7 09:24:39 np0005473739 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  7 09:24:39 np0005473739 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  7 09:24:39 np0005473739 systemd[1]: Finished man-db-cache-update.service.
Oct  7 09:24:39 np0005473739 systemd[1]: man-db-cache-update.service: Consumed 5.126s CPU time.
Oct  7 09:24:39 np0005473739 systemd[1]: run-rdcd3d2da8fda460a94abd95892fd021e.service: Deactivated successfully.
Oct  7 09:24:40 np0005473739 python3.9[39604]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:24:41 np0005473739 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  7 09:24:41 np0005473739 systemd[1]: tuned.service: Deactivated successfully.
Oct  7 09:24:41 np0005473739 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  7 09:24:41 np0005473739 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  7 09:24:41 np0005473739 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  7 09:24:42 np0005473739 python3.9[39765]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  7 09:24:44 np0005473739 python3.9[39917]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:24:45 np0005473739 systemd[1]: Reloading.
Oct  7 09:24:45 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:24:46 np0005473739 python3.9[40106]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:24:46 np0005473739 systemd[1]: Reloading.
Oct  7 09:24:46 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:24:47 np0005473739 python3.9[40295]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:24:48 np0005473739 python3.9[40448]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:24:48 np0005473739 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Oct  7 09:24:48 np0005473739 python3.9[40601]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:24:50 np0005473739 python3.9[40763]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:24:51 np0005473739 python3.9[40916]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:24:51 np0005473739 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Oct  7 09:24:51 np0005473739 systemd[1]: Stopped Apply Kernel Variables.
Oct  7 09:24:51 np0005473739 systemd[1]: Stopping Apply Kernel Variables...
Oct  7 09:24:51 np0005473739 systemd[1]: Starting Apply Kernel Variables...
Oct  7 09:24:51 np0005473739 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Oct  7 09:24:51 np0005473739 systemd[1]: Finished Apply Kernel Variables.
Oct  7 09:24:51 np0005473739 systemd[1]: session-9.scope: Deactivated successfully.
Oct  7 09:24:51 np0005473739 systemd[1]: session-9.scope: Consumed 2min 5.035s CPU time.
Oct  7 09:24:51 np0005473739 systemd-logind[801]: Session 9 logged out. Waiting for processes to exit.
Oct  7 09:24:51 np0005473739 systemd-logind[801]: Removed session 9.
Oct  7 09:24:56 np0005473739 systemd-logind[801]: New session 10 of user zuul.
Oct  7 09:24:56 np0005473739 systemd[1]: Started Session 10 of User zuul.
Oct  7 09:24:58 np0005473739 python3.9[41099]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:24:59 np0005473739 python3.9[41255]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  7 09:25:00 np0005473739 python3.9[41408]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  7 09:25:01 np0005473739 python3.9[41566]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  7 09:25:02 np0005473739 python3.9[41726]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:25:02 np0005473739 python3.9[41810]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  7 09:25:05 np0005473739 python3.9[41973]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:25:16 np0005473739 kernel: SELinux:  Converting 2723 SID table entries...
Oct  7 09:25:16 np0005473739 kernel: SELinux:  policy capability network_peer_controls=1
Oct  7 09:25:16 np0005473739 kernel: SELinux:  policy capability open_perms=1
Oct  7 09:25:16 np0005473739 kernel: SELinux:  policy capability extended_socket_class=1
Oct  7 09:25:16 np0005473739 kernel: SELinux:  policy capability always_check_network=0
Oct  7 09:25:16 np0005473739 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  7 09:25:16 np0005473739 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  7 09:25:16 np0005473739 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  7 09:25:17 np0005473739 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Oct  7 09:25:17 np0005473739 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Oct  7 09:25:18 np0005473739 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  7 09:25:18 np0005473739 systemd[1]: Starting man-db-cache-update.service...
Oct  7 09:25:18 np0005473739 systemd[1]: Reloading.
Oct  7 09:25:18 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:25:18 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:25:18 np0005473739 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  7 09:25:19 np0005473739 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  7 09:25:19 np0005473739 systemd[1]: Finished man-db-cache-update.service.
Oct  7 09:25:19 np0005473739 systemd[1]: run-r84069773d4c643fb9fd0c170f2a7cf76.service: Deactivated successfully.
Oct  7 09:25:20 np0005473739 python3.9[43075]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  7 09:25:20 np0005473739 systemd[1]: Reloading.
Oct  7 09:25:20 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:25:20 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:25:20 np0005473739 systemd[1]: Starting Open vSwitch Database Unit...
Oct  7 09:25:20 np0005473739 chown[43118]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Oct  7 09:25:20 np0005473739 ovs-ctl[43123]: /etc/openvswitch/conf.db does not exist ... (warning).
Oct  7 09:25:20 np0005473739 ovs-ctl[43123]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Oct  7 09:25:20 np0005473739 ovs-ctl[43123]: Starting ovsdb-server [  OK  ]
Oct  7 09:25:20 np0005473739 ovs-vsctl[43172]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Oct  7 09:25:20 np0005473739 ovs-vsctl[43191]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"52903340-c961-4e65-8ffc-97dd01d2b2e2\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Oct  7 09:25:20 np0005473739 ovs-ctl[43123]: Configuring Open vSwitch system IDs [  OK  ]
Oct  7 09:25:20 np0005473739 ovs-ctl[43123]: Enabling remote OVSDB managers [  OK  ]
Oct  7 09:25:20 np0005473739 ovs-vsctl[43197]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  7 09:25:20 np0005473739 systemd[1]: Started Open vSwitch Database Unit.
Oct  7 09:25:20 np0005473739 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Oct  7 09:25:20 np0005473739 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Oct  7 09:25:20 np0005473739 systemd[1]: Starting Open vSwitch Forwarding Unit...
Oct  7 09:25:20 np0005473739 kernel: openvswitch: Open vSwitch switching datapath
Oct  7 09:25:20 np0005473739 ovs-ctl[43241]: Inserting openvswitch module [  OK  ]
Oct  7 09:25:21 np0005473739 ovs-ctl[43210]: Starting ovs-vswitchd [  OK  ]
Oct  7 09:25:21 np0005473739 ovs-vsctl[43259]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Oct  7 09:25:21 np0005473739 ovs-ctl[43210]: Enabling remote OVSDB managers [  OK  ]
Oct  7 09:25:21 np0005473739 systemd[1]: Started Open vSwitch Forwarding Unit.
Oct  7 09:25:21 np0005473739 systemd[1]: Starting Open vSwitch...
Oct  7 09:25:21 np0005473739 systemd[1]: Finished Open vSwitch.
Oct  7 09:25:21 np0005473739 python3.9[43410]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:25:22 np0005473739 python3.9[43562]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  7 09:25:23 np0005473739 kernel: SELinux:  Converting 2737 SID table entries...
Oct  7 09:25:23 np0005473739 kernel: SELinux:  policy capability network_peer_controls=1
Oct  7 09:25:23 np0005473739 kernel: SELinux:  policy capability open_perms=1
Oct  7 09:25:23 np0005473739 kernel: SELinux:  policy capability extended_socket_class=1
Oct  7 09:25:23 np0005473739 kernel: SELinux:  policy capability always_check_network=0
Oct  7 09:25:23 np0005473739 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  7 09:25:23 np0005473739 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  7 09:25:23 np0005473739 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  7 09:25:24 np0005473739 python3.9[43717]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:25:25 np0005473739 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Oct  7 09:25:25 np0005473739 python3.9[43875]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:25:27 np0005473739 python3.9[44028]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:25:29 np0005473739 python3.9[44315]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  7 09:25:30 np0005473739 python3.9[44465]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:25:30 np0005473739 python3.9[44619]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:25:32 np0005473739 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  7 09:25:32 np0005473739 systemd[1]: Starting man-db-cache-update.service...
Oct  7 09:25:32 np0005473739 systemd[1]: Reloading.
Oct  7 09:25:32 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:25:32 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:25:32 np0005473739 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  7 09:25:32 np0005473739 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  7 09:25:32 np0005473739 systemd[1]: Finished man-db-cache-update.service.
Oct  7 09:25:32 np0005473739 systemd[1]: run-r826d2ec6875246ccbf68d5c965d3b5b3.service: Deactivated successfully.
Oct  7 09:25:33 np0005473739 python3.9[44936]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:25:33 np0005473739 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Oct  7 09:25:33 np0005473739 systemd[1]: Stopped Network Manager Wait Online.
Oct  7 09:25:33 np0005473739 systemd[1]: Stopping Network Manager Wait Online...
Oct  7 09:25:33 np0005473739 systemd[1]: Stopping Network Manager...
Oct  7 09:25:33 np0005473739 NetworkManager[3983]: <info>  [1759843533.8418] caught SIGTERM, shutting down normally.
Oct  7 09:25:33 np0005473739 NetworkManager[3983]: <info>  [1759843533.8430] dhcp4 (eth0): canceled DHCP transaction
Oct  7 09:25:33 np0005473739 NetworkManager[3983]: <info>  [1759843533.8430] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  7 09:25:33 np0005473739 NetworkManager[3983]: <info>  [1759843533.8430] dhcp4 (eth0): state changed no lease
Oct  7 09:25:33 np0005473739 NetworkManager[3983]: <info>  [1759843533.8432] manager: NetworkManager state is now CONNECTED_SITE
Oct  7 09:25:33 np0005473739 NetworkManager[3983]: <info>  [1759843533.8503] exiting (success)
Oct  7 09:25:33 np0005473739 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  7 09:25:33 np0005473739 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  7 09:25:33 np0005473739 systemd[1]: NetworkManager.service: Deactivated successfully.
Oct  7 09:25:33 np0005473739 systemd[1]: Stopped Network Manager.
Oct  7 09:25:33 np0005473739 systemd[1]: NetworkManager.service: Consumed 8.238s CPU time, 4.0M memory peak, read 0B from disk, written 18.0K to disk.
Oct  7 09:25:33 np0005473739 systemd[1]: Starting Network Manager...
Oct  7 09:25:33 np0005473739 NetworkManager[44949]: <info>  [1759843533.9302] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:e0a9c9d9-31fa-465e-9260-63a0cf6f9771)
Oct  7 09:25:33 np0005473739 NetworkManager[44949]: <info>  [1759843533.9304] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Oct  7 09:25:33 np0005473739 NetworkManager[44949]: <info>  [1759843533.9363] manager[0x55bc9d7d5090]: monitoring kernel firmware directory '/lib/firmware'.
Oct  7 09:25:33 np0005473739 systemd[1]: Starting Hostname Service...
Oct  7 09:25:34 np0005473739 systemd[1]: Started Hostname Service.
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0114] hostname: hostname: using hostnamed
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0114] hostname: static hostname changed from (none) to "compute-0"
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0120] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0125] manager[0x55bc9d7d5090]: rfkill: Wi-Fi hardware radio set enabled
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0125] manager[0x55bc9d7d5090]: rfkill: WWAN hardware radio set enabled
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0146] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0156] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0157] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0157] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0158] manager: Networking is enabled by state file
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0160] settings: Loaded settings plugin: keyfile (internal)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0164] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0191] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0203] dhcp: init: Using DHCP client 'internal'
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0206] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0212] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0218] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0228] device (lo): Activation: starting connection 'lo' (77c9c6e3-c233-4fb7-ae8f-731a952ee065)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0235] device (eth0): carrier: link connected
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0239] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0243] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0244] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0249] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0254] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0258] device (eth1): carrier: link connected
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0262] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0265] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (033f9b1d-b888-5181-86df-f7fa46fd3740) (indicated)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0266] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0270] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0276] device (eth1): Activation: starting connection 'ci-private-network' (033f9b1d-b888-5181-86df-f7fa46fd3740)
Oct  7 09:25:34 np0005473739 systemd[1]: Started Network Manager.
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0283] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0290] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0299] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0303] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0306] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0311] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0313] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0316] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0321] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0329] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0332] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0347] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0363] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0372] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0375] dhcp4 (eth0): state changed new lease, address=38.102.83.153
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0377] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  7 09:25:34 np0005473739 systemd[1]: Starting Network Manager Wait Online...
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0382] device (lo): Activation: successful, device activated.
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0391] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0470] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0478] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0481] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0483] manager: NetworkManager state is now CONNECTED_LOCAL
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0487] device (eth1): Activation: successful, device activated.
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0496] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0498] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0503] manager: NetworkManager state is now CONNECTED_SITE
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0507] device (eth0): Activation: successful, device activated.
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0513] manager: NetworkManager state is now CONNECTED_GLOBAL
Oct  7 09:25:34 np0005473739 NetworkManager[44949]: <info>  [1759843534.0573] manager: startup complete
Oct  7 09:25:34 np0005473739 systemd[1]: Finished Network Manager Wait Online.
Oct  7 09:25:34 np0005473739 python3.9[45163]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:25:39 np0005473739 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  7 09:25:39 np0005473739 systemd[1]: Starting man-db-cache-update.service...
Oct  7 09:25:39 np0005473739 systemd[1]: Reloading.
Oct  7 09:25:39 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:25:39 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:25:39 np0005473739 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  7 09:25:40 np0005473739 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  7 09:25:40 np0005473739 systemd[1]: Finished man-db-cache-update.service.
Oct  7 09:25:40 np0005473739 systemd[1]: run-rafccebed769042a584060fe8b0004c80.service: Deactivated successfully.
Oct  7 09:25:41 np0005473739 python3.9[45628]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:25:41 np0005473739 python3.9[45780]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:25:42 np0005473739 python3.9[45934]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:25:43 np0005473739 python3.9[46086]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:25:44 np0005473739 python3.9[46238]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:25:44 np0005473739 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  7 09:25:44 np0005473739 python3.9[46390]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:25:45 np0005473739 python3.9[46542]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:25:45 np0005473739 python3.9[46665]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1759843544.8525531-229-187414151661175/.source _original_basename=.2ulhpb7d follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:25:46 np0005473739 python3.9[46817]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:25:47 np0005473739 python3.9[46969]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Oct  7 09:25:48 np0005473739 python3.9[47121]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:25:50 np0005473739 python3.9[47548]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Oct  7 09:25:51 np0005473739 ansible-async_wrapper.py[47723]: Invoked with j880471830299 300 /home/zuul/.ansible/tmp/ansible-tmp-1759843550.5639288-295-253082719855659/AnsiballZ_edpm_os_net_config.py _
Oct  7 09:25:51 np0005473739 ansible-async_wrapper.py[47726]: Starting module and watcher
Oct  7 09:25:51 np0005473739 ansible-async_wrapper.py[47726]: Start watching 47727 (300)
Oct  7 09:25:51 np0005473739 ansible-async_wrapper.py[47727]: Start module (47727)
Oct  7 09:25:51 np0005473739 ansible-async_wrapper.py[47723]: Return async_wrapper task started.
Oct  7 09:25:51 np0005473739 python3.9[47728]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Oct  7 09:25:52 np0005473739 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Oct  7 09:25:52 np0005473739 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Oct  7 09:25:52 np0005473739 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Oct  7 09:25:52 np0005473739 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Oct  7 09:25:52 np0005473739 kernel: cfg80211: failed to load regulatory.db
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.3764] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.3779] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4271] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4272] audit: op="connection-add" uuid="10175fc0-bcf5-4e21-af46-e903752daad1" name="br-ex-br" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4290] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4291] audit: op="connection-add" uuid="95ff70b1-b402-4dc3-b62d-3c755d5ae620" name="br-ex-port" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4304] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4306] audit: op="connection-add" uuid="d269bd48-f1ac-469a-8e48-e3096df3ab88" name="eth1-port" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4321] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4322] audit: op="connection-add" uuid="6ee2b959-78dc-457b-923d-b5403f21987b" name="vlan20-port" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4334] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4335] audit: op="connection-add" uuid="b33b69ef-b2ed-4917-b949-05b0b470081a" name="vlan21-port" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4346] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4348] audit: op="connection-add" uuid="f375ac65-14b5-4e71-98d7-59a5c29ef42b" name="vlan22-port" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4358] manager: (vlan23): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/10)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4360] audit: op="connection-add" uuid="a3b583bd-2087-48ad-90be-b06de2e98123" name="vlan23-port" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4379] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.timestamp,connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.dhcp-timeout,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4394] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4395] audit: op="connection-add" uuid="a261231f-64c8-4987-b956-7d512d5f9926" name="br-ex-if" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4428] audit: op="connection-update" uuid="033f9b1d-b888-5181-86df-f7fa46fd3740" name="ci-private-network" args="ovs-interface.type,ovs-external-ids.data,connection.timestamp,connection.controller,connection.master,connection.port-type,connection.slave-type,ipv6.method,ipv6.routes,ipv6.addr-gen-mode,ipv6.addresses,ipv6.dns,ipv6.routing-rules,ipv4.method,ipv4.never-default,ipv4.routes,ipv4.addresses,ipv4.dns,ipv4.routing-rules" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4443] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4444] audit: op="connection-add" uuid="3a61a637-eb0a-4ddd-b94a-8837fde69f68" name="vlan20-if" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4458] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4460] audit: op="connection-add" uuid="78ae866e-6a99-4e84-82f2-c17e16ba61f2" name="vlan21-if" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4476] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4478] audit: op="connection-add" uuid="389c4613-ce52-4b6a-94c3-42cfc21ea300" name="vlan22-if" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4495] manager: (vlan23): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/15)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4497] audit: op="connection-add" uuid="06fc7f7f-cb9e-4c2d-b74e-5be132604c89" name="vlan23-if" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4508] audit: op="connection-delete" uuid="8162f4e3-3ec3-3b68-a641-59e6a34c7445" name="Wired connection 1" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4520] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4530] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4534] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (10175fc0-bcf5-4e21-af46-e903752daad1)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4535] audit: op="connection-activate" uuid="10175fc0-bcf5-4e21-af46-e903752daad1" name="br-ex-br" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4538] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4545] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4549] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (95ff70b1-b402-4dc3-b62d-3c755d5ae620)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4552] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4558] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4562] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (d269bd48-f1ac-469a-8e48-e3096df3ab88)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4564] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4570] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4574] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (6ee2b959-78dc-457b-923d-b5403f21987b)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4576] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4583] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4587] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (b33b69ef-b2ed-4917-b949-05b0b470081a)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4589] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4595] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4598] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (f375ac65-14b5-4e71-98d7-59a5c29ef42b)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4600] device (vlan23)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4606] device (vlan23)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4609] device (vlan23)[Open vSwitch Port]: Activation: starting connection 'vlan23-port' (a3b583bd-2087-48ad-90be-b06de2e98123)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4610] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4612] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4614] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4619] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4623] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4627] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (a261231f-64c8-4987-b956-7d512d5f9926)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4628] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4631] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4633] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4633] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4635] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4645] device (eth1): disconnecting for new activation request.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4645] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4648] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4650] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4652] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4655] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4659] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4664] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (3a61a637-eb0a-4ddd-b94a-8837fde69f68)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4665] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4668] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4669] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4670] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4673] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4677] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4682] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (78ae866e-6a99-4e84-82f2-c17e16ba61f2)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4682] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4686] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4688] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4690] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4693] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4698] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4702] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (389c4613-ce52-4b6a-94c3-42cfc21ea300)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4703] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4706] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4708] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4710] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4713] device (vlan23)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4717] device (vlan23)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4722] device (vlan23)[Open vSwitch Interface]: Activation: starting connection 'vlan23-if' (06fc7f7f-cb9e-4c2d-b74e-5be132604c89)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4723] device (vlan23)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4726] device (vlan23)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4728] device (vlan23)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4729] device (vlan23)[Open vSwitch Port]: Activation: connection 'vlan23-port' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4731] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4743] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4746] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4748] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4750] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4757] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4761] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4765] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4768] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4770] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4776] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 kernel: ovs-system: entered promiscuous mode
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4780] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4784] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4785] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4791] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4795] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 kernel: Timeout policy base is empty
Oct  7 09:25:53 np0005473739 systemd-udevd[47735]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4798] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4803] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4808] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4812] device (vlan23)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4815] device (vlan23)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4817] device (vlan23)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4822] device (vlan23)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4826] dhcp4 (eth0): canceled DHCP transaction
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4826] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4826] dhcp4 (eth0): state changed no lease
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4828] dhcp4 (eth0): activation: beginning transaction (no timeout)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4843] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4847] audit: op="device-reapply" interface="eth1" ifindex=3 pid=47729 uid=0 result="fail" reason="Device is not activated"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4853] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4892] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4896] dhcp4 (eth0): state changed new lease, address=38.102.83.153
Oct  7 09:25:53 np0005473739 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4944] device (eth1): disconnecting for new activation request.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4945] audit: op="connection-activate" uuid="033f9b1d-b888-5181-86df-f7fa46fd3740" name="ci-private-network" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4949] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.4956] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5079] device (eth1): Activation: starting connection 'ci-private-network' (033f9b1d-b888-5181-86df-f7fa46fd3740)
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5084] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5087] device (vlan23)[Open vSwitch Interface]: Activation: connection 'vlan23-if' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5098] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5101] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5107] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 kernel: br-ex: entered promiscuous mode
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5111] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5114] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5115] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5115] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5116] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5117] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5118] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47729 uid=0 result="success"
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5118] device (vlan23)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5123] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5128] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5132] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5134] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5138] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5141] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5144] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5147] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5151] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5155] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5159] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5164] device (vlan23)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5167] device (vlan23)[Open vSwitch Port]: Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5174] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5178] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 kernel: vlan22: entered promiscuous mode
Oct  7 09:25:53 np0005473739 systemd-udevd[47733]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5230] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5235] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5240] device (eth1): Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5255] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Oct  7 09:25:53 np0005473739 kernel: vlan23: entered promiscuous mode
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5290] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5315] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5323] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5331] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5337] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5346] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 kernel: vlan20: entered promiscuous mode
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5367] device (vlan23)[Open vSwitch Interface]: carrier: link connected
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5377] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5381] device (vlan23)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5390] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5397] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5404] device (vlan23)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5406] device (vlan23)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5410] device (vlan23)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5446] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Oct  7 09:25:53 np0005473739 kernel: vlan21: entered promiscuous mode
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5462] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5493] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5496] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5501] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5530] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5542] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5571] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5573] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Oct  7 09:25:53 np0005473739 NetworkManager[44949]: <info>  [1759843553.5578] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Oct  7 09:25:54 np0005473739 NetworkManager[44949]: <info>  [1759843554.6689] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47729 uid=0 result="success"
Oct  7 09:25:54 np0005473739 NetworkManager[44949]: <info>  [1759843554.8230] checkpoint[0x55bc9d7ab950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Oct  7 09:25:54 np0005473739 NetworkManager[44949]: <info>  [1759843554.8232] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=47729 uid=0 result="success"
Oct  7 09:25:55 np0005473739 NetworkManager[44949]: <info>  [1759843555.1485] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47729 uid=0 result="success"
Oct  7 09:25:55 np0005473739 NetworkManager[44949]: <info>  [1759843555.1508] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47729 uid=0 result="success"
Oct  7 09:25:55 np0005473739 python3.9[48088]: ansible-ansible.legacy.async_status Invoked with jid=j880471830299.47723 mode=status _async_dir=/root/.ansible_async
Oct  7 09:25:55 np0005473739 NetworkManager[44949]: <info>  [1759843555.4863] audit: op="networking-control" arg="global-dns-configuration" pid=47729 uid=0 result="success"
Oct  7 09:25:55 np0005473739 NetworkManager[44949]: <info>  [1759843555.4900] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Oct  7 09:25:55 np0005473739 NetworkManager[44949]: <info>  [1759843555.4939] audit: op="networking-control" arg="global-dns-configuration" pid=47729 uid=0 result="success"
Oct  7 09:25:55 np0005473739 NetworkManager[44949]: <info>  [1759843555.4966] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47729 uid=0 result="success"
Oct  7 09:25:55 np0005473739 NetworkManager[44949]: <info>  [1759843555.6565] checkpoint[0x55bc9d7aba20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Oct  7 09:25:55 np0005473739 NetworkManager[44949]: <info>  [1759843555.6573] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=47729 uid=0 result="success"
Oct  7 09:25:55 np0005473739 ansible-async_wrapper.py[47727]: Module complete (47727)
Oct  7 09:25:56 np0005473739 ansible-async_wrapper.py[47726]: Done in kid B.
Oct  7 09:25:58 np0005473739 python3.9[48192]: ansible-ansible.legacy.async_status Invoked with jid=j880471830299.47723 mode=status _async_dir=/root/.ansible_async
Oct  7 09:25:59 np0005473739 python3.9[48292]: ansible-ansible.legacy.async_status Invoked with jid=j880471830299.47723 mode=cleanup _async_dir=/root/.ansible_async
Oct  7 09:26:00 np0005473739 python3.9[48444]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:26:00 np0005473739 python3.9[48567]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759843559.558381-322-254702330986552/.source.returncode _original_basename=.x11szq1a follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:26:01 np0005473739 python3.9[48719]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:26:01 np0005473739 python3.9[48842]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759843560.9109588-338-10257740509766/.source.cfg _original_basename=.5dlkq57i follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:26:02 np0005473739 python3.9[48995]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:26:02 np0005473739 systemd[1]: Reloading Network Manager...
Oct  7 09:26:02 np0005473739 NetworkManager[44949]: <info>  [1759843562.8663] audit: op="reload" arg="0" pid=48999 uid=0 result="success"
Oct  7 09:26:02 np0005473739 NetworkManager[44949]: <info>  [1759843562.8674] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Oct  7 09:26:02 np0005473739 systemd[1]: Reloaded Network Manager.
Oct  7 09:26:03 np0005473739 systemd[1]: session-10.scope: Deactivated successfully.
Oct  7 09:26:03 np0005473739 systemd[1]: session-10.scope: Consumed 48.452s CPU time.
Oct  7 09:26:03 np0005473739 systemd-logind[801]: Session 10 logged out. Waiting for processes to exit.
Oct  7 09:26:03 np0005473739 systemd-logind[801]: Removed session 10.
Oct  7 09:26:04 np0005473739 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Oct  7 09:26:09 np0005473739 systemd-logind[801]: New session 11 of user zuul.
Oct  7 09:26:09 np0005473739 systemd[1]: Started Session 11 of User zuul.
Oct  7 09:26:10 np0005473739 python3.9[49186]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:26:11 np0005473739 python3.9[49340]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:26:12 np0005473739 python3.9[49534]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:26:12 np0005473739 systemd[1]: session-11.scope: Deactivated successfully.
Oct  7 09:26:12 np0005473739 systemd[1]: session-11.scope: Consumed 2.222s CPU time.
Oct  7 09:26:12 np0005473739 systemd-logind[801]: Session 11 logged out. Waiting for processes to exit.
Oct  7 09:26:12 np0005473739 systemd-logind[801]: Removed session 11.
Oct  7 09:26:12 np0005473739 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  7 09:26:18 np0005473739 systemd-logind[801]: New session 12 of user zuul.
Oct  7 09:26:18 np0005473739 systemd[1]: Started Session 12 of User zuul.
Oct  7 09:26:19 np0005473739 python3.9[49716]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:26:19 np0005473739 python3.9[49870]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:26:21 np0005473739 python3.9[50026]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:26:21 np0005473739 python3.9[50111]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:26:23 np0005473739 python3.9[50264]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:26:24 np0005473739 python3.9[50460]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:26:25 np0005473739 python3.9[50612]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:26:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-compat3086001878-merged.mount: Deactivated successfully.
Oct  7 09:26:25 np0005473739 podman[50613]: 2025-10-07 13:26:25.785412143 +0000 UTC m=+0.042894844 system refresh
Oct  7 09:26:26 np0005473739 python3.9[50774]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:26:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:26:27 np0005473739 python3.9[50897]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843585.9889445-79-191009117817021/.source.json follow=False _original_basename=podman_network_config.j2 checksum=0cbfc1525998e47459080fa11fa3b81d2b318d2b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:26:28 np0005473739 python3.9[51049]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:26:28 np0005473739 python3.9[51172]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759843587.69003-94-105071635020575/.source.conf follow=False _original_basename=registries.conf.j2 checksum=c13abb65735f8999338ec85714f5cfdece71aea3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:26:29 np0005473739 python3.9[51324]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:26:30 np0005473739 python3.9[51476]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:26:30 np0005473739 python3.9[51628]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:26:31 np0005473739 python3.9[51780]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:26:32 np0005473739 python3.9[51932]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:26:34 np0005473739 python3.9[52085]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:26:35 np0005473739 python3.9[52239]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:26:35 np0005473739 python3.9[52391]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:26:36 np0005473739 python3.9[52543]: ansible-service_facts Invoked
Oct  7 09:26:36 np0005473739 network[52560]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  7 09:26:36 np0005473739 network[52561]: 'network-scripts' will be removed from distribution in near future.
Oct  7 09:26:36 np0005473739 network[52562]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  7 09:26:42 np0005473739 python3.9[53016]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:26:45 np0005473739 python3.9[53169]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  7 09:26:46 np0005473739 python3.9[53321]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:26:46 np0005473739 python3.9[53446]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759843605.8071275-226-182519309589213/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:26:47 np0005473739 python3.9[53600]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:26:48 np0005473739 python3.9[53725]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759843607.2090676-241-125428610598308/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:26:49 np0005473739 python3.9[53879]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:26:50 np0005473739 python3.9[54033]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:26:51 np0005473739 python3.9[54117]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:26:52 np0005473739 python3.9[54271]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:26:53 np0005473739 python3.9[54355]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:26:53 np0005473739 chronyd[780]: chronyd exiting
Oct  7 09:26:53 np0005473739 systemd[1]: Stopping NTP client/server...
Oct  7 09:26:53 np0005473739 systemd[1]: chronyd.service: Deactivated successfully.
Oct  7 09:26:53 np0005473739 systemd[1]: Stopped NTP client/server.
Oct  7 09:26:53 np0005473739 systemd[1]: Starting NTP client/server...
Oct  7 09:26:53 np0005473739 chronyd[54363]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
Oct  7 09:26:53 np0005473739 chronyd[54363]: Frequency -26.274 +/- 0.606 ppm read from /var/lib/chrony/drift
Oct  7 09:26:53 np0005473739 chronyd[54363]: Loaded seccomp filter (level 2)
Oct  7 09:26:53 np0005473739 systemd[1]: Started NTP client/server.
Oct  7 09:26:53 np0005473739 systemd[1]: session-12.scope: Deactivated successfully.
Oct  7 09:26:53 np0005473739 systemd[1]: session-12.scope: Consumed 24.743s CPU time.
Oct  7 09:26:53 np0005473739 systemd-logind[801]: Session 12 logged out. Waiting for processes to exit.
Oct  7 09:26:53 np0005473739 systemd-logind[801]: Removed session 12.
Oct  7 09:26:58 np0005473739 systemd-logind[801]: New session 13 of user zuul.
Oct  7 09:26:58 np0005473739 systemd[1]: Started Session 13 of User zuul.
Oct  7 09:26:59 np0005473739 python3.9[54544]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:27:00 np0005473739 python3.9[54696]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:27:00 np0005473739 python3.9[54819]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/ceph-networks.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759843619.3799827-34-168835414600133/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=729ea8396013e3343245d6e934e0dcef55029ad2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:27:01 np0005473739 systemd[1]: session-13.scope: Deactivated successfully.
Oct  7 09:27:01 np0005473739 systemd[1]: session-13.scope: Consumed 1.594s CPU time.
Oct  7 09:27:01 np0005473739 systemd-logind[801]: Session 13 logged out. Waiting for processes to exit.
Oct  7 09:27:01 np0005473739 systemd-logind[801]: Removed session 13.
Oct  7 09:27:06 np0005473739 systemd-logind[801]: New session 14 of user zuul.
Oct  7 09:27:06 np0005473739 systemd[1]: Started Session 14 of User zuul.
Oct  7 09:27:07 np0005473739 python3.9[54997]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:27:08 np0005473739 python3.9[55153]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:27:09 np0005473739 python3.9[55328]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:27:10 np0005473739 python3.9[55451]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1759843628.9635804-41-197466683428973/.source.json _original_basename=.bqp46uh8 follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:27:11 np0005473739 python3.9[55603]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:27:11 np0005473739 python3.9[55726]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759843630.923856-64-17789212779735/.source _original_basename=.b6dagsf5 follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:27:12 np0005473739 python3.9[55878]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:27:13 np0005473739 python3.9[56030]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:27:13 np0005473739 python3.9[56153]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759843632.7938075-88-64789288623844/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:27:14 np0005473739 python3.9[56305]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:27:15 np0005473739 python3.9[56428]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759843634.115342-88-110436899817762/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:27:15 np0005473739 python3.9[56580]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:27:16 np0005473739 python3.9[56732]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:27:17 np0005473739 python3.9[56855]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843636.039337-125-96872805856233/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:27:17 np0005473739 python3.9[57007]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:27:18 np0005473739 python3.9[57130]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843637.230113-140-18164855471249/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:27:19 np0005473739 python3.9[57282]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:27:19 np0005473739 systemd[1]: Reloading.
Oct  7 09:27:19 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:27:19 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:27:19 np0005473739 systemd[1]: Reloading.
Oct  7 09:27:19 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:27:19 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:27:19 np0005473739 systemd[1]: Starting EDPM Container Shutdown...
Oct  7 09:27:19 np0005473739 systemd[1]: Finished EDPM Container Shutdown.
Oct  7 09:27:20 np0005473739 python3.9[57509]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:27:20 np0005473739 python3.9[57632]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843639.9256608-163-68740467825695/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:27:21 np0005473739 python3.9[57784]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:27:22 np0005473739 python3.9[57907]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843641.104961-178-229359416071115/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:27:22 np0005473739 python3.9[58059]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:27:22 np0005473739 systemd[1]: Reloading.
Oct  7 09:27:22 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:27:22 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:27:23 np0005473739 systemd[1]: Reloading.
Oct  7 09:27:23 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:27:23 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:27:23 np0005473739 systemd[1]: Starting Create netns directory...
Oct  7 09:27:23 np0005473739 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  7 09:27:23 np0005473739 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  7 09:27:23 np0005473739 systemd[1]: Finished Create netns directory.
Oct  7 09:27:24 np0005473739 python3.9[58286]: ansible-ansible.builtin.service_facts Invoked
Oct  7 09:27:24 np0005473739 network[58303]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  7 09:27:24 np0005473739 network[58304]: 'network-scripts' will be removed from distribution in near future.
Oct  7 09:27:24 np0005473739 network[58305]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  7 09:27:27 np0005473739 python3.9[58569]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:27:27 np0005473739 systemd[1]: Reloading.
Oct  7 09:27:27 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:27:27 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:27:27 np0005473739 systemd[1]: Stopping IPv4 firewall with iptables...
Oct  7 09:27:28 np0005473739 iptables.init[58609]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Oct  7 09:27:28 np0005473739 iptables.init[58609]: iptables: Flushing firewall rules: [  OK  ]
Oct  7 09:27:28 np0005473739 systemd[1]: iptables.service: Deactivated successfully.
Oct  7 09:27:28 np0005473739 systemd[1]: Stopped IPv4 firewall with iptables.
Oct  7 09:27:29 np0005473739 python3.9[58805]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:27:30 np0005473739 python3.9[58959]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:27:30 np0005473739 systemd[1]: Reloading.
Oct  7 09:27:30 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:27:30 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:27:30 np0005473739 systemd[1]: Starting Netfilter Tables...
Oct  7 09:27:30 np0005473739 systemd[1]: Finished Netfilter Tables.
Oct  7 09:27:31 np0005473739 python3.9[59151]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:27:32 np0005473739 python3.9[59304]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:27:33 np0005473739 python3.9[59429]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759843651.861661-247-229221253311812/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=4729b6ffc5b555fa142bf0b6e6dc15609cb89a22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:27:33 np0005473739 python3.9[59580]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:27:59 np0005473739 systemd[1]: session-14.scope: Deactivated successfully.
Oct  7 09:27:59 np0005473739 systemd[1]: session-14.scope: Consumed 20.006s CPU time.
Oct  7 09:27:59 np0005473739 systemd-logind[801]: Session 14 logged out. Waiting for processes to exit.
Oct  7 09:27:59 np0005473739 systemd-logind[801]: Removed session 14.
Oct  7 09:28:12 np0005473739 systemd-logind[801]: New session 15 of user zuul.
Oct  7 09:28:12 np0005473739 systemd[1]: Started Session 15 of User zuul.
Oct  7 09:28:13 np0005473739 python3.9[59774]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:28:14 np0005473739 python3.9[59930]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:15 np0005473739 python3.9[60105]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:15 np0005473739 python3.9[60183]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.2gu2jomr recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:16 np0005473739 python3.9[60335]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:17 np0005473739 python3.9[60413]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.bb2s7uth recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:17 np0005473739 python3.9[60565]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:28:18 np0005473739 python3.9[60717]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:19 np0005473739 python3.9[60795]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:28:19 np0005473739 python3.9[60947]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:20 np0005473739 python3.9[61025]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:28:21 np0005473739 python3.9[61177]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:21 np0005473739 python3.9[61329]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:22 np0005473739 python3.9[61407]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:22 np0005473739 python3.9[61559]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:23 np0005473739 python3.9[61637]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:24 np0005473739 python3.9[61789]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:28:24 np0005473739 systemd[1]: Reloading.
Oct  7 09:28:24 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:28:24 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:28:25 np0005473739 python3.9[61978]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:25 np0005473739 python3.9[62056]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:26 np0005473739 python3.9[62208]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:26 np0005473739 python3.9[62286]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:27 np0005473739 python3.9[62438]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:28:27 np0005473739 systemd[1]: Reloading.
Oct  7 09:28:27 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:28:27 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:28:27 np0005473739 systemd[1]: Starting Create netns directory...
Oct  7 09:28:27 np0005473739 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  7 09:28:27 np0005473739 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  7 09:28:27 np0005473739 systemd[1]: Finished Create netns directory.
Oct  7 09:28:28 np0005473739 python3.9[62629]: ansible-ansible.builtin.service_facts Invoked
Oct  7 09:28:28 np0005473739 network[62646]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  7 09:28:28 np0005473739 network[62647]: 'network-scripts' will be removed from distribution in near future.
Oct  7 09:28:28 np0005473739 network[62648]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  7 09:28:32 np0005473739 python3.9[62911]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:32 np0005473739 python3.9[62989]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:33 np0005473739 python3.9[63141]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:34 np0005473739 python3.9[63293]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:35 np0005473739 python3.9[63416]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843713.7974625-216-185065362095308/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:36 np0005473739 python3.9[63568]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  7 09:28:36 np0005473739 systemd[1]: Starting Time & Date Service...
Oct  7 09:28:36 np0005473739 systemd[1]: Started Time & Date Service.
Oct  7 09:28:36 np0005473739 python3.9[63724]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:37 np0005473739 python3.9[63876]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:38 np0005473739 python3.9[63999]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759843717.0381632-251-96629816242923/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:38 np0005473739 python3.9[64151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:39 np0005473739 python3.9[64274]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759843718.2340722-266-18769177995684/.source.yaml _original_basename=.m7avgvog follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:39 np0005473739 python3.9[64426]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:40 np0005473739 python3.9[64549]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843719.4437401-281-24648339198498/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:41 np0005473739 python3.9[64701]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:28:41 np0005473739 python3.9[64854]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:28:42 np0005473739 python3[65007]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  7 09:28:43 np0005473739 python3.9[65159]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:44 np0005473739 python3.9[65282]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843723.062817-320-123157070411356/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:44 np0005473739 python3.9[65434]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:45 np0005473739 python3.9[65557]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843724.364671-335-77137977378574/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:46 np0005473739 python3.9[65709]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:46 np0005473739 python3.9[65832]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843725.6364713-350-139416699572266/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:47 np0005473739 python3.9[65984]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:48 np0005473739 python3.9[66107]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843727.1321669-365-156679835329683/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:49 np0005473739 python3.9[66259]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:28:49 np0005473739 python3.9[66382]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759843728.570203-380-239155820606655/.source.nft follow=False _original_basename=ruleset.j2 checksum=693377dc03e5b6b24713cb537b18b88774724e35 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:50 np0005473739 python3.9[66534]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:51 np0005473739 python3.9[66686]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:28:51 np0005473739 python3.9[66845]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
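The blockinfile entry above carries its multi-line block inline, with `#012` — rsyslog's octal escape for the LF control character. When post-processing these logs, a small decoder (a hypothetical helper, sketched here) recovers the original block content:

```python
import re

def decode_rsyslog_escapes(text: str) -> str:
    """Decode rsyslog's #NNN octal escapes for control characters (#012 -> '\n')."""
    return re.sub(r"#([0-7]{3})", lambda m: chr(int(m.group(1), 8)), text)

block = ('include "/etc/nftables/iptables.nft"#012'
         'include "/etc/nftables/edpm-chains.nft"#012')
print(decode_rsyslog_escapes(block))
```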
Oct  7 09:28:52 np0005473739 python3.9[66998]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:53 np0005473739 python3.9[67150]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:28:54 np0005473739 python3.9[67302]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  7 09:28:54 np0005473739 python3.9[67455]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
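The two ansible.posix.mount tasks above mount hugetlbfs at both page sizes and, with state=mounted and boot=True, persist them across reboots. The resulting /etc/fstab entries would look roughly like this (a sketch assembled from the logged parameters — src=none, fstype=hugetlbfs, opts=pagesize=…, dump=0, passno=0):

```
none /dev/hugepages1G hugetlbfs pagesize=1G 0 0
none /dev/hugepages2M hugetlbfs pagesize=2M 0 0
```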
Oct  7 09:28:55 np0005473739 systemd-logind[801]: Session 15 logged out. Waiting for processes to exit.
Oct  7 09:28:55 np0005473739 systemd[1]: session-15.scope: Deactivated successfully.
Oct  7 09:28:55 np0005473739 systemd[1]: session-15.scope: Consumed 33.005s CPU time.
Oct  7 09:28:55 np0005473739 systemd-logind[801]: Removed session 15.
Oct  7 09:29:00 np0005473739 systemd-logind[801]: New session 16 of user zuul.
Oct  7 09:29:01 np0005473739 systemd[1]: Started Session 16 of User zuul.
Oct  7 09:29:01 np0005473739 python3.9[67636]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  7 09:29:02 np0005473739 python3.9[67788]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:29:03 np0005473739 chronyd[54363]: Selected source 23.133.168.247 (pool.ntp.org)
Oct  7 09:29:03 np0005473739 python3.9[67940]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:29:04 np0005473739 python3.9[68092]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCss9vtiNZQ+c6V8hur44yjQlLn3NTXkBVZKor2H62rtudT4XVEQzGxi7gUwyRBte1R7sx4lqPoaKdKJZvRNgQvGY2Lv2fyd8EaX2Wrg97CplWcR7GA/CJbrXqozq2dLKTmKZaHTua3ql5+RRXXYrh+uisLXnV9Q4/PZN1YT5upOOwkGP/XYKheCA9G+0S61h1r3AwCU+J9wev+nTZBNm9WtF7qUcsAxH9AB5+bC45hVFLxIzvmOgaQdHD5W5Ak9xuJGAuzBDm2yj9X+NS9k1lfL9n809pJghMIiQZrC+D9sXqTiQ0aE7Wk7P5urmzOE1ScviwZWTZXPp5U8n1KCwJJ+BeCrhZpZoeSjLGmueTVJ4EDkywA9GavBjsh/S/P8x1Jr3JwVRJu4zaR+Nb2LvdvrhZ2idwDWJvx8jGH0K+R5Dm9ZHNNsgINe8uptoo317TDeW0H/ArL32AgFdDypUijFC1vWCCcysYZ7TYDKiIOmHY3rmhEN4SbXLyiUAA1K5E=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICGuEdBfoHqnIxaSxhCoPMgT82A1vTxx/84MRgwXo+he#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBkThUzBWDevZ6qT2+zfe0gUZaoXo9kc5/FHJDMm+yQRRP0i/33XvNHPmBucX4ysll81rQvIQhwXZ0yEC0uwqxA=#012 create=True mode=0644 path=/tmp/ansible.wx5ddgtr state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:29:05 np0005473739 python3.9[68244]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.wx5ddgtr' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:29:06 np0005473739 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  7 09:29:06 np0005473739 python3.9[68400]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.wx5ddgtr state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
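The session above rebuilds /etc/ssh/ssh_known_hosts by writing a marker-delimited block into a temp file, copying it into place, then deleting the temp file. A minimal sketch of the marker framing blockinfile uses (marker=`# {mark} ANSIBLE MANAGED BLOCK`, marker_begin=BEGIN, marker_end=END, as logged):

```python
MARKER = "# {mark} ANSIBLE MANAGED BLOCK"

def managed_block(lines, marker=MARKER, begin="BEGIN", end="END"):
    """Frame content lines the way blockinfile does in the tasks above."""
    return "\n".join(
        [marker.format(mark=begin), *lines, marker.format(mark=end)]
    ) + "\n"

keys = ["compute-0.ctlplane.example.com ssh-ed25519 AAAA..."]
print(managed_block(keys))
```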
Oct  7 09:29:06 np0005473739 systemd[1]: session-16.scope: Deactivated successfully.
Oct  7 09:29:06 np0005473739 systemd[1]: session-16.scope: Consumed 3.578s CPU time.
Oct  7 09:29:06 np0005473739 systemd-logind[801]: Session 16 logged out. Waiting for processes to exit.
Oct  7 09:29:06 np0005473739 systemd-logind[801]: Removed session 16.
Oct  7 09:29:12 np0005473739 systemd-logind[801]: New session 17 of user zuul.
Oct  7 09:29:12 np0005473739 systemd[1]: Started Session 17 of User zuul.
Oct  7 09:29:13 np0005473739 python3.9[68578]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:29:14 np0005473739 python3.9[68734]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  7 09:29:15 np0005473739 python3.9[68888]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:29:16 np0005473739 python3.9[69041]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:29:17 np0005473739 python3.9[69194]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:29:18 np0005473739 python3.9[69348]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:29:19 np0005473739 python3.9[69503]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:29:19 np0005473739 systemd[1]: session-17.scope: Deactivated successfully.
Oct  7 09:29:19 np0005473739 systemd[1]: session-17.scope: Consumed 5.110s CPU time.
Oct  7 09:29:19 np0005473739 systemd-logind[801]: Session 17 logged out. Waiting for processes to exit.
Oct  7 09:29:19 np0005473739 systemd-logind[801]: Removed session 17.
Oct  7 09:29:24 np0005473739 systemd-logind[801]: New session 18 of user zuul.
Oct  7 09:29:24 np0005473739 systemd[1]: Started Session 18 of User zuul.
Oct  7 09:29:25 np0005473739 python3.9[69682]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:29:26 np0005473739 python3.9[69838]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:29:27 np0005473739 python3.9[69922]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  7 09:29:29 np0005473739 python3.9[70073]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:29:30 np0005473739 python3.9[70224]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  7 09:29:31 np0005473739 python3.9[70374]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:29:31 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 09:29:32 np0005473739 python3.9[70525]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:29:32 np0005473739 systemd[1]: session-18.scope: Deactivated successfully.
Oct  7 09:29:32 np0005473739 systemd[1]: session-18.scope: Consumed 5.785s CPU time.
Oct  7 09:29:32 np0005473739 systemd-logind[801]: Session 18 logged out. Waiting for processes to exit.
Oct  7 09:29:32 np0005473739 systemd-logind[801]: Removed session 18.
Oct  7 09:29:40 np0005473739 systemd-logind[801]: New session 19 of user zuul.
Oct  7 09:29:40 np0005473739 systemd[1]: Started Session 19 of User zuul.
Oct  7 09:29:45 np0005473739 python3[71291]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:29:47 np0005473739 python3[71386]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  7 09:29:49 np0005473739 python3[71413]: ansible-ansible.builtin.stat Invoked with path=/dev/loop3 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:29:49 np0005473739 python3[71439]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-0.img bs=1 count=0 seek=20G#012losetup /dev/loop3 /var/lib/ceph-osd-0.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:29:49 np0005473739 kernel: loop: module loaded
Oct  7 09:29:49 np0005473739 kernel: loop3: detected capacity change from 0 to 41943040
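The kernel line above reports the loop device's new capacity in 512-byte sectors, and it matches the 20 GiB sparse file the preceding dd command created:

```python
GiB = 1 << 30
size_bytes = 20 * GiB        # dd ... count=0 seek=20G extends the file to 20 GiB
sectors = size_bytes // 512  # the kernel reports loop capacity in 512-byte sectors
print(sectors)               # 41943040, as in "detected capacity change from 0 to 41943040"
```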
Oct  7 09:29:49 np0005473739 python3[71474]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop3#012vgcreate ceph_vg0 /dev/loop3#012lvcreate -n ceph_lv0 -l +100%FREE ceph_vg0#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:29:49 np0005473739 lvm[71477]: PV /dev/loop3 not used.
Oct  7 09:29:49 np0005473739 lvm[71479]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  7 09:29:49 np0005473739 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg0.
Oct  7 09:29:49 np0005473739 lvm[71485]:  1 logical volume(s) in volume group "ceph_vg0" now active
Oct  7 09:29:50 np0005473739 lvm[71489]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  7 09:29:50 np0005473739 lvm[71489]: VG ceph_vg0 finished
Oct  7 09:29:50 np0005473739 systemd[1]: lvm-activate-ceph_vg0.service: Deactivated successfully.
Oct  7 09:29:50 np0005473739 python3[71567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-0.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:29:50 np0005473739 python3[71640]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759843790.233032-32841-206373460433581/source dest=/etc/systemd/system/ceph-osd-losetup-0.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=427b1db064a970126b729b07acf99fa7d0eecb9c backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:29:51 np0005473739 python3[71690]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-0.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:29:51 np0005473739 systemd[1]: Reloading.
Oct  7 09:29:51 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:29:51 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:29:51 np0005473739 systemd[1]: Starting Ceph OSD losetup...
Oct  7 09:29:51 np0005473739 bash[71730]: /dev/loop3: [64513]:4329715 (/var/lib/ceph-osd-0.img)
Oct  7 09:29:51 np0005473739 systemd[1]: Finished Ceph OSD losetup.
Oct  7 09:29:51 np0005473739 lvm[71731]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  7 09:29:51 np0005473739 lvm[71731]: VG ceph_vg0 finished
Oct  7 09:29:52 np0005473739 python3[71758]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  7 09:29:53 np0005473739 python3[71785]: ansible-ansible.builtin.stat Invoked with path=/dev/loop4 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:29:54 np0005473739 python3[71811]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-1.img bs=1 count=0 seek=20G#012losetup /dev/loop4 /var/lib/ceph-osd-1.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:29:54 np0005473739 kernel: loop4: detected capacity change from 0 to 41943040
Oct  7 09:29:54 np0005473739 python3[71843]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop4#012vgcreate ceph_vg1 /dev/loop4#012lvcreate -n ceph_lv1 -l +100%FREE ceph_vg1#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:29:54 np0005473739 lvm[71846]: PV /dev/loop4 not used.
Oct  7 09:29:54 np0005473739 lvm[71856]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  7 09:29:54 np0005473739 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg1.
Oct  7 09:29:54 np0005473739 lvm[71858]:  1 logical volume(s) in volume group "ceph_vg1" now active
Oct  7 09:29:54 np0005473739 systemd[1]: lvm-activate-ceph_vg1.service: Deactivated successfully.
Oct  7 09:29:55 np0005473739 python3[71936]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-1.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:29:55 np0005473739 python3[72009]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759843794.7739344-32868-203795829419454/source dest=/etc/systemd/system/ceph-osd-losetup-1.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=19612168ea279db4171b94ee1f8625de1ec44b58 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:29:55 np0005473739 python3[72059]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-1.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:29:55 np0005473739 systemd[1]: Reloading.
Oct  7 09:29:56 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:29:56 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:29:56 np0005473739 systemd[1]: Starting Ceph OSD losetup...
Oct  7 09:29:56 np0005473739 bash[72098]: /dev/loop4: [64513]:4720768 (/var/lib/ceph-osd-1.img)
Oct  7 09:29:56 np0005473739 systemd[1]: Finished Ceph OSD losetup.
Oct  7 09:29:56 np0005473739 lvm[72099]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  7 09:29:56 np0005473739 lvm[72099]: VG ceph_vg1 finished
Oct  7 09:29:56 np0005473739 python3[72125]: ansible-ansible.legacy.dnf Invoked with name=['util-linux', 'lvm2', 'jq', 'podman'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  7 09:29:58 np0005473739 python3[72152]: ansible-ansible.builtin.stat Invoked with path=/dev/loop5 follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:29:58 np0005473739 python3[72178]: ansible-ansible.legacy.command Invoked with _raw_params=dd if=/dev/zero of=/var/lib/ceph-osd-2.img bs=1 count=0 seek=20G#012losetup /dev/loop5 /var/lib/ceph-osd-2.img#012lsblk _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:29:58 np0005473739 kernel: loop5: detected capacity change from 0 to 41943040
Oct  7 09:29:58 np0005473739 python3[72210]: ansible-ansible.legacy.command Invoked with _raw_params=pvcreate /dev/loop5#012vgcreate ceph_vg2 /dev/loop5#012lvcreate -n ceph_lv2 -l +100%FREE ceph_vg2#012lvs _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:29:58 np0005473739 lvm[72213]: PV /dev/loop5 not used.
Oct  7 09:29:58 np0005473739 lvm[72215]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  7 09:29:58 np0005473739 systemd[1]: Started /usr/sbin/lvm vgchange -aay --autoactivation event ceph_vg2.
Oct  7 09:29:58 np0005473739 lvm[72224]:  1 logical volume(s) in volume group "ceph_vg2" now active
Oct  7 09:29:58 np0005473739 lvm[72226]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  7 09:29:58 np0005473739 lvm[72226]: VG ceph_vg2 finished
Oct  7 09:29:59 np0005473739 systemd[1]: lvm-activate-ceph_vg2.service: Deactivated successfully.
Oct  7 09:29:59 np0005473739 python3[72304]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/ceph-osd-losetup-2.service follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:30:00 np0005473739 python3[72377]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759843799.2476811-32895-264788120601712/source dest=/etc/systemd/system/ceph-osd-losetup-2.service mode=0644 force=True follow=False _original_basename=ceph-osd-losetup.service.j2 checksum=4c5b1bc5693c499ffe2edaa97d63f5df7075d845 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:30:00 np0005473739 python3[72427]: ansible-ansible.builtin.systemd Invoked with state=started enabled=True name=ceph-osd-losetup-2.service daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:30:00 np0005473739 systemd[1]: Reloading.
Oct  7 09:30:00 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:30:00 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:30:00 np0005473739 systemd[1]: Starting Ceph OSD losetup...
Oct  7 09:30:00 np0005473739 bash[72467]: /dev/loop5: [64513]:4750466 (/var/lib/ceph-osd-2.img)
Oct  7 09:30:00 np0005473739 systemd[1]: Finished Ceph OSD losetup.
Oct  7 09:30:00 np0005473739 lvm[72468]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  7 09:30:00 np0005473739 lvm[72468]: VG ceph_vg2 finished
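Each OSD sequence above backs its loop device with a sparse file: `dd bs=1 count=0 seek=20G` writes no data and only sets the file length, so the 20 GiB image consumes almost no disk until blocks are written. The same effect can be sketched with os.truncate:

```python
import os
import tempfile

# count=0 means dd writes nothing; seek=20G just sets the length (sparse file).
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.truncate(path, 20 * (1 << 30))

st = os.stat(path)
print(st.st_size)            # 21474836480 bytes of apparent size
print(st.st_blocks * 512)    # far fewer bytes actually allocated on disk
os.remove(path)
```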
Oct  7 09:30:02 np0005473739 python3[72492]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:30:05 np0005473739 python3[72585]: ansible-ansible.legacy.dnf Invoked with name=['cephadm'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Oct  7 09:30:06 np0005473739 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  7 09:30:06 np0005473739 systemd[1]: Starting man-db-cache-update.service...
Oct  7 09:30:07 np0005473739 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  7 09:30:07 np0005473739 systemd[1]: Finished man-db-cache-update.service.
Oct  7 09:30:07 np0005473739 systemd[1]: run-rb7e9e908e3074b40b859ca2f6fedb4c9.service: Deactivated successfully.
Oct  7 09:30:07 np0005473739 python3[72700]: ansible-ansible.builtin.stat Invoked with path=/usr/sbin/cephadm follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:30:07 np0005473739 python3[72729]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm ls --no-detail _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:30:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:08 np0005473739 python3[72792]: ansible-ansible.builtin.file Invoked with path=/etc/ceph state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:30:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:09 np0005473739 python3[72818]: ansible-ansible.builtin.file Invoked with path=/home/ceph-admin/specs owner=ceph-admin group=ceph-admin mode=0755 state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:30:09 np0005473739 python3[72896]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:30:10 np0005473739 python3[72969]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759843809.5149305-33042-125748189290011/source dest=/home/ceph-admin/specs/ceph_spec.yaml owner=ceph-admin group=ceph-admin mode=0644 _original_basename=ceph_spec.yml follow=False checksum=bb83c53af4ffd926a3f1eafe26a8be437df6401f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:30:11 np0005473739 python3[73071]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:30:11 np0005473739 python3[73144]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759843810.8043375-33060-92104424116323/source dest=/home/ceph-admin/assimilate_ceph.conf owner=ceph-admin group=ceph-admin mode=0644 _original_basename=initial_ceph.conf follow=False checksum=41828f7c2442fdf376911255e33c12863fc3b1b3 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:30:11 np0005473739 python3[73194]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:30:12 np0005473739 python3[73222]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/.ssh/id_rsa.pub follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:30:12 np0005473739 python3[73250]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:30:13 np0005473739 python3[73278]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/cephadm bootstrap --skip-firewalld --skip-prepare-host --ssh-private-key /home/ceph-admin/.ssh/id_rsa --ssh-public-key /home/ceph-admin/.ssh/id_rsa.pub --ssh-user ceph-admin --allow-fqdn-hostname --output-keyring /etc/ceph/ceph.client.admin.keyring --output-config /etc/ceph/ceph.conf --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f --config /home/ceph-admin/assimilate_ceph.conf \--single-host-defaults \--skip-monitoring-stack --skip-dashboard --mon-ip 192.168.122.100#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:30:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:13 np0005473739 systemd[1]: Created slice User Slice of UID 42477.
Oct  7 09:30:13 np0005473739 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  7 09:30:13 np0005473739 systemd-logind[801]: New session 20 of user ceph-admin.
Oct  7 09:30:13 np0005473739 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  7 09:30:13 np0005473739 systemd[1]: Starting User Manager for UID 42477...
Oct  7 09:30:13 np0005473739 systemd[73299]: Queued start job for default target Main User Target.
Oct  7 09:30:13 np0005473739 systemd[73299]: Created slice User Application Slice.
Oct  7 09:30:13 np0005473739 systemd[73299]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  7 09:30:13 np0005473739 systemd[73299]: Started Daily Cleanup of User's Temporary Directories.
Oct  7 09:30:13 np0005473739 systemd[73299]: Reached target Paths.
Oct  7 09:30:13 np0005473739 systemd[73299]: Reached target Timers.
Oct  7 09:30:13 np0005473739 systemd[73299]: Starting D-Bus User Message Bus Socket...
Oct  7 09:30:13 np0005473739 systemd[73299]: Starting Create User's Volatile Files and Directories...
Oct  7 09:30:13 np0005473739 systemd[73299]: Listening on D-Bus User Message Bus Socket.
Oct  7 09:30:13 np0005473739 systemd[73299]: Reached target Sockets.
Oct  7 09:30:13 np0005473739 systemd[73299]: Finished Create User's Volatile Files and Directories.
Oct  7 09:30:13 np0005473739 systemd[73299]: Reached target Basic System.
Oct  7 09:30:13 np0005473739 systemd[73299]: Reached target Main User Target.
Oct  7 09:30:13 np0005473739 systemd[73299]: Startup finished in 193ms.
Oct  7 09:30:13 np0005473739 systemd[1]: Started User Manager for UID 42477.
Oct  7 09:30:13 np0005473739 systemd[1]: Started Session 20 of User ceph-admin.
Oct  7 09:30:13 np0005473739 systemd[1]: session-20.scope: Deactivated successfully.
Oct  7 09:30:13 np0005473739 systemd-logind[801]: Session 20 logged out. Waiting for processes to exit.
Oct  7 09:30:13 np0005473739 systemd-logind[801]: Removed session 20.
Oct  7 09:30:15 np0005473739 systemd[1]: var-lib-containers-storage-overlay-compat2712871474-merged.mount: Deactivated successfully.
Oct  7 09:30:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-compat2712871474-lower\x2dmapped.mount: Deactivated successfully.
Oct  7 09:30:23 np0005473739 systemd[1]: Stopping User Manager for UID 42477...
Oct  7 09:30:23 np0005473739 systemd[73299]: Activating special unit Exit the Session...
Oct  7 09:30:23 np0005473739 systemd[73299]: Stopped target Main User Target.
Oct  7 09:30:23 np0005473739 systemd[73299]: Stopped target Basic System.
Oct  7 09:30:23 np0005473739 systemd[73299]: Stopped target Paths.
Oct  7 09:30:23 np0005473739 systemd[73299]: Stopped target Sockets.
Oct  7 09:30:23 np0005473739 systemd[73299]: Stopped target Timers.
Oct  7 09:30:23 np0005473739 systemd[73299]: Stopped Mark boot as successful after the user session has run 2 minutes.
Oct  7 09:30:23 np0005473739 systemd[73299]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  7 09:30:23 np0005473739 systemd[73299]: Closed D-Bus User Message Bus Socket.
Oct  7 09:30:23 np0005473739 systemd[73299]: Stopped Create User's Volatile Files and Directories.
Oct  7 09:30:23 np0005473739 systemd[73299]: Removed slice User Application Slice.
Oct  7 09:30:23 np0005473739 systemd[73299]: Reached target Shutdown.
Oct  7 09:30:23 np0005473739 systemd[73299]: Finished Exit the Session.
Oct  7 09:30:23 np0005473739 systemd[73299]: Reached target Exit the Session.
Oct  7 09:30:23 np0005473739 systemd[1]: user@42477.service: Deactivated successfully.
Oct  7 09:30:23 np0005473739 systemd[1]: Stopped User Manager for UID 42477.
Oct  7 09:30:24 np0005473739 systemd[1]: Stopping User Runtime Directory /run/user/42477...
Oct  7 09:30:24 np0005473739 systemd[1]: run-user-42477.mount: Deactivated successfully.
Oct  7 09:30:24 np0005473739 systemd[1]: user-runtime-dir@42477.service: Deactivated successfully.
Oct  7 09:30:24 np0005473739 systemd[1]: Stopped User Runtime Directory /run/user/42477.
Oct  7 09:30:24 np0005473739 systemd[1]: Removed slice User Slice of UID 42477.
Oct  7 09:30:28 np0005473739 podman[73353]: 2025-10-07 13:30:28.248552091 +0000 UTC m=+14.338553626 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:28 np0005473739 podman[73414]: 2025-10-07 13:30:28.334595699 +0000 UTC m=+0.058729961 container create 6a4c6d1670d91919423d8835b10483bc9f83ed23f5d9993482ac32b3532ceddf (image=quay.io/ceph/ceph:v18, name=kind_meitner, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:30:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay-volatile\x2dcheck2547961000-merged.mount: Deactivated successfully.
Oct  7 09:30:28 np0005473739 systemd[1]: Created slice Virtual Machine and Container Slice.
Oct  7 09:30:28 np0005473739 systemd[1]: Started libpod-conmon-6a4c6d1670d91919423d8835b10483bc9f83ed23f5d9993482ac32b3532ceddf.scope.
Oct  7 09:30:28 np0005473739 podman[73414]: 2025-10-07 13:30:28.306152994 +0000 UTC m=+0.030287296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:28 np0005473739 podman[73414]: 2025-10-07 13:30:28.450488326 +0000 UTC m=+0.174622558 container init 6a4c6d1670d91919423d8835b10483bc9f83ed23f5d9993482ac32b3532ceddf (image=quay.io/ceph/ceph:v18, name=kind_meitner, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:30:28 np0005473739 podman[73414]: 2025-10-07 13:30:28.458092563 +0000 UTC m=+0.182226785 container start 6a4c6d1670d91919423d8835b10483bc9f83ed23f5d9993482ac32b3532ceddf (image=quay.io/ceph/ceph:v18, name=kind_meitner, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 09:30:28 np0005473739 podman[73414]: 2025-10-07 13:30:28.462227757 +0000 UTC m=+0.186361999 container attach 6a4c6d1670d91919423d8835b10483bc9f83ed23f5d9993482ac32b3532ceddf (image=quay.io/ceph/ceph:v18, name=kind_meitner, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:30:28 np0005473739 kind_meitner[73430]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct  7 09:30:28 np0005473739 systemd[1]: libpod-6a4c6d1670d91919423d8835b10483bc9f83ed23f5d9993482ac32b3532ceddf.scope: Deactivated successfully.
Oct  7 09:30:28 np0005473739 podman[73414]: 2025-10-07 13:30:28.800505534 +0000 UTC m=+0.524639786 container died 6a4c6d1670d91919423d8835b10483bc9f83ed23f5d9993482ac32b3532ceddf (image=quay.io/ceph/ceph:v18, name=kind_meitner, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 09:30:28 np0005473739 podman[73414]: 2025-10-07 13:30:28.860576937 +0000 UTC m=+0.584711159 container remove 6a4c6d1670d91919423d8835b10483bc9f83ed23f5d9993482ac32b3532ceddf (image=quay.io/ceph/ceph:v18, name=kind_meitner, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:30:28 np0005473739 systemd[1]: libpod-conmon-6a4c6d1670d91919423d8835b10483bc9f83ed23f5d9993482ac32b3532ceddf.scope: Deactivated successfully.
Oct  7 09:30:28 np0005473739 podman[73448]: 2025-10-07 13:30:28.938679515 +0000 UTC m=+0.052269970 container create 9b0248d6904a0f8a7298af41d2ebd52c902f7cd60f3cb783bac5de33a0abb821 (image=quay.io/ceph/ceph:v18, name=pedantic_colden, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:30:28 np0005473739 systemd[1]: Started libpod-conmon-9b0248d6904a0f8a7298af41d2ebd52c902f7cd60f3cb783bac5de33a0abb821.scope.
Oct  7 09:30:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:29 np0005473739 podman[73448]: 2025-10-07 13:30:28.917445215 +0000 UTC m=+0.031035670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:29 np0005473739 podman[73448]: 2025-10-07 13:30:29.016545166 +0000 UTC m=+0.130135671 container init 9b0248d6904a0f8a7298af41d2ebd52c902f7cd60f3cb783bac5de33a0abb821 (image=quay.io/ceph/ceph:v18, name=pedantic_colden, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 09:30:29 np0005473739 podman[73448]: 2025-10-07 13:30:29.025897291 +0000 UTC m=+0.139487716 container start 9b0248d6904a0f8a7298af41d2ebd52c902f7cd60f3cb783bac5de33a0abb821 (image=quay.io/ceph/ceph:v18, name=pedantic_colden, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 09:30:29 np0005473739 pedantic_colden[73464]: 167 167
Oct  7 09:30:29 np0005473739 podman[73448]: 2025-10-07 13:30:29.030305144 +0000 UTC m=+0.143895659 container attach 9b0248d6904a0f8a7298af41d2ebd52c902f7cd60f3cb783bac5de33a0abb821 (image=quay.io/ceph/ceph:v18, name=pedantic_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:30:29 np0005473739 systemd[1]: libpod-9b0248d6904a0f8a7298af41d2ebd52c902f7cd60f3cb783bac5de33a0abb821.scope: Deactivated successfully.
Oct  7 09:30:29 np0005473739 podman[73448]: 2025-10-07 13:30:29.033299561 +0000 UTC m=+0.146890076 container died 9b0248d6904a0f8a7298af41d2ebd52c902f7cd60f3cb783bac5de33a0abb821 (image=quay.io/ceph/ceph:v18, name=pedantic_colden, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:30:29 np0005473739 podman[73448]: 2025-10-07 13:30:29.08094507 +0000 UTC m=+0.194535495 container remove 9b0248d6904a0f8a7298af41d2ebd52c902f7cd60f3cb783bac5de33a0abb821 (image=quay.io/ceph/ceph:v18, name=pedantic_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:30:29 np0005473739 systemd[1]: libpod-conmon-9b0248d6904a0f8a7298af41d2ebd52c902f7cd60f3cb783bac5de33a0abb821.scope: Deactivated successfully.
Oct  7 09:30:29 np0005473739 podman[73480]: 2025-10-07 13:30:29.190666016 +0000 UTC m=+0.072639682 container create 17441e783e5eca4932ba095925f5dea4ea037d3d084ef5e713d33ebdfb8563e6 (image=quay.io/ceph/ceph:v18, name=elastic_benz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:30:29 np0005473739 systemd[1]: Started libpod-conmon-17441e783e5eca4932ba095925f5dea4ea037d3d084ef5e713d33ebdfb8563e6.scope.
Oct  7 09:30:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f9ab3e133bc1f398cc915fd12720c1df84f2769a030a72ee8f56f88fa62edcd7-merged.mount: Deactivated successfully.
Oct  7 09:30:29 np0005473739 podman[73480]: 2025-10-07 13:30:29.161492517 +0000 UTC m=+0.043466253 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:29 np0005473739 podman[73480]: 2025-10-07 13:30:29.290798531 +0000 UTC m=+0.172772257 container init 17441e783e5eca4932ba095925f5dea4ea037d3d084ef5e713d33ebdfb8563e6 (image=quay.io/ceph/ceph:v18, name=elastic_benz, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 09:30:29 np0005473739 podman[73480]: 2025-10-07 13:30:29.301919093 +0000 UTC m=+0.183892769 container start 17441e783e5eca4932ba095925f5dea4ea037d3d084ef5e713d33ebdfb8563e6 (image=quay.io/ceph/ceph:v18, name=elastic_benz, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 09:30:29 np0005473739 podman[73480]: 2025-10-07 13:30:29.307136313 +0000 UTC m=+0.189109979 container attach 17441e783e5eca4932ba095925f5dea4ea037d3d084ef5e713d33ebdfb8563e6 (image=quay.io/ceph/ceph:v18, name=elastic_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:30:29 np0005473739 elastic_benz[73496]: AQD1FeVoENRvFBAACz1bWkqrxzfqffcZWjSTNw==
Oct  7 09:30:29 np0005473739 systemd[1]: libpod-17441e783e5eca4932ba095925f5dea4ea037d3d084ef5e713d33ebdfb8563e6.scope: Deactivated successfully.
Oct  7 09:30:29 np0005473739 podman[73480]: 2025-10-07 13:30:29.349587792 +0000 UTC m=+0.231561458 container died 17441e783e5eca4932ba095925f5dea4ea037d3d084ef5e713d33ebdfb8563e6 (image=quay.io/ceph/ceph:v18, name=elastic_benz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:30:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9fb24c761f8149bafa519938f3b7bef4b0e1ee6d800cb8291743e6983d16b359-merged.mount: Deactivated successfully.
Oct  7 09:30:29 np0005473739 podman[73480]: 2025-10-07 13:30:29.413339434 +0000 UTC m=+0.295313080 container remove 17441e783e5eca4932ba095925f5dea4ea037d3d084ef5e713d33ebdfb8563e6 (image=quay.io/ceph/ceph:v18, name=elastic_benz, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:30:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:29 np0005473739 systemd[1]: libpod-conmon-17441e783e5eca4932ba095925f5dea4ea037d3d084ef5e713d33ebdfb8563e6.scope: Deactivated successfully.
Oct  7 09:30:29 np0005473739 podman[73515]: 2025-10-07 13:30:29.506224974 +0000 UTC m=+0.065578843 container create 6f3c8988e9a6cc835c9ec703b2b2de54c232774fa32fb3358cc7ce82f1f382dc (image=quay.io/ceph/ceph:v18, name=trusting_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 09:30:29 np0005473739 systemd[1]: Started libpod-conmon-6f3c8988e9a6cc835c9ec703b2b2de54c232774fa32fb3358cc7ce82f1f382dc.scope.
Oct  7 09:30:29 np0005473739 podman[73515]: 2025-10-07 13:30:29.48181187 +0000 UTC m=+0.041165769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:29 np0005473739 podman[73515]: 2025-10-07 13:30:29.614408441 +0000 UTC m=+0.173762340 container init 6f3c8988e9a6cc835c9ec703b2b2de54c232774fa32fb3358cc7ce82f1f382dc (image=quay.io/ceph/ceph:v18, name=trusting_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:30:29 np0005473739 podman[73515]: 2025-10-07 13:30:29.621817911 +0000 UTC m=+0.181171800 container start 6f3c8988e9a6cc835c9ec703b2b2de54c232774fa32fb3358cc7ce82f1f382dc (image=quay.io/ceph/ceph:v18, name=trusting_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 09:30:29 np0005473739 podman[73515]: 2025-10-07 13:30:29.626460063 +0000 UTC m=+0.185813962 container attach 6f3c8988e9a6cc835c9ec703b2b2de54c232774fa32fb3358cc7ce82f1f382dc (image=quay.io/ceph/ceph:v18, name=trusting_shamir, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct  7 09:30:29 np0005473739 trusting_shamir[73531]: AQD1FeVoaGi8JhAAK2vRe4gALyaiS4x8rnIzdQ==
Oct  7 09:30:29 np0005473739 systemd[1]: libpod-6f3c8988e9a6cc835c9ec703b2b2de54c232774fa32fb3358cc7ce82f1f382dc.scope: Deactivated successfully.
Oct  7 09:30:29 np0005473739 podman[73515]: 2025-10-07 13:30:29.655339151 +0000 UTC m=+0.214693040 container died 6f3c8988e9a6cc835c9ec703b2b2de54c232774fa32fb3358cc7ce82f1f382dc (image=quay.io/ceph/ceph:v18, name=trusting_shamir, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:30:29 np0005473739 podman[73515]: 2025-10-07 13:30:29.706212605 +0000 UTC m=+0.265566474 container remove 6f3c8988e9a6cc835c9ec703b2b2de54c232774fa32fb3358cc7ce82f1f382dc (image=quay.io/ceph/ceph:v18, name=trusting_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 09:30:29 np0005473739 systemd[1]: libpod-conmon-6f3c8988e9a6cc835c9ec703b2b2de54c232774fa32fb3358cc7ce82f1f382dc.scope: Deactivated successfully.
Oct  7 09:30:29 np0005473739 podman[73550]: 2025-10-07 13:30:29.780448728 +0000 UTC m=+0.050085989 container create e0736af26911d51817c988773efb6ab5e7414f0e858ceaf07447752e46be2bf6 (image=quay.io/ceph/ceph:v18, name=modest_bardeen, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 09:30:29 np0005473739 systemd[1]: Started libpod-conmon-e0736af26911d51817c988773efb6ab5e7414f0e858ceaf07447752e46be2bf6.scope.
Oct  7 09:30:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:29 np0005473739 podman[73550]: 2025-10-07 13:30:29.759873409 +0000 UTC m=+0.029510700 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:29 np0005473739 podman[73550]: 2025-10-07 13:30:29.866313519 +0000 UTC m=+0.135950850 container init e0736af26911d51817c988773efb6ab5e7414f0e858ceaf07447752e46be2bf6 (image=quay.io/ceph/ceph:v18, name=modest_bardeen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:30:29 np0005473739 podman[73550]: 2025-10-07 13:30:29.873765101 +0000 UTC m=+0.143402382 container start e0736af26911d51817c988773efb6ab5e7414f0e858ceaf07447752e46be2bf6 (image=quay.io/ceph/ceph:v18, name=modest_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 09:30:29 np0005473739 podman[73550]: 2025-10-07 13:30:29.87835221 +0000 UTC m=+0.147989501 container attach e0736af26911d51817c988773efb6ab5e7414f0e858ceaf07447752e46be2bf6 (image=quay.io/ceph/ceph:v18, name=modest_bardeen, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:30:29 np0005473739 modest_bardeen[73566]: AQD1FeVo6pg5NRAAy+B/mpnFk5wHVBc+/H2aOA==
Oct  7 09:30:29 np0005473739 systemd[1]: libpod-e0736af26911d51817c988773efb6ab5e7414f0e858ceaf07447752e46be2bf6.scope: Deactivated successfully.
Oct  7 09:30:29 np0005473739 podman[73550]: 2025-10-07 13:30:29.896410267 +0000 UTC m=+0.166047598 container died e0736af26911d51817c988773efb6ab5e7414f0e858ceaf07447752e46be2bf6 (image=quay.io/ceph/ceph:v18, name=modest_bardeen, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 09:30:29 np0005473739 podman[73550]: 2025-10-07 13:30:29.944498901 +0000 UTC m=+0.214136152 container remove e0736af26911d51817c988773efb6ab5e7414f0e858ceaf07447752e46be2bf6 (image=quay.io/ceph/ceph:v18, name=modest_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True)
Oct  7 09:30:29 np0005473739 systemd[1]: libpod-conmon-e0736af26911d51817c988773efb6ab5e7414f0e858ceaf07447752e46be2bf6.scope: Deactivated successfully.
Oct  7 09:30:30 np0005473739 podman[73587]: 2025-10-07 13:30:30.000066297 +0000 UTC m=+0.034644187 container create 0fb9a35434e0a34bd8ef37f7ca1ebdace3902648e55ea43d7fb6964da34e463e (image=quay.io/ceph/ceph:v18, name=unruffled_tharp, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 09:30:30 np0005473739 systemd[1]: Started libpod-conmon-0fb9a35434e0a34bd8ef37f7ca1ebdace3902648e55ea43d7fb6964da34e463e.scope.
Oct  7 09:30:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/515d453d49001b00546ec672f3f7e2bbef591f98402b9bf4b3dbd2d409ccb882/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:30 np0005473739 podman[73587]: 2025-10-07 13:30:30.066896779 +0000 UTC m=+0.101474639 container init 0fb9a35434e0a34bd8ef37f7ca1ebdace3902648e55ea43d7fb6964da34e463e (image=quay.io/ceph/ceph:v18, name=unruffled_tharp, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 09:30:30 np0005473739 podman[73587]: 2025-10-07 13:30:30.076334106 +0000 UTC m=+0.110911966 container start 0fb9a35434e0a34bd8ef37f7ca1ebdace3902648e55ea43d7fb6964da34e463e (image=quay.io/ceph/ceph:v18, name=unruffled_tharp, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:30:30 np0005473739 podman[73587]: 2025-10-07 13:30:30.080235943 +0000 UTC m=+0.114813823 container attach 0fb9a35434e0a34bd8ef37f7ca1ebdace3902648e55ea43d7fb6964da34e463e (image=quay.io/ceph/ceph:v18, name=unruffled_tharp, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:30:30 np0005473739 podman[73587]: 2025-10-07 13:30:29.98447569 +0000 UTC m=+0.019053560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:30 np0005473739 unruffled_tharp[73603]: /usr/bin/monmaptool: monmap file /tmp/monmap
Oct  7 09:30:30 np0005473739 unruffled_tharp[73603]: setting min_mon_release = pacific
Oct  7 09:30:30 np0005473739 unruffled_tharp[73603]: /usr/bin/monmaptool: set fsid to 82044f27-a8da-5b2a-a297-ff6afc620e1f
Oct  7 09:30:30 np0005473739 unruffled_tharp[73603]: /usr/bin/monmaptool: writing epoch 0 to /tmp/monmap (1 monitors)
Oct  7 09:30:30 np0005473739 systemd[1]: libpod-0fb9a35434e0a34bd8ef37f7ca1ebdace3902648e55ea43d7fb6964da34e463e.scope: Deactivated successfully.
Oct  7 09:30:30 np0005473739 podman[73587]: 2025-10-07 13:30:30.130754795 +0000 UTC m=+0.165332655 container died 0fb9a35434e0a34bd8ef37f7ca1ebdace3902648e55ea43d7fb6964da34e463e (image=quay.io/ceph/ceph:v18, name=unruffled_tharp, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:30:30 np0005473739 podman[73587]: 2025-10-07 13:30:30.182675963 +0000 UTC m=+0.217253843 container remove 0fb9a35434e0a34bd8ef37f7ca1ebdace3902648e55ea43d7fb6964da34e463e (image=quay.io/ceph/ceph:v18, name=unruffled_tharp, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 09:30:30 np0005473739 systemd[1]: libpod-conmon-0fb9a35434e0a34bd8ef37f7ca1ebdace3902648e55ea43d7fb6964da34e463e.scope: Deactivated successfully.
Oct  7 09:30:30 np0005473739 podman[73621]: 2025-10-07 13:30:30.258985723 +0000 UTC m=+0.048826738 container create 83bc16d382b588a9edbf7c367fe1bf1ef8bf6830882269c878a0801f27c60a88 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:30:30 np0005473739 systemd[1]: Started libpod-conmon-83bc16d382b588a9edbf7c367fe1bf1ef8bf6830882269c878a0801f27c60a88.scope.
Oct  7 09:30:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9e7e21fa686a371ac7193dd458a2ddaab41da46ed69755fcfff6f8364f8819/merged/tmp/keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9e7e21fa686a371ac7193dd458a2ddaab41da46ed69755fcfff6f8364f8819/merged/tmp/monmap supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9e7e21fa686a371ac7193dd458a2ddaab41da46ed69755fcfff6f8364f8819/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/af9e7e21fa686a371ac7193dd458a2ddaab41da46ed69755fcfff6f8364f8819/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:30 np0005473739 podman[73621]: 2025-10-07 13:30:30.322043453 +0000 UTC m=+0.111884448 container init 83bc16d382b588a9edbf7c367fe1bf1ef8bf6830882269c878a0801f27c60a88 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 09:30:30 np0005473739 podman[73621]: 2025-10-07 13:30:30.327301824 +0000 UTC m=+0.117142819 container start 83bc16d382b588a9edbf7c367fe1bf1ef8bf6830882269c878a0801f27c60a88 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 09:30:30 np0005473739 podman[73621]: 2025-10-07 13:30:30.33117353 +0000 UTC m=+0.121014545 container attach 83bc16d382b588a9edbf7c367fe1bf1ef8bf6830882269c878a0801f27c60a88 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:30:30 np0005473739 podman[73621]: 2025-10-07 13:30:30.23855738 +0000 UTC m=+0.028398415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:30 np0005473739 systemd[1]: libpod-83bc16d382b588a9edbf7c367fe1bf1ef8bf6830882269c878a0801f27c60a88.scope: Deactivated successfully.
Oct  7 09:30:30 np0005473739 podman[73621]: 2025-10-07 13:30:30.421406313 +0000 UTC m=+0.211247318 container died 83bc16d382b588a9edbf7c367fe1bf1ef8bf6830882269c878a0801f27c60a88 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:30:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-af9e7e21fa686a371ac7193dd458a2ddaab41da46ed69755fcfff6f8364f8819-merged.mount: Deactivated successfully.
Oct  7 09:30:30 np0005473739 podman[73621]: 2025-10-07 13:30:30.463521972 +0000 UTC m=+0.253362967 container remove 83bc16d382b588a9edbf7c367fe1bf1ef8bf6830882269c878a0801f27c60a88 (image=quay.io/ceph/ceph:v18, name=hardcore_lewin, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:30:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:30 np0005473739 systemd[1]: libpod-conmon-83bc16d382b588a9edbf7c367fe1bf1ef8bf6830882269c878a0801f27c60a88.scope: Deactivated successfully.
Oct  7 09:30:30 np0005473739 systemd[1]: Reloading.
Oct  7 09:30:30 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:30:30 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:30:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:30 np0005473739 systemd[1]: Reloading.
Oct  7 09:30:30 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:30:30 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:30:31 np0005473739 systemd[1]: Reached target All Ceph clusters and services.
Oct  7 09:30:31 np0005473739 systemd[1]: Reloading.
Oct  7 09:30:31 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:30:31 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:30:31 np0005473739 systemd[1]: Reached target Ceph cluster 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:30:31 np0005473739 systemd[1]: Reloading.
Oct  7 09:30:31 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:30:31 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:30:31 np0005473739 systemd[1]: Reloading.
Oct  7 09:30:31 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:30:31 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:30:31 np0005473739 systemd[1]: Created slice Slice /system/ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:30:31 np0005473739 systemd[1]: Reached target System Time Set.
Oct  7 09:30:31 np0005473739 systemd[1]: Reached target System Time Synchronized.
Oct  7 09:30:31 np0005473739 systemd[1]: Starting Ceph mon.compute-0 for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:30:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:32 np0005473739 podman[73916]: 2025-10-07 13:30:32.116773484 +0000 UTC m=+0.044744176 container create 0189e479bf060902968b68b579aee706da59c5d7a13b3f8d23bb864038edebde (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 09:30:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d723efdf03c9054adcf3947113fbdbfd77cd8ef792269d0eb8b80af3de960d6c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d723efdf03c9054adcf3947113fbdbfd77cd8ef792269d0eb8b80af3de960d6c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d723efdf03c9054adcf3947113fbdbfd77cd8ef792269d0eb8b80af3de960d6c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d723efdf03c9054adcf3947113fbdbfd77cd8ef792269d0eb8b80af3de960d6c/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:32 np0005473739 podman[73916]: 2025-10-07 13:30:32.187316847 +0000 UTC m=+0.115287549 container init 0189e479bf060902968b68b579aee706da59c5d7a13b3f8d23bb864038edebde (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 09:30:32 np0005473739 podman[73916]: 2025-10-07 13:30:32.096420973 +0000 UTC m=+0.024391685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:32 np0005473739 podman[73916]: 2025-10-07 13:30:32.194965886 +0000 UTC m=+0.122936578 container start 0189e479bf060902968b68b579aee706da59c5d7a13b3f8d23bb864038edebde (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 09:30:32 np0005473739 bash[73916]: 0189e479bf060902968b68b579aee706da59c5d7a13b3f8d23bb864038edebde
Oct  7 09:30:32 np0005473739 systemd[1]: Started Ceph mon.compute-0 for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: set uid:gid to 167:167 (ceph:ceph)
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: pidfile_write: ignore empty --pid-file
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: load: jerasure load: lrc 
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: RocksDB version: 7.9.2
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Git sha 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: DB SUMMARY
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: DB Session ID:  FS223EQ6HR1KXK0OIM9Z
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: CURRENT file:  CURRENT
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: IDENTITY file:  IDENTITY
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: MANIFEST file:  MANIFEST-000005 size: 59 Bytes
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 0, files: 
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000004.log size: 807 ; 
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                         Options.error_if_exists: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                       Options.create_if_missing: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                         Options.paranoid_checks: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                                     Options.env: 0x5599f5dbac40
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                                Options.info_log: 0x5599f7b76e80
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                Options.max_file_opening_threads: 16
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                              Options.statistics: (nil)
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                               Options.use_fsync: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                       Options.max_log_file_size: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                         Options.allow_fallocate: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                        Options.use_direct_reads: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:          Options.create_missing_column_families: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                              Options.db_log_dir: 
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                                 Options.wal_dir: 
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                   Options.advise_random_on_open: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                    Options.write_buffer_manager: 0x5599f7b86b40
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                            Options.rate_limiter: (nil)
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                  Options.unordered_write: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                               Options.row_cache: None
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                              Options.wal_filter: None
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.allow_ingest_behind: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.two_write_queues: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.manual_wal_flush: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.wal_compression: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.atomic_flush: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                 Options.log_readahead_size: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.allow_data_in_errors: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.db_host_id: __hostname__
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.max_background_jobs: 2
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.max_background_compactions: -1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.max_subcompactions: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.max_total_wal_size: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                          Options.max_open_files: -1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                          Options.bytes_per_sync: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:       Options.compaction_readahead_size: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                  Options.max_background_flushes: -1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Compression algorithms supported:
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: #011kZSTD supported: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: #011kXpressCompression supported: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: #011kBZip2Compression supported: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: #011kLZ4Compression supported: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: #011kZlibCompression supported: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: #011kSnappyCompression supported: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:           Options.merge_operator: 
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:        Options.compaction_filter: None
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x5599f7b76a80)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5599f7b6f1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:        Options.write_buffer_size: 33554432
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:  Options.max_write_buffer_number: 2
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:          Options.compression: NoCompression
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.num_levels: 7
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000005 succeeded,manifest_file_number is 5, next_file_number is 7, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e373f884-c326-49b5-81e6-2b809f3b2d39
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843832238236, "job": 1, "event": "recovery_started", "wal_files": [4]}
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #4 mode 2
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843832240225, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 8, "file_size": 1944, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 1, "largest_seqno": 5, "table_properties": {"data_size": 819, "index_size": 31, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 696, "raw_average_value_size": 139, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "FS223EQ6HR1KXK0OIM9Z", "orig_file_number": 8, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843832240315, "job": 1, "event": "recovery_finished"}
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: [db/version_set.cc:5047] Creating manifest 10
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000004.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x5599f7b98e00
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: DB pointer 0x5599f7c22000
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      1/0    1.90 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Sum      1/0    1.90 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.9      0.00              0.00         1    0.002       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.16 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5599f7b6f1f0#2 capacity: 512.00 MB usage: 0.22 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(1,0.11 KB,2.08616e-05%) IndexBlock(1,0.11 KB,2.08616e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@-1(???) e0 preinit fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@-1(probing) e0  my rank is now 0 (was -1)
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(probing) e0 win_standalone_election
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: paxos.0).electionLogic(0) init, first boot, initializing epoch at 1 
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(electing) e0 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: paxos.0).electionLogic(2) init, last seen epoch 2
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mgrc update_daemon_metadata mon.compute-0 metadata {addrs=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0],arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,ceph_version_when_created=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),compression_algorithms=none, snappy, zlib, zstd, lz4,container_hostname=compute-0,container_image=quay.io/ceph/ceph:v18,cpu=AMD EPYC-Rome Processor,created_at=2025-10-07T13:30:30.361783Z,device_ids=,device_paths=vda=/dev/disk/by-path/pci-0000:00:04.0,devices=vda,distro=centos,distro_description=CentOS Stream 9,distro_version=9,hostname=compute-0,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864108,os=Linux}
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 create_pending setting full_ratio = 0.95
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 do_prune osdmap full prune enabled
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout,12=octopus ondisk layout,13=pacific ondisk layout,14=quincy ondisk layout,15=reef ondisk layout}
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).mds e1 new map
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 1,1
default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1

No filesystems configured
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: log_channel(cluster) log [DBG] : fsmap 
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e0 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e1 e1: 0 total, 0 up, 0 in
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mkfs 82044f27-a8da-5b2a-a297-ff6afc620e1f
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  7 09:30:32 np0005473739 podman[73937]: 2025-10-07 13:30:32.295460132 +0000 UTC m=+0.050415909 container create 49695ab71dd9aaef443d4ccf904c043858b7f99f05493a38b75f8da8b9b2020e (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  7 09:30:32 np0005473739 systemd[1]: Started libpod-conmon-49695ab71dd9aaef443d4ccf904c043858b7f99f05493a38b75f8da8b9b2020e.scope.
Oct  7 09:30:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df918b3528581fd1157cb27d034783f0043e6bd968e628cc618813ad8329946/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df918b3528581fd1157cb27d034783f0043e6bd968e628cc618813ad8329946/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6df918b3528581fd1157cb27d034783f0043e6bd968e628cc618813ad8329946/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:32 np0005473739 podman[73937]: 2025-10-07 13:30:32.276077112 +0000 UTC m=+0.031032899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:32 np0005473739 podman[73937]: 2025-10-07 13:30:32.40089795 +0000 UTC m=+0.155853767 container init 49695ab71dd9aaef443d4ccf904c043858b7f99f05493a38b75f8da8b9b2020e (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:30:32 np0005473739 podman[73937]: 2025-10-07 13:30:32.40889266 +0000 UTC m=+0.163848457 container start 49695ab71dd9aaef443d4ccf904c043858b7f99f05493a38b75f8da8b9b2020e (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct  7 09:30:32 np0005473739 podman[73937]: 2025-10-07 13:30:32.41289882 +0000 UTC m=+0.167854637 container attach 49695ab71dd9aaef443d4ccf904c043858b7f99f05493a38b75f8da8b9b2020e (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  7 09:30:32 np0005473739 ceph-mon[73936]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/184141761' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:  cluster:
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:    id:     82044f27-a8da-5b2a-a297-ff6afc620e1f
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:    health: HEALTH_OK
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]: 
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:  services:
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:    mon: 1 daemons, quorum compute-0 (age 0.531878s)
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:    mgr: no daemons active
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:    osd: 0 osds: 0 up, 0 in
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]: 
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:  data:
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:    pools:   0 pools, 0 pgs
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:    objects: 0 objects, 0 B
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:    usage:   0 B used, 0 B / 0 B avail
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]:    pgs:     
Oct  7 09:30:32 np0005473739 dazzling_lehmann[73992]: 
Oct  7 09:30:32 np0005473739 systemd[1]: libpod-49695ab71dd9aaef443d4ccf904c043858b7f99f05493a38b75f8da8b9b2020e.scope: Deactivated successfully.
Oct  7 09:30:32 np0005473739 podman[74018]: 2025-10-07 13:30:32.890252257 +0000 UTC m=+0.043633390 container died 49695ab71dd9aaef443d4ccf904c043858b7f99f05493a38b75f8da8b9b2020e (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 09:30:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6df918b3528581fd1157cb27d034783f0043e6bd968e628cc618813ad8329946-merged.mount: Deactivated successfully.
Oct  7 09:30:32 np0005473739 podman[74018]: 2025-10-07 13:30:32.9346695 +0000 UTC m=+0.088050553 container remove 49695ab71dd9aaef443d4ccf904c043858b7f99f05493a38b75f8da8b9b2020e (image=quay.io/ceph/ceph:v18, name=dazzling_lehmann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 09:30:32 np0005473739 systemd[1]: libpod-conmon-49695ab71dd9aaef443d4ccf904c043858b7f99f05493a38b75f8da8b9b2020e.scope: Deactivated successfully.
Oct  7 09:30:33 np0005473739 podman[74033]: 2025-10-07 13:30:33.015119996 +0000 UTC m=+0.054987859 container create 88c41b876dfc1dd41b18c7628bdbdc9557c6e080d80ac4ec4619a240d38e4e8e (image=quay.io/ceph/ceph:v18, name=gifted_bell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:30:33 np0005473739 systemd[1]: Started libpod-conmon-88c41b876dfc1dd41b18c7628bdbdc9557c6e080d80ac4ec4619a240d38e4e8e.scope.
Oct  7 09:30:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:33 np0005473739 podman[74033]: 2025-10-07 13:30:32.985597536 +0000 UTC m=+0.025465439 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98c298a2c83f099f38c041d5b4212d739fd3a221384d5d72f806576360e720f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98c298a2c83f099f38c041d5b4212d739fd3a221384d5d72f806576360e720f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98c298a2c83f099f38c041d5b4212d739fd3a221384d5d72f806576360e720f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e98c298a2c83f099f38c041d5b4212d739fd3a221384d5d72f806576360e720f/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:33 np0005473739 podman[74033]: 2025-10-07 13:30:33.113322837 +0000 UTC m=+0.153190730 container init 88c41b876dfc1dd41b18c7628bdbdc9557c6e080d80ac4ec4619a240d38e4e8e (image=quay.io/ceph/ceph:v18, name=gifted_bell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 09:30:33 np0005473739 podman[74033]: 2025-10-07 13:30:33.120071957 +0000 UTC m=+0.159939860 container start 88c41b876dfc1dd41b18c7628bdbdc9557c6e080d80ac4ec4619a240d38e4e8e (image=quay.io/ceph/ceph:v18, name=gifted_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 09:30:33 np0005473739 podman[74033]: 2025-10-07 13:30:33.125257695 +0000 UTC m=+0.165125588 container attach 88c41b876dfc1dd41b18c7628bdbdc9557c6e080d80ac4ec4619a240d38e4e8e (image=quay.io/ceph/ceph:v18, name=gifted_bell, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 09:30:33 np0005473739 ceph-mon[73936]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  7 09:30:33 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  7 09:30:33 np0005473739 ceph-mon[73936]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1560608702' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  7 09:30:33 np0005473739 ceph-mon[73936]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1560608702' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  7 09:30:33 np0005473739 gifted_bell[74050]: 
Oct  7 09:30:33 np0005473739 gifted_bell[74050]: [global]
Oct  7 09:30:33 np0005473739 gifted_bell[74050]: 	fsid = 82044f27-a8da-5b2a-a297-ff6afc620e1f
Oct  7 09:30:33 np0005473739 gifted_bell[74050]: 	mon_host = [v2:192.168.122.100:3300,v1:192.168.122.100:6789]
Oct  7 09:30:33 np0005473739 gifted_bell[74050]: 	osd_crush_chooseleaf_type = 0
Oct  7 09:30:33 np0005473739 systemd[1]: libpod-88c41b876dfc1dd41b18c7628bdbdc9557c6e080d80ac4ec4619a240d38e4e8e.scope: Deactivated successfully.
Oct  7 09:30:33 np0005473739 podman[74033]: 2025-10-07 13:30:33.555229463 +0000 UTC m=+0.595097326 container died 88c41b876dfc1dd41b18c7628bdbdc9557c6e080d80ac4ec4619a240d38e4e8e (image=quay.io/ceph/ceph:v18, name=gifted_bell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 09:30:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e98c298a2c83f099f38c041d5b4212d739fd3a221384d5d72f806576360e720f-merged.mount: Deactivated successfully.
Oct  7 09:30:33 np0005473739 podman[74033]: 2025-10-07 13:30:33.604292278 +0000 UTC m=+0.644160141 container remove 88c41b876dfc1dd41b18c7628bdbdc9557c6e080d80ac4ec4619a240d38e4e8e (image=quay.io/ceph/ceph:v18, name=gifted_bell, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 09:30:33 np0005473739 systemd[1]: libpod-conmon-88c41b876dfc1dd41b18c7628bdbdc9557c6e080d80ac4ec4619a240d38e4e8e.scope: Deactivated successfully.
Oct  7 09:30:33 np0005473739 podman[74088]: 2025-10-07 13:30:33.67481622 +0000 UTC m=+0.049087797 container create d3a3051367a04efa5fcb089fe47778ef771ef37aa8d13d35f37a0bdbd63e68b1 (image=quay.io/ceph/ceph:v18, name=affectionate_yonath, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:30:33 np0005473739 systemd[1]: Started libpod-conmon-d3a3051367a04efa5fcb089fe47778ef771ef37aa8d13d35f37a0bdbd63e68b1.scope.
Oct  7 09:30:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e14c262772af46b5a469229a0016e0e11da0630d86a8ec7b3b83c1a20d9670/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e14c262772af46b5a469229a0016e0e11da0630d86a8ec7b3b83c1a20d9670/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e14c262772af46b5a469229a0016e0e11da0630d86a8ec7b3b83c1a20d9670/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8e14c262772af46b5a469229a0016e0e11da0630d86a8ec7b3b83c1a20d9670/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:33 np0005473739 podman[74088]: 2025-10-07 13:30:33.738083436 +0000 UTC m=+0.112355043 container init d3a3051367a04efa5fcb089fe47778ef771ef37aa8d13d35f37a0bdbd63e68b1 (image=quay.io/ceph/ceph:v18, name=affectionate_yonath, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:30:33 np0005473739 podman[74088]: 2025-10-07 13:30:33.744733313 +0000 UTC m=+0.119004890 container start d3a3051367a04efa5fcb089fe47778ef771ef37aa8d13d35f37a0bdbd63e68b1 (image=quay.io/ceph/ceph:v18, name=affectionate_yonath, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 09:30:33 np0005473739 podman[74088]: 2025-10-07 13:30:33.654507209 +0000 UTC m=+0.028778836 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:33 np0005473739 podman[74088]: 2025-10-07 13:30:33.750969065 +0000 UTC m=+0.125240652 container attach d3a3051367a04efa5fcb089fe47778ef771ef37aa8d13d35f37a0bdbd63e68b1 (image=quay.io/ceph/ceph:v18, name=affectionate_yonath, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:30:34 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:30:34 np0005473739 ceph-mon[73936]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3695082754' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:30:34 np0005473739 systemd[1]: libpod-d3a3051367a04efa5fcb089fe47778ef771ef37aa8d13d35f37a0bdbd63e68b1.scope: Deactivated successfully.
Oct  7 09:30:34 np0005473739 podman[74088]: 2025-10-07 13:30:34.12149119 +0000 UTC m=+0.495762807 container died d3a3051367a04efa5fcb089fe47778ef771ef37aa8d13d35f37a0bdbd63e68b1 (image=quay.io/ceph/ceph:v18, name=affectionate_yonath, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:30:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a8e14c262772af46b5a469229a0016e0e11da0630d86a8ec7b3b83c1a20d9670-merged.mount: Deactivated successfully.
Oct  7 09:30:34 np0005473739 podman[74088]: 2025-10-07 13:30:34.175267787 +0000 UTC m=+0.549539404 container remove d3a3051367a04efa5fcb089fe47778ef771ef37aa8d13d35f37a0bdbd63e68b1 (image=quay.io/ceph/ceph:v18, name=affectionate_yonath, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 09:30:34 np0005473739 systemd[1]: libpod-conmon-d3a3051367a04efa5fcb089fe47778ef771ef37aa8d13d35f37a0bdbd63e68b1.scope: Deactivated successfully.
Oct  7 09:30:34 np0005473739 systemd[1]: Stopping Ceph mon.compute-0 for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:30:34 np0005473739 ceph-mon[73936]: from='client.? 192.168.122.100:0/1560608702' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  7 09:30:34 np0005473739 ceph-mon[73936]: from='client.? 192.168.122.100:0/1560608702' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  7 09:30:34 np0005473739 ceph-mon[73936]: received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  7 09:30:34 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  7 09:30:34 np0005473739 ceph-mon[73936]: mon.compute-0@0(leader) e1 shutdown
Oct  7 09:30:34 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0[73932]: 2025-10-07T13:30:34.380+0000 7fa9ad56b640 -1 received  signal: Terminated from /run/podman-init -- /usr/bin/ceph-mon -n mon.compute-0 -f --setuser ceph --setgroup ceph --default-log-to-file=false --default-log-to-journald=true --default-log-to-stderr=false --default-mon-cluster-log-to-file=false --default-mon-cluster-log-to-journald=true --default-mon-cluster-log-to-stderr=false  (PID: 1) UID: 0
Oct  7 09:30:34 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0[73932]: 2025-10-07T13:30:34.380+0000 7fa9ad56b640 -1 mon.compute-0@0(leader) e1 *** Got Signal Terminated ***
Oct  7 09:30:34 np0005473739 ceph-mon[73936]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  7 09:30:34 np0005473739 ceph-mon[73936]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  7 09:30:34 np0005473739 podman[74174]: 2025-10-07 13:30:34.504134348 +0000 UTC m=+0.162875496 container died 0189e479bf060902968b68b579aee706da59c5d7a13b3f8d23bb864038edebde (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:30:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d723efdf03c9054adcf3947113fbdbfd77cd8ef792269d0eb8b80af3de960d6c-merged.mount: Deactivated successfully.
Oct  7 09:30:34 np0005473739 podman[74174]: 2025-10-07 13:30:34.542376731 +0000 UTC m=+0.201117849 container remove 0189e479bf060902968b68b579aee706da59c5d7a13b3f8d23bb864038edebde (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 09:30:34 np0005473739 bash[74174]: ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0
Oct  7 09:30:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Oct  7 09:30:34 np0005473739 systemd[1]: ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@mon.compute-0.service: Deactivated successfully.
Oct  7 09:30:34 np0005473739 systemd[1]: Stopped Ceph mon.compute-0 for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:30:34 np0005473739 systemd[1]: ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@mon.compute-0.service: Consumed 1.021s CPU time.
Oct  7 09:30:34 np0005473739 systemd[1]: Starting Ceph mon.compute-0 for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:30:34 np0005473739 podman[74275]: 2025-10-07 13:30:34.993276408 +0000 UTC m=+0.063204476 container create f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 09:30:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35e68d0816e2621476e84fa054905f77410f1a18eac8484b6a1542cb8e5fed5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35e68d0816e2621476e84fa054905f77410f1a18eac8484b6a1542cb8e5fed5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35e68d0816e2621476e84fa054905f77410f1a18eac8484b6a1542cb8e5fed5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d35e68d0816e2621476e84fa054905f77410f1a18eac8484b6a1542cb8e5fed5/merged/var/lib/ceph/mon/ceph-compute-0 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:35 np0005473739 podman[74275]: 2025-10-07 13:30:35.055380897 +0000 UTC m=+0.125308995 container init f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:30:35 np0005473739 podman[74275]: 2025-10-07 13:30:34.962171046 +0000 UTC m=+0.032099204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:35 np0005473739 podman[74275]: 2025-10-07 13:30:35.060634238 +0000 UTC m=+0.130562316 container start f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:30:35 np0005473739 bash[74275]: f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63
Oct  7 09:30:35 np0005473739 systemd[1]: Started Ceph mon.compute-0 for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: set uid:gid to 167:167 (ceph:ceph)
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mon, pid 2
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: pidfile_write: ignore empty --pid-file
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: load: jerasure load: lrc 
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: RocksDB version: 7.9.2
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Git sha 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: DB SUMMARY
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: DB Session ID:  T0TL9PKSKWA4P62ZNSKG
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: CURRENT file:  CURRENT
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: IDENTITY file:  IDENTITY
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: MANIFEST file:  MANIFEST-000010 size: 179 Bytes
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: SST files in /var/lib/ceph/mon/ceph-compute-0/store.db dir, Total Num: 1, files: 000008.sst 
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-compute-0/store.db: 000009.log size: 55676 ; 
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                         Options.error_if_exists: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                       Options.create_if_missing: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                         Options.paranoid_checks: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                                     Options.env: 0x56190e0dfc40
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                                      Options.fs: PosixFileSystem
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                                Options.info_log: 0x56191014d040
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                Options.max_file_opening_threads: 16
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                              Options.statistics: (nil)
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                               Options.use_fsync: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                       Options.max_log_file_size: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                         Options.allow_fallocate: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                        Options.use_direct_reads: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:          Options.create_missing_column_families: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                              Options.db_log_dir: 
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                                 Options.wal_dir: 
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                   Options.advise_random_on_open: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                    Options.write_buffer_manager: 0x56191015cb40
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                            Options.rate_limiter: (nil)
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                  Options.unordered_write: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                               Options.row_cache: None
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                              Options.wal_filter: None
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.allow_ingest_behind: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.two_write_queues: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.manual_wal_flush: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.wal_compression: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.atomic_flush: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                 Options.log_readahead_size: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.allow_data_in_errors: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.db_host_id: __hostname__
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.max_background_jobs: 2
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.max_background_compactions: -1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.max_subcompactions: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:           Options.writable_file_max_buffer_size: 1048576
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.max_total_wal_size: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                          Options.max_open_files: -1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                          Options.bytes_per_sync: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:       Options.compaction_readahead_size: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                  Options.max_background_flushes: -1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Compression algorithms supported:
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: #011kZSTD supported: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: #011kXpressCompression supported: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: #011kBZip2Compression supported: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: #011kLZ4Compression supported: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: #011kZlibCompression supported: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: #011kSnappyCompression supported: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: /var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:           Options.merge_operator: 
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:        Options.compaction_filter: None
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x56191014cc40)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x5619101451f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:        Options.write_buffer_size: 33554432
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:  Options.max_write_buffer_number: 2
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:          Options.compression: NoCompression
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.num_levels: 7
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:        Options.min_write_buffer_number_to_merge: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:      Options.level0_file_num_compaction_trigger: 4
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                Options.max_bytes_for_level_base: 268435456
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.level_compaction_dynamic_level_bytes: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:          Options.max_bytes_for_level_multiplier: 10.000000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:/var/lib/ceph/mon/ceph-compute-0/store.db/MANIFEST-000010 succeeded,manifest_file_number is 10, next_file_number is 12, last_sequence is 5, log_number is 5,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 5
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: e373f884-c326-49b5-81e6-2b809f3b2d39
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843835099539, "job": 1, "event": "recovery_started", "wal_files": [9]}
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #9 mode 2
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843835103574, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 13, "file_size": 55257, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8, "largest_seqno": 138, "table_properties": {"data_size": 53797, "index_size": 166, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 261, "raw_key_size": 3050, "raw_average_key_size": 30, "raw_value_size": 51386, "raw_average_value_size": 508, "num_data_blocks": 9, "num_entries": 101, "num_filter_entries": 101, "num_deletions": 3, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843835, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 13, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843835103690, "job": 1, "event": "recovery_finished"}
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:5047] Creating manifest 15
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000009.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x56191016ee00
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: DB pointer 0x5619101f8000
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.0 total, 0.0 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0   55.86 KB   0.5      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Sum      2/0   55.86 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.0 total, 0.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 3.73 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 3.73 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 512.00 MB usage: 0.78 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 0 last_secs: 2.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(2,0.42 KB,8.04663e-05%) IndexBlock(2,0.36 KB,6.85453e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: starting mon.compute-0 rank 0 at public addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] at bind addrs [v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0] mon_data /var/lib/ceph/mon/ceph-compute-0 fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@-1(???) e1 preinit fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@-1(???).mds e1 new map
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@-1(???).mds e1 print_map#012e1#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: -1#012 #012No filesystems configured
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@-1(???).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@-1(???).osd e1 crush map has features 288514050185494528, adjusting msgr requires
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@-1(???).paxosservice(auth 1..2) refresh upgraded, format 0 -> 3
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@-1(probing) e1  my rank is now 0 (was -1)
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(probing) e1 win_standalone_election
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: paxos.0).electionLogic(3) init, last seen epoch 3, mid-election, bumping
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(electing) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : monmap e1: 1 mons at {compute-0=[v2:192.168.122.100:3300/0,v1:192.168.122.100:6789/0]} removed_ranks: {} disallowed_leaders: {}
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 collect_metadata vda:  no unique device id for vda: fallback method has no model nor serial
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : fsmap 
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
Oct  7 09:30:35 np0005473739 podman[74296]: 2025-10-07 13:30:35.138035924 +0000 UTC m=+0.047439434 container create efbe5b417b60032a48682565ff5ee1628c5ccb6803d3188905797dce5a0f7db0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:30:35 np0005473739 systemd[1]: Started libpod-conmon-efbe5b417b60032a48682565ff5ee1628c5ccb6803d3188905797dce5a0f7db0.scope.
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0 is new leader, mons compute-0 in quorum (ranks 0)
Oct  7 09:30:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c037954893a9306ceac93c25e4e6e260c1d96e7f269e5c402de637a9049775a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c037954893a9306ceac93c25e4e6e260c1d96e7f269e5c402de637a9049775a8/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c037954893a9306ceac93c25e4e6e260c1d96e7f269e5c402de637a9049775a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:35 np0005473739 podman[74296]: 2025-10-07 13:30:35.116904737 +0000 UTC m=+0.026308267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:35 np0005473739 podman[74296]: 2025-10-07 13:30:35.223857703 +0000 UTC m=+0.133261303 container init efbe5b417b60032a48682565ff5ee1628c5ccb6803d3188905797dce5a0f7db0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 09:30:35 np0005473739 podman[74296]: 2025-10-07 13:30:35.237245509 +0000 UTC m=+0.146649029 container start efbe5b417b60032a48682565ff5ee1628c5ccb6803d3188905797dce5a0f7db0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 09:30:35 np0005473739 podman[74296]: 2025-10-07 13:30:35.24128674 +0000 UTC m=+0.150690330 container attach efbe5b417b60032a48682565ff5ee1628c5ccb6803d3188905797dce5a0f7db0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:30:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=public_network}] v 0) v1
Oct  7 09:30:35 np0005473739 systemd[1]: libpod-efbe5b417b60032a48682565ff5ee1628c5ccb6803d3188905797dce5a0f7db0.scope: Deactivated successfully.
Oct  7 09:30:35 np0005473739 conmon[74351]: conmon efbe5b417b60032a4868 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-efbe5b417b60032a48682565ff5ee1628c5ccb6803d3188905797dce5a0f7db0.scope/container/memory.events
Oct  7 09:30:35 np0005473739 podman[74296]: 2025-10-07 13:30:35.660491837 +0000 UTC m=+0.569895347 container died efbe5b417b60032a48682565ff5ee1628c5ccb6803d3188905797dce5a0f7db0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:30:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c037954893a9306ceac93c25e4e6e260c1d96e7f269e5c402de637a9049775a8-merged.mount: Deactivated successfully.
Oct  7 09:30:35 np0005473739 podman[74296]: 2025-10-07 13:30:35.702038427 +0000 UTC m=+0.611441947 container remove efbe5b417b60032a48682565ff5ee1628c5ccb6803d3188905797dce5a0f7db0 (image=quay.io/ceph/ceph:v18, name=heuristic_dewdney, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 09:30:35 np0005473739 systemd[1]: libpod-conmon-efbe5b417b60032a48682565ff5ee1628c5ccb6803d3188905797dce5a0f7db0.scope: Deactivated successfully.
Oct  7 09:30:35 np0005473739 podman[74389]: 2025-10-07 13:30:35.772674613 +0000 UTC m=+0.045676365 container create 07d79e4281dde944a10f8997537e64611a03d05016e505d67dd5383fe2aeb4da (image=quay.io/ceph/ceph:v18, name=sweet_feynman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:30:35 np0005473739 systemd[1]: Started libpod-conmon-07d79e4281dde944a10f8997537e64611a03d05016e505d67dd5383fe2aeb4da.scope.
Oct  7 09:30:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:35 np0005473739 podman[74389]: 2025-10-07 13:30:35.755325319 +0000 UTC m=+0.028327081 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54fd7768bb9a4bef5ac21f7af04513a177ad47a80996d7aa76e1c239e8119f9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54fd7768bb9a4bef5ac21f7af04513a177ad47a80996d7aa76e1c239e8119f9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c54fd7768bb9a4bef5ac21f7af04513a177ad47a80996d7aa76e1c239e8119f9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:35 np0005473739 podman[74389]: 2025-10-07 13:30:35.867657821 +0000 UTC m=+0.140659603 container init 07d79e4281dde944a10f8997537e64611a03d05016e505d67dd5383fe2aeb4da (image=quay.io/ceph/ceph:v18, name=sweet_feynman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 09:30:35 np0005473739 podman[74389]: 2025-10-07 13:30:35.87871111 +0000 UTC m=+0.151712882 container start 07d79e4281dde944a10f8997537e64611a03d05016e505d67dd5383fe2aeb4da (image=quay.io/ceph/ceph:v18, name=sweet_feynman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 09:30:35 np0005473739 podman[74389]: 2025-10-07 13:30:35.8827109 +0000 UTC m=+0.155712732 container attach 07d79e4281dde944a10f8997537e64611a03d05016e505d67dd5383fe2aeb4da (image=quay.io/ceph/ceph:v18, name=sweet_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 09:30:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=cluster_network}] v 0) v1
Oct  7 09:30:36 np0005473739 systemd[1]: libpod-07d79e4281dde944a10f8997537e64611a03d05016e505d67dd5383fe2aeb4da.scope: Deactivated successfully.
Oct  7 09:30:36 np0005473739 podman[74389]: 2025-10-07 13:30:36.301338018 +0000 UTC m=+0.574339790 container died 07d79e4281dde944a10f8997537e64611a03d05016e505d67dd5383fe2aeb4da (image=quay.io/ceph/ceph:v18, name=sweet_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:30:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c54fd7768bb9a4bef5ac21f7af04513a177ad47a80996d7aa76e1c239e8119f9-merged.mount: Deactivated successfully.
Oct  7 09:30:36 np0005473739 podman[74389]: 2025-10-07 13:30:36.35305414 +0000 UTC m=+0.626055872 container remove 07d79e4281dde944a10f8997537e64611a03d05016e505d67dd5383fe2aeb4da (image=quay.io/ceph/ceph:v18, name=sweet_feynman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 09:30:36 np0005473739 systemd[1]: libpod-conmon-07d79e4281dde944a10f8997537e64611a03d05016e505d67dd5383fe2aeb4da.scope: Deactivated successfully.
Oct  7 09:30:36 np0005473739 systemd[1]: Reloading.
Oct  7 09:30:36 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:30:36 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:30:36 np0005473739 systemd[1]: Reloading.
Oct  7 09:30:36 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:30:36 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:30:36 np0005473739 systemd[1]: Starting Ceph mgr.compute-0.kdyrcd for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:30:37 np0005473739 podman[74568]: 2025-10-07 13:30:37.186014736 +0000 UTC m=+0.051601038 container create a97aec2b1c22097e614a49c173b81a69133b6fe362149c1917c67a46280273b5 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 09:30:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a356964d72584adddba54d576149a258865f2e173d389a11050f4292b347ed21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a356964d72584adddba54d576149a258865f2e173d389a11050f4292b347ed21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a356964d72584adddba54d576149a258865f2e173d389a11050f4292b347ed21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a356964d72584adddba54d576149a258865f2e173d389a11050f4292b347ed21/merged/var/lib/ceph/mgr/ceph-compute-0.kdyrcd supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:37 np0005473739 podman[74568]: 2025-10-07 13:30:37.159333699 +0000 UTC m=+0.024920101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:37 np0005473739 podman[74568]: 2025-10-07 13:30:37.252448885 +0000 UTC m=+0.118035287 container init a97aec2b1c22097e614a49c173b81a69133b6fe362149c1917c67a46280273b5 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 09:30:37 np0005473739 podman[74568]: 2025-10-07 13:30:37.257229371 +0000 UTC m=+0.122815713 container start a97aec2b1c22097e614a49c173b81a69133b6fe362149c1917c67a46280273b5 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  7 09:30:37 np0005473739 bash[74568]: a97aec2b1c22097e614a49c173b81a69133b6fe362149c1917c67a46280273b5
Oct  7 09:30:37 np0005473739 systemd[1]: Started Ceph mgr.compute-0.kdyrcd for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:30:37 np0005473739 ceph-mgr[74587]: set uid:gid to 167:167 (ceph:ceph)
Oct  7 09:30:37 np0005473739 ceph-mgr[74587]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  7 09:30:37 np0005473739 ceph-mgr[74587]: pidfile_write: ignore empty --pid-file
Oct  7 09:30:37 np0005473739 podman[74588]: 2025-10-07 13:30:37.366484453 +0000 UTC m=+0.052948923 container create ca27dea31fc9130cc6cf547beff5be3e6c8abb9a469b2fe6df1918da1285db8d (image=quay.io/ceph/ceph:v18, name=great_ptolemy, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 09:30:37 np0005473739 systemd[1]: Started libpod-conmon-ca27dea31fc9130cc6cf547beff5be3e6c8abb9a469b2fe6df1918da1285db8d.scope.
Oct  7 09:30:37 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:37 np0005473739 podman[74588]: 2025-10-07 13:30:37.344766596 +0000 UTC m=+0.031231106 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6fae0edf9bae2ff7bc41da7b236dd19552999aff9efd4a5dc9c950aeaac726/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6fae0edf9bae2ff7bc41da7b236dd19552999aff9efd4a5dc9c950aeaac726/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2a6fae0edf9bae2ff7bc41da7b236dd19552999aff9efd4a5dc9c950aeaac726/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:37 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'alerts'
Oct  7 09:30:37 np0005473739 podman[74588]: 2025-10-07 13:30:37.470766222 +0000 UTC m=+0.157230712 container init ca27dea31fc9130cc6cf547beff5be3e6c8abb9a469b2fe6df1918da1285db8d (image=quay.io/ceph/ceph:v18, name=great_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:30:37 np0005473739 podman[74588]: 2025-10-07 13:30:37.48484428 +0000 UTC m=+0.171308770 container start ca27dea31fc9130cc6cf547beff5be3e6c8abb9a469b2fe6df1918da1285db8d (image=quay.io/ceph/ceph:v18, name=great_ptolemy, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  7 09:30:37 np0005473739 podman[74588]: 2025-10-07 13:30:37.489699668 +0000 UTC m=+0.176164168 container attach ca27dea31fc9130cc6cf547beff5be3e6c8abb9a469b2fe6df1918da1285db8d (image=quay.io/ceph/ceph:v18, name=great_ptolemy, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:30:37 np0005473739 ceph-mgr[74587]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  7 09:30:37 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'balancer'
Oct  7 09:30:37 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:37.774+0000 7f19ec247140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  7 09:30:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  7 09:30:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1112751793' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]: 
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]: {
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "health": {
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "status": "HEALTH_OK",
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "checks": {},
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "mutes": []
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    },
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "election_epoch": 5,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "quorum": [
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        0
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    ],
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "quorum_names": [
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "compute-0"
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    ],
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "quorum_age": 2,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "monmap": {
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "epoch": 1,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "min_mon_release_name": "reef",
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "num_mons": 1
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    },
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "osdmap": {
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "epoch": 1,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "num_osds": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "num_up_osds": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "osd_up_since": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "num_in_osds": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "osd_in_since": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "num_remapped_pgs": 0
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    },
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "pgmap": {
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "pgs_by_state": [],
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "num_pgs": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "num_pools": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "num_objects": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "data_bytes": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "bytes_used": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "bytes_avail": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "bytes_total": 0
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    },
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "fsmap": {
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "epoch": 1,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "by_rank": [],
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "up:standby": 0
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    },
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "mgrmap": {
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "available": false,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "num_standbys": 0,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "modules": [
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:            "iostat",
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:            "nfs",
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:            "restful"
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        ],
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "services": {}
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    },
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "servicemap": {
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "epoch": 1,
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "modified": "2025-10-07T13:30:32.279410+0000",
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:        "services": {}
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    },
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]:    "progress_events": {}
Oct  7 09:30:37 np0005473739 great_ptolemy[74628]: }
Oct  7 09:30:37 np0005473739 systemd[1]: libpod-ca27dea31fc9130cc6cf547beff5be3e6c8abb9a469b2fe6df1918da1285db8d.scope: Deactivated successfully.
Oct  7 09:30:37 np0005473739 podman[74588]: 2025-10-07 13:30:37.945455713 +0000 UTC m=+0.631920203 container died ca27dea31fc9130cc6cf547beff5be3e6c8abb9a469b2fe6df1918da1285db8d (image=quay.io/ceph/ceph:v18, name=great_ptolemy, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:30:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2a6fae0edf9bae2ff7bc41da7b236dd19552999aff9efd4a5dc9c950aeaac726-merged.mount: Deactivated successfully.
Oct  7 09:30:38 np0005473739 podman[74588]: 2025-10-07 13:30:38.014679113 +0000 UTC m=+0.701143573 container remove ca27dea31fc9130cc6cf547beff5be3e6c8abb9a469b2fe6df1918da1285db8d (image=quay.io/ceph/ceph:v18, name=great_ptolemy, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:30:38 np0005473739 systemd[1]: libpod-conmon-ca27dea31fc9130cc6cf547beff5be3e6c8abb9a469b2fe6df1918da1285db8d.scope: Deactivated successfully.
Oct  7 09:30:38 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:38.044+0000 7f19ec247140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  7 09:30:38 np0005473739 ceph-mgr[74587]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  7 09:30:38 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'cephadm'
Oct  7 09:30:39 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'crash'
Oct  7 09:30:40 np0005473739 podman[74676]: 2025-10-07 13:30:40.099264815 +0000 UTC m=+0.054079078 container create d35ff3a1bb3d481a197eaa536ba5c153d06613088a01c4c22a178d9098e3010c (image=quay.io/ceph/ceph:v18, name=flamboyant_bhaskara, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 09:30:40 np0005473739 systemd[1]: Started libpod-conmon-d35ff3a1bb3d481a197eaa536ba5c153d06613088a01c4c22a178d9098e3010c.scope.
Oct  7 09:30:40 np0005473739 podman[74676]: 2025-10-07 13:30:40.070340786 +0000 UTC m=+0.025155079 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:40 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:40.175+0000 7f19ec247140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  7 09:30:40 np0005473739 ceph-mgr[74587]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  7 09:30:40 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'dashboard'
Oct  7 09:30:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63e9f5f31a5dad6e1c44a38460c6282aaac5e8c2df9e63aaf8a7fc0cfd8c595/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63e9f5f31a5dad6e1c44a38460c6282aaac5e8c2df9e63aaf8a7fc0cfd8c595/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c63e9f5f31a5dad6e1c44a38460c6282aaac5e8c2df9e63aaf8a7fc0cfd8c595/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:40 np0005473739 podman[74676]: 2025-10-07 13:30:40.201972415 +0000 UTC m=+0.156786748 container init d35ff3a1bb3d481a197eaa536ba5c153d06613088a01c4c22a178d9098e3010c (image=quay.io/ceph/ceph:v18, name=flamboyant_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:30:40 np0005473739 podman[74676]: 2025-10-07 13:30:40.209269031 +0000 UTC m=+0.164083284 container start d35ff3a1bb3d481a197eaa536ba5c153d06613088a01c4c22a178d9098e3010c (image=quay.io/ceph/ceph:v18, name=flamboyant_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 09:30:40 np0005473739 podman[74676]: 2025-10-07 13:30:40.212570588 +0000 UTC m=+0.167384861 container attach d35ff3a1bb3d481a197eaa536ba5c153d06613088a01c4c22a178d9098e3010c (image=quay.io/ceph/ceph:v18, name=flamboyant_bhaskara, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:30:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  7 09:30:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/900082923' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]: 
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]: {
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "health": {
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "status": "HEALTH_OK",
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "checks": {},
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "mutes": []
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    },
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "election_epoch": 5,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "quorum": [
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        0
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    ],
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "quorum_names": [
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "compute-0"
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    ],
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "quorum_age": 5,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "monmap": {
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "epoch": 1,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "min_mon_release_name": "reef",
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "num_mons": 1
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    },
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "osdmap": {
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "epoch": 1,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "num_osds": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "num_up_osds": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "osd_up_since": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "num_in_osds": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "osd_in_since": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "num_remapped_pgs": 0
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    },
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "pgmap": {
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "pgs_by_state": [],
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "num_pgs": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "num_pools": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "num_objects": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "data_bytes": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "bytes_used": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "bytes_avail": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "bytes_total": 0
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    },
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "fsmap": {
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "epoch": 1,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "by_rank": [],
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "up:standby": 0
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    },
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "mgrmap": {
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "available": false,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "num_standbys": 0,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "modules": [
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:            "iostat",
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:            "nfs",
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:            "restful"
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        ],
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "services": {}
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    },
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "servicemap": {
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "epoch": 1,
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "modified": "2025-10-07T13:30:32.279410+0000",
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:        "services": {}
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    },
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]:    "progress_events": {}
Oct  7 09:30:40 np0005473739 flamboyant_bhaskara[74692]: }
Oct  7 09:30:40 np0005473739 systemd[1]: libpod-d35ff3a1bb3d481a197eaa536ba5c153d06613088a01c4c22a178d9098e3010c.scope: Deactivated successfully.
Oct  7 09:30:40 np0005473739 podman[74719]: 2025-10-07 13:30:40.663004061 +0000 UTC m=+0.046536764 container died d35ff3a1bb3d481a197eaa536ba5c153d06613088a01c4c22a178d9098e3010c (image=quay.io/ceph/ceph:v18, name=flamboyant_bhaskara, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 09:30:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c63e9f5f31a5dad6e1c44a38460c6282aaac5e8c2df9e63aaf8a7fc0cfd8c595-merged.mount: Deactivated successfully.
Oct  7 09:30:40 np0005473739 podman[74719]: 2025-10-07 13:30:40.70820841 +0000 UTC m=+0.091741033 container remove d35ff3a1bb3d481a197eaa536ba5c153d06613088a01c4c22a178d9098e3010c (image=quay.io/ceph/ceph:v18, name=flamboyant_bhaskara, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:30:40 np0005473739 systemd[1]: libpod-conmon-d35ff3a1bb3d481a197eaa536ba5c153d06613088a01c4c22a178d9098e3010c.scope: Deactivated successfully.
Oct  7 09:30:41 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'devicehealth'
Oct  7 09:30:41 np0005473739 ceph-mgr[74587]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  7 09:30:41 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'diskprediction_local'
Oct  7 09:30:41 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:41.842+0000 7f19ec247140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  7 09:30:42 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  7 09:30:42 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  7 09:30:42 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]:  from numpy import show_config as show_numpy_config
Oct  7 09:30:42 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:42.354+0000 7f19ec247140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  7 09:30:42 np0005473739 ceph-mgr[74587]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  7 09:30:42 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'influx'
Oct  7 09:30:42 np0005473739 ceph-mgr[74587]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  7 09:30:42 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'insights'
Oct  7 09:30:42 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:42.600+0000 7f19ec247140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  7 09:30:42 np0005473739 podman[74734]: 2025-10-07 13:30:42.791091418 +0000 UTC m=+0.048578041 container create be5d35e81e6e13df51c121c3de1c2f38ea41df12c13d687bea2022472fbaea8d (image=quay.io/ceph/ceph:v18, name=wonderful_curie, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:30:42 np0005473739 systemd[1]: Started libpod-conmon-be5d35e81e6e13df51c121c3de1c2f38ea41df12c13d687bea2022472fbaea8d.scope.
Oct  7 09:30:42 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'iostat'
Oct  7 09:30:42 np0005473739 podman[74734]: 2025-10-07 13:30:42.770411215 +0000 UTC m=+0.027897818 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:42 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27ceb750ac951aa6747cf4b44c31d15e663ecbbc329dfeb345d845517daf21b9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27ceb750ac951aa6747cf4b44c31d15e663ecbbc329dfeb345d845517daf21b9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27ceb750ac951aa6747cf4b44c31d15e663ecbbc329dfeb345d845517daf21b9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:42 np0005473739 podman[74734]: 2025-10-07 13:30:42.882248571 +0000 UTC m=+0.139735154 container init be5d35e81e6e13df51c121c3de1c2f38ea41df12c13d687bea2022472fbaea8d (image=quay.io/ceph/ceph:v18, name=wonderful_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  7 09:30:42 np0005473739 podman[74734]: 2025-10-07 13:30:42.894124406 +0000 UTC m=+0.151611019 container start be5d35e81e6e13df51c121c3de1c2f38ea41df12c13d687bea2022472fbaea8d (image=quay.io/ceph/ceph:v18, name=wonderful_curie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:30:42 np0005473739 podman[74734]: 2025-10-07 13:30:42.899262793 +0000 UTC m=+0.156749386 container attach be5d35e81e6e13df51c121c3de1c2f38ea41df12c13d687bea2022472fbaea8d (image=quay.io/ceph/ceph:v18, name=wonderful_curie, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:30:43 np0005473739 ceph-mgr[74587]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  7 09:30:43 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'k8sevents'
Oct  7 09:30:43 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:43.057+0000 7f19ec247140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  7 09:30:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  7 09:30:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3343126771' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]: 
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]: {
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "health": {
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "status": "HEALTH_OK",
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "checks": {},
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "mutes": []
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    },
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "election_epoch": 5,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "quorum": [
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        0
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    ],
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "quorum_names": [
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "compute-0"
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    ],
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "quorum_age": 8,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "monmap": {
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "epoch": 1,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "min_mon_release_name": "reef",
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "num_mons": 1
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    },
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "osdmap": {
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "epoch": 1,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "num_osds": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "num_up_osds": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "osd_up_since": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "num_in_osds": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "osd_in_since": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "num_remapped_pgs": 0
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    },
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "pgmap": {
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "pgs_by_state": [],
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "num_pgs": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "num_pools": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "num_objects": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "data_bytes": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "bytes_used": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "bytes_avail": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "bytes_total": 0
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    },
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "fsmap": {
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "epoch": 1,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "by_rank": [],
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "up:standby": 0
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    },
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "mgrmap": {
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "available": false,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "num_standbys": 0,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "modules": [
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:            "iostat",
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:            "nfs",
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:            "restful"
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        ],
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "services": {}
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    },
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "servicemap": {
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "epoch": 1,
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "modified": "2025-10-07T13:30:32.279410+0000",
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:        "services": {}
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    },
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]:    "progress_events": {}
Oct  7 09:30:43 np0005473739 wonderful_curie[74750]: }
Oct  7 09:30:43 np0005473739 systemd[1]: libpod-be5d35e81e6e13df51c121c3de1c2f38ea41df12c13d687bea2022472fbaea8d.scope: Deactivated successfully.
Oct  7 09:30:43 np0005473739 podman[74734]: 2025-10-07 13:30:43.271350289 +0000 UTC m=+0.528836922 container died be5d35e81e6e13df51c121c3de1c2f38ea41df12c13d687bea2022472fbaea8d (image=quay.io/ceph/ceph:v18, name=wonderful_curie, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:30:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-27ceb750ac951aa6747cf4b44c31d15e663ecbbc329dfeb345d845517daf21b9-merged.mount: Deactivated successfully.
Oct  7 09:30:43 np0005473739 podman[74734]: 2025-10-07 13:30:43.327835515 +0000 UTC m=+0.585322138 container remove be5d35e81e6e13df51c121c3de1c2f38ea41df12c13d687bea2022472fbaea8d (image=quay.io/ceph/ceph:v18, name=wonderful_curie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 09:30:43 np0005473739 systemd[1]: libpod-conmon-be5d35e81e6e13df51c121c3de1c2f38ea41df12c13d687bea2022472fbaea8d.scope: Deactivated successfully.
Oct  7 09:30:44 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'localpool'
Oct  7 09:30:45 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'mds_autoscaler'
Oct  7 09:30:45 np0005473739 podman[74788]: 2025-10-07 13:30:45.399747996 +0000 UTC m=+0.049105918 container create 801abf97cc0c329729aae9466c73216b7a1596e5e501d371ddec857bd9dcfc18 (image=quay.io/ceph/ceph:v18, name=loving_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 09:30:45 np0005473739 systemd[1]: Started libpod-conmon-801abf97cc0c329729aae9466c73216b7a1596e5e501d371ddec857bd9dcfc18.scope.
Oct  7 09:30:45 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31d0293f516e723321ee178425a5913c19bb41c775f98237709ea02e97a12a5/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31d0293f516e723321ee178425a5913c19bb41c775f98237709ea02e97a12a5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31d0293f516e723321ee178425a5913c19bb41c775f98237709ea02e97a12a5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:45 np0005473739 podman[74788]: 2025-10-07 13:30:45.375229158 +0000 UTC m=+0.024587110 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:45 np0005473739 podman[74788]: 2025-10-07 13:30:45.483671134 +0000 UTC m=+0.133029076 container init 801abf97cc0c329729aae9466c73216b7a1596e5e501d371ddec857bd9dcfc18 (image=quay.io/ceph/ceph:v18, name=loving_taussig, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  7 09:30:45 np0005473739 podman[74788]: 2025-10-07 13:30:45.490077822 +0000 UTC m=+0.139435724 container start 801abf97cc0c329729aae9466c73216b7a1596e5e501d371ddec857bd9dcfc18 (image=quay.io/ceph/ceph:v18, name=loving_taussig, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 09:30:45 np0005473739 podman[74788]: 2025-10-07 13:30:45.493526964 +0000 UTC m=+0.142884916 container attach 801abf97cc0c329729aae9466c73216b7a1596e5e501d371ddec857bd9dcfc18 (image=quay.io/ceph/ceph:v18, name=loving_taussig, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:30:45 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'mirroring'
Oct  7 09:30:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  7 09:30:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4042635389' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  7 09:30:45 np0005473739 loving_taussig[74805]: 
Oct  7 09:30:45 np0005473739 loving_taussig[74805]: {
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "health": {
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "status": "HEALTH_OK",
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "checks": {},
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "mutes": []
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    },
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "election_epoch": 5,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "quorum": [
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        0
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    ],
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "quorum_names": [
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "compute-0"
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    ],
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "quorum_age": 10,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "monmap": {
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "epoch": 1,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "min_mon_release_name": "reef",
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "num_mons": 1
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    },
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "osdmap": {
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "epoch": 1,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "num_osds": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "num_up_osds": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "osd_up_since": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "num_in_osds": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "osd_in_since": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "num_remapped_pgs": 0
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    },
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "pgmap": {
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "pgs_by_state": [],
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "num_pgs": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "num_pools": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "num_objects": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "data_bytes": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "bytes_used": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "bytes_avail": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "bytes_total": 0
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    },
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "fsmap": {
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "epoch": 1,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "by_rank": [],
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "up:standby": 0
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    },
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "mgrmap": {
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "available": false,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "num_standbys": 0,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "modules": [
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:            "iostat",
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:            "nfs",
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:            "restful"
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        ],
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "services": {}
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    },
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "servicemap": {
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "epoch": 1,
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "modified": "2025-10-07T13:30:32.279410+0000",
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:        "services": {}
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    },
Oct  7 09:30:45 np0005473739 loving_taussig[74805]:    "progress_events": {}
Oct  7 09:30:45 np0005473739 loving_taussig[74805]: }
Oct  7 09:30:45 np0005473739 systemd[1]: libpod-801abf97cc0c329729aae9466c73216b7a1596e5e501d371ddec857bd9dcfc18.scope: Deactivated successfully.
Oct  7 09:30:45 np0005473739 podman[74788]: 2025-10-07 13:30:45.897905429 +0000 UTC m=+0.547263361 container died 801abf97cc0c329729aae9466c73216b7a1596e5e501d371ddec857bd9dcfc18 (image=quay.io/ceph/ceph:v18, name=loving_taussig, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 09:30:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a31d0293f516e723321ee178425a5913c19bb41c775f98237709ea02e97a12a5-merged.mount: Deactivated successfully.
Oct  7 09:30:45 np0005473739 podman[74788]: 2025-10-07 13:30:45.95699876 +0000 UTC m=+0.606356702 container remove 801abf97cc0c329729aae9466c73216b7a1596e5e501d371ddec857bd9dcfc18 (image=quay.io/ceph/ceph:v18, name=loving_taussig, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 09:30:45 np0005473739 systemd[1]: libpod-conmon-801abf97cc0c329729aae9466c73216b7a1596e5e501d371ddec857bd9dcfc18.scope: Deactivated successfully.
Oct  7 09:30:46 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'nfs'
Oct  7 09:30:46 np0005473739 ceph-mgr[74587]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  7 09:30:46 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'orchestrator'
Oct  7 09:30:46 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:46.728+0000 7f19ec247140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  7 09:30:47 np0005473739 ceph-mgr[74587]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  7 09:30:47 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'osd_perf_query'
Oct  7 09:30:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:47.420+0000 7f19ec247140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  7 09:30:47 np0005473739 ceph-mgr[74587]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  7 09:30:47 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'osd_support'
Oct  7 09:30:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:47.692+0000 7f19ec247140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  7 09:30:47 np0005473739 ceph-mgr[74587]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  7 09:30:47 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'pg_autoscaler'
Oct  7 09:30:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:47.928+0000 7f19ec247140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  7 09:30:48 np0005473739 podman[74844]: 2025-10-07 13:30:48.05533941 +0000 UTC m=+0.065050946 container create 56c2822719845990c03fba5442d5f2143098f500d3db2592c0cb0f983192b4e7 (image=quay.io/ceph/ceph:v18, name=compassionate_zhukovsky, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:30:48 np0005473739 systemd[1]: Started libpod-conmon-56c2822719845990c03fba5442d5f2143098f500d3db2592c0cb0f983192b4e7.scope.
Oct  7 09:30:48 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef5b2af13790be7993b7f67c9ff0e446c6b943fed9796d76f41e837cf2416e0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef5b2af13790be7993b7f67c9ff0e446c6b943fed9796d76f41e837cf2416e0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bef5b2af13790be7993b7f67c9ff0e446c6b943fed9796d76f41e837cf2416e0/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:48 np0005473739 podman[74844]: 2025-10-07 13:30:48.034867444 +0000 UTC m=+0.044578990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:48 np0005473739 podman[74844]: 2025-10-07 13:30:48.150438171 +0000 UTC m=+0.160149717 container init 56c2822719845990c03fba5442d5f2143098f500d3db2592c0cb0f983192b4e7 (image=quay.io/ceph/ceph:v18, name=compassionate_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 09:30:48 np0005473739 podman[74844]: 2025-10-07 13:30:48.156397135 +0000 UTC m=+0.166108661 container start 56c2822719845990c03fba5442d5f2143098f500d3db2592c0cb0f983192b4e7 (image=quay.io/ceph/ceph:v18, name=compassionate_zhukovsky, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:30:48 np0005473739 podman[74844]: 2025-10-07 13:30:48.159995721 +0000 UTC m=+0.169707277 container attach 56c2822719845990c03fba5442d5f2143098f500d3db2592c0cb0f983192b4e7 (image=quay.io/ceph/ceph:v18, name=compassionate_zhukovsky, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:30:48 np0005473739 ceph-mgr[74587]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  7 09:30:48 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'progress'
Oct  7 09:30:48 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:48.220+0000 7f19ec247140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  7 09:30:48 np0005473739 ceph-mgr[74587]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  7 09:30:48 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'prometheus'
Oct  7 09:30:48 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:48.453+0000 7f19ec247140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  7 09:30:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  7 09:30:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/370441292' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]: 
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]: {
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "health": {
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "status": "HEALTH_OK",
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "checks": {},
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "mutes": []
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    },
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "election_epoch": 5,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "quorum": [
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        0
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    ],
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "quorum_names": [
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "compute-0"
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    ],
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "quorum_age": 13,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "monmap": {
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "epoch": 1,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "min_mon_release_name": "reef",
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "num_mons": 1
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    },
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "osdmap": {
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "epoch": 1,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "num_osds": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "num_up_osds": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "osd_up_since": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "num_in_osds": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "osd_in_since": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "num_remapped_pgs": 0
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    },
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "pgmap": {
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "pgs_by_state": [],
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "num_pgs": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "num_pools": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "num_objects": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "data_bytes": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "bytes_used": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "bytes_avail": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "bytes_total": 0
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    },
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "fsmap": {
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "epoch": 1,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "by_rank": [],
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "up:standby": 0
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    },
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "mgrmap": {
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "available": false,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "num_standbys": 0,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "modules": [
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:            "iostat",
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:            "nfs",
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:            "restful"
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        ],
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "services": {}
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    },
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "servicemap": {
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "epoch": 1,
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "modified": "2025-10-07T13:30:32.279410+0000",
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:        "services": {}
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    },
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]:    "progress_events": {}
Oct  7 09:30:48 np0005473739 compassionate_zhukovsky[74861]: }
Oct  7 09:30:48 np0005473739 systemd[1]: libpod-56c2822719845990c03fba5442d5f2143098f500d3db2592c0cb0f983192b4e7.scope: Deactivated successfully.
Oct  7 09:30:48 np0005473739 podman[74844]: 2025-10-07 13:30:48.557462332 +0000 UTC m=+0.567173848 container died 56c2822719845990c03fba5442d5f2143098f500d3db2592c0cb0f983192b4e7 (image=quay.io/ceph/ceph:v18, name=compassionate_zhukovsky, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 09:30:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-bef5b2af13790be7993b7f67c9ff0e446c6b943fed9796d76f41e837cf2416e0-merged.mount: Deactivated successfully.
Oct  7 09:30:48 np0005473739 podman[74844]: 2025-10-07 13:30:48.627633142 +0000 UTC m=+0.637344698 container remove 56c2822719845990c03fba5442d5f2143098f500d3db2592c0cb0f983192b4e7 (image=quay.io/ceph/ceph:v18, name=compassionate_zhukovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 09:30:48 np0005473739 systemd[1]: libpod-conmon-56c2822719845990c03fba5442d5f2143098f500d3db2592c0cb0f983192b4e7.scope: Deactivated successfully.
Oct  7 09:30:49 np0005473739 ceph-mgr[74587]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  7 09:30:49 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:49.505+0000 7f19ec247140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  7 09:30:49 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'rbd_support'
Oct  7 09:30:49 np0005473739 ceph-mgr[74587]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  7 09:30:49 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'restful'
Oct  7 09:30:49 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:49.803+0000 7f19ec247140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  7 09:30:50 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'rgw'
Oct  7 09:30:50 np0005473739 podman[74899]: 2025-10-07 13:30:50.701640422 +0000 UTC m=+0.048346743 container create 92b2993e1e5b4ea5808cf2d9b0cd70c95c29e8263ea447ee66e66d856545b0a1 (image=quay.io/ceph/ceph:v18, name=heuristic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 09:30:50 np0005473739 systemd[1]: Started libpod-conmon-92b2993e1e5b4ea5808cf2d9b0cd70c95c29e8263ea447ee66e66d856545b0a1.scope.
Oct  7 09:30:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ba866f8c37a8ec0c3acbc855c3a83fedebb1210416266ac2ee745eefb0b1520/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ba866f8c37a8ec0c3acbc855c3a83fedebb1210416266ac2ee745eefb0b1520/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ba866f8c37a8ec0c3acbc855c3a83fedebb1210416266ac2ee745eefb0b1520/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:50 np0005473739 podman[74899]: 2025-10-07 13:30:50.766980015 +0000 UTC m=+0.113686366 container init 92b2993e1e5b4ea5808cf2d9b0cd70c95c29e8263ea447ee66e66d856545b0a1 (image=quay.io/ceph/ceph:v18, name=heuristic_blackwell, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:30:50 np0005473739 podman[74899]: 2025-10-07 13:30:50.684954149 +0000 UTC m=+0.031660500 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:50 np0005473739 podman[74899]: 2025-10-07 13:30:50.78221076 +0000 UTC m=+0.128917081 container start 92b2993e1e5b4ea5808cf2d9b0cd70c95c29e8263ea447ee66e66d856545b0a1 (image=quay.io/ceph/ceph:v18, name=heuristic_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 09:30:50 np0005473739 podman[74899]: 2025-10-07 13:30:50.786420587 +0000 UTC m=+0.133126908 container attach 92b2993e1e5b4ea5808cf2d9b0cd70c95c29e8263ea447ee66e66d856545b0a1 (image=quay.io/ceph/ceph:v18, name=heuristic_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  7 09:30:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  7 09:30:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351311102' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]: 
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]: {
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "health": {
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "status": "HEALTH_OK",
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "checks": {},
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "mutes": []
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    },
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "election_epoch": 5,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "quorum": [
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        0
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    ],
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "quorum_names": [
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "compute-0"
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    ],
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "quorum_age": 16,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "monmap": {
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "epoch": 1,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "min_mon_release_name": "reef",
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "num_mons": 1
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    },
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "osdmap": {
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "epoch": 1,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "num_osds": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "num_up_osds": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "osd_up_since": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "num_in_osds": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "osd_in_since": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "num_remapped_pgs": 0
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    },
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "pgmap": {
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "pgs_by_state": [],
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "num_pgs": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "num_pools": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "num_objects": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "data_bytes": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "bytes_used": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "bytes_avail": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "bytes_total": 0
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    },
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "fsmap": {
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "epoch": 1,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "by_rank": [],
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "up:standby": 0
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    },
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "mgrmap": {
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "available": false,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "num_standbys": 0,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "modules": [
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:            "iostat",
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:            "nfs",
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:            "restful"
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        ],
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "services": {}
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    },
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "servicemap": {
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "epoch": 1,
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "modified": "2025-10-07T13:30:32.279410+0000",
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:        "services": {}
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    },
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]:    "progress_events": {}
Oct  7 09:30:51 np0005473739 heuristic_blackwell[74916]: }
Oct  7 09:30:51 np0005473739 systemd[1]: libpod-92b2993e1e5b4ea5808cf2d9b0cd70c95c29e8263ea447ee66e66d856545b0a1.scope: Deactivated successfully.
Oct  7 09:30:51 np0005473739 podman[74899]: 2025-10-07 13:30:51.170534944 +0000 UTC m=+0.517241295 container died 92b2993e1e5b4ea5808cf2d9b0cd70c95c29e8263ea447ee66e66d856545b0a1 (image=quay.io/ceph/ceph:v18, name=heuristic_blackwell, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:30:51 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6ba866f8c37a8ec0c3acbc855c3a83fedebb1210416266ac2ee745eefb0b1520-merged.mount: Deactivated successfully.
Oct  7 09:30:51 np0005473739 ceph-mgr[74587]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  7 09:30:51 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'rook'
Oct  7 09:30:51 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:51.216+0000 7f19ec247140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  7 09:30:51 np0005473739 podman[74899]: 2025-10-07 13:30:51.226480432 +0000 UTC m=+0.573186763 container remove 92b2993e1e5b4ea5808cf2d9b0cd70c95c29e8263ea447ee66e66d856545b0a1 (image=quay.io/ceph/ceph:v18, name=heuristic_blackwell, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  7 09:30:51 np0005473739 systemd[1]: libpod-conmon-92b2993e1e5b4ea5808cf2d9b0cd70c95c29e8263ea447ee66e66d856545b0a1.scope: Deactivated successfully.
Oct  7 09:30:53 np0005473739 podman[74954]: 2025-10-07 13:30:53.28906711 +0000 UTC m=+0.039634160 container create b3d499f0686ac9e80e1437f223f3a6ef76d371db5d12d0c60becb849a815466a (image=quay.io/ceph/ceph:v18, name=affectionate_mcclintock, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:30:53 np0005473739 systemd[1]: Started libpod-conmon-b3d499f0686ac9e80e1437f223f3a6ef76d371db5d12d0c60becb849a815466a.scope.
Oct  7 09:30:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf203009cccf607a203780111f02782e9b0fcbe730aab046f3435bc47bcca25/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf203009cccf607a203780111f02782e9b0fcbe730aab046f3435bc47bcca25/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eaf203009cccf607a203780111f02782e9b0fcbe730aab046f3435bc47bcca25/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:53 np0005473739 podman[74954]: 2025-10-07 13:30:53.359069805 +0000 UTC m=+0.109636875 container init b3d499f0686ac9e80e1437f223f3a6ef76d371db5d12d0c60becb849a815466a (image=quay.io/ceph/ceph:v18, name=affectionate_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 09:30:53 np0005473739 podman[74954]: 2025-10-07 13:30:53.363490779 +0000 UTC m=+0.114057829 container start b3d499f0686ac9e80e1437f223f3a6ef76d371db5d12d0c60becb849a815466a (image=quay.io/ceph/ceph:v18, name=affectionate_mcclintock, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 09:30:53 np0005473739 podman[74954]: 2025-10-07 13:30:53.271503898 +0000 UTC m=+0.022070968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:53 np0005473739 podman[74954]: 2025-10-07 13:30:53.37246467 +0000 UTC m=+0.123031750 container attach b3d499f0686ac9e80e1437f223f3a6ef76d371db5d12d0c60becb849a815466a (image=quay.io/ceph/ceph:v18, name=affectionate_mcclintock, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:30:53 np0005473739 ceph-mgr[74587]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  7 09:30:53 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'selftest'
Oct  7 09:30:53 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:53.384+0000 7f19ec247140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  7 09:30:53 np0005473739 ceph-mgr[74587]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  7 09:30:53 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'snap_schedule'
Oct  7 09:30:53 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:53.635+0000 7f19ec247140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  7 09:30:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  7 09:30:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4076747161' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]: 
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]: {
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "health": {
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "status": "HEALTH_OK",
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "checks": {},
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "mutes": []
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    },
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "election_epoch": 5,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "quorum": [
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        0
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    ],
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "quorum_names": [
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "compute-0"
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    ],
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "quorum_age": 18,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "monmap": {
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "epoch": 1,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "min_mon_release_name": "reef",
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "num_mons": 1
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    },
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "osdmap": {
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "epoch": 1,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "num_osds": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "num_up_osds": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "osd_up_since": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "num_in_osds": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "osd_in_since": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "num_remapped_pgs": 0
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    },
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "pgmap": {
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "pgs_by_state": [],
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "num_pgs": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "num_pools": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "num_objects": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "data_bytes": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "bytes_used": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "bytes_avail": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "bytes_total": 0
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    },
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "fsmap": {
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "epoch": 1,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "by_rank": [],
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "up:standby": 0
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    },
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "mgrmap": {
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "available": false,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "num_standbys": 0,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "modules": [
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:            "iostat",
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:            "nfs",
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:            "restful"
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        ],
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "services": {}
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    },
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "servicemap": {
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "epoch": 1,
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "modified": "2025-10-07T13:30:32.279410+0000",
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:        "services": {}
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    },
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]:    "progress_events": {}
Oct  7 09:30:53 np0005473739 affectionate_mcclintock[74970]: }
Oct  7 09:30:53 np0005473739 systemd[1]: libpod-b3d499f0686ac9e80e1437f223f3a6ef76d371db5d12d0c60becb849a815466a.scope: Deactivated successfully.
Oct  7 09:30:53 np0005473739 podman[74954]: 2025-10-07 13:30:53.739686227 +0000 UTC m=+0.490253287 container died b3d499f0686ac9e80e1437f223f3a6ef76d371db5d12d0c60becb849a815466a (image=quay.io/ceph/ceph:v18, name=affectionate_mcclintock, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 09:30:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-eaf203009cccf607a203780111f02782e9b0fcbe730aab046f3435bc47bcca25-merged.mount: Deactivated successfully.
Oct  7 09:30:53 np0005473739 podman[74954]: 2025-10-07 13:30:53.802097486 +0000 UTC m=+0.552664566 container remove b3d499f0686ac9e80e1437f223f3a6ef76d371db5d12d0c60becb849a815466a (image=quay.io/ceph/ceph:v18, name=affectionate_mcclintock, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:30:53 np0005473739 systemd[1]: libpod-conmon-b3d499f0686ac9e80e1437f223f3a6ef76d371db5d12d0c60becb849a815466a.scope: Deactivated successfully.
Oct  7 09:30:53 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:53.885+0000 7f19ec247140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  7 09:30:53 np0005473739 ceph-mgr[74587]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  7 09:30:53 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'stats'
Oct  7 09:30:54 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'status'
Oct  7 09:30:54 np0005473739 ceph-mgr[74587]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  7 09:30:54 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'telegraf'
Oct  7 09:30:54 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:54.386+0000 7f19ec247140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  7 09:30:54 np0005473739 ceph-mgr[74587]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  7 09:30:54 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'telemetry'
Oct  7 09:30:54 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:54.611+0000 7f19ec247140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  7 09:30:55 np0005473739 ceph-mgr[74587]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  7 09:30:55 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'test_orchestrator'
Oct  7 09:30:55 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:55.229+0000 7f19ec247140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  7 09:30:55 np0005473739 ceph-mgr[74587]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  7 09:30:55 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'volumes'
Oct  7 09:30:55 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:55.899+0000 7f19ec247140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  7 09:30:55 np0005473739 podman[75010]: 2025-10-07 13:30:55.850784402 +0000 UTC m=+0.019764104 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:30:56 np0005473739 ceph-mgr[74587]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  7 09:30:56 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'zabbix'
Oct  7 09:30:56 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:56.628+0000 7f19ec247140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  7 09:30:56 np0005473739 ceph-mgr[74587]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  7 09:30:56 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:30:56.876+0000 7f19ec247140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  7 09:30:56 np0005473739 ceph-mgr[74587]: ms_deliver_dispatch: unhandled message 0x55db459211e0 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct  7 09:30:56 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.kdyrcd
Oct  7 09:30:57 np0005473739 podman[75010]: 2025-10-07 13:30:57.066684437 +0000 UTC m=+1.235664129 container create 2697f7081c6b2855ea1325d0a2135f9e6fb9ad0e65e2bd6183fa2f56a3b58af8 (image=quay.io/ceph/ceph:v18, name=serene_cori, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : mgrmap e2: compute-0.kdyrcd(active, starting, since 0.190346s)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr handle_mgr_map Activating!
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr handle_mgr_map I am now activating
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e1 all = 1
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.kdyrcd", "id": "compute-0.kdyrcd"} v 0) v1
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kdyrcd", "id": "compute-0.kdyrcd"}]: dispatch
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Manager daemon compute-0.kdyrcd is now available
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: balancer
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: crash
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [balancer INFO root] Starting
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: devicehealth
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:30:57
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [balancer INFO root] No pools available
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: iostat
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Starting
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: nfs
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: orchestrator
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: pg_autoscaler
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: progress
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:30:57 np0005473739 systemd[1]: Started libpod-conmon-2697f7081c6b2855ea1325d0a2135f9e6fb9ad0e65e2bd6183fa2f56a3b58af8.scope.
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [progress INFO root] Loading...
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [progress INFO root] No stored events to load
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [progress INFO root] Loaded [] historic events
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [progress INFO root] Loaded OSDMap, ready.
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] recovery thread starting
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] starting setup
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: rbd_support
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: restful
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [restful INFO root] server_addr: :: server_port: 8003
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [restful WARNING root] server not running: no certificate configured
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/mirror_snapshot_schedule"} v 0) v1
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/mirror_snapshot_schedule"}]: dispatch
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: status
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: telemetry
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] PerfHandler: starting
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: Activating manager daemon compute-0.kdyrcd
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: Manager daemon compute-0.kdyrcd is now available
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/mirror_snapshot_schedule"}]: dispatch
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TaskHandler: starting
Oct  7 09:30:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/report_id}] v 0) v1
Oct  7 09:30:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e16fd0f27e56a2b33ef85131f6dccf9c7c0ba45ccecdaf360778db5a0189d4b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e16fd0f27e56a2b33ef85131f6dccf9c7c0ba45ccecdaf360778db5a0189d4b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e16fd0f27e56a2b33ef85131f6dccf9c7c0ba45ccecdaf360778db5a0189d4b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/trash_purge_schedule"} v 0) v1
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/trash_purge_schedule"}]: dispatch
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/salt}] v 0) v1
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] setup complete
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/telemetry/collection}] v 0) v1
Oct  7 09:30:57 np0005473739 podman[75010]: 2025-10-07 13:30:57.142133999 +0000 UTC m=+1.311113751 container init 2697f7081c6b2855ea1325d0a2135f9e6fb9ad0e65e2bd6183fa2f56a3b58af8 (image=quay.io/ceph/ceph:v18, name=serene_cori, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:30:57 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: volumes
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:30:57 np0005473739 podman[75010]: 2025-10-07 13:30:57.147815674 +0000 UTC m=+1.316795386 container start 2697f7081c6b2855ea1325d0a2135f9e6fb9ad0e65e2bd6183fa2f56a3b58af8 (image=quay.io/ceph/ceph:v18, name=serene_cori, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:30:57 np0005473739 podman[75010]: 2025-10-07 13:30:57.151486763 +0000 UTC m=+1.320466535 container attach 2697f7081c6b2855ea1325d0a2135f9e6fb9ad0e65e2bd6183fa2f56a3b58af8 (image=quay.io/ceph/ceph:v18, name=serene_cori, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  7 09:30:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2350848850' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  7 09:30:57 np0005473739 serene_cori[75057]: 
Oct  7 09:30:57 np0005473739 serene_cori[75057]: {
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "health": {
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "status": "HEALTH_OK",
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "checks": {},
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "mutes": []
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    },
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "election_epoch": 5,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "quorum": [
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        0
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    ],
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "quorum_names": [
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "compute-0"
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    ],
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "quorum_age": 22,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "monmap": {
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "epoch": 1,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "min_mon_release_name": "reef",
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "num_mons": 1
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    },
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "osdmap": {
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "epoch": 1,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "num_osds": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "num_up_osds": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "osd_up_since": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "num_in_osds": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "osd_in_since": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "num_remapped_pgs": 0
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    },
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "pgmap": {
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "pgs_by_state": [],
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "num_pgs": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "num_pools": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "num_objects": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "data_bytes": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "bytes_used": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "bytes_avail": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "bytes_total": 0
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    },
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "fsmap": {
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "epoch": 1,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "by_rank": [],
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "up:standby": 0
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    },
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "mgrmap": {
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "available": false,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "num_standbys": 0,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "modules": [
Oct  7 09:30:57 np0005473739 serene_cori[75057]:            "iostat",
Oct  7 09:30:57 np0005473739 serene_cori[75057]:            "nfs",
Oct  7 09:30:57 np0005473739 serene_cori[75057]:            "restful"
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        ],
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "services": {}
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    },
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "servicemap": {
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "epoch": 1,
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "modified": "2025-10-07T13:30:32.279410+0000",
Oct  7 09:30:57 np0005473739 serene_cori[75057]:        "services": {}
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    },
Oct  7 09:30:57 np0005473739 serene_cori[75057]:    "progress_events": {}
Oct  7 09:30:57 np0005473739 serene_cori[75057]: }
Oct  7 09:30:57 np0005473739 systemd[1]: libpod-2697f7081c6b2855ea1325d0a2135f9e6fb9ad0e65e2bd6183fa2f56a3b58af8.scope: Deactivated successfully.
Oct  7 09:30:57 np0005473739 podman[75010]: 2025-10-07 13:30:57.554004638 +0000 UTC m=+1.722984320 container died 2697f7081c6b2855ea1325d0a2135f9e6fb9ad0e65e2bd6183fa2f56a3b58af8 (image=quay.io/ceph/ceph:v18, name=serene_cori, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 09:30:57 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5e16fd0f27e56a2b33ef85131f6dccf9c7c0ba45ccecdaf360778db5a0189d4b-merged.mount: Deactivated successfully.
Oct  7 09:30:57 np0005473739 podman[75010]: 2025-10-07 13:30:57.599481596 +0000 UTC m=+1.768461318 container remove 2697f7081c6b2855ea1325d0a2135f9e6fb9ad0e65e2bd6183fa2f56a3b58af8 (image=quay.io/ceph/ceph:v18, name=serene_cori, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:30:57 np0005473739 systemd[1]: libpod-conmon-2697f7081c6b2855ea1325d0a2135f9e6fb9ad0e65e2bd6183fa2f56a3b58af8.scope: Deactivated successfully.
Oct  7 09:30:58 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : mgrmap e3: compute-0.kdyrcd(active, since 1.21351s)
Oct  7 09:30:58 np0005473739 ceph-mon[74295]: from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/trash_purge_schedule"}]: dispatch
Oct  7 09:30:58 np0005473739 ceph-mon[74295]: from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:30:58 np0005473739 ceph-mon[74295]: from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:30:58 np0005473739 ceph-mon[74295]: from='mgr.14102 192.168.122.100:0/3771297848' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:30:59 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:30:59 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : mgrmap e4: compute-0.kdyrcd(active, since 2s)
Oct  7 09:30:59 np0005473739 podman[75145]: 2025-10-07 13:30:59.668203424 +0000 UTC m=+0.042600427 container create f220566517b1c1762bbd4dec3887f430e27064b9cda99b53ad36847d185a0a13 (image=quay.io/ceph/ceph:v18, name=xenodochial_brown, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:30:59 np0005473739 systemd[1]: Started libpod-conmon-f220566517b1c1762bbd4dec3887f430e27064b9cda99b53ad36847d185a0a13.scope.
Oct  7 09:30:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:30:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708591b66180912e8a60bd2d965cb7dc9a739b4a07b7c1c8e26debec435b4615/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708591b66180912e8a60bd2d965cb7dc9a739b4a07b7c1c8e26debec435b4615/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/708591b66180912e8a60bd2d965cb7dc9a739b4a07b7c1c8e26debec435b4615/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:30:59 np0005473739 podman[75145]: 2025-10-07 13:30:59.717244288 +0000 UTC m=+0.091641371 container init f220566517b1c1762bbd4dec3887f430e27064b9cda99b53ad36847d185a0a13 (image=quay.io/ceph/ceph:v18, name=xenodochial_brown, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 09:30:59 np0005473739 podman[75145]: 2025-10-07 13:30:59.723884513 +0000 UTC m=+0.098281516 container start f220566517b1c1762bbd4dec3887f430e27064b9cda99b53ad36847d185a0a13 (image=quay.io/ceph/ceph:v18, name=xenodochial_brown, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 09:30:59 np0005473739 podman[75145]: 2025-10-07 13:30:59.726448246 +0000 UTC m=+0.100845309 container attach f220566517b1c1762bbd4dec3887f430e27064b9cda99b53ad36847d185a0a13 (image=quay.io/ceph/ceph:v18, name=xenodochial_brown, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 09:30:59 np0005473739 podman[75145]: 2025-10-07 13:30:59.652772102 +0000 UTC m=+0.027169155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json-pretty"} v 0) v1
Oct  7 09:31:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3971766605' entity='client.admin' cmd=[{"prefix": "status", "format": "json-pretty"}]: dispatch
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]: 
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]: {
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "health": {
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "status": "HEALTH_OK",
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "checks": {},
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "mutes": []
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    },
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "election_epoch": 5,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "quorum": [
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        0
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    ],
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "quorum_names": [
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "compute-0"
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    ],
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "quorum_age": 25,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "monmap": {
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "epoch": 1,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "min_mon_release_name": "reef",
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "num_mons": 1
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    },
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "osdmap": {
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "epoch": 1,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "num_osds": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "num_up_osds": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "osd_up_since": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "num_in_osds": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "osd_in_since": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "num_remapped_pgs": 0
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    },
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "pgmap": {
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "pgs_by_state": [],
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "num_pgs": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "num_pools": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "num_objects": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "data_bytes": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "bytes_used": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "bytes_avail": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "bytes_total": 0
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    },
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "fsmap": {
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "epoch": 1,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "by_rank": [],
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "up:standby": 0
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    },
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "mgrmap": {
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "available": true,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "num_standbys": 0,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "modules": [
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:            "iostat",
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:            "nfs",
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:            "restful"
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        ],
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "services": {}
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    },
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "servicemap": {
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "epoch": 1,
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "modified": "2025-10-07T13:30:32.279410+0000",
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:        "services": {}
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    },
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]:    "progress_events": {}
Oct  7 09:31:00 np0005473739 xenodochial_brown[75161]: }
Oct  7 09:31:00 np0005473739 systemd[1]: libpod-f220566517b1c1762bbd4dec3887f430e27064b9cda99b53ad36847d185a0a13.scope: Deactivated successfully.
Oct  7 09:31:00 np0005473739 conmon[75161]: conmon f220566517b1c1762bbd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f220566517b1c1762bbd4dec3887f430e27064b9cda99b53ad36847d185a0a13.scope/container/memory.events
Oct  7 09:31:00 np0005473739 podman[75145]: 2025-10-07 13:31:00.337191529 +0000 UTC m=+0.711588562 container died f220566517b1c1762bbd4dec3887f430e27064b9cda99b53ad36847d185a0a13 (image=quay.io/ceph/ceph:v18, name=xenodochial_brown, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 09:31:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-708591b66180912e8a60bd2d965cb7dc9a739b4a07b7c1c8e26debec435b4615-merged.mount: Deactivated successfully.
Oct  7 09:31:00 np0005473739 podman[75145]: 2025-10-07 13:31:00.379686181 +0000 UTC m=+0.754083184 container remove f220566517b1c1762bbd4dec3887f430e27064b9cda99b53ad36847d185a0a13 (image=quay.io/ceph/ceph:v18, name=xenodochial_brown, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:31:00 np0005473739 systemd[1]: libpod-conmon-f220566517b1c1762bbd4dec3887f430e27064b9cda99b53ad36847d185a0a13.scope: Deactivated successfully.
Oct  7 09:31:00 np0005473739 podman[75197]: 2025-10-07 13:31:00.437616014 +0000 UTC m=+0.038364598 container create a1d1ce84730ae1d14bc0801c12d2ac24014518ac53c17b5fe65694589b24d1a8 (image=quay.io/ceph/ceph:v18, name=gracious_clarke, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 09:31:00 np0005473739 systemd[1]: Started libpod-conmon-a1d1ce84730ae1d14bc0801c12d2ac24014518ac53c17b5fe65694589b24d1a8.scope.
Oct  7 09:31:00 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:00 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c12d76461b30e6ac333f8504e0b6a31b37cbb4beaf80e7cd32de8c1a427a1b/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:00 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c12d76461b30e6ac333f8504e0b6a31b37cbb4beaf80e7cd32de8c1a427a1b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:00 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c12d76461b30e6ac333f8504e0b6a31b37cbb4beaf80e7cd32de8c1a427a1b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:00 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08c12d76461b30e6ac333f8504e0b6a31b37cbb4beaf80e7cd32de8c1a427a1b/merged/var/lib/ceph/user.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:00 np0005473739 podman[75197]: 2025-10-07 13:31:00.498199723 +0000 UTC m=+0.098948367 container init a1d1ce84730ae1d14bc0801c12d2ac24014518ac53c17b5fe65694589b24d1a8 (image=quay.io/ceph/ceph:v18, name=gracious_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:00 np0005473739 podman[75197]: 2025-10-07 13:31:00.503174305 +0000 UTC m=+0.103922899 container start a1d1ce84730ae1d14bc0801c12d2ac24014518ac53c17b5fe65694589b24d1a8 (image=quay.io/ceph/ceph:v18, name=gracious_clarke, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:31:00 np0005473739 podman[75197]: 2025-10-07 13:31:00.507063842 +0000 UTC m=+0.107812436 container attach a1d1ce84730ae1d14bc0801c12d2ac24014518ac53c17b5fe65694589b24d1a8 (image=quay.io/ceph/ceph:v18, name=gracious_clarke, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:00 np0005473739 podman[75197]: 2025-10-07 13:31:00.420377624 +0000 UTC m=+0.021126198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  7 09:31:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1424265949' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  7 09:31:01 np0005473739 systemd[1]: libpod-a1d1ce84730ae1d14bc0801c12d2ac24014518ac53c17b5fe65694589b24d1a8.scope: Deactivated successfully.
Oct  7 09:31:01 np0005473739 podman[75197]: 2025-10-07 13:31:01.01770722 +0000 UTC m=+0.618455784 container died a1d1ce84730ae1d14bc0801c12d2ac24014518ac53c17b5fe65694589b24d1a8 (image=quay.io/ceph/ceph:v18, name=gracious_clarke, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-08c12d76461b30e6ac333f8504e0b6a31b37cbb4beaf80e7cd32de8c1a427a1b-merged.mount: Deactivated successfully.
Oct  7 09:31:01 np0005473739 podman[75197]: 2025-10-07 13:31:01.07090844 +0000 UTC m=+0.671656994 container remove a1d1ce84730ae1d14bc0801c12d2ac24014518ac53c17b5fe65694589b24d1a8 (image=quay.io/ceph/ceph:v18, name=gracious_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  7 09:31:01 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:31:01 np0005473739 systemd[1]: libpod-conmon-a1d1ce84730ae1d14bc0801c12d2ac24014518ac53c17b5fe65694589b24d1a8.scope: Deactivated successfully.
Oct  7 09:31:01 np0005473739 podman[75251]: 2025-10-07 13:31:01.136080599 +0000 UTC m=+0.030444801 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:01 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/1424265949' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  7 09:31:02 np0005473739 podman[75251]: 2025-10-07 13:31:02.411304892 +0000 UTC m=+1.305669064 container create 8370d5a2d2afa027f720b78145f826bdaa977cb4649885a92771f078a627212a (image=quay.io/ceph/ceph:v18, name=hungry_johnson, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:31:02 np0005473739 systemd[1]: Started libpod-conmon-8370d5a2d2afa027f720b78145f826bdaa977cb4649885a92771f078a627212a.scope.
Oct  7 09:31:02 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65e03378e490e677061d19f4b8c393fbd13dd7555f6796a479cff9c1c1897729/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65e03378e490e677061d19f4b8c393fbd13dd7555f6796a479cff9c1c1897729/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/65e03378e490e677061d19f4b8c393fbd13dd7555f6796a479cff9c1c1897729/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:02 np0005473739 podman[75251]: 2025-10-07 13:31:02.494285299 +0000 UTC m=+1.388649481 container init 8370d5a2d2afa027f720b78145f826bdaa977cb4649885a92771f078a627212a (image=quay.io/ceph/ceph:v18, name=hungry_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:02 np0005473739 podman[75251]: 2025-10-07 13:31:02.499831249 +0000 UTC m=+1.394195441 container start 8370d5a2d2afa027f720b78145f826bdaa977cb4649885a92771f078a627212a (image=quay.io/ceph/ceph:v18, name=hungry_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 09:31:02 np0005473739 podman[75251]: 2025-10-07 13:31:02.503773647 +0000 UTC m=+1.398137829 container attach 8370d5a2d2afa027f720b78145f826bdaa977cb4649885a92771f078a627212a (image=quay.io/ceph/ceph:v18, name=hungry_johnson, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module enable", "module": "cephadm"} v 0) v1
Oct  7 09:31:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3952810173' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:31:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3952810173' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: mgr handle_mgr_map respawning because set of enabled modules changed!
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: mgr respawn  e: '/usr/bin/ceph-mgr'
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: mgr respawn  0: '/usr/bin/ceph-mgr'
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: mgr respawn  1: '-n'
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: mgr respawn  2: 'mgr.compute-0.kdyrcd'
Oct  7 09:31:03 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : mgrmap e5: compute-0.kdyrcd(active, since 6s)
Oct  7 09:31:03 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3952810173' entity='client.admin' cmd=[{"prefix": "mgr module enable", "module": "cephadm"}]: dispatch
Oct  7 09:31:03 np0005473739 systemd[1]: libpod-8370d5a2d2afa027f720b78145f826bdaa977cb4649885a92771f078a627212a.scope: Deactivated successfully.
Oct  7 09:31:03 np0005473739 podman[75251]: 2025-10-07 13:31:03.137068123 +0000 UTC m=+2.031432345 container died 8370d5a2d2afa027f720b78145f826bdaa977cb4649885a92771f078a627212a (image=quay.io/ceph/ceph:v18, name=hungry_johnson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 09:31:03 np0005473739 systemd[1]: var-lib-containers-storage-overlay-65e03378e490e677061d19f4b8c393fbd13dd7555f6796a479cff9c1c1897729-merged.mount: Deactivated successfully.
Oct  7 09:31:03 np0005473739 podman[75251]: 2025-10-07 13:31:03.189196958 +0000 UTC m=+2.083561160 container remove 8370d5a2d2afa027f720b78145f826bdaa977cb4649885a92771f078a627212a (image=quay.io/ceph/ceph:v18, name=hungry_johnson, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:03 np0005473739 systemd[1]: libpod-conmon-8370d5a2d2afa027f720b78145f826bdaa977cb4649885a92771f078a627212a.scope: Deactivated successfully.
Oct  7 09:31:03 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: ignoring --setuser ceph since I am not root
Oct  7 09:31:03 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: ignoring --setgroup ceph since I am not root
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: pidfile_write: ignore empty --pid-file
Oct  7 09:31:03 np0005473739 podman[75308]: 2025-10-07 13:31:03.254956975 +0000 UTC m=+0.047824685 container create e920e6116511c160120dec28b72d2e13d8dcf98a3330b402f5be9c5821eb93ce (image=quay.io/ceph/ceph:v18, name=happy_lehmann, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 09:31:03 np0005473739 systemd[1]: Started libpod-conmon-e920e6116511c160120dec28b72d2e13d8dcf98a3330b402f5be9c5821eb93ce.scope.
Oct  7 09:31:03 np0005473739 podman[75308]: 2025-10-07 13:31:03.228531426 +0000 UTC m=+0.021399126 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'alerts'
Oct  7 09:31:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d07003cc863fc8fedf3bb044b0c5c5a793990bdd2c686d267409addfe3eeca/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d07003cc863fc8fedf3bb044b0c5c5a793990bdd2c686d267409addfe3eeca/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9d07003cc863fc8fedf3bb044b0c5c5a793990bdd2c686d267409addfe3eeca/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:03 np0005473739 podman[75308]: 2025-10-07 13:31:03.345665474 +0000 UTC m=+0.138533174 container init e920e6116511c160120dec28b72d2e13d8dcf98a3330b402f5be9c5821eb93ce (image=quay.io/ceph/ceph:v18, name=happy_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:03 np0005473739 podman[75308]: 2025-10-07 13:31:03.351946399 +0000 UTC m=+0.144814089 container start e920e6116511c160120dec28b72d2e13d8dcf98a3330b402f5be9c5821eb93ce (image=quay.io/ceph/ceph:v18, name=happy_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 09:31:03 np0005473739 podman[75308]: 2025-10-07 13:31:03.355270407 +0000 UTC m=+0.148138147 container attach e920e6116511c160120dec28b72d2e13d8dcf98a3330b402f5be9c5821eb93ce (image=quay.io/ceph/ceph:v18, name=happy_lehmann, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'balancer'
Oct  7 09:31:03 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:03.657+0000 7fbd693c1140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  7 09:31:03 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'cephadm'
Oct  7 09:31:03 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:03.910+0000 7fbd693c1140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  7 09:31:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  7 09:31:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/370465981' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  7 09:31:03 np0005473739 happy_lehmann[75347]: {
Oct  7 09:31:03 np0005473739 happy_lehmann[75347]:    "epoch": 5,
Oct  7 09:31:03 np0005473739 happy_lehmann[75347]:    "available": true,
Oct  7 09:31:03 np0005473739 happy_lehmann[75347]:    "active_name": "compute-0.kdyrcd",
Oct  7 09:31:03 np0005473739 happy_lehmann[75347]:    "num_standby": 0
Oct  7 09:31:03 np0005473739 happy_lehmann[75347]: }
Oct  7 09:31:03 np0005473739 systemd[1]: libpod-e920e6116511c160120dec28b72d2e13d8dcf98a3330b402f5be9c5821eb93ce.scope: Deactivated successfully.
Oct  7 09:31:03 np0005473739 podman[75308]: 2025-10-07 13:31:03.948059606 +0000 UTC m=+0.740927336 container died e920e6116511c160120dec28b72d2e13d8dcf98a3330b402f5be9c5821eb93ce (image=quay.io/ceph/ceph:v18, name=happy_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 09:31:03 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e9d07003cc863fc8fedf3bb044b0c5c5a793990bdd2c686d267409addfe3eeca-merged.mount: Deactivated successfully.
Oct  7 09:31:03 np0005473739 podman[75308]: 2025-10-07 13:31:03.994525217 +0000 UTC m=+0.787392917 container remove e920e6116511c160120dec28b72d2e13d8dcf98a3330b402f5be9c5821eb93ce (image=quay.io/ceph/ceph:v18, name=happy_lehmann, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:31:04 np0005473739 systemd[1]: libpod-conmon-e920e6116511c160120dec28b72d2e13d8dcf98a3330b402f5be9c5821eb93ce.scope: Deactivated successfully.
Oct  7 09:31:04 np0005473739 podman[75384]: 2025-10-07 13:31:04.062203516 +0000 UTC m=+0.046923196 container create 21c4354ba26aa4876a145d2e7e7a24cb444a0a45ad1eb10cf0d333162f887b25 (image=quay.io/ceph/ceph:v18, name=wizardly_aryabhata, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 09:31:04 np0005473739 systemd[1]: Started libpod-conmon-21c4354ba26aa4876a145d2e7e7a24cb444a0a45ad1eb10cf0d333162f887b25.scope.
Oct  7 09:31:04 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3952810173' entity='client.admin' cmd='[{"prefix": "mgr module enable", "module": "cephadm"}]': finished
Oct  7 09:31:04 np0005473739 podman[75384]: 2025-10-07 13:31:04.041608397 +0000 UTC m=+0.026328097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:04 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052274ea34bda636b246ef3315ff4686718873cf02f5c44a865d1e70aa0c2307/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052274ea34bda636b246ef3315ff4686718873cf02f5c44a865d1e70aa0c2307/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/052274ea34bda636b246ef3315ff4686718873cf02f5c44a865d1e70aa0c2307/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:04 np0005473739 podman[75384]: 2025-10-07 13:31:04.154023241 +0000 UTC m=+0.138742951 container init 21c4354ba26aa4876a145d2e7e7a24cb444a0a45ad1eb10cf0d333162f887b25 (image=quay.io/ceph/ceph:v18, name=wizardly_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:31:04 np0005473739 podman[75384]: 2025-10-07 13:31:04.159086906 +0000 UTC m=+0.143806576 container start 21c4354ba26aa4876a145d2e7e7a24cb444a0a45ad1eb10cf0d333162f887b25 (image=quay.io/ceph/ceph:v18, name=wizardly_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:31:04 np0005473739 podman[75384]: 2025-10-07 13:31:04.162411283 +0000 UTC m=+0.147130953 container attach 21c4354ba26aa4876a145d2e7e7a24cb444a0a45ad1eb10cf0d333162f887b25 (image=quay.io/ceph/ceph:v18, name=wizardly_aryabhata, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:05 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'crash'
Oct  7 09:31:06 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:06.238+0000 7fbd693c1140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  7 09:31:06 np0005473739 ceph-mgr[74587]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  7 09:31:06 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'dashboard'
Oct  7 09:31:07 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'devicehealth'
Oct  7 09:31:07 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:07.868+0000 7fbd693c1140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  7 09:31:07 np0005473739 ceph-mgr[74587]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  7 09:31:07 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'diskprediction_local'
Oct  7 09:31:08 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: /lib64/python3.9/site-packages/scipy/__init__.py:73: UserWarning: NumPy was imported from a Python sub-interpreter but NumPy does not properly support sub-interpreters. This will likely work for most users but might cause hard to track down issues or subtle bugs. A common user of the rare sub-interpreter feature is wsgi which also allows single-interpreter mode.
Oct  7 09:31:08 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: Improvements in the case of bugs are welcome, but is not on the NumPy roadmap, and full support may require significant effort to achieve.
Oct  7 09:31:08 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]:  from numpy import show_config as show_numpy_config
Oct  7 09:31:08 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:08.411+0000 7fbd693c1140 -1 mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  7 09:31:08 np0005473739 ceph-mgr[74587]: mgr[py] Module diskprediction_local has missing NOTIFY_TYPES member
Oct  7 09:31:08 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'influx'
Oct  7 09:31:08 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:08.637+0000 7fbd693c1140 -1 mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  7 09:31:08 np0005473739 ceph-mgr[74587]: mgr[py] Module influx has missing NOTIFY_TYPES member
Oct  7 09:31:08 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'insights'
Oct  7 09:31:08 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'iostat'
Oct  7 09:31:09 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:09.103+0000 7fbd693c1140 -1 mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  7 09:31:09 np0005473739 ceph-mgr[74587]: mgr[py] Module iostat has missing NOTIFY_TYPES member
Oct  7 09:31:09 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'k8sevents'
Oct  7 09:31:10 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'localpool'
Oct  7 09:31:11 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'mds_autoscaler'
Oct  7 09:31:11 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'mirroring'
Oct  7 09:31:11 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'nfs'
Oct  7 09:31:12 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:12.584+0000 7fbd693c1140 -1 mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  7 09:31:12 np0005473739 ceph-mgr[74587]: mgr[py] Module nfs has missing NOTIFY_TYPES member
Oct  7 09:31:12 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'orchestrator'
Oct  7 09:31:13 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:13.246+0000 7fbd693c1140 -1 mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  7 09:31:13 np0005473739 ceph-mgr[74587]: mgr[py] Module orchestrator has missing NOTIFY_TYPES member
Oct  7 09:31:13 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'osd_perf_query'
Oct  7 09:31:13 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:13.500+0000 7fbd693c1140 -1 mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  7 09:31:13 np0005473739 ceph-mgr[74587]: mgr[py] Module osd_perf_query has missing NOTIFY_TYPES member
Oct  7 09:31:13 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'osd_support'
Oct  7 09:31:13 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:13.727+0000 7fbd693c1140 -1 mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  7 09:31:13 np0005473739 ceph-mgr[74587]: mgr[py] Module osd_support has missing NOTIFY_TYPES member
Oct  7 09:31:13 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'pg_autoscaler'
Oct  7 09:31:14 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:14.004+0000 7fbd693c1140 -1 mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  7 09:31:14 np0005473739 ceph-mgr[74587]: mgr[py] Module pg_autoscaler has missing NOTIFY_TYPES member
Oct  7 09:31:14 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'progress'
Oct  7 09:31:14 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:14.244+0000 7fbd693c1140 -1 mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  7 09:31:14 np0005473739 ceph-mgr[74587]: mgr[py] Module progress has missing NOTIFY_TYPES member
Oct  7 09:31:14 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'prometheus'
Oct  7 09:31:15 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:15.285+0000 7fbd693c1140 -1 mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  7 09:31:15 np0005473739 ceph-mgr[74587]: mgr[py] Module prometheus has missing NOTIFY_TYPES member
Oct  7 09:31:15 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'rbd_support'
Oct  7 09:31:15 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:15.586+0000 7fbd693c1140 -1 mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  7 09:31:15 np0005473739 ceph-mgr[74587]: mgr[py] Module rbd_support has missing NOTIFY_TYPES member
Oct  7 09:31:15 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'restful'
Oct  7 09:31:16 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'rgw'
Oct  7 09:31:17 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:17.000+0000 7fbd693c1140 -1 mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  7 09:31:17 np0005473739 ceph-mgr[74587]: mgr[py] Module rgw has missing NOTIFY_TYPES member
Oct  7 09:31:17 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'rook'
Oct  7 09:31:19 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:19.083+0000 7fbd693c1140 -1 mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  7 09:31:19 np0005473739 ceph-mgr[74587]: mgr[py] Module rook has missing NOTIFY_TYPES member
Oct  7 09:31:19 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'selftest'
Oct  7 09:31:19 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:19.330+0000 7fbd693c1140 -1 mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  7 09:31:19 np0005473739 ceph-mgr[74587]: mgr[py] Module selftest has missing NOTIFY_TYPES member
Oct  7 09:31:19 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'snap_schedule'
Oct  7 09:31:19 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:19.584+0000 7fbd693c1140 -1 mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  7 09:31:19 np0005473739 ceph-mgr[74587]: mgr[py] Module snap_schedule has missing NOTIFY_TYPES member
Oct  7 09:31:19 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'stats'
Oct  7 09:31:19 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'status'
Oct  7 09:31:20 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:20.072+0000 7fbd693c1140 -1 mgr[py] Module status has missing NOTIFY_TYPES member
Oct  7 09:31:20 np0005473739 ceph-mgr[74587]: mgr[py] Module status has missing NOTIFY_TYPES member
Oct  7 09:31:20 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'telegraf'
Oct  7 09:31:20 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:20.298+0000 7fbd693c1140 -1 mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  7 09:31:20 np0005473739 ceph-mgr[74587]: mgr[py] Module telegraf has missing NOTIFY_TYPES member
Oct  7 09:31:20 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'telemetry'
Oct  7 09:31:20 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:20.894+0000 7fbd693c1140 -1 mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  7 09:31:20 np0005473739 ceph-mgr[74587]: mgr[py] Module telemetry has missing NOTIFY_TYPES member
Oct  7 09:31:20 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'test_orchestrator'
Oct  7 09:31:21 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:21.545+0000 7fbd693c1140 -1 mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  7 09:31:21 np0005473739 ceph-mgr[74587]: mgr[py] Module test_orchestrator has missing NOTIFY_TYPES member
Oct  7 09:31:21 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'volumes'
Oct  7 09:31:22 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:22.230+0000 7fbd693c1140 -1 mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr[py] Module volumes has missing NOTIFY_TYPES member
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr[py] Loading python module 'zabbix'
Oct  7 09:31:22 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T13:31:22.475+0000 7fbd693c1140 -1 mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr[py] Module zabbix has missing NOTIFY_TYPES member
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Active manager daemon compute-0.kdyrcd restarted
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e1 do_prune osdmap full prune enabled
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e1 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Activating manager daemon compute-0.kdyrcd
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: ms_deliver_dispatch: unhandled message 0x55867e26e420 mon_map magic: 0 v1 from mon.0 v2:192.168.122.100:3300/0
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e1 _set_cache_ratios kv ratio 0.25 inc ratio 0.375 full ratio 0.375
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e1 register_cache_with_pcm pcm target: 2147483648 pcm max: 1020054732 pcm min: 134217728 inc_osd_cache size: 1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e2 e2: 0 total, 0 up, 0 in
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr handle_mgr_map Activating!
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr handle_mgr_map I am now activating
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e2: 0 total, 0 up, 0 in
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : mgrmap e6: compute-0.kdyrcd(active, starting, since 0.015067s)
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "who": "compute-0.kdyrcd", "id": "compute-0.kdyrcd"} v 0) v1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mgr metadata", "who": "compute-0.kdyrcd", "id": "compute-0.kdyrcd"}]: dispatch
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata"} v 0) v1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mds metadata"}]: dispatch
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e1 all = 1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata"} v 0) v1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata"}]: dispatch
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata"} v 0) v1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mon metadata"}]: dispatch
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Manager daemon compute-0.kdyrcd is now available
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [balancer DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: balancer
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Starting
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [cephadm DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:31:22
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] No pools available
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: Active manager daemon compute-0.kdyrcd restarted
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: Activating manager daemon compute-0.kdyrcd
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: Manager daemon compute-0.kdyrcd is now available
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.migrations] Found migration_current of "None". Setting to last migration.
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Found migration_current of "None". Setting to last migration.
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/migration_current}] v 0) v1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/config_checks}] v 0) v1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: cephadm
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [crash DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: crash
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [devicehealth DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: devicehealth
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Starting
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [iostat DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: iostat
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [nfs DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: nfs
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [orchestrator DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: orchestrator
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: pg_autoscaler
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [progress DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: progress
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [rbd_support DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [progress INFO root] Loading...
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [progress INFO root] No stored events to load
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [progress INFO root] Loaded [] historic events
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [progress INFO root] Loaded OSDMap, ready.
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] recovery thread starting
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] starting setup
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: rbd_support
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [restful DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: restful
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/mirror_snapshot_schedule"} v 0) v1
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [restful INFO root] server_addr: :: server_port: 8003
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/mirror_snapshot_schedule"}]: dispatch
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [restful WARNING root] server not running: no certificate configured
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [status DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: status
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: starting
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [telemetry DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] PerfHandler: starting
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TaskHandler: starting
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: telemetry
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/trash_purge_schedule"} v 0) v1
Oct  7 09:31:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/trash_purge_schedule"}]: dispatch
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [volumes DEBUG root] setting log level based on debug_mgr: INFO (2/5)
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: starting
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] setup complete
Oct  7 09:31:22 np0005473739 ceph-mgr[74587]: mgr load Constructed class from module: volumes
Oct  7 09:31:23 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "get_command_descriptions"}]: dispatch
Oct  7 09:31:23 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : mgrmap e7: compute-0.kdyrcd(active, since 1.03129s)
Oct  7 09:31:23 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14136 -' entity='client.admin' cmd=[{"prefix": "mgr_status"}]: dispatch
Oct  7 09:31:23 np0005473739 wizardly_aryabhata[75402]: {
Oct  7 09:31:23 np0005473739 wizardly_aryabhata[75402]:    "mgrmap_epoch": 7,
Oct  7 09:31:23 np0005473739 wizardly_aryabhata[75402]:    "initialized": true
Oct  7 09:31:23 np0005473739 wizardly_aryabhata[75402]: }
Oct  7 09:31:23 np0005473739 systemd[1]: libpod-21c4354ba26aa4876a145d2e7e7a24cb444a0a45ad1eb10cf0d333162f887b25.scope: Deactivated successfully.
Oct  7 09:31:23 np0005473739 podman[75384]: 2025-10-07 13:31:23.538715187 +0000 UTC m=+19.523434877 container died 21c4354ba26aa4876a145d2e7e7a24cb444a0a45ad1eb10cf0d333162f887b25 (image=quay.io/ceph/ceph:v18, name=wizardly_aryabhata, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:31:23 np0005473739 ceph-mon[74295]: Found migration_current of "None". Setting to last migration.
Oct  7 09:31:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/mirror_snapshot_schedule"}]: dispatch
Oct  7 09:31:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix":"config rm","who":"mgr","name":"mgr/rbd_support/compute-0.kdyrcd/trash_purge_schedule"}]: dispatch
Oct  7 09:31:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-052274ea34bda636b246ef3315ff4686718873cf02f5c44a865d1e70aa0c2307-merged.mount: Deactivated successfully.
Oct  7 09:31:23 np0005473739 podman[75384]: 2025-10-07 13:31:23.578862074 +0000 UTC m=+19.563581744 container remove 21c4354ba26aa4876a145d2e7e7a24cb444a0a45ad1eb10cf0d333162f887b25 (image=quay.io/ceph/ceph:v18, name=wizardly_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 09:31:23 np0005473739 systemd[1]: libpod-conmon-21c4354ba26aa4876a145d2e7e7a24cb444a0a45ad1eb10cf0d333162f887b25.scope: Deactivated successfully.
Oct  7 09:31:23 np0005473739 podman[75560]: 2025-10-07 13:31:23.632281222 +0000 UTC m=+0.034664693 container create 64976b2143866284d68c728a26f3daf9daca066eabd4247f073afeb11091a47b (image=quay.io/ceph/ceph:v18, name=dreamy_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:31:23 np0005473739 systemd[1]: Started libpod-conmon-64976b2143866284d68c728a26f3daf9daca066eabd4247f073afeb11091a47b.scope.
Oct  7 09:31:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a42cbfa1e934eacd6a278188c3a3ed0a428ff637040119d64deaa1e0066a2dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a42cbfa1e934eacd6a278188c3a3ed0a428ff637040119d64deaa1e0066a2dd/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8a42cbfa1e934eacd6a278188c3a3ed0a428ff637040119d64deaa1e0066a2dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:23 np0005473739 podman[75560]: 2025-10-07 13:31:23.709454217 +0000 UTC m=+0.111837738 container init 64976b2143866284d68c728a26f3daf9daca066eabd4247f073afeb11091a47b (image=quay.io/ceph/ceph:v18, name=dreamy_hypatia, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:23 np0005473739 podman[75560]: 2025-10-07 13:31:23.617400244 +0000 UTC m=+0.019783715 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:23 np0005473739 podman[75560]: 2025-10-07 13:31:23.719797408 +0000 UTC m=+0.122180919 container start 64976b2143866284d68c728a26f3daf9daca066eabd4247f073afeb11091a47b (image=quay.io/ceph/ceph:v18, name=dreamy_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:31:23 np0005473739 podman[75560]: 2025-10-07 13:31:23.724061747 +0000 UTC m=+0.126445268 container attach 64976b2143866284d68c728a26f3daf9daca066eabd4247f073afeb11091a47b (image=quay.io/ceph/ceph:v18, name=dreamy_hypatia, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/cert}] v 0) v1
Oct  7 09:31:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/cephadm_agent/root/key}] v 0) v1
Oct  7 09:31:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14144 -' entity='client.admin' cmd=[{"prefix": "orch set backend", "module_name": "cephadm", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/orchestrator/orchestrator}] v 0) v1
Oct  7 09:31:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  7 09:31:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  7 09:31:24 np0005473739 systemd[1]: libpod-64976b2143866284d68c728a26f3daf9daca066eabd4247f073afeb11091a47b.scope: Deactivated successfully.
Oct  7 09:31:24 np0005473739 podman[75560]: 2025-10-07 13:31:24.299897792 +0000 UTC m=+0.702281263 container died 64976b2143866284d68c728a26f3daf9daca066eabd4247f073afeb11091a47b (image=quay.io/ceph/ceph:v18, name=dreamy_hypatia, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 09:31:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8a42cbfa1e934eacd6a278188c3a3ed0a428ff637040119d64deaa1e0066a2dd-merged.mount: Deactivated successfully.
Oct  7 09:31:24 np0005473739 podman[75560]: 2025-10-07 13:31:24.335757577 +0000 UTC m=+0.738141048 container remove 64976b2143866284d68c728a26f3daf9daca066eabd4247f073afeb11091a47b (image=quay.io/ceph/ceph:v18, name=dreamy_hypatia, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 09:31:24 np0005473739 systemd[1]: libpod-conmon-64976b2143866284d68c728a26f3daf9daca066eabd4247f073afeb11091a47b.scope: Deactivated successfully.
Oct  7 09:31:24 np0005473739 podman[75615]: 2025-10-07 13:31:24.413236741 +0000 UTC m=+0.057976528 container create 45ac99ba71606454cfbd15011d4bea241c2c0c8e54ebfc8073cdc651b322b6e9 (image=quay.io/ceph/ceph:v18, name=pensive_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 09:31:24 np0005473739 systemd[1]: Started libpod-conmon-45ac99ba71606454cfbd15011d4bea241c2c0c8e54ebfc8073cdc651b322b6e9.scope.
Oct  7 09:31:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e3b3a19024d5663b3fbb96f602befd0846dab50866f725202e4ba95d54f698/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e3b3a19024d5663b3fbb96f602befd0846dab50866f725202e4ba95d54f698/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67e3b3a19024d5663b3fbb96f602befd0846dab50866f725202e4ba95d54f698/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:24 np0005473739 podman[75615]: 2025-10-07 13:31:24.490525489 +0000 UTC m=+0.135265256 container init 45ac99ba71606454cfbd15011d4bea241c2c0c8e54ebfc8073cdc651b322b6e9 (image=quay.io/ceph/ceph:v18, name=pensive_chaplygin, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 09:31:24 np0005473739 podman[75615]: 2025-10-07 13:31:24.398019164 +0000 UTC m=+0.042758951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:24 np0005473739 podman[75615]: 2025-10-07 13:31:24.495717155 +0000 UTC m=+0.140456932 container start 45ac99ba71606454cfbd15011d4bea241c2c0c8e54ebfc8073cdc651b322b6e9 (image=quay.io/ceph/ceph:v18, name=pensive_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:24 np0005473739 podman[75615]: 2025-10-07 13:31:24.498885044 +0000 UTC m=+0.143624811 container attach 45ac99ba71606454cfbd15011d4bea241c2c0c8e54ebfc8073cdc651b322b6e9 (image=quay.io/ceph/ceph:v18, name=pensive_chaplygin, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: [cephadm INFO cherrypy.error] [07/Oct/2025:13:31:24] ENGINE Bus STARTING
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : [07/Oct/2025:13:31:24] ENGINE Bus STARTING
Oct  7 09:31:24 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:24 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:24 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: [cephadm INFO cherrypy.error] [07/Oct/2025:13:31:24] ENGINE Serving on https://192.168.122.100:7150
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : [07/Oct/2025:13:31:24] ENGINE Serving on https://192.168.122.100:7150
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: [cephadm INFO cherrypy.error] [07/Oct/2025:13:31:24] ENGINE Client ('192.168.122.100', 41804) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : [07/Oct/2025:13:31:24] ENGINE Client ('192.168.122.100', 41804) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: [cephadm INFO cherrypy.error] [07/Oct/2025:13:31:24] ENGINE Serving on http://192.168.122.100:8765
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : [07/Oct/2025:13:31:24] ENGINE Serving on http://192.168.122.100:8765
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: [cephadm INFO cherrypy.error] [07/Oct/2025:13:31:24] ENGINE Bus STARTED
Oct  7 09:31:24 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : [07/Oct/2025:13:31:24] ENGINE Bus STARTED
Oct  7 09:31:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  7 09:31:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14146 -' entity='client.admin' cmd=[{"prefix": "cephadm set-user", "user": "ceph-admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_user}] v 0) v1
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Set ssh ssh_user
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Set ssh ssh_user
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_config}] v 0) v1
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Set ssh ssh_config
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Set ssh ssh_config
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: [cephadm INFO root] ssh user set to ceph-admin. sudo will be used
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : ssh user set to ceph-admin. sudo will be used
Oct  7 09:31:25 np0005473739 pensive_chaplygin[75632]: ssh user set to ceph-admin. sudo will be used
Oct  7 09:31:25 np0005473739 systemd[1]: libpod-45ac99ba71606454cfbd15011d4bea241c2c0c8e54ebfc8073cdc651b322b6e9.scope: Deactivated successfully.
Oct  7 09:31:25 np0005473739 podman[75615]: 2025-10-07 13:31:25.042674419 +0000 UTC m=+0.687414186 container died 45ac99ba71606454cfbd15011d4bea241c2c0c8e54ebfc8073cdc651b322b6e9 (image=quay.io/ceph/ceph:v18, name=pensive_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct  7 09:31:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-67e3b3a19024d5663b3fbb96f602befd0846dab50866f725202e4ba95d54f698-merged.mount: Deactivated successfully.
Oct  7 09:31:25 np0005473739 podman[75615]: 2025-10-07 13:31:25.085908112 +0000 UTC m=+0.730647899 container remove 45ac99ba71606454cfbd15011d4bea241c2c0c8e54ebfc8073cdc651b322b6e9 (image=quay.io/ceph/ceph:v18, name=pensive_chaplygin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:31:25 np0005473739 systemd[1]: libpod-conmon-45ac99ba71606454cfbd15011d4bea241c2c0c8e54ebfc8073cdc651b322b6e9.scope: Deactivated successfully.
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1019920547 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:31:25 np0005473739 podman[75696]: 2025-10-07 13:31:25.154689681 +0000 UTC m=+0.046954138 container create 093061599c6ca530a05cfd88d6f56010f7fa5d67060dfc4318eca6d85cffe898 (image=quay.io/ceph/ceph:v18, name=practical_williams, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:25 np0005473739 systemd[1]: Started libpod-conmon-093061599c6ca530a05cfd88d6f56010f7fa5d67060dfc4318eca6d85cffe898.scope.
Oct  7 09:31:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0fa4e0077e693c82b1733d5e42e93a695ddd69b5330f39052314a8ef88def9/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0fa4e0077e693c82b1733d5e42e93a695ddd69b5330f39052314a8ef88def9/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0fa4e0077e693c82b1733d5e42e93a695ddd69b5330f39052314a8ef88def9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0fa4e0077e693c82b1733d5e42e93a695ddd69b5330f39052314a8ef88def9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c0fa4e0077e693c82b1733d5e42e93a695ddd69b5330f39052314a8ef88def9/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:25 np0005473739 podman[75696]: 2025-10-07 13:31:25.133287521 +0000 UTC m=+0.025551998 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:25 np0005473739 podman[75696]: 2025-10-07 13:31:25.230990022 +0000 UTC m=+0.123254509 container init 093061599c6ca530a05cfd88d6f56010f7fa5d67060dfc4318eca6d85cffe898 (image=quay.io/ceph/ceph:v18, name=practical_williams, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:25 np0005473739 podman[75696]: 2025-10-07 13:31:25.237109193 +0000 UTC m=+0.129373630 container start 093061599c6ca530a05cfd88d6f56010f7fa5d67060dfc4318eca6d85cffe898 (image=quay.io/ceph/ceph:v18, name=practical_williams, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:25 np0005473739 podman[75696]: 2025-10-07 13:31:25.240063676 +0000 UTC m=+0.132328173 container attach 093061599c6ca530a05cfd88d6f56010f7fa5d67060dfc4318eca6d85cffe898 (image=quay.io/ceph/ceph:v18, name=practical_williams, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : mgrmap e8: compute-0.kdyrcd(active, since 2s)
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14148 -' entity='client.admin' cmd=[{"prefix": "cephadm set-priv-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_key}] v 0) v1
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Set ssh ssh_identity_key
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_key
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Set ssh private key
Oct  7 09:31:25 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Set ssh private key
Oct  7 09:31:25 np0005473739 systemd[1]: libpod-093061599c6ca530a05cfd88d6f56010f7fa5d67060dfc4318eca6d85cffe898.scope: Deactivated successfully.
Oct  7 09:31:25 np0005473739 podman[75696]: 2025-10-07 13:31:25.76595833 +0000 UTC m=+0.658222767 container died 093061599c6ca530a05cfd88d6f56010f7fa5d67060dfc4318eca6d85cffe898 (image=quay.io/ceph/ceph:v18, name=practical_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: [07/Oct/2025:13:31:24] ENGINE Bus STARTING
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: [07/Oct/2025:13:31:24] ENGINE Serving on https://192.168.122.100:7150
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: [07/Oct/2025:13:31:24] ENGINE Client ('192.168.122.100', 41804) lost — peer dropped the TLS connection suddenly, during handshake: (6, 'TLS/SSL connection has been closed (EOF) (_ssl.c:1147)')
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: [07/Oct/2025:13:31:24] ENGINE Serving on http://192.168.122.100:8765
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: [07/Oct/2025:13:31:24] ENGINE Bus STARTED
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: Set ssh ssh_user
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: Set ssh ssh_config
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: ssh user set to ceph-admin. sudo will be used
Oct  7 09:31:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9c0fa4e0077e693c82b1733d5e42e93a695ddd69b5330f39052314a8ef88def9-merged.mount: Deactivated successfully.
Oct  7 09:31:25 np0005473739 podman[75696]: 2025-10-07 13:31:25.805774046 +0000 UTC m=+0.698038483 container remove 093061599c6ca530a05cfd88d6f56010f7fa5d67060dfc4318eca6d85cffe898 (image=quay.io/ceph/ceph:v18, name=practical_williams, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 09:31:25 np0005473739 systemd[1]: libpod-conmon-093061599c6ca530a05cfd88d6f56010f7fa5d67060dfc4318eca6d85cffe898.scope: Deactivated successfully.
Oct  7 09:31:25 np0005473739 podman[75750]: 2025-10-07 13:31:25.863470146 +0000 UTC m=+0.036618079 container create 8fdc1ed3ae9f2a34e57852727cc7cb453d854e8d6b6e666c18facc194649b9c9 (image=quay.io/ceph/ceph:v18, name=reverent_kare, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:25 np0005473739 systemd[1]: Started libpod-conmon-8fdc1ed3ae9f2a34e57852727cc7cb453d854e8d6b6e666c18facc194649b9c9.scope.
Oct  7 09:31:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15bc4344bb7ab8920592d420a7d9d214aa171eb9a9558089c6c2350a9cf5249/merged/tmp/cephadm-ssh-key supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15bc4344bb7ab8920592d420a7d9d214aa171eb9a9558089c6c2350a9cf5249/merged/tmp/cephadm-ssh-key.pub supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15bc4344bb7ab8920592d420a7d9d214aa171eb9a9558089c6c2350a9cf5249/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15bc4344bb7ab8920592d420a7d9d214aa171eb9a9558089c6c2350a9cf5249/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e15bc4344bb7ab8920592d420a7d9d214aa171eb9a9558089c6c2350a9cf5249/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:25 np0005473739 podman[75750]: 2025-10-07 13:31:25.918237422 +0000 UTC m=+0.091385375 container init 8fdc1ed3ae9f2a34e57852727cc7cb453d854e8d6b6e666c18facc194649b9c9 (image=quay.io/ceph/ceph:v18, name=reverent_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:25 np0005473739 podman[75750]: 2025-10-07 13:31:25.928601532 +0000 UTC m=+0.101749465 container start 8fdc1ed3ae9f2a34e57852727cc7cb453d854e8d6b6e666c18facc194649b9c9 (image=quay.io/ceph/ceph:v18, name=reverent_kare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:31:25 np0005473739 podman[75750]: 2025-10-07 13:31:25.93173704 +0000 UTC m=+0.104884993 container attach 8fdc1ed3ae9f2a34e57852727cc7cb453d854e8d6b6e666c18facc194649b9c9 (image=quay.io/ceph/ceph:v18, name=reverent_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:25 np0005473739 podman[75750]: 2025-10-07 13:31:25.847785665 +0000 UTC m=+0.020933618 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:26 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14150 -' entity='client.admin' cmd=[{"prefix": "cephadm set-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/ssh_identity_pub}] v 0) v1
Oct  7 09:31:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:26 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Set ssh ssh_identity_pub
Oct  7 09:31:26 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Set ssh ssh_identity_pub
Oct  7 09:31:26 np0005473739 systemd[1]: libpod-8fdc1ed3ae9f2a34e57852727cc7cb453d854e8d6b6e666c18facc194649b9c9.scope: Deactivated successfully.
Oct  7 09:31:26 np0005473739 podman[75750]: 2025-10-07 13:31:26.433673102 +0000 UTC m=+0.606821045 container died 8fdc1ed3ae9f2a34e57852727cc7cb453d854e8d6b6e666c18facc194649b9c9 (image=quay.io/ceph/ceph:v18, name=reverent_kare, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:31:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e15bc4344bb7ab8920592d420a7d9d214aa171eb9a9558089c6c2350a9cf5249-merged.mount: Deactivated successfully.
Oct  7 09:31:26 np0005473739 podman[75750]: 2025-10-07 13:31:26.471234326 +0000 UTC m=+0.644382259 container remove 8fdc1ed3ae9f2a34e57852727cc7cb453d854e8d6b6e666c18facc194649b9c9 (image=quay.io/ceph/ceph:v18, name=reverent_kare, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:31:26 np0005473739 systemd[1]: libpod-conmon-8fdc1ed3ae9f2a34e57852727cc7cb453d854e8d6b6e666c18facc194649b9c9.scope: Deactivated successfully.
Oct  7 09:31:26 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:31:26 np0005473739 podman[75805]: 2025-10-07 13:31:26.528412299 +0000 UTC m=+0.039987632 container create 5889042e43fa5157629a5c35642b946502b7cbfa0733ad46464842758f30349f (image=quay.io/ceph/ceph:v18, name=nostalgic_kalam, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:26 np0005473739 systemd[1]: Started libpod-conmon-5889042e43fa5157629a5c35642b946502b7cbfa0733ad46464842758f30349f.scope.
Oct  7 09:31:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca37ac0257359afc4de163f614e0890f13d1e229b19d5bd49387f3ec29c71970/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca37ac0257359afc4de163f614e0890f13d1e229b19d5bd49387f3ec29c71970/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca37ac0257359afc4de163f614e0890f13d1e229b19d5bd49387f3ec29c71970/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:26 np0005473739 podman[75805]: 2025-10-07 13:31:26.511710261 +0000 UTC m=+0.023285614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:26 np0005473739 podman[75805]: 2025-10-07 13:31:26.616755948 +0000 UTC m=+0.128331311 container init 5889042e43fa5157629a5c35642b946502b7cbfa0733ad46464842758f30349f (image=quay.io/ceph/ceph:v18, name=nostalgic_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 09:31:26 np0005473739 podman[75805]: 2025-10-07 13:31:26.625220965 +0000 UTC m=+0.136796328 container start 5889042e43fa5157629a5c35642b946502b7cbfa0733ad46464842758f30349f (image=quay.io/ceph/ceph:v18, name=nostalgic_kalam, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:26 np0005473739 podman[75805]: 2025-10-07 13:31:26.641888483 +0000 UTC m=+0.153463816 container attach 5889042e43fa5157629a5c35642b946502b7cbfa0733ad46464842758f30349f (image=quay.io/ceph/ceph:v18, name=nostalgic_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:26 np0005473739 ceph-mon[74295]: Set ssh ssh_identity_key
Oct  7 09:31:26 np0005473739 ceph-mon[74295]: Set ssh private key
Oct  7 09:31:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:27 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14152 -' entity='client.admin' cmd=[{"prefix": "cephadm get-pub-key", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:27 np0005473739 nostalgic_kalam[75822]: ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDmieCO/0brDl4QTOFmoO2unaJuUti8lfmbi5dSbUhgeUpsCRcWC2FoW8Jle3jK9aaiAqr6O9NLYl9GAst6RyaRskHIUCDWaX6hv0RJkVBJyKazYe1ygzLB8oLRUwNUSluaA5H3S3a4YveqbkthhrsFVnYtFDulf9hWC0DBO0KfQ5BAmkHlu4kUn5zpfdLsBsrCLLtntfZfVRl1vCeZ+lcrLu2EnT8Cavu5dnI9zEhaZaBHn0UszO7pMA32BsSepFQQFETBuWUpwY8slGJOJniMjt0J2fmbJJtGV1X0pA/f3YTfBMAn/oE6RbxGPsQGwE4YvdJt0IKIkYx20Egas0rS5bLflofldW15QORscBzacVfMRCnvN1/8Puud/eyDENOSUlkrHkBGwwSFQZltzmxuMY35SJvQSLIMvqN4DgZX7ShABogkTOQ4oRyio3vbXTtQigpeP9RhstdTNl1g7BWigJJanInEthOohUtU/5wxtKdAEbJTXtwDQuQs0FRDrKM= zuul@controller
Oct  7 09:31:27 np0005473739 systemd[1]: libpod-5889042e43fa5157629a5c35642b946502b7cbfa0733ad46464842758f30349f.scope: Deactivated successfully.
Oct  7 09:31:27 np0005473739 podman[75805]: 2025-10-07 13:31:27.144395919 +0000 UTC m=+0.655971312 container died 5889042e43fa5157629a5c35642b946502b7cbfa0733ad46464842758f30349f (image=quay.io/ceph/ceph:v18, name=nostalgic_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ca37ac0257359afc4de163f614e0890f13d1e229b19d5bd49387f3ec29c71970-merged.mount: Deactivated successfully.
Oct  7 09:31:27 np0005473739 podman[75805]: 2025-10-07 13:31:27.189893926 +0000 UTC m=+0.701469269 container remove 5889042e43fa5157629a5c35642b946502b7cbfa0733ad46464842758f30349f (image=quay.io/ceph/ceph:v18, name=nostalgic_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 09:31:27 np0005473739 systemd[1]: libpod-conmon-5889042e43fa5157629a5c35642b946502b7cbfa0733ad46464842758f30349f.scope: Deactivated successfully.
Oct  7 09:31:27 np0005473739 podman[75862]: 2025-10-07 13:31:27.281253909 +0000 UTC m=+0.056086584 container create e33d854e245c7e7457c407d1a87f3fd87a4f5e32e05411a89a3d7b195f2027c8 (image=quay.io/ceph/ceph:v18, name=quizzical_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 09:31:27 np0005473739 systemd[1]: Started libpod-conmon-e33d854e245c7e7457c407d1a87f3fd87a4f5e32e05411a89a3d7b195f2027c8.scope.
Oct  7 09:31:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5be05afebaa3fd0922084fb3e2240fb9d22f6cc4ed31e773a582c587aba8075/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5be05afebaa3fd0922084fb3e2240fb9d22f6cc4ed31e773a582c587aba8075/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5be05afebaa3fd0922084fb3e2240fb9d22f6cc4ed31e773a582c587aba8075/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:27 np0005473739 podman[75862]: 2025-10-07 13:31:27.355997567 +0000 UTC m=+0.130830252 container init e33d854e245c7e7457c407d1a87f3fd87a4f5e32e05411a89a3d7b195f2027c8 (image=quay.io/ceph/ceph:v18, name=quizzical_wescoff, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 09:31:27 np0005473739 podman[75862]: 2025-10-07 13:31:27.261862415 +0000 UTC m=+0.036695130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:27 np0005473739 podman[75862]: 2025-10-07 13:31:27.364109373 +0000 UTC m=+0.138942048 container start e33d854e245c7e7457c407d1a87f3fd87a4f5e32e05411a89a3d7b195f2027c8 (image=quay.io/ceph/ceph:v18, name=quizzical_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 09:31:27 np0005473739 podman[75862]: 2025-10-07 13:31:27.367258372 +0000 UTC m=+0.142091047 container attach e33d854e245c7e7457c407d1a87f3fd87a4f5e32e05411a89a3d7b195f2027c8 (image=quay.io/ceph/ceph:v18, name=quizzical_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 09:31:27 np0005473739 ceph-mon[74295]: Set ssh ssh_identity_pub
Oct  7 09:31:27 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14154 -' entity='client.admin' cmd=[{"prefix": "orch host add", "hostname": "compute-0", "addr": "192.168.122.100", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:28 np0005473739 systemd-logind[801]: New session 22 of user ceph-admin.
Oct  7 09:31:28 np0005473739 systemd[1]: Created slice User Slice of UID 42477.
Oct  7 09:31:28 np0005473739 systemd[1]: Starting User Runtime Directory /run/user/42477...
Oct  7 09:31:28 np0005473739 systemd[1]: Finished User Runtime Directory /run/user/42477.
Oct  7 09:31:28 np0005473739 systemd[1]: Starting User Manager for UID 42477...
Oct  7 09:31:28 np0005473739 systemd[75908]: Queued start job for default target Main User Target.
Oct  7 09:31:28 np0005473739 systemd[75908]: Created slice User Application Slice.
Oct  7 09:31:28 np0005473739 systemd[75908]: Started Mark boot as successful after the user session has run 2 minutes.
Oct  7 09:31:28 np0005473739 systemd[75908]: Started Daily Cleanup of User's Temporary Directories.
Oct  7 09:31:28 np0005473739 systemd[75908]: Reached target Paths.
Oct  7 09:31:28 np0005473739 systemd[75908]: Reached target Timers.
Oct  7 09:31:28 np0005473739 systemd[75908]: Starting D-Bus User Message Bus Socket...
Oct  7 09:31:28 np0005473739 systemd[75908]: Starting Create User's Volatile Files and Directories...
Oct  7 09:31:28 np0005473739 systemd[75908]: Finished Create User's Volatile Files and Directories.
Oct  7 09:31:28 np0005473739 systemd[75908]: Listening on D-Bus User Message Bus Socket.
Oct  7 09:31:28 np0005473739 systemd[75908]: Reached target Sockets.
Oct  7 09:31:28 np0005473739 systemd[75908]: Reached target Basic System.
Oct  7 09:31:28 np0005473739 systemd[75908]: Reached target Main User Target.
Oct  7 09:31:28 np0005473739 systemd[75908]: Startup finished in 142ms.
Oct  7 09:31:28 np0005473739 systemd[1]: Started User Manager for UID 42477.
Oct  7 09:31:28 np0005473739 systemd[1]: Started Session 22 of User ceph-admin.
Oct  7 09:31:28 np0005473739 systemd-logind[801]: New session 24 of user ceph-admin.
Oct  7 09:31:28 np0005473739 systemd[1]: Started Session 24 of User ceph-admin.
Oct  7 09:31:28 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:31:28 np0005473739 systemd-logind[801]: New session 25 of user ceph-admin.
Oct  7 09:31:28 np0005473739 systemd[1]: Started Session 25 of User ceph-admin.
Oct  7 09:31:29 np0005473739 systemd-logind[801]: New session 26 of user ceph-admin.
Oct  7 09:31:29 np0005473739 systemd[1]: Started Session 26 of User ceph-admin.
Oct  7 09:31:29 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Deploying cephadm binary to compute-0
Oct  7 09:31:29 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Deploying cephadm binary to compute-0
Oct  7 09:31:29 np0005473739 systemd-logind[801]: New session 27 of user ceph-admin.
Oct  7 09:31:29 np0005473739 systemd[1]: Started Session 27 of User ceph-admin.
Oct  7 09:31:29 np0005473739 systemd-logind[801]: New session 28 of user ceph-admin.
Oct  7 09:31:29 np0005473739 systemd[1]: Started Session 28 of User ceph-admin.
Oct  7 09:31:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020052996 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:31:30 np0005473739 ceph-mon[74295]: Deploying cephadm binary to compute-0
Oct  7 09:31:30 np0005473739 systemd-logind[801]: New session 29 of user ceph-admin.
Oct  7 09:31:30 np0005473739 systemd[1]: Started Session 29 of User ceph-admin.
Oct  7 09:31:30 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:31:30 np0005473739 systemd-logind[801]: New session 30 of user ceph-admin.
Oct  7 09:31:30 np0005473739 systemd[1]: Started Session 30 of User ceph-admin.
Oct  7 09:31:31 np0005473739 systemd-logind[801]: New session 31 of user ceph-admin.
Oct  7 09:31:31 np0005473739 systemd[1]: Started Session 31 of User ceph-admin.
Oct  7 09:31:31 np0005473739 systemd-logind[801]: New session 32 of user ceph-admin.
Oct  7 09:31:31 np0005473739 systemd[1]: Started Session 32 of User ceph-admin.
Oct  7 09:31:32 np0005473739 systemd-logind[801]: New session 33 of user ceph-admin.
Oct  7 09:31:32 np0005473739 systemd[1]: Started Session 33 of User ceph-admin.
Oct  7 09:31:32 np0005473739 systemd-logind[801]: New session 34 of user ceph-admin.
Oct  7 09:31:32 np0005473739 systemd[1]: Started Session 34 of User ceph-admin.
Oct  7 09:31:32 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:31:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  7 09:31:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:32 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Added host compute-0
Oct  7 09:31:32 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  7 09:31:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  7 09:31:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  7 09:31:32 np0005473739 quizzical_wescoff[75878]: Added host 'compute-0' with addr '192.168.122.100'
Oct  7 09:31:32 np0005473739 systemd[1]: libpod-e33d854e245c7e7457c407d1a87f3fd87a4f5e32e05411a89a3d7b195f2027c8.scope: Deactivated successfully.
Oct  7 09:31:32 np0005473739 podman[75862]: 2025-10-07 13:31:32.943831605 +0000 UTC m=+5.718664310 container died e33d854e245c7e7457c407d1a87f3fd87a4f5e32e05411a89a3d7b195f2027c8 (image=quay.io/ceph/ceph:v18, name=quizzical_wescoff, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 09:31:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b5be05afebaa3fd0922084fb3e2240fb9d22f6cc4ed31e773a582c587aba8075-merged.mount: Deactivated successfully.
Oct  7 09:31:33 np0005473739 podman[75862]: 2025-10-07 13:31:33.000173686 +0000 UTC m=+5.775006361 container remove e33d854e245c7e7457c407d1a87f3fd87a4f5e32e05411a89a3d7b195f2027c8 (image=quay.io/ceph/ceph:v18, name=quizzical_wescoff, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 09:31:33 np0005473739 systemd[1]: libpod-conmon-e33d854e245c7e7457c407d1a87f3fd87a4f5e32e05411a89a3d7b195f2027c8.scope: Deactivated successfully.
Oct  7 09:31:33 np0005473739 podman[76556]: 2025-10-07 13:31:33.057751231 +0000 UTC m=+0.036923227 container create 512595ceabb098a5d38e9dc99b39096bbba181f70c58998cf34bef5f81811b46 (image=quay.io/ceph/ceph:v18, name=exciting_pike, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:33 np0005473739 systemd[1]: Started libpod-conmon-512595ceabb098a5d38e9dc99b39096bbba181f70c58998cf34bef5f81811b46.scope.
Oct  7 09:31:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71ad4db216a44591c076765239ef40356f4dd15eaa0315abf9e0188db7d440e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71ad4db216a44591c076765239ef40356f4dd15eaa0315abf9e0188db7d440e/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a71ad4db216a44591c076765239ef40356f4dd15eaa0315abf9e0188db7d440e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:33 np0005473739 podman[76556]: 2025-10-07 13:31:33.128573788 +0000 UTC m=+0.107745804 container init 512595ceabb098a5d38e9dc99b39096bbba181f70c58998cf34bef5f81811b46 (image=quay.io/ceph/ceph:v18, name=exciting_pike, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 09:31:33 np0005473739 podman[76556]: 2025-10-07 13:31:33.135370888 +0000 UTC m=+0.114542874 container start 512595ceabb098a5d38e9dc99b39096bbba181f70c58998cf34bef5f81811b46 (image=quay.io/ceph/ceph:v18, name=exciting_pike, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 09:31:33 np0005473739 podman[76556]: 2025-10-07 13:31:33.041958649 +0000 UTC m=+0.021130655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:33 np0005473739 podman[76556]: 2025-10-07 13:31:33.138472575 +0000 UTC m=+0.117644571 container attach 512595ceabb098a5d38e9dc99b39096bbba181f70c58998cf34bef5f81811b46 (image=quay.io/ceph/ceph:v18, name=exciting_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:33 np0005473739 podman[76671]: 2025-10-07 13:31:33.380865636 +0000 UTC m=+0.046635230 container create fcd762e536f4f5dd395bb97f5c367ed60f2e1b4e7c86e1b098c9432648354aac (image=quay.io/ceph/ceph:v18, name=agitated_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:31:33 np0005473739 systemd[1]: Started libpod-conmon-fcd762e536f4f5dd395bb97f5c367ed60f2e1b4e7c86e1b098c9432648354aac.scope.
Oct  7 09:31:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:33 np0005473739 podman[76671]: 2025-10-07 13:31:33.442822474 +0000 UTC m=+0.108592098 container init fcd762e536f4f5dd395bb97f5c367ed60f2e1b4e7c86e1b098c9432648354aac (image=quay.io/ceph/ceph:v18, name=agitated_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 09:31:33 np0005473739 podman[76671]: 2025-10-07 13:31:33.448664888 +0000 UTC m=+0.114434482 container start fcd762e536f4f5dd395bb97f5c367ed60f2e1b4e7c86e1b098c9432648354aac (image=quay.io/ceph/ceph:v18, name=agitated_bhaskara, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 09:31:33 np0005473739 podman[76671]: 2025-10-07 13:31:33.451786175 +0000 UTC m=+0.117555789 container attach fcd762e536f4f5dd395bb97f5c367ed60f2e1b4e7c86e1b098c9432648354aac (image=quay.io/ceph/ceph:v18, name=agitated_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:31:33 np0005473739 podman[76671]: 2025-10-07 13:31:33.365734092 +0000 UTC m=+0.031503706 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:33 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14156 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mon", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:33 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Saving service mon spec with placement count:5
Oct  7 09:31:33 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Saving service mon spec with placement count:5
Oct  7 09:31:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  7 09:31:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:33 np0005473739 exciting_pike[76615]: Scheduled mon update...
Oct  7 09:31:33 np0005473739 systemd[1]: libpod-512595ceabb098a5d38e9dc99b39096bbba181f70c58998cf34bef5f81811b46.scope: Deactivated successfully.
Oct  7 09:31:33 np0005473739 podman[76556]: 2025-10-07 13:31:33.694487704 +0000 UTC m=+0.673659700 container died 512595ceabb098a5d38e9dc99b39096bbba181f70c58998cf34bef5f81811b46 (image=quay.io/ceph/ceph:v18, name=exciting_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:31:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a71ad4db216a44591c076765239ef40356f4dd15eaa0315abf9e0188db7d440e-merged.mount: Deactivated successfully.
Oct  7 09:31:33 np0005473739 agitated_bhaskara[76687]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)
Oct  7 09:31:33 np0005473739 podman[76556]: 2025-10-07 13:31:33.727249883 +0000 UTC m=+0.706421879 container remove 512595ceabb098a5d38e9dc99b39096bbba181f70c58998cf34bef5f81811b46 (image=quay.io/ceph/ceph:v18, name=exciting_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:33 np0005473739 systemd[1]: libpod-fcd762e536f4f5dd395bb97f5c367ed60f2e1b4e7c86e1b098c9432648354aac.scope: Deactivated successfully.
Oct  7 09:31:33 np0005473739 podman[76671]: 2025-10-07 13:31:33.731320837 +0000 UTC m=+0.397090431 container died fcd762e536f4f5dd395bb97f5c367ed60f2e1b4e7c86e1b098c9432648354aac (image=quay.io/ceph/ceph:v18, name=agitated_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 09:31:33 np0005473739 systemd[1]: libpod-conmon-512595ceabb098a5d38e9dc99b39096bbba181f70c58998cf34bef5f81811b46.scope: Deactivated successfully.
Oct  7 09:31:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-156321675fbbf84baeea941c5206fbe136a2369ddf4deddedaa4eb6c9bc9788f-merged.mount: Deactivated successfully.
Oct  7 09:31:33 np0005473739 podman[76671]: 2025-10-07 13:31:33.76598227 +0000 UTC m=+0.431751854 container remove fcd762e536f4f5dd395bb97f5c367ed60f2e1b4e7c86e1b098c9432648354aac (image=quay.io/ceph/ceph:v18, name=agitated_bhaskara, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:33 np0005473739 systemd[1]: libpod-conmon-fcd762e536f4f5dd395bb97f5c367ed60f2e1b4e7c86e1b098c9432648354aac.scope: Deactivated successfully.
Oct  7 09:31:33 np0005473739 podman[76727]: 2025-10-07 13:31:33.790768685 +0000 UTC m=+0.045282251 container create 31e31e4a81deb31dbe80ec4c0d8107cf6dc89a405d68fe9a21b4dcb6f3c4c9b7 (image=quay.io/ceph/ceph:v18, name=nice_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 09:31:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=container_image}] v 0) v1
Oct  7 09:31:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:33 np0005473739 systemd[1]: Started libpod-conmon-31e31e4a81deb31dbe80ec4c0d8107cf6dc89a405d68fe9a21b4dcb6f3c4c9b7.scope.
Oct  7 09:31:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6f9e131a98ad77d08c27a0964c2955bd861ad0f8fa1fb44dfb130b87922925/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6f9e131a98ad77d08c27a0964c2955bd861ad0f8fa1fb44dfb130b87922925/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f6f9e131a98ad77d08c27a0964c2955bd861ad0f8fa1fb44dfb130b87922925/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:33 np0005473739 podman[76727]: 2025-10-07 13:31:33.858584928 +0000 UTC m=+0.113098514 container init 31e31e4a81deb31dbe80ec4c0d8107cf6dc89a405d68fe9a21b4dcb6f3c4c9b7 (image=quay.io/ceph/ceph:v18, name=nice_heisenberg, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:33 np0005473739 podman[76727]: 2025-10-07 13:31:33.865456681 +0000 UTC m=+0.119970247 container start 31e31e4a81deb31dbe80ec4c0d8107cf6dc89a405d68fe9a21b4dcb6f3c4c9b7 (image=quay.io/ceph/ceph:v18, name=nice_heisenberg, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:33 np0005473739 podman[76727]: 2025-10-07 13:31:33.775846137 +0000 UTC m=+0.030359723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:33 np0005473739 podman[76727]: 2025-10-07 13:31:33.869519935 +0000 UTC m=+0.124033681 container attach 31e31e4a81deb31dbe80ec4c0d8107cf6dc89a405d68fe9a21b4dcb6f3c4c9b7 (image=quay.io/ceph/ceph:v18, name=nice_heisenberg, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:33 np0005473739 ceph-mon[74295]: Added host compute-0
Oct  7 09:31:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:34 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14158 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "mgr", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:34 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Saving service mgr spec with placement count:2
Oct  7 09:31:34 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement count:2
Oct  7 09:31:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  7 09:31:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:34 np0005473739 nice_heisenberg[76761]: Scheduled mgr update...
Oct  7 09:31:34 np0005473739 systemd[1]: libpod-31e31e4a81deb31dbe80ec4c0d8107cf6dc89a405d68fe9a21b4dcb6f3c4c9b7.scope: Deactivated successfully.
Oct  7 09:31:34 np0005473739 podman[76727]: 2025-10-07 13:31:34.421446138 +0000 UTC m=+0.675959744 container died 31e31e4a81deb31dbe80ec4c0d8107cf6dc89a405d68fe9a21b4dcb6f3c4c9b7 (image=quay.io/ceph/ceph:v18, name=nice_heisenberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5f6f9e131a98ad77d08c27a0964c2955bd861ad0f8fa1fb44dfb130b87922925-merged.mount: Deactivated successfully.
Oct  7 09:31:34 np0005473739 podman[76727]: 2025-10-07 13:31:34.479800734 +0000 UTC m=+0.734314300 container remove 31e31e4a81deb31dbe80ec4c0d8107cf6dc89a405d68fe9a21b4dcb6f3c4c9b7 (image=quay.io/ceph/ceph:v18, name=nice_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 09:31:34 np0005473739 systemd[1]: libpod-conmon-31e31e4a81deb31dbe80ec4c0d8107cf6dc89a405d68fe9a21b4dcb6f3c4c9b7.scope: Deactivated successfully.
Oct  7 09:31:34 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:31:34 np0005473739 podman[77001]: 2025-10-07 13:31:34.541829045 +0000 UTC m=+0.040037074 container create ed9d1798428835807c25b0edfb0fe8aad5fb2632a0046f804d3bfa050baf32aa (image=quay.io/ceph/ceph:v18, name=optimistic_lederberg, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 09:31:34 np0005473739 systemd[1]: Started libpod-conmon-ed9d1798428835807c25b0edfb0fe8aad5fb2632a0046f804d3bfa050baf32aa.scope.
Oct  7 09:31:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266aa8bf00dd65466b8e0b686ff908145e6f8d8e260353f002ee1d5420ebff89/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266aa8bf00dd65466b8e0b686ff908145e6f8d8e260353f002ee1d5420ebff89/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/266aa8bf00dd65466b8e0b686ff908145e6f8d8e260353f002ee1d5420ebff89/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:34 np0005473739 podman[77001]: 2025-10-07 13:31:34.524608281 +0000 UTC m=+0.022816410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:34 np0005473739 podman[77001]: 2025-10-07 13:31:34.627643792 +0000 UTC m=+0.125851831 container init ed9d1798428835807c25b0edfb0fe8aad5fb2632a0046f804d3bfa050baf32aa (image=quay.io/ceph/ceph:v18, name=optimistic_lederberg, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 09:31:34 np0005473739 podman[77001]: 2025-10-07 13:31:34.638400314 +0000 UTC m=+0.136608353 container start ed9d1798428835807c25b0edfb0fe8aad5fb2632a0046f804d3bfa050baf32aa (image=quay.io/ceph/ceph:v18, name=optimistic_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:34 np0005473739 podman[77001]: 2025-10-07 13:31:34.641257825 +0000 UTC m=+0.139465914 container attach ed9d1798428835807c25b0edfb0fe8aad5fb2632a0046f804d3bfa050baf32aa (image=quay.io/ceph/ceph:v18, name=optimistic_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 09:31:34 np0005473739 ceph-mon[74295]: Saving service mon spec with placement count:5
Oct  7 09:31:34 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:34 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:34 np0005473739 podman[77109]: 2025-10-07 13:31:34.96121008 +0000 UTC m=+0.064092819 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 09:31:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054709 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:31:35 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14160 -' entity='client.admin' cmd=[{"prefix": "orch apply", "service_type": "crash", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:35 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Saving service crash spec with placement *
Oct  7 09:31:35 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Saving service crash spec with placement *
Oct  7 09:31:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  7 09:31:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:35 np0005473739 optimistic_lederberg[77030]: Scheduled crash update...
Oct  7 09:31:35 np0005473739 systemd[1]: libpod-ed9d1798428835807c25b0edfb0fe8aad5fb2632a0046f804d3bfa050baf32aa.scope: Deactivated successfully.
Oct  7 09:31:35 np0005473739 podman[77001]: 2025-10-07 13:31:35.191400828 +0000 UTC m=+0.689608867 container died ed9d1798428835807c25b0edfb0fe8aad5fb2632a0046f804d3bfa050baf32aa (image=quay.io/ceph/ceph:v18, name=optimistic_lederberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:31:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-266aa8bf00dd65466b8e0b686ff908145e6f8d8e260353f002ee1d5420ebff89-merged.mount: Deactivated successfully.
Oct  7 09:31:35 np0005473739 podman[77001]: 2025-10-07 13:31:35.246752021 +0000 UTC m=+0.744960090 container remove ed9d1798428835807c25b0edfb0fe8aad5fb2632a0046f804d3bfa050baf32aa (image=quay.io/ceph/ceph:v18, name=optimistic_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:35 np0005473739 systemd[1]: libpod-conmon-ed9d1798428835807c25b0edfb0fe8aad5fb2632a0046f804d3bfa050baf32aa.scope: Deactivated successfully.
Oct  7 09:31:35 np0005473739 podman[77109]: 2025-10-07 13:31:35.281322381 +0000 UTC m=+0.384205110 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 09:31:35 np0005473739 podman[77162]: 2025-10-07 13:31:35.307430573 +0000 UTC m=+0.041075414 container create ef00cf191dc0883d53fab6965f016043ffa143800f1d3828f69260ba90b0ad25 (image=quay.io/ceph/ceph:v18, name=competent_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:35 np0005473739 systemd[1]: Started libpod-conmon-ef00cf191dc0883d53fab6965f016043ffa143800f1d3828f69260ba90b0ad25.scope.
Oct  7 09:31:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:35 np0005473739 podman[77162]: 2025-10-07 13:31:35.289715566 +0000 UTC m=+0.023360317 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba9de706526124d40cebba5f099ac6972268b8adb6f6fc27fa4f49af9d9708d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba9de706526124d40cebba5f099ac6972268b8adb6f6fc27fa4f49af9d9708d/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba9de706526124d40cebba5f099ac6972268b8adb6f6fc27fa4f49af9d9708d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:35 np0005473739 podman[77162]: 2025-10-07 13:31:35.402002556 +0000 UTC m=+0.135647307 container init ef00cf191dc0883d53fab6965f016043ffa143800f1d3828f69260ba90b0ad25 (image=quay.io/ceph/ceph:v18, name=competent_lewin, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 09:31:35 np0005473739 podman[77162]: 2025-10-07 13:31:35.407552652 +0000 UTC m=+0.141197383 container start ef00cf191dc0883d53fab6965f016043ffa143800f1d3828f69260ba90b0ad25 (image=quay.io/ceph/ceph:v18, name=competent_lewin, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:35 np0005473739 podman[77162]: 2025-10-07 13:31:35.411075481 +0000 UTC m=+0.144720212 container attach ef00cf191dc0883d53fab6965f016043ffa143800f1d3828f69260ba90b0ad25 (image=quay.io/ceph/ceph:v18, name=competent_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 09:31:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:35 np0005473739 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 77342 (sysctl)
Oct  7 09:31:35 np0005473739 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Oct  7 09:31:35 np0005473739 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Oct  7 09:31:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/container_init}] v 0) v1
Oct  7 09:31:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/237378452' entity='client.admin' 
Oct  7 09:31:35 np0005473739 systemd[1]: libpod-ef00cf191dc0883d53fab6965f016043ffa143800f1d3828f69260ba90b0ad25.scope: Deactivated successfully.
Oct  7 09:31:35 np0005473739 podman[77162]: 2025-10-07 13:31:35.967658465 +0000 UTC m=+0.701303196 container died ef00cf191dc0883d53fab6965f016043ffa143800f1d3828f69260ba90b0ad25 (image=quay.io/ceph/ceph:v18, name=competent_lewin, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 09:31:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2ba9de706526124d40cebba5f099ac6972268b8adb6f6fc27fa4f49af9d9708d-merged.mount: Deactivated successfully.
Oct  7 09:31:36 np0005473739 podman[77162]: 2025-10-07 13:31:36.012436341 +0000 UTC m=+0.746081072 container remove ef00cf191dc0883d53fab6965f016043ffa143800f1d3828f69260ba90b0ad25 (image=quay.io/ceph/ceph:v18, name=competent_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 09:31:36 np0005473739 systemd[1]: libpod-conmon-ef00cf191dc0883d53fab6965f016043ffa143800f1d3828f69260ba90b0ad25.scope: Deactivated successfully.
Oct  7 09:31:36 np0005473739 podman[77363]: 2025-10-07 13:31:36.0740647 +0000 UTC m=+0.039356605 container create 230f422c257921b1934a14e689c75d94a9c990eb0b584b5498a186bebb46ace6 (image=quay.io/ceph/ceph:v18, name=crazy_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:31:36 np0005473739 systemd[1]: Started libpod-conmon-230f422c257921b1934a14e689c75d94a9c990eb0b584b5498a186bebb46ace6.scope.
Oct  7 09:31:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b3f73dd2b66f8f970088d0942cdee4465e93d5f8eb8b755dd4874c552ca8b3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b3f73dd2b66f8f970088d0942cdee4465e93d5f8eb8b755dd4874c552ca8b3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b3f73dd2b66f8f970088d0942cdee4465e93d5f8eb8b755dd4874c552ca8b3/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:36 np0005473739 podman[77363]: 2025-10-07 13:31:36.148624371 +0000 UTC m=+0.113916276 container init 230f422c257921b1934a14e689c75d94a9c990eb0b584b5498a186bebb46ace6 (image=quay.io/ceph/ceph:v18, name=crazy_hellman, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 09:31:36 np0005473739 podman[77363]: 2025-10-07 13:31:36.155158405 +0000 UTC m=+0.120450300 container start 230f422c257921b1934a14e689c75d94a9c990eb0b584b5498a186bebb46ace6 (image=quay.io/ceph/ceph:v18, name=crazy_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:31:36 np0005473739 podman[77363]: 2025-10-07 13:31:36.059591054 +0000 UTC m=+0.024882979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:36 np0005473739 podman[77363]: 2025-10-07 13:31:36.158136798 +0000 UTC m=+0.123428723 container attach 230f422c257921b1934a14e689c75d94a9c990eb0b584b5498a186bebb46ace6 (image=quay.io/ceph/ceph:v18, name=crazy_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:36 np0005473739 ceph-mon[74295]: Saving service mgr spec with placement count:2
Oct  7 09:31:36 np0005473739 ceph-mon[74295]: Saving service crash spec with placement *
Oct  7 09:31:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:36 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/237378452' entity='client.admin' 
Oct  7 09:31:36 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:31:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:36 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14164 -' entity='client.admin' cmd=[{"prefix": "orch client-keyring set", "entity": "client.admin", "placement": "label:_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/client_keyrings}] v 0) v1
Oct  7 09:31:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:36 np0005473739 systemd[1]: libpod-230f422c257921b1934a14e689c75d94a9c990eb0b584b5498a186bebb46ace6.scope: Deactivated successfully.
Oct  7 09:31:36 np0005473739 podman[77363]: 2025-10-07 13:31:36.689016731 +0000 UTC m=+0.654308636 container died 230f422c257921b1934a14e689c75d94a9c990eb0b584b5498a186bebb46ace6 (image=quay.io/ceph/ceph:v18, name=crazy_hellman, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 09:31:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-31b3f73dd2b66f8f970088d0942cdee4465e93d5f8eb8b755dd4874c552ca8b3-merged.mount: Deactivated successfully.
Oct  7 09:31:36 np0005473739 podman[77363]: 2025-10-07 13:31:36.731703739 +0000 UTC m=+0.696995654 container remove 230f422c257921b1934a14e689c75d94a9c990eb0b584b5498a186bebb46ace6 (image=quay.io/ceph/ceph:v18, name=crazy_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:36 np0005473739 systemd[1]: libpod-conmon-230f422c257921b1934a14e689c75d94a9c990eb0b584b5498a186bebb46ace6.scope: Deactivated successfully.
Oct  7 09:31:36 np0005473739 podman[77596]: 2025-10-07 13:31:36.805073017 +0000 UTC m=+0.052624937 container create e28566726367f944a69576deff28442632310688ee2844fec45b8f1dbc3bb2d8 (image=quay.io/ceph/ceph:v18, name=brave_carson, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:31:36 np0005473739 systemd[1]: Started libpod-conmon-e28566726367f944a69576deff28442632310688ee2844fec45b8f1dbc3bb2d8.scope.
Oct  7 09:31:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d93786f1c69e9f9c4edc4331b23939e88eb96a13c810e7747224876cd24aed/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d93786f1c69e9f9c4edc4331b23939e88eb96a13c810e7747224876cd24aed/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/87d93786f1c69e9f9c4edc4331b23939e88eb96a13c810e7747224876cd24aed/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:36 np0005473739 podman[77596]: 2025-10-07 13:31:36.87039618 +0000 UTC m=+0.117948140 container init e28566726367f944a69576deff28442632310688ee2844fec45b8f1dbc3bb2d8 (image=quay.io/ceph/ceph:v18, name=brave_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:36 np0005473739 podman[77596]: 2025-10-07 13:31:36.779246613 +0000 UTC m=+0.026798573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:36 np0005473739 podman[77596]: 2025-10-07 13:31:36.877044807 +0000 UTC m=+0.124596737 container start e28566726367f944a69576deff28442632310688ee2844fec45b8f1dbc3bb2d8 (image=quay.io/ceph/ceph:v18, name=brave_carson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:31:36 np0005473739 podman[77596]: 2025-10-07 13:31:36.880320818 +0000 UTC m=+0.127872748 container attach e28566726367f944a69576deff28442632310688ee2844fec45b8f1dbc3bb2d8 (image=quay.io/ceph/ceph:v18, name=brave_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 09:31:37 np0005473739 podman[77710]: 2025-10-07 13:31:37.193451573 +0000 UTC m=+0.056721302 container create 572cb79d72d57c3ba6c94e21a6a6ba3e1aea9ed75317367fd34de2cad97f65a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:37 np0005473739 systemd[1]: Started libpod-conmon-572cb79d72d57c3ba6c94e21a6a6ba3e1aea9ed75317367fd34de2cad97f65a3.scope.
Oct  7 09:31:37 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:37 np0005473739 podman[77710]: 2025-10-07 13:31:37.171082166 +0000 UTC m=+0.034351935 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:31:37 np0005473739 podman[77710]: 2025-10-07 13:31:37.277544732 +0000 UTC m=+0.140814501 container init 572cb79d72d57c3ba6c94e21a6a6ba3e1aea9ed75317367fd34de2cad97f65a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:37 np0005473739 podman[77710]: 2025-10-07 13:31:37.288561481 +0000 UTC m=+0.151831220 container start 572cb79d72d57c3ba6c94e21a6a6ba3e1aea9ed75317367fd34de2cad97f65a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:37 np0005473739 podman[77710]: 2025-10-07 13:31:37.29172883 +0000 UTC m=+0.154998569 container attach 572cb79d72d57c3ba6c94e21a6a6ba3e1aea9ed75317367fd34de2cad97f65a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 09:31:37 np0005473739 interesting_hertz[77745]: 167 167
Oct  7 09:31:37 np0005473739 systemd[1]: libpod-572cb79d72d57c3ba6c94e21a6a6ba3e1aea9ed75317367fd34de2cad97f65a3.scope: Deactivated successfully.
Oct  7 09:31:37 np0005473739 podman[77710]: 2025-10-07 13:31:37.296316158 +0000 UTC m=+0.159585937 container died 572cb79d72d57c3ba6c94e21a6a6ba3e1aea9ed75317367fd34de2cad97f65a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 09:31:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f8ce9891832f415675d10ddd3f8f1c647249d3dc08bd63b64288769eb066b0b9-merged.mount: Deactivated successfully.
Oct  7 09:31:37 np0005473739 podman[77710]: 2025-10-07 13:31:37.342463623 +0000 UTC m=+0.205733362 container remove 572cb79d72d57c3ba6c94e21a6a6ba3e1aea9ed75317367fd34de2cad97f65a3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_hertz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 09:31:37 np0005473739 systemd[1]: libpod-conmon-572cb79d72d57c3ba6c94e21a6a6ba3e1aea9ed75317367fd34de2cad97f65a3.scope: Deactivated successfully.
Oct  7 09:31:37 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14166 -' entity='client.admin' cmd=[{"prefix": "orch host label add", "hostname": "compute-0", "label": "_admin", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  7 09:31:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:37 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Added label _admin to host compute-0
Oct  7 09:31:37 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Added label _admin to host compute-0
Oct  7 09:31:37 np0005473739 brave_carson[77658]: Added label _admin to host compute-0
Oct  7 09:31:37 np0005473739 systemd[1]: libpod-e28566726367f944a69576deff28442632310688ee2844fec45b8f1dbc3bb2d8.scope: Deactivated successfully.
Oct  7 09:31:37 np0005473739 podman[77764]: 2025-10-07 13:31:37.415717448 +0000 UTC m=+0.021694159 container died e28566726367f944a69576deff28442632310688ee2844fec45b8f1dbc3bb2d8 (image=quay.io/ceph/ceph:v18, name=brave_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-87d93786f1c69e9f9c4edc4331b23939e88eb96a13c810e7747224876cd24aed-merged.mount: Deactivated successfully.
Oct  7 09:31:37 np0005473739 podman[77764]: 2025-10-07 13:31:37.460511715 +0000 UTC m=+0.066488386 container remove e28566726367f944a69576deff28442632310688ee2844fec45b8f1dbc3bb2d8 (image=quay.io/ceph/ceph:v18, name=brave_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 09:31:37 np0005473739 systemd[1]: libpod-conmon-e28566726367f944a69576deff28442632310688ee2844fec45b8f1dbc3bb2d8.scope: Deactivated successfully.
Oct  7 09:31:37 np0005473739 podman[77780]: 2025-10-07 13:31:37.534550242 +0000 UTC m=+0.044252322 container create edc42d45de7b3f45252d48a5e4c94e18ff2c4fe3039d8a44e5b435d099b22c03 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:37 np0005473739 systemd[1]: Started libpod-conmon-edc42d45de7b3f45252d48a5e4c94e18ff2c4fe3039d8a44e5b435d099b22c03.scope.
Oct  7 09:31:37 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e6c6e4fcf2fcc12036eab0f86323c1131b1e3e037243ea7bc11c41a2a997395/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e6c6e4fcf2fcc12036eab0f86323c1131b1e3e037243ea7bc11c41a2a997395/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e6c6e4fcf2fcc12036eab0f86323c1131b1e3e037243ea7bc11c41a2a997395/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:37 np0005473739 podman[77780]: 2025-10-07 13:31:37.515664052 +0000 UTC m=+0.025366162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:37 np0005473739 podman[77780]: 2025-10-07 13:31:37.617268882 +0000 UTC m=+0.126970962 container init edc42d45de7b3f45252d48a5e4c94e18ff2c4fe3039d8a44e5b435d099b22c03 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 09:31:37 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:37 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:37 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:37 np0005473739 podman[77780]: 2025-10-07 13:31:37.623557189 +0000 UTC m=+0.133259299 container start edc42d45de7b3f45252d48a5e4c94e18ff2c4fe3039d8a44e5b435d099b22c03 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:31:37 np0005473739 podman[77780]: 2025-10-07 13:31:37.627704485 +0000 UTC m=+0.137406565 container attach edc42d45de7b3f45252d48a5e4c94e18ff2c4fe3039d8a44e5b435d099b22c03 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 09:31:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target_autotune}] v 0) v1
Oct  7 09:31:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/178767237' entity='client.admin' 
Oct  7 09:31:38 np0005473739 systemd[1]: libpod-edc42d45de7b3f45252d48a5e4c94e18ff2c4fe3039d8a44e5b435d099b22c03.scope: Deactivated successfully.
Oct  7 09:31:38 np0005473739 podman[77780]: 2025-10-07 13:31:38.17186088 +0000 UTC m=+0.681562950 container died edc42d45de7b3f45252d48a5e4c94e18ff2c4fe3039d8a44e5b435d099b22c03 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 09:31:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3e6c6e4fcf2fcc12036eab0f86323c1131b1e3e037243ea7bc11c41a2a997395-merged.mount: Deactivated successfully.
Oct  7 09:31:38 np0005473739 podman[77780]: 2025-10-07 13:31:38.307783073 +0000 UTC m=+0.817485153 container remove edc42d45de7b3f45252d48a5e4c94e18ff2c4fe3039d8a44e5b435d099b22c03 (image=quay.io/ceph/ceph:v18, name=gracious_liskov, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 09:31:38 np0005473739 systemd[1]: libpod-conmon-edc42d45de7b3f45252d48a5e4c94e18ff2c4fe3039d8a44e5b435d099b22c03.scope: Deactivated successfully.
Oct  7 09:31:38 np0005473739 podman[77837]: 2025-10-07 13:31:38.369800804 +0000 UTC m=+0.039666275 container create 3f6a4b8a47d9d223190c6cae14acd4a635eb3cc963e229fe4c679215d5dcd2e7 (image=quay.io/ceph/ceph:v18, name=sweet_euclid, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:38 np0005473739 systemd[1]: Started libpod-conmon-3f6a4b8a47d9d223190c6cae14acd4a635eb3cc963e229fe4c679215d5dcd2e7.scope.
Oct  7 09:31:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6977e7f392d0a25f6383bb07f77b751374a7fffbbd631a9b3d883193f616062f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6977e7f392d0a25f6383bb07f77b751374a7fffbbd631a9b3d883193f616062f/merged/etc/ceph/ceph.client.admin.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6977e7f392d0a25f6383bb07f77b751374a7fffbbd631a9b3d883193f616062f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:38 np0005473739 podman[77837]: 2025-10-07 13:31:38.440832016 +0000 UTC m=+0.110697527 container init 3f6a4b8a47d9d223190c6cae14acd4a635eb3cc963e229fe4c679215d5dcd2e7 (image=quay.io/ceph/ceph:v18, name=sweet_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:38 np0005473739 podman[77837]: 2025-10-07 13:31:38.349900065 +0000 UTC m=+0.019765556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:38 np0005473739 podman[77837]: 2025-10-07 13:31:38.446775263 +0000 UTC m=+0.116640724 container start 3f6a4b8a47d9d223190c6cae14acd4a635eb3cc963e229fe4c679215d5dcd2e7 (image=quay.io/ceph/ceph:v18, name=sweet_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:38 np0005473739 podman[77837]: 2025-10-07 13:31:38.450178078 +0000 UTC m=+0.120043589 container attach 3f6a4b8a47d9d223190c6cae14acd4a635eb3cc963e229fe4c679215d5dcd2e7 (image=quay.io/ceph/ceph:v18, name=sweet_euclid, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct  7 09:31:38 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:31:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/dashboard/cluster/status}] v 0) v1
Oct  7 09:31:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/641682577' entity='client.admin' 
Oct  7 09:31:39 np0005473739 sweet_euclid[77853]: set mgr/dashboard/cluster/status
Oct  7 09:31:39 np0005473739 systemd[1]: libpod-3f6a4b8a47d9d223190c6cae14acd4a635eb3cc963e229fe4c679215d5dcd2e7.scope: Deactivated successfully.
Oct  7 09:31:39 np0005473739 podman[77879]: 2025-10-07 13:31:39.100525874 +0000 UTC m=+0.024620422 container died 3f6a4b8a47d9d223190c6cae14acd4a635eb3cc963e229fe4c679215d5dcd2e7 (image=quay.io/ceph/ceph:v18, name=sweet_euclid, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6977e7f392d0a25f6383bb07f77b751374a7fffbbd631a9b3d883193f616062f-merged.mount: Deactivated successfully.
Oct  7 09:31:39 np0005473739 podman[77879]: 2025-10-07 13:31:39.144743974 +0000 UTC m=+0.068838482 container remove 3f6a4b8a47d9d223190c6cae14acd4a635eb3cc963e229fe4c679215d5dcd2e7 (image=quay.io/ceph/ceph:v18, name=sweet_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:39 np0005473739 systemd[1]: libpod-conmon-3f6a4b8a47d9d223190c6cae14acd4a635eb3cc963e229fe4c679215d5dcd2e7.scope: Deactivated successfully.
Oct  7 09:31:39 np0005473739 ceph-mon[74295]: Added label _admin to host compute-0
Oct  7 09:31:39 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/178767237' entity='client.admin' 
Oct  7 09:31:39 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/641682577' entity='client.admin' 
Oct  7 09:31:39 np0005473739 podman[77901]: 2025-10-07 13:31:39.344441026 +0000 UTC m=+0.051468925 container create 8b3cbd08b0ee067906300a92e1c9378dab29d53d2d25c15bba44127d949038d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elgamal, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 09:31:39 np0005473739 systemd[1]: Started libpod-conmon-8b3cbd08b0ee067906300a92e1c9378dab29d53d2d25c15bba44127d949038d4.scope.
Oct  7 09:31:39 np0005473739 podman[77901]: 2025-10-07 13:31:39.319369363 +0000 UTC m=+0.026397352 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:31:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996df606588583180ada3ca6657619f64b1482be6dd1843b40143b82b34b4cad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996df606588583180ada3ca6657619f64b1482be6dd1843b40143b82b34b4cad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996df606588583180ada3ca6657619f64b1482be6dd1843b40143b82b34b4cad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/996df606588583180ada3ca6657619f64b1482be6dd1843b40143b82b34b4cad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:39 np0005473739 podman[77901]: 2025-10-07 13:31:39.448412713 +0000 UTC m=+0.155440632 container init 8b3cbd08b0ee067906300a92e1c9378dab29d53d2d25c15bba44127d949038d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:39 np0005473739 podman[77901]: 2025-10-07 13:31:39.455485681 +0000 UTC m=+0.162513580 container start 8b3cbd08b0ee067906300a92e1c9378dab29d53d2d25c15bba44127d949038d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:39 np0005473739 podman[77901]: 2025-10-07 13:31:39.458768603 +0000 UTC m=+0.165796542 container attach 8b3cbd08b0ee067906300a92e1c9378dab29d53d2d25c15bba44127d949038d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 09:31:39 np0005473739 python3[77948]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set mgr mgr/cephadm/use_repo_digest false#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:31:39 np0005473739 podman[77949]: 2025-10-07 13:31:39.820977045 +0000 UTC m=+0.048553384 container create 7d0897232224dcb5a6d41b2967f37dc466c45521fbb6cc85461e14ffc6fab95e (image=quay.io/ceph/ceph:v18, name=great_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:39 np0005473739 systemd[1]: Started libpod-conmon-7d0897232224dcb5a6d41b2967f37dc466c45521fbb6cc85461e14ffc6fab95e.scope.
Oct  7 09:31:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd6ae4bc3468635db8b50b9c5bc7599484a888ca884c21ef850167059e33b52/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ffd6ae4bc3468635db8b50b9c5bc7599484a888ca884c21ef850167059e33b52/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:39 np0005473739 podman[77949]: 2025-10-07 13:31:39.79479176 +0000 UTC m=+0.022368109 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:39 np0005473739 podman[77949]: 2025-10-07 13:31:39.89495878 +0000 UTC m=+0.122535139 container init 7d0897232224dcb5a6d41b2967f37dc466c45521fbb6cc85461e14ffc6fab95e (image=quay.io/ceph/ceph:v18, name=great_mirzakhani, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:31:39 np0005473739 podman[77949]: 2025-10-07 13:31:39.90386915 +0000 UTC m=+0.131445489 container start 7d0897232224dcb5a6d41b2967f37dc466c45521fbb6cc85461e14ffc6fab95e (image=quay.io/ceph/ceph:v18, name=great_mirzakhani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:31:39 np0005473739 podman[77949]: 2025-10-07 13:31:39.908226312 +0000 UTC m=+0.135802681 container attach 7d0897232224dcb5a6d41b2967f37dc466c45521fbb6cc85461e14ffc6fab95e (image=quay.io/ceph/ceph:v18, name=great_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mgr/cephadm/use_repo_digest}] v 0) v1
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/515741119' entity='client.admin' 
Oct  7 09:31:40 np0005473739 ceph-mgr[74587]: mgr.server send_report Not sending PG status to monitor yet, waiting for OSDs
Oct  7 09:31:40 np0005473739 systemd[1]: libpod-7d0897232224dcb5a6d41b2967f37dc466c45521fbb6cc85461e14ffc6fab95e.scope: Deactivated successfully.
Oct  7 09:31:40 np0005473739 podman[77949]: 2025-10-07 13:31:40.509724487 +0000 UTC m=+0.737300826 container died 7d0897232224dcb5a6d41b2967f37dc466c45521fbb6cc85461e14ffc6fab95e (image=quay.io/ceph/ceph:v18, name=great_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 09:31:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ffd6ae4bc3468635db8b50b9c5bc7599484a888ca884c21ef850167059e33b52-merged.mount: Deactivated successfully.
Oct  7 09:31:40 np0005473739 podman[77949]: 2025-10-07 13:31:40.556970782 +0000 UTC m=+0.784547121 container remove 7d0897232224dcb5a6d41b2967f37dc466c45521fbb6cc85461e14ffc6fab95e (image=quay.io/ceph/ceph:v18, name=great_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:31:40 np0005473739 systemd[1]: libpod-conmon-7d0897232224dcb5a6d41b2967f37dc466c45521fbb6cc85461e14ffc6fab95e.scope: Deactivated successfully.
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]: [
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:    {
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:        "available": false,
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:        "ceph_device": false,
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:        "lsm_data": {},
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:        "lvs": [],
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:        "path": "/dev/sr0",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:        "rejected_reasons": [
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "Insufficient space (<5GB)",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "Has a FileSystem"
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:        ],
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:        "sys_api": {
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "actuators": null,
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "device_nodes": "sr0",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "devname": "sr0",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "human_readable_size": "482.00 KB",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "id_bus": "ata",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "model": "QEMU DVD-ROM",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "nr_requests": "2",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "parent": "/dev/sr0",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "partitions": {},
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "path": "/dev/sr0",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "removable": "1",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "rev": "2.5+",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "ro": "0",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "rotational": "0",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "sas_address": "",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "sas_device_handle": "",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "scheduler_mode": "mq-deadline",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "sectors": 0,
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "sectorsize": "2048",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "size": 493568.0,
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "support_discard": "2048",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "type": "disk",
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:            "vendor": "QEMU"
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:        }
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]:    }
Oct  7 09:31:40 np0005473739 zealous_elgamal[77918]: ]
Oct  7 09:31:40 np0005473739 systemd[1]: libpod-8b3cbd08b0ee067906300a92e1c9378dab29d53d2d25c15bba44127d949038d4.scope: Deactivated successfully.
Oct  7 09:31:40 np0005473739 systemd[1]: libpod-8b3cbd08b0ee067906300a92e1c9378dab29d53d2d25c15bba44127d949038d4.scope: Consumed 1.384s CPU time.
Oct  7 09:31:40 np0005473739 podman[79686]: 2025-10-07 13:31:40.855270061 +0000 UTC m=+0.021964538 container died 8b3cbd08b0ee067906300a92e1c9378dab29d53d2d25c15bba44127d949038d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elgamal, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:31:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-996df606588583180ada3ca6657619f64b1482be6dd1843b40143b82b34b4cad-merged.mount: Deactivated successfully.
Oct  7 09:31:40 np0005473739 podman[79686]: 2025-10-07 13:31:40.912071044 +0000 UTC m=+0.078765511 container remove 8b3cbd08b0ee067906300a92e1c9378dab29d53d2d25c15bba44127d949038d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_elgamal, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 09:31:40 np0005473739 systemd[1]: libpod-conmon-8b3cbd08b0ee067906300a92e1c9378dab29d53d2d25c15bba44127d949038d4.scope: Deactivated successfully.
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:31:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:31:40 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.conf
Oct  7 09:31:40 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.conf
Oct  7 09:31:41 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/515741119' entity='client.admin' 
Oct  7 09:31:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 09:31:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:31:41 np0005473739 ceph-mon[74295]: Updating compute-0:/etc/ceph/ceph.conf
Oct  7 09:31:41 np0005473739 ansible-async_wrapper.py[80026]: Invoked with j336305382475 30 /home/zuul/.ansible/tmp/ansible-tmp-1759843900.9129574-33101-153935334677237/AnsiballZ_command.py _
Oct  7 09:31:41 np0005473739 ansible-async_wrapper.py[80101]: Starting module and watcher
Oct  7 09:31:41 np0005473739 ansible-async_wrapper.py[80101]: Start watching 80103 (30)
Oct  7 09:31:41 np0005473739 ansible-async_wrapper.py[80103]: Start module (80103)
Oct  7 09:31:41 np0005473739 ansible-async_wrapper.py[80026]: Return async_wrapper task started.
Oct  7 09:31:41 np0005473739 python3[80104]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:31:41 np0005473739 podman[80152]: 2025-10-07 13:31:41.806732622 +0000 UTC m=+0.053873322 container create 7b5ae537ec91a53a41c41593c00f28c6441d680301c1caafeaee4426aba16a40 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:31:41 np0005473739 systemd[1]: Started libpod-conmon-7b5ae537ec91a53a41c41593c00f28c6441d680301c1caafeaee4426aba16a40.scope.
Oct  7 09:31:41 np0005473739 podman[80152]: 2025-10-07 13:31:41.784215391 +0000 UTC m=+0.031356101 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:41 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1621ea41a159da28c8bc885d82a6410f4ff60077fd33d225ade4d6b871f18594/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1621ea41a159da28c8bc885d82a6410f4ff60077fd33d225ade4d6b871f18594/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:41 np0005473739 podman[80152]: 2025-10-07 13:31:41.918080117 +0000 UTC m=+0.165220827 container init 7b5ae537ec91a53a41c41593c00f28c6441d680301c1caafeaee4426aba16a40 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 09:31:41 np0005473739 podman[80152]: 2025-10-07 13:31:41.925661809 +0000 UTC m=+0.172802509 container start 7b5ae537ec91a53a41c41593c00f28c6441d680301c1caafeaee4426aba16a40 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:41 np0005473739 podman[80152]: 2025-10-07 13:31:41.929824786 +0000 UTC m=+0.176965516 container attach 7b5ae537ec91a53a41c41593c00f28c6441d680301c1caafeaee4426aba16a40 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:31:42 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/config/ceph.conf
Oct  7 09:31:42 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/config/ceph.conf
Oct  7 09:31:42 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14174 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  7 09:31:42 np0005473739 nostalgic_hawking[80193]: 
Oct  7 09:31:42 np0005473739 nostalgic_hawking[80193]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  7 09:31:42 np0005473739 systemd[1]: libpod-7b5ae537ec91a53a41c41593c00f28c6441d680301c1caafeaee4426aba16a40.scope: Deactivated successfully.
Oct  7 09:31:42 np0005473739 podman[80152]: 2025-10-07 13:31:42.480984108 +0000 UTC m=+0.728124798 container died 7b5ae537ec91a53a41c41593c00f28c6441d680301c1caafeaee4426aba16a40 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 09:31:42 np0005473739 ceph-mon[74295]: Updating compute-0:/var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/config/ceph.conf
Oct  7 09:31:42 np0005473739 ceph-mgr[74587]: mgr.server send_report Giving up on OSDs that haven't reported yet, sending potentially incomplete PG state to mon
Oct  7 09:31:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:31:42 np0005473739 ceph-mon[74295]: log_channel(cluster) log [WRN] : Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  7 09:31:42 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1621ea41a159da28c8bc885d82a6410f4ff60077fd33d225ade4d6b871f18594-merged.mount: Deactivated successfully.
Oct  7 09:31:42 np0005473739 podman[80152]: 2025-10-07 13:31:42.535182979 +0000 UTC m=+0.782323649 container remove 7b5ae537ec91a53a41c41593c00f28c6441d680301c1caafeaee4426aba16a40 (image=quay.io/ceph/ceph:v18, name=nostalgic_hawking, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 09:31:42 np0005473739 systemd[1]: libpod-conmon-7b5ae537ec91a53a41c41593c00f28c6441d680301c1caafeaee4426aba16a40.scope: Deactivated successfully.
Oct  7 09:31:42 np0005473739 ansible-async_wrapper.py[80103]: Module complete (80103)
Oct  7 09:31:43 np0005473739 python3[80677]: ansible-ansible.legacy.async_status Invoked with jid=j336305382475.80026 mode=status _async_dir=/root/.ansible_async
Oct  7 09:31:43 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  7 09:31:43 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  7 09:31:43 np0005473739 python3[80845]: ansible-ansible.legacy.async_status Invoked with jid=j336305382475.80026 mode=cleanup _async_dir=/root/.ansible_async
Oct  7 09:31:43 np0005473739 ceph-mon[74295]: Health check failed: OSD count 0 < osd_pool_default_size 1 (TOO_FEW_OSDS)
Oct  7 09:31:43 np0005473739 ceph-mon[74295]: Updating compute-0:/etc/ceph/ceph.client.admin.keyring
Oct  7 09:31:43 np0005473739 python3[81069]: ansible-ansible.builtin.stat Invoked with path=/home/ceph-admin/specs/ceph_spec.yaml follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:31:44 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Updating compute-0:/var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/config/ceph.client.admin.keyring
Oct  7 09:31:44 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Updating compute-0:/var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/config/ceph.client.admin.keyring
Oct  7 09:31:44 np0005473739 python3[81330]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:31:44 np0005473739 podman[81381]: 2025-10-07 13:31:44.45869668 +0000 UTC m=+0.113115114 container create d15e7e491302e03106bcc6a9e06a35b0ecceedc127137d766b49404cf21ada71 (image=quay.io/ceph/ceph:v18, name=cranky_golick, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:44 np0005473739 podman[81381]: 2025-10-07 13:31:44.384281732 +0000 UTC m=+0.038700276 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v4: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:31:44 np0005473739 ceph-mon[74295]: Updating compute-0:/var/lib/ceph/82044f27-a8da-5b2a-a297-ff6afc620e1f/config/ceph.client.admin.keyring
Oct  7 09:31:44 np0005473739 systemd[1]: Started libpod-conmon-d15e7e491302e03106bcc6a9e06a35b0ecceedc127137d766b49404cf21ada71.scope.
Oct  7 09:31:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3faa750752e2277ebfbe6e6e56f8392f4dee19524712f9b1605a3e49f4a8b923/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3faa750752e2277ebfbe6e6e56f8392f4dee19524712f9b1605a3e49f4a8b923/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3faa750752e2277ebfbe6e6e56f8392f4dee19524712f9b1605a3e49f4a8b923/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:44 np0005473739 podman[81381]: 2025-10-07 13:31:44.865845202 +0000 UTC m=+0.520263666 container init d15e7e491302e03106bcc6a9e06a35b0ecceedc127137d766b49404cf21ada71 (image=quay.io/ceph/ceph:v18, name=cranky_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 09:31:44 np0005473739 podman[81381]: 2025-10-07 13:31:44.87358089 +0000 UTC m=+0.527999324 container start d15e7e491302e03106bcc6a9e06a35b0ecceedc127137d766b49404cf21ada71 (image=quay.io/ceph/ceph:v18, name=cranky_golick, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 09:31:44 np0005473739 podman[81381]: 2025-10-07 13:31:44.904551028 +0000 UTC m=+0.558969492 container attach d15e7e491302e03106bcc6a9e06a35b0ecceedc127137d766b49404cf21ada71 (image=quay.io/ceph/ceph:v18, name=cranky_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e2 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:45 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev da1e6bf1-2d5a-4b28-9fb4-ed6ce6d5c4f4 (Updating crash deployment (+1 -> 1))
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]} v 0) v1
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:31:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:31:45 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Deploying daemon crash.compute-0 on compute-0
Oct  7 09:31:45 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Deploying daemon crash.compute-0 on compute-0
Oct  7 09:31:45 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14176 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  7 09:31:45 np0005473739 cranky_golick[81519]: 
Oct  7 09:31:45 np0005473739 cranky_golick[81519]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  7 09:31:45 np0005473739 systemd[1]: libpod-d15e7e491302e03106bcc6a9e06a35b0ecceedc127137d766b49404cf21ada71.scope: Deactivated successfully.
Oct  7 09:31:45 np0005473739 podman[81381]: 2025-10-07 13:31:45.410689917 +0000 UTC m=+1.065108371 container died d15e7e491302e03106bcc6a9e06a35b0ecceedc127137d766b49404cf21ada71 (image=quay.io/ceph/ceph:v18, name=cranky_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3faa750752e2277ebfbe6e6e56f8392f4dee19524712f9b1605a3e49f4a8b923-merged.mount: Deactivated successfully.
Oct  7 09:31:45 np0005473739 podman[81381]: 2025-10-07 13:31:45.45853603 +0000 UTC m=+1.112954464 container remove d15e7e491302e03106bcc6a9e06a35b0ecceedc127137d766b49404cf21ada71 (image=quay.io/ceph/ceph:v18, name=cranky_golick, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct  7 09:31:45 np0005473739 systemd[1]: libpod-conmon-d15e7e491302e03106bcc6a9e06a35b0ecceedc127137d766b49404cf21ada71.scope: Deactivated successfully.
Oct  7 09:31:45 np0005473739 podman[81945]: 2025-10-07 13:31:45.780694467 +0000 UTC m=+0.034920680 container create aa4d9793903722241c600c87ef2113d1524e1d8a30599d333a7397b1cb0c2789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:31:45 np0005473739 systemd[1]: Started libpod-conmon-aa4d9793903722241c600c87ef2113d1524e1d8a30599d333a7397b1cb0c2789.scope.
Oct  7 09:31:45 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:45 np0005473739 podman[81945]: 2025-10-07 13:31:45.766101828 +0000 UTC m=+0.020328061 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:31:45 np0005473739 podman[81945]: 2025-10-07 13:31:45.863281524 +0000 UTC m=+0.117507777 container init aa4d9793903722241c600c87ef2113d1524e1d8a30599d333a7397b1cb0c2789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 09:31:45 np0005473739 podman[81945]: 2025-10-07 13:31:45.869685173 +0000 UTC m=+0.123911386 container start aa4d9793903722241c600c87ef2113d1524e1d8a30599d333a7397b1cb0c2789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:31:45 np0005473739 trusting_fermi[81966]: 167 167
Oct  7 09:31:45 np0005473739 systemd[1]: libpod-aa4d9793903722241c600c87ef2113d1524e1d8a30599d333a7397b1cb0c2789.scope: Deactivated successfully.
Oct  7 09:31:45 np0005473739 podman[81945]: 2025-10-07 13:31:45.874578961 +0000 UTC m=+0.128805234 container attach aa4d9793903722241c600c87ef2113d1524e1d8a30599d333a7397b1cb0c2789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  7 09:31:45 np0005473739 podman[81945]: 2025-10-07 13:31:45.875408264 +0000 UTC m=+0.129634497 container died aa4d9793903722241c600c87ef2113d1524e1d8a30599d333a7397b1cb0c2789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b05bd0153ab822a6651f46e700de46ed8570bca9dabc224dca65bf0f12183b48-merged.mount: Deactivated successfully.
Oct  7 09:31:45 np0005473739 podman[81945]: 2025-10-07 13:31:45.910388325 +0000 UTC m=+0.164614538 container remove aa4d9793903722241c600c87ef2113d1524e1d8a30599d333a7397b1cb0c2789 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_fermi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 09:31:45 np0005473739 systemd[1]: libpod-conmon-aa4d9793903722241c600c87ef2113d1524e1d8a30599d333a7397b1cb0c2789.scope: Deactivated successfully.
Oct  7 09:31:45 np0005473739 systemd[1]: Reloading.
Oct  7 09:31:46 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:31:46 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:31:46 np0005473739 python3[81991]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:31:46 np0005473739 podman[82042]: 2025-10-07 13:31:46.086798454 +0000 UTC m=+0.040991580 container create 8984533fe8c4d4d9327f43b859764ca1959a612cc76d4e0dab722c615d6f6d24 (image=quay.io/ceph/ceph:v18, name=quizzical_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:31:46 np0005473739 podman[82042]: 2025-10-07 13:31:46.070065865 +0000 UTC m=+0.024259031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:46 np0005473739 systemd[1]: Started libpod-conmon-8984533fe8c4d4d9327f43b859764ca1959a612cc76d4e0dab722c615d6f6d24.scope.
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]: dispatch
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "auth get-or-create", "entity": "client.crash.compute-0", "caps": ["mon", "profile crash", "mgr", "profile crash"]}]': finished
Oct  7 09:31:46 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:46 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bcb269fb2ca1c2f3e17e74920dc4ad8db013fe477a1ae582350ae86da744736/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:46 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bcb269fb2ca1c2f3e17e74920dc4ad8db013fe477a1ae582350ae86da744736/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:46 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1bcb269fb2ca1c2f3e17e74920dc4ad8db013fe477a1ae582350ae86da744736/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:46 np0005473739 podman[82042]: 2025-10-07 13:31:46.253709406 +0000 UTC m=+0.207902562 container init 8984533fe8c4d4d9327f43b859764ca1959a612cc76d4e0dab722c615d6f6d24 (image=quay.io/ceph/ceph:v18, name=quizzical_haibt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:31:46 np0005473739 systemd[1]: Reloading.
Oct  7 09:31:46 np0005473739 podman[82042]: 2025-10-07 13:31:46.261084694 +0000 UTC m=+0.215277820 container start 8984533fe8c4d4d9327f43b859764ca1959a612cc76d4e0dab722c615d6f6d24 (image=quay.io/ceph/ceph:v18, name=quizzical_haibt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 09:31:46 np0005473739 podman[82042]: 2025-10-07 13:31:46.264578862 +0000 UTC m=+0.218771988 container attach 8984533fe8c4d4d9327f43b859764ca1959a612cc76d4e0dab722c615d6f6d24 (image=quay.io/ceph/ceph:v18, name=quizzical_haibt, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:46 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:31:46 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:31:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v5: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:31:46 np0005473739 systemd[1]: Starting Ceph crash.compute-0 for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:31:46 np0005473739 ansible-async_wrapper.py[80101]: Done in kid B.
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=log_to_file}] v 0) v1
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4270431825' entity='client.admin' 
Oct  7 09:31:46 np0005473739 podman[82171]: 2025-10-07 13:31:46.791765892 +0000 UTC m=+0.046884277 container create b63fb31a4b12a6f108788b29221eaa714d1de5382db9a7387c71346595a82b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:46 np0005473739 systemd[1]: libpod-8984533fe8c4d4d9327f43b859764ca1959a612cc76d4e0dab722c615d6f6d24.scope: Deactivated successfully.
Oct  7 09:31:46 np0005473739 podman[82042]: 2025-10-07 13:31:46.803213613 +0000 UTC m=+0.757406729 container died 8984533fe8c4d4d9327f43b859764ca1959a612cc76d4e0dab722c615d6f6d24 (image=quay.io/ceph/ceph:v18, name=quizzical_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:31:46 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1bcb269fb2ca1c2f3e17e74920dc4ad8db013fe477a1ae582350ae86da744736-merged.mount: Deactivated successfully.
Oct  7 09:31:46 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972abb60cb94e09c14c1d700da81cfc9772e56f65620e8e6b82bc9ce4363bd7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:46 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972abb60cb94e09c14c1d700da81cfc9772e56f65620e8e6b82bc9ce4363bd7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:46 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972abb60cb94e09c14c1d700da81cfc9772e56f65620e8e6b82bc9ce4363bd7/merged/etc/ceph/ceph.client.crash.compute-0.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:46 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c972abb60cb94e09c14c1d700da81cfc9772e56f65620e8e6b82bc9ce4363bd7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:46 np0005473739 podman[82042]: 2025-10-07 13:31:46.849687536 +0000 UTC m=+0.803880662 container remove 8984533fe8c4d4d9327f43b859764ca1959a612cc76d4e0dab722c615d6f6d24 (image=quay.io/ceph/ceph:v18, name=quizzical_haibt, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 09:31:46 np0005473739 podman[82171]: 2025-10-07 13:31:46.858883005 +0000 UTC m=+0.114001470 container init b63fb31a4b12a6f108788b29221eaa714d1de5382db9a7387c71346595a82b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:31:46 np0005473739 podman[82171]: 2025-10-07 13:31:46.774580549 +0000 UTC m=+0.029698944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:31:46 np0005473739 podman[82171]: 2025-10-07 13:31:46.866375774 +0000 UTC m=+0.121494199 container start b63fb31a4b12a6f108788b29221eaa714d1de5382db9a7387c71346595a82b85 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:46 np0005473739 bash[82171]: b63fb31a4b12a6f108788b29221eaa714d1de5382db9a7387c71346595a82b85
Oct  7 09:31:46 np0005473739 systemd[1]: libpod-conmon-8984533fe8c4d4d9327f43b859764ca1959a612cc76d4e0dab722c615d6f6d24.scope: Deactivated successfully.
Oct  7 09:31:46 np0005473739 systemd[1]: Started Ceph crash.compute-0 for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:46 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev da1e6bf1-2d5a-4b28-9fb4-ed6ce6d5c4f4 (Updating crash deployment (+1 -> 1))
Oct  7 09:31:46 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event da1e6bf1-2d5a-4b28-9fb4-ed6ce6d5c4f4 (Updating crash deployment (+1 -> 1)) in 2 seconds
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.crash}] v 0) v1
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:46 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a89d7c75-85d9-4f39-87e8-b514deb0193d does not exist
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:46 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev 88b7e60a-1831-4b24-93de-c6b6cfbef95b (Updating mgr deployment (+1 -> 2))
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.jdcgnb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jdcgnb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jdcgnb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:31:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:31:46 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Deploying daemon mgr.compute-0.jdcgnb on compute-0
Oct  7 09:31:46 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Deploying daemon mgr.compute-0.jdcgnb on compute-0
Oct  7 09:31:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0[82190]: INFO:ceph-crash:pinging cluster to exercise our key
Oct  7 09:31:47 np0005473739 python3[82254]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config set global mon_cluster_log_to_file true _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: Deploying daemon crash.compute-0 on compute-0
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/4270431825' entity='client.admin' 
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jdcgnb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.jdcgnb", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]': finished
Oct  7 09:31:47 np0005473739 podman[82329]: 2025-10-07 13:31:47.257011713 +0000 UTC m=+0.035925559 container create e103f93ceeacb310744ac77a59715dc226b838cfcb99fe37a18b62ce43270dbd (image=quay.io/ceph/ceph:v18, name=upbeat_diffie, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0[82190]: 2025-10-07T13:31:47.262+0000 7fe4bed34640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  7 09:31:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0[82190]: 2025-10-07T13:31:47.262+0000 7fe4bed34640 -1 AuthRegistry(0x7fe4b8067440) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  7 09:31:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0[82190]: 2025-10-07T13:31:47.263+0000 7fe4bed34640 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
Oct  7 09:31:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0[82190]: 2025-10-07T13:31:47.263+0000 7fe4bed34640 -1 AuthRegistry(0x7fe4bed33000) no keyring found at /etc/ceph/ceph.client.admin.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin, disabling cephx
Oct  7 09:31:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0[82190]: 2025-10-07T13:31:47.264+0000 7fe4bcaa9640 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
Oct  7 09:31:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0[82190]: 2025-10-07T13:31:47.264+0000 7fe4bed34640 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
Oct  7 09:31:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0[82190]: [errno 13] RADOS permission denied (error connecting to the cluster)
Oct  7 09:31:47 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-crash-compute-0[82190]: INFO:ceph-crash:monitoring path /var/lib/ceph/crash, delay 600s
Oct  7 09:31:47 np0005473739 systemd[1]: Started libpod-conmon-e103f93ceeacb310744ac77a59715dc226b838cfcb99fe37a18b62ce43270dbd.scope.
Oct  7 09:31:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a1c4abe5396b9bcb78127c540752b51af3a7d87cfaff7ae4ce2c4d27669eef/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a1c4abe5396b9bcb78127c540752b51af3a7d87cfaff7ae4ce2c4d27669eef/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/06a1c4abe5396b9bcb78127c540752b51af3a7d87cfaff7ae4ce2c4d27669eef/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:47 np0005473739 podman[82329]: 2025-10-07 13:31:47.325960417 +0000 UTC m=+0.104874273 container init e103f93ceeacb310744ac77a59715dc226b838cfcb99fe37a18b62ce43270dbd (image=quay.io/ceph/ceph:v18, name=upbeat_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:47 np0005473739 podman[82329]: 2025-10-07 13:31:47.331335968 +0000 UTC m=+0.110249814 container start e103f93ceeacb310744ac77a59715dc226b838cfcb99fe37a18b62ce43270dbd (image=quay.io/ceph/ceph:v18, name=upbeat_diffie, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:31:47 np0005473739 podman[82329]: 2025-10-07 13:31:47.334133637 +0000 UTC m=+0.113047513 container attach e103f93ceeacb310744ac77a59715dc226b838cfcb99fe37a18b62ce43270dbd (image=quay.io/ceph/ceph:v18, name=upbeat_diffie, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:47 np0005473739 podman[82329]: 2025-10-07 13:31:47.244013629 +0000 UTC m=+0.022927495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:47 np0005473739 podman[82399]: 2025-10-07 13:31:47.495527064 +0000 UTC m=+0.034841587 container create f4266b1f64581f7bf1ad3754796856948697761c8d25e6b00de0b54bb2036bce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:31:47 np0005473739 systemd[1]: Started libpod-conmon-f4266b1f64581f7bf1ad3754796856948697761c8d25e6b00de0b54bb2036bce.scope.
Oct  7 09:31:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:47 np0005473739 podman[82399]: 2025-10-07 13:31:47.567291618 +0000 UTC m=+0.106606141 container init f4266b1f64581f7bf1ad3754796856948697761c8d25e6b00de0b54bb2036bce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:47 np0005473739 podman[82399]: 2025-10-07 13:31:47.573031269 +0000 UTC m=+0.112345792 container start f4266b1f64581f7bf1ad3754796856948697761c8d25e6b00de0b54bb2036bce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 09:31:47 np0005473739 epic_mayer[82415]: 167 167
Oct  7 09:31:47 np0005473739 systemd[1]: libpod-f4266b1f64581f7bf1ad3754796856948697761c8d25e6b00de0b54bb2036bce.scope: Deactivated successfully.
Oct  7 09:31:47 np0005473739 podman[82399]: 2025-10-07 13:31:47.480005299 +0000 UTC m=+0.019319852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:31:47 np0005473739 podman[82399]: 2025-10-07 13:31:47.578119361 +0000 UTC m=+0.117433904 container attach f4266b1f64581f7bf1ad3754796856948697761c8d25e6b00de0b54bb2036bce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 09:31:47 np0005473739 podman[82399]: 2025-10-07 13:31:47.578374499 +0000 UTC m=+0.117689032 container died f4266b1f64581f7bf1ad3754796856948697761c8d25e6b00de0b54bb2036bce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:31:47 np0005473739 ceph-mgr[74587]: [progress INFO root] Writing back 1 completed events
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b463c7caa72471cbf230bf9dcf539dc22db10409ebde05fdce03b9fab62fdfa0-merged.mount: Deactivated successfully.
Oct  7 09:31:47 np0005473739 podman[82399]: 2025-10-07 13:31:47.613997128 +0000 UTC m=+0.153311651 container remove f4266b1f64581f7bf1ad3754796856948697761c8d25e6b00de0b54bb2036bce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 09:31:47 np0005473739 systemd[1]: libpod-conmon-f4266b1f64581f7bf1ad3754796856948697761c8d25e6b00de0b54bb2036bce.scope: Deactivated successfully.
Oct  7 09:31:47 np0005473739 systemd[1]: Reloading.
Oct  7 09:31:47 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:31:47 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mon_cluster_log_to_file}] v 0) v1
Oct  7 09:31:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1549962654' entity='client.admin' 
Oct  7 09:31:47 np0005473739 podman[82329]: 2025-10-07 13:31:47.868679573 +0000 UTC m=+0.647593419 container died e103f93ceeacb310744ac77a59715dc226b838cfcb99fe37a18b62ce43270dbd (image=quay.io/ceph/ceph:v18, name=upbeat_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:47 np0005473739 systemd[1]: libpod-e103f93ceeacb310744ac77a59715dc226b838cfcb99fe37a18b62ce43270dbd.scope: Deactivated successfully.
Oct  7 09:31:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-06a1c4abe5396b9bcb78127c540752b51af3a7d87cfaff7ae4ce2c4d27669eef-merged.mount: Deactivated successfully.
Oct  7 09:31:47 np0005473739 podman[82329]: 2025-10-07 13:31:47.960671664 +0000 UTC m=+0.739585510 container remove e103f93ceeacb310744ac77a59715dc226b838cfcb99fe37a18b62ce43270dbd (image=quay.io/ceph/ceph:v18, name=upbeat_diffie, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:47 np0005473739 systemd[1]: libpod-conmon-e103f93ceeacb310744ac77a59715dc226b838cfcb99fe37a18b62ce43270dbd.scope: Deactivated successfully.
Oct  7 09:31:47 np0005473739 systemd[1]: Reloading.
Oct  7 09:31:48 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:31:48 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:31:48 np0005473739 ceph-mon[74295]: Deploying daemon mgr.compute-0.jdcgnb on compute-0
Oct  7 09:31:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:48 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/1549962654' entity='client.admin' 
Oct  7 09:31:48 np0005473739 systemd[1]: Starting Ceph mgr.compute-0.jdcgnb for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:31:48 np0005473739 python3[82570]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd set-require-min-compat-client mimic#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:31:48 np0005473739 podman[82618]: 2025-10-07 13:31:48.473800979 +0000 UTC m=+0.038567523 container create a752ffd4102f2c370cb89ddc513155579d87f3caddbd6c3f57baae4eacc4fa93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-jdcgnb, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 09:31:48 np0005473739 podman[82624]: 2025-10-07 13:31:48.496480875 +0000 UTC m=+0.037996197 container create 94b10bf86ef0ac076d644f8ffb0e957c55b9a04d7706f34bd18cc90a09b57ca7 (image=quay.io/ceph/ceph:v18, name=cranky_lewin, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v6: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:31:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f860dd1b3a27c413ec7960e6577873321a7461a04e2e6c7bc7f357aa88cfc4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f860dd1b3a27c413ec7960e6577873321a7461a04e2e6c7bc7f357aa88cfc4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f860dd1b3a27c413ec7960e6577873321a7461a04e2e6c7bc7f357aa88cfc4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a0f860dd1b3a27c413ec7960e6577873321a7461a04e2e6c7bc7f357aa88cfc4/merged/var/lib/ceph/mgr/ceph-compute-0.jdcgnb supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:48 np0005473739 systemd[1]: Started libpod-conmon-94b10bf86ef0ac076d644f8ffb0e957c55b9a04d7706f34bd18cc90a09b57ca7.scope.
Oct  7 09:31:48 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b68e3cbfb72ddc8ff6b16ea481b7767950b7ab64b8e617235416193c3507698/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b68e3cbfb72ddc8ff6b16ea481b7767950b7ab64b8e617235416193c3507698/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7b68e3cbfb72ddc8ff6b16ea481b7767950b7ab64b8e617235416193c3507698/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:48 np0005473739 podman[82618]: 2025-10-07 13:31:48.543057482 +0000 UTC m=+0.107824076 container init a752ffd4102f2c370cb89ddc513155579d87f3caddbd6c3f57baae4eacc4fa93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-jdcgnb, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 09:31:48 np0005473739 podman[82618]: 2025-10-07 13:31:48.455067534 +0000 UTC m=+0.019834108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:31:48 np0005473739 podman[82624]: 2025-10-07 13:31:48.553505485 +0000 UTC m=+0.095020827 container init 94b10bf86ef0ac076d644f8ffb0e957c55b9a04d7706f34bd18cc90a09b57ca7 (image=quay.io/ceph/ceph:v18, name=cranky_lewin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 09:31:48 np0005473739 podman[82618]: 2025-10-07 13:31:48.553903156 +0000 UTC m=+0.118669720 container start a752ffd4102f2c370cb89ddc513155579d87f3caddbd6c3f57baae4eacc4fa93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-jdcgnb, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:48 np0005473739 bash[82618]: a752ffd4102f2c370cb89ddc513155579d87f3caddbd6c3f57baae4eacc4fa93
Oct  7 09:31:48 np0005473739 podman[82624]: 2025-10-07 13:31:48.563176706 +0000 UTC m=+0.104692008 container start 94b10bf86ef0ac076d644f8ffb0e957c55b9a04d7706f34bd18cc90a09b57ca7 (image=quay.io/ceph/ceph:v18, name=cranky_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 09:31:48 np0005473739 systemd[1]: Started Ceph mgr.compute-0.jdcgnb for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:31:48 np0005473739 podman[82624]: 2025-10-07 13:31:48.566206651 +0000 UTC m=+0.107721953 container attach 94b10bf86ef0ac076d644f8ffb0e957c55b9a04d7706f34bd18cc90a09b57ca7 (image=quay.io/ceph/ceph:v18, name=cranky_lewin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:48 np0005473739 podman[82624]: 2025-10-07 13:31:48.481261568 +0000 UTC m=+0.022776890 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:48 np0005473739 ceph-mgr[82654]: set uid:gid to 167:167 (ceph:ceph)
Oct  7 09:31:48 np0005473739 ceph-mgr[82654]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mgr, pid 2
Oct  7 09:31:48 np0005473739 ceph-mgr[82654]: pidfile_write: ignore empty --pid-file
Oct  7 09:31:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:31:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  7 09:31:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:48 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev 88b7e60a-1831-4b24-93de-c6b6cfbef95b (Updating mgr deployment (+1 -> 2))
Oct  7 09:31:48 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 88b7e60a-1831-4b24-93de-c6b6cfbef95b (Updating mgr deployment (+1 -> 2)) in 2 seconds
Oct  7 09:31:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  7 09:31:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:48 np0005473739 ceph-mgr[82654]: mgr[py] Loading python module 'alerts'
Oct  7 09:31:49 np0005473739 ceph-mgr[82654]: mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  7 09:31:49 np0005473739 ceph-mgr[82654]: mgr[py] Loading python module 'balancer'
Oct  7 09:31:49 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-jdcgnb[82645]: 2025-10-07T13:31:49.040+0000 7f9548a75140 -1 mgr[py] Module alerts has missing NOTIFY_TYPES member
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd set-require-min-compat-client", "version": "mimic"} v 0) v1
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/668171641' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  7 09:31:49 np0005473739 ceph-mgr[82654]: mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  7 09:31:49 np0005473739 ceph-mgr[82654]: mgr[py] Loading python module 'cephadm'
Oct  7 09:31:49 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-jdcgnb[82645]: 2025-10-07T13:31:49.301+0000 7f9548a75140 -1 mgr[py] Module balancer has missing NOTIFY_TYPES member
Oct  7 09:31:49 np0005473739 podman[82922]: 2025-10-07 13:31:49.491820498 +0000 UTC m=+0.060958271 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:49 np0005473739 podman[82922]: 2025-10-07 13:31:49.594282302 +0000 UTC m=+0.163420045 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/668171641' entity='client.admin' cmd=[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]: dispatch
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e2 do_prune osdmap full prune enabled
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e2 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/668171641' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e3 e3: 0 total, 0 up, 0 in
Oct  7 09:31:49 np0005473739 cranky_lewin[82650]: set require_min_compat_client to mimic
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e3: 0 total, 0 up, 0 in
Oct  7 09:31:49 np0005473739 systemd[1]: libpod-94b10bf86ef0ac076d644f8ffb0e957c55b9a04d7706f34bd18cc90a09b57ca7.scope: Deactivated successfully.
Oct  7 09:31:49 np0005473739 podman[82624]: 2025-10-07 13:31:49.666074767 +0000 UTC m=+1.207590069 container died 94b10bf86ef0ac076d644f8ffb0e957c55b9a04d7706f34bd18cc90a09b57ca7 (image=quay.io/ceph/ceph:v18, name=cranky_lewin, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:49 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7b68e3cbfb72ddc8ff6b16ea481b7767950b7ab64b8e617235416193c3507698-merged.mount: Deactivated successfully.
Oct  7 09:31:49 np0005473739 podman[82624]: 2025-10-07 13:31:49.722754006 +0000 UTC m=+1.264269308 container remove 94b10bf86ef0ac076d644f8ffb0e957c55b9a04d7706f34bd18cc90a09b57ca7 (image=quay.io/ceph/ceph:v18, name=cranky_lewin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:31:49 np0005473739 systemd[1]: libpod-conmon-94b10bf86ef0ac076d644f8ffb0e957c55b9a04d7706f34bd18cc90a09b57ca7.scope: Deactivated successfully.
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:31:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:49 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 2f07faa6-277e-455f-8b99-bb3fcf6a23f1 does not exist
Oct  7 09:31:49 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ed2b3f4c-6ed5-443d-8a68-1d9fdeb63dcb does not exist
Oct  7 09:31:49 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3c89b45e-1d95-42fb-abb2-577b209b6e6d does not exist
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_user}] v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/alertmanager/web_password}] v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_user}] v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/prometheus/web_password}] v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Reconfiguring mon.compute-0 (unknown last config time)...
Oct  7 09:31:50 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Reconfiguring mon.compute-0 (unknown last config time)...
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "mon."} v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config get", "who": "mon", "key": "public_network"} v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config get", "who": "mon", "key": "public_network"}]: dispatch
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:31:50 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Reconfiguring daemon mon.compute-0 on compute-0
Oct  7 09:31:50 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Reconfiguring daemon mon.compute-0 on compute-0
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:31:50 np0005473739 python3[83170]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:31:50 np0005473739 podman[83195]: 2025-10-07 13:31:50.448007232 +0000 UTC m=+0.068406140 container create 1cfba0a05b5534809efdfc001a5323da39071847356c8f4fa5d45f38d8d3c0bd (image=quay.io/ceph/ceph:v18, name=vigilant_raman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v8: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:31:50 np0005473739 systemd[1]: Started libpod-conmon-1cfba0a05b5534809efdfc001a5323da39071847356c8f4fa5d45f38d8d3c0bd.scope.
Oct  7 09:31:50 np0005473739 podman[83195]: 2025-10-07 13:31:50.427748595 +0000 UTC m=+0.048147523 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c075f0faeb6b53a6a8d120e20cfd3dccb832832756b135f23bcf6e0574b6c5e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c075f0faeb6b53a6a8d120e20cfd3dccb832832756b135f23bcf6e0574b6c5e8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c075f0faeb6b53a6a8d120e20cfd3dccb832832756b135f23bcf6e0574b6c5e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:50 np0005473739 podman[83222]: 2025-10-07 13:31:50.555853458 +0000 UTC m=+0.060746565 container create 36be745f060a014e9ed22f5faf0a93e4bac3a9dba6328ef6dace68534237fa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_black, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 09:31:50 np0005473739 podman[83195]: 2025-10-07 13:31:50.570451528 +0000 UTC m=+0.190850466 container init 1cfba0a05b5534809efdfc001a5323da39071847356c8f4fa5d45f38d8d3c0bd (image=quay.io/ceph/ceph:v18, name=vigilant_raman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:50 np0005473739 podman[83195]: 2025-10-07 13:31:50.586000544 +0000 UTC m=+0.206399452 container start 1cfba0a05b5534809efdfc001a5323da39071847356c8f4fa5d45f38d8d3c0bd (image=quay.io/ceph/ceph:v18, name=vigilant_raman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:31:50 np0005473739 podman[83195]: 2025-10-07 13:31:50.589381198 +0000 UTC m=+0.209780147 container attach 1cfba0a05b5534809efdfc001a5323da39071847356c8f4fa5d45f38d8d3c0bd (image=quay.io/ceph/ceph:v18, name=vigilant_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 09:31:50 np0005473739 systemd[1]: Started libpod-conmon-36be745f060a014e9ed22f5faf0a93e4bac3a9dba6328ef6dace68534237fa56.scope.
Oct  7 09:31:50 np0005473739 podman[83222]: 2025-10-07 13:31:50.518108229 +0000 UTC m=+0.023001376 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:31:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:50 np0005473739 podman[83222]: 2025-10-07 13:31:50.641959104 +0000 UTC m=+0.146852191 container init 36be745f060a014e9ed22f5faf0a93e4bac3a9dba6328ef6dace68534237fa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_black, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 09:31:50 np0005473739 podman[83222]: 2025-10-07 13:31:50.647171609 +0000 UTC m=+0.152064686 container start 36be745f060a014e9ed22f5faf0a93e4bac3a9dba6328ef6dace68534237fa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_black, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/668171641' entity='client.admin' cmd='[{"prefix": "osd set-require-min-compat-client", "version": "mimic"}]': finished
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: Reconfiguring mon.compute-0 (unknown last config time)...
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "mon."}]: dispatch
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: Reconfiguring daemon mon.compute-0 on compute-0
Oct  7 09:31:50 np0005473739 podman[83222]: 2025-10-07 13:31:50.65110938 +0000 UTC m=+0.156002467 container attach 36be745f060a014e9ed22f5faf0a93e4bac3a9dba6328ef6dace68534237fa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_black, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:31:50 np0005473739 crazy_black[83242]: 167 167
Oct  7 09:31:50 np0005473739 systemd[1]: libpod-36be745f060a014e9ed22f5faf0a93e4bac3a9dba6328ef6dace68534237fa56.scope: Deactivated successfully.
Oct  7 09:31:50 np0005473739 podman[83222]: 2025-10-07 13:31:50.654832245 +0000 UTC m=+0.159725322 container died 36be745f060a014e9ed22f5faf0a93e4bac3a9dba6328ef6dace68534237fa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_black, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e7fc0e4ddc8e7a600af7c336dbc89dcb686812ba62a64515b932e426a80a9e3d-merged.mount: Deactivated successfully.
Oct  7 09:31:50 np0005473739 podman[83222]: 2025-10-07 13:31:50.701580816 +0000 UTC m=+0.206473893 container remove 36be745f060a014e9ed22f5faf0a93e4bac3a9dba6328ef6dace68534237fa56 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_black, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:31:50 np0005473739 systemd[1]: libpod-conmon-36be745f060a014e9ed22f5faf0a93e4bac3a9dba6328ef6dace68534237fa56.scope: Deactivated successfully.
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:50 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Reconfiguring mgr.compute-0.kdyrcd (unknown last config time)...
Oct  7 09:31:50 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Reconfiguring mgr.compute-0.kdyrcd (unknown last config time)...
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mgr.compute-0.kdyrcd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]} v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kdyrcd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:31:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:31:50 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Reconfiguring daemon mgr.compute-0.kdyrcd on compute-0
Oct  7 09:31:50 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Reconfiguring daemon mgr.compute-0.kdyrcd on compute-0
Oct  7 09:31:51 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14186 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:31:51 np0005473739 podman[83431]: 2025-10-07 13:31:51.24851126 +0000 UTC m=+0.037156534 container create 3821d8f5af0e884a5b33c7dbddab3214d54aab19e649736fd1c6bb9993c7998b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:31:51 np0005473739 systemd[1]: Started libpod-conmon-3821d8f5af0e884a5b33c7dbddab3214d54aab19e649736fd1c6bb9993c7998b.scope.
Oct  7 09:31:51 np0005473739 ceph-mgr[82654]: mgr[py] Loading python module 'crash'
Oct  7 09:31:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:51 np0005473739 podman[83431]: 2025-10-07 13:31:51.325547371 +0000 UTC m=+0.114192685 container init 3821d8f5af0e884a5b33c7dbddab3214d54aab19e649736fd1c6bb9993c7998b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:31:51 np0005473739 podman[83431]: 2025-10-07 13:31:51.232363466 +0000 UTC m=+0.021008780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:31:51 np0005473739 podman[83431]: 2025-10-07 13:31:51.331613991 +0000 UTC m=+0.120259275 container start 3821d8f5af0e884a5b33c7dbddab3214d54aab19e649736fd1c6bb9993c7998b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:51 np0005473739 stoic_hermann[83471]: 167 167
Oct  7 09:31:51 np0005473739 systemd[1]: libpod-3821d8f5af0e884a5b33c7dbddab3214d54aab19e649736fd1c6bb9993c7998b.scope: Deactivated successfully.
Oct  7 09:31:51 np0005473739 podman[83431]: 2025-10-07 13:31:51.335360986 +0000 UTC m=+0.124006270 container attach 3821d8f5af0e884a5b33c7dbddab3214d54aab19e649736fd1c6bb9993c7998b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:51 np0005473739 podman[83431]: 2025-10-07 13:31:51.335808308 +0000 UTC m=+0.124453612 container died 3821d8f5af0e884a5b33c7dbddab3214d54aab19e649736fd1c6bb9993c7998b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:51 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c5cf05b5668ef76c913fa76889503d5ff60e012f590b203355b7bd158228419e-merged.mount: Deactivated successfully.
Oct  7 09:31:51 np0005473739 podman[83431]: 2025-10-07 13:31:51.368265799 +0000 UTC m=+0.156911073 container remove 3821d8f5af0e884a5b33c7dbddab3214d54aab19e649736fd1c6bb9993c7998b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_hermann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef)
Oct  7 09:31:51 np0005473739 systemd[1]: libpod-conmon-3821d8f5af0e884a5b33c7dbddab3214d54aab19e649736fd1c6bb9993c7998b.scope: Deactivated successfully.
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mgr[82654]: mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  7 09:31:51 np0005473739 ceph-mgr[82654]: mgr[py] Loading python module 'dashboard'
Oct  7 09:31:51 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-jdcgnb[82645]: 2025-10-07T13:31:51.590+0000 7f9548a75140 -1 mgr[py] Module crash has missing NOTIFY_TYPES member
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/inventory}] v 0) v1
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Added host compute-0
Oct  7 09:31:51 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Added host compute-0
Oct  7 09:31:51 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Saving service mon spec with placement compute-0
Oct  7 09:31:51 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Saving service mon spec with placement compute-0
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Saving service mgr spec with placement compute-0
Oct  7 09:31:51 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Saving service mgr spec with placement compute-0
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Marking host: compute-0 for OSDSpec preview refresh.
Oct  7 09:31:51 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Marking host: compute-0 for OSDSpec preview refresh.
Oct  7 09:31:51 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Saving service osd.default_drive_group spec with placement compute-0
Oct  7 09:31:51 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Saving service osd.default_drive_group spec with placement compute-0
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.osd.default_drive_group}] v 0) v1
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 vigilant_raman[83235]: Added host 'compute-0' with addr '192.168.122.100'
Oct  7 09:31:51 np0005473739 vigilant_raman[83235]: Scheduled mon update...
Oct  7 09:31:51 np0005473739 vigilant_raman[83235]: Scheduled mgr update...
Oct  7 09:31:51 np0005473739 vigilant_raman[83235]: Scheduled osd.default_drive_group update...
Oct  7 09:31:51 np0005473739 systemd[1]: libpod-1cfba0a05b5534809efdfc001a5323da39071847356c8f4fa5d45f38d8d3c0bd.scope: Deactivated successfully.
Oct  7 09:31:51 np0005473739 podman[83195]: 2025-10-07 13:31:51.702261839 +0000 UTC m=+1.322660757 container died 1cfba0a05b5534809efdfc001a5323da39071847356c8f4fa5d45f38d8d3c0bd (image=quay.io/ceph/ceph:v18, name=vigilant_raman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:31:51 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c075f0faeb6b53a6a8d120e20cfd3dccb832832756b135f23bcf6e0574b6c5e8-merged.mount: Deactivated successfully.
Oct  7 09:31:51 np0005473739 podman[83195]: 2025-10-07 13:31:51.747441806 +0000 UTC m=+1.367840704 container remove 1cfba0a05b5534809efdfc001a5323da39071847356c8f4fa5d45f38d8d3c0bd (image=quay.io/ceph/ceph:v18, name=vigilant_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 09:31:51 np0005473739 systemd[1]: libpod-conmon-1cfba0a05b5534809efdfc001a5323da39071847356c8f4fa5d45f38d8d3c0bd.scope: Deactivated successfully.
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: Reconfiguring mgr.compute-0.kdyrcd (unknown last config time)...
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get-or-create", "entity": "mgr.compute-0.kdyrcd", "caps": ["mon", "profile mgr", "osd", "allow *", "mds", "allow *"]}]: dispatch
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: Reconfiguring daemon mgr.compute-0.kdyrcd on compute-0
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 podman[83746]: 2025-10-07 13:31:52.09371354 +0000 UTC m=+0.080286293 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 09:31:52 np0005473739 python3[83789]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:31:52 np0005473739 podman[83792]: 2025-10-07 13:31:52.252054423 +0000 UTC m=+0.050685774 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:52 np0005473739 podman[83746]: 2025-10-07 13:31:52.256445066 +0000 UTC m=+0.243017779 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:31:52 np0005473739 podman[83805]: 2025-10-07 13:31:52.288999069 +0000 UTC m=+0.053018698 container create c7970599e75fa82876c313e5e1350663b54bc69cf06099c4037556626cd8e94b (image=quay.io/ceph/ceph:v18, name=pensive_boyd, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:31:52 np0005473739 systemd[1]: Started libpod-conmon-c7970599e75fa82876c313e5e1350663b54bc69cf06099c4037556626cd8e94b.scope.
Oct  7 09:31:52 np0005473739 podman[83805]: 2025-10-07 13:31:52.259468971 +0000 UTC m=+0.023488600 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:31:52 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:52 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a60e9fe5556ac7171c48042b3562080daff382ae3b52881fa136be2419f706b4/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:52 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a60e9fe5556ac7171c48042b3562080daff382ae3b52881fa136be2419f706b4/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:52 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a60e9fe5556ac7171c48042b3562080daff382ae3b52881fa136be2419f706b4/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:52 np0005473739 podman[83805]: 2025-10-07 13:31:52.385049214 +0000 UTC m=+0.149068853 container init c7970599e75fa82876c313e5e1350663b54bc69cf06099c4037556626cd8e94b (image=quay.io/ceph/ceph:v18, name=pensive_boyd, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:52 np0005473739 podman[83805]: 2025-10-07 13:31:52.391969378 +0000 UTC m=+0.155989007 container start c7970599e75fa82876c313e5e1350663b54bc69cf06099c4037556626cd8e94b (image=quay.io/ceph/ceph:v18, name=pensive_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 09:31:52 np0005473739 podman[83805]: 2025-10-07 13:31:52.395021204 +0000 UTC m=+0.159040873 container attach c7970599e75fa82876c313e5e1350663b54bc69cf06099c4037556626cd8e94b (image=quay.io/ceph/ceph:v18, name=pensive_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v9: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: [progress INFO root] Writing back 2 completed events
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 6642e4f2-9d88-4134-bcef-7d121889b898 does not exist
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mon}] v 0) v1
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev bd9086a7-2cf6-482a-86e3-d5b0040fbcb0 (Updating mgr deployment (-1 -> 1))
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Removing daemon mgr.compute-0.jdcgnb from compute-0 -- ports [8765]
Oct  7 09:31:52 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Removing daemon mgr.compute-0.jdcgnb from compute-0 -- ports [8765]
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: Added host compute-0
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: Saving service mon spec with placement compute-0
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: Saving service mgr spec with placement compute-0
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: Marking host: compute-0 for OSDSpec preview refresh.
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: Saving service osd.default_drive_group spec with placement compute-0
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:52 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:53 np0005473739 ceph-mgr[82654]: mgr[py] Loading python module 'devicehealth'
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1656616449' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  7 09:31:53 np0005473739 pensive_boyd[83825]: 
Oct  7 09:31:53 np0005473739 pensive_boyd[83825]: {"fsid":"82044f27-a8da-5b2a-a297-ff6afc620e1f","health":{"status":"HEALTH_WARN","checks":{"TOO_FEW_OSDS":{"severity":"HEALTH_WARN","summary":{"message":"OSD count 0 < osd_pool_default_size 1","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":77,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":3,"num_osds":0,"num_up_osds":0,"osd_up_since":0,"num_in_osds":0,"osd_in_since":0,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":0,"bytes_avail":0,"bytes_total":0},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-07T13:30:32.279410+0000","services":{}},"progress_events":{}}
Oct  7 09:31:53 np0005473739 systemd[1]: libpod-c7970599e75fa82876c313e5e1350663b54bc69cf06099c4037556626cd8e94b.scope: Deactivated successfully.
Oct  7 09:31:53 np0005473739 podman[83805]: 2025-10-07 13:31:53.070879463 +0000 UTC m=+0.834899092 container died c7970599e75fa82876c313e5e1350663b54bc69cf06099c4037556626cd8e94b (image=quay.io/ceph/ceph:v18, name=pensive_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 09:31:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a60e9fe5556ac7171c48042b3562080daff382ae3b52881fa136be2419f706b4-merged.mount: Deactivated successfully.
Oct  7 09:31:53 np0005473739 podman[83805]: 2025-10-07 13:31:53.110484535 +0000 UTC m=+0.874504164 container remove c7970599e75fa82876c313e5e1350663b54bc69cf06099c4037556626cd8e94b (image=quay.io/ceph/ceph:v18, name=pensive_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:53 np0005473739 systemd[1]: libpod-conmon-c7970599e75fa82876c313e5e1350663b54bc69cf06099c4037556626cd8e94b.scope: Deactivated successfully.
Oct  7 09:31:53 np0005473739 systemd[1]: Stopping Ceph mgr.compute-0.jdcgnb for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:31:53 np0005473739 ceph-mgr[82654]: mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  7 09:31:53 np0005473739 ceph-mgr[82654]: mgr[py] Loading python module 'diskprediction_local'
Oct  7 09:31:53 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-jdcgnb[82645]: 2025-10-07T13:31:53.292+0000 7f9548a75140 -1 mgr[py] Module devicehealth has missing NOTIFY_TYPES member
Oct  7 09:31:53 np0005473739 podman[84081]: 2025-10-07 13:31:53.333156322 +0000 UTC m=+0.061843166 container died a752ffd4102f2c370cb89ddc513155579d87f3caddbd6c3f57baae4eacc4fa93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-jdcgnb, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:31:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a0f860dd1b3a27c413ec7960e6577873321a7461a04e2e6c7bc7f357aa88cfc4-merged.mount: Deactivated successfully.
Oct  7 09:31:53 np0005473739 podman[84081]: 2025-10-07 13:31:53.375820208 +0000 UTC m=+0.104507042 container remove a752ffd4102f2c370cb89ddc513155579d87f3caddbd6c3f57baae4eacc4fa93 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-jdcgnb, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:53 np0005473739 bash[84081]: ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-jdcgnb
Oct  7 09:31:53 np0005473739 systemd[1]: ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@mgr.compute-0.jdcgnb.service: Main process exited, code=exited, status=143/n/a
Oct  7 09:31:53 np0005473739 systemd[1]: ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@mgr.compute-0.jdcgnb.service: Failed with result 'exit-code'.
Oct  7 09:31:53 np0005473739 systemd[1]: Stopped Ceph mgr.compute-0.jdcgnb for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:31:53 np0005473739 systemd[1]: ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@mgr.compute-0.jdcgnb.service: Consumed 5.423s CPU time.
Oct  7 09:31:53 np0005473739 systemd[1]: Reloading.
Oct  7 09:31:53 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:31:53 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: Removing daemon mgr.compute-0.jdcgnb from compute-0 -- ports [8765]
Oct  7 09:31:53 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.services.cephadmservice] Removing key for mgr.compute-0.jdcgnb
Oct  7 09:31:53 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Removing key for mgr.compute-0.jdcgnb
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth rm", "entity": "mgr.compute-0.jdcgnb"} v 0) v1
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jdcgnb"}]: dispatch
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jdcgnb"}]': finished
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:53 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev bd9086a7-2cf6-482a-86e3-d5b0040fbcb0 (Updating mgr deployment (-1 -> 1))
Oct  7 09:31:53 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event bd9086a7-2cf6-482a-86e3-d5b0040fbcb0 (Updating mgr deployment (-1 -> 1)) in 1 seconds
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mgr}] v 0) v1
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:53 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c9ed80fc-d11c-48d7-ae38-028f4fcd4cb3 does not exist
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:31:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:31:54 np0005473739 podman[84313]: 2025-10-07 13:31:54.412563233 +0000 UTC m=+0.034338935 container create 7b377091df422b7efc0ae3724732f85f914975a6f3089ac4b3ebc95778bed021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_borg, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:31:54 np0005473739 systemd[1]: Started libpod-conmon-7b377091df422b7efc0ae3724732f85f914975a6f3089ac4b3ebc95778bed021.scope.
Oct  7 09:31:54 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:54 np0005473739 podman[84313]: 2025-10-07 13:31:54.47340552 +0000 UTC m=+0.095181232 container init 7b377091df422b7efc0ae3724732f85f914975a6f3089ac4b3ebc95778bed021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_borg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:31:54 np0005473739 podman[84313]: 2025-10-07 13:31:54.479512721 +0000 UTC m=+0.101288423 container start 7b377091df422b7efc0ae3724732f85f914975a6f3089ac4b3ebc95778bed021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:31:54 np0005473739 podman[84313]: 2025-10-07 13:31:54.482157715 +0000 UTC m=+0.103933427 container attach 7b377091df422b7efc0ae3724732f85f914975a6f3089ac4b3ebc95778bed021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:31:54 np0005473739 laughing_borg[84329]: 167 167
Oct  7 09:31:54 np0005473739 systemd[1]: libpod-7b377091df422b7efc0ae3724732f85f914975a6f3089ac4b3ebc95778bed021.scope: Deactivated successfully.
Oct  7 09:31:54 np0005473739 podman[84313]: 2025-10-07 13:31:54.48446075 +0000 UTC m=+0.106236452 container died 7b377091df422b7efc0ae3724732f85f914975a6f3089ac4b3ebc95778bed021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 09:31:54 np0005473739 podman[84313]: 2025-10-07 13:31:54.397403608 +0000 UTC m=+0.019179330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:31:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v10: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:31:54 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b28187f7d01c0a73d3033f96b2273de647cbbc75ab68ea3fa4dd8064081a3710-merged.mount: Deactivated successfully.
Oct  7 09:31:54 np0005473739 podman[84313]: 2025-10-07 13:31:54.527390205 +0000 UTC m=+0.149165907 container remove 7b377091df422b7efc0ae3724732f85f914975a6f3089ac4b3ebc95778bed021 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_borg, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:31:54 np0005473739 systemd[1]: libpod-conmon-7b377091df422b7efc0ae3724732f85f914975a6f3089ac4b3ebc95778bed021.scope: Deactivated successfully.
Oct  7 09:31:54 np0005473739 podman[84355]: 2025-10-07 13:31:54.655423956 +0000 UTC m=+0.036700530 container create 0a5eeddef5581f94d541d60d3abc6dafc5b493f440d42b595f79098ee8147f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_darwin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:54 np0005473739 systemd[1]: Started libpod-conmon-0a5eeddef5581f94d541d60d3abc6dafc5b493f440d42b595f79098ee8147f26.scope.
Oct  7 09:31:54 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:31:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454dcd72bc137aac8c7019f768a2ed6cc9d3f397d07479efcafbbbed332200bc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454dcd72bc137aac8c7019f768a2ed6cc9d3f397d07479efcafbbbed332200bc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454dcd72bc137aac8c7019f768a2ed6cc9d3f397d07479efcafbbbed332200bc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454dcd72bc137aac8c7019f768a2ed6cc9d3f397d07479efcafbbbed332200bc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/454dcd72bc137aac8c7019f768a2ed6cc9d3f397d07479efcafbbbed332200bc/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:31:54 np0005473739 podman[84355]: 2025-10-07 13:31:54.731178241 +0000 UTC m=+0.112454835 container init 0a5eeddef5581f94d541d60d3abc6dafc5b493f440d42b595f79098ee8147f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:31:54 np0005473739 podman[84355]: 2025-10-07 13:31:54.639790187 +0000 UTC m=+0.021066771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:31:54 np0005473739 podman[84355]: 2025-10-07 13:31:54.73968794 +0000 UTC m=+0.120964524 container start 0a5eeddef5581f94d541d60d3abc6dafc5b493f440d42b595f79098ee8147f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_darwin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 09:31:54 np0005473739 podman[84355]: 2025-10-07 13:31:54.743127697 +0000 UTC m=+0.124404301 container attach 0a5eeddef5581f94d541d60d3abc6dafc5b493f440d42b595f79098ee8147f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_darwin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:31:54 np0005473739 ceph-mon[74295]: Removing key for mgr.compute-0.jdcgnb
Oct  7 09:31:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth rm", "entity": "mgr.compute-0.jdcgnb"}]: dispatch
Oct  7 09:31:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "auth rm", "entity": "mgr.compute-0.jdcgnb"}]': finished
Oct  7 09:31:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:31:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e3 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:31:55 np0005473739 eager_darwin[84372]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:31:55 np0005473739 eager_darwin[84372]: --> relative data size: 1.0
Oct  7 09:31:55 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  7 09:31:55 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new d73aa1fa-846f-4eff-9f39-a8604df9c070
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070"} v 0) v1
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2564755247' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070"}]: dispatch
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e3 do_prune osdmap full prune enabled
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e3 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2564755247' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070"}]': finished
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e4 e4: 1 total, 0 up, 1 in
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e4: 1 total, 0 up, 1 in
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:31:56 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  7 09:31:56 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  7 09:31:56 np0005473739 lvm[84433]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  7 09:31:56 np0005473739 lvm[84433]: VG ceph_vg0 finished
Oct  7 09:31:56 np0005473739 eager_darwin[84372]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-0
Oct  7 09:31:56 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg0/ceph_lv0
Oct  7 09:31:56 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  7 09:31:56 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ln -s /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  7 09:31:56 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-0/activate.monmap
Oct  7 09:31:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v12: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737298269' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  7 09:31:56 np0005473739 eager_darwin[84372]: stderr: got monmap epoch 1
Oct  7 09:31:56 np0005473739 eager_darwin[84372]: --> Creating keyring file for osd.0
Oct  7 09:31:56 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/keyring
Oct  7 09:31:56 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0/
Oct  7 09:31:56 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 0 --monmap /var/lib/ceph/osd/ceph-0/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-0/ --osd-uuid d73aa1fa-846f-4eff-9f39-a8604df9c070 --setuser ceph --setgroup ceph
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2564755247' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070"}]: dispatch
Oct  7 09:31:56 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2564755247' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070"}]': finished
Oct  7 09:31:57 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  7 09:31:57 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  7 09:31:57 np0005473739 ceph-mgr[74587]: [progress INFO root] Writing back 3 completed events
Oct  7 09:31:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  7 09:31:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:57 np0005473739 ceph-mon[74295]: Health check cleared: TOO_FEW_OSDS (was: OSD count 0 < osd_pool_default_size 1)
Oct  7 09:31:57 np0005473739 ceph-mon[74295]: Cluster is now healthy
Oct  7 09:31:57 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:31:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v13: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:31:58 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:31:56.784+0000 7fbf0fe90740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  7 09:31:58 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:31:56.785+0000 7fbf0fe90740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  7 09:31:58 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:31:56.785+0000 7fbf0fe90740 -1 bluestore(/var/lib/ceph/osd/ceph-0//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  7 09:31:58 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:31:56.785+0000 7fbf0fe90740 -1 bluestore(/var/lib/ceph/osd/ceph-0/) _read_fsid unparsable uuid
Oct  7 09:31:58 np0005473739 eager_darwin[84372]: --> ceph-volume lvm prepare successful for: ceph_vg0/ceph_lv0
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg0/ceph_lv0 --path /var/lib/ceph/osd/ceph-0 --no-mon-config
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ln -snf /dev/ceph_vg0/ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-0/block
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: --> ceph-volume lvm activate successful for osd ID: 0
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: --> ceph-volume lvm create successful for: ceph_vg0/ceph_lv0
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50"} v 0) v1
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3714041785' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50"}]: dispatch
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e4 do_prune osdmap full prune enabled
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e4 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3714041785' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50"}]': finished
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e5 e5: 2 total, 0 up, 2 in
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e5: 2 total, 0 up, 2 in
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:31:59 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  7 09:31:59 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  7 09:31:59 np0005473739 lvm[85369]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  7 09:31:59 np0005473739 lvm[85369]: VG ceph_vg1 finished
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-1
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg1/ceph_lv1
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ln -s /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  7 09:31:59 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-1/activate.monmap
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3714041785' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50"}]: dispatch
Oct  7 09:31:59 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3714041785' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50"}]': finished
Oct  7 09:32:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e5 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:32:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  7 09:32:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/264361435' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  7 09:32:00 np0005473739 eager_darwin[84372]: stderr: got monmap epoch 1
Oct  7 09:32:00 np0005473739 eager_darwin[84372]: --> Creating keyring file for osd.1
Oct  7 09:32:00 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/keyring
Oct  7 09:32:00 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1/
Oct  7 09:32:00 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 1 --monmap /var/lib/ceph/osd/ceph-1/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-1/ --osd-uuid 953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50 --setuser ceph --setgroup ceph
Oct  7 09:32:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v15: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:32:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v16: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:32:00.344+0000 7f1e8d2b4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:32:00.345+0000 7f1e8d2b4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:32:00.345+0000 7f1e8d2b4740 -1 bluestore(/var/lib/ceph/osd/ceph-1//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:32:00.345+0000 7f1e8d2b4740 -1 bluestore(/var/lib/ceph/osd/ceph-1/) _read_fsid unparsable uuid
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: --> ceph-volume lvm prepare successful for: ceph_vg1/ceph_lv1
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg1/ceph_lv1 --path /var/lib/ceph/osd/ceph-1 --no-mon-config
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ln -snf /dev/ceph_vg1/ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-1/block
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: --> ceph-volume lvm activate successful for osd ID: 1
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: --> ceph-volume lvm create successful for: ceph_vg1/ceph_lv1
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  7 09:32:02 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 4fdcd5b3-36e5-4c62-927d-1d87ba9aa204
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd new", "uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204"} v 0) v1
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1239087338' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204"}]: dispatch
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e5 do_prune osdmap full prune enabled
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e5 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1239087338' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204"}]': finished
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e6 e6: 3 total, 0 up, 3 in
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e6: 3 total, 0 up, 3 in
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:03 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:32:03 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:03 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  7 09:32:03 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-authtool --gen-print-key
Oct  7 09:32:03 np0005473739 lvm[86310]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  7 09:32:03 np0005473739 lvm[86310]: VG ceph_vg2 finished
Oct  7 09:32:03 np0005473739 eager_darwin[84372]: Running command: /usr/bin/mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-2
Oct  7 09:32:03 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -h ceph:ceph /dev/ceph_vg2/ceph_lv2
Oct  7 09:32:03 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  7 09:32:03 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ln -s /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:03 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring mon getmap -o /var/lib/ceph/osd/ceph-2/activate.monmap
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon getmap"} v 0) v1
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3553395897' entity='client.bootstrap-osd' cmd=[{"prefix": "mon getmap"}]: dispatch
Oct  7 09:32:03 np0005473739 eager_darwin[84372]: stderr: got monmap epoch 1
Oct  7 09:32:03 np0005473739 eager_darwin[84372]: --> Creating keyring file for osd.2
Oct  7 09:32:03 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/keyring
Oct  7 09:32:03 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2/
Oct  7 09:32:03 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-osd --cluster ceph --osd-objectstore bluestore --mkfs -i 2 --monmap /var/lib/ceph/osd/ceph-2/activate.monmap --keyfile - --osdspec-affinity default_drive_group --osd-data /var/lib/ceph/osd/ceph-2/ --osd-uuid 4fdcd5b3-36e5-4c62-927d-1d87ba9aa204 --setuser ceph --setgroup ceph
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/1239087338' entity='client.bootstrap-osd' cmd=[{"prefix": "osd new", "uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204"}]: dispatch
Oct  7 09:32:03 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/1239087338' entity='client.bootstrap-osd' cmd='[{"prefix": "osd new", "uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204"}]': finished
Oct  7 09:32:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v18: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:32:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:32:05 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:32:03.802+0000 7fa00135c740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  7 09:32:05 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:32:03.802+0000 7fa00135c740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  7 09:32:05 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:32:03.802+0000 7fa00135c740 -1 bluestore(/var/lib/ceph/osd/ceph-2//block) _read_bdev_label unable to decode label at offset 102: void bluestore_bdev_label_t::decode(ceph::buffer::v15_2_0::list::const_iterator&) decode past end of struct encoding: Malformed input [buffer:3]
Oct  7 09:32:05 np0005473739 eager_darwin[84372]: stderr: 2025-10-07T13:32:03.803+0000 7fa00135c740 -1 bluestore(/var/lib/ceph/osd/ceph-2/) _read_fsid unparsable uuid
Oct  7 09:32:05 np0005473739 eager_darwin[84372]: --> ceph-volume lvm prepare successful for: ceph_vg2/ceph_lv2
Oct  7 09:32:06 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  7 09:32:06 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev /dev/ceph_vg2/ceph_lv2 --path /var/lib/ceph/osd/ceph-2 --no-mon-config
Oct  7 09:32:06 np0005473739 eager_darwin[84372]: Running command: /usr/bin/ln -snf /dev/ceph_vg2/ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:06 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -h ceph:ceph /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:06 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  7 09:32:06 np0005473739 eager_darwin[84372]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  7 09:32:06 np0005473739 eager_darwin[84372]: --> ceph-volume lvm activate successful for osd ID: 2
Oct  7 09:32:06 np0005473739 eager_darwin[84372]: --> ceph-volume lvm create successful for: ceph_vg2/ceph_lv2
Oct  7 09:32:06 np0005473739 systemd[1]: libpod-0a5eeddef5581f94d541d60d3abc6dafc5b493f440d42b595f79098ee8147f26.scope: Deactivated successfully.
Oct  7 09:32:06 np0005473739 systemd[1]: libpod-0a5eeddef5581f94d541d60d3abc6dafc5b493f440d42b595f79098ee8147f26.scope: Consumed 5.704s CPU time.
Oct  7 09:32:06 np0005473739 podman[84355]: 2025-10-07 13:32:06.149133416 +0000 UTC m=+11.530410000 container died 0a5eeddef5581f94d541d60d3abc6dafc5b493f440d42b595f79098ee8147f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_darwin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  7 09:32:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-454dcd72bc137aac8c7019f768a2ed6cc9d3f397d07479efcafbbbed332200bc-merged.mount: Deactivated successfully.
Oct  7 09:32:06 np0005473739 podman[84355]: 2025-10-07 13:32:06.209384337 +0000 UTC m=+11.590660911 container remove 0a5eeddef5581f94d541d60d3abc6dafc5b493f440d42b595f79098ee8147f26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_darwin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:06 np0005473739 systemd[1]: libpod-conmon-0a5eeddef5581f94d541d60d3abc6dafc5b493f440d42b595f79098ee8147f26.scope: Deactivated successfully.
Oct  7 09:32:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v19: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:32:06 np0005473739 podman[87369]: 2025-10-07 13:32:06.794844311 +0000 UTC m=+0.034771467 container create 0fbe55e585a5e8f7bb978982e1b16a2e5f784b77954d0209be5eaa18c6784a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:06 np0005473739 systemd[1]: Started libpod-conmon-0fbe55e585a5e8f7bb978982e1b16a2e5f784b77954d0209be5eaa18c6784a27.scope.
Oct  7 09:32:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:06 np0005473739 podman[87369]: 2025-10-07 13:32:06.85685531 +0000 UTC m=+0.096782486 container init 0fbe55e585a5e8f7bb978982e1b16a2e5f784b77954d0209be5eaa18c6784a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:06 np0005473739 podman[87369]: 2025-10-07 13:32:06.862465498 +0000 UTC m=+0.102392654 container start 0fbe55e585a5e8f7bb978982e1b16a2e5f784b77954d0209be5eaa18c6784a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:06 np0005473739 jovial_perlman[87385]: 167 167
Oct  7 09:32:06 np0005473739 podman[87369]: 2025-10-07 13:32:06.86505274 +0000 UTC m=+0.104979886 container attach 0fbe55e585a5e8f7bb978982e1b16a2e5f784b77954d0209be5eaa18c6784a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:06 np0005473739 systemd[1]: libpod-0fbe55e585a5e8f7bb978982e1b16a2e5f784b77954d0209be5eaa18c6784a27.scope: Deactivated successfully.
Oct  7 09:32:06 np0005473739 podman[87369]: 2025-10-07 13:32:06.867633673 +0000 UTC m=+0.107560839 container died 0fbe55e585a5e8f7bb978982e1b16a2e5f784b77954d0209be5eaa18c6784a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:06 np0005473739 podman[87369]: 2025-10-07 13:32:06.780593411 +0000 UTC m=+0.020520587 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b333e443aee7f1f1628446730a5b2618528b3014fef5989c31f59e231af6f859-merged.mount: Deactivated successfully.
Oct  7 09:32:06 np0005473739 podman[87369]: 2025-10-07 13:32:06.900376732 +0000 UTC m=+0.140303888 container remove 0fbe55e585a5e8f7bb978982e1b16a2e5f784b77954d0209be5eaa18c6784a27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_perlman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 09:32:06 np0005473739 systemd[1]: libpod-conmon-0fbe55e585a5e8f7bb978982e1b16a2e5f784b77954d0209be5eaa18c6784a27.scope: Deactivated successfully.
Oct  7 09:32:07 np0005473739 podman[87408]: 2025-10-07 13:32:07.047780997 +0000 UTC m=+0.045485368 container create fb2065c9bd743c7675f57fb1c6a89c0b84ed58049ec65bb30eab8c4e63340f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:07 np0005473739 systemd[1]: Started libpod-conmon-fb2065c9bd743c7675f57fb1c6a89c0b84ed58049ec65bb30eab8c4e63340f10.scope.
Oct  7 09:32:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f4a6f7349860fc905a2f52d0f49fa79cc795e604f63db147bf3a94a8b77d1c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f4a6f7349860fc905a2f52d0f49fa79cc795e604f63db147bf3a94a8b77d1c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f4a6f7349860fc905a2f52d0f49fa79cc795e604f63db147bf3a94a8b77d1c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b1f4a6f7349860fc905a2f52d0f49fa79cc795e604f63db147bf3a94a8b77d1c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:07 np0005473739 podman[87408]: 2025-10-07 13:32:07.124386336 +0000 UTC m=+0.122090707 container init fb2065c9bd743c7675f57fb1c6a89c0b84ed58049ec65bb30eab8c4e63340f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:32:07 np0005473739 podman[87408]: 2025-10-07 13:32:07.031779918 +0000 UTC m=+0.029484289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:07 np0005473739 podman[87408]: 2025-10-07 13:32:07.133678246 +0000 UTC m=+0.131382607 container start fb2065c9bd743c7675f57fb1c6a89c0b84ed58049ec65bb30eab8c4e63340f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:07 np0005473739 podman[87408]: 2025-10-07 13:32:07.151580079 +0000 UTC m=+0.149284430 container attach fb2065c9bd743c7675f57fb1c6a89c0b84ed58049ec65bb30eab8c4e63340f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]: {
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:    "0": [
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:        {
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "devices": [
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "/dev/loop3"
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            ],
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_name": "ceph_lv0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_size": "21470642176",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "name": "ceph_lv0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "tags": {
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.cluster_name": "ceph",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.crush_device_class": "",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.encrypted": "0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.osd_id": "0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.type": "block",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.vdo": "0"
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            },
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "type": "block",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "vg_name": "ceph_vg0"
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:        }
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:    ],
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:    "1": [
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:        {
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "devices": [
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "/dev/loop4"
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            ],
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_name": "ceph_lv1",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_size": "21470642176",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "name": "ceph_lv1",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "tags": {
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.cluster_name": "ceph",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.crush_device_class": "",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.encrypted": "0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.osd_id": "1",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.type": "block",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.vdo": "0"
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            },
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "type": "block",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "vg_name": "ceph_vg1"
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:        }
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:    ],
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:    "2": [
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:        {
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "devices": [
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "/dev/loop5"
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            ],
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_name": "ceph_lv2",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_size": "21470642176",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "name": "ceph_lv2",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "tags": {
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.cluster_name": "ceph",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.crush_device_class": "",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.encrypted": "0",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.osd_id": "2",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.type": "block",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:                "ceph.vdo": "0"
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            },
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "type": "block",
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:            "vg_name": "ceph_vg2"
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:        }
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]:    ]
Oct  7 09:32:07 np0005473739 zealous_gauss[87425]: }
Oct  7 09:32:07 np0005473739 systemd[1]: libpod-fb2065c9bd743c7675f57fb1c6a89c0b84ed58049ec65bb30eab8c4e63340f10.scope: Deactivated successfully.
Oct  7 09:32:07 np0005473739 podman[87408]: 2025-10-07 13:32:07.93688678 +0000 UTC m=+0.934591141 container died fb2065c9bd743c7675f57fb1c6a89c0b84ed58049ec65bb30eab8c4e63340f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b1f4a6f7349860fc905a2f52d0f49fa79cc795e604f63db147bf3a94a8b77d1c-merged.mount: Deactivated successfully.
Oct  7 09:32:07 np0005473739 podman[87408]: 2025-10-07 13:32:07.999197247 +0000 UTC m=+0.996901598 container remove fb2065c9bd743c7675f57fb1c6a89c0b84ed58049ec65bb30eab8c4e63340f10 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_gauss, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:08 np0005473739 systemd[1]: libpod-conmon-fb2065c9bd743c7675f57fb1c6a89c0b84ed58049ec65bb30eab8c4e63340f10.scope: Deactivated successfully.
Oct  7 09:32:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.0"} v 0) v1
Oct  7 09:32:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  7 09:32:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:32:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:32:08 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Deploying daemon osd.0 on compute-0
Oct  7 09:32:08 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Deploying daemon osd.0 on compute-0
Oct  7 09:32:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v20: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:32:08 np0005473739 podman[87587]: 2025-10-07 13:32:08.578147749 +0000 UTC m=+0.033376827 container create 08d583715e66ae22d1314fdc0bc47db4493e31715020a89eedf4cdfd1b9bfb2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 09:32:08 np0005473739 systemd[1]: Started libpod-conmon-08d583715e66ae22d1314fdc0bc47db4493e31715020a89eedf4cdfd1b9bfb2b.scope.
Oct  7 09:32:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:08 np0005473739 podman[87587]: 2025-10-07 13:32:08.642436523 +0000 UTC m=+0.097665631 container init 08d583715e66ae22d1314fdc0bc47db4493e31715020a89eedf4cdfd1b9bfb2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 09:32:08 np0005473739 podman[87587]: 2025-10-07 13:32:08.648623676 +0000 UTC m=+0.103852754 container start 08d583715e66ae22d1314fdc0bc47db4493e31715020a89eedf4cdfd1b9bfb2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:08 np0005473739 podman[87587]: 2025-10-07 13:32:08.652361791 +0000 UTC m=+0.107590889 container attach 08d583715e66ae22d1314fdc0bc47db4493e31715020a89eedf4cdfd1b9bfb2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:08 np0005473739 wizardly_herschel[87603]: 167 167
Oct  7 09:32:08 np0005473739 systemd[1]: libpod-08d583715e66ae22d1314fdc0bc47db4493e31715020a89eedf4cdfd1b9bfb2b.scope: Deactivated successfully.
Oct  7 09:32:08 np0005473739 podman[87587]: 2025-10-07 13:32:08.653284958 +0000 UTC m=+0.108514026 container died 08d583715e66ae22d1314fdc0bc47db4493e31715020a89eedf4cdfd1b9bfb2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:08 np0005473739 podman[87587]: 2025-10-07 13:32:08.563983642 +0000 UTC m=+0.019212750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8746dcb4a1396a2d6aba82712ed6b8e1f999cd8c98de365eb2920ddb450cc60e-merged.mount: Deactivated successfully.
Oct  7 09:32:08 np0005473739 podman[87587]: 2025-10-07 13:32:08.692532989 +0000 UTC m=+0.147762067 container remove 08d583715e66ae22d1314fdc0bc47db4493e31715020a89eedf4cdfd1b9bfb2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_herschel, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:08 np0005473739 systemd[1]: libpod-conmon-08d583715e66ae22d1314fdc0bc47db4493e31715020a89eedf4cdfd1b9bfb2b.scope: Deactivated successfully.
Oct  7 09:32:08 np0005473739 podman[87634]: 2025-10-07 13:32:08.909380791 +0000 UTC m=+0.040204698 container create d289e5e0d21ee4f986825e303352df8c289945a80b0fde73b4f6fdb7b6cd8a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate-test, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 09:32:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "osd.0"}]: dispatch
Oct  7 09:32:08 np0005473739 ceph-mon[74295]: Deploying daemon osd.0 on compute-0
Oct  7 09:32:08 np0005473739 systemd[1]: Started libpod-conmon-d289e5e0d21ee4f986825e303352df8c289945a80b0fde73b4f6fdb7b6cd8a83.scope.
Oct  7 09:32:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a769524bd509f029a4277612e8195200f258081f9cd5a8785456f78d0f4d687d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a769524bd509f029a4277612e8195200f258081f9cd5a8785456f78d0f4d687d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a769524bd509f029a4277612e8195200f258081f9cd5a8785456f78d0f4d687d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a769524bd509f029a4277612e8195200f258081f9cd5a8785456f78d0f4d687d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a769524bd509f029a4277612e8195200f258081f9cd5a8785456f78d0f4d687d/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:08 np0005473739 podman[87634]: 2025-10-07 13:32:08.978198572 +0000 UTC m=+0.109022489 container init d289e5e0d21ee4f986825e303352df8c289945a80b0fde73b4f6fdb7b6cd8a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 09:32:08 np0005473739 podman[87634]: 2025-10-07 13:32:08.987336848 +0000 UTC m=+0.118160735 container start d289e5e0d21ee4f986825e303352df8c289945a80b0fde73b4f6fdb7b6cd8a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate-test, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:08 np0005473739 podman[87634]: 2025-10-07 13:32:08.99061686 +0000 UTC m=+0.121440777 container attach d289e5e0d21ee4f986825e303352df8c289945a80b0fde73b4f6fdb7b6cd8a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate-test, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:08 np0005473739 podman[87634]: 2025-10-07 13:32:08.894303009 +0000 UTC m=+0.025126916 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:09 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate-test[87650]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  7 09:32:09 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate-test[87650]:                            [--no-systemd] [--no-tmpfs]
Oct  7 09:32:09 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate-test[87650]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  7 09:32:09 np0005473739 systemd[1]: libpod-d289e5e0d21ee4f986825e303352df8c289945a80b0fde73b4f6fdb7b6cd8a83.scope: Deactivated successfully.
Oct  7 09:32:09 np0005473739 podman[87634]: 2025-10-07 13:32:09.623833055 +0000 UTC m=+0.754656942 container died d289e5e0d21ee4f986825e303352df8c289945a80b0fde73b4f6fdb7b6cd8a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate-test, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 09:32:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a769524bd509f029a4277612e8195200f258081f9cd5a8785456f78d0f4d687d-merged.mount: Deactivated successfully.
Oct  7 09:32:09 np0005473739 podman[87634]: 2025-10-07 13:32:09.677305605 +0000 UTC m=+0.808129532 container remove d289e5e0d21ee4f986825e303352df8c289945a80b0fde73b4f6fdb7b6cd8a83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate-test, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:09 np0005473739 systemd[1]: libpod-conmon-d289e5e0d21ee4f986825e303352df8c289945a80b0fde73b4f6fdb7b6cd8a83.scope: Deactivated successfully.
Oct  7 09:32:09 np0005473739 systemd[1]: Reloading.
Oct  7 09:32:10 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:32:10 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:32:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e6 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:32:10 np0005473739 systemd[1]: Reloading.
Oct  7 09:32:10 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:32:10 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:32:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v21: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:32:10 np0005473739 systemd[1]: Starting Ceph osd.0 for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:32:10 np0005473739 podman[87813]: 2025-10-07 13:32:10.724029189 +0000 UTC m=+0.042983777 container create 5e2997a7d346fea2c9421fbdf2f745833ec51741c8051fe0f5999a23535b401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:32:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb78948da631746a3bdc5ee56de7785ce5969a24384d4598a97c3d97139ff001/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb78948da631746a3bdc5ee56de7785ce5969a24384d4598a97c3d97139ff001/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb78948da631746a3bdc5ee56de7785ce5969a24384d4598a97c3d97139ff001/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb78948da631746a3bdc5ee56de7785ce5969a24384d4598a97c3d97139ff001/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb78948da631746a3bdc5ee56de7785ce5969a24384d4598a97c3d97139ff001/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:10 np0005473739 podman[87813]: 2025-10-07 13:32:10.703908474 +0000 UTC m=+0.022863092 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:10 np0005473739 podman[87813]: 2025-10-07 13:32:10.812042928 +0000 UTC m=+0.130997546 container init 5e2997a7d346fea2c9421fbdf2f745833ec51741c8051fe0f5999a23535b401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:10 np0005473739 podman[87813]: 2025-10-07 13:32:10.819028154 +0000 UTC m=+0.137982732 container start 5e2997a7d346fea2c9421fbdf2f745833ec51741c8051fe0f5999a23535b401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:32:10 np0005473739 podman[87813]: 2025-10-07 13:32:10.822594104 +0000 UTC m=+0.141548702 container attach 5e2997a7d346fea2c9421fbdf2f745833ec51741c8051fe0f5999a23535b401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:11 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate[87828]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  7 09:32:11 np0005473739 bash[87813]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  7 09:32:11 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate[87828]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  7 09:32:11 np0005473739 bash[87813]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-0 --no-mon-config --dev /dev/mapper/ceph_vg0-ceph_lv0
Oct  7 09:32:11 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate[87828]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  7 09:32:11 np0005473739 bash[87813]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg0-ceph_lv0
Oct  7 09:32:11 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate[87828]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  7 09:32:11 np0005473739 bash[87813]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-0
Oct  7 09:32:11 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate[87828]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  7 09:32:11 np0005473739 bash[87813]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg0-ceph_lv0 /var/lib/ceph/osd/ceph-0/block
Oct  7 09:32:11 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate[87828]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  7 09:32:11 np0005473739 bash[87813]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
Oct  7 09:32:11 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate[87828]: --> ceph-volume raw activate successful for osd ID: 0
Oct  7 09:32:11 np0005473739 bash[87813]: --> ceph-volume raw activate successful for osd ID: 0
Oct  7 09:32:11 np0005473739 systemd[1]: libpod-5e2997a7d346fea2c9421fbdf2f745833ec51741c8051fe0f5999a23535b401f.scope: Deactivated successfully.
Oct  7 09:32:11 np0005473739 systemd[1]: libpod-5e2997a7d346fea2c9421fbdf2f745833ec51741c8051fe0f5999a23535b401f.scope: Consumed 1.049s CPU time.
Oct  7 09:32:11 np0005473739 podman[87960]: 2025-10-07 13:32:11.896327357 +0000 UTC m=+0.027850153 container died 5e2997a7d346fea2c9421fbdf2f745833ec51741c8051fe0f5999a23535b401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 09:32:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-fb78948da631746a3bdc5ee56de7785ce5969a24384d4598a97c3d97139ff001-merged.mount: Deactivated successfully.
Oct  7 09:32:11 np0005473739 podman[87960]: 2025-10-07 13:32:11.967159103 +0000 UTC m=+0.098681899 container remove 5e2997a7d346fea2c9421fbdf2f745833ec51741c8051fe0f5999a23535b401f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 09:32:12 np0005473739 podman[88020]: 2025-10-07 13:32:12.23974344 +0000 UTC m=+0.045703543 container create 266f50edea4d7718ef229b4e419e4f655c7a900605ece5209b4d493fca344803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 09:32:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825cb9a663f8f7b0279e720cd6e29d9509367340c916860c33c39e058ea18b01/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825cb9a663f8f7b0279e720cd6e29d9509367340c916860c33c39e058ea18b01/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825cb9a663f8f7b0279e720cd6e29d9509367340c916860c33c39e058ea18b01/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825cb9a663f8f7b0279e720cd6e29d9509367340c916860c33c39e058ea18b01/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/825cb9a663f8f7b0279e720cd6e29d9509367340c916860c33c39e058ea18b01/merged/var/lib/ceph/osd/ceph-0 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:12 np0005473739 podman[88020]: 2025-10-07 13:32:12.312107541 +0000 UTC m=+0.118067734 container init 266f50edea4d7718ef229b4e419e4f655c7a900605ece5209b4d493fca344803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 09:32:12 np0005473739 podman[88020]: 2025-10-07 13:32:12.221035216 +0000 UTC m=+0.026995349 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:12 np0005473739 podman[88020]: 2025-10-07 13:32:12.328082369 +0000 UTC m=+0.134042502 container start 266f50edea4d7718ef229b4e419e4f655c7a900605ece5209b4d493fca344803 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 09:32:12 np0005473739 bash[88020]: 266f50edea4d7718ef229b4e419e4f655c7a900605ece5209b4d493fca344803
Oct  7 09:32:12 np0005473739 systemd[1]: Started Ceph osd.0 for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:32:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:32:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: set uid:gid to 167:167 (ceph:ceph)
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: pidfile_write: ignore empty --pid-file
Oct  7 09:32:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf627f800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf627f800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf627f800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf70b7800 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf70b7800 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf70b7800 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf70b7800 /var/lib/ceph/osd/ceph-0/block) close
Oct  7 09:32:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.1"} v 0) v1
Oct  7 09:32:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  7 09:32:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:32:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:32:12 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Deploying daemon osd.1 on compute-0
Oct  7 09:32:12 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Deploying daemon osd.1 on compute-0
Oct  7 09:32:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v22: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf627f800 /var/lib/ceph/osd/ceph-0/block) close
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: starting osd.0 osd_data /var/lib/ceph/osd/ceph-0 /var/lib/ceph/osd/ceph-0/journal
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: load: jerasure load: lrc 
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf7138c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf7138c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf7138c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:12 np0005473739 ceph-osd[88039]: bdev(0x55daf7138c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  7 09:32:13 np0005473739 podman[88200]: 2025-10-07 13:32:13.026263775 +0000 UTC m=+0.066526657 container create 7de784ab5118699477942c5fe24dc38abcce460c2fda35f08c0307516e2d3183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:13 np0005473739 systemd[1]: Started libpod-conmon-7de784ab5118699477942c5fe24dc38abcce460c2fda35f08c0307516e2d3183.scope.
Oct  7 09:32:13 np0005473739 podman[88200]: 2025-10-07 13:32:12.984692029 +0000 UTC m=+0.024954931 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:13 np0005473739 podman[88200]: 2025-10-07 13:32:13.114036167 +0000 UTC m=+0.154299069 container init 7de784ab5118699477942c5fe24dc38abcce460c2fda35f08c0307516e2d3183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 09:32:13 np0005473739 podman[88200]: 2025-10-07 13:32:13.121506857 +0000 UTC m=+0.161769749 container start 7de784ab5118699477942c5fe24dc38abcce460c2fda35f08c0307516e2d3183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:13 np0005473739 podman[88200]: 2025-10-07 13:32:13.124435589 +0000 UTC m=+0.164698471 container attach 7de784ab5118699477942c5fe24dc38abcce460c2fda35f08c0307516e2d3183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 09:32:13 np0005473739 great_matsumoto[88216]: 167 167
Oct  7 09:32:13 np0005473739 systemd[1]: libpod-7de784ab5118699477942c5fe24dc38abcce460c2fda35f08c0307516e2d3183.scope: Deactivated successfully.
Oct  7 09:32:13 np0005473739 podman[88200]: 2025-10-07 13:32:13.12624052 +0000 UTC m=+0.166503412 container died 7de784ab5118699477942c5fe24dc38abcce460c2fda35f08c0307516e2d3183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5ae047230f68393ea3f03a6ba2f5e165cb54a4fd46fc9cffb5a750ee5a38fefc-merged.mount: Deactivated successfully.
Oct  7 09:32:13 np0005473739 podman[88200]: 2025-10-07 13:32:13.161816268 +0000 UTC m=+0.202079150 container remove 7de784ab5118699477942c5fe24dc38abcce460c2fda35f08c0307516e2d3183 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_matsumoto, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:13 np0005473739 systemd[1]: libpod-conmon-7de784ab5118699477942c5fe24dc38abcce460c2fda35f08c0307516e2d3183.scope: Deactivated successfully.
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7138c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7138c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7138c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7138c00 /var/lib/ceph/osd/ceph-0/block) close
Oct  7 09:32:13 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:13 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:13 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "osd.1"}]: dispatch
Oct  7 09:32:13 np0005473739 ceph-mon[74295]: Deploying daemon osd.1 on compute-0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: osd.0:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  7 09:32:13 np0005473739 podman[88252]: 2025-10-07 13:32:13.468857692 +0000 UTC m=+0.033929883 container create 0ba713110a16412386934067adb054cf82f1b267ffc4bdd137f63f759154a8fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate-test, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7138c00 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7138c00 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7138c00 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7139400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7139400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7139400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluefs mount
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluefs mount shared_bdev_used = 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: RocksDB version: 7.9.2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Git sha 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: DB SUMMARY
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: DB Session ID:  HK5M0WJ9MSI3QFA43WS8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: CURRENT file:  CURRENT
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: IDENTITY file:  IDENTITY
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                         Options.error_if_exists: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.create_if_missing: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                         Options.paranoid_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                                     Options.env: 0x55daf7109c70
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                                Options.info_log: 0x55daf63068a0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_file_opening_threads: 16
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                              Options.statistics: (nil)
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.use_fsync: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.max_log_file_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                         Options.allow_fallocate: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.use_direct_reads: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.create_missing_column_families: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                              Options.db_log_dir: 
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                                 Options.wal_dir: db.wal
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.advise_random_on_open: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.write_buffer_manager: 0x55daf7212460
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                            Options.rate_limiter: (nil)
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.unordered_write: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.row_cache: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                              Options.wal_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.allow_ingest_behind: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.two_write_queues: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.manual_wal_flush: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.wal_compression: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.atomic_flush: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.log_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.allow_data_in_errors: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.db_host_id: __hostname__
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.max_background_jobs: 4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.max_background_compactions: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.max_subcompactions: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.max_open_files: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.bytes_per_sync: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.max_background_flushes: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Compression algorithms supported:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kZSTD supported: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kXpressCompression supported: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kBZip2Compression supported: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kLZ4Compression supported: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kZlibCompression supported: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kSnappyCompression supported: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf63062c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55daf62f31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf63062c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf63062c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf63062c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf63062c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf63062c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf63062c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf6306240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf6306240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf6306240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55daf62f3090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:635]     (skipping printing options)
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1df61f83-e896-4625-8e6b-1e2054bc3e1f
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843933494548, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843933494773, "job": 1, "event": "recovery_finished"}
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old nid_max 1025
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta old blobid_max 10240
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _open_super_meta min_alloc_size 0x1000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: freelist init
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: freelist _read_cfg
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluefs umount
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7139400 /var/lib/ceph/osd/ceph-0/block) close
Oct  7 09:32:13 np0005473739 systemd[1]: Started libpod-conmon-0ba713110a16412386934067adb054cf82f1b267ffc4bdd137f63f759154a8fb.scope.
Oct  7 09:32:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445227876fc8437cafdd35b30a64e0f2da7f3c8f0e959944e93ae1c2f109e278/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445227876fc8437cafdd35b30a64e0f2da7f3c8f0e959944e93ae1c2f109e278/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445227876fc8437cafdd35b30a64e0f2da7f3c8f0e959944e93ae1c2f109e278/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445227876fc8437cafdd35b30a64e0f2da7f3c8f0e959944e93ae1c2f109e278/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445227876fc8437cafdd35b30a64e0f2da7f3c8f0e959944e93ae1c2f109e278/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:13 np0005473739 podman[88252]: 2025-10-07 13:32:13.4549068 +0000 UTC m=+0.019979011 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:13 np0005473739 podman[88252]: 2025-10-07 13:32:13.553716052 +0000 UTC m=+0.118788263 container init 0ba713110a16412386934067adb054cf82f1b267ffc4bdd137f63f759154a8fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:13 np0005473739 podman[88252]: 2025-10-07 13:32:13.562116857 +0000 UTC m=+0.127189058 container start 0ba713110a16412386934067adb054cf82f1b267ffc4bdd137f63f759154a8fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate-test, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:13 np0005473739 podman[88252]: 2025-10-07 13:32:13.565431981 +0000 UTC m=+0.130504172 container attach 0ba713110a16412386934067adb054cf82f1b267ffc4bdd137f63f759154a8fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7139400 /var/lib/ceph/osd/ceph-0/block) open path /var/lib/ceph/osd/ceph-0/block
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7139400 /var/lib/ceph/osd/ceph-0/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-0/block failed: (22) Invalid argument
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bdev(0x55daf7139400 /var/lib/ceph/osd/ceph-0/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-0/block size 20 GiB
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluefs mount
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluefs mount shared_bdev_used = 4718592
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: RocksDB version: 7.9.2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Git sha 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: DB SUMMARY
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: DB Session ID:  HK5M0WJ9MSI3QFA43WS9
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: CURRENT file:  CURRENT
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: IDENTITY file:  IDENTITY
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                         Options.error_if_exists: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.create_if_missing: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                         Options.paranoid_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                                     Options.env: 0x55daf72ba380
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                                Options.info_log: 0x55daf62fcb40
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_file_opening_threads: 16
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                              Options.statistics: (nil)
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.use_fsync: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.max_log_file_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                         Options.allow_fallocate: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.use_direct_reads: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.create_missing_column_families: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                              Options.db_log_dir: 
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                                 Options.wal_dir: db.wal
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.advise_random_on_open: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.write_buffer_manager: 0x55daf72126e0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                            Options.rate_limiter: (nil)
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.unordered_write: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.row_cache: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                              Options.wal_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.allow_ingest_behind: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.two_write_queues: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.manual_wal_flush: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.wal_compression: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.atomic_flush: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.log_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.allow_data_in_errors: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.db_host_id: __hostname__
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.max_background_jobs: 4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.max_background_compactions: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.max_subcompactions: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.max_open_files: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.bytes_per_sync: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.max_background_flushes: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Compression algorithms supported:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kZSTD supported: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kXpressCompression supported: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kBZip2Compression supported: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kLZ4Compression supported: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kZlibCompression supported: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: 	kSnappyCompression supported: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf62fd220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55daf62f31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf62fd220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55daf62f31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf62fd220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55daf62f31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf62fd220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55daf62f31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf62fd220)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55daf62f31f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf62fd220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf62fd220)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f31f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf62fd160)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf62fd160)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55daf62fd160)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55daf62f3090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 1df61f83-e896-4625-8e6b-1e2054bc3e1f
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843933758371, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843933763225, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843933, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1df61f83-e896-4625-8e6b-1e2054bc3e1f", "db_session_id": "HK5M0WJ9MSI3QFA43WS9", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843933765621, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843933, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1df61f83-e896-4625-8e6b-1e2054bc3e1f", "db_session_id": "HK5M0WJ9MSI3QFA43WS9", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843933768304, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843933, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "1df61f83-e896-4625-8e6b-1e2054bc3e1f", "db_session_id": "HK5M0WJ9MSI3QFA43WS9", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843933769713, "job": 1, "event": "recovery_finished"}
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55daf6461c00
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: DB pointer 0x55daf71fba00
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super from 4, latest 4
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: bluestore(/var/lib/ceph/osd/ceph-0) _upgrade_super done
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 0.0 total, 0.0 interval
Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s
Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s
Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.03 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1,0.27 KB,5.62933e-05%) FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 0.0 total, 0.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: _get_class not permitted to load lua
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: _get_class not permitted to load sdk
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: _get_class not permitted to load test_remote_reads
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: osd.0 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: osd.0 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: osd.0 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: osd.0 0 load_pgs
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: osd.0 0 load_pgs opened 0 pgs
Oct  7 09:32:13 np0005473739 ceph-osd[88039]: osd.0 0 log_to_monitors true
Oct  7 09:32:13 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0[88035]: 2025-10-07T13:32:13.792+0000 7f9b521c7740 -1 osd.0 0 log_to_monitors true
Oct  7 09:32:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]} v 0) v1
Oct  7 09:32:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4286245960,v1:192.168.122.100:6803/4286245960]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  7 09:32:14 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate-test[88462]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  7 09:32:14 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate-test[88462]:                            [--no-systemd] [--no-tmpfs]
Oct  7 09:32:14 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate-test[88462]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  7 09:32:14 np0005473739 systemd[1]: libpod-0ba713110a16412386934067adb054cf82f1b267ffc4bdd137f63f759154a8fb.scope: Deactivated successfully.
Oct  7 09:32:14 np0005473739 podman[88252]: 2025-10-07 13:32:14.217473813 +0000 UTC m=+0.782546004 container died 0ba713110a16412386934067adb054cf82f1b267ffc4bdd137f63f759154a8fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 09:32:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-445227876fc8437cafdd35b30a64e0f2da7f3c8f0e959944e93ae1c2f109e278-merged.mount: Deactivated successfully.
Oct  7 09:32:14 np0005473739 podman[88252]: 2025-10-07 13:32:14.269095621 +0000 UTC m=+0.834167812 container remove 0ba713110a16412386934067adb054cf82f1b267ffc4bdd137f63f759154a8fb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate-test, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 09:32:14 np0005473739 systemd[1]: libpod-conmon-0ba713110a16412386934067adb054cf82f1b267ffc4bdd137f63f759154a8fb.scope: Deactivated successfully.
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e6 do_prune osdmap full prune enabled
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e6 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: from='osd.0 [v2:192.168.122.100:6802/4286245960,v1:192.168.122.100:6803/4286245960]' entity='osd.0' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]: dispatch
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4286245960,v1:192.168.122.100:6803/4286245960]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e7 e7: 3 total, 0 up, 3 in
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e7: 3 total, 0 up, 3 in
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4286245960,v1:192.168.122.100:6803/4286245960]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e7 create-or-move crush item name 'osd.0' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:14 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  7 09:32:14 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:32:14 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:14 np0005473739 systemd[1]: Reloading.
Oct  7 09:32:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v24: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:32:14 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:32:14 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:32:14 np0005473739 systemd[1]: Reloading.
Oct  7 09:32:14 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  7 09:32:14 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  7 09:32:14 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:32:14 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:32:14 np0005473739 systemd[1]: Starting Ceph osd.1 for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e7 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:32:15 np0005473739 podman[88838]: 2025-10-07 13:32:15.160982181 +0000 UTC m=+0.033139440 container create 7bb8a2e1eeaab77958e80fa1a9a782d0f14e4a9a1c75ef6e238cbea67b4c4a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7b1e8feb5b7d12cda82645a9c3cf821605b54aacaff61e08b1d919bb4378bf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7b1e8feb5b7d12cda82645a9c3cf821605b54aacaff61e08b1d919bb4378bf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7b1e8feb5b7d12cda82645a9c3cf821605b54aacaff61e08b1d919bb4378bf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7b1e8feb5b7d12cda82645a9c3cf821605b54aacaff61e08b1d919bb4378bf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ae7b1e8feb5b7d12cda82645a9c3cf821605b54aacaff61e08b1d919bb4378bf/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:15 np0005473739 podman[88838]: 2025-10-07 13:32:15.222290482 +0000 UTC m=+0.094447741 container init 7bb8a2e1eeaab77958e80fa1a9a782d0f14e4a9a1c75ef6e238cbea67b4c4a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:15 np0005473739 podman[88838]: 2025-10-07 13:32:15.227814967 +0000 UTC m=+0.099972226 container start 7bb8a2e1eeaab77958e80fa1a9a782d0f14e4a9a1c75ef6e238cbea67b4c4a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:15 np0005473739 podman[88838]: 2025-10-07 13:32:15.232383835 +0000 UTC m=+0.104541124 container attach 7bb8a2e1eeaab77958e80fa1a9a782d0f14e4a9a1c75ef6e238cbea67b4c4a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 09:32:15 np0005473739 podman[88838]: 2025-10-07 13:32:15.146001932 +0000 UTC m=+0.018159211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e7 do_prune osdmap full prune enabled
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e7 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.0 [v2:192.168.122.100:6802/4286245960,v1:192.168.122.100:6803/4286245960]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e8 e8: 3 total, 0 up, 3 in
Oct  7 09:32:15 np0005473739 ceph-osd[88039]: osd.0 0 done with init, starting boot process
Oct  7 09:32:15 np0005473739 ceph-osd[88039]: osd.0 0 start_boot
Oct  7 09:32:15 np0005473739 ceph-osd[88039]: osd.0 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  7 09:32:15 np0005473739 ceph-osd[88039]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  7 09:32:15 np0005473739 ceph-osd[88039]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  7 09:32:15 np0005473739 ceph-osd[88039]: osd.0 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  7 09:32:15 np0005473739 ceph-osd[88039]: osd.0 0  bench count 12288000 bsize 4 KiB
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e8: 3 total, 0 up, 3 in
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:15 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  7 09:32:15 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:32:15 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: from='osd.0 [v2:192.168.122.100:6802/4286245960,v1:192.168.122.100:6803/4286245960]' entity='osd.0' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["0"]}]': finished
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: from='osd.0 [v2:192.168.122.100:6802/4286245960,v1:192.168.122.100:6803/4286245960]' entity='osd.0' cmd=[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  7 09:32:15 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4286245960; not ready for session (expect reconnect)
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:32:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:32:15 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  7 09:32:16 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate[88853]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  7 09:32:16 np0005473739 bash[88838]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  7 09:32:16 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate[88853]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct  7 09:32:16 np0005473739 bash[88838]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-1 --no-mon-config --dev /dev/mapper/ceph_vg1-ceph_lv1
Oct  7 09:32:16 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate[88853]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct  7 09:32:16 np0005473739 bash[88838]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg1-ceph_lv1
Oct  7 09:32:16 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate[88853]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  7 09:32:16 np0005473739 bash[88838]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-1
Oct  7 09:32:16 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate[88853]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  7 09:32:16 np0005473739 bash[88838]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg1-ceph_lv1 /var/lib/ceph/osd/ceph-1/block
Oct  7 09:32:16 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate[88853]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  7 09:32:16 np0005473739 bash[88838]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-1
Oct  7 09:32:16 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate[88853]: --> ceph-volume raw activate successful for osd ID: 1
Oct  7 09:32:16 np0005473739 bash[88838]: --> ceph-volume raw activate successful for osd ID: 1
Oct  7 09:32:16 np0005473739 systemd[1]: libpod-7bb8a2e1eeaab77958e80fa1a9a782d0f14e4a9a1c75ef6e238cbea67b4c4a47.scope: Deactivated successfully.
Oct  7 09:32:16 np0005473739 podman[88838]: 2025-10-07 13:32:16.264437998 +0000 UTC m=+1.136595267 container died 7bb8a2e1eeaab77958e80fa1a9a782d0f14e4a9a1c75ef6e238cbea67b4c4a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:32:16 np0005473739 systemd[1]: libpod-7bb8a2e1eeaab77958e80fa1a9a782d0f14e4a9a1c75ef6e238cbea67b4c4a47.scope: Consumed 1.036s CPU time.
Oct  7 09:32:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ae7b1e8feb5b7d12cda82645a9c3cf821605b54aacaff61e08b1d919bb4378bf-merged.mount: Deactivated successfully.
Oct  7 09:32:16 np0005473739 podman[88838]: 2025-10-07 13:32:16.391297767 +0000 UTC m=+1.263455026 container remove 7bb8a2e1eeaab77958e80fa1a9a782d0f14e4a9a1c75ef6e238cbea67b4c4a47 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:16 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4286245960; not ready for session (expect reconnect)
Oct  7 09:32:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:32:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:32:16 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  7 09:32:16 np0005473739 ceph-mon[74295]: from='osd.0 [v2:192.168.122.100:6802/4286245960,v1:192.168.122.100:6803/4286245960]' entity='osd.0' cmd='[{"prefix": "osd crush create-or-move", "id": 0, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  7 09:32:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v26: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:32:16 np0005473739 podman[89042]: 2025-10-07 13:32:16.611263818 +0000 UTC m=+0.054781625 container create aa275ffdd3c6d706a31bd513dcbda90b1ff7124f1cf535cbdebbf8856cbe079d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 09:32:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f1de7ce5e22c3d26c519e023d31b2baac5cdab80afbc7a2b4e38f8647f09b0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f1de7ce5e22c3d26c519e023d31b2baac5cdab80afbc7a2b4e38f8647f09b0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f1de7ce5e22c3d26c519e023d31b2baac5cdab80afbc7a2b4e38f8647f09b0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f1de7ce5e22c3d26c519e023d31b2baac5cdab80afbc7a2b4e38f8647f09b0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/05f1de7ce5e22c3d26c519e023d31b2baac5cdab80afbc7a2b4e38f8647f09b0/merged/var/lib/ceph/osd/ceph-1 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:16 np0005473739 podman[89042]: 2025-10-07 13:32:16.579020448 +0000 UTC m=+0.022538295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:16 np0005473739 podman[89042]: 2025-10-07 13:32:16.693511383 +0000 UTC m=+0.137029240 container init aa275ffdd3c6d706a31bd513dcbda90b1ff7124f1cf535cbdebbf8856cbe079d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:16 np0005473739 podman[89042]: 2025-10-07 13:32:16.700588753 +0000 UTC m=+0.144106560 container start aa275ffdd3c6d706a31bd513dcbda90b1ff7124f1cf535cbdebbf8856cbe079d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:32:16 np0005473739 bash[89042]: aa275ffdd3c6d706a31bd513dcbda90b1ff7124f1cf535cbdebbf8856cbe079d
Oct  7 09:32:16 np0005473739 systemd[1]: Started Ceph osd.1 for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: set uid:gid to 167:167 (ceph:ceph)
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: pidfile_write: ignore empty --pid-file
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: bdev(0x5569319c7800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: bdev(0x5569319c7800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: bdev(0x5569319c7800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: bdev(0x556932809800 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: bdev(0x556932809800 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: bdev(0x556932809800 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  7 09:32:16 np0005473739 ceph-osd[89062]: bdev(0x556932809800 /var/lib/ceph/osd/ceph-1/block) close
Oct  7 09:32:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:32:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:32:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "osd.2"} v 0) v1
Oct  7 09:32:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  7 09:32:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:32:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:32:16 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Deploying daemon osd.2 on compute-0
Oct  7 09:32:16 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Deploying daemon osd.2 on compute-0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x5569319c7800 /var/lib/ceph/osd/ceph-1/block) close
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: starting osd.1 osd_data /var/lib/ceph/osd/ceph-1 /var/lib/ceph/osd/ceph-1/journal
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: load: jerasure load: lrc 
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288ac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288ac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288ac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288ac00 /var/lib/ceph/osd/ceph-1/block) close
Oct  7 09:32:17 np0005473739 podman[89223]: 2025-10-07 13:32:17.392209157 +0000 UTC m=+0.058727456 container create 1b68573a71809f6ed0abee8d058a411bd3bd32a4d6f99ff189f57097c9a62258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:17 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4286245960; not ready for session (expect reconnect)
Oct  7 09:32:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:32:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:32:17 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  7 09:32:17 np0005473739 systemd[1]: Started libpod-conmon-1b68573a71809f6ed0abee8d058a411bd3bd32a4d6f99ff189f57097c9a62258.scope.
Oct  7 09:32:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "osd.2"}]: dispatch
Oct  7 09:32:17 np0005473739 ceph-mon[74295]: Deploying daemon osd.2 on compute-0
Oct  7 09:32:17 np0005473739 podman[89223]: 2025-10-07 13:32:17.355122723 +0000 UTC m=+0.021641022 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:17 np0005473739 podman[89223]: 2025-10-07 13:32:17.469205127 +0000 UTC m=+0.135723436 container init 1b68573a71809f6ed0abee8d058a411bd3bd32a4d6f99ff189f57097c9a62258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mccarthy, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct  7 09:32:17 np0005473739 podman[89223]: 2025-10-07 13:32:17.476246066 +0000 UTC m=+0.142764345 container start 1b68573a71809f6ed0abee8d058a411bd3bd32a4d6f99ff189f57097c9a62258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mccarthy, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  7 09:32:17 np0005473739 recursing_mccarthy[89239]: 167 167
Oct  7 09:32:17 np0005473739 systemd[1]: libpod-1b68573a71809f6ed0abee8d058a411bd3bd32a4d6f99ff189f57097c9a62258.scope: Deactivated successfully.
Oct  7 09:32:17 np0005473739 podman[89223]: 2025-10-07 13:32:17.495686301 +0000 UTC m=+0.162204580 container attach 1b68573a71809f6ed0abee8d058a411bd3bd32a4d6f99ff189f57097c9a62258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mccarthy, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:17 np0005473739 podman[89223]: 2025-10-07 13:32:17.496980624 +0000 UTC m=+0.163498913 container died 1b68573a71809f6ed0abee8d058a411bd3bd32a4d6f99ff189f57097c9a62258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mccarthy, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288ac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288ac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288ac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288ac00 /var/lib/ceph/osd/ceph-1/block) close
Oct  7 09:32:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4143443d7c71aee7e47a99133d3c3286ecfd5f01255a477a93bdceb0339624fa-merged.mount: Deactivated successfully.
Oct  7 09:32:17 np0005473739 podman[89223]: 2025-10-07 13:32:17.604559453 +0000 UTC m=+0.271077722 container remove 1b68573a71809f6ed0abee8d058a411bd3bd32a4d6f99ff189f57097c9a62258 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:17 np0005473739 systemd[1]: libpod-conmon-1b68573a71809f6ed0abee8d058a411bd3bd32a4d6f99ff189f57097c9a62258.scope: Deactivated successfully.
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: osd.1:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288ac00 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288ac00 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288ac00 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288b400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288b400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288b400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluefs mount
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluefs mount shared_bdev_used = 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: RocksDB version: 7.9.2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Git sha 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: DB SUMMARY
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: DB Session ID:  TP8ZXO4F9J01M5LM8TTE
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: CURRENT file:  CURRENT
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: IDENTITY file:  IDENTITY
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                         Options.error_if_exists: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.create_if_missing: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                         Options.paranoid_checks: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                                     Options.env: 0x55693285bc70
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                                Options.info_log: 0x556931a4e8a0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_file_opening_threads: 16
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                              Options.statistics: (nil)
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.use_fsync: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.max_log_file_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                         Options.allow_fallocate: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.use_direct_reads: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.create_missing_column_families: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                              Options.db_log_dir: 
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                                 Options.wal_dir: db.wal
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.advise_random_on_open: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.write_buffer_manager: 0x556932964460
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                            Options.rate_limiter: (nil)
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.unordered_write: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.row_cache: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                              Options.wal_filter: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.allow_ingest_behind: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.two_write_queues: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.manual_wal_flush: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.wal_compression: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.atomic_flush: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.log_readahead_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.allow_data_in_errors: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.db_host_id: __hostname__
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.max_background_jobs: 4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.max_background_compactions: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.max_subcompactions: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.max_open_files: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.bytes_per_sync: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.max_background_flushes: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Compression algorithms supported:
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: 	kZSTD supported: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: 	kXpressCompression supported: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: 	kBZip2Compression supported: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: 	kLZ4Compression supported: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: 	kZlibCompression supported: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: 	kSnappyCompression supported: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556931a3b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556931a3b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556931a3b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556931a3b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556931a3b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556931a3b1f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556931a3b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556931a3b090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e240)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556931a3b090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556931a3b090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 57ce3b19-550a-4273-b665-fbcf4346c6d4
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843937836482, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843937836705, "job": 1, "event": "recovery_finished"}
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old nid_max 1025
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta old blobid_max 10240
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _open_super_meta min_alloc_size 0x1000
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: freelist init
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: freelist _read_cfg
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bluefs umount
Oct  7 09:32:17 np0005473739 ceph-osd[89062]: bdev(0x55693288b400 /var/lib/ceph/osd/ceph-1/block) close
Oct  7 09:32:17 np0005473739 podman[89275]: 2025-10-07 13:32:17.861118844 +0000 UTC m=+0.050721492 container create c83cd370f83c3604388c5b6478db0e59982f6e71b791c158e73002764a651135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:17 np0005473739 systemd[1]: Started libpod-conmon-c83cd370f83c3604388c5b6478db0e59982f6e71b791c158e73002764a651135.scope.
Oct  7 09:32:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512140c1d1a8e54d95042c0cfa3e1c1218dab0f0942ee78b8aeab5c1b2534edb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512140c1d1a8e54d95042c0cfa3e1c1218dab0f0942ee78b8aeab5c1b2534edb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512140c1d1a8e54d95042c0cfa3e1c1218dab0f0942ee78b8aeab5c1b2534edb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512140c1d1a8e54d95042c0cfa3e1c1218dab0f0942ee78b8aeab5c1b2534edb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512140c1d1a8e54d95042c0cfa3e1c1218dab0f0942ee78b8aeab5c1b2534edb/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:17 np0005473739 podman[89275]: 2025-10-07 13:32:17.842790288 +0000 UTC m=+0.032392956 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:17 np0005473739 podman[89275]: 2025-10-07 13:32:17.958031661 +0000 UTC m=+0.147634319 container init c83cd370f83c3604388c5b6478db0e59982f6e71b791c158e73002764a651135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:17 np0005473739 podman[89275]: 2025-10-07 13:32:17.964129196 +0000 UTC m=+0.153731834 container start c83cd370f83c3604388c5b6478db0e59982f6e71b791c158e73002764a651135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate-test, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 09:32:17 np0005473739 podman[89275]: 2025-10-07 13:32:17.992293263 +0000 UTC m=+0.181895911 container attach c83cd370f83c3604388c5b6478db0e59982f6e71b791c158e73002764a651135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate-test, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: bdev(0x55693288b400 /var/lib/ceph/osd/ceph-1/block) open path /var/lib/ceph/osd/ceph-1/block
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: bdev(0x55693288b400 /var/lib/ceph/osd/ceph-1/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-1/block failed: (22) Invalid argument
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: bdev(0x55693288b400 /var/lib/ceph/osd/ceph-1/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-1/block size 20 GiB
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: bluefs mount
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: bluefs mount shared_bdev_used = 4718592
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: RocksDB version: 7.9.2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Git sha 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: DB SUMMARY
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: DB Session ID:  TP8ZXO4F9J01M5LM8TTF
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: CURRENT file:  CURRENT
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: IDENTITY file:  IDENTITY
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                         Options.error_if_exists: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.create_if_missing: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                         Options.paranoid_checks: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                                     Options.env: 0x556932a0cb60
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                                Options.info_log: 0x556931a4e600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_file_opening_threads: 16
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                              Options.statistics: (nil)
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.use_fsync: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.max_log_file_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                         Options.allow_fallocate: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.use_direct_reads: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.create_missing_column_families: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                              Options.db_log_dir: 
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                                 Options.wal_dir: db.wal
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.advise_random_on_open: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.write_buffer_manager: 0x5569329646e0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                            Options.rate_limiter: (nil)
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.unordered_write: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.row_cache: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                              Options.wal_filter: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.allow_ingest_behind: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.two_write_queues: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.manual_wal_flush: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.wal_compression: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.atomic_flush: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.log_readahead_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.allow_data_in_errors: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.db_host_id: __hostname__
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.max_background_jobs: 4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.max_background_compactions: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.max_subcompactions: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.max_open_files: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.bytes_per_sync: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.max_background_flushes: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Compression algorithms supported:
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: 	kZSTD supported: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: 	kXpressCompression supported: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: 	kBZip2Compression supported: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: 	kLZ4Compression supported: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: 	kZlibCompression supported: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: 	kSnappyCompression supported: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556931a3b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556931a3b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556931a3b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556931a3b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556931a3b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556931a3b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4ea20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556931a3b1f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e380)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x556931a3b090
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556931a3b090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x556931a4e380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x556931a3b090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 57ce3b19-550a-4273-b665-fbcf4346c6d4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843938114695, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843938129385, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843938, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "57ce3b19-550a-4273-b665-fbcf4346c6d4", "db_session_id": "TP8ZXO4F9J01M5LM8TTF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843938133777, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1594, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 468, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843938, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "57ce3b19-550a-4273-b665-fbcf4346c6d4", "db_session_id": "TP8ZXO4F9J01M5LM8TTF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843938136147, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843938, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "57ce3b19-550a-4273-b665-fbcf4346c6d4", "db_session_id": "TP8ZXO4F9J01M5LM8TTF", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843938137512, "job": 1, "event": "recovery_finished"}
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x556931ba8000
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: DB pointer 0x55693294da00
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super from 4, latest 4
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _upgrade_super done
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.01 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 1.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 460.80 MB usag
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: _get_class not permitted to load lua
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: _get_class not permitted to load sdk
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: _get_class not permitted to load test_remote_reads
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: osd.1 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: osd.1 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: osd.1 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: osd.1 0 load_pgs
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: osd.1 0 load_pgs opened 0 pgs
Oct  7 09:32:18 np0005473739 ceph-osd[89062]: osd.1 0 log_to_monitors true
Oct  7 09:32:18 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1[89058]: 2025-10-07T13:32:18.199+0000 7f8cb4c06740 -1 osd.1 0 log_to_monitors true
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]} v 0) v1
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/215390500,v1:192.168.122.100:6807/215390500]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  7 09:32:18 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4286245960; not ready for session (expect reconnect)
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:32:18 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e8 do_prune osdmap full prune enabled
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e8 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: from='osd.1 [v2:192.168.122.100:6806/215390500,v1:192.168.122.100:6807/215390500]' entity='osd.1' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]: dispatch
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/215390500,v1:192.168.122.100:6807/215390500]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e9 e9: 3 total, 0 up, 3 in
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e9: 3 total, 0 up, 3 in
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/215390500,v1:192.168.122.100:6807/215390500]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e9 create-or-move crush item name 'osd.1' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:18 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  7 09:32:18 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:18 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:32:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v28: 0 pgs: ; 0 B data, 0 B used, 0 B / 0 B avail
Oct  7 09:32:18 np0005473739 ceph-osd[88039]: osd.0 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 32.427 iops: 8301.371 elapsed_sec: 0.361
Oct  7 09:32:18 np0005473739 ceph-osd[88039]: log_channel(cluster) log [WRN] : OSD bench result of 8301.371462 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  7 09:32:18 np0005473739 ceph-osd[88039]: osd.0 0 waiting for initial osdmap
Oct  7 09:32:18 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0[88035]: 2025-10-07T13:32:18.537+0000 7f9b4e95e640 -1 osd.0 0 waiting for initial osdmap
Oct  7 09:32:18 np0005473739 ceph-osd[88039]: osd.0 9 crush map has features 288514050185494528, adjusting msgr requires for clients
Oct  7 09:32:18 np0005473739 ceph-osd[88039]: osd.0 9 crush map has features 288514050185494528 was 288232575208792577, adjusting msgr requires for mons
Oct  7 09:32:18 np0005473739 ceph-osd[88039]: osd.0 9 crush map has features 3314932999778484224, adjusting msgr requires for osds
Oct  7 09:32:18 np0005473739 ceph-osd[88039]: osd.0 9 check_osdmap_features require_osd_release unknown -> reef
Oct  7 09:32:18 np0005473739 ceph-osd[88039]: osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  7 09:32:18 np0005473739 ceph-osd[88039]: osd.0 9 set_numa_affinity not setting numa affinity
Oct  7 09:32:18 np0005473739 ceph-osd[88039]: osd.0 9 _collect_metadata loop3:  no unique device id for loop3: fallback method has no model nor serial
Oct  7 09:32:18 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-0[88035]: 2025-10-07T13:32:18.567+0000 7f9b4976f640 -1 osd.0 9 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  7 09:32:18 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate-test[89485]: usage: ceph-volume activate [-h] [--osd-id OSD_ID] [--osd-uuid OSD_UUID]
Oct  7 09:32:18 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate-test[89485]:                            [--no-systemd] [--no-tmpfs]
Oct  7 09:32:18 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate-test[89485]: ceph-volume activate: error: unrecognized arguments: --bad-option
Oct  7 09:32:18 np0005473739 systemd[1]: libpod-c83cd370f83c3604388c5b6478db0e59982f6e71b791c158e73002764a651135.scope: Deactivated successfully.
Oct  7 09:32:18 np0005473739 podman[89275]: 2025-10-07 13:32:18.6230827 +0000 UTC m=+0.812685358 container died c83cd370f83c3604388c5b6478db0e59982f6e71b791c158e73002764a651135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate-test, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 09:32:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay-512140c1d1a8e54d95042c0cfa3e1c1218dab0f0942ee78b8aeab5c1b2534edb-merged.mount: Deactivated successfully.
Oct  7 09:32:18 np0005473739 podman[89275]: 2025-10-07 13:32:18.672662881 +0000 UTC m=+0.862265529 container remove c83cd370f83c3604388c5b6478db0e59982f6e71b791c158e73002764a651135 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate-test, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:18 np0005473739 systemd[1]: libpod-conmon-c83cd370f83c3604388c5b6478db0e59982f6e71b791c158e73002764a651135.scope: Deactivated successfully.
Oct  7 09:32:18 np0005473739 systemd[1]: Reloading.
Oct  7 09:32:18 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:32:18 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:32:19 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  7 09:32:19 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  7 09:32:19 np0005473739 systemd[1]: Reloading.
Oct  7 09:32:19 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:32:19 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:32:19 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.0 v2:192.168.122.100:6802/4286245960; not ready for session (expect reconnect)
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:32:19 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.0: (2) No such file or directory
Oct  7 09:32:19 np0005473739 systemd[1]: Starting Ceph osd.2 for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e9 do_prune osdmap full prune enabled
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e9 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.1 [v2:192.168.122.100:6806/215390500,v1:192.168.122.100:6807/215390500]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e10 e10: 3 total, 1 up, 3 in
Oct  7 09:32:19 np0005473739 ceph-osd[89062]: osd.1 0 done with init, starting boot process
Oct  7 09:32:19 np0005473739 ceph-osd[89062]: osd.1 0 start_boot
Oct  7 09:32:19 np0005473739 ceph-osd[89062]: osd.1 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  7 09:32:19 np0005473739 ceph-osd[89062]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  7 09:32:19 np0005473739 ceph-osd[89062]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  7 09:32:19 np0005473739 ceph-osd[89062]: osd.1 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  7 09:32:19 np0005473739 ceph-osd[89062]: osd.1 0  bench count 12288000 bsize 4 KiB
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : osd.0 [v2:192.168.122.100:6802/4286245960,v1:192.168.122.100:6803/4286245960] boot
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e10: 3 total, 1 up, 3 in
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 0} v 0) v1
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 0}]: dispatch
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:19 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:32:19 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:19 np0005473739 ceph-osd[88039]: osd.0 10 state: booting -> active
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: from='osd.1 [v2:192.168.122.100:6806/215390500,v1:192.168.122.100:6807/215390500]' entity='osd.1' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["1"]}]': finished
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: from='osd.1 [v2:192.168.122.100:6806/215390500,v1:192.168.122.100:6807/215390500]' entity='osd.1' cmd=[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: OSD bench result of 8301.371462 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.0. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  7 09:32:19 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/215390500; not ready for session (expect reconnect)
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:19 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:32:19 np0005473739 podman[89861]: 2025-10-07 13:32:19.7508962 +0000 UTC m=+0.100192642 container create 96cf83a20b0de4161eabd24482d584dc06c6aac10caf8982b723902f6b6943d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 09:32:19 np0005473739 podman[89861]: 2025-10-07 13:32:19.678846155 +0000 UTC m=+0.028142627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:19 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7758ad7625975758b291c20fbb30aecefac1d805c232b3bdc5df3f4abcbee343/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7758ad7625975758b291c20fbb30aecefac1d805c232b3bdc5df3f4abcbee343/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7758ad7625975758b291c20fbb30aecefac1d805c232b3bdc5df3f4abcbee343/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7758ad7625975758b291c20fbb30aecefac1d805c232b3bdc5df3f4abcbee343/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7758ad7625975758b291c20fbb30aecefac1d805c232b3bdc5df3f4abcbee343/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:19 np0005473739 podman[89861]: 2025-10-07 13:32:19.908254415 +0000 UTC m=+0.257550847 container init 96cf83a20b0de4161eabd24482d584dc06c6aac10caf8982b723902f6b6943d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 09:32:19 np0005473739 podman[89861]: 2025-10-07 13:32:19.917175441 +0000 UTC m=+0.266471873 container start 96cf83a20b0de4161eabd24482d584dc06c6aac10caf8982b723902f6b6943d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:19 np0005473739 podman[89861]: 2025-10-07 13:32:19.945139133 +0000 UTC m=+0.294435565 container attach 96cf83a20b0de4161eabd24482d584dc06c6aac10caf8982b723902f6b6943d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e10 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:32:20 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/215390500; not ready for session (expect reconnect)
Oct  7 09:32:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v30: 0 pgs: ; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e10 do_prune osdmap full prune enabled
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e10 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:20 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: from='osd.1 [v2:192.168.122.100:6806/215390500,v1:192.168.122.100:6807/215390500]' entity='osd.1' cmd='[{"prefix": "osd crush create-or-move", "id": 1, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: osd.0 [v2:192.168.122.100:6802/4286245960,v1:192.168.122.100:6803/4286245960] boot
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e11 e11: 3 total, 1 up, 3 in
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e11: 3 total, 1 up, 3 in
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:20 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:32:20 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:20 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] creating mgr pool
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true} v 0) v1
Oct  7 09:32:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  7 09:32:20 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate[89876]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  7 09:32:20 np0005473739 bash[89861]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  7 09:32:20 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate[89876]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct  7 09:32:20 np0005473739 bash[89861]: Running command: /usr/bin/ceph-bluestore-tool prime-osd-dir --path /var/lib/ceph/osd/ceph-2 --no-mon-config --dev /dev/mapper/ceph_vg2-ceph_lv2
Oct  7 09:32:20 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate[89876]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct  7 09:32:20 np0005473739 bash[89861]: Running command: /usr/bin/chown -h ceph:ceph /dev/mapper/ceph_vg2-ceph_lv2
Oct  7 09:32:20 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate[89876]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  7 09:32:20 np0005473739 bash[89861]: Running command: /usr/bin/chown -R ceph:ceph /dev/dm-2
Oct  7 09:32:20 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate[89876]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:20 np0005473739 bash[89861]: Running command: /usr/bin/ln -s /dev/mapper/ceph_vg2-ceph_lv2 /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:20 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate[89876]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  7 09:32:20 np0005473739 bash[89861]: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/osd/ceph-2
Oct  7 09:32:20 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate[89876]: --> ceph-volume raw activate successful for osd ID: 2
Oct  7 09:32:20 np0005473739 bash[89861]: --> ceph-volume raw activate successful for osd ID: 2
Oct  7 09:32:20 np0005473739 systemd[1]: libpod-96cf83a20b0de4161eabd24482d584dc06c6aac10caf8982b723902f6b6943d8.scope: Deactivated successfully.
Oct  7 09:32:20 np0005473739 podman[89861]: 2025-10-07 13:32:20.8855001 +0000 UTC m=+1.234796532 container died 96cf83a20b0de4161eabd24482d584dc06c6aac10caf8982b723902f6b6943d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 09:32:20 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7758ad7625975758b291c20fbb30aecefac1d805c232b3bdc5df3f4abcbee343-merged.mount: Deactivated successfully.
Oct  7 09:32:20 np0005473739 podman[89861]: 2025-10-07 13:32:20.978177149 +0000 UTC m=+1.327473581 container remove 96cf83a20b0de4161eabd24482d584dc06c6aac10caf8982b723902f6b6943d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2-activate, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:21 np0005473739 podman[90072]: 2025-10-07 13:32:21.164237785 +0000 UTC m=+0.041852946 container create f4c98c027c3569c8b4f2a19e907aacf75cf5db055f037f202a6073d0c0daed64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 09:32:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a80f3349352a9bb33ac58d23a9a34eb20b931cd0994a51206cdf3a7d18238f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a80f3349352a9bb33ac58d23a9a34eb20b931cd0994a51206cdf3a7d18238f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a80f3349352a9bb33ac58d23a9a34eb20b931cd0994a51206cdf3a7d18238f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a80f3349352a9bb33ac58d23a9a34eb20b931cd0994a51206cdf3a7d18238f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/39a80f3349352a9bb33ac58d23a9a34eb20b931cd0994a51206cdf3a7d18238f/merged/var/lib/ceph/osd/ceph-2 supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:21 np0005473739 podman[90072]: 2025-10-07 13:32:21.22453335 +0000 UTC m=+0.102148521 container init f4c98c027c3569c8b4f2a19e907aacf75cf5db055f037f202a6073d0c0daed64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  7 09:32:21 np0005473739 podman[90072]: 2025-10-07 13:32:21.230444971 +0000 UTC m=+0.108060132 container start f4c98c027c3569c8b4f2a19e907aacf75cf5db055f037f202a6073d0c0daed64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 09:32:21 np0005473739 podman[90072]: 2025-10-07 13:32:21.147038038 +0000 UTC m=+0.024653229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:21 np0005473739 bash[90072]: f4c98c027c3569c8b4f2a19e907aacf75cf5db055f037f202a6073d0c0daed64
Oct  7 09:32:21 np0005473739 systemd[1]: Started Ceph osd.2 for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: set uid:gid to 167:167 (ceph:ceph)
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-osd, pid 2
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: pidfile_write: ignore empty --pid-file
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fb873800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fb873800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fb873800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc6b5800 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc6b5800 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc6b5800 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc6b5800 /var/lib/ceph/osd/ceph-2/block) close
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fb873800 /var/lib/ceph/osd/ceph-2/block) close
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:21 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/215390500; not ready for session (expect reconnect)
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:21 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]: dispatch
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: starting osd.2 osd_data /var/lib/ceph/osd/ceph-2 /var/lib/ceph/osd/ceph-2/journal
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: load: jerasure load: lrc 
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc736c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc736c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc736c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc736c00 /var/lib/ceph/osd/ceph-2/block) close
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e11 do_prune osdmap full prune enabled
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e11 encode_pending skipping prime_pg_temp; mapping job did not start
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e12 e12: 3 total, 1 up, 3 in
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e12 crush map has features 3314933000852226048, adjusting msgr requires
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e12 crush map has features 288514051259236352, adjusting msgr requires
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e12: 3 total, 1 up, 3 in
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:21 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:32:21 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true} v 0) v1
Oct  7 09:32:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  7 09:32:21 np0005473739 ceph-osd[88039]: osd.0 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  7 09:32:21 np0005473739 ceph-osd[88039]: osd.0 12 crush map has features 288514051259236352 was 288514050185503233, adjusting msgr requires for mons
Oct  7 09:32:21 np0005473739 ceph-osd[88039]: osd.0 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc736c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc736c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc736c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:21 np0005473739 ceph-osd[90092]: bdev(0x55f4fc736c00 /var/lib/ceph/osd/ceph-2/block) close
Oct  7 09:32:21 np0005473739 podman[90251]: 2025-10-07 13:32:21.854989558 +0000 UTC m=+0.047204952 container create 564b89c637cc60477efa5add0ade5734fd5421e89d03a9de2d014b0d8eab9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:21 np0005473739 systemd[1]: Started libpod-conmon-564b89c637cc60477efa5add0ade5734fd5421e89d03a9de2d014b0d8eab9080.scope.
Oct  7 09:32:21 np0005473739 podman[90251]: 2025-10-07 13:32:21.830008782 +0000 UTC m=+0.022224176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:21 np0005473739 podman[90251]: 2025-10-07 13:32:21.950398876 +0000 UTC m=+0.142614270 container init 564b89c637cc60477efa5add0ade5734fd5421e89d03a9de2d014b0d8eab9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:21 np0005473739 podman[90251]: 2025-10-07 13:32:21.963601503 +0000 UTC m=+0.155816877 container start 564b89c637cc60477efa5add0ade5734fd5421e89d03a9de2d014b0d8eab9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:21 np0005473739 podman[90251]: 2025-10-07 13:32:21.967268867 +0000 UTC m=+0.159484241 container attach 564b89c637cc60477efa5add0ade5734fd5421e89d03a9de2d014b0d8eab9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 09:32:21 np0005473739 loving_thompson[90272]: 167 167
Oct  7 09:32:21 np0005473739 systemd[1]: libpod-564b89c637cc60477efa5add0ade5734fd5421e89d03a9de2d014b0d8eab9080.scope: Deactivated successfully.
Oct  7 09:32:21 np0005473739 podman[90251]: 2025-10-07 13:32:21.972671804 +0000 UTC m=+0.164887178 container died 564b89c637cc60477efa5add0ade5734fd5421e89d03a9de2d014b0d8eab9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b540e9dcf581226a4448321baf8e30fd27cb560dee054d2cef9180b564dd8240-merged.mount: Deactivated successfully.
Oct  7 09:32:22 np0005473739 podman[90251]: 2025-10-07 13:32:22.061841264 +0000 UTC m=+0.254056638 container remove 564b89c637cc60477efa5add0ade5734fd5421e89d03a9de2d014b0d8eab9080 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_thompson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:32:22 np0005473739 systemd[1]: libpod-conmon-564b89c637cc60477efa5add0ade5734fd5421e89d03a9de2d014b0d8eab9080.scope: Deactivated successfully.
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: mClockScheduler: set_osd_capacity_params_from_config: osd_bandwidth_cost_per_io: 499321.90 bytes/io, osd_bandwidth_capacity_per_shard 157286400.00 bytes/second
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: osd.2:0.OSDShard using op scheduler mclock_scheduler, cutoff=196
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bdev(0x55f4fc736c00 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bdev(0x55f4fc736c00 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bdev(0x55f4fc736c00 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _set_cache_sizes cache_size 1073741824 meta 0.45 kv 0.45 kv_onode 0.04 data 0.06
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bdev(0x55f4fc737400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bdev(0x55f4fc737400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bdev(0x55f4fc737400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluefs mount
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluefs mount shared_bdev_used = 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: RocksDB version: 7.9.2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Git sha 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: DB SUMMARY
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: DB Session ID:  P7CGIDJI77P8MFE5I1ZE
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: CURRENT file:  CURRENT
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: IDENTITY file:  IDENTITY
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                         Options.error_if_exists: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.create_if_missing: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                         Options.paranoid_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                                     Options.env: 0x55f4fc707c70
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                                Options.info_log: 0x55f4fb8fa8a0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_file_opening_threads: 16
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                              Options.statistics: (nil)
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.use_fsync: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.max_log_file_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                         Options.allow_fallocate: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.use_direct_reads: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.create_missing_column_families: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                              Options.db_log_dir: 
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                                 Options.wal_dir: db.wal
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.advise_random_on_open: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.write_buffer_manager: 0x55f4fc810460
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                            Options.rate_limiter: (nil)
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.unordered_write: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.row_cache: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                              Options.wal_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.allow_ingest_behind: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.two_write_queues: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.manual_wal_flush: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.wal_compression: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.atomic_flush: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.log_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.allow_data_in_errors: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.db_host_id: __hostname__
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.max_background_jobs: 4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.max_background_compactions: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.max_subcompactions: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.max_open_files: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.bytes_per_sync: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.max_background_flushes: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Compression algorithms supported:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: #011kZSTD supported: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: #011kXpressCompression supported: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: #011kBZip2Compression supported: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: #011kZSTDNotFinalCompression supported: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: #011kLZ4Compression supported: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: #011kZlibCompression supported: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: #011kLZ4HCCompression supported: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: #011kSnappyCompression supported: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_readonly.cc:25] Opening the db in read only mode
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f4fb8e71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa2c0)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f4fb8e71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f4fb8e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f4fb8e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f4fb8e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f4fb8e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa2c0)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f4fb8e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f4fb8e7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f4fb8e7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa240)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f4fb8e7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:635] 	(skipping printing options)
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 08e03913-6658-4f74-8dc9-cb8ac6d324e4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843942125286, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843942125505, "job": 1, "event": "recovery_finished"}
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old nid_max 1025
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta old blobid_max 10240
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta ondisk_format 4 compat_ondisk_format 3
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _open_super_meta min_alloc_size 0x1000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: freelist init
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: freelist _read_cfg
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _init_alloc loaded 20 GiB in 2 extents, allocator type hybrid, capacity 0x4ffc00000, block size 0x1000, free 0x4ffbfd000, fragmentation 1.9e-07
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:496] Shutdown: canceling all background work
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:704] Shutdown complete
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluefs umount
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bdev(0x55f4fc737400 /var/lib/ceph/osd/ceph-2/block) close
Oct  7 09:32:22 np0005473739 podman[90489]: 2025-10-07 13:32:22.212712694 +0000 UTC m=+0.045990281 container create 8ed29550973c5ad510ae28a057998cf7a86d3133d9a022c05e8b988f78c2430f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct  7 09:32:22 np0005473739 systemd[1]: Started libpod-conmon-8ed29550973c5ad510ae28a057998cf7a86d3133d9a022c05e8b988f78c2430f.scope.
Oct  7 09:32:22 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c724ab7782736045a39077172d66c6b1b3a4c4203ab566039fbdc9306208c81f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c724ab7782736045a39077172d66c6b1b3a4c4203ab566039fbdc9306208c81f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c724ab7782736045a39077172d66c6b1b3a4c4203ab566039fbdc9306208c81f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c724ab7782736045a39077172d66c6b1b3a4c4203ab566039fbdc9306208c81f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:22 np0005473739 podman[90489]: 2025-10-07 13:32:22.188859137 +0000 UTC m=+0.022136744 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:22 np0005473739 podman[90489]: 2025-10-07 13:32:22.290862904 +0000 UTC m=+0.124140521 container init 8ed29550973c5ad510ae28a057998cf7a86d3133d9a022c05e8b988f78c2430f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lederberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:22 np0005473739 podman[90489]: 2025-10-07 13:32:22.298309813 +0000 UTC m=+0.131587400 container start 8ed29550973c5ad510ae28a057998cf7a86d3133d9a022c05e8b988f78c2430f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:22 np0005473739 podman[90489]: 2025-10-07 13:32:22.306187264 +0000 UTC m=+0.139464851 container attach 8ed29550973c5ad510ae28a057998cf7a86d3133d9a022c05e8b988f78c2430f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lederberg, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bdev(0x55f4fc737400 /var/lib/ceph/osd/ceph-2/block) open path /var/lib/ceph/osd/ceph-2/block
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bdev(0x55f4fc737400 /var/lib/ceph/osd/ceph-2/block) ioctl(F_SET_FILE_RW_HINT) on /var/lib/ceph/osd/ceph-2/block failed: (22) Invalid argument
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bdev(0x55f4fc737400 /var/lib/ceph/osd/ceph-2/block) open size 21470642176 (0x4ffc00000, 20 GiB) block_size 4096 (4 KiB) rotational device, discard supported
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluefs add_block_device bdev 1 path /var/lib/ceph/osd/ceph-2/block size 20 GiB
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluefs mount
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluefs _init_alloc shared, id 1, capacity 0x4ffc00000, block size 0x10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluefs mount shared_bdev_used = 4718592
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _prepare_db_environment set db_paths to db,20397110067 db.slow,20397110067
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: RocksDB version: 7.9.2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Git sha 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Compile date 2025-05-06 23:30:25
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: DB SUMMARY
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: DB Session ID:  P7CGIDJI77P8MFE5I1ZF
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: CURRENT file:  CURRENT
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: IDENTITY file:  IDENTITY
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: MANIFEST file:  MANIFEST-000032 size: 1007 Bytes
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: SST files in db dir, Total Num: 1, files: 000030.sst 
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: SST files in db.slow dir, Total Num: 0, files: 
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Write Ahead Log file in db.wal: 000031.log size: 5093 ; 
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                         Options.error_if_exists: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.create_if_missing: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                         Options.paranoid_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.flush_verify_memtable_count: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.track_and_verify_wals_in_manifest: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.verify_sst_unique_id_in_manifest: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                                     Options.env: 0x55f4fc8b8b60
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                                      Options.fs: LegacyFileSystem
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                                Options.info_log: 0x55f4fb8fa620
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_file_opening_threads: 16
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                              Options.statistics: (nil)
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.use_fsync: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.max_log_file_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.max_manifest_file_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.log_file_time_to_roll: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.keep_log_file_num: 1000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.recycle_log_file_num: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                         Options.allow_fallocate: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.allow_mmap_reads: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.allow_mmap_writes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.use_direct_reads: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.use_direct_io_for_flush_and_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.create_missing_column_families: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                              Options.db_log_dir: 
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                                 Options.wal_dir: db.wal
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.table_cache_numshardbits: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                         Options.WAL_ttl_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.WAL_size_limit_MB: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.max_write_batch_group_size_bytes: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.manifest_preallocation_size: 4194304
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                     Options.is_fd_close_on_exec: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.advise_random_on_open: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.db_write_buffer_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.write_buffer_manager: 0x55f4fc8106e0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.access_hint_on_compaction_start: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.random_access_max_buffer_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                      Options.use_adaptive_mutex: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                            Options.rate_limiter: (nil)
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.sst_file_manager.rate_bytes_per_sec: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.wal_recovery_mode: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.enable_thread_tracking: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.enable_pipelined_write: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.unordered_write: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.allow_concurrent_memtable_write: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.enable_write_thread_adaptive_yield: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.write_thread_max_yield_usec: 100
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.write_thread_slow_yield_usec: 3
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.row_cache: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                              Options.wal_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.avoid_flush_during_recovery: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.allow_ingest_behind: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.two_write_queues: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.manual_wal_flush: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.wal_compression: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.atomic_flush: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.avoid_unnecessary_blocking_io: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.persist_stats_to_disk: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.write_dbid_to_manifest: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.log_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.file_checksum_gen_factory: Unknown
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.best_efforts_recovery: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bgerror_resume_count: 2147483647
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bgerror_resume_retry_interval: 1000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.allow_data_in_errors: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.db_host_id: __hostname__
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.enforce_single_del_contracts: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.max_background_jobs: 4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.max_background_compactions: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.max_subcompactions: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.avoid_flush_during_shutdown: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.writable_file_max_buffer_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.delayed_write_rate : 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.max_total_wal_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.delete_obsolete_files_period_micros: 21600000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.stats_dump_period_sec: 600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.stats_persist_period_sec: 600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.stats_history_buffer_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.max_open_files: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.bytes_per_sync: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                      Options.wal_bytes_per_sync: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.strict_bytes_per_sync: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.compaction_readahead_size: 2097152
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.max_background_flushes: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Compression algorithms supported:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: 	kZSTD supported: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: 	kXpressCompression supported: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: 	kBZip2Compression supported: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: 	kZSTDNotFinalCompression supported: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: 	kLZ4Compression supported: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: 	kZlibCompression supported: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: 	kLZ4HCCompression supported: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: 	kSnappyCompression supported: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Fast CRC32 supported: Supported on x86
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: DMutex implementation: pthread_mutex_t
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5527] Recovering from manifest file: db/MANIFEST-000032
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [default]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: .T:int64_array.b:bitwise_xor
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f4fb8e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-0]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f4fb8e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-1]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f4fb8e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [m-2]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f4fb8e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-0]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f4fb8e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-1]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8faa20)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 0
  pin_l0_filter_and_index_blocks_in_cache: 0
  pin_top_level_index_and_filter: 1
  index_type: 0
  data_block_index_type: 0
  index_shortening: 1
  data_block_hash_table_util_ratio: 0.750000
  checksum: 4
  no_block_cache: 0
  block_cache: 0x55f4fb8e71f0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 483183820
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: bloomfilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 5
  enable_index_compression: 1
  block_align: 0
  max_auto_readahead_size: 262144
  prepopulate_block_cache: 0
  initial_auto_readahead_size: 8192
  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [p-2]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8faa20)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f4fb8e71f0#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 483183820#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-0]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f4fb8e7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-1]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f4fb8e7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:630] --------------- Options for column family [O-2]:
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.comparator: leveldb.BytewiseComparator
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:           Options.merge_operator: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.compaction_filter_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.sst_partitioner_factory: None
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.memtable_factory: SkipListFactory
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.table_factory: BlockBasedTable
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            table_factory options:   flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55f4fb8fa380)#012  cache_index_and_filter_blocks: 1#012  cache_index_and_filter_blocks_with_high_priority: 0#012  pin_l0_filter_and_index_blocks_in_cache: 0#012  pin_top_level_index_and_filter: 1#012  index_type: 0#012  data_block_index_type: 0#012  index_shortening: 1#012  data_block_hash_table_util_ratio: 0.750000#012  checksum: 4#012  no_block_cache: 0#012  block_cache: 0x55f4fb8e7090#012  block_cache_name: BinnedLRUCache#012  block_cache_options:#012    capacity : 536870912#012    num_shard_bits : 4#012    strict_capacity_limit : 0#012    high_pri_pool_ratio: 0.000#012  block_cache_compressed: (nil)#012  persistent_cache: (nil)#012  block_size: 4096#012  block_size_deviation: 10#012  block_restart_interval: 16#012  index_block_restart_interval: 1#012  metadata_block_size: 4096#012  partition_filters: 0#012  use_delta_encoding: 1#012  filter_policy: bloomfilter#012  whole_key_filtering: 1#012  verify_compression: 0#012  read_amp_bytes_per_bit: 0#012  format_version: 5#012  enable_index_compression: 1#012  block_align: 0#012  max_auto_readahead_size: 262144#012  prepopulate_block_cache: 0#012  initial_auto_readahead_size: 8192#012  num_file_reads_for_auto_readahead: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.write_buffer_size: 16777216
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.max_write_buffer_number: 64
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.compression: LZ4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression: Disabled
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_insert_with_hint_prefix_extractor: nullptr
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.num_levels: 7
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:        Options.min_write_buffer_number_to_merge: 6
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_number_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:     Options.max_write_buffer_size_to_maintain: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.bottommost_compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.bottommost_compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.bottommost_compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.bottommost_compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:            Options.compression_opts.window_bits: -14
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.level: 32767
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.compression_opts.strategy: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.zstd_max_train_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.use_zstd_dict_trainer: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.parallel_threads: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                  Options.compression_opts.enabled: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:         Options.compression_opts.max_dict_buffer_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.level0_file_num_compaction_trigger: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.level0_slowdown_writes_trigger: 20
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:              Options.level0_stop_writes_trigger: 36
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.target_file_size_base: 67108864
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:             Options.target_file_size_multiplier: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.max_bytes_for_level_base: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.level_compaction_dynamic_level_bytes: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.max_bytes_for_level_multiplier: 8.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:       Options.max_sequential_skip_in_iterations: 8
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_compaction_bytes: 1677721600
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.ignore_max_compaction_bytes_for_input: true
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.arena_block_size: 1048576
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.soft_pending_compaction_bytes_limit: 68719476736
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.hard_pending_compaction_bytes_limit: 274877906944
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.disable_auto_compactions: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                        Options.compaction_style: kCompactionStyleLevel
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.compaction_pri: kMinOverlappingRatio
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.size_ratio: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.min_merge_width: 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.compression_size_percent: -1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.compaction_options_fifo.allow_compaction: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.table_properties_collectors: CompactOnDeletionCollector (Sliding window size = 32768 Deletion trigger = 16384 Deletion ratio = 0);
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.inplace_update_support: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                 Options.inplace_update_num_locks: 10000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_prefix_bloom_size_ratio: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:               Options.memtable_whole_key_filtering: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:   Options.memtable_huge_page_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.bloom_locality: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                    Options.max_successive_merges: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.optimize_filters_for_hits: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.paranoid_file_checks: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.force_consistency_checks: 1
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.report_bg_io_stats: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                               Options.ttl: 2592000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.periodic_compaction_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:  Options.preclude_last_level_data_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:    Options.preserve_internal_time_seconds: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                       Options.enable_blob_files: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                           Options.min_blob_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                          Options.blob_file_size: 268435456
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                   Options.blob_compression_type: NoCompression
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.enable_blob_garbage_collection: false
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:      Options.blob_garbage_collection_age_cutoff: 0.250000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.blob_garbage_collection_force_threshold: 1.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:          Options.blob_compaction_readahead_size: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb:                Options.blob_file_starting_level: 0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: Options.experimental_mempurge_threshold: 0.000000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/column_family.cc:635] #011(skipping printing options)
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5566] Recovered from manifest file:db/MANIFEST-000032 succeeded,manifest_file_number is 32, next_file_number is 34, last_sequence is 12, log_number is 5,prev_log_number is 0,max_column_family is 11,min_log_number_to_keep is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [default] (ID 0), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-0] (ID 1), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-1] (ID 2), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [m-2] (ID 3), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-0] (ID 4), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-1] (ID 5), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [p-2] (ID 6), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-0] (ID 7), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-1] (ID 8), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [O-2] (ID 9), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [L] (ID 10), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5581] Column family [P] (ID 11), log number is 5
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_open.cc:539] DB ID: 08e03913-6658-4f74-8dc9-cb8ac6d324e4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843942396066, "job": 1, "event": "recovery_started", "wal_files": [31]}
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_open.cc:1043] Recovering log #31 mode 2
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843942403515, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 35, "file_size": 1272, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13, "largest_seqno": 21, "table_properties": {"data_size": 128, "index_size": 27, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 87, "raw_average_key_size": 17, "raw_value_size": 82, "raw_average_value_size": 16, "num_data_blocks": 1, "num_entries": 5, "num_filter_entries": 5, "num_deletions": 0, "num_merge_operands": 2, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": ".T:int64_array.b:bitwise_xor", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843942, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "08e03913-6658-4f74-8dc9-cb8ac6d324e4", "db_session_id": "P7CGIDJI77P8MFE5I1ZF", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843942410176, "cf_name": "p-0", "job": 1, "event": "table_file_creation", "file_number": 36, "file_size": 1593, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14, "largest_seqno": 15, "table_properties": {"data_size": 467, "index_size": 39, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 72, "raw_average_key_size": 36, "raw_value_size": 567, "raw_average_value_size": 283, "num_data_blocks": 1, "num_entries": 2, "num_filter_entries": 2, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "p-0", "column_family_id": 4, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843942, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "08e03913-6658-4f74-8dc9-cb8ac6d324e4", "db_session_id": "P7CGIDJI77P8MFE5I1ZF", "orig_file_number": 36, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843942412768, "cf_name": "O-2", "job": 1, "event": "table_file_creation", "file_number": 37, "file_size": 1275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16, "largest_seqno": 16, "table_properties": {"data_size": 121, "index_size": 64, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 69, "raw_key_size": 55, "raw_average_key_size": 55, "raw_value_size": 50, "raw_average_value_size": 50, "num_data_blocks": 1, "num_entries": 1, "num_filter_entries": 1, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "O-2", "column_family_id": 9, "comparator": "leveldb.BytewiseComparator", "merge_operator": "nullptr", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "LZ4", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843942, "oldest_key_time": 0, "file_creation_time": 0, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "08e03913-6658-4f74-8dc9-cb8ac6d324e4", "db_session_id": "P7CGIDJI77P8MFE5I1ZF", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759843942414123, "job": 1, "event": "recovery_finished"}
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/version_set.cc:5047] Creating manifest 40
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: osd.1 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 33.955 iops: 8692.563 elapsed_sec: 0.345
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: log_channel(cluster) log [WRN] : OSD bench result of 8692.563290 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: osd.1 0 waiting for initial osdmap
Oct  7 09:32:22 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1[89058]: 2025-10-07T13:32:22.421+0000 7f8cb0b86640 -1 osd.1 0 waiting for initial osdmap
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: osd.1 12 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: osd.1 12 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: osd.1 12 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: osd.1 12 check_osdmap_features require_osd_release unknown -> reef
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_open.cc:1987] SstFileManager instance 0x55f4fba54000
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: DB pointer 0x55f4fc7f9a00
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _open_db opened rocksdb path db options compression=kLZ4Compression,max_write_buffer_number=64,min_write_buffer_number_to_merge=6,compaction_style=kCompactionStyleLevel,write_buffer_size=16777216,max_background_jobs=4,level0_file_num_compaction_trigger=8,max_bytes_for_level_base=1073741824,max_bytes_for_level_multiplier=8,compaction_readahead_size=2MB,max_total_wal_size=1073741824,writable_file_max_buffer_size=0
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super from 4, latest 4
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _upgrade_super done
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 0.1 total, 0.1 interval#012Cumulative writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 GB, 0.00 MB/s#012Cumulative WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 0 writes, 0 keys, 0 commit groups, 0.0 writes per commit group, ingest: 0.00 MB, 0.00 MB/s#012Interval WAL: 0 writes, 0 syncs, 0.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.02 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 460.80 MB usage: 0.67 KB table_size: 0 occupancy: 18446744073709551615 collections: 1 last_copies: 8 last_secs: 2.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): FilterBlock(3,0.33 KB,6.95388e-05%) IndexBlock(3,0.34 KB,7.28501e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 0.1 total, 0.1 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 460.80 MB usag
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/cephfs/cls_cephfs.cc:201: loading cephfs
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: <cls> /home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos9/DIST/centos9/MACHINE_SIZE/gigantic/release/18.2.7/rpm/el9/BUILD/ceph-18.2.7/src/cls/hello/cls_hello.cc:316: loading cls_hello
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: _get_class not permitted to load lua
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: _get_class not permitted to load sdk
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: _get_class not permitted to load test_remote_reads
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for clients
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: osd.2 0 crush map has features 288232575208783872 was 8705, adjusting msgr requires for mons
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: osd.2 0 crush map has features 288232575208783872, adjusting msgr requires for osds
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: osd.2 0 check_osdmap_features enabling on-disk ERASURE CODES compat feature
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: osd.2 0 load_pgs
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: osd.2 0 load_pgs opened 0 pgs
Oct  7 09:32:22 np0005473739 ceph-osd[90092]: osd.2 0 log_to_monitors true
Oct  7 09:32:22 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2[90088]: 2025-10-07T13:32:22.447+0000 7fa3c3fc9740 -1 osd.2 0 log_to_monitors true
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: osd.1 12 set_numa_affinity not setting numa affinity
Oct  7 09:32:22 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-1[89058]: 2025-10-07T13:32:22.450+0000 7f8cac1ae640 -1 osd.1 12 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]} v 0) v1
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3895130875,v1:192.168.122.100:6811/3895130875]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: osd.1 12 _collect_metadata loop4:  no unique device id for loop4: fallback method has no model nor serial
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.1 v2:192.168.122.100:6806/215390500; not ready for session (expect reconnect)
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.1: (2) No such file or directory
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v33: 1 pgs: 1 unknown; 0 B data, 426 MiB used, 20 GiB / 20 GiB avail
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:32:22
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Some PGs (1.000000) are unknown; try again later
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 21470642176
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 1 (current 1)
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e12 do_prune osdmap full prune enabled
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3895130875,v1:192.168.122.100:6811/3895130875]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e13 e13: 3 total, 2 up, 3 in
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : osd.1 [v2:192.168.122.100:6806/215390500,v1:192.168.122.100:6807/215390500] boot
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e13: 3 total, 2 up, 3 in
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]} v 0) v1
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3895130875,v1:192.168.122.100:6811/3895130875]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e13 create-or-move crush item name 'osd.2' initial_weight 0.0195 at location {host=compute-0,root=default}
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 1} v 0) v1
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 1}]: dispatch
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool create", "format": "json", "pool": ".mgr", "pg_num": 1, "pg_num_min": 1, "pg_num_max": 32, "yes_i_really_mean_it": true}]': finished
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]: dispatch
Oct  7 09:32:22 np0005473739 ceph-mon[74295]: from='osd.2 [v2:192.168.122.100:6810/3895130875,v1:192.168.122.100:6811/3895130875]' entity='osd.2' cmd=[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]: dispatch
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: osd.1 13 state: booting -> active
Oct  7 09:32:22 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 13 pg[1.0( empty local-lis/les=0/0 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[12,13)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:32:23 np0005473739 great_lederberg[90506]: {
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "osd_id": 2,
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "type": "bluestore"
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:    },
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "osd_id": 1,
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "type": "bluestore"
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:    },
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "osd_id": 0,
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:        "type": "bluestore"
Oct  7 09:32:23 np0005473739 great_lederberg[90506]:    }
Oct  7 09:32:23 np0005473739 great_lederberg[90506]: }
Oct  7 09:32:23 np0005473739 systemd[1]: libpod-8ed29550973c5ad510ae28a057998cf7a86d3133d9a022c05e8b988f78c2430f.scope: Deactivated successfully.
Oct  7 09:32:23 np0005473739 podman[90489]: 2025-10-07 13:32:23.277350154 +0000 UTC m=+1.110627761 container died 8ed29550973c5ad510ae28a057998cf7a86d3133d9a022c05e8b988f78c2430f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lederberg, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 09:32:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c724ab7782736045a39077172d66c6b1b3a4c4203ab566039fbdc9306208c81f-merged.mount: Deactivated successfully.
Oct  7 09:32:23 np0005473739 podman[90489]: 2025-10-07 13:32:23.334919859 +0000 UTC m=+1.168197446 container remove 8ed29550973c5ad510ae28a057998cf7a86d3133d9a022c05e8b988f78c2430f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lederberg, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:23 np0005473739 systemd[1]: libpod-conmon-8ed29550973c5ad510ae28a057998cf7a86d3133d9a022c05e8b988f78c2430f.scope: Deactivated successfully.
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:23 np0005473739 python3[90781]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /home/ceph-admin/specs/ceph_spec.yaml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .osdmap.num_up_osds _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:23 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : purged_snaps scrub starts
Oct  7 09:32:23 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : purged_snaps scrub ok
Oct  7 09:32:23 np0005473739 podman[90817]: 2025-10-07 13:32:23.461282177 +0000 UTC m=+0.042880393 container create 7a3946946659e71d1bbe1dd1fc25f3f8cf41d2f39c95f0b0db307c3b14fe84ef (image=quay.io/ceph/ceph:v18, name=funny_jackson, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:32:23 np0005473739 systemd[1]: Started libpod-conmon-7a3946946659e71d1bbe1dd1fc25f3f8cf41d2f39c95f0b0db307c3b14fe84ef.scope.
Oct  7 09:32:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab6f758d086d5f1fe07b3a675a29b0aab50d0e01d6929bbf68a981e26561b5b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab6f758d086d5f1fe07b3a675a29b0aab50d0e01d6929bbf68a981e26561b5b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ab6f758d086d5f1fe07b3a675a29b0aab50d0e01d6929bbf68a981e26561b5b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:23 np0005473739 podman[90817]: 2025-10-07 13:32:23.443560515 +0000 UTC m=+0.025158731 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:23 np0005473739 podman[90817]: 2025-10-07 13:32:23.540309148 +0000 UTC m=+0.121907384 container init 7a3946946659e71d1bbe1dd1fc25f3f8cf41d2f39c95f0b0db307c3b14fe84ef (image=quay.io/ceph/ceph:v18, name=funny_jackson, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:23 np0005473739 podman[90817]: 2025-10-07 13:32:23.54666087 +0000 UTC m=+0.128259086 container start 7a3946946659e71d1bbe1dd1fc25f3f8cf41d2f39c95f0b0db307c3b14fe84ef (image=quay.io/ceph/ceph:v18, name=funny_jackson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 09:32:23 np0005473739 podman[90817]: 2025-10-07 13:32:23.549773649 +0000 UTC m=+0.131371895 container attach 7a3946946659e71d1bbe1dd1fc25f3f8cf41d2f39c95f0b0db307c3b14fe84ef (image=quay.io/ceph/ceph:v18, name=funny_jackson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e13 do_prune osdmap full prune enabled
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='osd.2 [v2:192.168.122.100:6810/3895130875,v1:192.168.122.100:6811/3895130875]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e14 e14: 3 total, 2 up, 3 in
Oct  7 09:32:23 np0005473739 ceph-osd[90092]: osd.2 0 done with init, starting boot process
Oct  7 09:32:23 np0005473739 ceph-osd[90092]: osd.2 0 start_boot
Oct  7 09:32:23 np0005473739 ceph-osd[90092]: osd.2 0 maybe_override_options_for_qos osd_max_backfills set to 1
Oct  7 09:32:23 np0005473739 ceph-osd[90092]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active set to 0
Oct  7 09:32:23 np0005473739 ceph-osd[90092]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_hdd set to 3
Oct  7 09:32:23 np0005473739 ceph-osd[90092]: osd.2 0 maybe_override_options_for_qos osd_recovery_max_active_ssd set to 10
Oct  7 09:32:23 np0005473739 ceph-osd[90092]: osd.2 0  bench count 12288000 bsize 4 KiB
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e14: 3 total, 2 up, 3 in
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:23 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:23 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3895130875; not ready for session (expect reconnect)
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:23 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: OSD bench result of 8692.563290 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.1. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool application enable", "format": "json", "pool": ".mgr", "app": "mgr", "yes_i_really_mean_it": true}]': finished
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: from='osd.2 [v2:192.168.122.100:6810/3895130875,v1:192.168.122.100:6811/3895130875]' entity='osd.2' cmd='[{"prefix": "osd crush set-device-class", "class": "hdd", "ids": ["2"]}]': finished
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: osd.1 [v2:192.168.122.100:6806/215390500,v1:192.168.122.100:6807/215390500] boot
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: from='osd.2 [v2:192.168.122.100:6810/3895130875,v1:192.168.122.100:6811/3895130875]' entity='osd.2' cmd=[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]: dispatch
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: from='osd.2 [v2:192.168.122.100:6810/3895130875,v1:192.168.122.100:6811/3895130875]' entity='osd.2' cmd='[{"prefix": "osd crush create-or-move", "id": 2, "weight":0.0195, "args": ["host=compute-0", "root=default"]}]': finished
Oct  7 09:32:23 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 14 pg[1.0( empty local-lis/les=13/14 n=0 ec=12/12 lis/c=0/0 les/c/f=0/0/0 sis=13) [1] r=0 lpr=13 pi=[12,13)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:32:23 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] creating main.db for devicehealth
Oct  7 09:32:23 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 09:32:23 np0005473739 ceph-mgr[74587]: [devicehealth ERROR root] Fail to parse JSON result from daemon osd.2 ()
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon metadata", "id": "compute-0"} v 0) v1
Oct  7 09:32:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mon metadata", "id": "compute-0"}]: dispatch
Oct  7 09:32:24 np0005473739 podman[91070]: 2025-10-07 13:32:24.175829045 +0000 UTC m=+0.056200841 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1810290422' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  7 09:32:24 np0005473739 funny_jackson[90861]: 
Oct  7 09:32:24 np0005473739 funny_jackson[90861]: {"fsid":"82044f27-a8da-5b2a-a297-ff6afc620e1f","health":{"status":"HEALTH_WARN","checks":{"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":109,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":14,"num_osds":3,"num_up_osds":2,"osd_up_since":1759843942,"num_in_osds":3,"osd_in_since":1759843923,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"unknown","count":1}],"num_pgs":1,"num_pools":1,"num_objects":0,"data_bytes":0,"bytes_used":446992384,"bytes_avail":21023649792,"bytes_total":21470642176,"unknown_pgs_ratio":1},"fsmap":{"epoch":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":1,"modified":"2025-10-07T13:30:32.279410+0000","services":{}},"progress_events":{}}
Oct  7 09:32:24 np0005473739 systemd[1]: libpod-7a3946946659e71d1bbe1dd1fc25f3f8cf41d2f39c95f0b0db307c3b14fe84ef.scope: Deactivated successfully.
Oct  7 09:32:24 np0005473739 conmon[90861]: conmon 7a3946946659e71d1bbe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7a3946946659e71d1bbe1dd1fc25f3f8cf41d2f39c95f0b0db307c3b14fe84ef.scope/container/memory.events
Oct  7 09:32:24 np0005473739 podman[90817]: 2025-10-07 13:32:24.235970796 +0000 UTC m=+0.817569032 container died 7a3946946659e71d1bbe1dd1fc25f3f8cf41d2f39c95f0b0db307c3b14fe84ef (image=quay.io/ceph/ceph:v18, name=funny_jackson, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 09:32:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5ab6f758d086d5f1fe07b3a675a29b0aab50d0e01d6929bbf68a981e26561b5b-merged.mount: Deactivated successfully.
Oct  7 09:32:24 np0005473739 podman[91070]: 2025-10-07 13:32:24.318414245 +0000 UTC m=+0.198786041 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 09:32:24 np0005473739 podman[90817]: 2025-10-07 13:32:24.389672229 +0000 UTC m=+0.971270435 container remove 7a3946946659e71d1bbe1dd1fc25f3f8cf41d2f39c95f0b0db307c3b14fe84ef (image=quay.io/ceph/ceph:v18, name=funny_jackson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 09:32:24 np0005473739 systemd[1]: libpod-conmon-7a3946946659e71d1bbe1dd1fc25f3f8cf41d2f39c95f0b0db307c3b14fe84ef.scope: Deactivated successfully.
Oct  7 09:32:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v36: 1 pgs: 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  7 09:32:24 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3895130875; not ready for session (expect reconnect)
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:24 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e14 do_prune osdmap full prune enabled
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : mgrmap e9: compute-0.kdyrcd(active, since 62s)
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e15 e15: 3 total, 2 up, 3 in
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: from='admin socket' entity='admin socket' cmd='smart' args=[json]: dispatch
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: from='admin socket' entity='admin socket' cmd=smart args=[json]: finished
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e15: 3 total, 2 up, 3 in
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:24 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:32:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:24 np0005473739 python3[91226]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create vms  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:24 np0005473739 podman[91251]: 2025-10-07 13:32:24.909628334 +0000 UTC m=+0.043099028 container create 9feeb8048d95a09d36fa2fbd8cb3753a3ee023893f75b2616b0947b2f398bf6c (image=quay.io/ceph/ceph:v18, name=priceless_wright, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 09:32:24 np0005473739 systemd[1]: Started libpod-conmon-9feeb8048d95a09d36fa2fbd8cb3753a3ee023893f75b2616b0947b2f398bf6c.scope.
Oct  7 09:32:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f0f1c9fc1bd43020904745183b7d6b73a38214cb78255bb8c8db76f62a18f2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25f0f1c9fc1bd43020904745183b7d6b73a38214cb78255bb8c8db76f62a18f2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:24 np0005473739 podman[91251]: 2025-10-07 13:32:24.891913673 +0000 UTC m=+0.025384387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:25 np0005473739 podman[91251]: 2025-10-07 13:32:25.02102844 +0000 UTC m=+0.154499164 container init 9feeb8048d95a09d36fa2fbd8cb3753a3ee023893f75b2616b0947b2f398bf6c (image=quay.io/ceph/ceph:v18, name=priceless_wright, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 09:32:25 np0005473739 podman[91251]: 2025-10-07 13:32:25.028324846 +0000 UTC m=+0.161795540 container start 9feeb8048d95a09d36fa2fbd8cb3753a3ee023893f75b2616b0947b2f398bf6c (image=quay.io/ceph/ceph:v18, name=priceless_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 09:32:25 np0005473739 podman[91251]: 2025-10-07 13:32:25.039346877 +0000 UTC m=+0.172817601 container attach 9feeb8048d95a09d36fa2fbd8cb3753a3ee023893f75b2616b0947b2f398bf6c (image=quay.io/ceph/ceph:v18, name=priceless_wright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e15 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:32:25 np0005473739 podman[91385]: 2025-10-07 13:32:25.391201473 +0000 UTC m=+0.110565816 container create 6aab80110520892d7f1cc344cf729bdb1b96e53be0026b7877bf5dca2291839e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_benz, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:25 np0005473739 podman[91385]: 2025-10-07 13:32:25.301471659 +0000 UTC m=+0.020835972 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:25 np0005473739 systemd[1]: Started libpod-conmon-6aab80110520892d7f1cc344cf729bdb1b96e53be0026b7877bf5dca2291839e.scope.
Oct  7 09:32:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4170071207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:25 np0005473739 podman[91385]: 2025-10-07 13:32:25.563188411 +0000 UTC m=+0.282552734 container init 6aab80110520892d7f1cc344cf729bdb1b96e53be0026b7877bf5dca2291839e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_benz, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:25 np0005473739 podman[91385]: 2025-10-07 13:32:25.571108912 +0000 UTC m=+0.290473215 container start 6aab80110520892d7f1cc344cf729bdb1b96e53be0026b7877bf5dca2291839e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_benz, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 09:32:25 np0005473739 bold_benz[91420]: 167 167
Oct  7 09:32:25 np0005473739 systemd[1]: libpod-6aab80110520892d7f1cc344cf729bdb1b96e53be0026b7877bf5dca2291839e.scope: Deactivated successfully.
Oct  7 09:32:25 np0005473739 conmon[91420]: conmon 6aab80110520892d7f1c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6aab80110520892d7f1cc344cf729bdb1b96e53be0026b7877bf5dca2291839e.scope/container/memory.events
Oct  7 09:32:25 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3895130875; not ready for session (expect reconnect)
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:25 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:25 np0005473739 podman[91385]: 2025-10-07 13:32:25.624942643 +0000 UTC m=+0.344306976 container attach 6aab80110520892d7f1cc344cf729bdb1b96e53be0026b7877bf5dca2291839e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_benz, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:25 np0005473739 podman[91385]: 2025-10-07 13:32:25.625297401 +0000 UTC m=+0.344661694 container died 6aab80110520892d7f1cc344cf729bdb1b96e53be0026b7877bf5dca2291839e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_benz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: Cluster is now healthy
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/4170071207' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-023d514d6e0cd201734376d4abcd71eef57616ef256aabf1b7629837e5c9b690-merged.mount: Deactivated successfully.
Oct  7 09:32:25 np0005473739 podman[91385]: 2025-10-07 13:32:25.737256712 +0000 UTC m=+0.456621025 container remove 6aab80110520892d7f1cc344cf729bdb1b96e53be0026b7877bf5dca2291839e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 09:32:25 np0005473739 systemd[1]: libpod-conmon-6aab80110520892d7f1cc344cf729bdb1b96e53be0026b7877bf5dca2291839e.scope: Deactivated successfully.
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e15 do_prune osdmap full prune enabled
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4170071207' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e16 e16: 3 total, 2 up, 3 in
Oct  7 09:32:25 np0005473739 priceless_wright[91310]: pool 'vms' created
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e16: 3 total, 2 up, 3 in
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:25 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:25 np0005473739 systemd[1]: libpod-9feeb8048d95a09d36fa2fbd8cb3753a3ee023893f75b2616b0947b2f398bf6c.scope: Deactivated successfully.
Oct  7 09:32:25 np0005473739 podman[91251]: 2025-10-07 13:32:25.852276529 +0000 UTC m=+0.985747243 container died 9feeb8048d95a09d36fa2fbd8cb3753a3ee023893f75b2616b0947b2f398bf6c (image=quay.io/ceph/ceph:v18, name=priceless_wright, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:26 np0005473739 podman[91456]: 2025-10-07 13:32:26.095780018 +0000 UTC m=+0.214111342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-25f0f1c9fc1bd43020904745183b7d6b73a38214cb78255bb8c8db76f62a18f2-merged.mount: Deactivated successfully.
Oct  7 09:32:26 np0005473739 podman[91251]: 2025-10-07 13:32:26.352478281 +0000 UTC m=+1.485948975 container remove 9feeb8048d95a09d36fa2fbd8cb3753a3ee023893f75b2616b0947b2f398bf6c (image=quay.io/ceph/ceph:v18, name=priceless_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 09:32:26 np0005473739 podman[91456]: 2025-10-07 13:32:26.378997297 +0000 UTC m=+0.497328541 container create e62586e5667286f261a33b4854e5b5ade1e8b377ac9c9b7957f39b70fae1c1a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:26 np0005473739 systemd[1]: libpod-conmon-9feeb8048d95a09d36fa2fbd8cb3753a3ee023893f75b2616b0947b2f398bf6c.scope: Deactivated successfully.
Oct  7 09:32:26 np0005473739 systemd[1]: Started libpod-conmon-e62586e5667286f261a33b4854e5b5ade1e8b377ac9c9b7957f39b70fae1c1a7.scope.
Oct  7 09:32:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4763b3702dc62d0903cda5c0ab2a54f305b717388ca4dc15877a21046dd027/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4763b3702dc62d0903cda5c0ab2a54f305b717388ca4dc15877a21046dd027/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4763b3702dc62d0903cda5c0ab2a54f305b717388ca4dc15877a21046dd027/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4763b3702dc62d0903cda5c0ab2a54f305b717388ca4dc15877a21046dd027/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:26 np0005473739 podman[91456]: 2025-10-07 13:32:26.481998069 +0000 UTC m=+0.600329323 container init e62586e5667286f261a33b4854e5b5ade1e8b377ac9c9b7957f39b70fae1c1a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:26 np0005473739 podman[91456]: 2025-10-07 13:32:26.491984443 +0000 UTC m=+0.610315687 container start e62586e5667286f261a33b4854e5b5ade1e8b377ac9c9b7957f39b70fae1c1a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v39: 2 pgs: 1 unknown, 1 creating+peering; 0 B data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  7 09:32:26 np0005473739 podman[91456]: 2025-10-07 13:32:26.508429872 +0000 UTC m=+0.626761136 container attach e62586e5667286f261a33b4854e5b5ade1e8b377ac9c9b7957f39b70fae1c1a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:26 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3895130875; not ready for session (expect reconnect)
Oct  7 09:32:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:26 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:26 np0005473739 python3[91509]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create volumes  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:26 np0005473739 podman[91510]: 2025-10-07 13:32:26.75821 +0000 UTC m=+0.053676147 container create e4548de6e7bf0da27e395a9f8d4ca1a6a6e6a4fc09aa58e12171fdb81222872c (image=quay.io/ceph/ceph:v18, name=pedantic_carson, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 09:32:26 np0005473739 systemd[1]: Started libpod-conmon-e4548de6e7bf0da27e395a9f8d4ca1a6a6e6a4fc09aa58e12171fdb81222872c.scope.
Oct  7 09:32:26 np0005473739 podman[91510]: 2025-10-07 13:32:26.726651317 +0000 UTC m=+0.022117514 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:26 np0005473739 ceph-mon[74295]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a318174d32ff75c8b1b18445b1e1449096246f896f7ee2fa510d4445ca7b33/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/02a318174d32ff75c8b1b18445b1e1449096246f896f7ee2fa510d4445ca7b33/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:26 np0005473739 podman[91510]: 2025-10-07 13:32:26.841157602 +0000 UTC m=+0.136623799 container init e4548de6e7bf0da27e395a9f8d4ca1a6a6e6a4fc09aa58e12171fdb81222872c (image=quay.io/ceph/ceph:v18, name=pedantic_carson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:26 np0005473739 podman[91510]: 2025-10-07 13:32:26.849071503 +0000 UTC m=+0.144537650 container start e4548de6e7bf0da27e395a9f8d4ca1a6a6e6a4fc09aa58e12171fdb81222872c (image=quay.io/ceph/ceph:v18, name=pedantic_carson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:26 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/4170071207' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "vms", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:26 np0005473739 podman[91510]: 2025-10-07 13:32:26.862512175 +0000 UTC m=+0.157978342 container attach e4548de6e7bf0da27e395a9f8d4ca1a6a6e6a4fc09aa58e12171fdb81222872c (image=quay.io/ceph/ceph:v18, name=pedantic_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: osd.2 0 maybe_override_max_osd_capacity_for_qos osd bench result - bandwidth (MiB/sec): 31.313 iops: 8016.047 elapsed_sec: 0.374
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: log_channel(cluster) log [WRN] : OSD bench result of 8016.047143 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: osd.2 0 waiting for initial osdmap
Oct  7 09:32:27 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2[90088]: 2025-10-07T13:32:27.292+0000 7fa3c0760640 -1 osd.2 0 waiting for initial osdmap
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: osd.2 16 crush map has features 288514051259236352, adjusting msgr requires for clients
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: osd.2 16 crush map has features 288514051259236352 was 288232575208792577, adjusting msgr requires for mons
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: osd.2 16 crush map has features 3314933000852226048, adjusting msgr requires for osds
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: osd.2 16 check_osdmap_features require_osd_release unknown -> reef
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: osd.2 16 set_numa_affinity not setting numa affinity
Oct  7 09:32:27 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-osd-2[90088]: 2025-10-07T13:32:27.316+0000 7fa3bb571640 -1 osd.2 16 set_numa_affinity unable to identify public interface '' numa node: (2) No such file or directory
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: osd.2 16 _collect_metadata loop5:  no unique device id for loop5: fallback method has no model nor serial
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3490171700' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:27 np0005473739 ceph-mgr[74587]: mgr.server handle_open ignoring open from osd.2 v2:192.168.122.100:6810/3895130875; not ready for session (expect reconnect)
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:27 np0005473739 ceph-mgr[74587]: mgr finish mon failed to return metadata for osd.2: (2) No such file or directory
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]: [
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:    {
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:        "available": false,
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:        "ceph_device": false,
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:        "lsm_data": {},
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:        "lvs": [],
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:        "path": "/dev/sr0",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:        "rejected_reasons": [
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "Insufficient space (<5GB)",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "Has a FileSystem"
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:        ],
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:        "sys_api": {
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "actuators": null,
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "device_nodes": "sr0",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "devname": "sr0",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "human_readable_size": "482.00 KB",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "id_bus": "ata",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "model": "QEMU DVD-ROM",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "nr_requests": "2",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "parent": "/dev/sr0",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "partitions": {},
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "path": "/dev/sr0",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "removable": "1",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "rev": "2.5+",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "ro": "0",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "rotational": "0",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "sas_address": "",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "sas_device_handle": "",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "scheduler_mode": "mq-deadline",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "sectors": 0,
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "sectorsize": "2048",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "size": 493568.0,
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "support_discard": "2048",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "type": "disk",
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:            "vendor": "QEMU"
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:        }
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]:    }
Oct  7 09:32:27 np0005473739 elated_dubinsky[91479]: ]
Oct  7 09:32:27 np0005473739 systemd[1]: libpod-e62586e5667286f261a33b4854e5b5ade1e8b377ac9c9b7957f39b70fae1c1a7.scope: Deactivated successfully.
Oct  7 09:32:27 np0005473739 systemd[1]: libpod-e62586e5667286f261a33b4854e5b5ade1e8b377ac9c9b7957f39b70fae1c1a7.scope: Consumed 1.270s CPU time.
Oct  7 09:32:27 np0005473739 conmon[91479]: conmon e62586e5667286f261a3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e62586e5667286f261a33b4854e5b5ade1e8b377ac9c9b7957f39b70fae1c1a7.scope/container/memory.events
Oct  7 09:32:27 np0005473739 podman[91456]: 2025-10-07 13:32:27.765215153 +0000 UTC m=+1.883546397 container died e62586e5667286f261a33b4854e5b5ade1e8b377ac9c9b7957f39b70fae1c1a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2f4763b3702dc62d0903cda5c0ab2a54f305b717388ca4dc15877a21046dd027-merged.mount: Deactivated successfully.
Oct  7 09:32:27 np0005473739 podman[91456]: 2025-10-07 13:32:27.825689723 +0000 UTC m=+1.944020987 container remove e62586e5667286f261a33b4854e5b5ade1e8b377ac9c9b7957f39b70fae1c1a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_dubinsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:27 np0005473739 systemd[1]: libpod-conmon-e62586e5667286f261a33b4854e5b5ade1e8b377ac9c9b7957f39b70fae1c1a7.scope: Deactivated successfully.
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e16 do_prune osdmap full prune enabled
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: OSD bench result of 8016.047143 IOPS is not within the threshold limit range of 50.000000 IOPS and 500.000000 IOPS for osd.2. IOPS capacity is unchanged at 315.000000 IOPS. The recommendation is to establish the osd's IOPS capacity using other benchmark tools (e.g. Fio) and then override osd_mclock_max_capacity_iops_[hdd|ssd].
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3490171700' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3490171700' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e17 e17: 3 total, 3 up, 3 in
Oct  7 09:32:27 np0005473739 pedantic_carson[91525]: pool 'volumes' created
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : osd.2 [v2:192.168.122.100:6810/3895130875,v1:192.168.122.100:6811/3895130875] boot
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e17: 3 total, 3 up, 3 in
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd metadata", "id": 2} v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd metadata", "id": 2}]: dispatch
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: osd.2 17 state: booting -> active
Oct  7 09:32:27 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 17 pg[2.0( empty local-lis/les=0/0 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"} v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"} v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  7 09:32:27 np0005473739 systemd[1]: libpod-e4548de6e7bf0da27e395a9f8d4ca1a6a6e6a4fc09aa58e12171fdb81222872c.scope: Deactivated successfully.
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"} v 0) v1
Oct  7 09:32:27 np0005473739 podman[91510]: 2025-10-07 13:32:27.89671633 +0000 UTC m=+1.192182497 container died e4548de6e7bf0da27e395a9f8d4ca1a6a6e6a4fc09aa58e12171fdb81222872c (image=quay.io/ceph/ceph:v18, name=pedantic_carson, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  7 09:32:27 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Adjusting osd_memory_target on compute-0 to 43641k
Oct  7 09:32:27 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Adjusting osd_memory_target on compute-0 to 43641k
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=osd_memory_target}] v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mgr[74587]: [cephadm WARNING cephadm.serve] Unable to set osd_memory_target on compute-0 to 44688588: error parsing value: Value '44688588' is below minimum 939524096
Oct  7 09:32:27 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [WRN] : Unable to set osd_memory_target on compute-0 to 44688588: error parsing value: Value '44688588' is below minimum 939524096
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cd66cd26-000f-41f5-a2e8-387a39fc07bf does not exist
Oct  7 09:32:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 867b1f7e-bdaa-4907-99c5-929dfb038bf8 does not exist
Oct  7 09:32:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0c363fbc-2259-403a-b560-3f2705f89378 does not exist
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:32:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-02a318174d32ff75c8b1b18445b1e1449096246f896f7ee2fa510d4445ca7b33-merged.mount: Deactivated successfully.
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:32:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:32:27 np0005473739 podman[91510]: 2025-10-07 13:32:27.945168134 +0000 UTC m=+1.240634281 container remove e4548de6e7bf0da27e395a9f8d4ca1a6a6e6a4fc09aa58e12171fdb81222872c (image=quay.io/ceph/ceph:v18, name=pedantic_carson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 09:32:27 np0005473739 systemd[1]: libpod-conmon-e4548de6e7bf0da27e395a9f8d4ca1a6a6e6a4fc09aa58e12171fdb81222872c.scope: Deactivated successfully.
Oct  7 09:32:28 np0005473739 python3[93505]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create backups  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:28 np0005473739 podman[93531]: 2025-10-07 13:32:28.304203423 +0000 UTC m=+0.040315287 container create 26f9ce973f514632f6fe0c2a2d66714d0d373936aa1882b1f808c1f269bd35af (image=quay.io/ceph/ceph:v18, name=focused_hodgkin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 09:32:28 np0005473739 systemd[1]: Started libpod-conmon-26f9ce973f514632f6fe0c2a2d66714d0d373936aa1882b1f808c1f269bd35af.scope.
Oct  7 09:32:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a82fca9424315d6f49448ee2dd0c0b9be519c998b8ad9001620d0c7a4a9860/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01a82fca9424315d6f49448ee2dd0c0b9be519c998b8ad9001620d0c7a4a9860/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:28 np0005473739 podman[93531]: 2025-10-07 13:32:28.28720344 +0000 UTC m=+0.023315324 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:28 np0005473739 podman[93531]: 2025-10-07 13:32:28.403611593 +0000 UTC m=+0.139723477 container init 26f9ce973f514632f6fe0c2a2d66714d0d373936aa1882b1f808c1f269bd35af (image=quay.io/ceph/ceph:v18, name=focused_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:28 np0005473739 podman[93531]: 2025-10-07 13:32:28.411551836 +0000 UTC m=+0.147663710 container start 26f9ce973f514632f6fe0c2a2d66714d0d373936aa1882b1f808c1f269bd35af (image=quay.io/ceph/ceph:v18, name=focused_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:28 np0005473739 podman[93531]: 2025-10-07 13:32:28.475605246 +0000 UTC m=+0.211717140 container attach 26f9ce973f514632f6fe0c2a2d66714d0d373936aa1882b1f808c1f269bd35af (image=quay.io/ceph/ceph:v18, name=focused_hodgkin, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:32:28 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 17 pg[3.0( empty local-lis/les=0/0 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:32:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v41: 3 pgs: 1 creating+peering, 1 active+clean, 1 unknown; 449 KiB data, 853 MiB used, 39 GiB / 40 GiB avail
Oct  7 09:32:28 np0005473739 podman[93589]: 2025-10-07 13:32:28.600200478 +0000 UTC m=+0.044892714 container create 0388f77f4d11869c64ff4d4e9655c7d83f4ae77348a91bfec3fdb1d78b39fbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sanderson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 09:32:28 np0005473739 systemd[1]: Started libpod-conmon-0388f77f4d11869c64ff4d4e9655c7d83f4ae77348a91bfec3fdb1d78b39fbf2.scope.
Oct  7 09:32:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:28 np0005473739 podman[93589]: 2025-10-07 13:32:28.575880228 +0000 UTC m=+0.020572474 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:28 np0005473739 podman[93589]: 2025-10-07 13:32:28.672011806 +0000 UTC m=+0.116704052 container init 0388f77f4d11869c64ff4d4e9655c7d83f4ae77348a91bfec3fdb1d78b39fbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 09:32:28 np0005473739 podman[93589]: 2025-10-07 13:32:28.677636859 +0000 UTC m=+0.122329085 container start 0388f77f4d11869c64ff4d4e9655c7d83f4ae77348a91bfec3fdb1d78b39fbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 09:32:28 np0005473739 kind_sanderson[93605]: 167 167
Oct  7 09:32:28 np0005473739 podman[93589]: 2025-10-07 13:32:28.681115648 +0000 UTC m=+0.125807894 container attach 0388f77f4d11869c64ff4d4e9655c7d83f4ae77348a91bfec3fdb1d78b39fbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sanderson, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:28 np0005473739 systemd[1]: libpod-0388f77f4d11869c64ff4d4e9655c7d83f4ae77348a91bfec3fdb1d78b39fbf2.scope: Deactivated successfully.
Oct  7 09:32:28 np0005473739 conmon[93605]: conmon 0388f77f4d11869c64ff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0388f77f4d11869c64ff4d4e9655c7d83f4ae77348a91bfec3fdb1d78b39fbf2.scope/container/memory.events
Oct  7 09:32:28 np0005473739 podman[93589]: 2025-10-07 13:32:28.683387085 +0000 UTC m=+0.128079311 container died 0388f77f4d11869c64ff4d4e9655c7d83f4ae77348a91bfec3fdb1d78b39fbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sanderson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 09:32:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e08f79c64d248d7fbdf0c97bd7bccd77d144976b874c630031573bc3d2309e94-merged.mount: Deactivated successfully.
Oct  7 09:32:28 np0005473739 podman[93589]: 2025-10-07 13:32:28.724122552 +0000 UTC m=+0.168814788 container remove 0388f77f4d11869c64ff4d4e9655c7d83f4ae77348a91bfec3fdb1d78b39fbf2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_sanderson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:28 np0005473739 systemd[1]: libpod-conmon-0388f77f4d11869c64ff4d4e9655c7d83f4ae77348a91bfec3fdb1d78b39fbf2.scope: Deactivated successfully.
Oct  7 09:32:28 np0005473739 podman[93648]: 2025-10-07 13:32:28.854697146 +0000 UTC m=+0.035816953 container create 5f9492886fa3d9b6a4e3ac973516de1dde3592877d40b604a277293510a1004d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e17 do_prune osdmap full prune enabled
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3490171700' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "volumes", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: osd.2 [v2:192.168.122.100:6810/3895130875,v1:192.168.122.100:6811/3895130875] boot
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd.0", "name": "osd_memory_target"}]: dispatch
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd.1", "name": "osd_memory_target"}]: dispatch
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd.2", "name": "osd_memory_target"}]: dispatch
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: Adjusting osd_memory_target on compute-0 to 43641k
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: Unable to set osd_memory_target on compute-0 to 44688588: error parsing value: Value '44688588' is below minimum 939524096
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e18 e18: 3 total, 3 up, 3 in
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e18: 3 total, 3 up, 3 in
Oct  7 09:32:28 np0005473739 systemd[1]: Started libpod-conmon-5f9492886fa3d9b6a4e3ac973516de1dde3592877d40b604a277293510a1004d.scope.
Oct  7 09:32:28 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 18 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=0/0 les/c/f=0/0/0 sis=17) [1] r=0 lpr=17 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:32:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 18 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=0/0 les/c/f=0/0/0 sis=17) [2] r=0 lpr=17 pi=[16,17)/0 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:32:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/689abd51ed06af6af13a875970723fd2486017ddd0e06480dd20c16260ec6f21/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/689abd51ed06af6af13a875970723fd2486017ddd0e06480dd20c16260ec6f21/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/689abd51ed06af6af13a875970723fd2486017ddd0e06480dd20c16260ec6f21/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/689abd51ed06af6af13a875970723fd2486017ddd0e06480dd20c16260ec6f21/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/689abd51ed06af6af13a875970723fd2486017ddd0e06480dd20c16260ec6f21/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:28 np0005473739 podman[93648]: 2025-10-07 13:32:28.83834703 +0000 UTC m=+0.019466867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:28 np0005473739 podman[93648]: 2025-10-07 13:32:28.935116223 +0000 UTC m=+0.116236060 container init 5f9492886fa3d9b6a4e3ac973516de1dde3592877d40b604a277293510a1004d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lehmann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:28 np0005473739 podman[93648]: 2025-10-07 13:32:28.943414294 +0000 UTC m=+0.124534101 container start 5f9492886fa3d9b6a4e3ac973516de1dde3592877d40b604a277293510a1004d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lehmann, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 09:32:28 np0005473739 podman[93648]: 2025-10-07 13:32:28.946644706 +0000 UTC m=+0.127764513 container attach 5f9492886fa3d9b6a4e3ac973516de1dde3592877d40b604a277293510a1004d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lehmann, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  7 09:32:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/759936796' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e18 do_prune osdmap full prune enabled
Oct  7 09:32:29 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/759936796' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/759936796' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e19 e19: 3 total, 3 up, 3 in
Oct  7 09:32:29 np0005473739 focused_hodgkin[93571]: pool 'backups' created
Oct  7 09:32:29 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e19: 3 total, 3 up, 3 in
Oct  7 09:32:29 np0005473739 systemd[1]: libpod-26f9ce973f514632f6fe0c2a2d66714d0d373936aa1882b1f808c1f269bd35af.scope: Deactivated successfully.
Oct  7 09:32:29 np0005473739 podman[93531]: 2025-10-07 13:32:29.92536371 +0000 UTC m=+1.661475584 container died 26f9ce973f514632f6fe0c2a2d66714d0d373936aa1882b1f808c1f269bd35af (image=quay.io/ceph/ceph:v18, name=focused_hodgkin, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 09:32:29 np0005473739 dreamy_lehmann[93665]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:32:29 np0005473739 dreamy_lehmann[93665]: --> relative data size: 1.0
Oct  7 09:32:29 np0005473739 dreamy_lehmann[93665]: --> All data devices are unavailable
Oct  7 09:32:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-01a82fca9424315d6f49448ee2dd0c0b9be519c998b8ad9001620d0c7a4a9860-merged.mount: Deactivated successfully.
Oct  7 09:32:29 np0005473739 podman[93531]: 2025-10-07 13:32:29.985375607 +0000 UTC m=+1.721487471 container remove 26f9ce973f514632f6fe0c2a2d66714d0d373936aa1882b1f808c1f269bd35af (image=quay.io/ceph/ceph:v18, name=focused_hodgkin, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 09:32:29 np0005473739 systemd[1]: libpod-5f9492886fa3d9b6a4e3ac973516de1dde3592877d40b604a277293510a1004d.scope: Deactivated successfully.
Oct  7 09:32:29 np0005473739 podman[93648]: 2025-10-07 13:32:29.993105475 +0000 UTC m=+1.174225282 container died 5f9492886fa3d9b6a4e3ac973516de1dde3592877d40b604a277293510a1004d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lehmann, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:29 np0005473739 systemd[1]: libpod-conmon-26f9ce973f514632f6fe0c2a2d66714d0d373936aa1882b1f808c1f269bd35af.scope: Deactivated successfully.
Oct  7 09:32:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-689abd51ed06af6af13a875970723fd2486017ddd0e06480dd20c16260ec6f21-merged.mount: Deactivated successfully.
Oct  7 09:32:30 np0005473739 podman[93648]: 2025-10-07 13:32:30.048702349 +0000 UTC m=+1.229822156 container remove 5f9492886fa3d9b6a4e3ac973516de1dde3592877d40b604a277293510a1004d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct  7 09:32:30 np0005473739 systemd[1]: libpod-conmon-5f9492886fa3d9b6a4e3ac973516de1dde3592877d40b604a277293510a1004d.scope: Deactivated successfully.
Oct  7 09:32:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e19 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:32:30 np0005473739 python3[93797]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create images  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:30 np0005473739 podman[93849]: 2025-10-07 13:32:30.365624437 +0000 UTC m=+0.034965881 container create 1e4aecf67dd5141a5d7877cf9751239dd4096bc55d625399f2227e31215b6d06 (image=quay.io/ceph/ceph:v18, name=infallible_sanderson, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:30 np0005473739 systemd[1]: Started libpod-conmon-1e4aecf67dd5141a5d7877cf9751239dd4096bc55d625399f2227e31215b6d06.scope.
Oct  7 09:32:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc8a96b63d5eb9e2731d747fcf90c190fe2ac743addaf7fbaf3e73544dd82dc/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecc8a96b63d5eb9e2731d747fcf90c190fe2ac743addaf7fbaf3e73544dd82dc/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:30 np0005473739 podman[93849]: 2025-10-07 13:32:30.437992159 +0000 UTC m=+0.107333623 container init 1e4aecf67dd5141a5d7877cf9751239dd4096bc55d625399f2227e31215b6d06 (image=quay.io/ceph/ceph:v18, name=infallible_sanderson, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:30 np0005473739 podman[93849]: 2025-10-07 13:32:30.444068404 +0000 UTC m=+0.113409848 container start 1e4aecf67dd5141a5d7877cf9751239dd4096bc55d625399f2227e31215b6d06 (image=quay.io/ceph/ceph:v18, name=infallible_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 09:32:30 np0005473739 podman[93849]: 2025-10-07 13:32:30.351022565 +0000 UTC m=+0.020364029 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:30 np0005473739 podman[93849]: 2025-10-07 13:32:30.447544872 +0000 UTC m=+0.116886316 container attach 1e4aecf67dd5141a5d7877cf9751239dd4096bc55d625399f2227e31215b6d06 (image=quay.io/ceph/ceph:v18, name=infallible_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v44: 4 pgs: 1 unknown, 2 creating+peering, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct  7 09:32:30 np0005473739 podman[93906]: 2025-10-07 13:32:30.564582932 +0000 UTC m=+0.036447959 container create 2b934576af7008af338d4cbe87dcd2c4f2ebea9bbf8ae60a7490bb8cf7c30623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldstine, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:30 np0005473739 systemd[1]: Started libpod-conmon-2b934576af7008af338d4cbe87dcd2c4f2ebea9bbf8ae60a7490bb8cf7c30623.scope.
Oct  7 09:32:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:30 np0005473739 podman[93906]: 2025-10-07 13:32:30.630796547 +0000 UTC m=+0.102661584 container init 2b934576af7008af338d4cbe87dcd2c4f2ebea9bbf8ae60a7490bb8cf7c30623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldstine, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 09:32:30 np0005473739 podman[93906]: 2025-10-07 13:32:30.635695282 +0000 UTC m=+0.107560299 container start 2b934576af7008af338d4cbe87dcd2c4f2ebea9bbf8ae60a7490bb8cf7c30623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldstine, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  7 09:32:30 np0005473739 kind_goldstine[93923]: 167 167
Oct  7 09:32:30 np0005473739 systemd[1]: libpod-2b934576af7008af338d4cbe87dcd2c4f2ebea9bbf8ae60a7490bb8cf7c30623.scope: Deactivated successfully.
Oct  7 09:32:30 np0005473739 podman[93906]: 2025-10-07 13:32:30.639977501 +0000 UTC m=+0.111842538 container attach 2b934576af7008af338d4cbe87dcd2c4f2ebea9bbf8ae60a7490bb8cf7c30623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldstine, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:30 np0005473739 podman[93906]: 2025-10-07 13:32:30.640194616 +0000 UTC m=+0.112059643 container died 2b934576af7008af338d4cbe87dcd2c4f2ebea9bbf8ae60a7490bb8cf7c30623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldstine, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:30 np0005473739 podman[93906]: 2025-10-07 13:32:30.549800955 +0000 UTC m=+0.021666002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-448f80dd68d437afa48caf8a23234ab20893436871676dc4149b1857e4a1b20b-merged.mount: Deactivated successfully.
Oct  7 09:32:30 np0005473739 podman[93906]: 2025-10-07 13:32:30.675215758 +0000 UTC m=+0.147080785 container remove 2b934576af7008af338d4cbe87dcd2c4f2ebea9bbf8ae60a7490bb8cf7c30623 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_goldstine, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:30 np0005473739 systemd[1]: libpod-conmon-2b934576af7008af338d4cbe87dcd2c4f2ebea9bbf8ae60a7490bb8cf7c30623.scope: Deactivated successfully.
Oct  7 09:32:30 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 19 pg[4.0( empty local-lis/les=0/0 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:32:30 np0005473739 podman[93966]: 2025-10-07 13:32:30.834251425 +0000 UTC m=+0.054082107 container create 33d180a2f305ffd069404e84e95b5d7564cbed3c7d84974cbf480c23e624ec5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 09:32:30 np0005473739 systemd[1]: Started libpod-conmon-33d180a2f305ffd069404e84e95b5d7564cbed3c7d84974cbf480c23e624ec5d.scope.
Oct  7 09:32:30 np0005473739 podman[93966]: 2025-10-07 13:32:30.80374517 +0000 UTC m=+0.023575902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5541225b31289e5fdd0dc36814422aaf60ef2decf36bd42b866f32c8ced10f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e19 do_prune osdmap full prune enabled
Oct  7 09:32:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5541225b31289e5fdd0dc36814422aaf60ef2decf36bd42b866f32c8ced10f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5541225b31289e5fdd0dc36814422aaf60ef2decf36bd42b866f32c8ced10f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5541225b31289e5fdd0dc36814422aaf60ef2decf36bd42b866f32c8ced10f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e20 e20: 3 total, 3 up, 3 in
Oct  7 09:32:30 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e20: 3 total, 3 up, 3 in
Oct  7 09:32:30 np0005473739 podman[93966]: 2025-10-07 13:32:30.930145057 +0000 UTC m=+0.149975799 container init 33d180a2f305ffd069404e84e95b5d7564cbed3c7d84974cbf480c23e624ec5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct  7 09:32:30 np0005473739 podman[93966]: 2025-10-07 13:32:30.937708529 +0000 UTC m=+0.157539171 container start 33d180a2f305ffd069404e84e95b5d7564cbed3c7d84974cbf480c23e624ec5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:30 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/759936796' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "backups", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:30 np0005473739 podman[93966]: 2025-10-07 13:32:30.941014674 +0000 UTC m=+0.160845396 container attach 33d180a2f305ffd069404e84e95b5d7564cbed3c7d84974cbf480c23e624ec5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:30 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 20 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=0/0 les/c/f=0/0/0 sis=19) [0] r=0 lpr=19 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:32:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  7 09:32:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438050017' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:31 np0005473739 sad_kalam[93982]: {
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:    "0": [
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:        {
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "devices": [
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "/dev/loop3"
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            ],
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_name": "ceph_lv0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_size": "21470642176",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "name": "ceph_lv0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "tags": {
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.cluster_name": "ceph",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.crush_device_class": "",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.encrypted": "0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.osd_id": "0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.type": "block",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.vdo": "0"
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            },
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "type": "block",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "vg_name": "ceph_vg0"
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:        }
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:    ],
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:    "1": [
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:        {
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "devices": [
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "/dev/loop4"
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            ],
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_name": "ceph_lv1",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_size": "21470642176",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "name": "ceph_lv1",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "tags": {
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.cluster_name": "ceph",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.crush_device_class": "",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.encrypted": "0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.osd_id": "1",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.type": "block",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.vdo": "0"
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            },
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "type": "block",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "vg_name": "ceph_vg1"
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:        }
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:    ],
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:    "2": [
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:        {
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "devices": [
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "/dev/loop5"
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            ],
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_name": "ceph_lv2",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_size": "21470642176",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "name": "ceph_lv2",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "tags": {
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.cluster_name": "ceph",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.crush_device_class": "",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.encrypted": "0",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.osd_id": "2",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.type": "block",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:                "ceph.vdo": "0"
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            },
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "type": "block",
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:            "vg_name": "ceph_vg2"
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:        }
Oct  7 09:32:31 np0005473739 sad_kalam[93982]:    ]
Oct  7 09:32:31 np0005473739 sad_kalam[93982]: }
Oct  7 09:32:31 np0005473739 systemd[1]: libpod-33d180a2f305ffd069404e84e95b5d7564cbed3c7d84974cbf480c23e624ec5d.scope: Deactivated successfully.
Oct  7 09:32:31 np0005473739 podman[93994]: 2025-10-07 13:32:31.755108126 +0000 UTC m=+0.022281668 container died 33d180a2f305ffd069404e84e95b5d7564cbed3c7d84974cbf480c23e624ec5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b5541225b31289e5fdd0dc36814422aaf60ef2decf36bd42b866f32c8ced10f8-merged.mount: Deactivated successfully.
Oct  7 09:32:31 np0005473739 podman[93994]: 2025-10-07 13:32:31.803415986 +0000 UTC m=+0.070589518 container remove 33d180a2f305ffd069404e84e95b5d7564cbed3c7d84974cbf480c23e624ec5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_kalam, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:32:31 np0005473739 systemd[1]: libpod-conmon-33d180a2f305ffd069404e84e95b5d7564cbed3c7d84974cbf480c23e624ec5d.scope: Deactivated successfully.
Oct  7 09:32:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e20 do_prune osdmap full prune enabled
Oct  7 09:32:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1438050017' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e21 e21: 3 total, 3 up, 3 in
Oct  7 09:32:31 np0005473739 infallible_sanderson[93866]: pool 'images' created
Oct  7 09:32:31 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e21: 3 total, 3 up, 3 in
Oct  7 09:32:31 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/1438050017' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:31 np0005473739 systemd[1]: libpod-1e4aecf67dd5141a5d7877cf9751239dd4096bc55d625399f2227e31215b6d06.scope: Deactivated successfully.
Oct  7 09:32:31 np0005473739 podman[93849]: 2025-10-07 13:32:31.965349038 +0000 UTC m=+1.634690482 container died 1e4aecf67dd5141a5d7877cf9751239dd4096bc55d625399f2227e31215b6d06 (image=quay.io/ceph/ceph:v18, name=infallible_sanderson, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 09:32:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ecc8a96b63d5eb9e2731d747fcf90c190fe2ac743addaf7fbaf3e73544dd82dc-merged.mount: Deactivated successfully.
Oct  7 09:32:32 np0005473739 podman[93849]: 2025-10-07 13:32:32.013864643 +0000 UTC m=+1.683206097 container remove 1e4aecf67dd5141a5d7877cf9751239dd4096bc55d625399f2227e31215b6d06 (image=quay.io/ceph/ceph:v18, name=infallible_sanderson, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 09:32:32 np0005473739 systemd[1]: libpod-conmon-1e4aecf67dd5141a5d7877cf9751239dd4096bc55d625399f2227e31215b6d06.scope: Deactivated successfully.
Oct  7 09:32:32 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 21 pg[5.0( empty local-lis/les=0/0 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:32:32 np0005473739 python3[94147]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.meta  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:32 np0005473739 podman[94180]: 2025-10-07 13:32:32.41129082 +0000 UTC m=+0.040795250 container create 5095a35e40858e259fef321efbe580e9c2719c0cf2975877b4d678201fca4cae (image=quay.io/ceph/ceph:v18, name=inspiring_williamson, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:32 np0005473739 systemd[1]: Started libpod-conmon-5095a35e40858e259fef321efbe580e9c2719c0cf2975877b4d678201fca4cae.scope.
Oct  7 09:32:32 np0005473739 podman[94200]: 2025-10-07 13:32:32.452849157 +0000 UTC m=+0.039955088 container create 0cec43d0ec25749807f351ce03a0a07ec6b0d82108d68cdc130879bbc7e0e890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 09:32:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:32 np0005473739 systemd[1]: Started libpod-conmon-0cec43d0ec25749807f351ce03a0a07ec6b0d82108d68cdc130879bbc7e0e890.scope.
Oct  7 09:32:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc8be8e5df00d15ba65eabf7d4a212d23e869e5dde8fa5d187ae35aec5aba8b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1cc8be8e5df00d15ba65eabf7d4a212d23e869e5dde8fa5d187ae35aec5aba8b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:32 np0005473739 podman[94180]: 2025-10-07 13:32:32.39522496 +0000 UTC m=+0.024729410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:32 np0005473739 podman[94180]: 2025-10-07 13:32:32.493352858 +0000 UTC m=+0.122857308 container init 5095a35e40858e259fef321efbe580e9c2719c0cf2975877b4d678201fca4cae (image=quay.io/ceph/ceph:v18, name=inspiring_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 09:32:32 np0005473739 podman[94200]: 2025-10-07 13:32:32.496034596 +0000 UTC m=+0.083140547 container init 0cec43d0ec25749807f351ce03a0a07ec6b0d82108d68cdc130879bbc7e0e890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wozniak, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:32 np0005473739 podman[94180]: 2025-10-07 13:32:32.499397083 +0000 UTC m=+0.128901513 container start 5095a35e40858e259fef321efbe580e9c2719c0cf2975877b4d678201fca4cae (image=quay.io/ceph/ceph:v18, name=inspiring_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:32 np0005473739 podman[94200]: 2025-10-07 13:32:32.501041544 +0000 UTC m=+0.088147475 container start 0cec43d0ec25749807f351ce03a0a07ec6b0d82108d68cdc130879bbc7e0e890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wozniak, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 09:32:32 np0005473739 podman[94180]: 2025-10-07 13:32:32.502292626 +0000 UTC m=+0.131797056 container attach 5095a35e40858e259fef321efbe580e9c2719c0cf2975877b4d678201fca4cae (image=quay.io/ceph/ceph:v18, name=inspiring_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct  7 09:32:32 np0005473739 optimistic_wozniak[94221]: 167 167
Oct  7 09:32:32 np0005473739 systemd[1]: libpod-0cec43d0ec25749807f351ce03a0a07ec6b0d82108d68cdc130879bbc7e0e890.scope: Deactivated successfully.
Oct  7 09:32:32 np0005473739 podman[94200]: 2025-10-07 13:32:32.505550379 +0000 UTC m=+0.092656330 container attach 0cec43d0ec25749807f351ce03a0a07ec6b0d82108d68cdc130879bbc7e0e890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wozniak, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:32:32 np0005473739 podman[94200]: 2025-10-07 13:32:32.505801255 +0000 UTC m=+0.092907176 container died 0cec43d0ec25749807f351ce03a0a07ec6b0d82108d68cdc130879bbc7e0e890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:32:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v47: 5 pgs: 2 unknown, 2 creating+peering, 1 active+clean; 449 KiB data, 880 MiB used, 59 GiB / 60 GiB avail
Oct  7 09:32:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-fa156fead1738d6ed0be092cf9d7ae1e06a4d50ebb527bdcac7688168be0201f-merged.mount: Deactivated successfully.
Oct  7 09:32:32 np0005473739 podman[94200]: 2025-10-07 13:32:32.434646774 +0000 UTC m=+0.021752725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:32 np0005473739 podman[94200]: 2025-10-07 13:32:32.540994491 +0000 UTC m=+0.128100422 container remove 0cec43d0ec25749807f351ce03a0a07ec6b0d82108d68cdc130879bbc7e0e890 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 09:32:32 np0005473739 systemd[1]: libpod-conmon-0cec43d0ec25749807f351ce03a0a07ec6b0d82108d68cdc130879bbc7e0e890.scope: Deactivated successfully.
Oct  7 09:32:32 np0005473739 podman[94245]: 2025-10-07 13:32:32.675089144 +0000 UTC m=+0.040038449 container create bab511a49ad3f9a7f4c268c7fca7602a31b414cfa951fdae6bff6b5f9c7066a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:32 np0005473739 systemd[1]: Started libpod-conmon-bab511a49ad3f9a7f4c268c7fca7602a31b414cfa951fdae6bff6b5f9c7066a4.scope.
Oct  7 09:32:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa0558823394f5e67e23da6c65a1ad767387be95cc69495cd3f5c6007134d9d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa0558823394f5e67e23da6c65a1ad767387be95cc69495cd3f5c6007134d9d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa0558823394f5e67e23da6c65a1ad767387be95cc69495cd3f5c6007134d9d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bfa0558823394f5e67e23da6c65a1ad767387be95cc69495cd3f5c6007134d9d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:32 np0005473739 podman[94245]: 2025-10-07 13:32:32.655867725 +0000 UTC m=+0.020817050 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:32 np0005473739 podman[94245]: 2025-10-07 13:32:32.758511658 +0000 UTC m=+0.123460983 container init bab511a49ad3f9a7f4c268c7fca7602a31b414cfa951fdae6bff6b5f9c7066a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:32 np0005473739 podman[94245]: 2025-10-07 13:32:32.764039369 +0000 UTC m=+0.128988674 container start bab511a49ad3f9a7f4c268c7fca7602a31b414cfa951fdae6bff6b5f9c7066a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_visvesvaraya, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 09:32:32 np0005473739 podman[94245]: 2025-10-07 13:32:32.767827225 +0000 UTC m=+0.132776550 container attach bab511a49ad3f9a7f4c268c7fca7602a31b414cfa951fdae6bff6b5f9c7066a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_visvesvaraya, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:32 np0005473739 ceph-mon[74295]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e21 do_prune osdmap full prune enabled
Oct  7 09:32:32 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/1438050017' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "images", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e22 e22: 3 total, 3 up, 3 in
Oct  7 09:32:32 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e22: 3 total, 3 up, 3 in
Oct  7 09:32:32 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 22 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=0/0 les/c/f=0/0/0 sis=21) [2] r=0 lpr=21 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:32:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  7 09:32:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3562431097' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]: {
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "osd_id": 2,
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "type": "bluestore"
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:    },
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "osd_id": 1,
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "type": "bluestore"
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:    },
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "osd_id": 0,
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:        "type": "bluestore"
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]:    }
Oct  7 09:32:33 np0005473739 mystifying_visvesvaraya[94262]: }
Oct  7 09:32:33 np0005473739 systemd[1]: libpod-bab511a49ad3f9a7f4c268c7fca7602a31b414cfa951fdae6bff6b5f9c7066a4.scope: Deactivated successfully.
Oct  7 09:32:33 np0005473739 systemd[1]: libpod-bab511a49ad3f9a7f4c268c7fca7602a31b414cfa951fdae6bff6b5f9c7066a4.scope: Consumed 1.069s CPU time.
Oct  7 09:32:33 np0005473739 podman[94245]: 2025-10-07 13:32:33.830862325 +0000 UTC m=+1.195811680 container died bab511a49ad3f9a7f4c268c7fca7602a31b414cfa951fdae6bff6b5f9c7066a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:32:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-bfa0558823394f5e67e23da6c65a1ad767387be95cc69495cd3f5c6007134d9d-merged.mount: Deactivated successfully.
Oct  7 09:32:33 np0005473739 podman[94245]: 2025-10-07 13:32:33.897632244 +0000 UTC m=+1.262581549 container remove bab511a49ad3f9a7f4c268c7fca7602a31b414cfa951fdae6bff6b5f9c7066a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:32:33 np0005473739 systemd[1]: libpod-conmon-bab511a49ad3f9a7f4c268c7fca7602a31b414cfa951fdae6bff6b5f9c7066a4.scope: Deactivated successfully.
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3562431097' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e22 do_prune osdmap full prune enabled
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3562431097' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e23 e23: 3 total, 3 up, 3 in
Oct  7 09:32:33 np0005473739 inspiring_williamson[94216]: pool 'cephfs.cephfs.meta' created
Oct  7 09:32:33 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e23: 3 total, 3 up, 3 in
Oct  7 09:32:34 np0005473739 systemd[1]: libpod-5095a35e40858e259fef321efbe580e9c2719c0cf2975877b4d678201fca4cae.scope: Deactivated successfully.
Oct  7 09:32:34 np0005473739 conmon[94216]: conmon 5095a35e40858e259fef <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5095a35e40858e259fef321efbe580e9c2719c0cf2975877b4d678201fca4cae.scope/container/memory.events
Oct  7 09:32:34 np0005473739 podman[94180]: 2025-10-07 13:32:34.00313589 +0000 UTC m=+1.632640320 container died 5095a35e40858e259fef321efbe580e9c2719c0cf2975877b4d678201fca4cae (image=quay.io/ceph/ceph:v18, name=inspiring_williamson, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:32:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1cc8be8e5df00d15ba65eabf7d4a212d23e869e5dde8fa5d187ae35aec5aba8b-merged.mount: Deactivated successfully.
Oct  7 09:32:34 np0005473739 podman[94180]: 2025-10-07 13:32:34.044408941 +0000 UTC m=+1.673913361 container remove 5095a35e40858e259fef321efbe580e9c2719c0cf2975877b4d678201fca4cae (image=quay.io/ceph/ceph:v18, name=inspiring_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 09:32:34 np0005473739 systemd[1]: libpod-conmon-5095a35e40858e259fef321efbe580e9c2719c0cf2975877b4d678201fca4cae.scope: Deactivated successfully.
Oct  7 09:32:34 np0005473739 python3[94483]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool create cephfs.cephfs.data  replicated_rule --autoscale-mode on _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:34 np0005473739 podman[94522]: 2025-10-07 13:32:34.397494928 +0000 UTC m=+0.041028245 container create fa9a68fce89ce0ef7e31254e519c9041d4b1668f99cd47280531d1a2d6591f8e (image=quay.io/ceph/ceph:v18, name=jolly_chandrasekhar, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:34 np0005473739 systemd[1]: Started libpod-conmon-fa9a68fce89ce0ef7e31254e519c9041d4b1668f99cd47280531d1a2d6591f8e.scope.
Oct  7 09:32:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e22f611c2de52b3112a6e78f5e388d510a7c304a1f88e045220154884d03ec8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e22f611c2de52b3112a6e78f5e388d510a7c304a1f88e045220154884d03ec8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:34 np0005473739 podman[94522]: 2025-10-07 13:32:34.468595868 +0000 UTC m=+0.112129185 container init fa9a68fce89ce0ef7e31254e519c9041d4b1668f99cd47280531d1a2d6591f8e (image=quay.io/ceph/ceph:v18, name=jolly_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 09:32:34 np0005473739 podman[94522]: 2025-10-07 13:32:34.474455057 +0000 UTC m=+0.117988374 container start fa9a68fce89ce0ef7e31254e519c9041d4b1668f99cd47280531d1a2d6591f8e (image=quay.io/ceph/ceph:v18, name=jolly_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:34 np0005473739 podman[94522]: 2025-10-07 13:32:34.377783747 +0000 UTC m=+0.021317084 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:34 np0005473739 podman[94522]: 2025-10-07 13:32:34.47766266 +0000 UTC m=+0.121195977 container attach fa9a68fce89ce0ef7e31254e519c9041d4b1668f99cd47280531d1a2d6591f8e (image=quay.io/ceph/ceph:v18, name=jolly_chandrasekhar, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v50: 6 pgs: 4 active+clean, 2 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:34 np0005473739 podman[94615]: 2025-10-07 13:32:34.700637965 +0000 UTC m=+0.046438503 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 09:32:34 np0005473739 podman[94615]: 2025-10-07 13:32:34.800346093 +0000 UTC m=+0.146146641 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 09:32:35 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 23 pg[6.0( empty local-lis/les=0/0 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"} v 0) v1
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3342872269' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3562431097' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e23 do_prune osdmap full prune enabled
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3342872269' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e24 e24: 3 total, 3 up, 3 in
Oct  7 09:32:35 np0005473739 jolly_chandrasekhar[94549]: pool 'cephfs.cephfs.data' created
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e24: 3 total, 3 up, 3 in
Oct  7 09:32:35 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 24 pg[6.0( empty local-lis/les=23/24 n=0 ec=23/23 lis/c=0/0 les/c/f=0/0/0 sis=23) [0] r=0 lpr=23 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:32:35 np0005473739 systemd[1]: libpod-fa9a68fce89ce0ef7e31254e519c9041d4b1668f99cd47280531d1a2d6591f8e.scope: Deactivated successfully.
Oct  7 09:32:35 np0005473739 podman[94522]: 2025-10-07 13:32:35.088119019 +0000 UTC m=+0.731652336 container died fa9a68fce89ce0ef7e31254e519c9041d4b1668f99cd47280531d1a2d6591f8e (image=quay.io/ceph/ceph:v18, name=jolly_chandrasekhar, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 09:32:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3e22f611c2de52b3112a6e78f5e388d510a7c304a1f88e045220154884d03ec8-merged.mount: Deactivated successfully.
Oct  7 09:32:35 np0005473739 podman[94522]: 2025-10-07 13:32:35.129469991 +0000 UTC m=+0.773003298 container remove fa9a68fce89ce0ef7e31254e519c9041d4b1668f99cd47280531d1a2d6591f8e (image=quay.io/ceph/ceph:v18, name=jolly_chandrasekhar, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:35 np0005473739 systemd[1]: libpod-conmon-fa9a68fce89ce0ef7e31254e519c9041d4b1668f99cd47280531d1a2d6591f8e.scope: Deactivated successfully.
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e24 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b6b34336-6b78-4f90-9324-8b872a21a557 does not exist
Oct  7 09:32:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 6f44a7b8-48e6-4ca2-bc61-42ae0f051495 does not exist
Oct  7 09:32:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a8c91a15-7698-4c91-8083-f2f38d8d430d does not exist
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:32:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:32:35 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 24 pg[7.0( empty local-lis/les=0/0 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:32:35 np0005473739 python3[94827]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable vms rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:35 np0005473739 podman[94893]: 2025-10-07 13:32:35.507016872 +0000 UTC m=+0.042492694 container create cdec24313c8f552937cc7e517906d125bd8712cee44783866ec8a01259f48492 (image=quay.io/ceph/ceph:v18, name=vigilant_boyd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 09:32:35 np0005473739 systemd[1]: Started libpod-conmon-cdec24313c8f552937cc7e517906d125bd8712cee44783866ec8a01259f48492.scope.
Oct  7 09:32:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33bf67471057aa5de26edebe27dd35b6e71a2f47731d593cb40e955537cf73c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33bf67471057aa5de26edebe27dd35b6e71a2f47731d593cb40e955537cf73c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:35 np0005473739 podman[94893]: 2025-10-07 13:32:35.578824749 +0000 UTC m=+0.114300571 container init cdec24313c8f552937cc7e517906d125bd8712cee44783866ec8a01259f48492 (image=quay.io/ceph/ceph:v18, name=vigilant_boyd, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 09:32:35 np0005473739 podman[94893]: 2025-10-07 13:32:35.583968631 +0000 UTC m=+0.119444453 container start cdec24313c8f552937cc7e517906d125bd8712cee44783866ec8a01259f48492 (image=quay.io/ceph/ceph:v18, name=vigilant_boyd, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507)
Oct  7 09:32:35 np0005473739 podman[94893]: 2025-10-07 13:32:35.491452726 +0000 UTC m=+0.026928578 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:35 np0005473739 podman[94893]: 2025-10-07 13:32:35.587060549 +0000 UTC m=+0.122536391 container attach cdec24313c8f552937cc7e517906d125bd8712cee44783866ec8a01259f48492 (image=quay.io/ceph/ceph:v18, name=vigilant_boyd, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 09:32:35 np0005473739 podman[94952]: 2025-10-07 13:32:35.80867245 +0000 UTC m=+0.058260403 container create 21d4dacab29624960dacaa455265946aab6d78e248fde4e104a7102aeae4f1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:35 np0005473739 systemd[1]: Started libpod-conmon-21d4dacab29624960dacaa455265946aab6d78e248fde4e104a7102aeae4f1db.scope.
Oct  7 09:32:35 np0005473739 podman[94952]: 2025-10-07 13:32:35.784671189 +0000 UTC m=+0.034259162 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:35 np0005473739 podman[94952]: 2025-10-07 13:32:35.896806174 +0000 UTC m=+0.146394137 container init 21d4dacab29624960dacaa455265946aab6d78e248fde4e104a7102aeae4f1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:35 np0005473739 podman[94952]: 2025-10-07 13:32:35.908292896 +0000 UTC m=+0.157880849 container start 21d4dacab29624960dacaa455265946aab6d78e248fde4e104a7102aeae4f1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 09:32:35 np0005473739 podman[94952]: 2025-10-07 13:32:35.911310673 +0000 UTC m=+0.160898626 container attach 21d4dacab29624960dacaa455265946aab6d78e248fde4e104a7102aeae4f1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:35 np0005473739 awesome_dirac[94966]: 167 167
Oct  7 09:32:35 np0005473739 systemd[1]: libpod-21d4dacab29624960dacaa455265946aab6d78e248fde4e104a7102aeae4f1db.scope: Deactivated successfully.
Oct  7 09:32:35 np0005473739 podman[94952]: 2025-10-07 13:32:35.91357083 +0000 UTC m=+0.163158823 container died 21d4dacab29624960dacaa455265946aab6d78e248fde4e104a7102aeae4f1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2dcf160d2b8652d6558627b55f0a631dd06b310408091f917ea98f95296670d3-merged.mount: Deactivated successfully.
Oct  7 09:32:35 np0005473739 podman[94952]: 2025-10-07 13:32:35.968481528 +0000 UTC m=+0.218069511 container remove 21d4dacab29624960dacaa455265946aab6d78e248fde4e104a7102aeae4f1db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dirac, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:32:35 np0005473739 systemd[1]: libpod-conmon-21d4dacab29624960dacaa455265946aab6d78e248fde4e104a7102aeae4f1db.scope: Deactivated successfully.
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3342872269' entity='client.admin' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]: dispatch
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3342872269' entity='client.admin' cmd='[{"prefix": "osd pool create", "pool": "cephfs.cephfs.data", "erasure_code_profile": "replicated_rule", "autoscale_mode": "on"}]': finished
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e24 do_prune osdmap full prune enabled
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e25 e25: 3 total, 3 up, 3 in
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e25: 3 total, 3 up, 3 in
Oct  7 09:32:36 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 25 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=0/0 les/c/f=0/0/0 sis=24) [1] r=0 lpr=24 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"} v 0) v1
Oct  7 09:32:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1284457936' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  7 09:32:36 np0005473739 podman[95010]: 2025-10-07 13:32:36.180364852 +0000 UTC m=+0.055550845 container create 972bba3f87ae3fc36beda86f1fafed110360f22d843706abf76c335fe60258a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:32:36 np0005473739 systemd[1]: Started libpod-conmon-972bba3f87ae3fc36beda86f1fafed110360f22d843706abf76c335fe60258a1.scope.
Oct  7 09:32:36 np0005473739 podman[95010]: 2025-10-07 13:32:36.156384682 +0000 UTC m=+0.031570725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f7ad153c589987080b5074675f56b4e1eed1b5160ec9d68563eb5f16b27c5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f7ad153c589987080b5074675f56b4e1eed1b5160ec9d68563eb5f16b27c5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f7ad153c589987080b5074675f56b4e1eed1b5160ec9d68563eb5f16b27c5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f7ad153c589987080b5074675f56b4e1eed1b5160ec9d68563eb5f16b27c5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f0f7ad153c589987080b5074675f56b4e1eed1b5160ec9d68563eb5f16b27c5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:36 np0005473739 podman[95010]: 2025-10-07 13:32:36.273361799 +0000 UTC m=+0.148547812 container init 972bba3f87ae3fc36beda86f1fafed110360f22d843706abf76c335fe60258a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:36 np0005473739 podman[95010]: 2025-10-07 13:32:36.287330104 +0000 UTC m=+0.162516097 container start 972bba3f87ae3fc36beda86f1fafed110360f22d843706abf76c335fe60258a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:36 np0005473739 podman[95010]: 2025-10-07 13:32:36.292782583 +0000 UTC m=+0.167968596 container attach 972bba3f87ae3fc36beda86f1fafed110360f22d843706abf76c335fe60258a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v53: 7 pgs: 6 active+clean, 1 unknown; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e25 do_prune osdmap full prune enabled
Oct  7 09:32:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1284457936' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  7 09:32:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e26 e26: 3 total, 3 up, 3 in
Oct  7 09:32:37 np0005473739 vigilant_boyd[94910]: enabled application 'rbd' on pool 'vms'
Oct  7 09:32:37 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e26: 3 total, 3 up, 3 in
Oct  7 09:32:37 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/1284457936' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]: dispatch
Oct  7 09:32:37 np0005473739 systemd[1]: libpod-cdec24313c8f552937cc7e517906d125bd8712cee44783866ec8a01259f48492.scope: Deactivated successfully.
Oct  7 09:32:37 np0005473739 podman[94893]: 2025-10-07 13:32:37.105213864 +0000 UTC m=+1.640689716 container died cdec24313c8f552937cc7e517906d125bd8712cee44783866ec8a01259f48492 (image=quay.io/ceph/ceph:v18, name=vigilant_boyd, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 09:32:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a33bf67471057aa5de26edebe27dd35b6e71a2f47731d593cb40e955537cf73c-merged.mount: Deactivated successfully.
Oct  7 09:32:37 np0005473739 podman[94893]: 2025-10-07 13:32:37.155104384 +0000 UTC m=+1.690580206 container remove cdec24313c8f552937cc7e517906d125bd8712cee44783866ec8a01259f48492 (image=quay.io/ceph/ceph:v18, name=vigilant_boyd, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:37 np0005473739 systemd[1]: libpod-conmon-cdec24313c8f552937cc7e517906d125bd8712cee44783866ec8a01259f48492.scope: Deactivated successfully.
Oct  7 09:32:37 np0005473739 musing_ardinghelli[95026]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:32:37 np0005473739 musing_ardinghelli[95026]: --> relative data size: 1.0
Oct  7 09:32:37 np0005473739 musing_ardinghelli[95026]: --> All data devices are unavailable
Oct  7 09:32:37 np0005473739 podman[95010]: 2025-10-07 13:32:37.296483513 +0000 UTC m=+1.171669506 container died 972bba3f87ae3fc36beda86f1fafed110360f22d843706abf76c335fe60258a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 09:32:37 np0005473739 systemd[1]: libpod-972bba3f87ae3fc36beda86f1fafed110360f22d843706abf76c335fe60258a1.scope: Deactivated successfully.
Oct  7 09:32:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f0f7ad153c589987080b5074675f56b4e1eed1b5160ec9d68563eb5f16b27c5e-merged.mount: Deactivated successfully.
Oct  7 09:32:37 np0005473739 podman[95010]: 2025-10-07 13:32:37.388209297 +0000 UTC m=+1.263395290 container remove 972bba3f87ae3fc36beda86f1fafed110360f22d843706abf76c335fe60258a1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_ardinghelli, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True)
Oct  7 09:32:37 np0005473739 systemd[1]: libpod-conmon-972bba3f87ae3fc36beda86f1fafed110360f22d843706abf76c335fe60258a1.scope: Deactivated successfully.
Oct  7 09:32:37 np0005473739 python3[95094]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable volumes rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:37 np0005473739 podman[95125]: 2025-10-07 13:32:37.509830053 +0000 UTC m=+0.040076730 container create 5aa0846837c2134c1fc539c539e74b1b4f62108f4f1790c06476b117f5e09c85 (image=quay.io/ceph/ceph:v18, name=intelligent_solomon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:32:37 np0005473739 systemd[1]: Started libpod-conmon-5aa0846837c2134c1fc539c539e74b1b4f62108f4f1790c06476b117f5e09c85.scope.
Oct  7 09:32:37 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f08e11aa3009f7b12bb2c59e19260eefc8f15fa9c75e0b7608ea4a2cafde1a3/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3f08e11aa3009f7b12bb2c59e19260eefc8f15fa9c75e0b7608ea4a2cafde1a3/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:37 np0005473739 podman[95125]: 2025-10-07 13:32:37.493054446 +0000 UTC m=+0.023301173 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:37 np0005473739 podman[95125]: 2025-10-07 13:32:37.591498403 +0000 UTC m=+0.121745090 container init 5aa0846837c2134c1fc539c539e74b1b4f62108f4f1790c06476b117f5e09c85 (image=quay.io/ceph/ceph:v18, name=intelligent_solomon, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:37 np0005473739 podman[95125]: 2025-10-07 13:32:37.600875721 +0000 UTC m=+0.131122408 container start 5aa0846837c2134c1fc539c539e74b1b4f62108f4f1790c06476b117f5e09c85 (image=quay.io/ceph/ceph:v18, name=intelligent_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:37 np0005473739 podman[95125]: 2025-10-07 13:32:37.605209281 +0000 UTC m=+0.135455958 container attach 5aa0846837c2134c1fc539c539e74b1b4f62108f4f1790c06476b117f5e09c85 (image=quay.io/ceph/ceph:v18, name=intelligent_solomon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 09:32:38 np0005473739 podman[95283]: 2025-10-07 13:32:38.008451456 +0000 UTC m=+0.056897779 container create ce38b9d7e2d1747be76d177593c4f8dc4706e096863f336a4adcc5f1074000a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:38 np0005473739 systemd[1]: Started libpod-conmon-ce38b9d7e2d1747be76d177593c4f8dc4706e096863f336a4adcc5f1074000a9.scope.
Oct  7 09:32:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:38 np0005473739 podman[95283]: 2025-10-07 13:32:38.076885808 +0000 UTC m=+0.125332161 container init ce38b9d7e2d1747be76d177593c4f8dc4706e096863f336a4adcc5f1074000a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct  7 09:32:38 np0005473739 podman[95283]: 2025-10-07 13:32:38.081946747 +0000 UTC m=+0.130393080 container start ce38b9d7e2d1747be76d177593c4f8dc4706e096863f336a4adcc5f1074000a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 09:32:38 np0005473739 systemd[1]: libpod-ce38b9d7e2d1747be76d177593c4f8dc4706e096863f336a4adcc5f1074000a9.scope: Deactivated successfully.
Oct  7 09:32:38 np0005473739 trusting_mendeleev[95300]: 167 167
Oct  7 09:32:38 np0005473739 conmon[95300]: conmon ce38b9d7e2d1747be76d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ce38b9d7e2d1747be76d177593c4f8dc4706e096863f336a4adcc5f1074000a9.scope/container/memory.events
Oct  7 09:32:38 np0005473739 podman[95283]: 2025-10-07 13:32:38.086756539 +0000 UTC m=+0.135202892 container attach ce38b9d7e2d1747be76d177593c4f8dc4706e096863f336a4adcc5f1074000a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:38 np0005473739 podman[95283]: 2025-10-07 13:32:38.087134369 +0000 UTC m=+0.135580702 container died ce38b9d7e2d1747be76d177593c4f8dc4706e096863f336a4adcc5f1074000a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:38 np0005473739 podman[95283]: 2025-10-07 13:32:37.99368455 +0000 UTC m=+0.042130913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:38 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/1284457936' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "vms", "app": "rbd"}]': finished
Oct  7 09:32:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ad5e0b27b0a49cfb571713f9bc4509700eb08fb759c52095ef9bf95d5f179e95-merged.mount: Deactivated successfully.
Oct  7 09:32:38 np0005473739 podman[95283]: 2025-10-07 13:32:38.12804914 +0000 UTC m=+0.176495473 container remove ce38b9d7e2d1747be76d177593c4f8dc4706e096863f336a4adcc5f1074000a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendeleev, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 09:32:38 np0005473739 systemd[1]: libpod-conmon-ce38b9d7e2d1747be76d177593c4f8dc4706e096863f336a4adcc5f1074000a9.scope: Deactivated successfully.
Oct  7 09:32:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"} v 0) v1
Oct  7 09:32:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220837043' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  7 09:32:38 np0005473739 podman[95326]: 2025-10-07 13:32:38.260982164 +0000 UTC m=+0.034805647 container create 84ba061dfad350d03eef590be94f7c83a25f9db96028b39eb0c20429aa530b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:32:38 np0005473739 systemd[1]: Started libpod-conmon-84ba061dfad350d03eef590be94f7c83a25f9db96028b39eb0c20429aa530b91.scope.
Oct  7 09:32:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b2b3695ed9ee23336bac5a7835171b16c1dd3b5abb326e02341aeff339e4b7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b2b3695ed9ee23336bac5a7835171b16c1dd3b5abb326e02341aeff339e4b7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b2b3695ed9ee23336bac5a7835171b16c1dd3b5abb326e02341aeff339e4b7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50b2b3695ed9ee23336bac5a7835171b16c1dd3b5abb326e02341aeff339e4b7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:38 np0005473739 podman[95326]: 2025-10-07 13:32:38.329801296 +0000 UTC m=+0.103624809 container init 84ba061dfad350d03eef590be94f7c83a25f9db96028b39eb0c20429aa530b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:32:38 np0005473739 podman[95326]: 2025-10-07 13:32:38.336764613 +0000 UTC m=+0.110588106 container start 84ba061dfad350d03eef590be94f7c83a25f9db96028b39eb0c20429aa530b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:38 np0005473739 podman[95326]: 2025-10-07 13:32:38.3398035 +0000 UTC m=+0.113626993 container attach 84ba061dfad350d03eef590be94f7c83a25f9db96028b39eb0c20429aa530b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:32:38 np0005473739 podman[95326]: 2025-10-07 13:32:38.246229138 +0000 UTC m=+0.020052661 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v55: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]: {
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:    "0": [
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:        {
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "devices": [
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "/dev/loop3"
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            ],
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_name": "ceph_lv0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_size": "21470642176",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "name": "ceph_lv0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "tags": {
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.cluster_name": "ceph",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.crush_device_class": "",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.encrypted": "0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.osd_id": "0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.type": "block",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.vdo": "0"
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            },
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "type": "block",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "vg_name": "ceph_vg0"
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:        }
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:    ],
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:    "1": [
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:        {
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "devices": [
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "/dev/loop4"
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            ],
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_name": "ceph_lv1",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_size": "21470642176",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "name": "ceph_lv1",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "tags": {
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.cluster_name": "ceph",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.crush_device_class": "",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.encrypted": "0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.osd_id": "1",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.type": "block",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.vdo": "0"
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            },
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "type": "block",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "vg_name": "ceph_vg1"
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:        }
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:    ],
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:    "2": [
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:        {
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "devices": [
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "/dev/loop5"
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            ],
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_name": "ceph_lv2",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_size": "21470642176",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "name": "ceph_lv2",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "tags": {
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.cluster_name": "ceph",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.crush_device_class": "",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.encrypted": "0",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.osd_id": "2",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.type": "block",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:                "ceph.vdo": "0"
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            },
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "type": "block",
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:            "vg_name": "ceph_vg2"
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:        }
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]:    ]
Oct  7 09:32:39 np0005473739 recursing_perlman[95343]: }
Oct  7 09:32:39 np0005473739 ceph-mon[74295]: log_channel(cluster) log [WRN] : Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e26 do_prune osdmap full prune enabled
Oct  7 09:32:39 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3220837043' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]: dispatch
Oct  7 09:32:39 np0005473739 systemd[1]: libpod-84ba061dfad350d03eef590be94f7c83a25f9db96028b39eb0c20429aa530b91.scope: Deactivated successfully.
Oct  7 09:32:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3220837043' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  7 09:32:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e27 e27: 3 total, 3 up, 3 in
Oct  7 09:32:39 np0005473739 podman[95326]: 2025-10-07 13:32:39.112001167 +0000 UTC m=+0.885824680 container died 84ba061dfad350d03eef590be94f7c83a25f9db96028b39eb0c20429aa530b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:39 np0005473739 intelligent_solomon[95168]: enabled application 'rbd' on pool 'volumes'
Oct  7 09:32:39 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e27: 3 total, 3 up, 3 in
Oct  7 09:32:39 np0005473739 systemd[1]: libpod-5aa0846837c2134c1fc539c539e74b1b4f62108f4f1790c06476b117f5e09c85.scope: Deactivated successfully.
Oct  7 09:32:39 np0005473739 podman[95125]: 2025-10-07 13:32:39.133447893 +0000 UTC m=+1.663694570 container died 5aa0846837c2134c1fc539c539e74b1b4f62108f4f1790c06476b117f5e09c85 (image=quay.io/ceph/ceph:v18, name=intelligent_solomon, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 09:32:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-50b2b3695ed9ee23336bac5a7835171b16c1dd3b5abb326e02341aeff339e4b7-merged.mount: Deactivated successfully.
Oct  7 09:32:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3f08e11aa3009f7b12bb2c59e19260eefc8f15fa9c75e0b7608ea4a2cafde1a3-merged.mount: Deactivated successfully.
Oct  7 09:32:39 np0005473739 podman[95326]: 2025-10-07 13:32:39.181020994 +0000 UTC m=+0.954844487 container remove 84ba061dfad350d03eef590be94f7c83a25f9db96028b39eb0c20429aa530b91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_perlman, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 09:32:39 np0005473739 systemd[1]: libpod-conmon-84ba061dfad350d03eef590be94f7c83a25f9db96028b39eb0c20429aa530b91.scope: Deactivated successfully.
Oct  7 09:32:39 np0005473739 podman[95125]: 2025-10-07 13:32:39.188123185 +0000 UTC m=+1.718369862 container remove 5aa0846837c2134c1fc539c539e74b1b4f62108f4f1790c06476b117f5e09c85 (image=quay.io/ceph/ceph:v18, name=intelligent_solomon, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:32:39 np0005473739 systemd[1]: libpod-conmon-5aa0846837c2134c1fc539c539e74b1b4f62108f4f1790c06476b117f5e09c85.scope: Deactivated successfully.
Oct  7 09:32:39 np0005473739 python3[95465]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable backups rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:39 np0005473739 podman[95503]: 2025-10-07 13:32:39.548144099 +0000 UTC m=+0.050031095 container create 9c2ce5a972b3b8f1d28652836362f2d0694fff2195049992c23604d821783a06 (image=quay.io/ceph/ceph:v18, name=quirky_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:39 np0005473739 systemd[1]: Started libpod-conmon-9c2ce5a972b3b8f1d28652836362f2d0694fff2195049992c23604d821783a06.scope.
Oct  7 09:32:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/254e427969f0bb4bd15b5d3bb74478789bbe8b9db7f04d0eb99a434278e86d38/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/254e427969f0bb4bd15b5d3bb74478789bbe8b9db7f04d0eb99a434278e86d38/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:39 np0005473739 podman[95503]: 2025-10-07 13:32:39.523255305 +0000 UTC m=+0.025142321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:39 np0005473739 podman[95503]: 2025-10-07 13:32:39.621189518 +0000 UTC m=+0.123076534 container init 9c2ce5a972b3b8f1d28652836362f2d0694fff2195049992c23604d821783a06 (image=quay.io/ceph/ceph:v18, name=quirky_booth, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:39 np0005473739 podman[95503]: 2025-10-07 13:32:39.632052385 +0000 UTC m=+0.133939381 container start 9c2ce5a972b3b8f1d28652836362f2d0694fff2195049992c23604d821783a06 (image=quay.io/ceph/ceph:v18, name=quirky_booth, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:32:39 np0005473739 podman[95503]: 2025-10-07 13:32:39.635804261 +0000 UTC m=+0.137691267 container attach 9c2ce5a972b3b8f1d28652836362f2d0694fff2195049992c23604d821783a06 (image=quay.io/ceph/ceph:v18, name=quirky_booth, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:39 np0005473739 podman[95562]: 2025-10-07 13:32:39.762232739 +0000 UTC m=+0.037251440 container create 92265adfb626267a13247e160af0e981e9f284052a14a23476846ff8fd0b064c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:32:39 np0005473739 systemd[1]: Started libpod-conmon-92265adfb626267a13247e160af0e981e9f284052a14a23476846ff8fd0b064c.scope.
Oct  7 09:32:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:39 np0005473739 podman[95562]: 2025-10-07 13:32:39.831886352 +0000 UTC m=+0.106905073 container init 92265adfb626267a13247e160af0e981e9f284052a14a23476846ff8fd0b064c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:39 np0005473739 podman[95562]: 2025-10-07 13:32:39.83771528 +0000 UTC m=+0.112733981 container start 92265adfb626267a13247e160af0e981e9f284052a14a23476846ff8fd0b064c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:32:39 np0005473739 quirky_goodall[95579]: 167 167
Oct  7 09:32:39 np0005473739 podman[95562]: 2025-10-07 13:32:39.746670113 +0000 UTC m=+0.021688824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:39 np0005473739 systemd[1]: libpod-92265adfb626267a13247e160af0e981e9f284052a14a23476846ff8fd0b064c.scope: Deactivated successfully.
Oct  7 09:32:39 np0005473739 podman[95562]: 2025-10-07 13:32:39.842590364 +0000 UTC m=+0.117609085 container attach 92265adfb626267a13247e160af0e981e9f284052a14a23476846ff8fd0b064c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:32:39 np0005473739 podman[95562]: 2025-10-07 13:32:39.842921263 +0000 UTC m=+0.117939964 container died 92265adfb626267a13247e160af0e981e9f284052a14a23476846ff8fd0b064c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-25e71e69a1dcde025d521b358e1dffb6dfae22368ffc5400c0900d206cafab46-merged.mount: Deactivated successfully.
Oct  7 09:32:39 np0005473739 podman[95562]: 2025-10-07 13:32:39.880650442 +0000 UTC m=+0.155669143 container remove 92265adfb626267a13247e160af0e981e9f284052a14a23476846ff8fd0b064c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_goodall, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 09:32:39 np0005473739 systemd[1]: libpod-conmon-92265adfb626267a13247e160af0e981e9f284052a14a23476846ff8fd0b064c.scope: Deactivated successfully.
Oct  7 09:32:40 np0005473739 podman[95623]: 2025-10-07 13:32:40.063185599 +0000 UTC m=+0.065043517 container create 3fc714b9876ba1e12d09ec507030160b565f935021aafc663da50bc7b7087427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ptolemy, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:40 np0005473739 podman[95623]: 2025-10-07 13:32:40.017877497 +0000 UTC m=+0.019735415 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:32:40 np0005473739 systemd[1]: Started libpod-conmon-3fc714b9876ba1e12d09ec507030160b565f935021aafc663da50bc7b7087427.scope.
Oct  7 09:32:40 np0005473739 ceph-mon[74295]: Health check update: 5 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:40 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3220837043' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "volumes", "app": "rbd"}]': finished
Oct  7 09:32:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e27 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:32:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"} v 0) v1
Oct  7 09:32:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2082981471' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  7 09:32:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd310ddc83812b794cebc8f961e547b7ccb0a71bf170b8a04088d8689b4c0abf/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd310ddc83812b794cebc8f961e547b7ccb0a71bf170b8a04088d8689b4c0abf/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd310ddc83812b794cebc8f961e547b7ccb0a71bf170b8a04088d8689b4c0abf/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dd310ddc83812b794cebc8f961e547b7ccb0a71bf170b8a04088d8689b4c0abf/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:40 np0005473739 podman[95623]: 2025-10-07 13:32:40.187168225 +0000 UTC m=+0.189026143 container init 3fc714b9876ba1e12d09ec507030160b565f935021aafc663da50bc7b7087427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ptolemy, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:40 np0005473739 podman[95623]: 2025-10-07 13:32:40.195059526 +0000 UTC m=+0.196917424 container start 3fc714b9876ba1e12d09ec507030160b565f935021aafc663da50bc7b7087427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ptolemy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  7 09:32:40 np0005473739 podman[95623]: 2025-10-07 13:32:40.201705046 +0000 UTC m=+0.203562974 container attach 3fc714b9876ba1e12d09ec507030160b565f935021aafc663da50bc7b7087427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ptolemy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 09:32:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v57: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]: {
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "osd_id": 2,
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "type": "bluestore"
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:    },
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "osd_id": 1,
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "type": "bluestore"
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:    },
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "osd_id": 0,
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:        "type": "bluestore"
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]:    }
Oct  7 09:32:41 np0005473739 reverent_ptolemy[95640]: }
Oct  7 09:32:41 np0005473739 systemd[1]: libpod-3fc714b9876ba1e12d09ec507030160b565f935021aafc663da50bc7b7087427.scope: Deactivated successfully.
Oct  7 09:32:41 np0005473739 podman[95623]: 2025-10-07 13:32:41.139049475 +0000 UTC m=+1.140907373 container died 3fc714b9876ba1e12d09ec507030160b565f935021aafc663da50bc7b7087427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ptolemy, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct  7 09:32:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e27 do_prune osdmap full prune enabled
Oct  7 09:32:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v58: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:44 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2082981471' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]: dispatch
Oct  7 09:32:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v59: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2082981471' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  7 09:32:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e28 e28: 3 total, 3 up, 3 in
Oct  7 09:32:44 np0005473739 quirky_booth[95525]: enabled application 'rbd' on pool 'backups'
Oct  7 09:32:44 np0005473739 systemd[1]: libpod-9c2ce5a972b3b8f1d28652836362f2d0694fff2195049992c23604d821783a06.scope: Deactivated successfully.
Oct  7 09:32:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay-dd310ddc83812b794cebc8f961e547b7ccb0a71bf170b8a04088d8689b4c0abf-merged.mount: Deactivated successfully.
Oct  7 09:32:44 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e28: 3 total, 3 up, 3 in
Oct  7 09:32:45 np0005473739 ceph-mon[74295]: log_channel(cluster) log [WRN] : Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e28 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:32:45 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2082981471' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "backups", "app": "rbd"}]': finished
Oct  7 09:32:45 np0005473739 ceph-mon[74295]: Health check update: 4 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:46 np0005473739 podman[95623]: 2025-10-07 13:32:46.010258933 +0000 UTC m=+6.012116841 container remove 3fc714b9876ba1e12d09ec507030160b565f935021aafc663da50bc7b7087427 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_ptolemy, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:32:46 np0005473739 systemd[1]: libpod-conmon-3fc714b9876ba1e12d09ec507030160b565f935021aafc663da50bc7b7087427.scope: Deactivated successfully.
Oct  7 09:32:46 np0005473739 podman[95503]: 2025-10-07 13:32:46.086787121 +0000 UTC m=+6.588674117 container died 9c2ce5a972b3b8f1d28652836362f2d0694fff2195049992c23604d821783a06 (image=quay.io/ceph/ceph:v18, name=quirky_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:32:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:32:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v61: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:32:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-254e427969f0bb4bd15b5d3bb74478789bbe8b9db7f04d0eb99a434278e86d38-merged.mount: Deactivated successfully.
Oct  7 09:32:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:47 np0005473739 podman[95689]: 2025-10-07 13:32:47.4927783 +0000 UTC m=+2.622928527 container remove 9c2ce5a972b3b8f1d28652836362f2d0694fff2195049992c23604d821783a06 (image=quay.io/ceph/ceph:v18, name=quirky_booth, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:47 np0005473739 systemd[1]: libpod-conmon-9c2ce5a972b3b8f1d28652836362f2d0694fff2195049992c23604d821783a06.scope: Deactivated successfully.
Oct  7 09:32:47 np0005473739 python3[95781]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable images rbd _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:47 np0005473739 podman[95782]: 2025-10-07 13:32:47.846214317 +0000 UTC m=+0.020013571 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:48 np0005473739 podman[95782]: 2025-10-07 13:32:48.188384687 +0000 UTC m=+0.362183931 container create 3bc8c41a4db9a2669b0b2b14ae1e4d00a91b04bd438d1524c4dd9803cedf4528 (image=quay.io/ceph/ceph:v18, name=friendly_rhodes, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:32:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v62: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:48 np0005473739 systemd[1]: Started libpod-conmon-3bc8c41a4db9a2669b0b2b14ae1e4d00a91b04bd438d1524c4dd9803cedf4528.scope.
Oct  7 09:32:48 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9842a01ccb13a023f8c7a380349a893977a502353b3417fdc8dad099890ee545/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9842a01ccb13a023f8c7a380349a893977a502353b3417fdc8dad099890ee545/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:48 np0005473739 podman[95782]: 2025-10-07 13:32:48.98597021 +0000 UTC m=+1.159769444 container init 3bc8c41a4db9a2669b0b2b14ae1e4d00a91b04bd438d1524c4dd9803cedf4528 (image=quay.io/ceph/ceph:v18, name=friendly_rhodes, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:32:48 np0005473739 podman[95782]: 2025-10-07 13:32:48.992721491 +0000 UTC m=+1.166520725 container start 3bc8c41a4db9a2669b0b2b14ae1e4d00a91b04bd438d1524c4dd9803cedf4528 (image=quay.io/ceph/ceph:v18, name=friendly_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:32:49 np0005473739 podman[95782]: 2025-10-07 13:32:49.28573125 +0000 UTC m=+1.459530504 container attach 3bc8c41a4db9a2669b0b2b14ae1e4d00a91b04bd438d1524c4dd9803cedf4528 (image=quay.io/ceph/ceph:v18, name=friendly_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 09:32:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "images", "app": "rbd"} v 0) v1
Oct  7 09:32:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4256413606' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  7 09:32:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e28 do_prune osdmap full prune enabled
Oct  7 09:32:50 np0005473739 ceph-mon[74295]: log_channel(cluster) log [WRN] : Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/4256413606' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  7 09:32:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e29 e29: 3 total, 3 up, 3 in
Oct  7 09:32:50 np0005473739 friendly_rhodes[95797]: enabled application 'rbd' on pool 'images'
Oct  7 09:32:50 np0005473739 systemd[1]: libpod-3bc8c41a4db9a2669b0b2b14ae1e4d00a91b04bd438d1524c4dd9803cedf4528.scope: Deactivated successfully.
Oct  7 09:32:50 np0005473739 podman[95782]: 2025-10-07 13:32:50.414957854 +0000 UTC m=+2.588757088 container died 3bc8c41a4db9a2669b0b2b14ae1e4d00a91b04bd438d1524c4dd9803cedf4528 (image=quay.io/ceph/ceph:v18, name=friendly_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 09:32:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v63: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:50 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e29: 3 total, 3 up, 3 in
Oct  7 09:32:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9842a01ccb13a023f8c7a380349a893977a502353b3417fdc8dad099890ee545-merged.mount: Deactivated successfully.
Oct  7 09:32:51 np0005473739 podman[95782]: 2025-10-07 13:32:51.010083923 +0000 UTC m=+3.183883157 container remove 3bc8c41a4db9a2669b0b2b14ae1e4d00a91b04bd438d1524c4dd9803cedf4528 (image=quay.io/ceph/ceph:v18, name=friendly_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:51 np0005473739 systemd[1]: libpod-conmon-3bc8c41a4db9a2669b0b2b14ae1e4d00a91b04bd438d1524c4dd9803cedf4528.scope: Deactivated successfully.
Oct  7 09:32:51 np0005473739 python3[95860]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.meta cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:51 np0005473739 podman[95861]: 2025-10-07 13:32:51.43421548 +0000 UTC m=+0.106534523 container create 866d2e77a6bf12eecbad4ff962f89793acdc5fcd8aabd5fc9feca05ccfa86bf5 (image=quay.io/ceph/ceph:v18, name=sad_albattani, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 09:32:51 np0005473739 podman[95861]: 2025-10-07 13:32:51.352457129 +0000 UTC m=+0.024776182 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:51 np0005473739 systemd[1]: Started libpod-conmon-866d2e77a6bf12eecbad4ff962f89793acdc5fcd8aabd5fc9feca05ccfa86bf5.scope.
Oct  7 09:32:51 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/4256413606' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]: dispatch
Oct  7 09:32:51 np0005473739 ceph-mon[74295]: Health check update: 3 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:51 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/4256413606' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "images", "app": "rbd"}]': finished
Oct  7 09:32:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5082f12a0a7ae6b8937abc7c31291c0d92d12bca4be8260c2a325308ae87dd7/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5082f12a0a7ae6b8937abc7c31291c0d92d12bca4be8260c2a325308ae87dd7/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:51 np0005473739 podman[95861]: 2025-10-07 13:32:51.706752448 +0000 UTC m=+0.379071471 container init 866d2e77a6bf12eecbad4ff962f89793acdc5fcd8aabd5fc9feca05ccfa86bf5 (image=quay.io/ceph/ceph:v18, name=sad_albattani, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 09:32:51 np0005473739 podman[95861]: 2025-10-07 13:32:51.714050534 +0000 UTC m=+0.386369557 container start 866d2e77a6bf12eecbad4ff962f89793acdc5fcd8aabd5fc9feca05ccfa86bf5 (image=quay.io/ceph/ceph:v18, name=sad_albattani, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:32:51 np0005473739 podman[95861]: 2025-10-07 13:32:51.756602956 +0000 UTC m=+0.428921979 container attach 866d2e77a6bf12eecbad4ff962f89793acdc5fcd8aabd5fc9feca05ccfa86bf5 (image=quay.io/ceph/ceph:v18, name=sad_albattani, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"} v 0) v1
Oct  7 09:32:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1717588714' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  7 09:32:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v65: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:32:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e29 do_prune osdmap full prune enabled
Oct  7 09:32:52 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/1717588714' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]: dispatch
Oct  7 09:32:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1717588714' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  7 09:32:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e30 e30: 3 total, 3 up, 3 in
Oct  7 09:32:52 np0005473739 sad_albattani[95876]: enabled application 'cephfs' on pool 'cephfs.cephfs.meta'
Oct  7 09:32:52 np0005473739 systemd[1]: libpod-866d2e77a6bf12eecbad4ff962f89793acdc5fcd8aabd5fc9feca05ccfa86bf5.scope: Deactivated successfully.
Oct  7 09:32:52 np0005473739 podman[95861]: 2025-10-07 13:32:52.81195651 +0000 UTC m=+1.484275533 container died 866d2e77a6bf12eecbad4ff962f89793acdc5fcd8aabd5fc9feca05ccfa86bf5 (image=quay.io/ceph/ceph:v18, name=sad_albattani, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 09:32:52 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e30: 3 total, 3 up, 3 in
Oct  7 09:32:52 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f5082f12a0a7ae6b8937abc7c31291c0d92d12bca4be8260c2a325308ae87dd7-merged.mount: Deactivated successfully.
Oct  7 09:32:53 np0005473739 podman[95861]: 2025-10-07 13:32:53.078651449 +0000 UTC m=+1.750970502 container remove 866d2e77a6bf12eecbad4ff962f89793acdc5fcd8aabd5fc9feca05ccfa86bf5 (image=quay.io/ceph/ceph:v18, name=sad_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:32:53 np0005473739 systemd[1]: libpod-conmon-866d2e77a6bf12eecbad4ff962f89793acdc5fcd8aabd5fc9feca05ccfa86bf5.scope: Deactivated successfully.
Oct  7 09:32:53 np0005473739 python3[95939]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd pool application enable cephfs.cephfs.data cephfs _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:53 np0005473739 podman[95940]: 2025-10-07 13:32:53.504556821 +0000 UTC m=+0.061958188 container create 3bdf9c1afa157a9b2e3f327a988abf569a9bbf8410fe59a13baf5e730f98751a (image=quay.io/ceph/ceph:v18, name=jovial_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 09:32:53 np0005473739 podman[95940]: 2025-10-07 13:32:53.463396972 +0000 UTC m=+0.020798359 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:53 np0005473739 systemd[1]: Started libpod-conmon-3bdf9c1afa157a9b2e3f327a988abf569a9bbf8410fe59a13baf5e730f98751a.scope.
Oct  7 09:32:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da46e95a083b9c1704b02ef105c24bf3e0eddbd6cd7e32eed2565d037fab1bd/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4da46e95a083b9c1704b02ef105c24bf3e0eddbd6cd7e32eed2565d037fab1bd/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:53 np0005473739 podman[95940]: 2025-10-07 13:32:53.644057422 +0000 UTC m=+0.201458799 container init 3bdf9c1afa157a9b2e3f327a988abf569a9bbf8410fe59a13baf5e730f98751a (image=quay.io/ceph/ceph:v18, name=jovial_chatterjee, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 09:32:53 np0005473739 podman[95940]: 2025-10-07 13:32:53.651129402 +0000 UTC m=+0.208530769 container start 3bdf9c1afa157a9b2e3f327a988abf569a9bbf8410fe59a13baf5e730f98751a (image=quay.io/ceph/ceph:v18, name=jovial_chatterjee, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:53 np0005473739 podman[95940]: 2025-10-07 13:32:53.718825995 +0000 UTC m=+0.276227362 container attach 3bdf9c1afa157a9b2e3f327a988abf569a9bbf8410fe59a13baf5e730f98751a (image=quay.io/ceph/ceph:v18, name=jovial_chatterjee, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:32:53 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/1717588714' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.meta", "app": "cephfs"}]': finished
Oct  7 09:32:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"} v 0) v1
Oct  7 09:32:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/647969554' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  7 09:32:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v67: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e30 do_prune osdmap full prune enabled
Oct  7 09:32:55 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/647969554' entity='client.admin' cmd=[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]: dispatch
Oct  7 09:32:55 np0005473739 ceph-mon[74295]: log_channel(cluster) log [WRN] : Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/647969554' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  7 09:32:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e31 e31: 3 total, 3 up, 3 in
Oct  7 09:32:55 np0005473739 jovial_chatterjee[95956]: enabled application 'cephfs' on pool 'cephfs.cephfs.data'
Oct  7 09:32:55 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e31: 3 total, 3 up, 3 in
Oct  7 09:32:55 np0005473739 systemd[1]: libpod-3bdf9c1afa157a9b2e3f327a988abf569a9bbf8410fe59a13baf5e730f98751a.scope: Deactivated successfully.
Oct  7 09:32:55 np0005473739 podman[95940]: 2025-10-07 13:32:55.221472975 +0000 UTC m=+1.778874432 container died 3bdf9c1afa157a9b2e3f327a988abf569a9bbf8410fe59a13baf5e730f98751a (image=quay.io/ceph/ceph:v18, name=jovial_chatterjee, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:55 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4da46e95a083b9c1704b02ef105c24bf3e0eddbd6cd7e32eed2565d037fab1bd-merged.mount: Deactivated successfully.
Oct  7 09:32:55 np0005473739 podman[95940]: 2025-10-07 13:32:55.564779333 +0000 UTC m=+2.122180700 container remove 3bdf9c1afa157a9b2e3f327a988abf569a9bbf8410fe59a13baf5e730f98751a (image=quay.io/ceph/ceph:v18, name=jovial_chatterjee, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 09:32:55 np0005473739 systemd[1]: libpod-conmon-3bdf9c1afa157a9b2e3f327a988abf569a9bbf8410fe59a13baf5e730f98751a.scope: Deactivated successfully.
Oct  7 09:32:56 np0005473739 ceph-mon[74295]: Health check update: 2 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:32:56 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/647969554' entity='client.admin' cmd='[{"prefix": "osd pool application enable", "pool": "cephfs.cephfs.data", "app": "cephfs"}]': finished
Oct  7 09:32:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v69: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:56 np0005473739 python3[96068]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_rgw.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:32:56 np0005473739 python3[96139]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759843976.284562-33218-182660478174148/source dest=/tmp/ceph_rgw.yml mode=0644 force=True follow=False _original_basename=ceph_rgw.yml.j2 checksum=0a1ea65aada399f80274d3cc2047646f2797712b backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:32:57 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  7 09:32:57 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  7 09:32:57 np0005473739 ceph-mon[74295]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  7 09:32:57 np0005473739 ceph-mon[74295]: Cluster is now healthy
Oct  7 09:32:57 np0005473739 python3[96241]: ansible-ansible.legacy.stat Invoked with path=/home/ceph-admin/assimilate_ceph.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:32:57 np0005473739 python3[96316]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759843977.2755933-33232-212731824833170/source dest=/home/ceph-admin/assimilate_ceph.conf owner=167 group=167 mode=0644 follow=False _original_basename=ceph_rgw.conf.j2 checksum=2fa3e1f8fb1888af3a631d7dfbfe46e86474a6d9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:32:58 np0005473739 python3[96366]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config assimilate-conf -i /home/assimilate_ceph.conf#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:58 np0005473739 podman[96367]: 2025-10-07 13:32:58.398000965 +0000 UTC m=+0.055667139 container create a66210b8d0943d43b1c387974a76cc8e3240e1eeef156f4cb29bef43233651df (image=quay.io/ceph/ceph:v18, name=elated_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:58 np0005473739 systemd[1]: Started libpod-conmon-a66210b8d0943d43b1c387974a76cc8e3240e1eeef156f4cb29bef43233651df.scope.
Oct  7 09:32:58 np0005473739 podman[96367]: 2025-10-07 13:32:58.364676136 +0000 UTC m=+0.022342330 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:58 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaee9fa97edf3c9879e6e096f173fa1f3ace9128cae71a2be3c7377287dbc502/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaee9fa97edf3c9879e6e096f173fa1f3ace9128cae71a2be3c7377287dbc502/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aaee9fa97edf3c9879e6e096f173fa1f3ace9128cae71a2be3c7377287dbc502/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v70: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:32:58 np0005473739 podman[96367]: 2025-10-07 13:32:58.519799545 +0000 UTC m=+0.177465719 container init a66210b8d0943d43b1c387974a76cc8e3240e1eeef156f4cb29bef43233651df (image=quay.io/ceph/ceph:v18, name=elated_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:32:58 np0005473739 podman[96367]: 2025-10-07 13:32:58.526727481 +0000 UTC m=+0.184393655 container start a66210b8d0943d43b1c387974a76cc8e3240e1eeef156f4cb29bef43233651df (image=quay.io/ceph/ceph:v18, name=elated_allen, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:32:58 np0005473739 podman[96367]: 2025-10-07 13:32:58.557524105 +0000 UTC m=+0.215190279 container attach a66210b8d0943d43b1c387974a76cc8e3240e1eeef156f4cb29bef43233651df (image=quay.io/ceph/ceph:v18, name=elated_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:32:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config assimilate-conf"} v 0) v1
Oct  7 09:32:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672834819' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  7 09:32:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/672834819' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  7 09:32:59 np0005473739 elated_allen[96383]: 
Oct  7 09:32:59 np0005473739 elated_allen[96383]: [global]
Oct  7 09:32:59 np0005473739 elated_allen[96383]: #011fsid = 82044f27-a8da-5b2a-a297-ff6afc620e1f
Oct  7 09:32:59 np0005473739 elated_allen[96383]: #011mon_host = 192.168.122.100
Oct  7 09:32:59 np0005473739 systemd[1]: libpod-a66210b8d0943d43b1c387974a76cc8e3240e1eeef156f4cb29bef43233651df.scope: Deactivated successfully.
Oct  7 09:32:59 np0005473739 podman[96367]: 2025-10-07 13:32:59.104892099 +0000 UTC m=+0.762558283 container died a66210b8d0943d43b1c387974a76cc8e3240e1eeef156f4cb29bef43233651df (image=quay.io/ceph/ceph:v18, name=elated_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:32:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-aaee9fa97edf3c9879e6e096f173fa1f3ace9128cae71a2be3c7377287dbc502-merged.mount: Deactivated successfully.
Oct  7 09:32:59 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/672834819' entity='client.admin' cmd=[{"prefix": "config assimilate-conf"}]: dispatch
Oct  7 09:32:59 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/672834819' entity='client.admin' cmd='[{"prefix": "config assimilate-conf"}]': finished
Oct  7 09:32:59 np0005473739 podman[96367]: 2025-10-07 13:32:59.34615288 +0000 UTC m=+1.003819054 container remove a66210b8d0943d43b1c387974a76cc8e3240e1eeef156f4cb29bef43233651df (image=quay.io/ceph/ceph:v18, name=elated_allen, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 09:32:59 np0005473739 systemd[1]: libpod-conmon-a66210b8d0943d43b1c387974a76cc8e3240e1eeef156f4cb29bef43233651df.scope: Deactivated successfully.
Oct  7 09:32:59 np0005473739 python3[96585]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config-key set ssl_option no_sslv2:sslv3:no_tlsv1:no_tlsv1_1#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:32:59 np0005473739 podman[96625]: 2025-10-07 13:32:59.722241283 +0000 UTC m=+0.038663505 container create 7d73a81ce7b71bfa410f21fa64f9be85042f342bc125c64334fafbe837e5ddad (image=quay.io/ceph/ceph:v18, name=recursing_ganguly, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:32:59 np0005473739 podman[96618]: 2025-10-07 13:32:59.73351059 +0000 UTC m=+0.064796261 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2)
Oct  7 09:32:59 np0005473739 systemd[1]: Started libpod-conmon-7d73a81ce7b71bfa410f21fa64f9be85042f342bc125c64334fafbe837e5ddad.scope.
Oct  7 09:32:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:32:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8a040a05248d3c4a243917ddb2a9a746571f97189935b060ce0d5890e0b34b/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8a040a05248d3c4a243917ddb2a9a746571f97189935b060ce0d5890e0b34b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b8a040a05248d3c4a243917ddb2a9a746571f97189935b060ce0d5890e0b34b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:32:59 np0005473739 podman[96625]: 2025-10-07 13:32:59.701563327 +0000 UTC m=+0.017985569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:32:59 np0005473739 podman[96625]: 2025-10-07 13:32:59.822178377 +0000 UTC m=+0.138600689 container init 7d73a81ce7b71bfa410f21fa64f9be85042f342bc125c64334fafbe837e5ddad (image=quay.io/ceph/ceph:v18, name=recursing_ganguly, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 09:32:59 np0005473739 podman[96618]: 2025-10-07 13:32:59.826271111 +0000 UTC m=+0.157556672 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 09:32:59 np0005473739 podman[96625]: 2025-10-07 13:32:59.82822403 +0000 UTC m=+0.144646252 container start 7d73a81ce7b71bfa410f21fa64f9be85042f342bc125c64334fafbe837e5ddad (image=quay.io/ceph/ceph:v18, name=recursing_ganguly, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:32:59 np0005473739 podman[96625]: 2025-10-07 13:32:59.832581761 +0000 UTC m=+0.149004073 container attach 7d73a81ce7b71bfa410f21fa64f9be85042f342bc125c64334fafbe837e5ddad (image=quay.io/ceph/ceph:v18, name=recursing_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e31 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=ssl_option}] v 0) v1
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/3047131648' entity='client.admin' 
Oct  7 09:33:00 np0005473739 recursing_ganguly[96653]: set ssl_option
Oct  7 09:33:00 np0005473739 systemd[1]: libpod-7d73a81ce7b71bfa410f21fa64f9be85042f342bc125c64334fafbe837e5ddad.scope: Deactivated successfully.
Oct  7 09:33:00 np0005473739 podman[96625]: 2025-10-07 13:33:00.488321944 +0000 UTC m=+0.804744166 container died 7d73a81ce7b71bfa410f21fa64f9be85042f342bc125c64334fafbe837e5ddad (image=quay.io/ceph/ceph:v18, name=recursing_ganguly, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2b8a040a05248d3c4a243917ddb2a9a746571f97189935b060ce0d5890e0b34b-merged.mount: Deactivated successfully.
Oct  7 09:33:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v71: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:00 np0005473739 podman[96625]: 2025-10-07 13:33:00.523792566 +0000 UTC m=+0.840214788 container remove 7d73a81ce7b71bfa410f21fa64f9be85042f342bc125c64334fafbe837e5ddad (image=quay.io/ceph/ceph:v18, name=recursing_ganguly, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 09:33:00 np0005473739 systemd[1]: libpod-conmon-7d73a81ce7b71bfa410f21fa64f9be85042f342bc125c64334fafbe837e5ddad.scope: Deactivated successfully.
Oct  7 09:33:00 np0005473739 python3[96936]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_rgw.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:00 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 428a0aae-de46-496b-b8ea-ff919e4c0004 does not exist
Oct  7 09:33:00 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bf040673-5b3f-427c-ac41-18178bfa0ccc does not exist
Oct  7 09:33:00 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8c22973b-6331-4aa0-aff6-3a47ae878182 does not exist
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:33:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:33:00 np0005473739 podman[96951]: 2025-10-07 13:33:00.897879489 +0000 UTC m=+0.039251960 container create 6039bb8aff0bbffc844fa467cfd37296c0f16602b5e3e406deb19bda265df53f (image=quay.io/ceph/ceph:v18, name=silly_williamson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:33:00 np0005473739 systemd[1]: Started libpod-conmon-6039bb8aff0bbffc844fa467cfd37296c0f16602b5e3e406deb19bda265df53f.scope.
Oct  7 09:33:00 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:00 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33db382f6e9ef79a137d2637f3eb8c52750a9c42066e96f211010d55e66ee1e/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:00 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33db382f6e9ef79a137d2637f3eb8c52750a9c42066e96f211010d55e66ee1e/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:00 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a33db382f6e9ef79a137d2637f3eb8c52750a9c42066e96f211010d55e66ee1e/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:00 np0005473739 podman[96951]: 2025-10-07 13:33:00.881498432 +0000 UTC m=+0.022870923 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:00 np0005473739 podman[96951]: 2025-10-07 13:33:00.982079522 +0000 UTC m=+0.123452023 container init 6039bb8aff0bbffc844fa467cfd37296c0f16602b5e3e406deb19bda265df53f (image=quay.io/ceph/ceph:v18, name=silly_williamson, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 09:33:00 np0005473739 podman[96951]: 2025-10-07 13:33:00.988939217 +0000 UTC m=+0.130311688 container start 6039bb8aff0bbffc844fa467cfd37296c0f16602b5e3e406deb19bda265df53f (image=quay.io/ceph/ceph:v18, name=silly_williamson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:00 np0005473739 podman[96951]: 2025-10-07 13:33:00.992060957 +0000 UTC m=+0.133433428 container attach 6039bb8aff0bbffc844fa467cfd37296c0f16602b5e3e406deb19bda265df53f (image=quay.io/ceph/ceph:v18, name=silly_williamson, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:01 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/3047131648' entity='client.admin' 
Oct  7 09:33:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:33:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:33:01 np0005473739 podman[97130]: 2025-10-07 13:33:01.380898734 +0000 UTC m=+0.034370525 container create 51029416696ed9015f1752ac9a6eb3139ec6adf83a4d8e49323560d0c4b3b136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 09:33:01 np0005473739 systemd[1]: Started libpod-conmon-51029416696ed9015f1752ac9a6eb3139ec6adf83a4d8e49323560d0c4b3b136.scope.
Oct  7 09:33:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:01 np0005473739 podman[97130]: 2025-10-07 13:33:01.461185108 +0000 UTC m=+0.114656929 container init 51029416696ed9015f1752ac9a6eb3139ec6adf83a4d8e49323560d0c4b3b136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:33:01 np0005473739 podman[97130]: 2025-10-07 13:33:01.36460281 +0000 UTC m=+0.018074631 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:01 np0005473739 podman[97130]: 2025-10-07 13:33:01.468897634 +0000 UTC m=+0.122369425 container start 51029416696ed9015f1752ac9a6eb3139ec6adf83a4d8e49323560d0c4b3b136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:01 np0005473739 goofy_ishizaka[97146]: 167 167
Oct  7 09:33:01 np0005473739 systemd[1]: libpod-51029416696ed9015f1752ac9a6eb3139ec6adf83a4d8e49323560d0c4b3b136.scope: Deactivated successfully.
Oct  7 09:33:01 np0005473739 podman[97130]: 2025-10-07 13:33:01.473525952 +0000 UTC m=+0.126997773 container attach 51029416696ed9015f1752ac9a6eb3139ec6adf83a4d8e49323560d0c4b3b136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 09:33:01 np0005473739 podman[97130]: 2025-10-07 13:33:01.473702037 +0000 UTC m=+0.127173828 container died 51029416696ed9015f1752ac9a6eb3139ec6adf83a4d8e49323560d0c4b3b136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e6998e04dcc3580be789fed73c31326014449d8bef365320708f1361a78182f8-merged.mount: Deactivated successfully.
Oct  7 09:33:01 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14244 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:33:01 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Saving service rgw.rgw spec with placement compute-0
Oct  7 09:33:01 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct  7 09:33:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  7 09:33:01 np0005473739 podman[97130]: 2025-10-07 13:33:01.511740025 +0000 UTC m=+0.165211816 container remove 51029416696ed9015f1752ac9a6eb3139ec6adf83a4d8e49323560d0c4b3b136 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_ishizaka, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:01 np0005473739 silly_williamson[96990]: Scheduled rgw.rgw update...
Oct  7 09:33:01 np0005473739 systemd[1]: libpod-conmon-51029416696ed9015f1752ac9a6eb3139ec6adf83a4d8e49323560d0c4b3b136.scope: Deactivated successfully.
Oct  7 09:33:01 np0005473739 systemd[1]: libpod-6039bb8aff0bbffc844fa467cfd37296c0f16602b5e3e406deb19bda265df53f.scope: Deactivated successfully.
Oct  7 09:33:01 np0005473739 podman[97168]: 2025-10-07 13:33:01.565172385 +0000 UTC m=+0.019113768 container died 6039bb8aff0bbffc844fa467cfd37296c0f16602b5e3e406deb19bda265df53f (image=quay.io/ceph/ceph:v18, name=silly_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a33db382f6e9ef79a137d2637f3eb8c52750a9c42066e96f211010d55e66ee1e-merged.mount: Deactivated successfully.
Oct  7 09:33:01 np0005473739 podman[97168]: 2025-10-07 13:33:01.607648786 +0000 UTC m=+0.061590149 container remove 6039bb8aff0bbffc844fa467cfd37296c0f16602b5e3e406deb19bda265df53f (image=quay.io/ceph/ceph:v18, name=silly_williamson, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:33:01 np0005473739 systemd[1]: libpod-conmon-6039bb8aff0bbffc844fa467cfd37296c0f16602b5e3e406deb19bda265df53f.scope: Deactivated successfully.
Oct  7 09:33:01 np0005473739 podman[97188]: 2025-10-07 13:33:01.659085086 +0000 UTC m=+0.035247209 container create d1b9e1d911372be9ed490731eb5ab10d77380e9f26fe285616ef072ad11261d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_diffie, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:01 np0005473739 podman[97188]: 2025-10-07 13:33:01.64393886 +0000 UTC m=+0.020101033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:01 np0005473739 systemd[1]: Started libpod-conmon-d1b9e1d911372be9ed490731eb5ab10d77380e9f26fe285616ef072ad11261d7.scope.
Oct  7 09:33:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c456bafba789a5ad1c3b3d1e43561ca1c54db166657cde23b3072b8d8b32a37d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c456bafba789a5ad1c3b3d1e43561ca1c54db166657cde23b3072b8d8b32a37d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c456bafba789a5ad1c3b3d1e43561ca1c54db166657cde23b3072b8d8b32a37d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c456bafba789a5ad1c3b3d1e43561ca1c54db166657cde23b3072b8d8b32a37d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c456bafba789a5ad1c3b3d1e43561ca1c54db166657cde23b3072b8d8b32a37d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:01 np0005473739 podman[97188]: 2025-10-07 13:33:01.802592928 +0000 UTC m=+0.178755071 container init d1b9e1d911372be9ed490731eb5ab10d77380e9f26fe285616ef072ad11261d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_diffie, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 09:33:01 np0005473739 podman[97188]: 2025-10-07 13:33:01.810002197 +0000 UTC m=+0.186164320 container start d1b9e1d911372be9ed490731eb5ab10d77380e9f26fe285616ef072ad11261d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_diffie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 09:33:01 np0005473739 podman[97188]: 2025-10-07 13:33:01.813355612 +0000 UTC m=+0.189517745 container attach d1b9e1d911372be9ed490731eb5ab10d77380e9f26fe285616ef072ad11261d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_diffie, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:02 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v72: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:02 np0005473739 python3[97289]: ansible-ansible.legacy.stat Invoked with path=/tmp/ceph_mds.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:33:02 np0005473739 stoic_diffie[97205]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:33:02 np0005473739 stoic_diffie[97205]: --> relative data size: 1.0
Oct  7 09:33:02 np0005473739 stoic_diffie[97205]: --> All data devices are unavailable
Oct  7 09:33:02 np0005473739 systemd[1]: libpod-d1b9e1d911372be9ed490731eb5ab10d77380e9f26fe285616ef072ad11261d7.scope: Deactivated successfully.
Oct  7 09:33:02 np0005473739 podman[97188]: 2025-10-07 13:33:02.826507633 +0000 UTC m=+1.202669756 container died d1b9e1d911372be9ed490731eb5ab10d77380e9f26fe285616ef072ad11261d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_diffie, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 09:33:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c456bafba789a5ad1c3b3d1e43561ca1c54db166657cde23b3072b8d8b32a37d-merged.mount: Deactivated successfully.
Oct  7 09:33:02 np0005473739 podman[97188]: 2025-10-07 13:33:02.877799838 +0000 UTC m=+1.253961961 container remove d1b9e1d911372be9ed490731eb5ab10d77380e9f26fe285616ef072ad11261d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_diffie, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct  7 09:33:02 np0005473739 systemd[1]: libpod-conmon-d1b9e1d911372be9ed490731eb5ab10d77380e9f26fe285616ef072ad11261d7.scope: Deactivated successfully.
Oct  7 09:33:02 np0005473739 python3[97383]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759843982.3603706-33273-12215439139377/source dest=/tmp/ceph_mds.yml mode=0644 force=True follow=False _original_basename=ceph_mds.yml.j2 checksum=e359e26d9e42bc107a0de03375144cf8590b6f68 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:33:03 np0005473739 ceph-mon[74295]: Saving service rgw.rgw spec with placement compute-0
Oct  7 09:33:03 np0005473739 podman[97588]: 2025-10-07 13:33:03.441718233 +0000 UTC m=+0.039489596 container create db05d5f93a2905db53fe275fbebeba6a3093efad7b2a7ff89511e75d4bcd06da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:03 np0005473739 systemd[1]: Started libpod-conmon-db05d5f93a2905db53fe275fbebeba6a3093efad7b2a7ff89511e75d4bcd06da.scope.
Oct  7 09:33:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:03 np0005473739 python3[97585]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   fs volume create cephfs '--placement=compute-0 '#012 _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:03 np0005473739 podman[97588]: 2025-10-07 13:33:03.424820483 +0000 UTC m=+0.022591876 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:03 np0005473739 podman[97588]: 2025-10-07 13:33:03.60621358 +0000 UTC m=+0.203984983 container init db05d5f93a2905db53fe275fbebeba6a3093efad7b2a7ff89511e75d4bcd06da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:03 np0005473739 podman[97588]: 2025-10-07 13:33:03.612065149 +0000 UTC m=+0.209836522 container start db05d5f93a2905db53fe275fbebeba6a3093efad7b2a7ff89511e75d4bcd06da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:03 np0005473739 dreamy_diffie[97605]: 167 167
Oct  7 09:33:03 np0005473739 systemd[1]: libpod-db05d5f93a2905db53fe275fbebeba6a3093efad7b2a7ff89511e75d4bcd06da.scope: Deactivated successfully.
Oct  7 09:33:03 np0005473739 podman[97588]: 2025-10-07 13:33:03.724785648 +0000 UTC m=+0.322557021 container attach db05d5f93a2905db53fe275fbebeba6a3093efad7b2a7ff89511e75d4bcd06da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 09:33:03 np0005473739 podman[97588]: 2025-10-07 13:33:03.725144488 +0000 UTC m=+0.322915861 container died db05d5f93a2905db53fe275fbebeba6a3093efad7b2a7ff89511e75d4bcd06da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:33:03 np0005473739 systemd[1]: var-lib-containers-storage-overlay-66ba4a792d411f83c7436489e6e188c87254dbbd79d1f500403261ddf0db4cb3-merged.mount: Deactivated successfully.
Oct  7 09:33:03 np0005473739 podman[97588]: 2025-10-07 13:33:03.764292504 +0000 UTC m=+0.362063877 container remove db05d5f93a2905db53fe275fbebeba6a3093efad7b2a7ff89511e75d4bcd06da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_diffie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:03 np0005473739 systemd[1]: libpod-conmon-db05d5f93a2905db53fe275fbebeba6a3093efad7b2a7ff89511e75d4bcd06da.scope: Deactivated successfully.
Oct  7 09:33:03 np0005473739 podman[97608]: 2025-10-07 13:33:03.804254601 +0000 UTC m=+0.268958447 container create 4427429a7544499b9a7505edc7c23cfd78f7e1207986a25ecc167e7564c60fdb (image=quay.io/ceph/ceph:v18, name=hopeful_booth, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:03 np0005473739 systemd[1]: Started libpod-conmon-4427429a7544499b9a7505edc7c23cfd78f7e1207986a25ecc167e7564c60fdb.scope.
Oct  7 09:33:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4e84790e292b0a25d83f739f410c07d606302df57bb19d84c1e9142b4789b2/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4e84790e292b0a25d83f739f410c07d606302df57bb19d84c1e9142b4789b2/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb4e84790e292b0a25d83f739f410c07d606302df57bb19d84c1e9142b4789b2/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:03 np0005473739 podman[97608]: 2025-10-07 13:33:03.787427863 +0000 UTC m=+0.252131749 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:03 np0005473739 podman[97608]: 2025-10-07 13:33:03.900950582 +0000 UTC m=+0.365654438 container init 4427429a7544499b9a7505edc7c23cfd78f7e1207986a25ecc167e7564c60fdb (image=quay.io/ceph/ceph:v18, name=hopeful_booth, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 09:33:03 np0005473739 podman[97608]: 2025-10-07 13:33:03.908077804 +0000 UTC m=+0.372781650 container start 4427429a7544499b9a7505edc7c23cfd78f7e1207986a25ecc167e7564c60fdb (image=quay.io/ceph/ceph:v18, name=hopeful_booth, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:33:03 np0005473739 podman[97608]: 2025-10-07 13:33:03.911555173 +0000 UTC m=+0.376259019 container attach 4427429a7544499b9a7505edc7c23cfd78f7e1207986a25ecc167e7564c60fdb (image=quay.io/ceph/ceph:v18, name=hopeful_booth, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:03 np0005473739 podman[97648]: 2025-10-07 13:33:03.918829578 +0000 UTC m=+0.042628597 container create 4925efced4f48e89d2a233fead3d472a24a9c38772d2232df2adc2e28eb11eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:33:03 np0005473739 systemd[1]: Started libpod-conmon-4925efced4f48e89d2a233fead3d472a24a9c38772d2232df2adc2e28eb11eb9.scope.
Oct  7 09:33:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07165db17804e649d2dbb4a520021ca47b137f479c662deeeca8690de7867b6a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07165db17804e649d2dbb4a520021ca47b137f479c662deeeca8690de7867b6a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07165db17804e649d2dbb4a520021ca47b137f479c662deeeca8690de7867b6a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/07165db17804e649d2dbb4a520021ca47b137f479c662deeeca8690de7867b6a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:03 np0005473739 podman[97648]: 2025-10-07 13:33:03.984327325 +0000 UTC m=+0.108126344 container init 4925efced4f48e89d2a233fead3d472a24a9c38772d2232df2adc2e28eb11eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:33:03 np0005473739 podman[97648]: 2025-10-07 13:33:03.991543579 +0000 UTC m=+0.115342598 container start 4925efced4f48e89d2a233fead3d472a24a9c38772d2232df2adc2e28eb11eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 09:33:03 np0005473739 podman[97648]: 2025-10-07 13:33:03.994522125 +0000 UTC m=+0.118321144 container attach 4925efced4f48e89d2a233fead3d472a24a9c38772d2232df2adc2e28eb11eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:33:03 np0005473739 podman[97648]: 2025-10-07 13:33:03.903135328 +0000 UTC m=+0.026934377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:04 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14246 -' entity='client.admin' cmd=[{"prefix": "fs volume create", "name": "cephfs", "placement": "compute-0 ", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"} v 0) v1
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  7 09:33:04 np0005473739 ceph-mgr[74587]: [volumes INFO volumes.module] Starting _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"} v 0) v1
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"} v 0) v1
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e31 do_prune osdmap full prune enabled
Oct  7 09:33:04 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0[74291]: 2025-10-07T13:33:04.465+0000 7f8494937640 -1 log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: log_channel(cluster) log [ERR] : Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: log_channel(cluster) log [WRN] : Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e2 new map
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e2 print_map#012e2#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-07T13:33:04.466197+0000#012modified#0112025-10-07T13:33:04.466246+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e32 e32: 3 total, 3 up, 3 in
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e32: 3 total, 3 up, 3 in
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : fsmap cephfs:0
Oct  7 09:33:04 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct  7 09:33:04 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:04 np0005473739 ceph-mgr[74587]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_create(name:cephfs, placement:compute-0 , prefix:fs volume create, target:['mon-mgr', '']) < ""
Oct  7 09:33:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v74: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool create", "pool": "cephfs.cephfs.meta"}]: dispatch
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"bulk": true, "prefix": "osd pool create", "pool": "cephfs.cephfs.data"}]: dispatch
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]: dispatch
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: Health check failed: 1 filesystem is offline (MDS_ALL_DOWN)
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: Health check failed: 1 filesystem is online with fewer MDS than max_mds (MDS_UP_LESS_THAN_MAX)
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "fs new", "fs_name": "cephfs", "metadata": "cephfs.cephfs.meta", "data": "cephfs.cephfs.data"}]': finished
Oct  7 09:33:04 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:04 np0005473739 systemd[1]: libpod-4427429a7544499b9a7505edc7c23cfd78f7e1207986a25ecc167e7564c60fdb.scope: Deactivated successfully.
Oct  7 09:33:04 np0005473739 podman[97608]: 2025-10-07 13:33:04.521284203 +0000 UTC m=+0.985988049 container died 4427429a7544499b9a7505edc7c23cfd78f7e1207986a25ecc167e7564c60fdb (image=quay.io/ceph/ceph:v18, name=hopeful_booth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-fb4e84790e292b0a25d83f739f410c07d606302df57bb19d84c1e9142b4789b2-merged.mount: Deactivated successfully.
Oct  7 09:33:04 np0005473739 podman[97608]: 2025-10-07 13:33:04.560788018 +0000 UTC m=+1.025491864 container remove 4427429a7544499b9a7505edc7c23cfd78f7e1207986a25ecc167e7564c60fdb (image=quay.io/ceph/ceph:v18, name=hopeful_booth, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:33:04 np0005473739 systemd[1]: libpod-conmon-4427429a7544499b9a7505edc7c23cfd78f7e1207986a25ecc167e7564c60fdb.scope: Deactivated successfully.
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]: {
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:    "0": [
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:        {
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "devices": [
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "/dev/loop3"
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            ],
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_name": "ceph_lv0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_size": "21470642176",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "name": "ceph_lv0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "tags": {
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.crush_device_class": "",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.encrypted": "0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.osd_id": "0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.type": "block",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.vdo": "0"
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            },
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "type": "block",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "vg_name": "ceph_vg0"
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:        }
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:    ],
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:    "1": [
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:        {
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "devices": [
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "/dev/loop4"
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            ],
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_name": "ceph_lv1",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_size": "21470642176",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "name": "ceph_lv1",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "tags": {
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.crush_device_class": "",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.encrypted": "0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.osd_id": "1",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.type": "block",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.vdo": "0"
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            },
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "type": "block",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "vg_name": "ceph_vg1"
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:        }
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:    ],
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:    "2": [
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:        {
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "devices": [
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "/dev/loop5"
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            ],
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_name": "ceph_lv2",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_size": "21470642176",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "name": "ceph_lv2",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "tags": {
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.crush_device_class": "",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.encrypted": "0",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.osd_id": "2",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.type": "block",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:                "ceph.vdo": "0"
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            },
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "type": "block",
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:            "vg_name": "ceph_vg2"
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:        }
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]:    ]
Oct  7 09:33:04 np0005473739 distracted_brattain[97665]: }
Oct  7 09:33:04 np0005473739 systemd[1]: libpod-4925efced4f48e89d2a233fead3d472a24a9c38772d2232df2adc2e28eb11eb9.scope: Deactivated successfully.
Oct  7 09:33:04 np0005473739 podman[97648]: 2025-10-07 13:33:04.757400254 +0000 UTC m=+0.881199273 container died 4925efced4f48e89d2a233fead3d472a24a9c38772d2232df2adc2e28eb11eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-07165db17804e649d2dbb4a520021ca47b137f479c662deeeca8690de7867b6a-merged.mount: Deactivated successfully.
Oct  7 09:33:04 np0005473739 podman[97648]: 2025-10-07 13:33:04.811593334 +0000 UTC m=+0.935392353 container remove 4925efced4f48e89d2a233fead3d472a24a9c38772d2232df2adc2e28eb11eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:04 np0005473739 systemd[1]: libpod-conmon-4925efced4f48e89d2a233fead3d472a24a9c38772d2232df2adc2e28eb11eb9.scope: Deactivated successfully.
Oct  7 09:33:04 np0005473739 python3[97738]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z   --volume /tmp/ceph_mds.yml:/home/ceph_spec.yaml:z   --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch apply --in-file /home/ceph_spec.yaml _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:04 np0005473739 podman[97791]: 2025-10-07 13:33:04.972055458 +0000 UTC m=+0.036212603 container create dad3d9817c17202fa39153ec0fd0d0cc42efdea64f62f987100ae9568bd69042 (image=quay.io/ceph/ceph:v18, name=hungry_swartz, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:05 np0005473739 systemd[1]: Started libpod-conmon-dad3d9817c17202fa39153ec0fd0d0cc42efdea64f62f987100ae9568bd69042.scope.
Oct  7 09:33:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00dc746027a3883147c4b6a3a56add95313ae20a4459474d4a7a581b0e1ec8e8/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00dc746027a3883147c4b6a3a56add95313ae20a4459474d4a7a581b0e1ec8e8/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/00dc746027a3883147c4b6a3a56add95313ae20a4459474d4a7a581b0e1ec8e8/merged/home/ceph_spec.yaml supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:05 np0005473739 podman[97791]: 2025-10-07 13:33:05.053394928 +0000 UTC m=+0.117552093 container init dad3d9817c17202fa39153ec0fd0d0cc42efdea64f62f987100ae9568bd69042 (image=quay.io/ceph/ceph:v18, name=hungry_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 09:33:05 np0005473739 podman[97791]: 2025-10-07 13:33:04.95761441 +0000 UTC m=+0.021771595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:05 np0005473739 podman[97791]: 2025-10-07 13:33:05.062660274 +0000 UTC m=+0.126817429 container start dad3d9817c17202fa39153ec0fd0d0cc42efdea64f62f987100ae9568bd69042 (image=quay.io/ceph/ceph:v18, name=hungry_swartz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:05 np0005473739 podman[97791]: 2025-10-07 13:33:05.065783393 +0000 UTC m=+0.129940548 container attach dad3d9817c17202fa39153ec0fd0d0cc42efdea64f62f987100ae9568bd69042 (image=quay.io/ceph/ceph:v18, name=hungry_swartz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:33:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:05 np0005473739 podman[97903]: 2025-10-07 13:33:05.335637523 +0000 UTC m=+0.035246918 container create 72184d3b0a4c4ade6271bf598c12a7d7caf6a0c7eddebfa3e7907e4f4eb76fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 09:33:05 np0005473739 systemd[1]: Started libpod-conmon-72184d3b0a4c4ade6271bf598c12a7d7caf6a0c7eddebfa3e7907e4f4eb76fab.scope.
Oct  7 09:33:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:05 np0005473739 podman[97903]: 2025-10-07 13:33:05.389549345 +0000 UTC m=+0.089158760 container init 72184d3b0a4c4ade6271bf598c12a7d7caf6a0c7eddebfa3e7907e4f4eb76fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_albattani, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:33:05 np0005473739 podman[97903]: 2025-10-07 13:33:05.395977999 +0000 UTC m=+0.095587394 container start 72184d3b0a4c4ade6271bf598c12a7d7caf6a0c7eddebfa3e7907e4f4eb76fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_albattani, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 09:33:05 np0005473739 podman[97903]: 2025-10-07 13:33:05.399527789 +0000 UTC m=+0.099137204 container attach 72184d3b0a4c4ade6271bf598c12a7d7caf6a0c7eddebfa3e7907e4f4eb76fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_albattani, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:33:05 np0005473739 romantic_albattani[97920]: 167 167
Oct  7 09:33:05 np0005473739 systemd[1]: libpod-72184d3b0a4c4ade6271bf598c12a7d7caf6a0c7eddebfa3e7907e4f4eb76fab.scope: Deactivated successfully.
Oct  7 09:33:05 np0005473739 conmon[97920]: conmon 72184d3b0a4c4ade6271 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-72184d3b0a4c4ade6271bf598c12a7d7caf6a0c7eddebfa3e7907e4f4eb76fab.scope/container/memory.events
Oct  7 09:33:05 np0005473739 podman[97903]: 2025-10-07 13:33:05.402168126 +0000 UTC m=+0.101777541 container died 72184d3b0a4c4ade6271bf598c12a7d7caf6a0c7eddebfa3e7907e4f4eb76fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:33:05 np0005473739 podman[97903]: 2025-10-07 13:33:05.320564929 +0000 UTC m=+0.020174344 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-71cc165ef1e713800886aed99ebe89225ff07d8c64d900730c02286b73ade088-merged.mount: Deactivated successfully.
Oct  7 09:33:05 np0005473739 podman[97903]: 2025-10-07 13:33:05.436947031 +0000 UTC m=+0.136556426 container remove 72184d3b0a4c4ade6271bf598c12a7d7caf6a0c7eddebfa3e7907e4f4eb76fab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct  7 09:33:05 np0005473739 systemd[1]: libpod-conmon-72184d3b0a4c4ade6271bf598c12a7d7caf6a0c7eddebfa3e7907e4f4eb76fab.scope: Deactivated successfully.
Oct  7 09:33:05 np0005473739 ceph-mon[74295]: Saving service mds.cephfs spec with placement compute-0
Oct  7 09:33:05 np0005473739 podman[97962]: 2025-10-07 13:33:05.591767763 +0000 UTC m=+0.044537005 container create f119912da14bcdf97107b65deb37a7a8ccc8f60545b265b0895dceec6c415688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wilson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:05 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14248 -' entity='client.admin' cmd=[{"prefix": "orch apply", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 09:33:05 np0005473739 ceph-mgr[74587]: [cephadm INFO root] Saving service mds.cephfs spec with placement compute-0
Oct  7 09:33:05 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Saving service mds.cephfs spec with placement compute-0
Oct  7 09:33:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  7 09:33:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:05 np0005473739 hungry_swartz[97833]: Scheduled mds.cephfs update...
Oct  7 09:33:05 np0005473739 systemd[1]: Started libpod-conmon-f119912da14bcdf97107b65deb37a7a8ccc8f60545b265b0895dceec6c415688.scope.
Oct  7 09:33:05 np0005473739 podman[97791]: 2025-10-07 13:33:05.650026955 +0000 UTC m=+0.714184110 container died dad3d9817c17202fa39153ec0fd0d0cc42efdea64f62f987100ae9568bd69042 (image=quay.io/ceph/ceph:v18, name=hungry_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 09:33:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:05 np0005473739 systemd[1]: libpod-dad3d9817c17202fa39153ec0fd0d0cc42efdea64f62f987100ae9568bd69042.scope: Deactivated successfully.
Oct  7 09:33:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec8053e437ad9c86d11fa9dd3e57c48e18a6f6b8cabf0ccfb97967e448d460b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec8053e437ad9c86d11fa9dd3e57c48e18a6f6b8cabf0ccfb97967e448d460b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec8053e437ad9c86d11fa9dd3e57c48e18a6f6b8cabf0ccfb97967e448d460b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eec8053e437ad9c86d11fa9dd3e57c48e18a6f6b8cabf0ccfb97967e448d460b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:05 np0005473739 podman[97962]: 2025-10-07 13:33:05.574057432 +0000 UTC m=+0.026826694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-00dc746027a3883147c4b6a3a56add95313ae20a4459474d4a7a581b0e1ec8e8-merged.mount: Deactivated successfully.
Oct  7 09:33:05 np0005473739 podman[97962]: 2025-10-07 13:33:05.685351965 +0000 UTC m=+0.138121227 container init f119912da14bcdf97107b65deb37a7a8ccc8f60545b265b0895dceec6c415688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wilson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:05 np0005473739 podman[97962]: 2025-10-07 13:33:05.692605 +0000 UTC m=+0.145374242 container start f119912da14bcdf97107b65deb37a7a8ccc8f60545b265b0895dceec6c415688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wilson, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 09:33:05 np0005473739 podman[97962]: 2025-10-07 13:33:05.695179415 +0000 UTC m=+0.147948677 container attach f119912da14bcdf97107b65deb37a7a8ccc8f60545b265b0895dceec6c415688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:05 np0005473739 podman[97791]: 2025-10-07 13:33:05.708620727 +0000 UTC m=+0.772777882 container remove dad3d9817c17202fa39153ec0fd0d0cc42efdea64f62f987100ae9568bd69042 (image=quay.io/ceph/ceph:v18, name=hungry_swartz, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:05 np0005473739 systemd[1]: libpod-conmon-dad3d9817c17202fa39153ec0fd0d0cc42efdea64f62f987100ae9568bd69042.scope: Deactivated successfully.
Oct  7 09:33:06 np0005473739 python3[98074]: ansible-ansible.legacy.stat Invoked with path=/etc/ceph/ceph.client.openstack.keyring follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Oct  7 09:33:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v75: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]: {
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "osd_id": 2,
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "type": "bluestore"
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:    },
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "osd_id": 1,
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "type": "bluestore"
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:    },
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "osd_id": 0,
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:        "type": "bluestore"
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]:    }
Oct  7 09:33:06 np0005473739 hopeful_wilson[97981]: }
Oct  7 09:33:06 np0005473739 ceph-mon[74295]: Saving service mds.cephfs spec with placement compute-0
Oct  7 09:33:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:06 np0005473739 systemd[1]: libpod-f119912da14bcdf97107b65deb37a7a8ccc8f60545b265b0895dceec6c415688.scope: Deactivated successfully.
Oct  7 09:33:06 np0005473739 podman[97962]: 2025-10-07 13:33:06.642469578 +0000 UTC m=+1.095238830 container died f119912da14bcdf97107b65deb37a7a8ccc8f60545b265b0895dceec6c415688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 09:33:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-eec8053e437ad9c86d11fa9dd3e57c48e18a6f6b8cabf0ccfb97967e448d460b-merged.mount: Deactivated successfully.
Oct  7 09:33:06 np0005473739 podman[97962]: 2025-10-07 13:33:06.699237544 +0000 UTC m=+1.152006786 container remove f119912da14bcdf97107b65deb37a7a8ccc8f60545b265b0895dceec6c415688 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True)
Oct  7 09:33:06 np0005473739 systemd[1]: libpod-conmon-f119912da14bcdf97107b65deb37a7a8ccc8f60545b265b0895dceec6c415688.scope: Deactivated successfully.
Oct  7 09:33:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:33:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:33:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:06 np0005473739 python3[98186]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759843986.1891959-33303-168360271641799/source dest=/etc/ceph/ceph.client.openstack.keyring mode=0644 force=True owner=167 group=167 follow=False _original_basename=ceph_key.j2 checksum=a5e69f9fe4e1ae5e1f06c4bb7cdaf69a79fd44f5 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:33:07 np0005473739 python3[98413]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth import -i /etc/ceph/ceph.client.openstack.keyring _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:07 np0005473739 podman[98442]: 2025-10-07 13:33:07.449311226 +0000 UTC m=+0.057868304 container create 9d87e4d121ae8dd5061606383a68f3afcee3f21a49d927faeea0fb63dabd5c7f (image=quay.io/ceph/ceph:v18, name=vigorous_gauss, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:07 np0005473739 systemd[1]: Started libpod-conmon-9d87e4d121ae8dd5061606383a68f3afcee3f21a49d927faeea0fb63dabd5c7f.scope.
Oct  7 09:33:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2373e0b632e0dd6e8c570622b4c8c2710b5929bbb04bef1354125d2d80270ba/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2373e0b632e0dd6e8c570622b4c8c2710b5929bbb04bef1354125d2d80270ba/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:07 np0005473739 podman[98442]: 2025-10-07 13:33:07.414131501 +0000 UTC m=+0.022688599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:07 np0005473739 podman[98472]: 2025-10-07 13:33:07.516203339 +0000 UTC m=+0.063419705 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 09:33:07 np0005473739 podman[98442]: 2025-10-07 13:33:07.526582904 +0000 UTC m=+0.135139992 container init 9d87e4d121ae8dd5061606383a68f3afcee3f21a49d927faeea0fb63dabd5c7f (image=quay.io/ceph/ceph:v18, name=vigorous_gauss, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:07 np0005473739 podman[98442]: 2025-10-07 13:33:07.533119529 +0000 UTC m=+0.141676597 container start 9d87e4d121ae8dd5061606383a68f3afcee3f21a49d927faeea0fb63dabd5c7f (image=quay.io/ceph/ceph:v18, name=vigorous_gauss, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 09:33:07 np0005473739 podman[98442]: 2025-10-07 13:33:07.536142906 +0000 UTC m=+0.144699974 container attach 9d87e4d121ae8dd5061606383a68f3afcee3f21a49d927faeea0fb63dabd5c7f (image=quay.io/ceph/ceph:v18, name=vigorous_gauss, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:33:07 np0005473739 podman[98472]: 2025-10-07 13:33:07.608372195 +0000 UTC m=+0.155588561 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:33:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth import"} v 0) v1
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2797527279' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2797527279' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  7 09:33:08 np0005473739 systemd[1]: libpod-9d87e4d121ae8dd5061606383a68f3afcee3f21a49d927faeea0fb63dabd5c7f.scope: Deactivated successfully.
Oct  7 09:33:08 np0005473739 podman[98442]: 2025-10-07 13:33:08.21315939 +0000 UTC m=+0.821716538 container died 9d87e4d121ae8dd5061606383a68f3afcee3f21a49d927faeea0fb63dabd5c7f (image=quay.io/ceph/ceph:v18, name=vigorous_gauss, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:08 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7d8c75a6-c5db-499b-b52a-79e269bf7fc6 does not exist
Oct  7 09:33:08 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 44d1a65b-305a-4f5f-80bf-2d19ebadc618 does not exist
Oct  7 09:33:08 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev da945e95-a8d7-44d7-a03a-1be59e9bf4bc does not exist
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:33:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f2373e0b632e0dd6e8c570622b4c8c2710b5929bbb04bef1354125d2d80270ba-merged.mount: Deactivated successfully.
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:33:08 np0005473739 podman[98442]: 2025-10-07 13:33:08.339744242 +0000 UTC m=+0.948301310 container remove 9d87e4d121ae8dd5061606383a68f3afcee3f21a49d927faeea0fb63dabd5c7f (image=quay.io/ceph/ceph:v18, name=vigorous_gauss, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:08 np0005473739 systemd[1]: libpod-conmon-9d87e4d121ae8dd5061606383a68f3afcee3f21a49d927faeea0fb63dabd5c7f.scope: Deactivated successfully.
Oct  7 09:33:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v76: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:08 np0005473739 podman[98775]: 2025-10-07 13:33:08.825803856 +0000 UTC m=+0.036461330 container create 2cb85a7c250e538de0f185b27b85be8e52cb556b809a66df39b8dad7cc0b7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_booth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 09:33:08 np0005473739 systemd[1]: Started libpod-conmon-2cb85a7c250e538de0f185b27b85be8e52cb556b809a66df39b8dad7cc0b7da8.scope.
Oct  7 09:33:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:08 np0005473739 podman[98775]: 2025-10-07 13:33:08.883773731 +0000 UTC m=+0.094431225 container init 2cb85a7c250e538de0f185b27b85be8e52cb556b809a66df39b8dad7cc0b7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_booth, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:08 np0005473739 podman[98775]: 2025-10-07 13:33:08.889862236 +0000 UTC m=+0.100519700 container start 2cb85a7c250e538de0f185b27b85be8e52cb556b809a66df39b8dad7cc0b7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_booth, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2797527279' entity='client.admin' cmd=[{"prefix": "auth import"}]: dispatch
Oct  7 09:33:08 np0005473739 frosty_booth[98792]: 167 167
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:08 np0005473739 podman[98775]: 2025-10-07 13:33:08.894261447 +0000 UTC m=+0.104918931 container attach 2cb85a7c250e538de0f185b27b85be8e52cb556b809a66df39b8dad7cc0b7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_booth, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2797527279' entity='client.admin' cmd='[{"prefix": "auth import"}]': finished
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:33:08 np0005473739 systemd[1]: libpod-2cb85a7c250e538de0f185b27b85be8e52cb556b809a66df39b8dad7cc0b7da8.scope: Deactivated successfully.
Oct  7 09:33:08 np0005473739 podman[98775]: 2025-10-07 13:33:08.896456504 +0000 UTC m=+0.107113998 container died 2cb85a7c250e538de0f185b27b85be8e52cb556b809a66df39b8dad7cc0b7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 09:33:08 np0005473739 podman[98775]: 2025-10-07 13:33:08.811316266 +0000 UTC m=+0.021973750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5c001acbc6a923eefe3bc8dc0306195303719b657772c3f1de6bb0175c38fefb-merged.mount: Deactivated successfully.
Oct  7 09:33:08 np0005473739 podman[98775]: 2025-10-07 13:33:08.941549001 +0000 UTC m=+0.152206485 container remove 2cb85a7c250e538de0f185b27b85be8e52cb556b809a66df39b8dad7cc0b7da8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=frosty_booth, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct  7 09:33:08 np0005473739 systemd[1]: libpod-conmon-2cb85a7c250e538de0f185b27b85be8e52cb556b809a66df39b8dad7cc0b7da8.scope: Deactivated successfully.
Oct  7 09:33:09 np0005473739 python3[98834]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   status --format json | jq .monmap.num_mons _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:09 np0005473739 podman[98842]: 2025-10-07 13:33:09.103051393 +0000 UTC m=+0.037408403 container create 7a2a69b243f9acb56f05b0b5ca5737f0401f6bffc488b01ab5691b8db0d8a7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:09 np0005473739 systemd[1]: Started libpod-conmon-7a2a69b243f9acb56f05b0b5ca5737f0401f6bffc488b01ab5691b8db0d8a7ef.scope.
Oct  7 09:33:09 np0005473739 podman[98857]: 2025-10-07 13:33:09.151737632 +0000 UTC m=+0.049999553 container create 5ff8d7d5a712d96c6c62c543b58e1528aa5441e11e7007131d6a7993a70a4be9 (image=quay.io/ceph/ceph:v18, name=sleepy_goldwasser, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 09:33:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df9aa66dabc23c9b9069cff9b4d3f5269253cbac0a66830cd0e10cb1bd492de/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df9aa66dabc23c9b9069cff9b4d3f5269253cbac0a66830cd0e10cb1bd492de/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df9aa66dabc23c9b9069cff9b4d3f5269253cbac0a66830cd0e10cb1bd492de/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df9aa66dabc23c9b9069cff9b4d3f5269253cbac0a66830cd0e10cb1bd492de/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df9aa66dabc23c9b9069cff9b4d3f5269253cbac0a66830cd0e10cb1bd492de/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:09 np0005473739 systemd[1]: Started libpod-conmon-5ff8d7d5a712d96c6c62c543b58e1528aa5441e11e7007131d6a7993a70a4be9.scope.
Oct  7 09:33:09 np0005473739 podman[98842]: 2025-10-07 13:33:09.086398529 +0000 UTC m=+0.020755559 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab89bc6a17ce567cd10a4454e020a7d5f81083600ee9a463a32b2188e5f5235/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6ab89bc6a17ce567cd10a4454e020a7d5f81083600ee9a463a32b2188e5f5235/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:09 np0005473739 podman[98842]: 2025-10-07 13:33:09.22316912 +0000 UTC m=+0.157526150 container init 7a2a69b243f9acb56f05b0b5ca5737f0401f6bffc488b01ab5691b8db0d8a7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_franklin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:09 np0005473739 podman[98857]: 2025-10-07 13:33:09.134726159 +0000 UTC m=+0.032988130 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:09 np0005473739 podman[98842]: 2025-10-07 13:33:09.230752204 +0000 UTC m=+0.165109214 container start 7a2a69b243f9acb56f05b0b5ca5737f0401f6bffc488b01ab5691b8db0d8a7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 09:33:09 np0005473739 podman[98842]: 2025-10-07 13:33:09.362979329 +0000 UTC m=+0.297336339 container attach 7a2a69b243f9acb56f05b0b5ca5737f0401f6bffc488b01ab5691b8db0d8a7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_franklin, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:09 np0005473739 podman[98857]: 2025-10-07 13:33:09.422079743 +0000 UTC m=+0.320341664 container init 5ff8d7d5a712d96c6c62c543b58e1528aa5441e11e7007131d6a7993a70a4be9 (image=quay.io/ceph/ceph:v18, name=sleepy_goldwasser, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:33:09 np0005473739 podman[98857]: 2025-10-07 13:33:09.427295026 +0000 UTC m=+0.325556947 container start 5ff8d7d5a712d96c6c62c543b58e1528aa5441e11e7007131d6a7993a70a4be9 (image=quay.io/ceph/ceph:v18, name=sleepy_goldwasser, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:09 np0005473739 podman[98857]: 2025-10-07 13:33:09.516537098 +0000 UTC m=+0.414799109 container attach 5ff8d7d5a712d96c6c62c543b58e1528aa5441e11e7007131d6a7993a70a4be9 (image=quay.io/ceph/ceph:v18, name=sleepy_goldwasser, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  7 09:33:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/537827042' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  7 09:33:10 np0005473739 sleepy_goldwasser[98879]: 
Oct  7 09:33:10 np0005473739 sleepy_goldwasser[98879]: {"fsid":"82044f27-a8da-5b2a-a297-ff6afc620e1f","health":{"status":"HEALTH_ERR","checks":{"MDS_ALL_DOWN":{"severity":"HEALTH_ERR","summary":{"message":"1 filesystem is offline","count":1},"muted":false},"MDS_UP_LESS_THAN_MAX":{"severity":"HEALTH_WARN","summary":{"message":"1 filesystem is online with fewer MDS than max_mds","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":154,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":32,"num_osds":3,"num_up_osds":3,"osd_up_since":1759843947,"num_in_osds":3,"osd_in_since":1759843923,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7}],"num_pgs":7,"num_pools":7,"num_objects":2,"data_bytes":459280,"bytes_used":83841024,"bytes_avail":64328085504,"bytes_total":64411926528},"fsmap":{"epoch":2,"id":1,"up":0,"in":0,"max":1,"by_rank":[],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":2,"modified":"2025-10-07T13:32:24.506573+0000","services":{"osd":{"daemons":{"summary":"","0":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct  7 09:33:10 np0005473739 systemd[1]: libpod-5ff8d7d5a712d96c6c62c543b58e1528aa5441e11e7007131d6a7993a70a4be9.scope: Deactivated successfully.
Oct  7 09:33:10 np0005473739 podman[98857]: 2025-10-07 13:33:10.080791251 +0000 UTC m=+0.979053172 container died 5ff8d7d5a712d96c6c62c543b58e1528aa5441e11e7007131d6a7993a70a4be9 (image=quay.io/ceph/ceph:v18, name=sleepy_goldwasser, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6ab89bc6a17ce567cd10a4454e020a7d5f81083600ee9a463a32b2188e5f5235-merged.mount: Deactivated successfully.
Oct  7 09:33:10 np0005473739 podman[98857]: 2025-10-07 13:33:10.127008268 +0000 UTC m=+1.025270199 container remove 5ff8d7d5a712d96c6c62c543b58e1528aa5441e11e7007131d6a7993a70a4be9 (image=quay.io/ceph/ceph:v18, name=sleepy_goldwasser, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:10 np0005473739 systemd[1]: libpod-conmon-5ff8d7d5a712d96c6c62c543b58e1528aa5441e11e7007131d6a7993a70a4be9.scope: Deactivated successfully.
Oct  7 09:33:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:10 np0005473739 focused_franklin[98874]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:33:10 np0005473739 focused_franklin[98874]: --> relative data size: 1.0
Oct  7 09:33:10 np0005473739 focused_franklin[98874]: --> All data devices are unavailable
Oct  7 09:33:10 np0005473739 podman[98842]: 2025-10-07 13:33:10.344046342 +0000 UTC m=+1.278403352 container died 7a2a69b243f9acb56f05b0b5ca5737f0401f6bffc488b01ab5691b8db0d8a7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_franklin, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:10 np0005473739 systemd[1]: libpod-7a2a69b243f9acb56f05b0b5ca5737f0401f6bffc488b01ab5691b8db0d8a7ef.scope: Deactivated successfully.
Oct  7 09:33:10 np0005473739 systemd[1]: libpod-7a2a69b243f9acb56f05b0b5ca5737f0401f6bffc488b01ab5691b8db0d8a7ef.scope: Consumed 1.051s CPU time.
Oct  7 09:33:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0df9aa66dabc23c9b9069cff9b4d3f5269253cbac0a66830cd0e10cb1bd492de-merged.mount: Deactivated successfully.
Oct  7 09:33:10 np0005473739 podman[98842]: 2025-10-07 13:33:10.397746009 +0000 UTC m=+1.332103019 container remove 7a2a69b243f9acb56f05b0b5ca5737f0401f6bffc488b01ab5691b8db0d8a7ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:33:10 np0005473739 systemd[1]: libpod-conmon-7a2a69b243f9acb56f05b0b5ca5737f0401f6bffc488b01ab5691b8db0d8a7ef.scope: Deactivated successfully.
Oct  7 09:33:10 np0005473739 python3[98966]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   mon dump --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:10 np0005473739 podman[98999]: 2025-10-07 13:33:10.515278431 +0000 UTC m=+0.038488830 container create 2ce325bc1d26e1ee5ae9dff6f93792765078011b5aa2a035899780feec2a0419 (image=quay.io/ceph/ceph:v18, name=goofy_booth, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v77: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:10 np0005473739 systemd[1]: Started libpod-conmon-2ce325bc1d26e1ee5ae9dff6f93792765078011b5aa2a035899780feec2a0419.scope.
Oct  7 09:33:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d1bfc3a5d692af67588d2691db1e1919aeab7ccba2342db2e54bf174e82aa/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df0d1bfc3a5d692af67588d2691db1e1919aeab7ccba2342db2e54bf174e82aa/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:10 np0005473739 podman[98999]: 2025-10-07 13:33:10.49953938 +0000 UTC m=+0.022749809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:10 np0005473739 podman[98999]: 2025-10-07 13:33:10.599174637 +0000 UTC m=+0.122385056 container init 2ce325bc1d26e1ee5ae9dff6f93792765078011b5aa2a035899780feec2a0419 (image=quay.io/ceph/ceph:v18, name=goofy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 09:33:10 np0005473739 podman[98999]: 2025-10-07 13:33:10.608086723 +0000 UTC m=+0.131297122 container start 2ce325bc1d26e1ee5ae9dff6f93792765078011b5aa2a035899780feec2a0419 (image=quay.io/ceph/ceph:v18, name=goofy_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:10 np0005473739 podman[98999]: 2025-10-07 13:33:10.614033945 +0000 UTC m=+0.137244344 container attach 2ce325bc1d26e1ee5ae9dff6f93792765078011b5aa2a035899780feec2a0419 (image=quay.io/ceph/ceph:v18, name=goofy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 09:33:10 np0005473739 podman[99158]: 2025-10-07 13:33:10.984875945 +0000 UTC m=+0.033970566 container create f77f8389fc7b8703dec8f84cf1921a08581a642f773d87495d43889475b3f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hellman, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 09:33:11 np0005473739 systemd[1]: Started libpod-conmon-f77f8389fc7b8703dec8f84cf1921a08581a642f773d87495d43889475b3f5e1.scope.
Oct  7 09:33:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:11 np0005473739 podman[99158]: 2025-10-07 13:33:11.040602683 +0000 UTC m=+0.089697324 container init f77f8389fc7b8703dec8f84cf1921a08581a642f773d87495d43889475b3f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hellman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 09:33:11 np0005473739 podman[99158]: 2025-10-07 13:33:11.046046002 +0000 UTC m=+0.095140633 container start f77f8389fc7b8703dec8f84cf1921a08581a642f773d87495d43889475b3f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hellman, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:11 np0005473739 romantic_hellman[99174]: 167 167
Oct  7 09:33:11 np0005473739 systemd[1]: libpod-f77f8389fc7b8703dec8f84cf1921a08581a642f773d87495d43889475b3f5e1.scope: Deactivated successfully.
Oct  7 09:33:11 np0005473739 podman[99158]: 2025-10-07 13:33:11.049423728 +0000 UTC m=+0.098518349 container attach f77f8389fc7b8703dec8f84cf1921a08581a642f773d87495d43889475b3f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hellman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:11 np0005473739 podman[99158]: 2025-10-07 13:33:11.049697975 +0000 UTC m=+0.098792606 container died f77f8389fc7b8703dec8f84cf1921a08581a642f773d87495d43889475b3f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hellman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:11 np0005473739 podman[99158]: 2025-10-07 13:33:10.970471808 +0000 UTC m=+0.019566459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c15341668a61239b3338464362ee83c2fe2471e0df2ba114ad9d3828f0a17281-merged.mount: Deactivated successfully.
Oct  7 09:33:11 np0005473739 podman[99158]: 2025-10-07 13:33:11.079279888 +0000 UTC m=+0.128374499 container remove f77f8389fc7b8703dec8f84cf1921a08581a642f773d87495d43889475b3f5e1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_hellman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 09:33:11 np0005473739 systemd[1]: libpod-conmon-f77f8389fc7b8703dec8f84cf1921a08581a642f773d87495d43889475b3f5e1.scope: Deactivated successfully.
Oct  7 09:33:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 09:33:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2546718925' entity='client.admin' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 09:33:11 np0005473739 goofy_booth[99043]: 
Oct  7 09:33:11 np0005473739 goofy_booth[99043]: {"epoch":1,"fsid":"82044f27-a8da-5b2a-a297-ff6afc620e1f","modified":"2025-10-07T13:30:30.125721Z","created":"2025-10-07T13:30:30.125721Z","min_mon_release":18,"min_mon_release_name":"reef","election_strategy":1,"disallowed_leaders: ":"","stretch_mode":false,"tiebreaker_mon":"","removed_ranks: ":"","features":{"persistent":["kraken","luminous","mimic","osdmap-prune","nautilus","octopus","pacific","elector-pinging","quincy","reef"],"optional":[]},"mons":[{"rank":0,"name":"compute-0","public_addrs":{"addrvec":[{"type":"v2","addr":"192.168.122.100:3300","nonce":0},{"type":"v1","addr":"192.168.122.100:6789","nonce":0}]},"addr":"192.168.122.100:6789/0","public_addr":"192.168.122.100:6789/0","priority":0,"weight":0,"crush_location":"{}"}],"quorum":[0]}
Oct  7 09:33:11 np0005473739 goofy_booth[99043]: dumped monmap epoch 1
Oct  7 09:33:11 np0005473739 podman[99198]: 2025-10-07 13:33:11.209216236 +0000 UTC m=+0.035270609 container create b047fea4b386b34b566823105b23305ad4a4453c0b44b4b5d847fe94418227e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilson, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:33:11 np0005473739 systemd[1]: libpod-2ce325bc1d26e1ee5ae9dff6f93792765078011b5aa2a035899780feec2a0419.scope: Deactivated successfully.
Oct  7 09:33:11 np0005473739 podman[98999]: 2025-10-07 13:33:11.213005211 +0000 UTC m=+0.736215630 container died 2ce325bc1d26e1ee5ae9dff6f93792765078011b5aa2a035899780feec2a0419 (image=quay.io/ceph/ceph:v18, name=goofy_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:33:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-df0d1bfc3a5d692af67588d2691db1e1919aeab7ccba2342db2e54bf174e82aa-merged.mount: Deactivated successfully.
Oct  7 09:33:11 np0005473739 podman[98999]: 2025-10-07 13:33:11.275958944 +0000 UTC m=+0.799169343 container remove 2ce325bc1d26e1ee5ae9dff6f93792765078011b5aa2a035899780feec2a0419 (image=quay.io/ceph/ceph:v18, name=goofy_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:33:11 np0005473739 systemd[1]: Started libpod-conmon-b047fea4b386b34b566823105b23305ad4a4453c0b44b4b5d847fe94418227e6.scope.
Oct  7 09:33:11 np0005473739 systemd[1]: libpod-conmon-2ce325bc1d26e1ee5ae9dff6f93792765078011b5aa2a035899780feec2a0419.scope: Deactivated successfully.
Oct  7 09:33:11 np0005473739 podman[99198]: 2025-10-07 13:33:11.19367563 +0000 UTC m=+0.019730013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4495ab8fb3601c01611246d961e45e8ee5173c5e35e1acd0e4ef12f89e68e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4495ab8fb3601c01611246d961e45e8ee5173c5e35e1acd0e4ef12f89e68e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4495ab8fb3601c01611246d961e45e8ee5173c5e35e1acd0e4ef12f89e68e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a4495ab8fb3601c01611246d961e45e8ee5173c5e35e1acd0e4ef12f89e68e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:11 np0005473739 podman[99198]: 2025-10-07 13:33:11.330739669 +0000 UTC m=+0.156794052 container init b047fea4b386b34b566823105b23305ad4a4453c0b44b4b5d847fe94418227e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 09:33:11 np0005473739 podman[99198]: 2025-10-07 13:33:11.3390538 +0000 UTC m=+0.165108173 container start b047fea4b386b34b566823105b23305ad4a4453c0b44b4b5d847fe94418227e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:11 np0005473739 podman[99198]: 2025-10-07 13:33:11.342742514 +0000 UTC m=+0.168796877 container attach b047fea4b386b34b566823105b23305ad4a4453c0b44b4b5d847fe94418227e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:11 np0005473739 python3[99261]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   auth get client.openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:11 np0005473739 podman[99262]: 2025-10-07 13:33:11.966666606 +0000 UTC m=+0.053049501 container create 50d85a0af5e3379ec3e17ca20525e82e04af24e930f45eb92ad316f487736ca6 (image=quay.io/ceph/ceph:v18, name=priceless_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:12 np0005473739 systemd[1]: Started libpod-conmon-50d85a0af5e3379ec3e17ca20525e82e04af24e930f45eb92ad316f487736ca6.scope.
Oct  7 09:33:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:12 np0005473739 podman[99262]: 2025-10-07 13:33:11.946867803 +0000 UTC m=+0.033250798 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b57c68d2fc2663a491af03eb7dca1350f054a99739dd66e449c1c6a45066f5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27b57c68d2fc2663a491af03eb7dca1350f054a99739dd66e449c1c6a45066f5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:12 np0005473739 podman[99262]: 2025-10-07 13:33:12.06269232 +0000 UTC m=+0.149075265 container init 50d85a0af5e3379ec3e17ca20525e82e04af24e930f45eb92ad316f487736ca6 (image=quay.io/ceph/ceph:v18, name=priceless_galois, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 09:33:12 np0005473739 podman[99262]: 2025-10-07 13:33:12.076055311 +0000 UTC m=+0.162438216 container start 50d85a0af5e3379ec3e17ca20525e82e04af24e930f45eb92ad316f487736ca6 (image=quay.io/ceph/ceph:v18, name=priceless_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 09:33:12 np0005473739 podman[99262]: 2025-10-07 13:33:12.080099614 +0000 UTC m=+0.166482519 container attach 50d85a0af5e3379ec3e17ca20525e82e04af24e930f45eb92ad316f487736ca6 (image=quay.io/ceph/ceph:v18, name=priceless_galois, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:12 np0005473739 busy_wilson[99231]: {
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:    "0": [
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:        {
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "devices": [
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "/dev/loop3"
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            ],
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_name": "ceph_lv0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_size": "21470642176",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "name": "ceph_lv0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "tags": {
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.crush_device_class": "",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.encrypted": "0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.osd_id": "0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.type": "block",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.vdo": "0"
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            },
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "type": "block",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "vg_name": "ceph_vg0"
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:        }
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:    ],
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:    "1": [
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:        {
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "devices": [
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "/dev/loop4"
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            ],
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_name": "ceph_lv1",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_size": "21470642176",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "name": "ceph_lv1",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "tags": {
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.crush_device_class": "",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.encrypted": "0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.osd_id": "1",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.type": "block",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.vdo": "0"
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            },
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "type": "block",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "vg_name": "ceph_vg1"
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:        }
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:    ],
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:    "2": [
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:        {
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "devices": [
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "/dev/loop5"
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            ],
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_name": "ceph_lv2",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_size": "21470642176",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "name": "ceph_lv2",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "tags": {
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.crush_device_class": "",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.encrypted": "0",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.osd_id": "2",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.type": "block",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:                "ceph.vdo": "0"
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            },
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "type": "block",
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:            "vg_name": "ceph_vg2"
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:        }
Oct  7 09:33:12 np0005473739 busy_wilson[99231]:    ]
Oct  7 09:33:12 np0005473739 busy_wilson[99231]: }
Oct  7 09:33:12 np0005473739 systemd[1]: libpod-b047fea4b386b34b566823105b23305ad4a4453c0b44b4b5d847fe94418227e6.scope: Deactivated successfully.
Oct  7 09:33:12 np0005473739 podman[99198]: 2025-10-07 13:33:12.109622465 +0000 UTC m=+0.935676868 container died b047fea4b386b34b566823105b23305ad4a4453c0b44b4b5d847fe94418227e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-23a4495ab8fb3601c01611246d961e45e8ee5173c5e35e1acd0e4ef12f89e68e-merged.mount: Deactivated successfully.
Oct  7 09:33:12 np0005473739 podman[99198]: 2025-10-07 13:33:12.175485682 +0000 UTC m=+1.001540055 container remove b047fea4b386b34b566823105b23305ad4a4453c0b44b4b5d847fe94418227e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_wilson, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:33:12 np0005473739 systemd[1]: libpod-conmon-b047fea4b386b34b566823105b23305ad4a4453c0b44b4b5d847fe94418227e6.scope: Deactivated successfully.
Oct  7 09:33:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v78: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.openstack"} v 0) v1
Oct  7 09:33:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/1655691959' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  7 09:33:12 np0005473739 priceless_galois[99279]: [client.openstack]
Oct  7 09:33:12 np0005473739 priceless_galois[99279]: #011key = AQDbFeVoAAAAABAAnVB2Itvvr5jLKIPnWkeyPw==
Oct  7 09:33:12 np0005473739 priceless_galois[99279]: #011caps mgr = "allow *"
Oct  7 09:33:12 np0005473739 priceless_galois[99279]: #011caps mon = "profile rbd"
Oct  7 09:33:12 np0005473739 priceless_galois[99279]: #011caps osd = "profile rbd pool=vms, profile rbd pool=volumes, profile rbd pool=backups, profile rbd pool=images, profile rbd pool=cephfs.cephfs.meta, profile rbd pool=cephfs.cephfs.data"
Oct  7 09:33:12 np0005473739 systemd[1]: libpod-50d85a0af5e3379ec3e17ca20525e82e04af24e930f45eb92ad316f487736ca6.scope: Deactivated successfully.
Oct  7 09:33:12 np0005473739 podman[99262]: 2025-10-07 13:33:12.728793096 +0000 UTC m=+0.815176031 container died 50d85a0af5e3379ec3e17ca20525e82e04af24e930f45eb92ad316f487736ca6 (image=quay.io/ceph/ceph:v18, name=priceless_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-27b57c68d2fc2663a491af03eb7dca1350f054a99739dd66e449c1c6a45066f5-merged.mount: Deactivated successfully.
Oct  7 09:33:12 np0005473739 podman[99262]: 2025-10-07 13:33:12.782649828 +0000 UTC m=+0.869032763 container remove 50d85a0af5e3379ec3e17ca20525e82e04af24e930f45eb92ad316f487736ca6 (image=quay.io/ceph/ceph:v18, name=priceless_galois, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 09:33:12 np0005473739 systemd[1]: libpod-conmon-50d85a0af5e3379ec3e17ca20525e82e04af24e930f45eb92ad316f487736ca6.scope: Deactivated successfully.
Oct  7 09:33:12 np0005473739 podman[99470]: 2025-10-07 13:33:12.863329811 +0000 UTC m=+0.043145910 container create 00aeeb06cc6eb7377936fc04d6155003febb284949ad07eb34cc92db347446d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 09:33:12 np0005473739 systemd[1]: Started libpod-conmon-00aeeb06cc6eb7377936fc04d6155003febb284949ad07eb34cc92db347446d3.scope.
Oct  7 09:33:12 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/1655691959' entity='client.admin' cmd=[{"prefix": "auth get", "entity": "client.openstack"}]: dispatch
Oct  7 09:33:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:12 np0005473739 podman[99470]: 2025-10-07 13:33:12.84209529 +0000 UTC m=+0.021911429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:12 np0005473739 podman[99470]: 2025-10-07 13:33:12.942212468 +0000 UTC m=+0.122028557 container init 00aeeb06cc6eb7377936fc04d6155003febb284949ad07eb34cc92db347446d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:12 np0005473739 podman[99470]: 2025-10-07 13:33:12.949596187 +0000 UTC m=+0.129412276 container start 00aeeb06cc6eb7377936fc04d6155003febb284949ad07eb34cc92db347446d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:12 np0005473739 podman[99470]: 2025-10-07 13:33:12.952439319 +0000 UTC m=+0.132255408 container attach 00aeeb06cc6eb7377936fc04d6155003febb284949ad07eb34cc92db347446d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:12 np0005473739 ecstatic_greider[99486]: 167 167
Oct  7 09:33:12 np0005473739 systemd[1]: libpod-00aeeb06cc6eb7377936fc04d6155003febb284949ad07eb34cc92db347446d3.scope: Deactivated successfully.
Oct  7 09:33:12 np0005473739 podman[99470]: 2025-10-07 13:33:12.954602004 +0000 UTC m=+0.134418093 container died 00aeeb06cc6eb7377936fc04d6155003febb284949ad07eb34cc92db347446d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-bbdab4528adc1e2bb3891fbed88f33e53cbb1430c32effbc2a38d4799be44936-merged.mount: Deactivated successfully.
Oct  7 09:33:12 np0005473739 podman[99470]: 2025-10-07 13:33:12.989066101 +0000 UTC m=+0.168882190 container remove 00aeeb06cc6eb7377936fc04d6155003febb284949ad07eb34cc92db347446d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_greider, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:13 np0005473739 systemd[1]: libpod-conmon-00aeeb06cc6eb7377936fc04d6155003febb284949ad07eb34cc92db347446d3.scope: Deactivated successfully.
Oct  7 09:33:13 np0005473739 podman[99510]: 2025-10-07 13:33:13.210261232 +0000 UTC m=+0.058967382 container create 88428b5b1f3b59041d9c7273bf4009c025775340a020eb7a7a79a7fe07ccfba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:13 np0005473739 systemd[1]: Started libpod-conmon-88428b5b1f3b59041d9c7273bf4009c025775340a020eb7a7a79a7fe07ccfba4.scope.
Oct  7 09:33:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a174f9912de992452943256af34383c0da37178e54a771fd3fe264c1d1cf3962/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a174f9912de992452943256af34383c0da37178e54a771fd3fe264c1d1cf3962/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a174f9912de992452943256af34383c0da37178e54a771fd3fe264c1d1cf3962/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a174f9912de992452943256af34383c0da37178e54a771fd3fe264c1d1cf3962/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:13 np0005473739 podman[99510]: 2025-10-07 13:33:13.190860758 +0000 UTC m=+0.039566918 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:13 np0005473739 podman[99510]: 2025-10-07 13:33:13.294082726 +0000 UTC m=+0.142788896 container init 88428b5b1f3b59041d9c7273bf4009c025775340a020eb7a7a79a7fe07ccfba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brahmagupta, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:13 np0005473739 podman[99510]: 2025-10-07 13:33:13.300606431 +0000 UTC m=+0.149312571 container start 88428b5b1f3b59041d9c7273bf4009c025775340a020eb7a7a79a7fe07ccfba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brahmagupta, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:13 np0005473739 podman[99510]: 2025-10-07 13:33:13.304175383 +0000 UTC m=+0.152881523 container attach 88428b5b1f3b59041d9c7273bf4009c025775340a020eb7a7a79a7fe07ccfba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brahmagupta, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:14 np0005473739 systemd[1]: libpod-88428b5b1f3b59041d9c7273bf4009c025775340a020eb7a7a79a7fe07ccfba4.scope: Deactivated successfully.
Oct  7 09:33:14 np0005473739 systemd[1]: libpod-88428b5b1f3b59041d9c7273bf4009c025775340a020eb7a7a79a7fe07ccfba4.scope: Consumed 1.012s CPU time.
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]: {
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:14 np0005473739 conmon[99527]: conmon 88428b5b1f3b59041d9c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88428b5b1f3b59041d9c7273bf4009c025775340a020eb7a7a79a7fe07ccfba4.scope/container/memory.events
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "osd_id": 2,
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "type": "bluestore"
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:    },
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "osd_id": 1,
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "type": "bluestore"
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:    },
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "osd_id": 0,
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:        "type": "bluestore"
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]:    }
Oct  7 09:33:14 np0005473739 adoring_brahmagupta[99527]: }
Oct  7 09:33:14 np0005473739 podman[99510]: 2025-10-07 13:33:14.330956569 +0000 UTC m=+1.179662709 container died 88428b5b1f3b59041d9c7273bf4009c025775340a020eb7a7a79a7fe07ccfba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brahmagupta, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:33:14 np0005473739 ansible-async_wrapper.py[99709]: Invoked with j711923821958 30 /home/zuul/.ansible/tmp/ansible-tmp-1759843993.981797-33375-183885736819648/AnsiballZ_command.py _
Oct  7 09:33:14 np0005473739 ansible-async_wrapper.py[99724]: Starting module and watcher
Oct  7 09:33:14 np0005473739 ansible-async_wrapper.py[99724]: Start watching 99725 (30)
Oct  7 09:33:14 np0005473739 ansible-async_wrapper.py[99725]: Start module (99725)
Oct  7 09:33:14 np0005473739 ansible-async_wrapper.py[99709]: Return async_wrapper task started.
Oct  7 09:33:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v79: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a174f9912de992452943256af34383c0da37178e54a771fd3fe264c1d1cf3962-merged.mount: Deactivated successfully.
Oct  7 09:33:14 np0005473739 podman[99510]: 2025-10-07 13:33:14.590038724 +0000 UTC m=+1.438744864 container remove 88428b5b1f3b59041d9c7273bf4009c025775340a020eb7a7a79a7fe07ccfba4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_brahmagupta, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:33:14 np0005473739 systemd[1]: libpod-conmon-88428b5b1f3b59041d9c7273bf4009c025775340a020eb7a7a79a7fe07ccfba4.scope: Deactivated successfully.
Oct  7 09:33:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:33:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:33:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:14 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev 7cd7a6f1-be67-47d3-8593-36bf4969fb18 (Updating rgw.rgw deployment (+1 -> 1))
Oct  7 09:33:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gmvmfz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]} v 0) v1
Oct  7 09:33:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gmvmfz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  7 09:33:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gmvmfz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  7 09:33:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=rgw_frontends}] v 0) v1
Oct  7 09:33:14 np0005473739 python3[99726]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:33:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:33:14 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Deploying daemon rgw.rgw.compute-0.gmvmfz on compute-0
Oct  7 09:33:14 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Deploying daemon rgw.rgw.compute-0.gmvmfz on compute-0
Oct  7 09:33:14 np0005473739 podman[99727]: 2025-10-07 13:33:14.773219617 +0000 UTC m=+0.083719222 container create c7930d8ca1d52c1e229d808af909bce3ad25d452e8953c1419abdeec21af6fab (image=quay.io/ceph/ceph:v18, name=dazzling_hypatia, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:33:14 np0005473739 podman[99727]: 2025-10-07 13:33:14.71831534 +0000 UTC m=+0.028814945 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:14 np0005473739 systemd[1]: Started libpod-conmon-c7930d8ca1d52c1e229d808af909bce3ad25d452e8953c1419abdeec21af6fab.scope.
Oct  7 09:33:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa61a557471392fea98d76349462edea547051c1b8efd59d51c54da67b6dde4f/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa61a557471392fea98d76349462edea547051c1b8efd59d51c54da67b6dde4f/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:14 np0005473739 podman[99727]: 2025-10-07 13:33:14.879183635 +0000 UTC m=+0.189683250 container init c7930d8ca1d52c1e229d808af909bce3ad25d452e8953c1419abdeec21af6fab (image=quay.io/ceph/ceph:v18, name=dazzling_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:14 np0005473739 podman[99727]: 2025-10-07 13:33:14.891086627 +0000 UTC m=+0.201586222 container start c7930d8ca1d52c1e229d808af909bce3ad25d452e8953c1419abdeec21af6fab (image=quay.io/ceph/ceph:v18, name=dazzling_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:33:14 np0005473739 podman[99727]: 2025-10-07 13:33:14.895346996 +0000 UTC m=+0.205846581 container attach c7930d8ca1d52c1e229d808af909bce3ad25d452e8953c1419abdeec21af6fab (image=quay.io/ceph/ceph:v18, name=dazzling_hypatia, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e32 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:15 np0005473739 podman[99906]: 2025-10-07 13:33:15.379296465 +0000 UTC m=+0.039389674 container create c3cf2648778fbbab402dfbc199c2c924d7b027ed87fc387eab38f4b7a6b42b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 09:33:15 np0005473739 systemd[1]: Started libpod-conmon-c3cf2648778fbbab402dfbc199c2c924d7b027ed87fc387eab38f4b7a6b42b4f.scope.
Oct  7 09:33:15 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14258 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  7 09:33:15 np0005473739 dazzling_hypatia[99766]: 
Oct  7 09:33:15 np0005473739 dazzling_hypatia[99766]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  7 09:33:15 np0005473739 podman[99727]: 2025-10-07 13:33:15.447544782 +0000 UTC m=+0.758044367 container died c7930d8ca1d52c1e229d808af909bce3ad25d452e8953c1419abdeec21af6fab (image=quay.io/ceph/ceph:v18, name=dazzling_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:15 np0005473739 systemd[1]: libpod-c7930d8ca1d52c1e229d808af909bce3ad25d452e8953c1419abdeec21af6fab.scope: Deactivated successfully.
Oct  7 09:33:15 np0005473739 podman[99906]: 2025-10-07 13:33:15.362173289 +0000 UTC m=+0.022266498 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:15 np0005473739 podman[99906]: 2025-10-07 13:33:15.465802557 +0000 UTC m=+0.125895806 container init c3cf2648778fbbab402dfbc199c2c924d7b027ed87fc387eab38f4b7a6b42b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:15 np0005473739 podman[99906]: 2025-10-07 13:33:15.472041146 +0000 UTC m=+0.132134355 container start c3cf2648778fbbab402dfbc199c2c924d7b027ed87fc387eab38f4b7a6b42b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:33:15 np0005473739 xenodochial_nobel[99921]: 167 167
Oct  7 09:33:15 np0005473739 systemd[1]: libpod-c3cf2648778fbbab402dfbc199c2c924d7b027ed87fc387eab38f4b7a6b42b4f.scope: Deactivated successfully.
Oct  7 09:33:15 np0005473739 podman[99906]: 2025-10-07 13:33:15.482667046 +0000 UTC m=+0.142760285 container attach c3cf2648778fbbab402dfbc199c2c924d7b027ed87fc387eab38f4b7a6b42b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 09:33:15 np0005473739 podman[99906]: 2025-10-07 13:33:15.483129698 +0000 UTC m=+0.143222907 container died c3cf2648778fbbab402dfbc199c2c924d7b027ed87fc387eab38f4b7a6b42b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:33:15 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1d6603610cfcd569dc0eb880f892d41b2ece305d52a867ed2fc5a5b25919684d-merged.mount: Deactivated successfully.
Oct  7 09:33:15 np0005473739 systemd[1]: var-lib-containers-storage-overlay-aa61a557471392fea98d76349462edea547051c1b8efd59d51c54da67b6dde4f-merged.mount: Deactivated successfully.
Oct  7 09:33:15 np0005473739 podman[99727]: 2025-10-07 13:33:15.566246344 +0000 UTC m=+0.876745929 container remove c7930d8ca1d52c1e229d808af909bce3ad25d452e8953c1419abdeec21af6fab (image=quay.io/ceph/ceph:v18, name=dazzling_hypatia, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:33:15 np0005473739 systemd[1]: libpod-conmon-c7930d8ca1d52c1e229d808af909bce3ad25d452e8953c1419abdeec21af6fab.scope: Deactivated successfully.
Oct  7 09:33:15 np0005473739 ansible-async_wrapper.py[99725]: Module complete (99725)
Oct  7 09:33:15 np0005473739 podman[99906]: 2025-10-07 13:33:15.589960028 +0000 UTC m=+0.250053237 container remove c3cf2648778fbbab402dfbc199c2c924d7b027ed87fc387eab38f4b7a6b42b4f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_nobel, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:15 np0005473739 systemd[1]: libpod-conmon-c3cf2648778fbbab402dfbc199c2c924d7b027ed87fc387eab38f4b7a6b42b4f.scope: Deactivated successfully.
Oct  7 09:33:15 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:15 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:15 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gmvmfz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]: dispatch
Oct  7 09:33:15 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "auth get-or-create", "entity": "client.rgw.rgw.compute-0.gmvmfz", "caps": ["mon", "allow *", "mgr", "allow rw", "osd", "allow rwx tag rgw *=*"]}]': finished
Oct  7 09:33:15 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:15 np0005473739 ceph-mon[74295]: Deploying daemon rgw.rgw.compute-0.gmvmfz on compute-0
Oct  7 09:33:15 np0005473739 systemd[1]: Reloading.
Oct  7 09:33:15 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:33:15 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:33:15 np0005473739 systemd[1]: Reloading.
Oct  7 09:33:16 np0005473739 python3[100042]: ansible-ansible.legacy.async_status Invoked with jid=j711923821958.99709 mode=status _async_dir=/root/.ansible_async
Oct  7 09:33:16 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:33:16 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:33:16 np0005473739 systemd[1]: Starting Ceph rgw.rgw.compute-0.gmvmfz for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:33:16 np0005473739 python3[100130]: ansible-ansible.legacy.async_status Invoked with jid=j711923821958.99709 mode=cleanup _async_dir=/root/.ansible_async
Oct  7 09:33:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v80: 7 pgs: 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:16 np0005473739 podman[100179]: 2025-10-07 13:33:16.573818211 +0000 UTC m=+0.055564704 container create 40f1209fe4ff8c095b0882874e9c78e1d91c3b64b7b8f4a4a0909b6ca21bcc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-rgw-rgw-compute-0-gmvmfz, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9fd193ad56a596091122882014919bafe24705ce029227f2484217d10bd4f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9fd193ad56a596091122882014919bafe24705ce029227f2484217d10bd4f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9fd193ad56a596091122882014919bafe24705ce029227f2484217d10bd4f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a9fd193ad56a596091122882014919bafe24705ce029227f2484217d10bd4f8/merged/var/lib/ceph/radosgw/ceph-rgw.rgw.compute-0.gmvmfz supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:16 np0005473739 podman[100179]: 2025-10-07 13:33:16.638308383 +0000 UTC m=+0.120054866 container init 40f1209fe4ff8c095b0882874e9c78e1d91c3b64b7b8f4a4a0909b6ca21bcc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-rgw-rgw-compute-0-gmvmfz, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:16 np0005473739 podman[100179]: 2025-10-07 13:33:16.546441285 +0000 UTC m=+0.028187858 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:16 np0005473739 podman[100179]: 2025-10-07 13:33:16.651474469 +0000 UTC m=+0.133220992 container start 40f1209fe4ff8c095b0882874e9c78e1d91c3b64b7b8f4a4a0909b6ca21bcc6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-rgw-rgw-compute-0-gmvmfz, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 09:33:16 np0005473739 bash[100179]: 40f1209fe4ff8c095b0882874e9c78e1d91c3b64b7b8f4a4a0909b6ca21bcc6d
Oct  7 09:33:16 np0005473739 systemd[1]: Started Ceph rgw.rgw.compute-0.gmvmfz for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:33:16 np0005473739 radosgw[100199]: deferred set uid:gid to 167:167 (ceph:ceph)
Oct  7 09:33:16 np0005473739 radosgw[100199]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process radosgw, pid 2
Oct  7 09:33:16 np0005473739 radosgw[100199]: framework: beast
Oct  7 09:33:16 np0005473739 radosgw[100199]: framework conf key: endpoint, val: 192.168.122.100:8082
Oct  7 09:33:16 np0005473739 radosgw[100199]: init_numa not setting numa affinity
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:16 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev 7cd7a6f1-be67-47d3-8593-36bf4969fb18 (Updating rgw.rgw deployment (+1 -> 1))
Oct  7 09:33:16 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 7cd7a6f1-be67-47d3-8593-36bf4969fb18 (Updating rgw.rgw deployment (+1 -> 1)) in 2 seconds
Oct  7 09:33:16 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.services.cephadmservice] Saving service rgw.rgw spec with placement compute-0
Oct  7 09:33:16 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Saving service rgw.rgw spec with placement compute-0
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.rgw.rgw}] v 0) v1
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:16 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev 84c24084-d0d9-4b19-8946-d3e4a096b3ef (Updating mds.cephfs deployment (+1 -> 1))
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xpofvx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]} v 0) v1
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xpofvx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xpofvx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:33:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:33:16 np0005473739 ceph-mgr[74587]: [cephadm INFO cephadm.serve] Deploying daemon mds.cephfs.compute-0.xpofvx on compute-0
Oct  7 09:33:16 np0005473739 ceph-mgr[74587]: log_channel(cephadm) log [INF] : Deploying daemon mds.cephfs.compute-0.xpofvx on compute-0
Oct  7 09:33:16 np0005473739 python3[100286]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch status --format json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:16 np0005473739 podman[100312]: 2025-10-07 13:33:16.98032114 +0000 UTC m=+0.054015647 container create 2e556babe08f92819f79854acc19d9aa6dc777c6f39b3fe181724e80110b4f7d (image=quay.io/ceph/ceph:v18, name=sharp_darwin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:33:17 np0005473739 systemd[1]: Started libpod-conmon-2e556babe08f92819f79854acc19d9aa6dc777c6f39b3fe181724e80110b4f7d.scope.
Oct  7 09:33:17 np0005473739 podman[100312]: 2025-10-07 13:33:16.956453802 +0000 UTC m=+0.030148389 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5938667c011b1f260cfbaaeb9a94dba8cfe0c713156348909b2da2bffe580a5/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5938667c011b1f260cfbaaeb9a94dba8cfe0c713156348909b2da2bffe580a5/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:17 np0005473739 podman[100312]: 2025-10-07 13:33:17.099515804 +0000 UTC m=+0.173210331 container init 2e556babe08f92819f79854acc19d9aa6dc777c6f39b3fe181724e80110b4f7d (image=quay.io/ceph/ceph:v18, name=sharp_darwin, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:17 np0005473739 podman[100312]: 2025-10-07 13:33:17.11001331 +0000 UTC m=+0.183707847 container start 2e556babe08f92819f79854acc19d9aa6dc777c6f39b3fe181724e80110b4f7d (image=quay.io/ceph/ceph:v18, name=sharp_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:17 np0005473739 podman[100312]: 2025-10-07 13:33:17.117434749 +0000 UTC m=+0.191129256 container attach 2e556babe08f92819f79854acc19d9aa6dc777c6f39b3fe181724e80110b4f7d (image=quay.io/ceph/ceph:v18, name=sharp_darwin, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 09:33:17 np0005473739 podman[100467]: 2025-10-07 13:33:17.534813164 +0000 UTC m=+0.059338992 container create 0b90b33f791e87804d91c5b8d956af640da9dc756bc2d2928b74c4c47651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:17 np0005473739 systemd[1]: Started libpod-conmon-0b90b33f791e87804d91c5b8d956af640da9dc756bc2d2928b74c4c47651fcd5.scope.
Oct  7 09:33:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:17 np0005473739 podman[100467]: 2025-10-07 13:33:17.598495885 +0000 UTC m=+0.123021763 container init 0b90b33f791e87804d91c5b8d956af640da9dc756bc2d2928b74c4c47651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:33:17 np0005473739 ceph-mgr[74587]: [progress INFO root] Writing back 4 completed events
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  7 09:33:17 np0005473739 podman[100467]: 2025-10-07 13:33:17.604914579 +0000 UTC m=+0.129440407 container start 0b90b33f791e87804d91c5b8d956af640da9dc756bc2d2928b74c4c47651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:17 np0005473739 podman[100467]: 2025-10-07 13:33:17.511588783 +0000 UTC m=+0.036114611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:17 np0005473739 practical_pare[100483]: 167 167
Oct  7 09:33:17 np0005473739 systemd[1]: libpod-0b90b33f791e87804d91c5b8d956af640da9dc756bc2d2928b74c4c47651fcd5.scope: Deactivated successfully.
Oct  7 09:33:17 np0005473739 conmon[100483]: conmon 0b90b33f791e87804d91 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0b90b33f791e87804d91c5b8d956af640da9dc756bc2d2928b74c4c47651fcd5.scope/container/memory.events
Oct  7 09:33:17 np0005473739 podman[100467]: 2025-10-07 13:33:17.608798177 +0000 UTC m=+0.133324085 container attach 0b90b33f791e87804d91c5b8d956af640da9dc756bc2d2928b74c4c47651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:17 np0005473739 podman[100467]: 2025-10-07 13:33:17.613344673 +0000 UTC m=+0.137870491 container died 0b90b33f791e87804d91c5b8d956af640da9dc756bc2d2928b74c4c47651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8a1419ccbb6e192610c8856f663878a6e2147efe8bfa013f2848dab966568046-merged.mount: Deactivated successfully.
Oct  7 09:33:17 np0005473739 podman[100467]: 2025-10-07 13:33:17.654112191 +0000 UTC m=+0.178638039 container remove 0b90b33f791e87804d91c5b8d956af640da9dc756bc2d2928b74c4c47651fcd5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_pare, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 09:33:17 np0005473739 systemd[1]: libpod-conmon-0b90b33f791e87804d91c5b8d956af640da9dc756bc2d2928b74c4c47651fcd5.scope: Deactivated successfully.
Oct  7 09:33:17 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14263 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  7 09:33:17 np0005473739 sharp_darwin[100361]: 
Oct  7 09:33:17 np0005473739 sharp_darwin[100361]: {"available": true, "backend": "cephadm", "paused": false, "workers": 10}
Oct  7 09:33:17 np0005473739 systemd[1]: Reloading.
Oct  7 09:33:17 np0005473739 podman[100312]: 2025-10-07 13:33:17.704624177 +0000 UTC m=+0.778318704 container died 2e556babe08f92819f79854acc19d9aa6dc777c6f39b3fe181724e80110b4f7d (image=quay.io/ceph/ceph:v18, name=sharp_darwin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e32 do_prune osdmap full prune enabled
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e33 e33: 3 total, 3 up, 3 in
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e33: 3 total, 3 up, 3 in
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"} v 0) v1
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: Saving service rgw.rgw spec with placement compute-0
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xpofvx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]: dispatch
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "auth get-or-create", "entity": "mds.cephfs.compute-0.xpofvx", "caps": ["mon", "profile mds", "osd", "allow rw tag cephfs *=*", "mds", "allow"]}]': finished
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: Deploying daemon mds.cephfs.compute-0.xpofvx on compute-0
Oct  7 09:33:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:17 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:33:17 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:33:17 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 33 pg[8.0( empty local-lis/les=0/0 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:17 np0005473739 systemd[1]: libpod-2e556babe08f92819f79854acc19d9aa6dc777c6f39b3fe181724e80110b4f7d.scope: Deactivated successfully.
Oct  7 09:33:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f5938667c011b1f260cfbaaeb9a94dba8cfe0c713156348909b2da2bffe580a5-merged.mount: Deactivated successfully.
Oct  7 09:33:18 np0005473739 podman[100312]: 2025-10-07 13:33:18.015128511 +0000 UTC m=+1.088823038 container remove 2e556babe08f92819f79854acc19d9aa6dc777c6f39b3fe181724e80110b4f7d (image=quay.io/ceph/ceph:v18, name=sharp_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:33:18 np0005473739 systemd[1]: libpod-conmon-2e556babe08f92819f79854acc19d9aa6dc777c6f39b3fe181724e80110b4f7d.scope: Deactivated successfully.
Oct  7 09:33:18 np0005473739 systemd[1]: Reloading.
Oct  7 09:33:18 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:33:18 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:33:18 np0005473739 systemd[1]: Starting Ceph mds.cephfs.compute-0.xpofvx for 82044f27-a8da-5b2a-a297-ff6afc620e1f...
Oct  7 09:33:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v82: 8 pgs: 1 creating+peering, 7 active+clean; 449 KiB data, 80 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:18 np0005473739 podman[100641]: 2025-10-07 13:33:18.670239667 +0000 UTC m=+0.054322424 container create 3fccc0f6f003853ca9bd90cf1dcec46107a299a77843c1773ac177704334cf36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mds-cephfs-compute-0-xpofvx, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e30d5f013393c71c39634b300456122778cd3c8ceedd2758a8a1585f3b6d3ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e30d5f013393c71c39634b300456122778cd3c8ceedd2758a8a1585f3b6d3ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e30d5f013393c71c39634b300456122778cd3c8ceedd2758a8a1585f3b6d3ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1e30d5f013393c71c39634b300456122778cd3c8ceedd2758a8a1585f3b6d3ad/merged/var/lib/ceph/mds/ceph-cephfs.compute-0.xpofvx supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e33 do_prune osdmap full prune enabled
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e34 e34: 3 total, 3 up, 3 in
Oct  7 09:33:18 np0005473739 podman[100641]: 2025-10-07 13:33:18.745330018 +0000 UTC m=+0.129412785 container init 3fccc0f6f003853ca9bd90cf1dcec46107a299a77843c1773ac177704334cf36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mds-cephfs-compute-0-xpofvx, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 09:33:18 np0005473739 podman[100641]: 2025-10-07 13:33:18.651242173 +0000 UTC m=+0.035324940 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e34: 3 total, 3 up, 3 in
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: log_channel(cluster) log [WRN] : Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:33:18 np0005473739 podman[100641]: 2025-10-07 13:33:18.752533861 +0000 UTC m=+0.136616608 container start 3fccc0f6f003853ca9bd90cf1dcec46107a299a77843c1773ac177704334cf36 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mds-cephfs-compute-0-xpofvx, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd=[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]: dispatch
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd='[{"prefix": "osd pool application enable","pool": ".rgw.root","app": "rgw"}]': finished
Oct  7 09:33:18 np0005473739 bash[100641]: 3fccc0f6f003853ca9bd90cf1dcec46107a299a77843c1773ac177704334cf36
Oct  7 09:33:18 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 34 pg[8.0( empty local-lis/les=33/34 n=0 ec=33/33 lis/c=0/0 les/c/f=0/0/0 sis=33) [1] r=0 lpr=33 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:18 np0005473739 systemd[1]: Started Ceph mds.cephfs.compute-0.xpofvx for 82044f27-a8da-5b2a-a297-ff6afc620e1f.
Oct  7 09:33:18 np0005473739 ceph-mds[100686]: set uid:gid to 167:167 (ceph:ceph)
Oct  7 09:33:18 np0005473739 ceph-mds[100686]: ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable), process ceph-mds, pid 2
Oct  7 09:33:18 np0005473739 ceph-mds[100686]: main not setting numa affinity
Oct  7 09:33:18 np0005473739 ceph-mds[100686]: pidfile_write: ignore empty --pid-file
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:33:18 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mds-cephfs-compute-0-xpofvx[100681]: starting mds.cephfs.compute-0.xpofvx at 
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:18 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx Updating MDS map to version 2 from mon.0
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:18 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev 84c24084-d0d9-4b19-8946-d3e4a096b3ef (Updating mds.cephfs deployment (+1 -> 1))
Oct  7 09:33:18 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 84c24084-d0d9-4b19-8946-d3e4a096b3ef (Updating mds.cephfs deployment (+1 -> 1)) in 2 seconds
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config set, name=mds_join_fs}] v 0) v1
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/spec.mds.cephfs}] v 0) v1
Oct  7 09:33:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:18 np0005473739 python3[100684]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ls --export -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:18 np0005473739 podman[100710]: 2025-10-07 13:33:18.935727094 +0000 UTC m=+0.047326496 container create 7f9d35c60aa236c90f2ef45edfd233d5dd7ae26302094bc76209d7f9addeb6f9 (image=quay.io/ceph/ceph:v18, name=vigilant_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 09:33:18 np0005473739 systemd[1]: Started libpod-conmon-7f9d35c60aa236c90f2ef45edfd233d5dd7ae26302094bc76209d7f9addeb6f9.scope.
Oct  7 09:33:19 np0005473739 podman[100710]: 2025-10-07 13:33:18.918720411 +0000 UTC m=+0.030319833 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:19 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a1c5c347ff22b28794bd56aa06dd2d8da1dc05409915f737ffbee348276669/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d6a1c5c347ff22b28794bd56aa06dd2d8da1dc05409915f737ffbee348276669/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:19 np0005473739 podman[100710]: 2025-10-07 13:33:19.109219781 +0000 UTC m=+0.220819203 container init 7f9d35c60aa236c90f2ef45edfd233d5dd7ae26302094bc76209d7f9addeb6f9 (image=quay.io/ceph/ceph:v18, name=vigilant_ellis, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:19 np0005473739 podman[100710]: 2025-10-07 13:33:19.117885011 +0000 UTC m=+0.229484413 container start 7f9d35c60aa236c90f2ef45edfd233d5dd7ae26302094bc76209d7f9addeb6f9 (image=quay.io/ceph/ceph:v18, name=vigilant_ellis, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:19 np0005473739 podman[100710]: 2025-10-07 13:33:19.12135702 +0000 UTC m=+0.232956442 container attach 7f9d35c60aa236c90f2ef45edfd233d5dd7ae26302094bc76209d7f9addeb6f9 (image=quay.io/ceph/ceph:v18, name=vigilant_ellis, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:19 np0005473739 ansible-async_wrapper.py[99724]: Done in kid B.
Oct  7 09:33:19 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14267 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  7 09:33:19 np0005473739 vigilant_ellis[100771]: 
Oct  7 09:33:19 np0005473739 vigilant_ellis[100771]: [{"placement": {"host_pattern": "*"}, "service_name": "crash", "service_type": "crash"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "cephfs", "service_name": "mds.cephfs", "service_type": "mds"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mgr", "service_type": "mgr"}, {"placement": {"hosts": ["compute-0"]}, "service_name": "mon", "service_type": "mon"}, {"placement": {"hosts": ["compute-0"]}, "service_id": "default_drive_group", "service_name": "osd.default_drive_group", "service_type": "osd", "spec": {"data_devices": {"paths": ["/dev/ceph_vg0/ceph_lv0", "/dev/ceph_vg1/ceph_lv1", "/dev/ceph_vg2/ceph_lv2"]}, "filter_logic": "AND", "objectstore": "bluestore"}}, {"networks": ["192.168.122.0/24"], "placement": {"hosts": ["compute-0"]}, "service_id": "rgw", "service_name": "rgw.rgw", "service_type": "rgw", "spec": {"rgw_frontend_port": 8082}}]
Oct  7 09:33:19 np0005473739 systemd[1]: libpod-7f9d35c60aa236c90f2ef45edfd233d5dd7ae26302094bc76209d7f9addeb6f9.scope: Deactivated successfully.
Oct  7 09:33:19 np0005473739 podman[100710]: 2025-10-07 13:33:19.644493046 +0000 UTC m=+0.756092448 container died 7f9d35c60aa236c90f2ef45edfd233d5dd7ae26302094bc76209d7f9addeb6f9 (image=quay.io/ceph/ceph:v18, name=vigilant_ellis, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 09:33:19 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d6a1c5c347ff22b28794bd56aa06dd2d8da1dc05409915f737ffbee348276669-merged.mount: Deactivated successfully.
Oct  7 09:33:19 np0005473739 podman[100710]: 2025-10-07 13:33:19.71143297 +0000 UTC m=+0.823032372 container remove 7f9d35c60aa236c90f2ef45edfd233d5dd7ae26302094bc76209d7f9addeb6f9 (image=quay.io/ceph/ceph:v18, name=vigilant_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:19 np0005473739 systemd[1]: libpod-conmon-7f9d35c60aa236c90f2ef45edfd233d5dd7ae26302094bc76209d7f9addeb6f9.scope: Deactivated successfully.
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e34 do_prune osdmap full prune enabled
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e35 e35: 3 total, 3 up, 3 in
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e35: 3 total, 3 up, 3 in
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"} v 0) v1
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e3 new map
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e3 print_map#012e3#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0112#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-07T13:33:04.466197+0000#012modified#0112025-10-07T13:33:04.466246+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#011#012up#011{}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012 #012 #012Standby daemons:#012 #012[mds.cephfs.compute-0.xpofvx{-1:14265} state up:standby seq 1 addr [v2:192.168.122.100:6814/2462520944,v1:192.168.122.100:6815/2462520944] compat {c=[1],r=[1],i=[7ff]}]
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: Health check failed: 1 pool(s) do not have an application enabled (POOL_APP_NOT_ENABLED)
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx Updating MDS map to version 3 from mon.0
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx Monitors have assigned me to become a standby.
Oct  7 09:33:19 np0005473739 podman[100981]: 2025-10-07 13:33:19.928732871 +0000 UTC m=+0.159041959 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2462520944,v1:192.168.122.100:6815/2462520944] up:boot
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e3 assigned standby [v2:192.168.122.100:6814/2462520944,v1:192.168.122.100:6815/2462520944] as mds.0
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.xpofvx assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : fsmap cephfs:0 1 up:standby
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mds metadata", "who": "cephfs.compute-0.xpofvx"} v 0) v1
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "mds metadata", "who": "cephfs.compute-0.xpofvx"}]: dispatch
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e3 all = 0
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e4 new map
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e4 print_map#012e4#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0114#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-07T13:33:04.466197+0000#012modified#0112025-10-07T13:33:19.937600+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14265}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.xpofvx{0:14265} state up:creating seq 1 addr [v2:192.168.122.100:6814/2462520944,v1:192.168.122.100:6815/2462520944] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx Updating MDS map to version 4 from mon.0
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.4 handle_mds_map i am now mds.0.4
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.4 handle_mds_map state change up:standby --> up:creating
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x1
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x100
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x600
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x601
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x602
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x603
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x604
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x605
Oct  7 09:33:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.xpofvx=up:creating}
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x606
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x607
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x608
Oct  7 09:33:19 np0005473739 ceph-mds[100686]: mds.0.cache creating system inode with ino:0x609
Oct  7 09:33:20 np0005473739 podman[100981]: 2025-10-07 13:33:20.025281989 +0000 UTC m=+0.255591047 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:20 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 35 pg[9.0( empty local-lis/les=0/0 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:20 np0005473739 ceph-mds[100686]: mds.0.4 creating_done
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : daemon mds.cephfs.compute-0.xpofvx is now active in filesystem cephfs as rank 0
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e35 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v85: 9 pgs: 1 unknown, 1 creating+peering, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s wr, 7 op/s
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f97f54ac-c26f-47f0-a020-8dddebfbfa13 does not exist
Oct  7 09:33:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e0baa2ec-f3be-4b95-aee8-2a8a3de8db02 does not exist
Oct  7 09:33:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 6cb30150-7825-4b66-a5a2-19186745f4ab does not exist
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e35 do_prune osdmap full prune enabled
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e36 e36: 3 total, 3 up, 3 in
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e36: 3 total, 3 up, 3 in
Oct  7 09:33:20 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 36 pg[9.0( empty local-lis/les=35/36 n=0 ec=35/35 lis/c=0/0 les/c/f=0/0/0 sis=35) [1] r=0 lpr=35 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]: dispatch
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: daemon mds.cephfs.compute-0.xpofvx assigned to filesystem cephfs as rank 0 (now has 1 ranks)
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: Health check cleared: MDS_ALL_DOWN (was: 1 filesystem is offline)
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: Health check cleared: MDS_UP_LESS_THAN_MAX (was: 1 filesystem is online with fewer MDS than max_mds)
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: daemon mds.cephfs.compute-0.xpofvx is now active in filesystem cephfs as rank 0
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.log","app": "rgw"}]': finished
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e5 new map
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).mds e5 print_map#012e5#012enable_multiple, ever_enabled_multiple: 1,1#012default compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}#012legacy client fscid: 1#012 #012Filesystem 'cephfs' (1)#012fs_name#011cephfs#012epoch#0115#012flags#01112 joinable allow_snaps allow_multimds_snaps#012created#0112025-10-07T13:33:04.466197+0000#012modified#0112025-10-07T13:33:20.970855+0000#012tableserver#0110#012root#0110#012session_timeout#01160#012session_autoclose#011300#012max_file_size#0111099511627776#012max_xattr_size#01165536#012required_client_features#011{}#012last_failure#0110#012last_failure_osd_epoch#0110#012compat#011compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,7=mds uses inline data,8=no anchor table,9=file layout v2,10=snaprealm v2}#012max_mds#0111#012in#0110#012up#011{0=14265}#012failed#011#012damaged#011#012stopped#011#012data_pools#011[7]#012metadata_pool#0116#012inline_data#011disabled#012balancer#011#012bal_rank_mask#011-1#012standby_count_wanted#0110#012[mds.cephfs.compute-0.xpofvx{0:14265} state up:active seq 2 join_fscid=1 addr [v2:192.168.122.100:6814/2462520944,v1:192.168.122.100:6815/2462520944] compat {c=[1],r=[1],i=[7ff]}]#012 #012 
Oct  7 09:33:20 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx Updating MDS map to version 5 from mon.0
Oct  7 09:33:20 np0005473739 ceph-mds[100686]: mds.0.4 handle_mds_map i am now mds.0.4
Oct  7 09:33:20 np0005473739 ceph-mds[100686]: mds.0.4 handle_mds_map state change up:creating --> up:active
Oct  7 09:33:20 np0005473739 ceph-mds[100686]: mds.0.4 recovery_done -- successful recovery!
Oct  7 09:33:20 np0005473739 ceph-mds[100686]: mds.0.4 active_start
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : mds.? [v2:192.168.122.100:6814/2462520944,v1:192.168.122.100:6815/2462520944] up:active
Oct  7 09:33:20 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : fsmap cephfs:1 {0=cephfs.compute-0.xpofvx=up:active}
Oct  7 09:33:21 np0005473739 python3[101228]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   orch ps -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:21 np0005473739 podman[101279]: 2025-10-07 13:33:21.248111326 +0000 UTC m=+0.054480707 container create b13dd267d28a0c1f56a3cfaaff6683c65c2afde5e42d8825e90c2b1790cf55dc (image=quay.io/ceph/ceph:v18, name=relaxed_hugle, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:21 np0005473739 systemd[1]: Started libpod-conmon-b13dd267d28a0c1f56a3cfaaff6683c65c2afde5e42d8825e90c2b1790cf55dc.scope.
Oct  7 09:33:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:21 np0005473739 podman[101279]: 2025-10-07 13:33:21.227159343 +0000 UTC m=+0.033528704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999c00442a7669b34bdd1f6ae6f1daa33e0093d4aa183d21fa74536c811a6a0a/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/999c00442a7669b34bdd1f6ae6f1daa33e0093d4aa183d21fa74536c811a6a0a/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:21 np0005473739 podman[101279]: 2025-10-07 13:33:21.339678847 +0000 UTC m=+0.146048298 container init b13dd267d28a0c1f56a3cfaaff6683c65c2afde5e42d8825e90c2b1790cf55dc (image=quay.io/ceph/ceph:v18, name=relaxed_hugle, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:21 np0005473739 podman[101279]: 2025-10-07 13:33:21.347001743 +0000 UTC m=+0.153371104 container start b13dd267d28a0c1f56a3cfaaff6683c65c2afde5e42d8825e90c2b1790cf55dc (image=quay.io/ceph/ceph:v18, name=relaxed_hugle, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:21 np0005473739 podman[101279]: 2025-10-07 13:33:21.351002875 +0000 UTC m=+0.157372276 container attach b13dd267d28a0c1f56a3cfaaff6683c65c2afde5e42d8825e90c2b1790cf55dc (image=quay.io/ceph/ceph:v18, name=relaxed_hugle, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct  7 09:33:21 np0005473739 podman[101335]: 2025-10-07 13:33:21.46475642 +0000 UTC m=+0.038796127 container create b252be3f24c2beb31e81124ffe98c3d705cd554d6e377c9c44bfc8a72515cf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nobel, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:21 np0005473739 systemd[1]: Started libpod-conmon-b252be3f24c2beb31e81124ffe98c3d705cd554d6e377c9c44bfc8a72515cf1c.scope.
Oct  7 09:33:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:21 np0005473739 podman[101335]: 2025-10-07 13:33:21.535174574 +0000 UTC m=+0.109214281 container init b252be3f24c2beb31e81124ffe98c3d705cd554d6e377c9c44bfc8a72515cf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nobel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:21 np0005473739 podman[101335]: 2025-10-07 13:33:21.54134461 +0000 UTC m=+0.115384307 container start b252be3f24c2beb31e81124ffe98c3d705cd554d6e377c9c44bfc8a72515cf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nobel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:21 np0005473739 podman[101335]: 2025-10-07 13:33:21.448082596 +0000 UTC m=+0.022122313 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:21 np0005473739 beautiful_nobel[101352]: 167 167
Oct  7 09:33:21 np0005473739 systemd[1]: libpod-b252be3f24c2beb31e81124ffe98c3d705cd554d6e377c9c44bfc8a72515cf1c.scope: Deactivated successfully.
Oct  7 09:33:21 np0005473739 podman[101335]: 2025-10-07 13:33:21.54722175 +0000 UTC m=+0.121261467 container attach b252be3f24c2beb31e81124ffe98c3d705cd554d6e377c9c44bfc8a72515cf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nobel, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:21 np0005473739 podman[101335]: 2025-10-07 13:33:21.548297047 +0000 UTC m=+0.122336814 container died b252be3f24c2beb31e81124ffe98c3d705cd554d6e377c9c44bfc8a72515cf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nobel, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 09:33:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1b49aa29a2521392a53bb9e1e423c13a878f87a5df08f77964647116148a4969-merged.mount: Deactivated successfully.
Oct  7 09:33:21 np0005473739 podman[101335]: 2025-10-07 13:33:21.599222193 +0000 UTC m=+0.173261890 container remove b252be3f24c2beb31e81124ffe98c3d705cd554d6e377c9c44bfc8a72515cf1c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:21 np0005473739 systemd[1]: libpod-conmon-b252be3f24c2beb31e81124ffe98c3d705cd554d6e377c9c44bfc8a72515cf1c.scope: Deactivated successfully.
Oct  7 09:33:21 np0005473739 podman[101397]: 2025-10-07 13:33:21.783804398 +0000 UTC m=+0.042893290 container create d177c511eaed0154a7e0adf5735e04aba18de82f98d25aaff3bec93ff96ae054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:21 np0005473739 systemd[1]: Started libpod-conmon-d177c511eaed0154a7e0adf5735e04aba18de82f98d25aaff3bec93ff96ae054.scope.
Oct  7 09:33:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c33b1383ec89821aeb424e2d357502ba822db3cb67fe33fc84f23cb06053364/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c33b1383ec89821aeb424e2d357502ba822db3cb67fe33fc84f23cb06053364/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c33b1383ec89821aeb424e2d357502ba822db3cb67fe33fc84f23cb06053364/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c33b1383ec89821aeb424e2d357502ba822db3cb67fe33fc84f23cb06053364/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c33b1383ec89821aeb424e2d357502ba822db3cb67fe33fc84f23cb06053364/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:21 np0005473739 podman[101397]: 2025-10-07 13:33:21.763417438 +0000 UTC m=+0.022506310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e36 do_prune osdmap full prune enabled
Oct  7 09:33:21 np0005473739 podman[101397]: 2025-10-07 13:33:21.876607722 +0000 UTC m=+0.135696594 container init d177c511eaed0154a7e0adf5735e04aba18de82f98d25aaff3bec93ff96ae054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e37 e37: 3 total, 3 up, 3 in
Oct  7 09:33:21 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e37: 3 total, 3 up, 3 in
Oct  7 09:33:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"} v 0) v1
Oct  7 09:33:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  7 09:33:21 np0005473739 podman[101397]: 2025-10-07 13:33:21.886559605 +0000 UTC m=+0.145648477 container start d177c511eaed0154a7e0adf5735e04aba18de82f98d25aaff3bec93ff96ae054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 09:33:21 np0005473739 podman[101397]: 2025-10-07 13:33:21.890453452 +0000 UTC m=+0.149542304 container attach d177c511eaed0154a7e0adf5735e04aba18de82f98d25aaff3bec93ff96ae054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:21 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 37 pg[10.0( empty local-lis/les=0/0 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:21 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14269 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""], "format": "json"}]: dispatch
Oct  7 09:33:21 np0005473739 relaxed_hugle[101307]: 
Oct  7 09:33:21 np0005473739 relaxed_hugle[101307]: [{"container_id": "b63fb31a4b12", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "0.44%", "created": "2025-10-07T13:31:46.885136Z", "daemon_id": "compute-0", "daemon_name": "crash.compute-0", "daemon_type": "crash", "events": ["2025-10-07T13:31:46.948569Z daemon:crash.compute-0 [INFO] \"Deployed crash.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-07T13:33:20.818661Z", "memory_usage": 11639193, "ports": [], "service_name": "crash", "started": "2025-10-07T13:31:46.780057Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@crash.compute-0", "version": "18.2.7"}, {"container_id": "3fccc0f6f003", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "8.42%", "created": "2025-10-07T13:33:18.771486Z", "daemon_id": "cephfs.compute-0.xpofvx", "daemon_name": "mds.cephfs.compute-0.xpofvx", "daemon_type": "mds", "events": ["2025-10-07T13:33:18.826284Z daemon:mds.cephfs.compute-0.xpofvx [INFO] \"Deployed mds.cephfs.compute-0.xpofvx on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-07T13:33:20.819132Z", 
"memory_usage": 13526630, "ports": [], "service_name": "mds.cephfs", "started": "2025-10-07T13:33:18.655371Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@mds.cephfs.compute-0.xpofvx", "version": "18.2.7"}, {"container_id": "a97aec2b1c22", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "28.52%", "created": "2025-10-07T13:30:37.275076Z", "daemon_id": "compute-0.kdyrcd", "daemon_name": "mgr.compute-0.kdyrcd", "daemon_type": "mgr", "events": ["2025-10-07T13:31:51.421368Z daemon:mgr.compute-0.kdyrcd [INFO] \"Reconfigured mgr.compute-0.kdyrcd on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-07T13:33:20.818573Z", "memory_usage": 550816972, "ports": [9283, 8765], "service_name": "mgr", "started": "2025-10-07T13:30:37.167537Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@mgr.compute-0.kdyrcd", "version": "18.2.7"}, {"container_id": "f803401b563e", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph:v18", "cpu_percentage": "1.99%", "created": "2025-10-07T13:30:32.208915Z", "daemon_id": "compute-0", "daemon_name": "mon.compute-0", "daemon_type": "mon", "events": ["2025-10-07T13:31:50.768294Z daemon:mon.compute-0 [INFO] \"Reconfigured mon.compute-0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": 
false, "last_refresh": "2025-10-07T13:33:20.818464Z", "memory_request": 2147483648, "memory_usage": 40632320, "ports": [], "service_name": "mon", "started": "2025-10-07T13:30:34.970793Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@mon.compute-0", "version": "18.2.7"}, {"container_id": "266f50edea4d", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.22%", "created": "2025-10-07T13:32:12.345832Z", "daemon_id": "0", "daemon_name": "osd.0", "daemon_type": "osd", "events": ["2025-10-07T13:32:12.394781Z daemon:osd.0 [INFO] \"Deployed osd.0 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-07T13:33:20.818749Z", "memory_request": 4294967296, "memory_usage": 56182702, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-07T13:32:12.225803Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@osd.0", "version": "18.2.7"}, {"container_id": "aa275ffdd3c6", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.32%", "created": "2025-10-07T13:32:16.731951Z", "daemon_id": "1", "daemon_name": "osd.1", "daemon_type": "osd", "events": 
["2025-10-07T13:32:16.832263Z daemon:osd.1 [INFO] \"Deployed osd.1 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-07T13:33:20.818834Z", "memory_request": 4294967296, "memory_usage": 60838379, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-07T13:32:16.584523Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@osd.1", "version": "18.2.7"}, {"container_id": "f4c98c027c35", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "cpu_percentage": "1.27%", "created": "2025-10-07T13:32:21.255063Z", "daemon_id": "2", "daemon_name": "osd.2", "daemon_type": "osd", "events": ["2025-10-07T13:32:21.342413Z daemon:osd.2 [INFO] \"Deployed osd.2 on host 'compute-0'\""], "hostname": "compute-0", "is_active": false, "last_refresh": "2025-10-07T13:33:20.818919Z", "memory_request": 4294967296, "memory_usage": 56298045, "ports": [], "service_name": "osd.default_drive_group", "started": "2025-10-07T13:32:21.150635Z", "status": 1, "status_desc": "running", "systemd_unit": "ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f@osd.2", "version": "18.2.7"}, {"container_id": "40f1209fe4ff", "container_image_digests": ["quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", "quay.io/ceph/ceph@sha256:7d8bb82696d5d9cbeae2a2828dc12b6835aa2dded890fa3ac5a733cb66b72b1c"], "container_image_id": "0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1", "container_image_name": "quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0", 
"cpu_percentage": "2.75%", "created": "2025-10-07T13:33:16.677434Z", "daemon_id": "rgw.compute-0.gmvmfz", "daemon_name": "rgw.rgw.compute-0.gmvmfz", "daemon_type": "rgw", "events": ["2025-10-07T
Oct  7 09:33:21 np0005473739 systemd[1]: libpod-b13dd267d28a0c1f56a3cfaaff6683c65c2afde5e42d8825e90c2b1790cf55dc.scope: Deactivated successfully.
Oct  7 09:33:21 np0005473739 podman[101279]: 2025-10-07 13:33:21.945374683 +0000 UTC m=+0.751744054 container died b13dd267d28a0c1f56a3cfaaff6683c65c2afde5e42d8825e90c2b1790cf55dc (image=quay.io/ceph/ceph:v18, name=relaxed_hugle, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:33:21 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]: dispatch
Oct  7 09:33:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-999c00442a7669b34bdd1f6ae6f1daa33e0093d4aa183d21fa74536c811a6a0a-merged.mount: Deactivated successfully.
Oct  7 09:33:22 np0005473739 podman[101279]: 2025-10-07 13:33:22.005849267 +0000 UTC m=+0.812218648 container remove b13dd267d28a0c1f56a3cfaaff6683c65c2afde5e42d8825e90c2b1790cf55dc (image=quay.io/ceph/ceph:v18, name=relaxed_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:33:22 np0005473739 systemd[1]: libpod-conmon-b13dd267d28a0c1f56a3cfaaff6683c65c2afde5e42d8825e90c2b1790cf55dc.scope: Deactivated successfully.
Oct  7 09:33:22 np0005473739 rsyslogd[1004]: message too long (8588) with configured size 8096, begin of message is: [{"container_id": "b63fb31a4b12", "container_image_digests": ["quay.io/ceph/ceph [v8.2506.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:33:22
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Some PGs (0.200000) are unknown; try again later
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v88: 10 pgs: 2 unknown, 1 creating+peering, 7 active+clean; 451 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s wr, 10 op/s
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 1)
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 1)
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"} v 0) v1
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [progress INFO root] Writing back 5 completed events
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e37 do_prune osdmap full prune enabled
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e38 e38: 3 total, 3 up, 3 in
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e38: 3 total, 3 up, 3 in
Oct  7 09:33:22 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev 2b5cdb32-89a3-412b-90dc-41fce26f8b76 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"} v 0) v1
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:22 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 38 pg[10.0( empty local-lis/les=37/38 n=0 ec=37/37 lis/c=0/0 les/c/f=0/0/0 sis=37) [2] r=0 lpr=37 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:22 np0005473739 sharp_raman[101413]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:33:22 np0005473739 sharp_raman[101413]: --> relative data size: 1.0
Oct  7 09:33:22 np0005473739 sharp_raman[101413]: --> All data devices are unavailable
Oct  7 09:33:22 np0005473739 systemd[1]: libpod-d177c511eaed0154a7e0adf5735e04aba18de82f98d25aaff3bec93ff96ae054.scope: Deactivated successfully.
Oct  7 09:33:22 np0005473739 podman[101397]: 2025-10-07 13:33:22.946103012 +0000 UTC m=+1.205191864 container died d177c511eaed0154a7e0adf5735e04aba18de82f98d25aaff3bec93ff96ae054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9c33b1383ec89821aeb424e2d357502ba822db3cb67fe33fc84f23cb06053364-merged.mount: Deactivated successfully.
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/2671259237' entity='client.rgw.rgw.compute-0.gmvmfz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.control","app": "rgw"}]': finished
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:23 np0005473739 podman[101397]: 2025-10-07 13:33:23.039219974 +0000 UTC m=+1.298308826 container remove d177c511eaed0154a7e0adf5735e04aba18de82f98d25aaff3bec93ff96ae054 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_raman, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 09:33:23 np0005473739 systemd[1]: libpod-conmon-d177c511eaed0154a7e0adf5735e04aba18de82f98d25aaff3bec93ff96ae054.scope: Deactivated successfully.
Oct  7 09:33:23 np0005473739 python3[101481]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   -s -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:23 np0005473739 podman[101505]: 2025-10-07 13:33:23.119080281 +0000 UTC m=+0.038250523 container create d285aacc02bf19d0a83e6b3ddab5a9ebf30bf689d0681b61e19db55c147b39a8 (image=quay.io/ceph/ceph:v18, name=elegant_allen, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 09:33:23 np0005473739 systemd[1]: Started libpod-conmon-d285aacc02bf19d0a83e6b3ddab5a9ebf30bf689d0681b61e19db55c147b39a8.scope.
Oct  7 09:33:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9aae51088a97296208d0fb7543c0060dea04ea91cb29adfa6ce14919beb6bea/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a9aae51088a97296208d0fb7543c0060dea04ea91cb29adfa6ce14919beb6bea/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:23 np0005473739 podman[101505]: 2025-10-07 13:33:23.104690005 +0000 UTC m=+0.023860267 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:23 np0005473739 podman[101505]: 2025-10-07 13:33:23.202144476 +0000 UTC m=+0.121314748 container init d285aacc02bf19d0a83e6b3ddab5a9ebf30bf689d0681b61e19db55c147b39a8 (image=quay.io/ceph/ceph:v18, name=elegant_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:23 np0005473739 podman[101505]: 2025-10-07 13:33:23.209590221 +0000 UTC m=+0.128760463 container start d285aacc02bf19d0a83e6b3ddab5a9ebf30bf689d0681b61e19db55c147b39a8 (image=quay.io/ceph/ceph:v18, name=elegant_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:33:23 np0005473739 podman[101505]: 2025-10-07 13:33:23.21281823 +0000 UTC m=+0.131988492 container attach d285aacc02bf19d0a83e6b3ddab5a9ebf30bf689d0681b61e19db55c147b39a8 (image=quay.io/ceph/ceph:v18, name=elegant_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:23 np0005473739 podman[101682]: 2025-10-07 13:33:23.595002093 +0000 UTC m=+0.035359573 container create 3f52fcfc2e278093d08e166016279642a9a055bbbd7a934df2c0e41ee92a6416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 09:33:23 np0005473739 systemd[1]: Started libpod-conmon-3f52fcfc2e278093d08e166016279642a9a055bbbd7a934df2c0e41ee92a6416.scope.
Oct  7 09:33:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:23 np0005473739 podman[101682]: 2025-10-07 13:33:23.660865955 +0000 UTC m=+0.101223435 container init 3f52fcfc2e278093d08e166016279642a9a055bbbd7a934df2c0e41ee92a6416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:23 np0005473739 podman[101682]: 2025-10-07 13:33:23.665604285 +0000 UTC m=+0.105961765 container start 3f52fcfc2e278093d08e166016279642a9a055bbbd7a934df2c0e41ee92a6416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:23 np0005473739 podman[101682]: 2025-10-07 13:33:23.668587787 +0000 UTC m=+0.108945297 container attach 3f52fcfc2e278093d08e166016279642a9a055bbbd7a934df2c0e41ee92a6416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:23 np0005473739 zealous_sanderson[101698]: 167 167
Oct  7 09:33:23 np0005473739 systemd[1]: libpod-3f52fcfc2e278093d08e166016279642a9a055bbbd7a934df2c0e41ee92a6416.scope: Deactivated successfully.
Oct  7 09:33:23 np0005473739 podman[101682]: 2025-10-07 13:33:23.672026782 +0000 UTC m=+0.112384252 container died 3f52fcfc2e278093d08e166016279642a9a055bbbd7a934df2c0e41ee92a6416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:33:23 np0005473739 podman[101682]: 2025-10-07 13:33:23.579907798 +0000 UTC m=+0.020265278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-02f5b58c6018f54a96f20c4f6e88166d2c2452229f08eb846969c811164c166e-merged.mount: Deactivated successfully.
Oct  7 09:33:23 np0005473739 podman[101682]: 2025-10-07 13:33:23.706658015 +0000 UTC m=+0.147015495 container remove 3f52fcfc2e278093d08e166016279642a9a055bbbd7a934df2c0e41ee92a6416 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_sanderson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:23 np0005473739 systemd[1]: libpod-conmon-3f52fcfc2e278093d08e166016279642a9a055bbbd7a934df2c0e41ee92a6416.scope: Deactivated successfully.
Oct  7 09:33:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status", "format": "json"} v 0) v1
Oct  7 09:33:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2821585683' entity='client.admin' cmd=[{"prefix": "status", "format": "json"}]: dispatch
Oct  7 09:33:23 np0005473739 elegant_allen[101558]: 
Oct  7 09:33:23 np0005473739 elegant_allen[101558]: {"fsid":"82044f27-a8da-5b2a-a297-ff6afc620e1f","health":{"status":"HEALTH_WARN","checks":{"POOL_APP_NOT_ENABLED":{"severity":"HEALTH_WARN","summary":{"message":"1 pool(s) do not have an application enabled","count":1},"muted":false}},"mutes":[]},"election_epoch":5,"quorum":[0],"quorum_names":["compute-0"],"quorum_age":168,"monmap":{"epoch":1,"min_mon_release_name":"reef","num_mons":1},"osdmap":{"epoch":38,"num_osds":3,"num_up_osds":3,"osd_up_since":1759843947,"num_in_osds":3,"osd_in_since":1759843923,"num_remapped_pgs":0},"pgmap":{"pgs_by_state":[{"state_name":"active+clean","count":7},{"state_name":"unknown","count":2},{"state_name":"creating+peering","count":1}],"num_pgs":10,"num_pools":10,"num_objects":23,"data_bytes":461642,"bytes_used":83927040,"bytes_avail":64327999488,"bytes_total":64411926528,"unknown_pgs_ratio":0.20000000298023224,"inactive_pgs_ratio":0.10000000149011612,"write_bytes_sec":3583,"read_op_per_sec":0,"write_op_per_sec":10},"fsmap":{"epoch":5,"id":1,"up":1,"in":1,"max":1,"by_rank":[{"filesystem_id":1,"rank":0,"name":"cephfs.compute-0.xpofvx","status":"up:active","gid":14265}],"up:standby":0},"mgrmap":{"available":true,"num_standbys":0,"modules":["cephadm","iostat","nfs","restful"],"services":{}},"servicemap":{"epoch":3,"modified":"2025-10-07T13:33:20.520654+0000","services":{"mds":{"daemons":{"summary":"","cephfs.compute-0.xpofvx":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}},"osd":{"daemons":{"summary":"","1":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}},"2":{"start_epoch":0,"start_stamp":"0.000000","gid":0,"addr":"(unrecognized address family 0)/0","metadata":{},"task_status":{}}}}}},"progress_events":{}}
Oct  7 09:33:23 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 09:33:23 np0005473739 systemd[1]: libpod-d285aacc02bf19d0a83e6b3ddab5a9ebf30bf689d0681b61e19db55c147b39a8.scope: Deactivated successfully.
Oct  7 09:33:23 np0005473739 podman[101505]: 2025-10-07 13:33:23.814170102 +0000 UTC m=+0.733340344 container died d285aacc02bf19d0a83e6b3ddab5a9ebf30bf689d0681b61e19db55c147b39a8 (image=quay.io/ceph/ceph:v18, name=elegant_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a9aae51088a97296208d0fb7543c0060dea04ea91cb29adfa6ce14919beb6bea-merged.mount: Deactivated successfully.
Oct  7 09:33:23 np0005473739 podman[101505]: 2025-10-07 13:33:23.865203277 +0000 UTC m=+0.784373519 container remove d285aacc02bf19d0a83e6b3ddab5a9ebf30bf689d0681b61e19db55c147b39a8 (image=quay.io/ceph/ceph:v18, name=elegant_allen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:23 np0005473739 systemd[1]: libpod-conmon-d285aacc02bf19d0a83e6b3ddab5a9ebf30bf689d0681b61e19db55c147b39a8.scope: Deactivated successfully.
Oct  7 09:33:23 np0005473739 podman[101723]: 2025-10-07 13:33:23.873341581 +0000 UTC m=+0.051423936 container create eb284dcd97e6beb738c22a6139467e9984ed0a62d092e6d46f2e6cb8c90a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:33:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e38 do_prune osdmap full prune enabled
Oct  7 09:33:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e39 e39: 3 total, 3 up, 3 in
Oct  7 09:33:23 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e39: 3 total, 3 up, 3 in
Oct  7 09:33:23 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev 96bca875-72b0-4e17-bb0a-32cfd03525d4 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  7 09:33:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"} v 0) v1
Oct  7 09:33:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/543832985' entity='client.rgw.rgw.compute-0.gmvmfz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  7 09:33:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"} v 0) v1
Oct  7 09:33:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:23 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 39 pg[11.0( empty local-lis/les=0/0 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:23 np0005473739 systemd[1]: Started libpod-conmon-eb284dcd97e6beb738c22a6139467e9984ed0a62d092e6d46f2e6cb8c90a638a.scope.
Oct  7 09:33:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c61313d2b830ec2f1e1628c14c7a3047f18fbb9e8d6267a077937f55ac8f91/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c61313d2b830ec2f1e1628c14c7a3047f18fbb9e8d6267a077937f55ac8f91/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c61313d2b830ec2f1e1628c14c7a3047f18fbb9e8d6267a077937f55ac8f91/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/83c61313d2b830ec2f1e1628c14c7a3047f18fbb9e8d6267a077937f55ac8f91/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:23 np0005473739 podman[101723]: 2025-10-07 13:33:23.850028329 +0000 UTC m=+0.028110704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:23 np0005473739 podman[101723]: 2025-10-07 13:33:23.945199807 +0000 UTC m=+0.123282182 container init eb284dcd97e6beb738c22a6139467e9984ed0a62d092e6d46f2e6cb8c90a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:23 np0005473739 podman[101723]: 2025-10-07 13:33:23.952421486 +0000 UTC m=+0.130503841 container start eb284dcd97e6beb738c22a6139467e9984ed0a62d092e6d46f2e6cb8c90a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:23 np0005473739 podman[101723]: 2025-10-07 13:33:23.955539231 +0000 UTC m=+0.133621606 container attach eb284dcd97e6beb738c22a6139467e9984ed0a62d092e6d46f2e6cb8c90a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/543832985' entity='client.rgw.rgw.compute-0.gmvmfz' cmd=[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]: dispatch
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v91: 11 pgs: 1 unknown, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]: {
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:    "0": [
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:        {
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "devices": [
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "/dev/loop3"
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            ],
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_name": "ceph_lv0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_size": "21470642176",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "name": "ceph_lv0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "tags": {
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.crush_device_class": "",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.encrypted": "0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.osd_id": "0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.type": "block",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.vdo": "0"
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            },
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "type": "block",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "vg_name": "ceph_vg0"
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:        }
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:    ],
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:    "1": [
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:        {
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "devices": [
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "/dev/loop4"
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            ],
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_name": "ceph_lv1",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_size": "21470642176",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "name": "ceph_lv1",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "tags": {
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.crush_device_class": "",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.encrypted": "0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.osd_id": "1",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.type": "block",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.vdo": "0"
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            },
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "type": "block",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "vg_name": "ceph_vg1"
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:        }
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:    ],
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:    "2": [
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:        {
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "devices": [
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "/dev/loop5"
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            ],
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_name": "ceph_lv2",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_size": "21470642176",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "name": "ceph_lv2",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "tags": {
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.crush_device_class": "",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.encrypted": "0",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.osd_id": "2",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.type": "block",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:                "ceph.vdo": "0"
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            },
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "type": "block",
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:            "vg_name": "ceph_vg2"
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:        }
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]:    ]
Oct  7 09:33:24 np0005473739 ecstatic_jones[101751]: }
Oct  7 09:33:24 np0005473739 systemd[1]: libpod-eb284dcd97e6beb738c22a6139467e9984ed0a62d092e6d46f2e6cb8c90a638a.scope: Deactivated successfully.
Oct  7 09:33:24 np0005473739 podman[101723]: 2025-10-07 13:33:24.655395464 +0000 UTC m=+0.833477819 container died eb284dcd97e6beb738c22a6139467e9984ed0a62d092e6d46f2e6cb8c90a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-83c61313d2b830ec2f1e1628c14c7a3047f18fbb9e8d6267a077937f55ac8f91-merged.mount: Deactivated successfully.
Oct  7 09:33:24 np0005473739 podman[101723]: 2025-10-07 13:33:24.708756022 +0000 UTC m=+0.886838387 container remove eb284dcd97e6beb738c22a6139467e9984ed0a62d092e6d46f2e6cb8c90a638a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_jones, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:33:24 np0005473739 systemd[1]: libpod-conmon-eb284dcd97e6beb738c22a6139467e9984ed0a62d092e6d46f2e6cb8c90a638a.scope: Deactivated successfully.
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e39 do_prune osdmap full prune enabled
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/543832985' entity='client.rgw.rgw.compute-0.gmvmfz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e40 e40: 3 total, 3 up, 3 in
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e40: 3 total, 3 up, 3 in
Oct  7 09:33:24 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev 40acfa4c-30fd-4469-b3e4-b737a834c5cd (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"} v 0) v1
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"} v 0) v1
Oct  7 09:33:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/543832985' entity='client.rgw.rgw.compute-0.gmvmfz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  7 09:33:24 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=15.999217987s) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 78.463920593s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:24 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 40 pg[2.0( empty local-lis/les=17/18 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=15.999217987s) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown pruub 78.463920593s@ mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:24 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=15.995522499s) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active pruub 82.710273743s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:24 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 40 pg[3.0( empty local-lis/les=17/18 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40 pruub=15.995522499s) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown pruub 82.710273743s@ mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:24 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 40 pg[11.0( empty local-lis/les=39/40 n=0 ec=39/39 lis/c=0/0 les/c/f=0/0/0 sis=39) [1] r=0 lpr=39 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:24 np0005473739 python3[101819]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   config dump -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:24 np0005473739 podman[101871]: 2025-10-07 13:33:24.975359386 +0000 UTC m=+0.039883078 container create 86a9459c77903405ddc964bc7b66080a6a9aa48b839ce8ca5d427c0cf2b112e2 (image=quay.io/ceph/ceph:v18, name=strange_neumann, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/543832985' entity='client.rgw.rgw.compute-0.gmvmfz' cmd='[{"prefix": "osd pool application enable","pool": "default.rgw.meta","app": "rgw"}]': finished
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/543832985' entity='client.rgw.rgw.compute-0.gmvmfz' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]: dispatch
Oct  7 09:33:25 np0005473739 systemd[1]: Started libpod-conmon-86a9459c77903405ddc964bc7b66080a6a9aa48b839ce8ca5d427c0cf2b112e2.scope.
Oct  7 09:33:25 np0005473739 podman[101871]: 2025-10-07 13:33:24.95838425 +0000 UTC m=+0.022907992 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58434f5f4ca0ed0115286b4724b746bb13b7aefabd970247f90290215c450d06/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58434f5f4ca0ed0115286b4724b746bb13b7aefabd970247f90290215c450d06/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:25 np0005473739 podman[101871]: 2025-10-07 13:33:25.079121421 +0000 UTC m=+0.143645133 container init 86a9459c77903405ddc964bc7b66080a6a9aa48b839ce8ca5d427c0cf2b112e2 (image=quay.io/ceph/ceph:v18, name=strange_neumann, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 09:33:25 np0005473739 podman[101871]: 2025-10-07 13:33:25.08600977 +0000 UTC m=+0.150533462 container start 86a9459c77903405ddc964bc7b66080a6a9aa48b839ce8ca5d427c0cf2b112e2 (image=quay.io/ceph/ceph:v18, name=strange_neumann, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:25 np0005473739 podman[101871]: 2025-10-07 13:33:25.089521236 +0000 UTC m=+0.154044928 container attach 86a9459c77903405ddc964bc7b66080a6a9aa48b839ce8ca5d427c0cf2b112e2 (image=quay.io/ceph/ceph:v18, name=strange_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e40 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:25 np0005473739 podman[101956]: 2025-10-07 13:33:25.335348609 +0000 UTC m=+0.057787300 container create 2945f582c7f4d6f02688f4413d2d7db39a55b4afb3e7f1c632158223579a0dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leakey, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 09:33:25 np0005473739 systemd[1]: Started libpod-conmon-2945f582c7f4d6f02688f4413d2d7db39a55b4afb3e7f1c632158223579a0dba.scope.
Oct  7 09:33:25 np0005473739 podman[101956]: 2025-10-07 13:33:25.314890956 +0000 UTC m=+0.037329667 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:25 np0005473739 podman[101956]: 2025-10-07 13:33:25.42806224 +0000 UTC m=+0.150500951 container init 2945f582c7f4d6f02688f4413d2d7db39a55b4afb3e7f1c632158223579a0dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 09:33:25 np0005473739 podman[101956]: 2025-10-07 13:33:25.438519008 +0000 UTC m=+0.160957699 container start 2945f582c7f4d6f02688f4413d2d7db39a55b4afb3e7f1c632158223579a0dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leakey, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:25 np0005473739 podman[101956]: 2025-10-07 13:33:25.443897645 +0000 UTC m=+0.166336356 container attach 2945f582c7f4d6f02688f4413d2d7db39a55b4afb3e7f1c632158223579a0dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leakey, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 09:33:25 np0005473739 busy_leakey[101974]: 167 167
Oct  7 09:33:25 np0005473739 systemd[1]: libpod-2945f582c7f4d6f02688f4413d2d7db39a55b4afb3e7f1c632158223579a0dba.scope: Deactivated successfully.
Oct  7 09:33:25 np0005473739 conmon[101974]: conmon 2945f582c7f4d6f02688 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2945f582c7f4d6f02688f4413d2d7db39a55b4afb3e7f1c632158223579a0dba.scope/container/memory.events
Oct  7 09:33:25 np0005473739 podman[101956]: 2025-10-07 13:33:25.447279869 +0000 UTC m=+0.169718550 container died 2945f582c7f4d6f02688f4413d2d7db39a55b4afb3e7f1c632158223579a0dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 09:33:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-baeb511edd1bafb484d2e2d4c2a1f20e83c70d577618fa0038e478a2bc8506dd-merged.mount: Deactivated successfully.
Oct  7 09:33:25 np0005473739 podman[101956]: 2025-10-07 13:33:25.491655299 +0000 UTC m=+0.214093990 container remove 2945f582c7f4d6f02688f4413d2d7db39a55b4afb3e7f1c632158223579a0dba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_leakey, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:25 np0005473739 systemd[1]: libpod-conmon-2945f582c7f4d6f02688f4413d2d7db39a55b4afb3e7f1c632158223579a0dba.scope: Deactivated successfully.
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config dump", "format": "json"} v 0) v1
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3090934621' entity='client.admin' cmd=[{"prefix": "config dump", "format": "json"}]: dispatch
Oct  7 09:33:25 np0005473739 strange_neumann[101911]: 
Oct  7 09:33:25 np0005473739 systemd[1]: libpod-86a9459c77903405ddc964bc7b66080a6a9aa48b839ce8ca5d427c0cf2b112e2.scope: Deactivated successfully.
Oct  7 09:33:25 np0005473739 strange_neumann[101911]: [{"section":"global","name":"cluster_network","value":"172.20.0.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"container_image","value":"quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"log_to_file","value":"true","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"global","name":"mon_cluster_log_to_file","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv4","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"ms_bind_ipv6","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"osd_pool_default_size","value":"1","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"public_network","value":"192.168.122.0/24","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_admin_roles","value":"ResellerAdmin, swiftoperator","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_accepted_roles","value":"member, Member, 
admin","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_domain","value":"default","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_password","value":"12345678","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_project","value":"service","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_admin_user","value":"swift","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_api_version","value":"3","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_keystone_implicit_tenants","value":"true","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_url","value":"https://keystone-internal.openstack.svc:5000","level":"basic","can_update_at_runtime":false,"mask":""},{"section":"global","name":"rgw_keystone_verify_ssl","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_name_len","value":"128","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attr_size","value":"1024","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_max_attrs_num_in_req","value":"90","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_s3_auth_use_keystone","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_account_in_url","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_enforce_content_length","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_swift_versioning_enabled","value":"true","level":"advanced","
can_update_at_runtime":true,"mask":""},{"section":"global","name":"rgw_trust_forwarded_https","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"auth_allow_insecure_global_id_reclaim","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mon","name":"mon_warn_on_pool_no_redundancy","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr/cephadm/container_init","value":"True","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/migration_current","value":"6","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/cephadm/use_repo_digest","value":"false","level":"advanced","can_update_at_runtime":false,"mask":""},{"section":"mgr","name":"mgr/orchestrator/orchestrator","value":"cephadm","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mgr","name":"mgr_standby_modules","value":"false","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"osd","name":"osd_memory_target_autotune","value":"true","level":"advanced","can_update_at_runtime":true,"mask":""},{"section":"mds.cephfs","name":"mds_join_fs","value":"cephfs","level":"basic","can_update_at_runtime":true,"mask":""},{"section":"client.rgw.rgw.compute-0.gmvmfz","name":"rgw_frontends","value":"beast endpoint=192.168.122.100:8082","level":"basic","can_update_at_runtime":false,"mask":""}]
Oct  7 09:33:25 np0005473739 podman[101871]: 2025-10-07 13:33:25.610635192 +0000 UTC m=+0.675158904 container died 86a9459c77903405ddc964bc7b66080a6a9aa48b839ce8ca5d427c0cf2b112e2 (image=quay.io/ceph/ceph:v18, name=strange_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:33:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-58434f5f4ca0ed0115286b4724b746bb13b7aefabd970247f90290215c450d06-merged.mount: Deactivated successfully.
Oct  7 09:33:25 np0005473739 podman[101871]: 2025-10-07 13:33:25.653761958 +0000 UTC m=+0.718285650 container remove 86a9459c77903405ddc964bc7b66080a6a9aa48b839ce8ca5d427c0cf2b112e2 (image=quay.io/ceph/ceph:v18, name=strange_neumann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 09:33:25 np0005473739 systemd[1]: libpod-conmon-86a9459c77903405ddc964bc7b66080a6a9aa48b839ce8ca5d427c0cf2b112e2.scope: Deactivated successfully.
Oct  7 09:33:25 np0005473739 podman[102015]: 2025-10-07 13:33:25.679167167 +0000 UTC m=+0.077044380 container create 5a2ff325af8cc127feb2ba11be65eef019bcf9d80dbf2549fbd3246c156174c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 09:33:25 np0005473739 podman[102015]: 2025-10-07 13:33:25.625251714 +0000 UTC m=+0.023128967 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:25 np0005473739 systemd[1]: Started libpod-conmon-5a2ff325af8cc127feb2ba11be65eef019bcf9d80dbf2549fbd3246c156174c1.scope.
Oct  7 09:33:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610c9a16e22a12209b18d2687492927b2022b3fefe8e07d035b86a34e67ba811/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610c9a16e22a12209b18d2687492927b2022b3fefe8e07d035b86a34e67ba811/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610c9a16e22a12209b18d2687492927b2022b3fefe8e07d035b86a34e67ba811/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/610c9a16e22a12209b18d2687492927b2022b3fefe8e07d035b86a34e67ba811/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:25 np0005473739 podman[102015]: 2025-10-07 13:33:25.749751259 +0000 UTC m=+0.147628492 container init 5a2ff325af8cc127feb2ba11be65eef019bcf9d80dbf2549fbd3246c156174c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:25 np0005473739 podman[102015]: 2025-10-07 13:33:25.760244227 +0000 UTC m=+0.158121440 container start 5a2ff325af8cc127feb2ba11be65eef019bcf9d80dbf2549fbd3246c156174c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:33:25 np0005473739 podman[102015]: 2025-10-07 13:33:25.76396426 +0000 UTC m=+0.161841523 container attach 5a2ff325af8cc127feb2ba11be65eef019bcf9d80dbf2549fbd3246c156174c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e40 do_prune osdmap full prune enabled
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='client.? 192.168.122.100:0/543832985' entity='client.rgw.rgw.compute-0.gmvmfz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e41 e41: 3 total, 3 up, 3 in
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e41: 3 total, 3 up, 3 in
Oct  7 09:33:25 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev a93b7722-bf57-4180-b99e-25363383a128 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"} v 0) v1
Oct  7 09:33:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1c( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.19( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.b( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.4( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.2( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.10( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.13( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.14( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=17/18 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1e( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.c( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.e( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.10( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.12( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.14( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1a( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=17/18 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.19( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.0( empty local-lis/les=40/41 n=0 ec=17/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.4( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.2( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.10( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.14( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 41 pg[3.13( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=17/17 les/c/f=18/18/0 sis=40) [1] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.9( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.5( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.4( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.6( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.7( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.8( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.2( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.e( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.10( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.12( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.11( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.13( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.14( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.15( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.17( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.16( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.18( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.0( empty local-lis/les=40/41 n=0 ec=16/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.1a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.3( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 41 pg[2.19( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=17/17 les/c/f=18/18/0 sis=40) [2] r=0 lpr=40 pi=[17,40)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: from='client.? 192.168.122.100:0/543832985' entity='client.rgw.rgw.compute-0.gmvmfz' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_autoscale_bias", "val": "4"}]': finished
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]: dispatch
Oct  7 09:33:26 np0005473739 radosgw[100199]: LDAP not started since no server URIs were provided in the configuration.
Oct  7 09:33:26 np0005473739 radosgw[100199]: framework: beast
Oct  7 09:33:26 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-rgw-rgw-compute-0-gmvmfz[100195]: 2025-10-07T13:33:26.050+0000 7ff83c03e940 -1 LDAP not started since no server URIs were provided in the configuration.
Oct  7 09:33:26 np0005473739 radosgw[100199]: framework conf key: ssl_certificate, val: config://rgw/cert/$realm/$zone.crt
Oct  7 09:33:26 np0005473739 radosgw[100199]: framework conf key: ssl_private_key, val: config://rgw/cert/$realm/$zone.key
Oct  7 09:33:26 np0005473739 radosgw[100199]: starting handler: beast
Oct  7 09:33:26 np0005473739 radosgw[100199]: set uid:gid to 167:167 (ceph:ceph)
Oct  7 09:33:26 np0005473739 radosgw[100199]: mgrc service_daemon_register rgw.14271 metadata {arch=x86_64,ceph_release=reef,ceph_version=ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable),ceph_version_short=18.2.7,container_hostname=compute-0,container_image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0,cpu=AMD EPYC-Rome Processor,distro=centos,distro_description=CentOS Stream 9,distro_version=9,frontend_config#0=beast endpoint=192.168.122.100:8082,frontend_type#0=beast,hostname=compute-0,id=rgw.compute-0.gmvmfz,kernel_description=#1 SMP PREEMPT_DYNAMIC Fri Sep 26 01:13:23 UTC 2025,kernel_version=5.14.0-620.el9.x86_64,mem_swap_kb=1048572,mem_total_kb=7864108,num_handles=1,os=Linux,pid=2,realm_id=,realm_name=,zone_id=4c70098d-77bd-4657-832e-1df74767daf5,zone_name=default,zonegroup_id=2c1b45fe-30f6-431c-a57c-d58807958d59,zonegroup_name=default}
Oct  7 09:33:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v94: 73 pgs: 63 unknown, 10 active+clean; 452 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 1.2 KiB/s rd, 1.5 KiB/s wr, 3 op/s
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:26 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.1 scrub starts
Oct  7 09:33:26 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.1 scrub ok
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]: {
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "osd_id": 2,
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "type": "bluestore"
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:    },
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "osd_id": 1,
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "type": "bluestore"
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:    },
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "osd_id": 0,
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:        "type": "bluestore"
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]:    }
Oct  7 09:33:26 np0005473739 laughing_robinson[102047]: }
Oct  7 09:33:26 np0005473739 podman[102015]: 2025-10-07 13:33:26.71598242 +0000 UTC m=+1.113859633 container died 5a2ff325af8cc127feb2ba11be65eef019bcf9d80dbf2549fbd3246c156174c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 09:33:26 np0005473739 systemd[1]: libpod-5a2ff325af8cc127feb2ba11be65eef019bcf9d80dbf2549fbd3246c156174c1.scope: Deactivated successfully.
Oct  7 09:33:26 np0005473739 python3[102640]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   osd get-require-min-compat-client _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-610c9a16e22a12209b18d2687492927b2022b3fefe8e07d035b86a34e67ba811-merged.mount: Deactivated successfully.
Oct  7 09:33:26 np0005473739 podman[102015]: 2025-10-07 13:33:26.775186498 +0000 UTC m=+1.173063711 container remove 5a2ff325af8cc127feb2ba11be65eef019bcf9d80dbf2549fbd3246c156174c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:33:26 np0005473739 systemd[1]: libpod-conmon-5a2ff325af8cc127feb2ba11be65eef019bcf9d80dbf2549fbd3246c156174c1.scope: Deactivated successfully.
Oct  7 09:33:26 np0005473739 podman[102656]: 2025-10-07 13:33:26.808467044 +0000 UTC m=+0.056244728 container create 3dd9254d7ed006b4ce471f49f5eeb911e488388421da06d5d9e3b2851c5a40de (image=quay.io/ceph/ceph:v18, name=inspiring_goodall, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:26 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cdbb194e-b530-46b6-a4e2-8022577b0d06 does not exist
Oct  7 09:33:26 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9b2b5d08-9287-4508-a55c-47262efee35f does not exist
Oct  7 09:33:26 np0005473739 systemd[1]: Started libpod-conmon-3dd9254d7ed006b4ce471f49f5eeb911e488388421da06d5d9e3b2851c5a40de.scope.
Oct  7 09:33:26 np0005473739 podman[102656]: 2025-10-07 13:33:26.784066633 +0000 UTC m=+0.031844277 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7021f4c36a655400a39fe318004d0222bfb5c91efa56834b1fb4353abbb16b/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7c7021f4c36a655400a39fe318004d0222bfb5c91efa56834b1fb4353abbb16b/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e41 do_prune osdmap full prune enabled
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e42 e42: 3 total, 3 up, 3 in
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e42: 3 total, 3 up, 3 in
Oct  7 09:33:26 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev 26a83775-1eef-4c72-9952-7707502fdc19 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"} v 0) v1
Oct  7 09:33:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:26 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=8.026199341s) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active pruub 81.156410217s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:26 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 42 pg[4.0( empty local-lis/les=19/20 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42 pruub=8.026199341s) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown pruub 81.156410217s@ mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:26 np0005473739 podman[102656]: 2025-10-07 13:33:26.921073981 +0000 UTC m=+0.168851645 container init 3dd9254d7ed006b4ce471f49f5eeb911e488388421da06d5d9e3b2851c5a40de (image=quay.io/ceph/ceph:v18, name=inspiring_goodall, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 09:33:26 np0005473739 podman[102656]: 2025-10-07 13:33:26.930605673 +0000 UTC m=+0.178383317 container start 3dd9254d7ed006b4ce471f49f5eeb911e488388421da06d5d9e3b2851c5a40de (image=quay.io/ceph/ceph:v18, name=inspiring_goodall, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:26 np0005473739 podman[102656]: 2025-10-07 13:33:26.935122868 +0000 UTC m=+0.182900512 container attach 3dd9254d7ed006b4ce471f49f5eeb911e488388421da06d5d9e3b2851c5a40de (image=quay.io/ceph/ceph:v18, name=inspiring_goodall, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: log_channel(cluster) log [INF] : Cluster is now healthy
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num", "val": "16"}]': finished
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd get-require-min-compat-client"} v 0) v1
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/653133921' entity='client.admin' cmd=[{"prefix": "osd get-require-min-compat-client"}]: dispatch
Oct  7 09:33:27 np0005473739 inspiring_goodall[102677]: mimic
Oct  7 09:33:27 np0005473739 systemd[1]: libpod-3dd9254d7ed006b4ce471f49f5eeb911e488388421da06d5d9e3b2851c5a40de.scope: Deactivated successfully.
Oct  7 09:33:27 np0005473739 podman[102656]: 2025-10-07 13:33:27.524956224 +0000 UTC m=+0.772733878 container died 3dd9254d7ed006b4ce471f49f5eeb911e488388421da06d5d9e3b2851c5a40de (image=quay.io/ceph/ceph:v18, name=inspiring_goodall, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7c7021f4c36a655400a39fe318004d0222bfb5c91efa56834b1fb4353abbb16b-merged.mount: Deactivated successfully.
Oct  7 09:33:27 np0005473739 podman[102656]: 2025-10-07 13:33:27.576173513 +0000 UTC m=+0.823951157 container remove 3dd9254d7ed006b4ce471f49f5eeb911e488388421da06d5d9e3b2851c5a40de (image=quay.io/ceph/ceph:v18, name=inspiring_goodall, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:33:27 np0005473739 systemd[1]: libpod-conmon-3dd9254d7ed006b4ce471f49f5eeb911e488388421da06d5d9e3b2851c5a40de.scope: Deactivated successfully.
Oct  7 09:33:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] Starting Global Recovery Event,125 pgs not in active + clean state
Oct  7 09:33:27 np0005473739 podman[102935]: 2025-10-07 13:33:27.842745986 +0000 UTC m=+0.067879688 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e42 do_prune osdmap full prune enabled
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e43 e43: 3 total, 3 up, 3 in
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e43: 3 total, 3 up, 3 in
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.16( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.17( empty local-lis/les=19/20 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev 651c113b-4b2a-495e-b8d7-cd19315f9b02 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"} v 0) v1
Oct  7 09:33:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.6( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.19( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.3( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.0( empty local-lis/les=42/43 n=0 ec=19/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.15( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.16( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.17( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 43 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=19/19 les/c/f=20/20/0 sis=42) [0] r=0 lpr=42 pi=[19,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:27 np0005473739 podman[102935]: 2025-10-07 13:33:27.971086196 +0000 UTC m=+0.196219888 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: Health check cleared: POOL_APP_NOT_ENABLED (was: 1 pool(s) do not have an application enabled)
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: Cluster is now healthy
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=42 pruub=8.920594215s) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active pruub 74.545242310s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 42 pg[5.0( empty local-lis/les=21/22 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=42 pruub=8.920594215s) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown pruub 74.545242310s@ mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.19( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.1a( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.1b( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.1c( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.1d( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.1e( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.1( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.2( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.3( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.4( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.5( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.6( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.9( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.a( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.d( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.e( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.7( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.8( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.b( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.c( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.11( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.12( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.15( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.16( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.f( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.10( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.17( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.18( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.13( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.14( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 43 pg[5.1f( empty local-lis/les=21/22 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.1 scrub starts
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.1 scrub ok
Oct  7 09:33:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v97: 135 pgs: 93 unknown, 42 active+clean; 456 KiB data, 80 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 8.0 KiB/s wr, 274 op/s
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"} v 0) v1
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 50fb0fdb-52f9-4c21-b4ee-35d09b2b9e7c does not exist
Oct  7 09:33:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 382d4dfe-438b-4237-82fe-9bff3a84564f does not exist
Oct  7 09:33:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 2268fb5b-553c-413a-9f65-ce14a8488ddd does not exist
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:33:28 np0005473739 python3[103116]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint ceph quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   versions -f json _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:33:28 np0005473739 podman[103143]: 2025-10-07 13:33:28.909464931 +0000 UTC m=+0.067681793 container create c8946d963990760b324fe5b2aa72b897ead531d35612f05668552208e9ed02a5 (image=quay.io/ceph/ceph:v18, name=busy_visvesvaraya, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e43 do_prune osdmap full prune enabled
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e44 e44: 3 total, 3 up, 3 in
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e44: 3 total, 3 up, 3 in
Oct  7 09:33:28 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev 23220501-9d16-4a97-adce-a2482c14ac0f (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  7 09:33:28 np0005473739 systemd[75908]: Starting Mark boot as successful...
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"} v 0) v1
Oct  7 09:33:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.1e( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 systemd[75908]: Finished Mark boot as successful.
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.1d( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.10( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.11( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.1f( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.13( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.12( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.14( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.16( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.17( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.15( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.8( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.b( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.a( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.c( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.9( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.6( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.5( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.0( empty local-lis/les=42/44 n=0 ec=21/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.4( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.f( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.3( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.2( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.1( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.1c( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.d( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.1b( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.e( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.1a( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.7( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.18( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 44 pg[5.19( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=21/21 les/c/f=22/22/0 sis=42) [2] r=0 lpr=42 pi=[21,42)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:28 np0005473739 systemd[1]: Started libpod-conmon-c8946d963990760b324fe5b2aa72b897ead531d35612f05668552208e9ed02a5.scope.
Oct  7 09:33:28 np0005473739 podman[103143]: 2025-10-07 13:33:28.878530289 +0000 UTC m=+0.036747211 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:33:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dc72786ec6f3c797f9377a186878b85f3dfbfa70c61373fcf58bd96819655c/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d8dc72786ec6f3c797f9377a186878b85f3dfbfa70c61373fcf58bd96819655c/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:29 np0005473739 podman[103143]: 2025-10-07 13:33:29.009507493 +0000 UTC m=+0.167724415 container init c8946d963990760b324fe5b2aa72b897ead531d35612f05668552208e9ed02a5 (image=quay.io/ceph/ceph:v18, name=busy_visvesvaraya, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:29 np0005473739 podman[103143]: 2025-10-07 13:33:29.021972486 +0000 UTC m=+0.180189318 container start c8946d963990760b324fe5b2aa72b897ead531d35612f05668552208e9ed02a5 (image=quay.io/ceph/ceph:v18, name=busy_visvesvaraya, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:29 np0005473739 podman[103143]: 2025-10-07 13:33:29.024979979 +0000 UTC m=+0.183196851 container attach c8946d963990760b324fe5b2aa72b897ead531d35612f05668552208e9ed02a5 (image=quay.io/ceph/ceph:v18, name=busy_visvesvaraya, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]: dispatch
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pg_num_actual", "val": "16"}]': finished
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.1 deep-scrub starts
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.1 deep-scrub ok
Oct  7 09:33:29 np0005473739 podman[103277]: 2025-10-07 13:33:29.339021167 +0000 UTC m=+0.039699312 container create 0cd6052677abf6202851ded8f2dc67c66da7353c32401ce02149bcd8eb14d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:33:29 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.2 scrub starts
Oct  7 09:33:29 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.2 scrub ok
Oct  7 09:33:29 np0005473739 systemd[1]: Started libpod-conmon-0cd6052677abf6202851ded8f2dc67c66da7353c32401ce02149bcd8eb14d193.scope.
Oct  7 09:33:29 np0005473739 podman[103277]: 2025-10-07 13:33:29.321738332 +0000 UTC m=+0.022416507 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:29 np0005473739 podman[103277]: 2025-10-07 13:33:29.435652616 +0000 UTC m=+0.136330781 container init 0cd6052677abf6202851ded8f2dc67c66da7353c32401ce02149bcd8eb14d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 09:33:29 np0005473739 podman[103277]: 2025-10-07 13:33:29.441028544 +0000 UTC m=+0.141706699 container start 0cd6052677abf6202851ded8f2dc67c66da7353c32401ce02149bcd8eb14d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 09:33:29 np0005473739 podman[103277]: 2025-10-07 13:33:29.444446358 +0000 UTC m=+0.145124503 container attach 0cd6052677abf6202851ded8f2dc67c66da7353c32401ce02149bcd8eb14d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:29 np0005473739 elastic_jackson[103312]: 167 167
Oct  7 09:33:29 np0005473739 podman[103277]: 2025-10-07 13:33:29.447178664 +0000 UTC m=+0.147856849 container died 0cd6052677abf6202851ded8f2dc67c66da7353c32401ce02149bcd8eb14d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 09:33:29 np0005473739 systemd[1]: libpod-0cd6052677abf6202851ded8f2dc67c66da7353c32401ce02149bcd8eb14d193.scope: Deactivated successfully.
Oct  7 09:33:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5e72bf0cac7e31208979dacfb4ca2125e000552b53e598a16328b0af17fca7ec-merged.mount: Deactivated successfully.
Oct  7 09:33:29 np0005473739 podman[103277]: 2025-10-07 13:33:29.493068255 +0000 UTC m=+0.193746400 container remove 0cd6052677abf6202851ded8f2dc67c66da7353c32401ce02149bcd8eb14d193 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:29 np0005473739 systemd[1]: libpod-conmon-0cd6052677abf6202851ded8f2dc67c66da7353c32401ce02149bcd8eb14d193.scope: Deactivated successfully.
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 44 pg[6.0( v 38'39 (0'0,38'39] local-lis/les=23/24 n=22 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=44 pruub=9.570522308s) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 35'38 mlcod 35'38 active pruub 85.294319153s@ mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [0], acting_primary 0 -> 0, up_primary 0 -> 0, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 44 pg[6.0( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=44 pruub=9.570522308s) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 35'38 mlcod 0'0 unknown pruub 85.294319153s@ mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 podman[103335]: 2025-10-07 13:33:29.642451825 +0000 UTC m=+0.038800099 container create 4119da4b48fe00ab9be48b1704a1a2def1e9c45789d7e3708c4a2c34673d33ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 09:33:29 np0005473739 systemd[1]: Started libpod-conmon-4119da4b48fe00ab9be48b1704a1a2def1e9c45789d7e3708c4a2c34673d33ca.scope.
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "versions", "format": "json"} v 0) v1
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4242682026' entity='client.admin' cmd=[{"prefix": "versions", "format": "json"}]: dispatch
Oct  7 09:33:29 np0005473739 busy_visvesvaraya[103206]: 
Oct  7 09:33:29 np0005473739 busy_visvesvaraya[103206]: {"mon":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"mgr":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"osd":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":3},"mds":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"rgw":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":1},"overall":{"ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)":7}}
Oct  7 09:33:29 np0005473739 podman[103143]: 2025-10-07 13:33:29.705736276 +0000 UTC m=+0.863953118 container died c8946d963990760b324fe5b2aa72b897ead531d35612f05668552208e9ed02a5 (image=quay.io/ceph/ceph:v18, name=busy_visvesvaraya, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 09:33:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:29 np0005473739 systemd[1]: libpod-c8946d963990760b324fe5b2aa72b897ead531d35612f05668552208e9ed02a5.scope: Deactivated successfully.
Oct  7 09:33:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b00426d9463245523b61aa24c8c74777a02e65440a7f4ee9701ca44d3d1a35/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b00426d9463245523b61aa24c8c74777a02e65440a7f4ee9701ca44d3d1a35/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b00426d9463245523b61aa24c8c74777a02e65440a7f4ee9701ca44d3d1a35/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b00426d9463245523b61aa24c8c74777a02e65440a7f4ee9701ca44d3d1a35/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/82b00426d9463245523b61aa24c8c74777a02e65440a7f4ee9701ca44d3d1a35/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:29 np0005473739 podman[103335]: 2025-10-07 13:33:29.625491358 +0000 UTC m=+0.021839642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:29 np0005473739 podman[103335]: 2025-10-07 13:33:29.730391284 +0000 UTC m=+0.126739578 container init 4119da4b48fe00ab9be48b1704a1a2def1e9c45789d7e3708c4a2c34673d33ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 09:33:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d8dc72786ec6f3c797f9377a186878b85f3dfbfa70c61373fcf58bd96819655c-merged.mount: Deactivated successfully.
Oct  7 09:33:29 np0005473739 podman[103335]: 2025-10-07 13:33:29.750423245 +0000 UTC m=+0.146771529 container start 4119da4b48fe00ab9be48b1704a1a2def1e9c45789d7e3708c4a2c34673d33ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 09:33:29 np0005473739 podman[103335]: 2025-10-07 13:33:29.756673297 +0000 UTC m=+0.153021621 container attach 4119da4b48fe00ab9be48b1704a1a2def1e9c45789d7e3708c4a2c34673d33ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=44 pruub=10.314962387s) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active pruub 81.888816833s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 44 pg[7.0( empty local-lis/les=24/25 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=44 pruub=10.314962387s) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown pruub 81.888816833s@ mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 podman[103143]: 2025-10-07 13:33:29.784494912 +0000 UTC m=+0.942711744 container remove c8946d963990760b324fe5b2aa72b897ead531d35612f05668552208e9ed02a5 (image=quay.io/ceph/ceph:v18, name=busy_visvesvaraya, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 09:33:29 np0005473739 systemd[1]: libpod-conmon-c8946d963990760b324fe5b2aa72b897ead531d35612f05668552208e9ed02a5.scope: Deactivated successfully.
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e44 do_prune osdmap full prune enabled
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e45 e45: 3 total, 3 up, 3 in
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e45: 3 total, 3 up, 3 in
Oct  7 09:33:29 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev 07fb5e1a-1830-4b42-9317-f6c98a0e451a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"} v 0) v1
Oct  7 09:33:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.a( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.9( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.12( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.5( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.17( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.10( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.4( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.b( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.8( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.7( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.d( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1e( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.19( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.7( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=24/25 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.2( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.e( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.f( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.c( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.d( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=23/24 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.11( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.0( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=23/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 35'38 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 45 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=23/23 les/c/f=24/24/0 sis=44) [0] r=0 lpr=44 pi=[23,44)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.10( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.17( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.16( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.15( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.14( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.b( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.8( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.12( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.4( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.6( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.0( empty local-lis/les=44/45 n=0 ec=24/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.e( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.5( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.7( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.9( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.3( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.2( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1d( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1e( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.18( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.1b( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.19( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.d( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:29 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 45 pg[7.13( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=24/24 les/c/f=25/25/0 sis=44) [1] r=0 lpr=44 pi=[24,44)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e45 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:30 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.2 scrub starts
Oct  7 09:33:30 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.2 scrub ok
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v100: 181 pgs: 32 peering, 31 unknown, 118 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 111 KiB/s rd, 8.0 KiB/s wr, 274 op/s
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:30 np0005473739 boring_rosalind[103352]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:33:30 np0005473739 boring_rosalind[103352]: --> relative data size: 1.0
Oct  7 09:33:30 np0005473739 boring_rosalind[103352]: --> All data devices are unavailable
Oct  7 09:33:30 np0005473739 systemd[1]: libpod-4119da4b48fe00ab9be48b1704a1a2def1e9c45789d7e3708c4a2c34673d33ca.scope: Deactivated successfully.
Oct  7 09:33:30 np0005473739 podman[103335]: 2025-10-07 13:33:30.764489961 +0000 UTC m=+1.160838235 container died 4119da4b48fe00ab9be48b1704a1a2def1e9c45789d7e3708c4a2c34673d33ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-82b00426d9463245523b61aa24c8c74777a02e65440a7f4ee9701ca44d3d1a35-merged.mount: Deactivated successfully.
Oct  7 09:33:30 np0005473739 podman[103335]: 2025-10-07 13:33:30.836018129 +0000 UTC m=+1.232366443 container remove 4119da4b48fe00ab9be48b1704a1a2def1e9c45789d7e3708c4a2c34673d33ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 09:33:30 np0005473739 systemd[1]: libpod-conmon-4119da4b48fe00ab9be48b1704a1a2def1e9c45789d7e3708c4a2c34673d33ca.scope: Deactivated successfully.
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e45 do_prune osdmap full prune enabled
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e46 e46: 3 total, 3 up, 3 in
Oct  7 09:33:30 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e46: 3 total, 3 up, 3 in
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev 9ebf8a84-57d1-40fa-842e-08f59a82de2c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev 2b5cdb32-89a3-412b-90dc-41fce26f8b76 (PG autoscaler increasing pool 2 PGs from 1 to 32)
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 2b5cdb32-89a3-412b-90dc-41fce26f8b76 (PG autoscaler increasing pool 2 PGs from 1 to 32) in 8 seconds
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev 96bca875-72b0-4e17-bb0a-32cfd03525d4 (PG autoscaler increasing pool 3 PGs from 1 to 32)
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 96bca875-72b0-4e17-bb0a-32cfd03525d4 (PG autoscaler increasing pool 3 PGs from 1 to 32) in 7 seconds
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev 40acfa4c-30fd-4469-b3e4-b737a834c5cd (PG autoscaler increasing pool 4 PGs from 1 to 32)
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 40acfa4c-30fd-4469-b3e4-b737a834c5cd (PG autoscaler increasing pool 4 PGs from 1 to 32) in 6 seconds
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev a93b7722-bf57-4180-b99e-25363383a128 (PG autoscaler increasing pool 5 PGs from 1 to 32)
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event a93b7722-bf57-4180-b99e-25363383a128 (PG autoscaler increasing pool 5 PGs from 1 to 32) in 5 seconds
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev 26a83775-1eef-4c72-9952-7707502fdc19 (PG autoscaler increasing pool 6 PGs from 1 to 16)
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 26a83775-1eef-4c72-9952-7707502fdc19 (PG autoscaler increasing pool 6 PGs from 1 to 16) in 4 seconds
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev 651c113b-4b2a-495e-b8d7-cd19315f9b02 (PG autoscaler increasing pool 7 PGs from 1 to 32)
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 651c113b-4b2a-495e-b8d7-cd19315f9b02 (PG autoscaler increasing pool 7 PGs from 1 to 32) in 3 seconds
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev 23220501-9d16-4a97-adce-a2482c14ac0f (PG autoscaler increasing pool 8 PGs from 1 to 32)
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 23220501-9d16-4a97-adce-a2482c14ac0f (PG autoscaler increasing pool 8 PGs from 1 to 32) in 2 seconds
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev 07fb5e1a-1830-4b42-9317-f6c98a0e451a (PG autoscaler increasing pool 9 PGs from 1 to 32)
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 07fb5e1a-1830-4b42-9317-f6c98a0e451a (PG autoscaler increasing pool 9 PGs from 1 to 32) in 1 seconds
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev 9ebf8a84-57d1-40fa-842e-08f59a82de2c (PG autoscaler increasing pool 10 PGs from 1 to 32)
Oct  7 09:33:30 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 9ebf8a84-57d1-40fa-842e-08f59a82de2c (PG autoscaler increasing pool 10 PGs from 1 to 32) in 0 seconds
Oct  7 09:33:30 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 46 pg[9.0( v 42'385 (0'0,42'385] local-lis/les=35/36 n=177 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.941696167s) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 42'384 mlcod 42'384 active pruub 86.689758301s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:30 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 46 pg[8.0( v 34'4 (0'0,34'4] local-lis/les=33/34 n=4 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.818873405s) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 34'3 mlcod 34'3 active pruub 84.566955566s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:30 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 46 pg[8.0( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=46 pruub=11.818873405s) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 34'3 mlcod 0'0 unknown pruub 84.566955566s@ mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:30 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 46 pg[9.0( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=46 pruub=13.941696167s) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 42'384 mlcod 0'0 unknown pruub 86.689758301s@ mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:33:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:31 np0005473739 podman[103546]: 2025-10-07 13:33:31.53836017 +0000 UTC m=+0.040452454 container create 5ea326b1b36178f29a0797a4d56cfd2b06b9c1471b53415873f4f961c191a445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 09:33:31 np0005473739 systemd[1]: Started libpod-conmon-5ea326b1b36178f29a0797a4d56cfd2b06b9c1471b53415873f4f961c191a445.scope.
Oct  7 09:33:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:31 np0005473739 podman[103546]: 2025-10-07 13:33:31.522893395 +0000 UTC m=+0.024985699 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:31 np0005473739 podman[103546]: 2025-10-07 13:33:31.618909326 +0000 UTC m=+0.121001630 container init 5ea326b1b36178f29a0797a4d56cfd2b06b9c1471b53415873f4f961c191a445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:31 np0005473739 podman[103546]: 2025-10-07 13:33:31.625732723 +0000 UTC m=+0.127825017 container start 5ea326b1b36178f29a0797a4d56cfd2b06b9c1471b53415873f4f961c191a445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 09:33:31 np0005473739 podman[103546]: 2025-10-07 13:33:31.628936292 +0000 UTC m=+0.131028596 container attach 5ea326b1b36178f29a0797a4d56cfd2b06b9c1471b53415873f4f961c191a445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:31 np0005473739 eloquent_hypatia[103563]: 167 167
Oct  7 09:33:31 np0005473739 systemd[1]: libpod-5ea326b1b36178f29a0797a4d56cfd2b06b9c1471b53415873f4f961c191a445.scope: Deactivated successfully.
Oct  7 09:33:31 np0005473739 podman[103546]: 2025-10-07 13:33:31.633835446 +0000 UTC m=+0.135927770 container died 5ea326b1b36178f29a0797a4d56cfd2b06b9c1471b53415873f4f961c191a445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 09:33:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-46c8f05c6e990de090879d5f0f412d7cfeee0fbfc99ec26f8085b9a1455e3004-merged.mount: Deactivated successfully.
Oct  7 09:33:31 np0005473739 podman[103546]: 2025-10-07 13:33:31.687609316 +0000 UTC m=+0.189701600 container remove 5ea326b1b36178f29a0797a4d56cfd2b06b9c1471b53415873f4f961c191a445 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:31 np0005473739 systemd[1]: libpod-conmon-5ea326b1b36178f29a0797a4d56cfd2b06b9c1471b53415873f4f961c191a445.scope: Deactivated successfully.
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.2 scrub starts
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.2 scrub ok
Oct  7 09:33:31 np0005473739 podman[103587]: 2025-10-07 13:33:31.87242974 +0000 UTC m=+0.052836374 container create 3a9ad134576d49d502bec51070119174774c1ea77457008e924e29e3a552cad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:33:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e46 do_prune osdmap full prune enabled
Oct  7 09:33:31 np0005473739 podman[103587]: 2025-10-07 13:33:31.844016318 +0000 UTC m=+0.024423002 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e47 e47: 3 total, 3 up, 3 in
Oct  7 09:33:31 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e47: 3 total, 3 up, 3 in
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.15( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.14( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.15( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.17( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.16( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 systemd[1]: Started libpod-conmon-3a9ad134576d49d502bec51070119174774c1ea77457008e924e29e3a552cad6.scope.
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.17( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.16( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.11( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.11( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.10( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.12( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.13( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.14( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.12( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.c( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.d( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.d( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.c( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.f( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.10( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.e( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.8( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.a( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.13( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.9( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.b( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.3( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=1 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.2( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1( v 34'4 (0'0,34'4] local-lis/les=33/34 n=1 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.f( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.e( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.b( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.a( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.9( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.8( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.2( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=1 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.3( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.7( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.6( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.6( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.7( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.5( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.4( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.4( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=1 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.5( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1b( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1a( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1b( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1a( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.19( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.18( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.18( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.19( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1e( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1f( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1f( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1e( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1d( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1d( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1c( v 42'385 lc 0'0 (0'0,42'385] local-lis/les=35/36 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1c( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=33/34 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.14( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.15( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.16( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.11( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.17( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.11( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.10( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.14( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.12( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.d( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.d( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.c( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.c( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.e( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.13( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.a( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.8( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.b( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.9( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.3( v 34'4 (0'0,34'4] local-lis/les=46/47 n=1 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.12( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.2( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.0( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=35/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 42'384 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.0( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=33/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 34'3 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1( v 34'4 (0'0,34'4] local-lis/les=46/47 n=1 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.f( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.e( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.b( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.a( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.10( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.9( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.7( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.3( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.6( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.6( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.2( v 34'4 (0'0,34'4] local-lis/les=46/47 n=1 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.4( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.4( v 34'4 (0'0,34'4] local-lis/les=46/47 n=1 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1b( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.5( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.5( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1a( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.18( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.8( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1b( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.18( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1a( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1f( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1d( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=35/35 les/c/f=36/36/0 sis=46) [1] r=0 lpr=46 pi=[35,46)/1 crt=42'385 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1d( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1c( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.19( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 47 pg[8.1e( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=33/33 les/c/f=34/34/0 sis=46) [1] r=0 lpr=46 pi=[33,46)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2badcbd32a6c09c5e78fc6b725f7e67351a9e824f39b26756c9273feba9672/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2badcbd32a6c09c5e78fc6b725f7e67351a9e824f39b26756c9273feba9672/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2badcbd32a6c09c5e78fc6b725f7e67351a9e824f39b26756c9273feba9672/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca2badcbd32a6c09c5e78fc6b725f7e67351a9e824f39b26756c9273feba9672/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:32 np0005473739 podman[103587]: 2025-10-07 13:33:32.012633347 +0000 UTC m=+0.193040001 container init 3a9ad134576d49d502bec51070119174774c1ea77457008e924e29e3a552cad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:32 np0005473739 podman[103587]: 2025-10-07 13:33:32.024334919 +0000 UTC m=+0.204741563 container start 3a9ad134576d49d502bec51070119174774c1ea77457008e924e29e3a552cad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 09:33:32 np0005473739 podman[103587]: 2025-10-07 13:33:32.028586436 +0000 UTC m=+0.208993040 container attach 3a9ad134576d49d502bec51070119174774c1ea77457008e924e29e3a552cad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 09:33:32 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.3 scrub starts
Oct  7 09:33:32 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.3 scrub ok
Oct  7 09:33:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v103: 243 pgs: 32 peering, 93 unknown, 118 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:32 np0005473739 ceph-mgr[74587]: [progress INFO root] Writing back 14 completed events
Oct  7 09:33:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  7 09:33:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:32 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.3 scrub starts
Oct  7 09:33:32 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.3 scrub ok
Oct  7 09:33:32 np0005473739 nice_noether[103604]: {
Oct  7 09:33:32 np0005473739 nice_noether[103604]:    "0": [
Oct  7 09:33:32 np0005473739 nice_noether[103604]:        {
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "devices": [
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "/dev/loop3"
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            ],
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_name": "ceph_lv0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_size": "21470642176",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "name": "ceph_lv0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "tags": {
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.crush_device_class": "",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.encrypted": "0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.osd_id": "0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.type": "block",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.vdo": "0"
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            },
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "type": "block",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "vg_name": "ceph_vg0"
Oct  7 09:33:32 np0005473739 nice_noether[103604]:        }
Oct  7 09:33:32 np0005473739 nice_noether[103604]:    ],
Oct  7 09:33:32 np0005473739 nice_noether[103604]:    "1": [
Oct  7 09:33:32 np0005473739 nice_noether[103604]:        {
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "devices": [
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "/dev/loop4"
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            ],
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_name": "ceph_lv1",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_size": "21470642176",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "name": "ceph_lv1",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "tags": {
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.crush_device_class": "",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.encrypted": "0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.osd_id": "1",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.type": "block",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.vdo": "0"
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            },
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "type": "block",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "vg_name": "ceph_vg1"
Oct  7 09:33:32 np0005473739 nice_noether[103604]:        }
Oct  7 09:33:32 np0005473739 nice_noether[103604]:    ],
Oct  7 09:33:32 np0005473739 nice_noether[103604]:    "2": [
Oct  7 09:33:32 np0005473739 nice_noether[103604]:        {
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "devices": [
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "/dev/loop5"
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            ],
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_name": "ceph_lv2",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_size": "21470642176",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "name": "ceph_lv2",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "tags": {
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.cluster_name": "ceph",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.crush_device_class": "",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.encrypted": "0",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.osd_id": "2",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.type": "block",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:                "ceph.vdo": "0"
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            },
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "type": "block",
Oct  7 09:33:32 np0005473739 nice_noether[103604]:            "vg_name": "ceph_vg2"
Oct  7 09:33:32 np0005473739 nice_noether[103604]:        }
Oct  7 09:33:32 np0005473739 nice_noether[103604]:    ]
Oct  7 09:33:32 np0005473739 nice_noether[103604]: }
Oct  7 09:33:32 np0005473739 systemd[1]: libpod-3a9ad134576d49d502bec51070119174774c1ea77457008e924e29e3a552cad6.scope: Deactivated successfully.
Oct  7 09:33:32 np0005473739 podman[103587]: 2025-10-07 13:33:32.773513428 +0000 UTC m=+0.953920052 container died 3a9ad134576d49d502bec51070119174774c1ea77457008e924e29e3a552cad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:33:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ca2badcbd32a6c09c5e78fc6b725f7e67351a9e824f39b26756c9273feba9672-merged.mount: Deactivated successfully.
Oct  7 09:33:32 np0005473739 podman[103587]: 2025-10-07 13:33:32.830897867 +0000 UTC m=+1.011304471 container remove 3a9ad134576d49d502bec51070119174774c1ea77457008e924e29e3a552cad6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:33:32 np0005473739 systemd[1]: libpod-conmon-3a9ad134576d49d502bec51070119174774c1ea77457008e924e29e3a552cad6.scope: Deactivated successfully.
Oct  7 09:33:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e47 do_prune osdmap full prune enabled
Oct  7 09:33:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e48 e48: 3 total, 3 up, 3 in
Oct  7 09:33:33 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e48: 3 total, 3 up, 3 in
Oct  7 09:33:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:33 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 48 pg[10.0( v 38'16 (0'0,38'16] local-lis/les=37/38 n=8 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=13.736515999s) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 38'15 mlcod 38'15 active pruub 84.463912964s@ mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [2], acting_primary 2 -> 2, up_primary 2 -> 2, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:33 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 48 pg[10.0( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=48 pruub=13.736515999s) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 38'15 mlcod 0'0 unknown pruub 84.463912964s@ mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:33 np0005473739 podman[103765]: 2025-10-07 13:33:33.528826547 +0000 UTC m=+0.044809674 container create e250b6195a8ac40b81c92329a0d9afcd9d4d66dc1b496e3fcf423d92feb07810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:33:33 np0005473739 systemd[1]: Started libpod-conmon-e250b6195a8ac40b81c92329a0d9afcd9d4d66dc1b496e3fcf423d92feb07810.scope.
Oct  7 09:33:33 np0005473739 podman[103765]: 2025-10-07 13:33:33.502757139 +0000 UTC m=+0.018740246 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:33 np0005473739 podman[103765]: 2025-10-07 13:33:33.624348774 +0000 UTC m=+0.140331911 container init e250b6195a8ac40b81c92329a0d9afcd9d4d66dc1b496e3fcf423d92feb07810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:33 np0005473739 podman[103765]: 2025-10-07 13:33:33.636295633 +0000 UTC m=+0.152278730 container start e250b6195a8ac40b81c92329a0d9afcd9d4d66dc1b496e3fcf423d92feb07810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 09:33:33 np0005473739 podman[103765]: 2025-10-07 13:33:33.639559043 +0000 UTC m=+0.155542140 container attach e250b6195a8ac40b81c92329a0d9afcd9d4d66dc1b496e3fcf423d92feb07810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 09:33:33 np0005473739 stoic_blackwell[103782]: 167 167
Oct  7 09:33:33 np0005473739 systemd[1]: libpod-e250b6195a8ac40b81c92329a0d9afcd9d4d66dc1b496e3fcf423d92feb07810.scope: Deactivated successfully.
Oct  7 09:33:33 np0005473739 podman[103765]: 2025-10-07 13:33:33.646634678 +0000 UTC m=+0.162617835 container died e250b6195a8ac40b81c92329a0d9afcd9d4d66dc1b496e3fcf423d92feb07810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:33:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-575b24844dd528af481183b4556ed6bf188c16a111859ae1bf6f0a4eaf7218f8-merged.mount: Deactivated successfully.
Oct  7 09:33:33 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.4 scrub starts
Oct  7 09:33:33 np0005473739 podman[103765]: 2025-10-07 13:33:33.697117296 +0000 UTC m=+0.213100423 container remove e250b6195a8ac40b81c92329a0d9afcd9d4d66dc1b496e3fcf423d92feb07810 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_blackwell, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:33:33 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.4 scrub ok
Oct  7 09:33:33 np0005473739 systemd[1]: libpod-conmon-e250b6195a8ac40b81c92329a0d9afcd9d4d66dc1b496e3fcf423d92feb07810.scope: Deactivated successfully.
Oct  7 09:33:33 np0005473739 podman[103805]: 2025-10-07 13:33:33.90261306 +0000 UTC m=+0.055640942 container create a7740a40ca927e4c71baff3c2d48e8fb7f6d18d7e871c12dc6fc384a6904d9a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 09:33:33 np0005473739 systemd[1]: Started libpod-conmon-a7740a40ca927e4c71baff3c2d48e8fb7f6d18d7e871c12dc6fc384a6904d9a8.scope.
Oct  7 09:33:33 np0005473739 podman[103805]: 2025-10-07 13:33:33.873318694 +0000 UTC m=+0.026346576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:33:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:33:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e18630940096ed5a8cae83bcba3a97d71a8ac690b710e9dddcd40c6a1b76d94/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e18630940096ed5a8cae83bcba3a97d71a8ac690b710e9dddcd40c6a1b76d94/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e18630940096ed5a8cae83bcba3a97d71a8ac690b710e9dddcd40c6a1b76d94/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e18630940096ed5a8cae83bcba3a97d71a8ac690b710e9dddcd40c6a1b76d94/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:33:33 np0005473739 podman[103805]: 2025-10-07 13:33:33.991893465 +0000 UTC m=+0.144921387 container init a7740a40ca927e4c71baff3c2d48e8fb7f6d18d7e871c12dc6fc384a6904d9a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:33:34 np0005473739 podman[103805]: 2025-10-07 13:33:34.005410927 +0000 UTC m=+0.158438799 container start a7740a40ca927e4c71baff3c2d48e8fb7f6d18d7e871c12dc6fc384a6904d9a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 09:33:34 np0005473739 podman[103805]: 2025-10-07 13:33:34.009108179 +0000 UTC m=+0.162136091 container attach a7740a40ca927e4c71baff3c2d48e8fb7f6d18d7e871c12dc6fc384a6904d9a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:33:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e48 do_prune osdmap full prune enabled
Oct  7 09:33:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e49 e49: 3 total, 3 up, 3 in
Oct  7 09:33:34 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e49: 3 total, 3 up, 3 in
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.11( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.12( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.10( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1f( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1e( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1d( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1c( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1b( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1a( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.19( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.18( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.7( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.6( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.5( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.4( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.8( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.f( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.9( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.a( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.b( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.3( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.c( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.d( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.e( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1( v 38'16 (0'0,38'16] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.2( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.13( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.15( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.16( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.17( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.14( v 38'16 lc 0'0 (0'0,38'16] local-lis/les=37/38 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.12( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.10( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1f( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1e( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1d( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1c( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.11( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1b( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1a( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.19( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.7( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.6( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.18( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.5( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.4( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.8( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.f( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.0( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=37/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 38'15 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.9( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.a( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.b( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.3( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.d( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.e( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.1( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.2( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.13( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.15( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.16( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.17( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.14( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 49 pg[10.c( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=37/37 les/c/f=38/38/0 sis=48) [2] r=0 lpr=48 pi=[37,48)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v106: 274 pgs: 1 peering, 31 unknown, 242 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:34 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.5 deep-scrub starts
Oct  7 09:33:34 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.5 deep-scrub ok
Oct  7 09:33:34 np0005473739 sweet_easley[103821]: {
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "osd_id": 2,
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "type": "bluestore"
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:    },
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "osd_id": 1,
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "type": "bluestore"
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:    },
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "osd_id": 0,
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:        "type": "bluestore"
Oct  7 09:33:34 np0005473739 sweet_easley[103821]:    }
Oct  7 09:33:34 np0005473739 sweet_easley[103821]: }
Oct  7 09:33:34 np0005473739 systemd[1]: libpod-a7740a40ca927e4c71baff3c2d48e8fb7f6d18d7e871c12dc6fc384a6904d9a8.scope: Deactivated successfully.
Oct  7 09:33:34 np0005473739 podman[103805]: 2025-10-07 13:33:34.94519024 +0000 UTC m=+1.098218142 container died a7740a40ca927e4c71baff3c2d48e8fb7f6d18d7e871c12dc6fc384a6904d9a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 09:33:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3e18630940096ed5a8cae83bcba3a97d71a8ac690b710e9dddcd40c6a1b76d94-merged.mount: Deactivated successfully.
Oct  7 09:33:35 np0005473739 podman[103805]: 2025-10-07 13:33:35.004454481 +0000 UTC m=+1.157482353 container remove a7740a40ca927e4c71baff3c2d48e8fb7f6d18d7e871c12dc6fc384a6904d9a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 09:33:35 np0005473739 systemd[1]: libpod-conmon-a7740a40ca927e4c71baff3c2d48e8fb7f6d18d7e871c12dc6fc384a6904d9a8.scope: Deactivated successfully.
Oct  7 09:33:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:33:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:33:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7e042e3d-2d00-417c-8c49-1bcd348cdb3f does not exist
Oct  7 09:33:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cf4c19a1-b7b4-407e-87a4-532424229bdc does not exist
Oct  7 09:33:35 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:35 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:35 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.3 scrub starts
Oct  7 09:33:35 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.3 scrub ok
Oct  7 09:33:35 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.6 scrub starts
Oct  7 09:33:35 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.6 scrub ok
Oct  7 09:33:36 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.4 scrub starts
Oct  7 09:33:36 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.4 scrub ok
Oct  7 09:33:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v107: 274 pgs: 1 peering, 31 unknown, 242 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:37 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.7 scrub starts
Oct  7 09:33:37 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.7 scrub ok
Oct  7 09:33:38 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.4 scrub starts
Oct  7 09:33:38 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.4 scrub ok
Oct  7 09:33:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v108: 274 pgs: 1 peering, 31 unknown, 242 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:38 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.8 scrub starts
Oct  7 09:33:38 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.8 scrub ok
Oct  7 09:33:39 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.5 scrub starts
Oct  7 09:33:39 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.5 scrub ok
Oct  7 09:33:39 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.9 scrub starts
Oct  7 09:33:39 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.9 scrub ok
Oct  7 09:33:40 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.6 scrub starts
Oct  7 09:33:40 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.6 scrub ok
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e49 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v109: 274 pgs: 274 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"} v 0) v1
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  7 09:33:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e49 do_prune osdmap full prune enabled
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e50 e50: 3 total, 3 up, 3 in
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e50: 3 total, 3 up, 3 in
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.745432854s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.146018982s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743768692s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.144348145s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.8( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743715286s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.144348145s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.1c( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.745388031s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.146018982s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.761000633s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.161674500s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760977745s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.161674500s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.761084557s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.161682129s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760876656s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.161682129s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743696213s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.144599915s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743689537s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.144622803s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743659973s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.144622803s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760763168s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.161766052s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760741234s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.161766052s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744175911s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145339966s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.5( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744153023s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145339966s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760704041s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.161926270s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.1b( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743672371s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.144599915s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743341446s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.144592285s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760678291s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.161926270s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744280815s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145584106s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.9( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744261742s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145584106s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.7( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743257523s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.144592285s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744174957s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145568848s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760598183s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.162025452s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.4( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744150162s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145568848s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744212151s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145706177s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760437012s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.161933899s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.1( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744194984s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145706177s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760411263s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.161933899s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760478020s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.162025452s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744104385s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145629883s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.2( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744041443s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145629883s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743968964s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145637512s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.d( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743947983s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145637512s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744054794s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145759583s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.e( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744030952s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145759583s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760311127s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.162109375s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743970871s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145774841s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.744021416s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145858765s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.f( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743949890s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145774841s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760284424s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.162109375s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.10( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743998528s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145858765s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743944168s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145881653s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743905067s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145858765s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.11( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743924141s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145881653s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743845940s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145828247s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.12( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743885040s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145858765s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760092735s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.162101746s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.13( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743823051s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145828247s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.760066032s) [1] r=-1 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.162101746s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743792534s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145858765s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.14( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743772507s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145858765s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743779182s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145927429s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.18( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743757248s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145927429s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743126869s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 98.145401001s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[4.1a( empty local-lis/les=42/43 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50 pruub=10.743107796s) [2] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 98.145401001s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]: dispatch
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[4.10( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[4.18( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[4.1b( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[4.1a( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[4.e( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[4.1( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[4.a( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[4.13( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[4.12( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[4.14( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[4.8( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[4.9( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[6.9( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[6.7( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[4.5( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[6.5( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[4.7( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[4.11( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[6.1( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[4.d( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[4.1c( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[4.f( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.735075951s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.495254517s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.1d( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.735041618s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.495254517s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.726810455s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.487190247s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.12( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.921322823s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.681449890s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.19( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.722075462s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482505798s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.12( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.920886040s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.681449890s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.19( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.721944809s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482505798s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.1e( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.726789474s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.487190247s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.721655846s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482391357s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.18( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.721620560s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482391357s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.10( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.927139282s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688095093s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.17( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.721528053s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482421875s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.11( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.928087234s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688507080s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.17( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.721442223s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482421875s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.1e( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.927395821s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688423157s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.734072685s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.495277405s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.10( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.926895142s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688095093s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.11( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.734047890s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.495277405s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.1e( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.927223206s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688423157s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.16( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.721085548s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482406616s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.15( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.720922470s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482360840s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733972549s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.495429993s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.15( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.720897675s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482360840s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.19( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.16( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.720992088s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482406616s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.12( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733955383s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.495429993s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.13( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.720433235s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482330322s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.13( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.720323563s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482330322s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.1a( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.926518440s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688583374s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.1a( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.926427841s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688583374s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733243942s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.495437622s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.14( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733220100s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.495437622s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733412743s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.495376587s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733148575s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.495491028s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.15( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733123779s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.495491028s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.19( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.926182747s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688583374s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.19( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.926150322s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688583374s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.11( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.719789505s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482307434s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733129501s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.495689392s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.16( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733108521s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.495689392s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.7( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.926015854s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688629150s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.7( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.925996780s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688629150s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.11( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.719764709s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482307434s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.719809532s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482498169s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.719792366s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482498169s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.6( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.925896645s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688636780s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.6( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.925882339s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688636780s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733105659s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.495910645s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.9( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733090401s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.495910645s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.719477654s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482398987s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.11( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.926415443s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688507080s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.719458580s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482398987s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.4( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.925690651s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688682556s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.4( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.925665855s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688682556s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.732810974s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.495903015s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.c( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.732766151s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.495903015s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.719039917s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482215881s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.719010353s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482215881s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.13( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.732870102s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.495376587s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733146667s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.496513367s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.7( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.733120918s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.496513367s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.f( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.925223351s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688735962s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.8( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.925523758s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688705444s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.8( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.925132751s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688705444s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.7( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.718325615s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.481819153s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.7( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.718203545s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.481819153s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.f( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.925078392s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688735962s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.732373238s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.496063232s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.f( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.732357025s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.496063232s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.9( v 49'17 (0'0,49'17] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.925050735s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 38'16 mlcod 38'16 active pruub 87.688789368s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.9( v 49'17 (0'0,49'17] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.925023079s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 38'16 mlcod 0'0 unknown NOTIFY pruub 87.688789368s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.732157707s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.495994568s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.5( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.732136726s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.495994568s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.b( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.924945831s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688842773s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.2( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.718304634s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482200623s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.b( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.924922943s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688842773s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.2( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.718278885s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482200623s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.3( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.718533516s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.482505798s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.3( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.718515396s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.482505798s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.731992722s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.496017456s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.8( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.717708588s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.481811523s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.732014656s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.496154785s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.4( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.717331886s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.481491089s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.4( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.731852531s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.496017456s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.18( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[5.1e( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[10.1e( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.16( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[5.14( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.13( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[5.15( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[10.7( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.f( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[10.4( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.8( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.717209816s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.481811523s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.3( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.731526375s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.496154785s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.4( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.716828346s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.481491089s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.731484413s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.496253967s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.e( v 49'17 (0'0,49'17] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.924187660s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 38'16 mlcod 38'16 active pruub 87.688972473s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.716694832s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.481491089s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[4.4( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[4.2( empty local-lis/les=0/0 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.d( v 49'17 (0'0,49'17] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.924036980s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 38'16 mlcod 38'16 active pruub 87.688934326s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.2( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.731187820s) [0] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.496253967s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.e( v 49'17 (0'0,49'17] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.923883438s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 38'16 mlcod 0'0 unknown NOTIFY pruub 87.688972473s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.5( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.716394424s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.481491089s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.b( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.6( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.716168404s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.481498718s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.730944633s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.496292114s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.1( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.923638344s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.688995361s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.6( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.716148376s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.481498718s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.1( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.730911255s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.496292114s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.1( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.923592567s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.688995361s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.9( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.715916634s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.481414795s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.9( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.715902328s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.481414795s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.715794563s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.481399536s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.a( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.715779305s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.481399536s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.13( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.923326492s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.689025879s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.13( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.923303604s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.689025879s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.1b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.716052055s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.481826782s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.1b( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.716036797s) [1] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.481826782s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.14( v 49'17 (0'0,49'17] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.923235893s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 38'16 mlcod 38'16 active pruub 87.689094543s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.14( v 49'17 (0'0,49'17] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.923203468s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 38'16 mlcod 0'0 unknown NOTIFY pruub 87.689094543s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.710771561s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.476699829s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.1c( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.710758209s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.476699829s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.15( v 49'17 (0'0,49'17] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.923037529s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 38'16 mlcod 38'16 active pruub 87.689041138s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.730491638s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.496505737s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.1a( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.730479240s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.496505737s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.15( v 49'17 (0'0,49'17] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.923012733s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 38'16 mlcod 0'0 unknown NOTIFY pruub 87.689041138s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.710574150s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.476722717s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.1d( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.710558891s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.476722717s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.16( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.922860146s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.689064026s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.16( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.922840118s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.689064026s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.730359077s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.496627808s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.19( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.730345726s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.496627808s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.2( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.922686577s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.689018250s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.2( v 38'16 (0'0,38'16] local-lis/les=48/49 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.922660828s) [1] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.689018250s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.17( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.922657967s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active pruub 87.689071655s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.1f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.715110779s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 87.481536865s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.17( v 38'16 (0'0,38'16] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.922541618s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 87.689071655s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[2.1f( empty local-lis/les=40/41 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.714995384s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 87.481536865s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.729994774s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active pruub 90.496612549s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[5.18( empty local-lis/les=42/44 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50 pruub=11.729974747s) [1] r=-1 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 90.496612549s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.1f( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[3.1e( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[7.1a( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[8.15( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[6.3( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[8.11( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[8.12( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[3.1d( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[7.1c( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[3.7( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[10.d( v 49'17 (0'0,49'17] local-lis/les=48/49 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50 pruub=8.920696259s) [0] r=-1 lpr=50 pi=[48,50)/1 crt=38'16 lcod 38'16 mlcod 0'0 unknown NOTIFY pruub 87.688934326s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[3.18( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[7.1b( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.14( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.716605186s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728622437s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[7.2( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.746139526s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.758171082s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.14( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.756026268s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.768089294s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.1f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.716577530s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728622437s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.1b( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.746112823s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.758171082s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.14( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.756005287s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768089294s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.716099739s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728279114s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[7.18( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[8.d( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.1e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.716070175s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728279114s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.745755196s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.758163452s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[3.5( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.1b( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[7.1f( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[7.5( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.1a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.745735168s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.758163452s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.10( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[7.1( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.11( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.15( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.748768806s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.761276245s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[7.3( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.15( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.748754501s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.761276245s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.755154610s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.767730713s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.c( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.715502739s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728240967s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.1d( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.715484619s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728240967s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.6( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754824638s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.767715454s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754794121s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.767715454s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.745104790s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.758163452s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.18( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.745084763s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.758163452s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.11( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.715064049s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728263855s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.1b( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.715022087s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728263855s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.744832039s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.758171082s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[5.7( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.1f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.744811058s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.758171082s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.10( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.759515762s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.772979736s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[10.8( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.11( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754513741s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.767997742s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.10( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.759494781s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.772979736s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.e( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.11( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754329681s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.768028259s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.11( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754358292s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.767997742s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.11( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754308701s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768028259s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.12( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754200935s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.768051147s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.12( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754179955s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768051147s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754083633s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.768066406s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754062653s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768066406s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.744005203s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.758110046s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.1c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.743988037s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.758110046s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.743773460s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.758010864s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.3( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.743755341s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.758010864s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.713961601s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728279114s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.713953018s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728294373s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.7( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.713934898s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728294373s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.18( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.713880539s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728279114s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[7.c( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[10.9( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[3.8( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[5.5( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.c( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753797531s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.768264771s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.c( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753775597s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768264771s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.743485451s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.758033752s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.2( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.743457794s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.758033752s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.713699341s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728317261s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.6( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.713679314s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728317261s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.d( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753508568s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.768257141s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.743244171s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.758018494s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.d( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753483772s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768257141s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.1( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.743224144s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.758018494s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.3( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.713501930s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728424072s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.5( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.713480949s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728424072s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[7.8( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.e( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753390312s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.768455505s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.e( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753375053s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768455505s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753164291s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.768325806s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753150940s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768325806s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.713179588s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728439331s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.3( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.713164330s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728439331s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.9( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753252029s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.768600464s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.9( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753238678s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768600464s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.742516518s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.757942200s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.5( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.742504120s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.757942200s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.d( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753608704s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.768112183s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.d( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.752593994s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768112183s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.712806702s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728431702s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.1( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.712791443s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728431702s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.752080917s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.767730713s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.711334229s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728485107s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[8.2( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[7.a( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.2( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[5.4( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[5.3( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.8( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[5.2( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[10.e( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[10.1( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[3.e( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.1c( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[10.15( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[8.4( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.b( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.750853539s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.768585205s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.740164757s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.757911682s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.710713387s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728469849s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.b( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.750830650s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768585205s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.c( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.740144730s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.757911682s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.8( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.710683823s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728469849s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.739899635s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.757820129s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.1d( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.f( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.739869118s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.757820129s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.1( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.750853539s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.768844604s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.f( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.750842094s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.768829346s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[10.16( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[7.15( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[3.11( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[10.17( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[7.11( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[2.1f( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[3.16( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.f( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.750823021s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768829346s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.1( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.750826836s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768844604s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.a( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.710809708s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728485107s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.739739418s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.757820129s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.4( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.739718437s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.757820129s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.b( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.750783920s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.768905640s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.b( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.750767708s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.768905640s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.739620209s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.757804871s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.6( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.739598274s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.757804871s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.9( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754739761s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.772979736s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.9( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754718781s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.772979736s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.710264206s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728546143s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.9( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.710238457s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728546143s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.739235878s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.757614136s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.710161209s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728569031s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.8( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.739214897s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.757614136s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.c( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.710146904s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728569031s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.2( v 34'4 (0'0,34'4] local-lis/les=46/47 n=1 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754999161s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.773429871s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.2( v 34'4 (0'0,34'4] local-lis/les=46/47 n=1 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754937172s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773429871s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.3( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754520416s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.773063660s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.739489555s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.758056641s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.3( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754496574s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773063660s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.6( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754788399s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.773406982s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.9( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.739468575s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.758056641s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.6( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754769325s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773406982s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.755342484s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.774017334s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.755325317s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.774017334s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.738876343s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.757614136s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.709821701s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728584290s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.a( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.738856316s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.757614136s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.e( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.709798813s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728584290s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.5( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754696846s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.773582458s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.709712029s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728599548s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[10.d( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.4( v 34'4 (0'0,34'4] local-lis/les=46/47 n=1 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754634857s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.773521423s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.5( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754678726s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773582458s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.4( v 34'4 (0'0,34'4] local-lis/les=46/47 n=1 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.754596710s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773521423s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.737708092s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.757537842s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.15( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.737686157s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.757537842s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.708492279s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728591919s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.9( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.11( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.708468437s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728591919s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.1a( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753401756s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.773666382s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.1a( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753377914s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773666382s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.708255768s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728652954s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[7.e( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.1b( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753207207s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.773597717s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.12( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.708231926s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728652954s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[8.1b( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.18( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753163338s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.773696899s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 50 pg[8.1c( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.1b( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753170013s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773597717s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.18( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753149033s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773696899s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753190041s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.773818970s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753175735s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773818970s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.1f( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753061295s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.773857117s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.1f( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.753040314s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773857117s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.752959251s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.773872375s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.752943039s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773872375s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.730390549s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.751464844s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.11( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.730374336s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.751464844s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.707442284s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728614807s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.15( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.707427979s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728614807s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.707345009s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728622437s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.d( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.16( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.707333565s) [2] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728622437s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.1d( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.752758980s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.774131775s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.1d( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.752745628s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.774131775s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.1d( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.752654076s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 97.774124146s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[9.1d( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.752639771s) [0] r=-1 lpr=50 pi=[46,50)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.774124146s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.736194611s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.757873535s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.e( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.736169815s) [2] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.757873535s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.f( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.709612846s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728599548s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.1b( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.751697540s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.773529053s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.706782341s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active pruub 91.728645325s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.1c( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.752246857s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active pruub 97.774139404s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.1b( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.751594543s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.773529053s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[3.17( empty local-lis/les=40/41 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50 pruub=8.706699371s) [0] r=-1 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 91.728645325s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.1( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.1d( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[10.12( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[2.17( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.735681534s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active pruub 95.758316040s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[7.13( empty local-lis/les=44/45 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50 pruub=12.735636711s) [0] r=-1 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown NOTIFY pruub 95.758316040s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[8.1c( v 34'4 (0'0,34'4] local-lis/les=46/47 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50 pruub=14.751345634s) [2] r=-1 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 97.774139404s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.11( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[10.10( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.12( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[2.15( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[10.1a( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[10.19( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.16( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.b( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[7.f( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.f( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[10.6( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.9( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[10.11( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[2.d( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.a( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.c( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.13( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[2.7( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[10.f( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.1( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.f( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[7.4( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.b( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[7.6( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.9( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.9( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[10.b( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.c( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[2.3( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[2.5( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.1( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[2.6( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[2.9( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[2.a( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[10.13( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[2.1b( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[10.14( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.1a( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.19( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[10.2( empty local-lis/les=0/0 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[5.18( empty local-lis/les=0/0 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[7.9( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.6( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 50 pg[2.4( empty local-lis/les=0/0 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.3( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.5( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.1a( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.12( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.18( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.1b( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.1f( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.15( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[8.1d( empty local-lis/les=0/0 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[9.1d( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.f( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[3.17( empty local-lis/les=0/0 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:41 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 50 pg[7.13( empty local-lis/les=0/0 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e50 do_prune osdmap full prune enabled
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e51 e51: 3 total, 3 up, 3 in
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e51: 3 total, 3 up, 3 in
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.11( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.5( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.b( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": ".rgw.root", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "backups", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.data", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.control", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "2"}]': finished
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "images", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "vms", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "volumes", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.5( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.5( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.b( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.b( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.9( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.d( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.9( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.9( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.11( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.11( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.1( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.3( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.1d( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.d( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.d( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.1b( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=-1 lpr=51 pi=[46,51)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.19( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[4.e( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[4.1a( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[4.1b( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[3.1e( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[4.1( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.1( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.1( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.3( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.3( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.1b( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.1b( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.1d( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[9.1d( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.18( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.1b( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.10( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.f( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[7.4( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[10.9( v 49'17 lc 38'15 (0'0,49'17] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=49'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.b( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[5.7( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[5.1e( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.c( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[7.1f( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.1( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.1d( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[7.18( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[10.8( v 38'16 (0'0,38'16] local-lis/les=50/51 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[10.4( v 38'16 (0'0,38'16] local-lis/les=50/51 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[10.15( v 49'17 lc 38'5 (0'0,49'17] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=49'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.6( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[7.9( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.9( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[7.6( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[5.4( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.1c( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.f( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[10.7( v 38'16 (0'0,38'16] local-lis/les=50/51 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.2( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[10.17( v 38'16 (0'0,38'16] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[5.5( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.1f( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[5.2( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[10.d( v 49'17 lc 38'9 (0'0,49'17] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=49'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.3( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.c( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.f( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[5.3( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.6( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[7.3( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[10.e( v 49'17 lc 38'7 (0'0,49'17] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=49'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[7.f( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.9( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.b( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.a( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[7.13( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.e( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[4.8( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[4.9( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[8.15( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[7.1a( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[3.1d( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[8.11( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.16( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.17( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.1d( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[10.1e( v 38'16 (0'0,38'16] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.1f( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[5.15( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.18( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.15( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.8( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[5.14( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [0] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.12( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.13( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[7.c( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[4.1c( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[3.7( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[4.a( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[3.5( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[7.1( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.1a( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[8.2( v 34'4 (0'0,34'4] local-lis/les=50/51 n=1 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[7.2( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[7.5( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[4.18( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[7.e( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[8.4( v 34'4 (0'0,34'4] local-lis/les=50/51 n=1 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[3.8( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[3.e( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[3.11( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[8.1b( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[2.11( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[8.d( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[7.8( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[3.16( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[7.11( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[7.a( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[4.13( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[6.7( v 38'39 lc 35'21 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[7.1c( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[3.18( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [2] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[8.1c( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[8.12( v 34'4 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [2] r=0 lpr=50 pi=[46,50)/1 crt=34'4 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[4.11( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [2] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[6.5( v 38'39 lc 35'11 (0'0,38'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[3.1f( empty local-lis/les=50/51 n=0 ec=40/17 lis/c=40/40 les/c/f=41/41/0 sis=50) [0] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[8.14( v 34'4 lc 0'0 (0'0,34'4] local-lis/les=50/51 n=0 ec=46/33 lis/c=46/46 les/c/f=47/47/0 sis=50) [0] r=0 lpr=50 pi=[46,50)/1 crt=34'4 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[6.1( v 38'39 (0'0,38'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[4.7( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[4.5( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[4.2( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=38'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[4.4( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.19( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.18( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[10.14( v 49'17 lc 38'13 (0'0,49'17] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=49'17 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[10.16( v 38'16 (0'0,38'16] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[7.1b( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [0] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.1a( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[10.12( v 38'16 (0'0,38'16] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[2.1b( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[10.10( v 38'16 (0'0,38'16] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.1( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[2.6( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.1d( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[10.f( v 38'16 (0'0,38'16] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[2.7( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[2.4( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[10.11( v 38'16 (0'0,38'16] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[2.9( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[4.d( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[6.d( v 38'39 lc 35'13 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[6.f( v 38'39 lc 35'1 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=50) [1] r=0 lpr=50 pi=[44,50)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.c( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 51 pg[7.15( empty local-lis/les=50/51 n=0 ec=44/24 lis/c=44/44 les/c/f=45/45/0 sis=50) [2] r=0 lpr=50 pi=[44,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[10.13( v 38'16 (0'0,38'16] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[2.a( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[4.f( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[10.2( v 38'16 (0'0,38'16] local-lis/les=50/51 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[2.5( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[2.3( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[10.b( v 38'16 (0'0,38'16] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[2.d( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.f( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.16( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 51 pg[10.1( v 38'16 (0'0,38'16] local-lis/les=50/51 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [0] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[10.6( v 38'16 (0'0,38'16] local-lis/les=50/51 n=1 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[10.19( v 38'16 (0'0,38'16] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.9( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[10.1a( v 38'16 (0'0,38'16] local-lis/les=50/51 n=0 ec=48/37 lis/c=48/48 les/c/f=49/49/0 sis=50) [1] r=0 lpr=50 pi=[48,50)/1 crt=38'16 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.12( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[4.12( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.11( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[4.10( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[5.13( empty local-lis/les=50/51 n=0 ec=42/21 lis/c=42/42 les/c/f=44/44/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[2.17( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[4.14( empty local-lis/les=50/51 n=0 ec=42/19 lis/c=42/42 les/c/f=43/43/0 sis=50) [1] r=0 lpr=50 pi=[42,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 51 pg[2.15( empty local-lis/les=50/51 n=0 ec=40/16 lis/c=40/40 les/c/f=41/41/0 sis=50) [1] r=0 lpr=50 pi=[40,50)/1 crt=0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v112: 274 pgs: 274 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"} v 0) v1
Oct  7 09:33:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  7 09:33:42 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event 8691e7b4-2e79-4285-b88c-5e6e0baf4848 (Global Recovery Event) in 15 seconds
Oct  7 09:33:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e51 do_prune osdmap full prune enabled
Oct  7 09:33:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  7 09:33:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  7 09:33:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e52 e52: 3 total, 3 up, 3 in
Oct  7 09:33:43 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  7 09:33:43 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]: dispatch
Oct  7 09:33:43 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e52: 3 total, 3 up, 3 in
Oct  7 09:33:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 52 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=10.698252678s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.153450012s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 52 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=10.698203087s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.153450012s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 52 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=10.706056595s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.161758423s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 52 pg[6.6( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=10.706034660s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.161758423s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 52 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=10.705947876s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.161949158s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 52 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=10.705926895s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.161949158s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 52 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=10.705818176s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 100.162033081s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 52 pg[6.e( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52 pruub=10.705789566s) [1] r=-1 lpr=52 pi=[44,52)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 100.162033081s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[6.a( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[6.e( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[6.6( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[6.2( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.1d( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.1b( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.5( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.3( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.b( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.1( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.9( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.d( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=8}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 52 pg[9.11( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=51) [0]/[1] async=[0] r=0 lpr=51 pi=[46,51)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e52 do_prune osdmap full prune enabled
Oct  7 09:33:44 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  7 09:33:44 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "3"}]': finished
Oct  7 09:33:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e53 e53: 3 total, 3 up, 3 in
Oct  7 09:33:44 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e53: 3 total, 3 up, 3 in
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.9( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.009259224s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.073333740s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.b( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.009008408s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.073089600s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.9( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.009201050s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.073333740s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.b( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.008959770s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.073089600s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.1( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.008715630s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.073234558s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.1( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.008619308s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.073234558s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.004925728s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.069740295s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.3( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.005007744s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.069747925s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.004788399s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.069740295s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.3( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.004837036s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.069747925s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.5( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.004515648s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.069572449s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.5( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.004431725s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.069572449s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.004002571s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.069305420s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.003952980s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.069305420s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.1b( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.004147530s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.069564819s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.1d( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.003710747s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.069175720s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.1d( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.003677368s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.069175720s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.003702164s) [0] async=[0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.069282532s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.1b( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.004097939s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.069564819s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53 pruub=15.003432274s) [0] r=-1 lpr=53 pi=[46,53)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.069282532s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[6.6( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=52/53 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.3( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.3( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[6.2( v 38'39 (0'0,38'39] local-lis/les=52/53 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[6.e( v 38'39 lc 35'19 (0'0,38'39] local-lis/les=52/53 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 53 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=52/53 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=52) [1] r=0 lpr=52 pi=[44,52)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.1( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.1d( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.1( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.9( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.9( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.1b( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.1d( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.1b( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.5( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.5( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.b( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 53 pg[9.b( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v115: 274 pgs: 1 active+recovering+remapped, 15 active+recovery_wait+remapped, 4 peering, 254 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 87/213 objects misplaced (40.845%); 199 B/s, 2 keys/s, 2 objects/s recovering
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.b deep-scrub starts
Oct  7 09:33:44 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.b deep-scrub ok
Oct  7 09:33:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e53 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e53 do_prune osdmap full prune enabled
Oct  7 09:33:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e54 e54: 3 total, 3 up, 3 in
Oct  7 09:33:45 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e54: 3 total, 3 up, 3 in
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.11( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.082112312s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.073936462s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.081462860s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.073539734s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.081525803s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.073699951s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.082206726s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.073791504s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.081456184s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.073699951s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.11( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.11( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.11( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.081829071s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.073936462s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.d( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.080915451s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.073394775s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.d( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.080850601s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.073394775s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.081246376s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.073791504s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=51/52 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.081016541s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.073539734s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.080537796s) [0] async=[0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 101.073471069s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 54 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=51/52 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54 pruub=14.080236435s) [0] r=-1 lpr=54 pi=[46,54)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 101.073471069s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.d( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.d( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.5( v 42'385 (0'0,42'385] local-lis/les=53/54 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.b( v 42'385 (0'0,42'385] local-lis/les=53/54 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=53/54 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.9( v 42'385 (0'0,42'385] local-lis/les=53/54 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.1( v 42'385 (0'0,42'385] local-lis/les=53/54 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.3( v 42'385 (0'0,42'385] local-lis/les=53/54 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.1d( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 54 pg[9.1b( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=53) [0] r=0 lpr=53 pi=[46,53)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.d deep-scrub starts
Oct  7 09:33:45 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.d deep-scrub ok
Oct  7 09:33:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e54 do_prune osdmap full prune enabled
Oct  7 09:33:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e55 e55: 3 total, 3 up, 3 in
Oct  7 09:33:46 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e55: 3 total, 3 up, 3 in
Oct  7 09:33:46 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 55 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:46 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 55 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:46 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 55 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:46 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 55 pg[9.11( v 42'385 (0'0,42'385] local-lis/les=54/55 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:46 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 55 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=54/55 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:46 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 55 pg[9.d( v 42'385 (0'0,42'385] local-lis/les=54/55 n=6 ec=46/35 lis/c=51/46 les/c/f=52/47/0 sis=54) [0] r=0 lpr=54 pi=[46,54)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v118: 274 pgs: 10 peering, 264 active+clean; 456 KiB data, 81 MiB used, 60 GiB / 60 GiB avail; 1.1 KiB/s, 2 keys/s, 26 objects/s recovering
Oct  7 09:33:46 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.10 scrub starts
Oct  7 09:33:46 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.10 scrub ok
Oct  7 09:33:47 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.b scrub starts
Oct  7 09:33:47 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.b scrub ok
Oct  7 09:33:47 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.c scrub starts
Oct  7 09:33:47 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.c scrub ok
Oct  7 09:33:47 np0005473739 ceph-mgr[74587]: [progress INFO root] Writing back 15 completed events
Oct  7 09:33:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  7 09:33:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:33:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v119: 274 pgs: 6 peering, 268 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 835 B/s, 1 keys/s, 20 objects/s recovering
Oct  7 09:33:49 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.c scrub starts
Oct  7 09:33:49 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.c scrub ok
Oct  7 09:33:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e55 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v120: 274 pgs: 274 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 577 B/s, 15 objects/s recovering
Oct  7 09:33:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct  7 09:33:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  7 09:33:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"} v 0) v1
Oct  7 09:33:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  7 09:33:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e55 do_prune osdmap full prune enabled
Oct  7 09:33:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  7 09:33:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]: dispatch
Oct  7 09:33:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  7 09:33:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  7 09:33:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e56 e56: 3 total, 3 up, 3 in
Oct  7 09:33:51 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e56: 3 total, 3 up, 3 in
Oct  7 09:33:52 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  7 09:33:52 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "4"}]': finished
Oct  7 09:33:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v122: 274 pgs: 274 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 493 B/s, 13 objects/s recovering
Oct  7 09:33:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct  7 09:33:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  7 09:33:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"} v 0) v1
Oct  7 09:33:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  7 09:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:33:52 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.13 deep-scrub starts
Oct  7 09:33:52 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.13 deep-scrub ok
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.15 scrub starts
Oct  7 09:33:53 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 56 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.168367386s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=38'39 mlcod 38'39 active pruub 108.040596008s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:53 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 56 pg[6.3( v 38'39 (0'0,38'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.168311119s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 108.040596008s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 56 pg[6.3( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:53 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 56 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.168681145s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=38'39 mlcod 38'39 active pruub 108.041366577s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:53 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 56 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.168646812s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 108.041366577s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:53 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 56 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.166494370s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=38'39 mlcod 38'39 active pruub 108.039756775s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 56 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:53 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 56 pg[6.7( v 38'39 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.166447639s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 108.039756775s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:53 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 56 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.155025482s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=38'39 mlcod 38'39 active pruub 108.028526306s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:53 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 56 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56 pruub=13.154946327s) [0] r=-1 lpr=56 pi=[50,56)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 108.028526306s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 56 pg[6.7( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 56 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.15 scrub ok
Oct  7 09:33:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e56 do_prune osdmap full prune enabled
Oct  7 09:33:53 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  7 09:33:53 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]: dispatch
Oct  7 09:33:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  7 09:33:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  7 09:33:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e57 e57: 3 total, 3 up, 3 in
Oct  7 09:33:53 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e57: 3 total, 3 up, 3 in
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 57 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=57 pruub=8.607542038s) [1] r=-1 lpr=57 pi=[44,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 108.161911011s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 57 pg[6.4( v 38'39 (0'0,38'39] local-lis/les=44/45 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=57 pruub=8.607341766s) [1] r=-1 lpr=57 pi=[44,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.161911011s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 57 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=57 pruub=8.607352257s) [1] r=-1 lpr=57 pi=[44,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 108.162368774s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 57 pg[6.c( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=57 pruub=8.607287407s) [1] r=-1 lpr=57 pi=[44,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 108.162368774s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 57 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:53 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 57 pg[6.c( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[44,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 57 pg[6.7( v 38'39 lc 35'21 (0'0,38'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 57 pg[6.f( v 38'39 lc 35'1 (0'0,38'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 57 pg[6.3( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=56/57 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=56) [0] r=0 lpr=56 pi=[50,56)/1 crt=38'39 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:53 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 57 pg[6.4( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[44,57)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e57 do_prune osdmap full prune enabled
Oct  7 09:33:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e58 e58: 3 total, 3 up, 3 in
Oct  7 09:33:54 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e58: 3 total, 3 up, 3 in
Oct  7 09:33:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  7 09:33:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "5"}]': finished
Oct  7 09:33:54 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 58 pg[6.4( v 38'39 lc 35'15 (0'0,38'39] local-lis/les=57/58 n=2 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[44,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=4 mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:54 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 58 pg[6.c( v 38'39 lc 35'17 (0'0,38'39] local-lis/les=57/58 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=57) [1] r=0 lpr=57 pi=[44,57)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v125: 274 pgs: 2 peering, 272 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:33:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e58 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:33:55 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.e deep-scrub starts
Oct  7 09:33:55 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.e deep-scrub ok
Oct  7 09:33:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v126: 274 pgs: 2 peering, 272 active+clean; 456 KiB data, 85 MiB used, 60 GiB / 60 GiB avail; 106 B/s, 1 keys/s, 1 objects/s recovering
Oct  7 09:33:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v127: 274 pgs: 274 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 304 B/s, 1 keys/s, 1 objects/s recovering
Oct  7 09:33:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct  7 09:33:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  7 09:33:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"} v 0) v1
Oct  7 09:33:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  7 09:33:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e58 do_prune osdmap full prune enabled
Oct  7 09:33:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  7 09:33:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  7 09:33:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e59 e59: 3 total, 3 up, 3 in
Oct  7 09:33:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  7 09:33:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]: dispatch
Oct  7 09:33:58 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e59: 3 total, 3 up, 3 in
Oct  7 09:33:58 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 59 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=15.616273880s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=38'39 mlcod 38'39 active pruub 116.040176392s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:58 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 59 pg[6.5( v 38'39 (0'0,38'39] local-lis/les=50/51 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=15.616220474s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 116.040176392s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:58 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 59 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=15.617207527s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=38'39 mlcod 38'39 active pruub 116.041465759s@ mbc={255={}}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:33:58 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 59 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=59 pruub=15.616958618s) [0] r=-1 lpr=59 pi=[50,59)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 116.041465759s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:33:58 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 59 pg[6.5( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:58 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 59 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:33:59 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.10 scrub starts
Oct  7 09:33:59 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.10 scrub ok
Oct  7 09:33:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e59 do_prune osdmap full prune enabled
Oct  7 09:33:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e60 e60: 3 total, 3 up, 3 in
Oct  7 09:33:59 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e60: 3 total, 3 up, 3 in
Oct  7 09:33:59 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  7 09:33:59 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "6"}]': finished
Oct  7 09:33:59 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 60 pg[6.5( v 38'39 lc 35'11 (0'0,38'39] local-lis/les=59/60 n=2 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:33:59 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 60 pg[6.d( v 38'39 lc 35'13 (0'0,38'39] local-lis/les=59/60 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=59) [0] r=0 lpr=59 pi=[50,59)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e60 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v130: 274 pgs: 274 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 381 B/s, 1 keys/s, 2 objects/s recovering
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"} v 0) v1
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e60 do_prune osdmap full prune enabled
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e61 e61: 3 total, 3 up, 3 in
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e61: 3 total, 3 up, 3 in
Oct  7 09:34:00 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 61 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=11.311991692s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 113.768264771s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:00 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 61 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=11.311934471s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.768264771s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  7 09:34:00 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]: dispatch
Oct  7 09:34:00 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 61 pg[9.e( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=11.312175751s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 113.769302368s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:00 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 61 pg[9.6( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=11.316153526s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 113.773399353s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:00 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 61 pg[9.6( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=11.316102028s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.773399353s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:00 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 61 pg[9.e( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=11.312103271s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.769302368s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:00 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 61 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=11.316735268s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 113.774200439s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:00 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 61 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61 pruub=11.316641808s) [2] r=-1 lpr=61 pi=[46,61)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 113.774200439s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:00 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 61 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61) [2] r=0 lpr=61 pi=[46,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:00 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 61 pg[9.6( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61) [2] r=0 lpr=61 pi=[46,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:00 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 61 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61) [2] r=0 lpr=61 pi=[46,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:00 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 61 pg[9.e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=61) [2] r=0 lpr=61 pi=[46,61)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e61 do_prune osdmap full prune enabled
Oct  7 09:34:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e62 e62: 3 total, 3 up, 3 in
Oct  7 09:34:01 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e62: 3 total, 3 up, 3 in
Oct  7 09:34:01 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 62 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:01 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 62 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  7 09:34:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "7"}]': finished
Oct  7 09:34:01 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 62 pg[9.e( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:01 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 62 pg[9.e( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:01 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 62 pg[9.6( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:01 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 62 pg[9.6( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:01 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 62 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:01 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 62 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:01 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:01 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 62 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:01 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:01 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:01 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:01 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 62 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:01 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 62 pg[9.6( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:01 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 62 pg[9.e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] r=-1 lpr=62 pi=[46,62)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:01 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.16 deep-scrub starts
Oct  7 09:34:02 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.16 deep-scrub ok
Oct  7 09:34:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v133: 274 pgs: 274 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 39 B/s, 0 objects/s recovering
Oct  7 09:34:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct  7 09:34:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  7 09:34:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"} v 0) v1
Oct  7 09:34:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  7 09:34:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e62 do_prune osdmap full prune enabled
Oct  7 09:34:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  7 09:34:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  7 09:34:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e63 e63: 3 total, 3 up, 3 in
Oct  7 09:34:02 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e63: 3 total, 3 up, 3 in
Oct  7 09:34:02 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  7 09:34:02 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]: dispatch
Oct  7 09:34:02 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 63 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=62/63 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:02 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 63 pg[9.e( v 42'385 (0'0,42'385] local-lis/les=62/63 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:02 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 63 pg[9.6( v 42'385 (0'0,42'385] local-lis/les=62/63 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=6}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:02 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 63 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=62/63 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=62) [2]/[1] async=[2] r=0 lpr=62 pi=[46,62)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e63 do_prune osdmap full prune enabled
Oct  7 09:34:03 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  7 09:34:03 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "8"}]': finished
Oct  7 09:34:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e64 e64: 3 total, 3 up, 3 in
Oct  7 09:34:03 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e64: 3 total, 3 up, 3 in
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.e( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.e( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.6( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.6( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:03 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 64 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=62/63 n=5 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=14.976863861s) [2] async=[2] r=-1 lpr=64 pi=[46,64)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 120.499023438s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 64 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=62/63 n=5 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=14.976292610s) [2] r=-1 lpr=64 pi=[46,64)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.499023438s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:03 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 64 pg[9.e( v 42'385 (0'0,42'385] local-lis/les=62/63 n=6 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=14.974114418s) [2] async=[2] r=-1 lpr=64 pi=[46,64)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 120.497474670s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 64 pg[9.6( v 42'385 (0'0,42'385] local-lis/les=62/63 n=6 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=14.975441933s) [2] async=[2] r=-1 lpr=64 pi=[46,64)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 120.499000549s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 64 pg[9.e( v 42'385 (0'0,42'385] local-lis/les=62/63 n=6 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=14.973906517s) [2] r=-1 lpr=64 pi=[46,64)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.497474670s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:03 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 64 pg[9.6( v 42'385 (0'0,42'385] local-lis/les=62/63 n=6 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=14.975388527s) [2] r=-1 lpr=64 pi=[46,64)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.499000549s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:03 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 64 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=62/63 n=5 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=14.970157623s) [2] async=[2] r=-1 lpr=64 pi=[46,64)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 120.494422913s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 64 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=62/63 n=5 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64 pruub=14.970092773s) [2] r=-1 lpr=64 pi=[46,64)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 120.494422913s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:03 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 63 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=53/54 n=6 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=13.369892120s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=42'385 mlcod 0'0 active pruub 123.404815674s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 64 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=53/54 n=6 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=13.369836807s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 123.404815674s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:03 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 63 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=14.375784874s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=42'385 mlcod 0'0 active pruub 124.410858154s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 64 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=14.375646591s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 124.410858154s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:03 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 63 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=13.377036095s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=42'385 mlcod 0'0 active pruub 123.412879944s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 64 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=63 pruub=13.377009392s) [2] r=-1 lpr=63 pi=[53,63)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 123.412879944s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=63) [2] r=0 lpr=64 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=64 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:03 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 63 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=54/55 n=6 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=14.377094269s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=42'385 mlcod 0'0 active pruub 124.413322449s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:03 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 64 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=54/55 n=6 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=63 pruub=14.376964569s) [2] r=-1 lpr=63 pi=[54,63)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 124.413322449s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=63) [2] r=0 lpr=64 pi=[53,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 64 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=63) [2] r=0 lpr=64 pi=[54,63)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:03 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.14 scrub starts
Oct  7 09:34:03 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.14 scrub ok
Oct  7 09:34:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v136: 274 pgs: 4 active+remapped, 270 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 54 B/s, 4 objects/s recovering
Oct  7 09:34:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct  7 09:34:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  7 09:34:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"} v 0) v1
Oct  7 09:34:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  7 09:34:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e64 do_prune osdmap full prune enabled
Oct  7 09:34:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  7 09:34:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  7 09:34:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e65 e65: 3 total, 3 up, 3 in
Oct  7 09:34:04 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  7 09:34:04 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]: dispatch
Oct  7 09:34:04 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e65: 3 total, 3 up, 3 in
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 65 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=65 pruub=13.230272293s) [2] r=-1 lpr=65 pi=[44,65)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 124.162307739s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 65 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=53/54 n=6 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=42'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 65 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=53/54 n=6 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=42'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 65 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=42'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 65 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=42'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 65 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=44/45 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=65 pruub=13.230224609s) [2] r=-1 lpr=65 pi=[44,65)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.162307739s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 65 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=54/55 n=6 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=42'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 65 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=54/55 n=6 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=0 lpr=65 pi=[54,65)/1 crt=42'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 65 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=42'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 65 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=65) [2]/[0] r=0 lpr=65 pi=[53,65)/1 crt=42'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[6.8( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[44,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.7( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.17( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[54,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=65) [2]/[0] r=-1 lpr=65 pi=[53,65)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=64/65 n=5 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.6( v 42'385 (0'0,42'385] local-lis/les=64/65 n=6 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=64/65 n=5 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.e( v 42'385 (0'0,42'385] local-lis/les=64/65 n=6 ec=46/35 lis/c=62/46 les/c/f=63/47/0 sis=64) [2] r=0 lpr=64 pi=[46,64)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.17 scrub starts
Oct  7 09:34:04 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 65 pg[9.8( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=65 pruub=15.024808884s) [2] r=-1 lpr=65 pi=[46,65)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 121.773452759s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:04 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 65 pg[9.8( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=65 pruub=15.024742126s) [2] r=-1 lpr=65 pi=[46,65)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 121.773452759s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:04 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 65 pg[9.18( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=65 pruub=15.025169373s) [2] r=-1 lpr=65 pi=[46,65)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 121.774131775s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.8( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=65) [2] r=0 lpr=65 pi=[46,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:04 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 65 pg[9.18( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=65 pruub=15.025058746s) [2] r=-1 lpr=65 pi=[46,65)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 121.774131775s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:04 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.17 scrub ok
Oct  7 09:34:04 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 65 pg[9.18( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=65) [2] r=0 lpr=65 pi=[46,65)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e65 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e65 do_prune osdmap full prune enabled
Oct  7 09:34:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e66 e66: 3 total, 3 up, 3 in
Oct  7 09:34:05 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e66: 3 total, 3 up, 3 in
Oct  7 09:34:05 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[46,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:05 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 66 pg[9.8( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[46,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:05 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[46,66)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:05 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 66 pg[9.18( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=66) [2]/[1] r=-1 lpr=66 pi=[46,66)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:05 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 66 pg[9.18( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=66) [2]/[1] r=0 lpr=66 pi=[46,66)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:05 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 66 pg[9.18( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=66) [2]/[1] r=0 lpr=66 pi=[46,66)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:05 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 66 pg[9.8( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=66) [2]/[1] r=0 lpr=66 pi=[46,66)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:05 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 66 pg[9.8( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=66) [2]/[1] r=0 lpr=66 pi=[46,66)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:05 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 66 pg[6.8( v 38'39 (0'0,38'39] local-lis/les=65/66 n=1 ec=44/23 lis/c=44/44 les/c/f=45/45/0 sis=65) [2] r=0 lpr=65 pi=[44,65)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:05 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 66 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=65/66 n=6 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=42'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:05 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 66 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=65/66 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[54,65)/1 crt=42'385 mlcod 0'0 active+remapped mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:05 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 66 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=65/66 n=5 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=42'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:05 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 66 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=65/66 n=6 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=65) [2]/[0] async=[2] r=0 lpr=65 pi=[53,65)/1 crt=42'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  7 09:34:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "9"}]': finished
Oct  7 09:34:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e66 do_prune osdmap full prune enabled
Oct  7 09:34:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e67 e67: 3 total, 3 up, 3 in
Oct  7 09:34:06 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e67: 3 total, 3 up, 3 in
Oct  7 09:34:06 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 67 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:06 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 67 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:06 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 67 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:06 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 67 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:06 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 67 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=65/53 les/c/f=66/54/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:06 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 67 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=65/53 les/c/f=66/54/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:06 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 67 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=65/53 les/c/f=66/54/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:06 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 67 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=65/53 les/c/f=66/54/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:06 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 67 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=65/66 n=5 ec=46/35 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.987597466s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=42'385 mlcod 42'385 active pruub 127.448585510s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:06 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 67 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=65/66 n=5 ec=46/35 lis/c=65/53 les/c/f=66/54/0 sis=67 pruub=14.988098145s) [2] async=[2] r=-1 lpr=67 pi=[53,67)/1 crt=42'385 mlcod 42'385 active pruub 127.448913574s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:06 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 67 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=65/66 n=6 ec=46/35 lis/c=65/53 les/c/f=66/54/0 sis=67 pruub=14.988246918s) [2] async=[2] r=-1 lpr=67 pi=[53,67)/1 crt=42'385 mlcod 42'385 active pruub 127.449256897s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:06 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 67 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=65/66 n=6 ec=46/35 lis/c=65/53 les/c/f=66/54/0 sis=67 pruub=14.988138199s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 127.449256897s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:06 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 67 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=65/66 n=5 ec=46/35 lis/c=65/53 les/c/f=66/54/0 sis=67 pruub=14.987815857s) [2] r=-1 lpr=67 pi=[53,67)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 127.448913574s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:06 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 67 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=65/66 n=5 ec=46/35 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.987507820s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 127.448585510s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:06 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 67 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=65/66 n=6 ec=46/35 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.987004280s) [2] async=[2] r=-1 lpr=67 pi=[54,67)/1 crt=42'385 mlcod 42'385 active pruub 127.447998047s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:06 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 67 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=65/66 n=6 ec=46/35 lis/c=65/54 les/c/f=66/55/0 sis=67 pruub=14.986347198s) [2] r=-1 lpr=67 pi=[54,67)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 127.447998047s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:06 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 67 pg[9.18( v 42'385 (0'0,42'385] local-lis/les=66/67 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[46,66)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:06 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 67 pg[9.8( v 42'385 (0'0,42'385] local-lis/les=66/67 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=66) [2]/[1] async=[2] r=0 lpr=66 pi=[46,66)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v140: 274 pgs: 8 active+remapped, 266 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 199 B/s, 10 objects/s recovering
Oct  7 09:34:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct  7 09:34:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  7 09:34:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"} v 0) v1
Oct  7 09:34:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  7 09:34:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  7 09:34:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]: dispatch
Oct  7 09:34:06 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.19 scrub starts
Oct  7 09:34:06 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.19 scrub ok
Oct  7 09:34:06 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.19 deep-scrub starts
Oct  7 09:34:06 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.19 deep-scrub ok
Oct  7 09:34:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e67 do_prune osdmap full prune enabled
Oct  7 09:34:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  7 09:34:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  7 09:34:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e68 e68: 3 total, 3 up, 3 in
Oct  7 09:34:07 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e68: 3 total, 3 up, 3 in
Oct  7 09:34:07 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 68 pg[9.8( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=66/46 les/c/f=67/47/0 sis=68) [2] r=0 lpr=68 pi=[46,68)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:07 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 68 pg[9.8( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=66/46 les/c/f=67/47/0 sis=68) [2] r=0 lpr=68 pi=[46,68)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:07 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 68 pg[9.8( v 42'385 (0'0,42'385] local-lis/les=66/67 n=6 ec=46/35 lis/c=66/46 les/c/f=67/47/0 sis=68 pruub=15.010058403s) [2] async=[2] r=-1 lpr=68 pi=[46,68)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 124.070343018s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:07 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 68 pg[9.8( v 42'385 (0'0,42'385] local-lis/les=66/67 n=6 ec=46/35 lis/c=66/46 les/c/f=67/47/0 sis=68 pruub=15.009521484s) [2] r=-1 lpr=68 pi=[46,68)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.070343018s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:07 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 68 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=68 pruub=14.978690147s) [0] r=-1 lpr=68 pi=[50,68)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 124.040184021s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:07 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 68 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=50/51 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=68 pruub=14.978623390s) [0] r=-1 lpr=68 pi=[50,68)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.040184021s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:07 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 68 pg[9.18( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=66/46 les/c/f=67/47/0 sis=68) [2] r=0 lpr=68 pi=[46,68)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:07 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 68 pg[9.18( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=66/46 les/c/f=67/47/0 sis=68) [2] r=0 lpr=68 pi=[46,68)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:07 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 68 pg[9.18( v 42'385 (0'0,42'385] local-lis/les=66/67 n=5 ec=46/35 lis/c=66/46 les/c/f=67/47/0 sis=68 pruub=15.001824379s) [2] async=[2] r=-1 lpr=68 pi=[46,68)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 124.065193176s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:07 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 68 pg[9.18( v 42'385 (0'0,42'385] local-lis/les=66/67 n=5 ec=46/35 lis/c=66/46 les/c/f=67/47/0 sis=68 pruub=15.001459122s) [2] r=-1 lpr=68 pi=[46,68)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 124.065193176s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:07 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 68 pg[9.f( v 42'385 (0'0,42'385] local-lis/les=67/68 n=6 ec=46/35 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:07 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 68 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=67/68 n=5 ec=46/35 lis/c=65/53 les/c/f=66/54/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:07 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 68 pg[9.7( v 42'385 (0'0,42'385] local-lis/les=67/68 n=6 ec=46/35 lis/c=65/53 les/c/f=66/54/0 sis=67) [2] r=0 lpr=67 pi=[53,67)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:07 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 68 pg[9.17( v 42'385 (0'0,42'385] local-lis/les=67/68 n=5 ec=46/35 lis/c=65/54 les/c/f=66/55/0 sis=67) [2] r=0 lpr=67 pi=[54,67)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:07 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 68 pg[6.9( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=68) [0] r=0 lpr=68 pi=[50,68)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  7 09:34:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "10"}]': finished
Oct  7 09:34:07 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.1a deep-scrub starts
Oct  7 09:34:07 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.1a deep-scrub ok
Oct  7 09:34:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e68 do_prune osdmap full prune enabled
Oct  7 09:34:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e69 e69: 3 total, 3 up, 3 in
Oct  7 09:34:08 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e69: 3 total, 3 up, 3 in
Oct  7 09:34:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v143: 274 pgs: 8 active+remapped, 266 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 143 B/s, 5 objects/s recovering
Oct  7 09:34:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct  7 09:34:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  7 09:34:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"} v 0) v1
Oct  7 09:34:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  7 09:34:08 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 69 pg[9.8( v 42'385 (0'0,42'385] local-lis/les=68/69 n=6 ec=46/35 lis/c=66/46 les/c/f=67/47/0 sis=68) [2] r=0 lpr=68 pi=[46,68)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:08 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 69 pg[6.9( v 38'39 (0'0,38'39] local-lis/les=68/69 n=1 ec=44/23 lis/c=50/50 les/c/f=51/51/0 sis=68) [0] r=0 lpr=68 pi=[50,68)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:08 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 69 pg[9.18( v 42'385 (0'0,42'385] local-lis/les=68/69 n=5 ec=46/35 lis/c=66/46 les/c/f=67/47/0 sis=68) [2] r=0 lpr=68 pi=[46,68)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  7 09:34:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]: dispatch
Oct  7 09:34:08 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.1d scrub starts
Oct  7 09:34:08 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.1d scrub ok
Oct  7 09:34:08 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.1c scrub starts
Oct  7 09:34:09 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 3.1c scrub ok
Oct  7 09:34:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e69 do_prune osdmap full prune enabled
Oct  7 09:34:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  7 09:34:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  7 09:34:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e70 e70: 3 total, 3 up, 3 in
Oct  7 09:34:09 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e70: 3 total, 3 up, 3 in
Oct  7 09:34:09 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 70 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=52/53 n=1 ec=44/23 lis/c=52/52 les/c/f=53/53/0 sis=70 pruub=14.581068039s) [0] r=-1 lpr=70 pi=[52,70)/1 crt=38'39 lcod 0'0 mlcod 0'0 active pruub 126.075225830s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:09 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 70 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=52/53 n=1 ec=44/23 lis/c=52/52 les/c/f=53/53/0 sis=70 pruub=14.580951691s) [0] r=-1 lpr=70 pi=[52,70)/1 crt=38'39 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 126.075225830s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:09 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 70 pg[6.a( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=52/52 les/c/f=53/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:09 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  7 09:34:09 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "11"}]': finished
Oct  7 09:34:09 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.1e scrub starts
Oct  7 09:34:09 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.1e scrub ok
Oct  7 09:34:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e70 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e70 do_prune osdmap full prune enabled
Oct  7 09:34:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v145: 274 pgs: 1 peering, 1 activating, 272 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 51 B/s, 2 objects/s recovering
Oct  7 09:34:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e71 e71: 3 total, 3 up, 3 in
Oct  7 09:34:10 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e71: 3 total, 3 up, 3 in
Oct  7 09:34:10 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 71 pg[6.a( v 38'39 (0'0,38'39] local-lis/les=70/71 n=1 ec=44/23 lis/c=52/52 les/c/f=53/53/0 sis=70) [0] r=0 lpr=70 pi=[52,70)/1 crt=38'39 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:10 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.7 scrub starts
Oct  7 09:34:10 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.7 scrub ok
Oct  7 09:34:11 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.b deep-scrub starts
Oct  7 09:34:12 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.b deep-scrub ok
Oct  7 09:34:12 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.12 scrub starts
Oct  7 09:34:12 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.12 scrub ok
Oct  7 09:34:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v147: 274 pgs: 1 peering, 1 activating, 272 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 41 B/s, 2 objects/s recovering
Oct  7 09:34:12 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.1f deep-scrub starts
Oct  7 09:34:12 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 4.1f deep-scrub ok
Oct  7 09:34:13 np0005473739 python3[103944]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user info --uid openstack _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:34:13 np0005473739 podman[103945]: 2025-10-07 13:34:13.362119923 +0000 UTC m=+0.038561881 container create 80991970b24eaa6f4c14e1c246292c3e8436f59545a9c2cc2f0b6d92865fcac7 (image=quay.io/ceph/ceph:v18, name=elegant_sutherland, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:34:13 np0005473739 systemd[1]: Started libpod-conmon-80991970b24eaa6f4c14e1c246292c3e8436f59545a9c2cc2f0b6d92865fcac7.scope.
Oct  7 09:34:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:34:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/896e5aea4c60a49e708035018147937dcca8de4cd9f32c282ac776e2720f5e03/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/896e5aea4c60a49e708035018147937dcca8de4cd9f32c282ac776e2720f5e03/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:13 np0005473739 podman[103945]: 2025-10-07 13:34:13.422226607 +0000 UTC m=+0.098668605 container init 80991970b24eaa6f4c14e1c246292c3e8436f59545a9c2cc2f0b6d92865fcac7 (image=quay.io/ceph/ceph:v18, name=elegant_sutherland, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:34:13 np0005473739 podman[103945]: 2025-10-07 13:34:13.428272433 +0000 UTC m=+0.104714391 container start 80991970b24eaa6f4c14e1c246292c3e8436f59545a9c2cc2f0b6d92865fcac7 (image=quay.io/ceph/ceph:v18, name=elegant_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:34:13 np0005473739 podman[103945]: 2025-10-07 13:34:13.431011848 +0000 UTC m=+0.107453806 container attach 80991970b24eaa6f4c14e1c246292c3e8436f59545a9c2cc2f0b6d92865fcac7 (image=quay.io/ceph/ceph:v18, name=elegant_sutherland, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 09:34:13 np0005473739 podman[103945]: 2025-10-07 13:34:13.34708838 +0000 UTC m=+0.023530368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:34:13 np0005473739 elegant_sutherland[103961]: could not fetch user info: no user info saved
Oct  7 09:34:13 np0005473739 systemd[1]: libpod-80991970b24eaa6f4c14e1c246292c3e8436f59545a9c2cc2f0b6d92865fcac7.scope: Deactivated successfully.
Oct  7 09:34:13 np0005473739 podman[103945]: 2025-10-07 13:34:13.609764466 +0000 UTC m=+0.286206454 container died 80991970b24eaa6f4c14e1c246292c3e8436f59545a9c2cc2f0b6d92865fcac7 (image=quay.io/ceph/ceph:v18, name=elegant_sutherland, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:34:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-896e5aea4c60a49e708035018147937dcca8de4cd9f32c282ac776e2720f5e03-merged.mount: Deactivated successfully.
Oct  7 09:34:13 np0005473739 podman[103945]: 2025-10-07 13:34:13.649445327 +0000 UTC m=+0.325887275 container remove 80991970b24eaa6f4c14e1c246292c3e8436f59545a9c2cc2f0b6d92865fcac7 (image=quay.io/ceph/ceph:v18, name=elegant_sutherland, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 09:34:13 np0005473739 systemd[1]: libpod-conmon-80991970b24eaa6f4c14e1c246292c3e8436f59545a9c2cc2f0b6d92865fcac7.scope: Deactivated successfully.
Oct  7 09:34:14 np0005473739 python3[104082]: ansible-ansible.legacy.command Invoked with _raw_params=podman run --rm --net=host --ipc=host   --volume /etc/ceph:/etc/ceph:z --volume /home/ceph-admin/assimilate_ceph.conf:/home/assimilate_ceph.conf:z    --entrypoint radosgw-admin quay.io/ceph/ceph:v18 --fsid 82044f27-a8da-5b2a-a297-ff6afc620e1f -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring   user create --uid="openstack" --display-name "openstack" _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:34:14 np0005473739 podman[104083]: 2025-10-07 13:34:14.07200371 +0000 UTC m=+0.039685212 container create f3ab3e72c66b3de21a4be8f570071a76ec12401827eb335f779cadc5ed2694df (image=quay.io/ceph/ceph:v18, name=interesting_keldysh, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:34:14 np0005473739 systemd[1]: Started libpod-conmon-f3ab3e72c66b3de21a4be8f570071a76ec12401827eb335f779cadc5ed2694df.scope.
Oct  7 09:34:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:34:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52136f5f20703417357acd6f1bb6acc73d145b60ae3afda684661e6120c8fae/merged/etc/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52136f5f20703417357acd6f1bb6acc73d145b60ae3afda684661e6120c8fae/merged/home/assimilate_ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:14 np0005473739 podman[104083]: 2025-10-07 13:34:14.14431779 +0000 UTC m=+0.111999342 container init f3ab3e72c66b3de21a4be8f570071a76ec12401827eb335f779cadc5ed2694df (image=quay.io/ceph/ceph:v18, name=interesting_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 09:34:14 np0005473739 podman[104083]: 2025-10-07 13:34:14.053901862 +0000 UTC m=+0.021583384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph:v18
Oct  7 09:34:14 np0005473739 podman[104083]: 2025-10-07 13:34:14.151653022 +0000 UTC m=+0.119334534 container start f3ab3e72c66b3de21a4be8f570071a76ec12401827eb335f779cadc5ed2694df (image=quay.io/ceph/ceph:v18, name=interesting_keldysh, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:34:14 np0005473739 podman[104083]: 2025-10-07 13:34:14.154643214 +0000 UTC m=+0.122324716 container attach f3ab3e72c66b3de21a4be8f570071a76ec12401827eb335f779cadc5ed2694df (image=quay.io/ceph/ceph:v18, name=interesting_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]: {
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "user_id": "openstack",
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "display_name": "openstack",
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "email": "",
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "suspended": 0,
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "max_buckets": 1000,
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "subusers": [],
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "keys": [
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        {
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:            "user": "openstack",
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:            "access_key": "ZTI80NMOXO22PTUY2QKR",
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:            "secret_key": "JvfxngECNDHIZXCNDzVwON9qybtmJ32haNlBXYKI"
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        }
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    ],
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "swift_keys": [],
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "caps": [],
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "op_mask": "read, write, delete",
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "default_placement": "",
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "default_storage_class": "",
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "placement_tags": [],
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "bucket_quota": {
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        "enabled": false,
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        "check_on_raw": false,
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        "max_size": -1,
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        "max_size_kb": 0,
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        "max_objects": -1
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    },
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "user_quota": {
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        "enabled": false,
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        "check_on_raw": false,
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        "max_size": -1,
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        "max_size_kb": 0,
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:        "max_objects": -1
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    },
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "temp_url_keys": [],
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "type": "rgw",
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]:    "mfa_ids": []
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]: }
Oct  7 09:34:14 np0005473739 interesting_keldysh[104099]: 
Oct  7 09:34:14 np0005473739 systemd[1]: libpod-f3ab3e72c66b3de21a4be8f570071a76ec12401827eb335f779cadc5ed2694df.scope: Deactivated successfully.
Oct  7 09:34:14 np0005473739 podman[104083]: 2025-10-07 13:34:14.356043535 +0000 UTC m=+0.323725047 container died f3ab3e72c66b3de21a4be8f570071a76ec12401827eb335f779cadc5ed2694df (image=quay.io/ceph/ceph:v18, name=interesting_keldysh, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:34:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c52136f5f20703417357acd6f1bb6acc73d145b60ae3afda684661e6120c8fae-merged.mount: Deactivated successfully.
Oct  7 09:34:14 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.14 deep-scrub starts
Oct  7 09:34:14 np0005473739 podman[104083]: 2025-10-07 13:34:14.398353508 +0000 UTC m=+0.366035040 container remove f3ab3e72c66b3de21a4be8f570071a76ec12401827eb335f779cadc5ed2694df (image=quay.io/ceph/ceph:v18, name=interesting_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:34:14 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.14 deep-scrub ok
Oct  7 09:34:14 np0005473739 systemd[1]: libpod-conmon-f3ab3e72c66b3de21a4be8f570071a76ec12401827eb335f779cadc5ed2694df.scope: Deactivated successfully.
Oct  7 09:34:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v148: 274 pgs: 1 peering, 273 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 35 B/s, 1 objects/s recovering
Oct  7 09:34:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e71 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:15 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.19 scrub starts
Oct  7 09:34:15 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.19 scrub ok
Oct  7 09:34:15 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.d scrub starts
Oct  7 09:34:15 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.d scrub ok
Oct  7 09:34:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v149: 274 pgs: 274 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1023 B/s rd, 0 op/s; 27 B/s, 1 objects/s recovering
Oct  7 09:34:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct  7 09:34:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  7 09:34:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"} v 0) v1
Oct  7 09:34:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  7 09:34:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e71 do_prune osdmap full prune enabled
Oct  7 09:34:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  7 09:34:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  7 09:34:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e72 e72: 3 total, 3 up, 3 in
Oct  7 09:34:16 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e72: 3 total, 3 up, 3 in
Oct  7 09:34:16 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  7 09:34:16 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]: dispatch
Oct  7 09:34:16 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.18 scrub starts
Oct  7 09:34:16 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.18 scrub ok
Oct  7 09:34:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  7 09:34:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "12"}]': finished
Oct  7 09:34:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v151: 274 pgs: 274 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 255 B/s wr, 2 op/s
Oct  7 09:34:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct  7 09:34:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  7 09:34:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"} v 0) v1
Oct  7 09:34:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  7 09:34:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e72 do_prune osdmap full prune enabled
Oct  7 09:34:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  7 09:34:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]: dispatch
Oct  7 09:34:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  7 09:34:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  7 09:34:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e73 e73: 3 total, 3 up, 3 in
Oct  7 09:34:18 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e73: 3 total, 3 up, 3 in
Oct  7 09:34:18 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 72 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=14.698881149s) [1] r=-1 lpr=72 pi=[56,72)/1 crt=38'39 mlcod 38'39 active pruub 139.558502197s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:18 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 73 pg[6.b( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=72 pruub=14.698731422s) [1] r=-1 lpr=72 pi=[56,72)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 139.558502197s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:18 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 73 pg[6.b( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=72) [1] r=0 lpr=73 pi=[56,72)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:18 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 73 pg[9.c( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=73 pruub=9.308315277s) [2] r=-1 lpr=73 pi=[46,73)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 129.768844604s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:18 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 73 pg[9.c( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=73 pruub=9.308073044s) [2] r=-1 lpr=73 pi=[46,73)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.768844604s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:18 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 73 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=73 pruub=9.311613083s) [2] r=-1 lpr=73 pi=[46,73)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 129.774658203s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:18 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 73 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=73 pruub=9.311557770s) [2] r=-1 lpr=73 pi=[46,73)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 129.774658203s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:18 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 73 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=73) [2] r=0 lpr=73 pi=[46,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:18 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 73 pg[9.c( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=73) [2] r=0 lpr=73 pi=[46,73)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:18 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.1e deep-scrub starts
Oct  7 09:34:18 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.1e deep-scrub ok
Oct  7 09:34:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e73 do_prune osdmap full prune enabled
Oct  7 09:34:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e74 e74: 3 total, 3 up, 3 in
Oct  7 09:34:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e74: 3 total, 3 up, 3 in
Oct  7 09:34:19 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[46,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:19 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 74 pg[9.c( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[46,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:19 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[46,74)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:19 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 74 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=74) [2]/[1] r=-1 lpr=74 pi=[46,74)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  7 09:34:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "13"}]': finished
Oct  7 09:34:19 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 74 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=74) [2]/[1] r=0 lpr=74 pi=[46,74)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:19 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 74 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=46/47 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=74) [2]/[1] r=0 lpr=74 pi=[46,74)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:19 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 74 pg[9.c( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=74) [2]/[1] r=0 lpr=74 pi=[46,74)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:19 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 74 pg[9.c( v 42'385 (0'0,42'385] local-lis/les=46/47 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=74) [2]/[1] r=0 lpr=74 pi=[46,74)/1 crt=42'385 lcod 0'0 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:19 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 74 pg[6.b( v 38'39 lc 0'0 (0'0,38'39] local-lis/les=72/74 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=72) [1] r=0 lpr=73 pi=[56,72)/1 crt=38'39 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e74 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:20 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.1a scrub starts
Oct  7 09:34:20 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.1a scrub ok
Oct  7 09:34:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v154: 274 pgs: 274 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 341 B/s wr, 2 op/s
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"} v 0) v1
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e74 do_prune osdmap full prune enabled
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e75 e75: 3 total, 3 up, 3 in
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]: dispatch
Oct  7 09:34:20 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e75: 3 total, 3 up, 3 in
Oct  7 09:34:20 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 75 pg[9.c( v 42'385 (0'0,42'385] local-lis/les=74/75 n=6 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[46,74)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:20 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 75 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=74/75 n=5 ec=46/35 lis/c=46/46 les/c/f=47/47/0 sis=74) [2]/[1] async=[2] r=0 lpr=74 pi=[46,74)/1 crt=42'385 lcod 0'0 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:20 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 75 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=44/23 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=10.703463554s) [1] r=-1 lpr=75 pi=[59,75)/1 crt=38'39 mlcod 38'39 active pruub 137.835250854s@ mbc={255={}}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:20 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 75 pg[6.d( v 38'39 (0'0,38'39] local-lis/les=59/60 n=1 ec=44/23 lis/c=59/59 les/c/f=60/60/0 sis=75 pruub=10.703371048s) [1] r=-1 lpr=75 pi=[59,75)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 137.835250854s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:20 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 75 pg[6.d( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=59/59 les/c/f=60/60/0 sis=75) [1] r=0 lpr=75 pi=[59,75)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e75 do_prune osdmap full prune enabled
Oct  7 09:34:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  7 09:34:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "14"}]': finished
Oct  7 09:34:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e76 e76: 3 total, 3 up, 3 in
Oct  7 09:34:21 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e76: 3 total, 3 up, 3 in
Oct  7 09:34:21 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 76 pg[9.c( v 42'385 (0'0,42'385] local-lis/les=74/75 n=6 ec=46/35 lis/c=74/46 les/c/f=75/47/0 sis=76 pruub=15.004516602s) [2] async=[2] r=-1 lpr=76 pi=[46,76)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 138.503433228s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:21 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 76 pg[9.c( v 42'385 (0'0,42'385] local-lis/les=74/75 n=6 ec=46/35 lis/c=74/46 les/c/f=75/47/0 sis=76 pruub=15.004362106s) [2] r=-1 lpr=76 pi=[46,76)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.503433228s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:21 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 76 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=74/75 n=5 ec=46/35 lis/c=74/46 les/c/f=75/47/0 sis=76 pruub=15.004691124s) [2] async=[2] r=-1 lpr=76 pi=[46,76)/1 crt=42'385 lcod 0'0 mlcod 0'0 active pruub 138.504943848s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:21 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 76 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=74/75 n=5 ec=46/35 lis/c=74/46 les/c/f=75/47/0 sis=76 pruub=15.004424095s) [2] r=-1 lpr=76 pi=[46,76)/1 crt=42'385 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 138.504943848s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:21 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 76 pg[6.d( v 38'39 lc 35'13 (0'0,38'39] local-lis/les=75/76 n=1 ec=44/23 lis/c=59/59 les/c/f=60/60/0 sis=75) [1] r=0 lpr=75 pi=[59,75)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=2 mbc={255={(0+1)=2}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:21 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 76 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=74/46 les/c/f=75/47/0 sis=76) [2] r=0 lpr=76 pi=[46,76)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:21 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 76 pg[9.c( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=74/46 les/c/f=75/47/0 sis=76) [2] r=0 lpr=76 pi=[46,76)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:21 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 76 pg[9.c( v 42'385 (0'0,42'385] local-lis/les=0/0 n=6 ec=46/35 lis/c=74/46 les/c/f=75/47/0 sis=76) [2] r=0 lpr=76 pi=[46,76)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:21 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 76 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=74/46 les/c/f=75/47/0 sis=76) [2] r=0 lpr=76 pi=[46,76)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:21 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.1e deep-scrub starts
Oct  7 09:34:21 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.1e deep-scrub ok
Oct  7 09:34:21 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.10 scrub starts
Oct  7 09:34:21 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.10 scrub ok
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:34:22
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'images', 'default.rgw.meta', 'backups', '.rgw.root', 'vms']
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v157: 274 pgs: 274 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:34:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct  7 09:34:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  7 09:34:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"} v 0) v1
Oct  7 09:34:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:34:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:34:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e76 do_prune osdmap full prune enabled
Oct  7 09:34:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  7 09:34:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  7 09:34:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e77 e77: 3 total, 3 up, 3 in
Oct  7 09:34:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  7 09:34:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]: dispatch
Oct  7 09:34:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e77: 3 total, 3 up, 3 in
Oct  7 09:34:22 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 77 pg[9.c( v 42'385 (0'0,42'385] local-lis/les=76/77 n=6 ec=46/35 lis/c=74/46 les/c/f=75/47/0 sis=76) [2] r=0 lpr=76 pi=[46,76)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:22 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 77 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=76/77 n=5 ec=46/35 lis/c=74/46 les/c/f=75/47/0 sis=76) [2] r=0 lpr=76 pi=[46,76)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:22 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.16 scrub starts
Oct  7 09:34:22 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.16 scrub ok
Oct  7 09:34:22 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.12 scrub starts
Oct  7 09:34:22 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.12 scrub ok
Oct  7 09:34:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  7 09:34:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "15"}]': finished
Oct  7 09:34:23 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.14 scrub starts
Oct  7 09:34:23 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.14 scrub ok
Oct  7 09:34:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v159: 274 pgs: 274 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 36 B/s, 3 objects/s recovering
Oct  7 09:34:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct  7 09:34:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  7 09:34:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"} v 0) v1
Oct  7 09:34:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  7 09:34:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e77 do_prune osdmap full prune enabled
Oct  7 09:34:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  7 09:34:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  7 09:34:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e78 e78: 3 total, 3 up, 3 in
Oct  7 09:34:24 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e78: 3 total, 3 up, 3 in
Oct  7 09:34:24 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  7 09:34:24 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]: dispatch
Oct  7 09:34:25 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 78 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=78 pruub=8.336298943s) [2] r=-1 lpr=78 pi=[56,78)/1 crt=38'39 mlcod 38'39 active pruub 139.565322876s@ mbc={255={}}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:25 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 78 pg[6.f( v 38'39 (0'0,38'39] local-lis/les=56/57 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=78 pruub=8.336220741s) [2] r=-1 lpr=78 pi=[56,78)/1 crt=38'39 mlcod 0'0 unknown NOTIFY pruub 139.565322876s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 78 pg[6.f( empty local-lis/les=0/0 n=0 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=78) [2] r=0 lpr=78 pi=[56,78)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e78 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e78 do_prune osdmap full prune enabled
Oct  7 09:34:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "cephfs.cephfs.meta", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  7 09:34:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "16"}]': finished
Oct  7 09:34:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e79 e79: 3 total, 3 up, 3 in
Oct  7 09:34:25 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e79: 3 total, 3 up, 3 in
Oct  7 09:34:25 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 79 pg[6.f( v 38'39 lc 35'1 (0'0,38'39] local-lis/les=78/79 n=1 ec=44/23 lis/c=56/56 les/c/f=57/57/0 sis=78) [2] r=0 lpr=78 pi=[56,78)/1 crt=38'39 lcod 0'0 mlcod 0'0 active+degraded m=3 mbc={255={(0+1)=3}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:26 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.1e scrub starts
Oct  7 09:34:26 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 2.1e scrub ok
Oct  7 09:34:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v162: 274 pgs: 274 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 37 B/s, 3 objects/s recovering
Oct  7 09:34:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"} v 0) v1
Oct  7 09:34:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  7 09:34:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e79 do_prune osdmap full prune enabled
Oct  7 09:34:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  7 09:34:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e80 e80: 3 total, 3 up, 3 in
Oct  7 09:34:26 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e80: 3 total, 3 up, 3 in
Oct  7 09:34:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]: dispatch
Oct  7 09:34:26 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.14 scrub starts
Oct  7 09:34:26 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.14 scrub ok
Oct  7 09:34:27 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.6 scrub starts
Oct  7 09:34:27 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.6 scrub ok
Oct  7 09:34:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "17"}]': finished
Oct  7 09:34:27 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.16 scrub starts
Oct  7 09:34:27 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.16 scrub ok
Oct  7 09:34:28 np0005473739 systemd-logind[801]: New session 35 of user zuul.
Oct  7 09:34:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v164: 274 pgs: 274 active+clean; 456 KiB data, 103 MiB used, 60 GiB / 60 GiB avail; 30 B/s, 2 objects/s recovering
Oct  7 09:34:28 np0005473739 systemd[1]: Started Session 35 of User zuul.
Oct  7 09:34:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"} v 0) v1
Oct  7 09:34:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  7 09:34:28 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.13 scrub starts
Oct  7 09:34:28 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.13 scrub ok
Oct  7 09:34:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e80 do_prune osdmap full prune enabled
Oct  7 09:34:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  7 09:34:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e81 e81: 3 total, 3 up, 3 in
Oct  7 09:34:28 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e81: 3 total, 3 up, 3 in
Oct  7 09:34:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]: dispatch
Oct  7 09:34:29 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.8 scrub starts
Oct  7 09:34:29 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.8 scrub ok
Oct  7 09:34:29 np0005473739 python3.9[104348]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:34:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "18"}]': finished
Oct  7 09:34:29 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.17 scrub starts
Oct  7 09:34:29 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.17 scrub ok
Oct  7 09:34:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e81 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v166: 274 pgs: 274 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 103 B/s, 0 objects/s recovering
Oct  7 09:34:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"} v 0) v1
Oct  7 09:34:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  7 09:34:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e81 do_prune osdmap full prune enabled
Oct  7 09:34:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  7 09:34:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e82 e82: 3 total, 3 up, 3 in
Oct  7 09:34:30 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e82: 3 total, 3 up, 3 in
Oct  7 09:34:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]: dispatch
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:34:30 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 1)
Oct  7 09:34:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"} v 0) v1
Oct  7 09:34:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:34:30 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.19 scrub starts
Oct  7 09:34:30 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.19 scrub ok
Oct  7 09:34:31 np0005473739 python3.9[104566]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:34:31 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.15 scrub starts
Oct  7 09:34:31 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.15 scrub ok
Oct  7 09:34:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e82 do_prune osdmap full prune enabled
Oct  7 09:34:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:34:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e83 e83: 3 total, 3 up, 3 in
Oct  7 09:34:31 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e83: 3 total, 3 up, 3 in
Oct  7 09:34:31 np0005473739 ceph-mgr[74587]: [progress INFO root] update: starting ev d353729b-5fab-4f1f-aa9c-ea864e8cad7e (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  7 09:34:31 np0005473739 ceph-mgr[74587]: [progress INFO root] complete: finished ev d353729b-5fab-4f1f-aa9c-ea864e8cad7e (PG autoscaler increasing pool 11 PGs from 1 to 32)
Oct  7 09:34:31 np0005473739 ceph-mgr[74587]: [progress INFO root] Completed event d353729b-5fab-4f1f-aa9c-ea864e8cad7e (PG autoscaler increasing pool 11 PGs from 1 to 32) in 0 seconds
Oct  7 09:34:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "19"}]': finished
Oct  7 09:34:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]: dispatch
Oct  7 09:34:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v169: 274 pgs: 274 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 104 B/s, 0 objects/s recovering
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"} v 0) v1
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"} v 0) v1
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  7 09:34:32 np0005473739 ceph-mgr[74587]: [progress INFO root] Writing back 16 completed events
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/progress/completed}] v 0) v1
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e83 do_prune osdmap full prune enabled
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e84 e84: 3 total, 3 up, 3 in
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e84: 3 total, 3 up, 3 in
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num", "val": "32"}]': finished
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]: dispatch
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]: dispatch
Oct  7 09:34:32 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:34:33 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 84 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=84 pruub=9.080702782s) [2] r=-1 lpr=84 pi=[54,84)/1 crt=42'385 mlcod 0'0 active pruub 148.411544800s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:33 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 84 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=84 pruub=9.080657005s) [2] r=-1 lpr=84 pi=[54,84)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 148.411544800s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:33 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 84 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=84) [2] r=0 lpr=84 pi=[54,84)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 84 pg[11.0( v 71'2 (0'0,71'2] local-lis/les=39/40 n=2 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=84 pruub=11.638961792s) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 71'1 mlcod 71'1 active pruub 146.719238281s@ mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [1], acting_primary 1 -> 1, up_primary 1 -> 1, role 0 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 84 pg[11.0( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=84 pruub=11.638961792s) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 71'1 mlcod 0'0 unknown pruub 146.719238281s@ mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.a scrub starts
Oct  7 09:34:33 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.a scrub ok
Oct  7 09:34:33 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.f deep-scrub starts
Oct  7 09:34:33 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.f deep-scrub ok
Oct  7 09:34:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e84 do_prune osdmap full prune enabled
Oct  7 09:34:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e85 e85: 3 total, 3 up, 3 in
Oct  7 09:34:33 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e85: 3 total, 3 up, 3 in
Oct  7 09:34:33 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 85 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=85) [2]/[0] r=0 lpr=85 pi=[54,85)/1 crt=42'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:33 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 85 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=85) [2]/[0] r=0 lpr=85 pi=[54,85)/1 crt=42'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pg_num_actual", "val": "32"}]': finished
Oct  7 09:34:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "20"}]': finished
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.19( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.16( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.17( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.14( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.13( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.12( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.11( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.10( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.f( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.e( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.d( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[54,85)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.b( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.9( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.2( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=1 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.3( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.15( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 85 pg[9.13( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=85) [2]/[0] r=-1 lpr=85 pi=[54,85)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.c( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.8( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.a( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1( v 71'2 (0'0,71'2] local-lis/les=39/40 n=1 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.4( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.5( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.18( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.7( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.6( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1a( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1b( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1c( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1d( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1e( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1f( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=39/40 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.19( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.17( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.14( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.12( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.10( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.f( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.e( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.d( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.b( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.9( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.11( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.0( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=39/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 71'1 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.3( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.2( v 71'2 (0'0,71'2] local-lis/les=84/85 n=1 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.c( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.8( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.a( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.4( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.13( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.16( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1( v 71'2 (0'0,71'2] local-lis/les=84/85 n=1 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.5( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1b( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1a( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1c( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1e( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1d( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.1f( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.15( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.6( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.18( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:33 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 85 pg[11.7( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=39/39 les/c/f=40/40/0 sis=84) [1] r=0 lpr=84 pi=[39,84)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:34 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.1d deep-scrub starts
Oct  7 09:34:34 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.1d deep-scrub ok
Oct  7 09:34:34 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.b scrub starts
Oct  7 09:34:34 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.b scrub ok
Oct  7 09:34:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v172: 305 pgs: 1 peering, 32 unknown, 272 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:34:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e85 do_prune osdmap full prune enabled
Oct  7 09:34:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e86 e86: 3 total, 3 up, 3 in
Oct  7 09:34:34 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e86: 3 total, 3 up, 3 in
Oct  7 09:34:34 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 86 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=85/86 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=85) [2]/[0] async=[2] r=0 lpr=85 pi=[54,85)/1 crt=42'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:34 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.1e scrub starts
Oct  7 09:34:34 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 7.1e scrub ok
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e86 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e86 do_prune osdmap full prune enabled
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e87 e87: 3 total, 3 up, 3 in
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e87: 3 total, 3 up, 3 in
Oct  7 09:34:35 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 87 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=85/54 les/c/f=86/55/0 sis=87) [2] r=0 lpr=87 pi=[54,87)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:35 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 87 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=85/54 les/c/f=86/55/0 sis=87) [2] r=0 lpr=87 pi=[54,87)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:35 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 87 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=85/86 n=5 ec=46/35 lis/c=85/54 les/c/f=86/55/0 sis=87 pruub=15.628310204s) [2] async=[2] r=-1 lpr=87 pi=[54,87)/1 crt=42'385 mlcod 42'385 active pruub 157.075195312s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:35 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 87 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=85/86 n=5 ec=46/35 lis/c=85/54 les/c/f=86/55/0 sis=87 pruub=15.628230095s) [2] r=-1 lpr=87 pi=[54,87)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 157.075195312s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:34:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev dc1e4185-48bc-4742-bb7a-fe5462dd2a9b does not exist
Oct  7 09:34:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ac4370ec-ca1b-4e8a-9538-a9077ad5907f does not exist
Oct  7 09:34:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8fe3a15b-284b-46aa-bd88-1ceab54e500f does not exist
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:34:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:34:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e87 do_prune osdmap full prune enabled
Oct  7 09:34:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:34:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:34:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:34:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e88 e88: 3 total, 3 up, 3 in
Oct  7 09:34:36 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e88: 3 total, 3 up, 3 in
Oct  7 09:34:36 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 88 pg[9.13( v 42'385 (0'0,42'385] local-lis/les=87/88 n=5 ec=46/35 lis/c=85/54 les/c/f=86/55/0 sis=87) [2] r=0 lpr=87 pi=[54,87)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:36 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.d scrub starts
Oct  7 09:34:36 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.d scrub ok
Oct  7 09:34:36 np0005473739 podman[104861]: 2025-10-07 13:34:36.418102189 +0000 UTC m=+0.064267320 container create fdd3f619abf1c1d83d4b50e64e00e4c114f3918f19ab702313670e7910ee19e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jang, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:34:36 np0005473739 systemd[1]: Started libpod-conmon-fdd3f619abf1c1d83d4b50e64e00e4c114f3918f19ab702313670e7910ee19e5.scope.
Oct  7 09:34:36 np0005473739 podman[104861]: 2025-10-07 13:34:36.380893143 +0000 UTC m=+0.027058294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:34:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:34:36 np0005473739 podman[104861]: 2025-10-07 13:34:36.519542861 +0000 UTC m=+0.165708042 container init fdd3f619abf1c1d83d4b50e64e00e4c114f3918f19ab702313670e7910ee19e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jang, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:34:36 np0005473739 podman[104861]: 2025-10-07 13:34:36.533657514 +0000 UTC m=+0.179822645 container start fdd3f619abf1c1d83d4b50e64e00e4c114f3918f19ab702313670e7910ee19e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:34:36 np0005473739 podman[104861]: 2025-10-07 13:34:36.538426977 +0000 UTC m=+0.184592348 container attach fdd3f619abf1c1d83d4b50e64e00e4c114f3918f19ab702313670e7910ee19e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jang, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:34:36 np0005473739 priceless_jang[104878]: 167 167
Oct  7 09:34:36 np0005473739 systemd[1]: libpod-fdd3f619abf1c1d83d4b50e64e00e4c114f3918f19ab702313670e7910ee19e5.scope: Deactivated successfully.
Oct  7 09:34:36 np0005473739 conmon[104878]: conmon fdd3f619abf1c1d83d4b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fdd3f619abf1c1d83d4b50e64e00e4c114f3918f19ab702313670e7910ee19e5.scope/container/memory.events
Oct  7 09:34:36 np0005473739 podman[104861]: 2025-10-07 13:34:36.543070876 +0000 UTC m=+0.189236007 container died fdd3f619abf1c1d83d4b50e64e00e4c114f3918f19ab702313670e7910ee19e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jang, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:34:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v176: 305 pgs: 1 peering, 32 unknown, 272 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:34:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7fae1ec3c1c9a08900083ae7ca072fb7f9360bfd60f6f74803831848b2c14a7e-merged.mount: Deactivated successfully.
Oct  7 09:34:36 np0005473739 podman[104861]: 2025-10-07 13:34:36.601456491 +0000 UTC m=+0.247621622 container remove fdd3f619abf1c1d83d4b50e64e00e4c114f3918f19ab702313670e7910ee19e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:34:36 np0005473739 systemd[1]: libpod-conmon-fdd3f619abf1c1d83d4b50e64e00e4c114f3918f19ab702313670e7910ee19e5.scope: Deactivated successfully.
Oct  7 09:34:36 np0005473739 podman[104901]: 2025-10-07 13:34:36.792482177 +0000 UTC m=+0.075148692 container create ce653ff74637c13375243b7406de5ec3352018c7203fffe9e79d439953fa4787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:34:36 np0005473739 systemd[1]: Started libpod-conmon-ce653ff74637c13375243b7406de5ec3352018c7203fffe9e79d439953fa4787.scope.
Oct  7 09:34:36 np0005473739 podman[104901]: 2025-10-07 13:34:36.766499014 +0000 UTC m=+0.049165619 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:34:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:34:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62723db65b18458c442291300b0ff51d7bc45d93a02caeb7ceb79b051df12dac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62723db65b18458c442291300b0ff51d7bc45d93a02caeb7ceb79b051df12dac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62723db65b18458c442291300b0ff51d7bc45d93a02caeb7ceb79b051df12dac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62723db65b18458c442291300b0ff51d7bc45d93a02caeb7ceb79b051df12dac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62723db65b18458c442291300b0ff51d7bc45d93a02caeb7ceb79b051df12dac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:36 np0005473739 podman[104901]: 2025-10-07 13:34:36.895849314 +0000 UTC m=+0.178515839 container init ce653ff74637c13375243b7406de5ec3352018c7203fffe9e79d439953fa4787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:34:36 np0005473739 podman[104901]: 2025-10-07 13:34:36.921988332 +0000 UTC m=+0.204654847 container start ce653ff74637c13375243b7406de5ec3352018c7203fffe9e79d439953fa4787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:34:36 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.1 scrub starts
Oct  7 09:34:37 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.1 scrub ok
Oct  7 09:34:37 np0005473739 podman[104901]: 2025-10-07 13:34:37.081639075 +0000 UTC m=+0.364305590 container attach ce653ff74637c13375243b7406de5ec3352018c7203fffe9e79d439953fa4787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:34:37 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.4 scrub starts
Oct  7 09:34:37 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.4 scrub ok
Oct  7 09:34:37 np0005473739 dreamy_beaver[104918]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:34:37 np0005473739 dreamy_beaver[104918]: --> relative data size: 1.0
Oct  7 09:34:37 np0005473739 dreamy_beaver[104918]: --> All data devices are unavailable
Oct  7 09:34:37 np0005473739 systemd[1]: libpod-ce653ff74637c13375243b7406de5ec3352018c7203fffe9e79d439953fa4787.scope: Deactivated successfully.
Oct  7 09:34:37 np0005473739 systemd[1]: libpod-ce653ff74637c13375243b7406de5ec3352018c7203fffe9e79d439953fa4787.scope: Consumed 1.014s CPU time.
Oct  7 09:34:37 np0005473739 podman[104901]: 2025-10-07 13:34:37.991582038 +0000 UTC m=+1.274248553 container died ce653ff74637c13375243b7406de5ec3352018c7203fffe9e79d439953fa4787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:34:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-62723db65b18458c442291300b0ff51d7bc45d93a02caeb7ceb79b051df12dac-merged.mount: Deactivated successfully.
Oct  7 09:34:38 np0005473739 podman[104901]: 2025-10-07 13:34:38.066648776 +0000 UTC m=+1.349315291 container remove ce653ff74637c13375243b7406de5ec3352018c7203fffe9e79d439953fa4787 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_beaver, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:34:38 np0005473739 systemd[1]: libpod-conmon-ce653ff74637c13375243b7406de5ec3352018c7203fffe9e79d439953fa4787.scope: Deactivated successfully.
Oct  7 09:34:38 np0005473739 systemd[1]: session-35.scope: Deactivated successfully.
Oct  7 09:34:38 np0005473739 systemd[1]: session-35.scope: Consumed 8.027s CPU time.
Oct  7 09:34:38 np0005473739 systemd-logind[801]: Session 35 logged out. Waiting for processes to exit.
Oct  7 09:34:38 np0005473739 systemd-logind[801]: Removed session 35.
Oct  7 09:34:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v177: 305 pgs: 1 unknown, 304 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:34:38 np0005473739 podman[105129]: 2025-10-07 13:34:38.648117649 +0000 UTC m=+0.036642462 container create 513cbea4d140a8a05bf3045ee6176dd1a1331798bceb04bd50da4a290671a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:34:38 np0005473739 systemd[1]: Started libpod-conmon-513cbea4d140a8a05bf3045ee6176dd1a1331798bceb04bd50da4a290671a046.scope.
Oct  7 09:34:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:34:38 np0005473739 podman[105129]: 2025-10-07 13:34:38.722994302 +0000 UTC m=+0.111519135 container init 513cbea4d140a8a05bf3045ee6176dd1a1331798bceb04bd50da4a290671a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 09:34:38 np0005473739 podman[105129]: 2025-10-07 13:34:38.728819374 +0000 UTC m=+0.117344187 container start 513cbea4d140a8a05bf3045ee6176dd1a1331798bceb04bd50da4a290671a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:34:38 np0005473739 podman[105129]: 2025-10-07 13:34:38.633340737 +0000 UTC m=+0.021865570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:34:38 np0005473739 podman[105129]: 2025-10-07 13:34:38.732442566 +0000 UTC m=+0.120967399 container attach 513cbea4d140a8a05bf3045ee6176dd1a1331798bceb04bd50da4a290671a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 09:34:38 np0005473739 crazy_lumiere[105145]: 167 167
Oct  7 09:34:38 np0005473739 systemd[1]: libpod-513cbea4d140a8a05bf3045ee6176dd1a1331798bceb04bd50da4a290671a046.scope: Deactivated successfully.
Oct  7 09:34:38 np0005473739 conmon[105145]: conmon 513cbea4d140a8a05bf3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-513cbea4d140a8a05bf3045ee6176dd1a1331798bceb04bd50da4a290671a046.scope/container/memory.events
Oct  7 09:34:38 np0005473739 podman[105129]: 2025-10-07 13:34:38.734268626 +0000 UTC m=+0.122793449 container died 513cbea4d140a8a05bf3045ee6176dd1a1331798bceb04bd50da4a290671a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507)
Oct  7 09:34:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2316826e3568e6a355783c66674f6e048551556e2274afb55146568a68ea31f5-merged.mount: Deactivated successfully.
Oct  7 09:34:38 np0005473739 podman[105129]: 2025-10-07 13:34:38.769030564 +0000 UTC m=+0.157555377 container remove 513cbea4d140a8a05bf3045ee6176dd1a1331798bceb04bd50da4a290671a046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lumiere, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 09:34:38 np0005473739 systemd[1]: libpod-conmon-513cbea4d140a8a05bf3045ee6176dd1a1331798bceb04bd50da4a290671a046.scope: Deactivated successfully.
Oct  7 09:34:38 np0005473739 podman[105167]: 2025-10-07 13:34:38.898863237 +0000 UTC m=+0.033192435 container create b43faa696f366e95b56aa22957ee2147264bb0ba56b5b883a1101d0dd85b947b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 09:34:38 np0005473739 systemd[1]: Started libpod-conmon-b43faa696f366e95b56aa22957ee2147264bb0ba56b5b883a1101d0dd85b947b.scope.
Oct  7 09:34:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:34:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0e149d6e2e078971cb94702a4c8fb9d4c35fea9d65de7f7cd8c1fdfaebdabe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0e149d6e2e078971cb94702a4c8fb9d4c35fea9d65de7f7cd8c1fdfaebdabe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0e149d6e2e078971cb94702a4c8fb9d4c35fea9d65de7f7cd8c1fdfaebdabe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8c0e149d6e2e078971cb94702a4c8fb9d4c35fea9d65de7f7cd8c1fdfaebdabe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:38 np0005473739 podman[105167]: 2025-10-07 13:34:38.965906433 +0000 UTC m=+0.100235641 container init b43faa696f366e95b56aa22957ee2147264bb0ba56b5b883a1101d0dd85b947b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct  7 09:34:38 np0005473739 podman[105167]: 2025-10-07 13:34:38.97265092 +0000 UTC m=+0.106980118 container start b43faa696f366e95b56aa22957ee2147264bb0ba56b5b883a1101d0dd85b947b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 09:34:38 np0005473739 podman[105167]: 2025-10-07 13:34:38.97624153 +0000 UTC m=+0.110570728 container attach b43faa696f366e95b56aa22957ee2147264bb0ba56b5b883a1101d0dd85b947b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:34:38 np0005473739 podman[105167]: 2025-10-07 13:34:38.884821816 +0000 UTC m=+0.019151034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]: {
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:    "0": [
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:        {
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "devices": [
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "/dev/loop3"
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            ],
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_name": "ceph_lv0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_size": "21470642176",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "name": "ceph_lv0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "tags": {
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.cluster_name": "ceph",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.crush_device_class": "",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.encrypted": "0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.osd_id": "0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.type": "block",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.vdo": "0"
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            },
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "type": "block",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "vg_name": "ceph_vg0"
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:        }
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:    ],
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:    "1": [
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:        {
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "devices": [
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "/dev/loop4"
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            ],
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_name": "ceph_lv1",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_size": "21470642176",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "name": "ceph_lv1",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "tags": {
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.cluster_name": "ceph",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.crush_device_class": "",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.encrypted": "0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.osd_id": "1",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.type": "block",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.vdo": "0"
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            },
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "type": "block",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "vg_name": "ceph_vg1"
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:        }
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:    ],
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:    "2": [
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:        {
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "devices": [
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "/dev/loop5"
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            ],
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_name": "ceph_lv2",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_size": "21470642176",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "name": "ceph_lv2",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "tags": {
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.cluster_name": "ceph",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.crush_device_class": "",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.encrypted": "0",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.osd_id": "2",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.type": "block",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:                "ceph.vdo": "0"
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            },
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "type": "block",
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:            "vg_name": "ceph_vg2"
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:        }
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]:    ]
Oct  7 09:34:39 np0005473739 inspiring_cannon[105183]: }
Oct  7 09:34:39 np0005473739 systemd[1]: libpod-b43faa696f366e95b56aa22957ee2147264bb0ba56b5b883a1101d0dd85b947b.scope: Deactivated successfully.
Oct  7 09:34:39 np0005473739 podman[105167]: 2025-10-07 13:34:39.712523711 +0000 UTC m=+0.846852919 container died b43faa696f366e95b56aa22957ee2147264bb0ba56b5b883a1101d0dd85b947b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:34:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8c0e149d6e2e078971cb94702a4c8fb9d4c35fea9d65de7f7cd8c1fdfaebdabe-merged.mount: Deactivated successfully.
Oct  7 09:34:39 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.b scrub starts
Oct  7 09:34:39 np0005473739 podman[105167]: 2025-10-07 13:34:39.784242906 +0000 UTC m=+0.918572114 container remove b43faa696f366e95b56aa22957ee2147264bb0ba56b5b883a1101d0dd85b947b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 09:34:39 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.b scrub ok
Oct  7 09:34:39 np0005473739 systemd[1]: libpod-conmon-b43faa696f366e95b56aa22957ee2147264bb0ba56b5b883a1101d0dd85b947b.scope: Deactivated successfully.
Oct  7 09:34:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e88 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:40 np0005473739 podman[105347]: 2025-10-07 13:34:40.324363368 +0000 UTC m=+0.034528882 container create c921d2273c3244dfe72762564c73e3df23a9bfa1d5e0356db18302c48c99e296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:34:40 np0005473739 systemd[1]: Started libpod-conmon-c921d2273c3244dfe72762564c73e3df23a9bfa1d5e0356db18302c48c99e296.scope.
Oct  7 09:34:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:34:40 np0005473739 podman[105347]: 2025-10-07 13:34:40.391918728 +0000 UTC m=+0.102084252 container init c921d2273c3244dfe72762564c73e3df23a9bfa1d5e0356db18302c48c99e296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 09:34:40 np0005473739 podman[105347]: 2025-10-07 13:34:40.399016205 +0000 UTC m=+0.109181719 container start c921d2273c3244dfe72762564c73e3df23a9bfa1d5e0356db18302c48c99e296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 09:34:40 np0005473739 optimistic_elion[105363]: 167 167
Oct  7 09:34:40 np0005473739 podman[105347]: 2025-10-07 13:34:40.401731751 +0000 UTC m=+0.111897265 container attach c921d2273c3244dfe72762564c73e3df23a9bfa1d5e0356db18302c48c99e296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 09:34:40 np0005473739 podman[105347]: 2025-10-07 13:34:40.40313006 +0000 UTC m=+0.113295584 container died c921d2273c3244dfe72762564c73e3df23a9bfa1d5e0356db18302c48c99e296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 09:34:40 np0005473739 systemd[1]: libpod-c921d2273c3244dfe72762564c73e3df23a9bfa1d5e0356db18302c48c99e296.scope: Deactivated successfully.
Oct  7 09:34:40 np0005473739 podman[105347]: 2025-10-07 13:34:40.308869387 +0000 UTC m=+0.019034921 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:34:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b543edd8f97a2d7dfc6d577d60470f9b0d49af08a550cf840b24afafda0dde12-merged.mount: Deactivated successfully.
Oct  7 09:34:40 np0005473739 podman[105347]: 2025-10-07 13:34:40.450544759 +0000 UTC m=+0.160710273 container remove c921d2273c3244dfe72762564c73e3df23a9bfa1d5e0356db18302c48c99e296 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_elion, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 09:34:40 np0005473739 systemd[1]: libpod-conmon-c921d2273c3244dfe72762564c73e3df23a9bfa1d5e0356db18302c48c99e296.scope: Deactivated successfully.
Oct  7 09:34:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v178: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 0 B/s wr, 5 op/s; 36 B/s, 1 objects/s recovering
Oct  7 09:34:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"} v 0) v1
Oct  7 09:34:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  7 09:34:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  7 09:34:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:34:40 np0005473739 podman[105387]: 2025-10-07 13:34:40.588264052 +0000 UTC m=+0.038860922 container create ef208e825c4d41231bab1b6539b7c6282240fb18563d1680f2e9cf957011ce3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 09:34:40 np0005473739 systemd[1]: Started libpod-conmon-ef208e825c4d41231bab1b6539b7c6282240fb18563d1680f2e9cf957011ce3a.scope.
Oct  7 09:34:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:34:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e332e708515dffe83ae6a77c9978a9d620cb538102700af5aff22b9eec4f0e27/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e332e708515dffe83ae6a77c9978a9d620cb538102700af5aff22b9eec4f0e27/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e332e708515dffe83ae6a77c9978a9d620cb538102700af5aff22b9eec4f0e27/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e332e708515dffe83ae6a77c9978a9d620cb538102700af5aff22b9eec4f0e27/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:34:40 np0005473739 podman[105387]: 2025-10-07 13:34:40.667831716 +0000 UTC m=+0.118428606 container init ef208e825c4d41231bab1b6539b7c6282240fb18563d1680f2e9cf957011ce3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:34:40 np0005473739 podman[105387]: 2025-10-07 13:34:40.572726409 +0000 UTC m=+0.023323299 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:34:40 np0005473739 podman[105387]: 2025-10-07 13:34:40.677968188 +0000 UTC m=+0.128565058 container start ef208e825c4d41231bab1b6539b7c6282240fb18563d1680f2e9cf957011ce3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 09:34:40 np0005473739 podman[105387]: 2025-10-07 13:34:40.682362641 +0000 UTC m=+0.132959541 container attach ef208e825c4d41231bab1b6539b7c6282240fb18563d1680f2e9cf957011ce3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:34:40 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.7 scrub starts
Oct  7 09:34:40 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.7 scrub ok
Oct  7 09:34:40 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.3 scrub starts
Oct  7 09:34:40 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.3 scrub ok
Oct  7 09:34:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e88 do_prune osdmap full prune enabled
Oct  7 09:34:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]: dispatch
Oct  7 09:34:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:34:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  7 09:34:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:34:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e89 e89: 3 total, 3 up, 3 in
Oct  7 09:34:41 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e89: 3 total, 3 up, 3 in
Oct  7 09:34:41 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.e scrub starts
Oct  7 09:34:41 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.e scrub ok
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]: {
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "osd_id": 2,
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "type": "bluestore"
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:    },
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "osd_id": 1,
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "type": "bluestore"
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:    },
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "osd_id": 0,
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:        "type": "bluestore"
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]:    }
Oct  7 09:34:41 np0005473739 vibrant_blackwell[105403]: }
Oct  7 09:34:41 np0005473739 systemd[1]: libpod-ef208e825c4d41231bab1b6539b7c6282240fb18563d1680f2e9cf957011ce3a.scope: Deactivated successfully.
Oct  7 09:34:41 np0005473739 podman[105387]: 2025-10-07 13:34:41.6280921 +0000 UTC m=+1.078688970 container died ef208e825c4d41231bab1b6539b7c6282240fb18563d1680f2e9cf957011ce3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:34:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e332e708515dffe83ae6a77c9978a9d620cb538102700af5aff22b9eec4f0e27-merged.mount: Deactivated successfully.
Oct  7 09:34:41 np0005473739 podman[105387]: 2025-10-07 13:34:41.680975452 +0000 UTC m=+1.131572322 container remove ef208e825c4d41231bab1b6539b7c6282240fb18563d1680f2e9cf957011ce3a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_blackwell, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:34:41 np0005473739 systemd[1]: libpod-conmon-ef208e825c4d41231bab1b6539b7c6282240fb18563d1680f2e9cf957011ce3a.scope: Deactivated successfully.
Oct  7 09:34:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:34:41 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.11 scrub starts
Oct  7 09:34:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:34:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:34:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:34:41 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.11 scrub ok
Oct  7 09:34:41 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev fd0ff800-abf7-40ee-94c5-ff2e1f7306c8 does not exist
Oct  7 09:34:41 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c9401eaf-6022-4d84-aab2-46330aec2967 does not exist
Oct  7 09:34:41 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.5 scrub starts
Oct  7 09:34:41 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.5 scrub ok
Oct  7 09:34:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "21"}]': finished
Oct  7 09:34:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.meta", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:34:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:34:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:34:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v180: 305 pgs: 305 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 1.8 KiB/s rd, 0 B/s wr, 4 op/s; 30 B/s, 1 objects/s recovering
Oct  7 09:34:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"} v 0) v1
Oct  7 09:34:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  7 09:34:42 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.8 scrub starts
Oct  7 09:34:42 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.8 scrub ok
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.10 scrub starts
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.10 scrub ok
Oct  7 09:34:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e89 do_prune osdmap full prune enabled
Oct  7 09:34:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  7 09:34:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e90 e90: 3 total, 3 up, 3 in
Oct  7 09:34:43 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e90: 3 total, 3 up, 3 in
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.19( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.452826500s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.663024902s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.17( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.457252502s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667495728s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.19( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.452761650s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.663024902s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.15( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.458007812s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.668319702s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.17( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.457176208s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667495728s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.14( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.457228661s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667617798s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.14( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.457176208s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667617798s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.12( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456939697s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667541504s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.15( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.457786560s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.668319702s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.12( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456842422s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667541504s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.11( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456944466s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667678833s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.11( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456899643s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667678833s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.10( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456808090s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667633057s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.10( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456774712s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667633057s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.f( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456713676s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667587280s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.e( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456700325s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667617798s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.d( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456697464s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667617798s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.e( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456676483s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667617798s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.f( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456668854s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667587280s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.d( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456676483s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667617798s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.b( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456553459s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667633057s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.9( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456544876s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667709351s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.2( v 71'2 (0'0,71'2] local-lis/les=84/85 n=1 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456616402s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667846680s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.9( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456503868s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667709351s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.b( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456455231s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667633057s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.2( v 71'2 (0'0,71'2] local-lis/les=84/85 n=1 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456594467s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667846680s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.3( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456434250s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667770386s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.8( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456443787s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667907715s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.3( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456395149s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667770386s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.8( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456419945s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667907715s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.1( v 71'2 (0'0,71'2] local-lis/les=84/85 n=1 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456584930s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.668182373s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.1( v 71'2 (0'0,71'2] local-lis/les=84/85 n=1 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456554413s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.668182373s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.4( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456300735s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.667999268s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.4( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456271172s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.667999268s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.6( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456429482s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.668197632s@ mbc={}] start_peering_interval up [1] -> [0], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.6( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456400871s) [0] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.668197632s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]: dispatch
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.18( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456455231s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.668411255s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.18( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456430435s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.668411255s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.1c( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456080437s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.668212891s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.1c( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456057549s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.668212891s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.1b( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456047058s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.668197632s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.1a( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.456025124s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.668228149s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.1e( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.455980301s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.668228149s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.1a( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.455994606s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.668228149s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.1e( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.455958366s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.668228149s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.1b( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.455982208s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.668197632s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 89 pg[11.1f( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.455913544s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active pruub 159.668273926s@ mbc={}] start_peering_interval up [1] -> [2], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[11.1f( v 71'2 (0'0,71'2] local-lis/les=84/85 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89 pruub=14.455881119s) [2] r=-1 lpr=89 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 unknown NOTIFY pruub 159.668273926s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.11( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.1e( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.1f( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 90 pg[11.17( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.1c( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 90 pg[11.19( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.1a( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 90 pg[11.1( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.1b( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 90 pg[11.f( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.18( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 90 pg[11.e( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.2( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.9( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.8( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.b( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 90 pg[11.6( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.d( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 90 pg[11.14( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.3( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 90 pg[11.4( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.12( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 90 pg[11.15( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 90 pg[11.10( empty local-lis/les=0/0 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 90 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=90 pruub=14.775881767s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=42'385 mlcod 0'0 active pruub 164.412048340s@ mbc={}] start_peering_interval up [0] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 0 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:43 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 90 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=90 pruub=14.775843620s) [1] r=-1 lpr=90 pi=[54,90)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 164.412048340s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 90 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=90) [1] r=0 lpr=90 pi=[54,90)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.7 scrub starts
Oct  7 09:34:43 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.7 scrub ok
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.17 deep-scrub starts
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.17 deep-scrub ok
Oct  7 09:34:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e90 do_prune osdmap full prune enabled
Oct  7 09:34:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e91 e91: 3 total, 3 up, 3 in
Oct  7 09:34:44 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e91: 3 total, 3 up, 3 in
Oct  7 09:34:44 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "22"}]': finished
Oct  7 09:34:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 91 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=91) [1]/[0] r=0 lpr=91 pi=[54,91)/1 crt=42'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 91 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=54/55 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=91) [1]/[0] r=0 lpr=91 pi=[54,91)/1 crt=42'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 91 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=91) [1]/[0] r=-1 lpr=91 pi=[54,91)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [0], acting_primary 1 -> 0, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:44 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 91 pg[9.15( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=91) [1]/[0] r=-1 lpr=91 pi=[54,91)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 91 pg[11.10( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.12( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.11( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.d( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.15( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 91 pg[11.4( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 91 pg[11.14( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 91 pg[11.6( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 91 pg[11.e( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 91 pg[11.f( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 91 pg[11.1( v 71'2 (0'0,71'2] local-lis/les=89/91 n=1 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 91 pg[11.19( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 91 pg[11.17( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [0] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.8( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.9( v 71'2 lc 0'0 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 mlcod 0'0 active+degraded m=1 mbc={255={(0+1)=1}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.1a( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.3( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.1b( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.2( v 71'2 (0'0,71'2] local-lis/les=89/91 n=1 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.1f( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.1e( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.18( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.1c( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 91 pg[11.b( v 71'2 (0'0,71'2] local-lis/les=89/91 n=0 ec=84/39 lis/c=84/84 les/c/f=85/85/0 sis=89) [2] r=0 lpr=90 pi=[84,89)/1 crt=71'2 lcod 0'0 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v183: 305 pgs: 15 peering, 1 unknown, 289 active+clean; 456 KiB data, 104 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 0 objects/s recovering
Oct  7 09:34:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e91 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e91 do_prune osdmap full prune enabled
Oct  7 09:34:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e92 e92: 3 total, 3 up, 3 in
Oct  7 09:34:45 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e92: 3 total, 3 up, 3 in
Oct  7 09:34:45 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 92 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=91/92 n=5 ec=46/35 lis/c=54/54 les/c/f=55/55/0 sis=91) [1]/[0] async=[1] r=0 lpr=91 pi=[54,91)/1 crt=42'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e92 do_prune osdmap full prune enabled
Oct  7 09:34:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e93 e93: 3 total, 3 up, 3 in
Oct  7 09:34:46 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e93: 3 total, 3 up, 3 in
Oct  7 09:34:46 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 93 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=91/92 n=5 ec=46/35 lis/c=91/54 les/c/f=92/55/0 sis=93 pruub=15.455674171s) [1] async=[1] r=-1 lpr=93 pi=[54,93)/1 crt=42'385 mlcod 42'385 active pruub 168.123443604s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:46 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 93 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=91/92 n=5 ec=46/35 lis/c=91/54 les/c/f=92/55/0 sis=93 pruub=15.455595970s) [1] r=-1 lpr=93 pi=[54,93)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 168.123443604s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:46 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 93 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=91/54 les/c/f=92/55/0 sis=93) [1] r=0 lpr=93 pi=[54,93)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [0] -> [1], acting_primary 0 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:46 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 93 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=91/54 les/c/f=92/55/0 sis=93) [1] r=0 lpr=93 pi=[54,93)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v186: 305 pgs: 15 peering, 1 unknown, 289 active+clean; 456 KiB data, 121 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:34:46 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.9 scrub starts
Oct  7 09:34:46 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.9 scrub ok
Oct  7 09:34:46 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.8 scrub starts
Oct  7 09:34:46 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.8 scrub ok
Oct  7 09:34:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e93 do_prune osdmap full prune enabled
Oct  7 09:34:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e94 e94: 3 total, 3 up, 3 in
Oct  7 09:34:47 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e94: 3 total, 3 up, 3 in
Oct  7 09:34:47 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 94 pg[9.15( v 42'385 (0'0,42'385] local-lis/les=93/94 n=5 ec=46/35 lis/c=91/54 les/c/f=92/55/0 sis=93) [1] r=0 lpr=93 pi=[54,93)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:47 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.a scrub starts
Oct  7 09:34:47 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.a scrub ok
Oct  7 09:34:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v188: 305 pgs: 15 peering, 290 active+clean; 456 KiB data, 122 MiB used, 60 GiB / 60 GiB avail; 3.4 KiB/s rd, 247 B/s wr, 8 op/s; 53 B/s, 1 objects/s recovering
Oct  7 09:34:48 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.5 scrub starts
Oct  7 09:34:48 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.5 scrub ok
Oct  7 09:34:49 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.2 scrub starts
Oct  7 09:34:49 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.2 scrub ok
Oct  7 09:34:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e94 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:50 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.1b scrub starts
Oct  7 09:34:50 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.1b scrub ok
Oct  7 09:34:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v189: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 97 B/s, 1 objects/s recovering
Oct  7 09:34:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"} v 0) v1
Oct  7 09:34:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  7 09:34:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e94 do_prune osdmap full prune enabled
Oct  7 09:34:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  7 09:34:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e95 e95: 3 total, 3 up, 3 in
Oct  7 09:34:50 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e95: 3 total, 3 up, 3 in
Oct  7 09:34:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]: dispatch
Oct  7 09:34:51 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.1c scrub starts
Oct  7 09:34:51 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.1c scrub ok
Oct  7 09:34:51 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 95 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=64/65 n=5 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=95 pruub=9.266645432s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'385 mlcod 0'0 active pruub 158.296997070s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:51 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 95 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=64/65 n=5 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=95 pruub=9.265967369s) [0] r=-1 lpr=95 pi=[64,95)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 158.296997070s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:51 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 95 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=95) [0] r=0 lpr=95 pi=[64,95)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e95 do_prune osdmap full prune enabled
Oct  7 09:34:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "23"}]': finished
Oct  7 09:34:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e96 e96: 3 total, 3 up, 3 in
Oct  7 09:34:51 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e96: 3 total, 3 up, 3 in
Oct  7 09:34:51 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[64,96)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:51 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 96 pg[9.16( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=96) [0]/[2] r=-1 lpr=96 pi=[64,96)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:51 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 96 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=64/65 n=5 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=96) [0]/[2] r=0 lpr=96 pi=[64,96)/1 crt=42'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:51 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 96 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=64/65 n=5 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=96) [0]/[2] r=0 lpr=96 pi=[64,96)/1 crt=42'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:52 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.13 scrub starts
Oct  7 09:34:52 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.13 scrub ok
Oct  7 09:34:52 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.1f deep-scrub starts
Oct  7 09:34:52 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 5.1f deep-scrub ok
Oct  7 09:34:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v192: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 2.3 KiB/s rd, 170 B/s wr, 5 op/s; 97 B/s, 1 objects/s recovering
Oct  7 09:34:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"} v 0) v1
Oct  7 09:34:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  7 09:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:34:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e96 do_prune osdmap full prune enabled
Oct  7 09:34:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  7 09:34:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e97 e97: 3 total, 3 up, 3 in
Oct  7 09:34:52 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]: dispatch
Oct  7 09:34:52 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e97: 3 total, 3 up, 3 in
Oct  7 09:34:53 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.16 scrub starts
Oct  7 09:34:53 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.16 scrub ok
Oct  7 09:34:53 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 97 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=96/97 n=5 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=96) [0]/[2] async=[0] r=0 lpr=96 pi=[64,96)/1 crt=42'385 mlcod 0'0 active+remapped mbc={255={(0+1)=4}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e97 do_prune osdmap full prune enabled
Oct  7 09:34:53 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "24"}]': finished
Oct  7 09:34:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e98 e98: 3 total, 3 up, 3 in
Oct  7 09:34:53 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e98: 3 total, 3 up, 3 in
Oct  7 09:34:53 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 98 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=96/97 n=5 ec=46/35 lis/c=96/64 les/c/f=97/65/0 sis=98 pruub=15.872638702s) [0] async=[0] r=-1 lpr=98 pi=[64,98)/1 crt=42'385 mlcod 42'385 active pruub 167.271240234s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:53 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 98 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=96/97 n=5 ec=46/35 lis/c=96/64 les/c/f=97/65/0 sis=98 pruub=15.872570038s) [0] r=-1 lpr=98 pi=[64,98)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 167.271240234s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 98 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=96/64 les/c/f=97/65/0 sis=98) [0] r=0 lpr=98 pi=[64,98)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:53 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 98 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=96/64 les/c/f=97/65/0 sis=98) [0] r=0 lpr=98 pi=[64,98)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:53 np0005473739 systemd-logind[801]: New session 36 of user zuul.
Oct  7 09:34:53 np0005473739 systemd[1]: Started Session 36 of User zuul.
Oct  7 09:34:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v195: 305 pgs: 1 activating+remapped, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 4/211 objects misplaced (1.896%)
Oct  7 09:34:54 np0005473739 python3.9[105651]: ansible-ansible.legacy.ping Invoked with data=pong
Oct  7 09:34:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e98 do_prune osdmap full prune enabled
Oct  7 09:34:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e99 e99: 3 total, 3 up, 3 in
Oct  7 09:34:54 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e99: 3 total, 3 up, 3 in
Oct  7 09:34:54 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 99 pg[9.16( v 42'385 (0'0,42'385] local-lis/les=98/99 n=5 ec=46/35 lis/c=96/64 les/c/f=97/65/0 sis=98) [0] r=0 lpr=98 pi=[64,98)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:34:55 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.17 deep-scrub starts
Oct  7 09:34:55 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.17 deep-scrub ok
Oct  7 09:34:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e99 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:34:55 np0005473739 python3.9[105825]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:34:56 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.3 scrub starts
Oct  7 09:34:56 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.3 scrub ok
Oct  7 09:34:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v197: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 0 objects/s recovering
Oct  7 09:34:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"} v 0) v1
Oct  7 09:34:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  7 09:34:56 np0005473739 python3.9[105981]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:34:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e99 do_prune osdmap full prune enabled
Oct  7 09:34:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  7 09:34:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e100 e100: 3 total, 3 up, 3 in
Oct  7 09:34:56 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e100: 3 total, 3 up, 3 in
Oct  7 09:34:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]: dispatch
Oct  7 09:34:57 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.5 scrub starts
Oct  7 09:34:57 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.5 scrub ok
Oct  7 09:34:57 np0005473739 python3.9[106134]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:34:57 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "25"}]': finished
Oct  7 09:34:58 np0005473739 python3.9[106288]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:34:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v199: 305 pgs: 305 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Oct  7 09:34:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"} v 0) v1
Oct  7 09:34:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  7 09:34:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e100 do_prune osdmap full prune enabled
Oct  7 09:34:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]: dispatch
Oct  7 09:34:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  7 09:34:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e101 e101: 3 total, 3 up, 3 in
Oct  7 09:34:58 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e101: 3 total, 3 up, 3 in
Oct  7 09:34:58 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 101 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=101 pruub=14.259394646s) [2] r=-1 lpr=101 pi=[53,101)/1 crt=42'385 mlcod 0'0 active pruub 179.414352417s@ mbc={}] start_peering_interval up [0] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:58 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 101 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=101 pruub=14.259306908s) [2] r=-1 lpr=101 pi=[53,101)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 179.414352417s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:58 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 101 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=101) [2] r=0 lpr=101 pi=[53,101)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:34:59 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.19 deep-scrub starts
Oct  7 09:34:59 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.19 deep-scrub ok
Oct  7 09:34:59 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.a scrub starts
Oct  7 09:34:59 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.a scrub ok
Oct  7 09:34:59 np0005473739 python3.9[106438]: ansible-ansible.builtin.service_facts Invoked
Oct  7 09:34:59 np0005473739 network[106455]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  7 09:34:59 np0005473739 network[106456]: 'network-scripts' will be removed from distribution in near future.
Oct  7 09:34:59 np0005473739 network[106457]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  7 09:34:59 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.4 scrub starts
Oct  7 09:34:59 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.4 scrub ok
Oct  7 09:34:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e101 do_prune osdmap full prune enabled
Oct  7 09:34:59 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "26"}]': finished
Oct  7 09:34:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e102 e102: 3 total, 3 up, 3 in
Oct  7 09:34:59 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e102: 3 total, 3 up, 3 in
Oct  7 09:34:59 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[53,102)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:59 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 102 pg[9.19( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=102) [2]/[0] r=-1 lpr=102 pi=[53,102)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:34:59 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 102 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=102) [2]/[0] r=0 lpr=102 pi=[53,102)/1 crt=42'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [2] -> [2], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:34:59 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 102 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=53/54 n=5 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=102) [2]/[0] r=0 lpr=102 pi=[53,102)/1 crt=42'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:35:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e102 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:35:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v202: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 0 objects/s recovering
Oct  7 09:35:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e102 do_prune osdmap full prune enabled
Oct  7 09:35:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e103 e103: 3 total, 3 up, 3 in
Oct  7 09:35:00 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e103: 3 total, 3 up, 3 in
Oct  7 09:35:01 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.c scrub starts
Oct  7 09:35:01 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 103 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=102/103 n=5 ec=46/35 lis/c=53/53 les/c/f=54/54/0 sis=102) [2]/[0] async=[2] r=0 lpr=102 pi=[53,102)/1 crt=42'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:35:01 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.c scrub ok
Oct  7 09:35:01 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.3 scrub starts
Oct  7 09:35:01 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.3 scrub ok
Oct  7 09:35:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e103 do_prune osdmap full prune enabled
Oct  7 09:35:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e104 e104: 3 total, 3 up, 3 in
Oct  7 09:35:01 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e104: 3 total, 3 up, 3 in
Oct  7 09:35:02 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 104 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=102/103 n=5 ec=46/35 lis/c=102/53 les/c/f=103/54/0 sis=104 pruub=15.118346214s) [2] async=[2] r=-1 lpr=104 pi=[53,104)/1 crt=42'385 mlcod 42'385 active pruub 183.333465576s@ mbc={255={}}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:02 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 104 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=102/103 n=5 ec=46/35 lis/c=102/53 les/c/f=103/54/0 sis=104 pruub=15.118133545s) [2] r=-1 lpr=104 pi=[53,104)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 183.333465576s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:35:02 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 104 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=102/53 les/c/f=103/54/0 sis=104) [2] r=0 lpr=104 pi=[53,104)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [2] -> [2], acting [0] -> [2], acting_primary 0 -> 2, up_primary 2 -> 2, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:02 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 104 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=102/53 les/c/f=103/54/0 sis=104) [2] r=0 lpr=104 pi=[53,104)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:35:02 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.4 scrub starts
Oct  7 09:35:02 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.4 scrub ok
Oct  7 09:35:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v205: 305 pgs: 1 remapped+peering, 304 active+clean; 456 KiB data, 139 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:02 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.2 deep-scrub starts
Oct  7 09:35:02 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 5.2 deep-scrub ok
Oct  7 09:35:02 np0005473739 python3.9[106719]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:35:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e104 do_prune osdmap full prune enabled
Oct  7 09:35:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e105 e105: 3 total, 3 up, 3 in
Oct  7 09:35:03 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e105: 3 total, 3 up, 3 in
Oct  7 09:35:03 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 105 pg[9.19( v 42'385 (0'0,42'385] local-lis/les=104/105 n=5 ec=46/35 lis/c=102/53 les/c/f=103/54/0 sis=104) [2] r=0 lpr=104 pi=[53,104)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:35:03 np0005473739 python3.9[106869]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:35:04 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.18 scrub starts
Oct  7 09:35:04 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.18 scrub ok
Oct  7 09:35:04 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.1e scrub starts
Oct  7 09:35:04 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 8.1e scrub ok
Oct  7 09:35:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v207: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 23 B/s, 1 objects/s recovering
Oct  7 09:35:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"} v 0) v1
Oct  7 09:35:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  7 09:35:04 np0005473739 python3.9[107023]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:35:04 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.7 scrub starts
Oct  7 09:35:04 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.7 scrub ok
Oct  7 09:35:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e105 do_prune osdmap full prune enabled
Oct  7 09:35:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  7 09:35:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e106 e106: 3 total, 3 up, 3 in
Oct  7 09:35:05 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e106: 3 total, 3 up, 3 in
Oct  7 09:35:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]: dispatch
Oct  7 09:35:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e106 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:35:05 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.a scrub starts
Oct  7 09:35:05 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.a scrub ok
Oct  7 09:35:05 np0005473739 python3.9[107181]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:35:05 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.8 scrub starts
Oct  7 09:35:05 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.8 scrub ok
Oct  7 09:35:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "27"}]': finished
Oct  7 09:35:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v209: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 19 B/s, 1 objects/s recovering
Oct  7 09:35:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"} v 0) v1
Oct  7 09:35:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  7 09:35:06 np0005473739 python3.9[107265]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:35:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e106 do_prune osdmap full prune enabled
Oct  7 09:35:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  7 09:35:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e107 e107: 3 total, 3 up, 3 in
Oct  7 09:35:07 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e107: 3 total, 3 up, 3 in
Oct  7 09:35:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]: dispatch
Oct  7 09:35:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "28"}]': finished
Oct  7 09:35:08 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.10 scrub starts
Oct  7 09:35:08 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.10 scrub ok
Oct  7 09:35:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v211: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct  7 09:35:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"} v 0) v1
Oct  7 09:35:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  7 09:35:08 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.1 deep-scrub starts
Oct  7 09:35:08 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.1 deep-scrub ok
Oct  7 09:35:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e107 do_prune osdmap full prune enabled
Oct  7 09:35:09 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]: dispatch
Oct  7 09:35:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  7 09:35:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e108 e108: 3 total, 3 up, 3 in
Oct  7 09:35:09 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e108: 3 total, 3 up, 3 in
Oct  7 09:35:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "29"}]': finished
Oct  7 09:35:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e108 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:35:10 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.12 scrub starts
Oct  7 09:35:10 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.12 scrub ok
Oct  7 09:35:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v213: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"} v 0) v1
Oct  7 09:35:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  7 09:35:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e108 do_prune osdmap full prune enabled
Oct  7 09:35:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]: dispatch
Oct  7 09:35:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  7 09:35:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e109 e109: 3 total, 3 up, 3 in
Oct  7 09:35:11 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e109: 3 total, 3 up, 3 in
Oct  7 09:35:11 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 108 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=76/77 n=5 ec=46/35 lis/c=76/76 les/c/f=77/77/0 sis=108 pruub=15.649839401s) [0] r=-1 lpr=108 pi=[76,108)/1 crt=42'385 mlcod 0'0 active pruub 184.286972046s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:11 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 109 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=76/77 n=5 ec=46/35 lis/c=76/76 les/c/f=77/77/0 sis=108 pruub=15.649777412s) [0] r=-1 lpr=108 pi=[76,108)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 184.286972046s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:35:11 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 109 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=76/76 les/c/f=77/77/0 sis=108) [0] r=0 lpr=109 pi=[76,108)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:35:11 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.14 scrub starts
Oct  7 09:35:11 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.14 scrub ok
Oct  7 09:35:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e109 do_prune osdmap full prune enabled
Oct  7 09:35:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e110 e110: 3 total, 3 up, 3 in
Oct  7 09:35:12 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e110: 3 total, 3 up, 3 in
Oct  7 09:35:12 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 110 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=76/76 les/c/f=77/77/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[76,110)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:12 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 110 pg[9.1c( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=76/76 les/c/f=77/77/0 sis=110) [0]/[2] r=-1 lpr=110 pi=[76,110)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:35:12 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "30"}]': finished
Oct  7 09:35:12 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 110 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=76/77 n=5 ec=46/35 lis/c=76/76 les/c/f=77/77/0 sis=110) [0]/[2] r=0 lpr=110 pi=[76,110)/1 crt=42'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:12 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 110 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=76/77 n=5 ec=46/35 lis/c=76/76 les/c/f=77/77/0 sis=110) [0]/[2] r=0 lpr=110 pi=[76,110)/1 crt=42'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:35:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v216: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"} v 0) v1
Oct  7 09:35:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  7 09:35:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e110 do_prune osdmap full prune enabled
Oct  7 09:35:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  7 09:35:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e111 e111: 3 total, 3 up, 3 in
Oct  7 09:35:13 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e111: 3 total, 3 up, 3 in
Oct  7 09:35:13 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 111 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=64/65 n=5 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=111 pruub=11.564265251s) [0] r=-1 lpr=111 pi=[64,111)/1 crt=42'385 mlcod 0'0 active pruub 182.297592163s@ mbc={}] start_peering_interval up [2] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 2 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:13 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 111 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=64/65 n=5 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=111 pruub=11.564221382s) [0] r=-1 lpr=111 pi=[64,111)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 182.297592163s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:35:13 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]: dispatch
Oct  7 09:35:13 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 111 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=111) [0] r=0 lpr=111 pi=[64,111)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:35:13 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 111 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=110/111 n=5 ec=46/35 lis/c=76/76 les/c/f=77/77/0 sis=110) [0]/[2] async=[0] r=0 lpr=110 pi=[76,110)/1 crt=42'385 mlcod 0'0 active+remapped mbc={255={(0+1)=7}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:35:13 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.1a scrub starts
Oct  7 09:35:13 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.1a scrub ok
Oct  7 09:35:13 np0005473739 systemd[1]: packagekit.service: Deactivated successfully.
Oct  7 09:35:13 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.e scrub starts
Oct  7 09:35:13 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.e scrub ok
Oct  7 09:35:14 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.1b scrub starts
Oct  7 09:35:14 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.1b scrub ok
Oct  7 09:35:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e111 do_prune osdmap full prune enabled
Oct  7 09:35:14 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "31"}]': finished
Oct  7 09:35:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e112 e112: 3 total, 3 up, 3 in
Oct  7 09:35:14 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e112: 3 total, 3 up, 3 in
Oct  7 09:35:14 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 112 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=64/65 n=5 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=112) [0]/[2] r=0 lpr=112 pi=[64,112)/1 crt=42'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:14 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 112 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=110/111 n=5 ec=46/35 lis/c=110/76 les/c/f=111/77/0 sis=112 pruub=15.076240540s) [0] async=[0] r=-1 lpr=112 pi=[76,112)/1 crt=42'385 mlcod 42'385 active pruub 186.857666016s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:14 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 112 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=64/65 n=5 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=112) [0]/[2] r=0 lpr=112 pi=[64,112)/1 crt=42'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:35:14 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 112 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=110/111 n=5 ec=46/35 lis/c=110/76 les/c/f=111/77/0 sis=112 pruub=15.076127052s) [0] r=-1 lpr=112 pi=[76,112)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 186.857666016s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:35:14 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 112 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=110/76 les/c/f=111/77/0 sis=112) [0] r=0 lpr=112 pi=[76,112)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:14 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 112 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=110/76 les/c/f=111/77/0 sis=112) [0] r=0 lpr=112 pi=[76,112)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:35:14 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[64,112)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [0] -> [0], acting [0] -> [2], acting_primary 0 -> 2, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:14 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 112 pg[9.1e( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=112) [0]/[2] r=-1 lpr=112 pi=[64,112)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:35:14 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.10 scrub starts
Oct  7 09:35:14 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.10 scrub ok
Oct  7 09:35:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v219: 305 pgs: 1 active+remapped, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  7 09:35:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"} v 0) v1
Oct  7 09:35:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:35:14 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.1c scrub starts
Oct  7 09:35:14 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.1c scrub ok
Oct  7 09:35:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e112 do_prune osdmap full prune enabled
Oct  7 09:35:15 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]: dispatch
Oct  7 09:35:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:35:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e113 e113: 3 total, 3 up, 3 in
Oct  7 09:35:15 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e113: 3 total, 3 up, 3 in
Oct  7 09:35:15 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 113 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=67/68 n=5 ec=46/35 lis/c=67/67 les/c/f=68/68/0 sis=113 pruub=12.019315720s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=42'385 mlcod 0'0 active pruub 184.823669434s@ mbc={}] start_peering_interval up [2] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 2 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:15 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 113 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=67/68 n=5 ec=46/35 lis/c=67/67 les/c/f=68/68/0 sis=113 pruub=12.019255638s) [1] r=-1 lpr=113 pi=[67,113)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 184.823669434s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:35:15 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 113 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=67/67 les/c/f=68/68/0 sis=113) [1] r=0 lpr=113 pi=[67,113)/1 crt=0'0 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:35:15 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 113 pg[9.1c( v 42'385 (0'0,42'385] local-lis/les=112/113 n=5 ec=46/35 lis/c=110/76 les/c/f=111/77/0 sis=112) [0] r=0 lpr=112 pi=[76,112)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:35:15 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 113 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=112/113 n=5 ec=46/35 lis/c=64/64 les/c/f=65/65/0 sis=112) [0]/[2] async=[0] r=0 lpr=112 pi=[64,112)/1 crt=42'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:35:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e113 do_prune osdmap full prune enabled
Oct  7 09:35:16 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd='[{"prefix": "osd pool set", "pool": "default.rgw.log", "var": "pgp_num_actual", "val": "32"}]': finished
Oct  7 09:35:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e114 e114: 3 total, 3 up, 3 in
Oct  7 09:35:16 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e114: 3 total, 3 up, 3 in
Oct  7 09:35:16 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 114 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=67/68 n=5 ec=46/35 lis/c=67/67 les/c/f=68/68/0 sis=114) [1]/[2] r=0 lpr=114 pi=[67,114)/1 crt=42'385 mlcod 0'0 remapped NOTIFY mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:16 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 114 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=67/68 n=5 ec=46/35 lis/c=67/67 les/c/f=68/68/0 sis=114) [1]/[2] r=0 lpr=114 pi=[67,114)/1 crt=42'385 mlcod 0'0 remapped mbc={}] state<Start>: transitioning to Primary
Oct  7 09:35:16 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 114 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=112/113 n=5 ec=46/35 lis/c=112/64 les/c/f=113/65/0 sis=114 pruub=14.996724129s) [0] async=[0] r=-1 lpr=114 pi=[64,114)/1 crt=42'385 mlcod 42'385 active pruub 188.833023071s@ mbc={255={}}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:16 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 114 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=112/113 n=5 ec=46/35 lis/c=112/64 les/c/f=113/65/0 sis=114 pruub=14.996613503s) [0] r=-1 lpr=114 pi=[64,114)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 188.833023071s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:35:16 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=67/67 les/c/f=68/68/0 sis=114) [1]/[2] r=-1 lpr=114 pi=[67,114)/1 crt=0'0 mlcod 0'0 remapped mbc={}] start_peering_interval up [1] -> [1], acting [1] -> [2], acting_primary 1 -> 2, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:16 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 114 pg[9.1f( empty local-lis/les=0/0 n=0 ec=46/35 lis/c=67/67 les/c/f=68/68/0 sis=114) [1]/[2] r=-1 lpr=114 pi=[67,114)/1 crt=0'0 mlcod 0'0 remapped NOTIFY mbc={}] state<Start>: transitioning to Stray
Oct  7 09:35:16 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 114 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=112/64 les/c/f=113/65/0 sis=114) [0] r=0 lpr=114 pi=[64,114)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [0] -> [0], acting [2] -> [0], acting_primary 2 -> 0, up_primary 0 -> 0, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:16 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 114 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=112/64 les/c/f=113/65/0 sis=114) [0] r=0 lpr=114 pi=[64,114)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:35:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v222: 305 pgs: 305 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 1 objects/s recovering
Oct  7 09:35:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e114 do_prune osdmap full prune enabled
Oct  7 09:35:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e115 e115: 3 total, 3 up, 3 in
Oct  7 09:35:17 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e115: 3 total, 3 up, 3 in
Oct  7 09:35:17 np0005473739 ceph-osd[88039]: osd.0 pg_epoch: 115 pg[9.1e( v 42'385 (0'0,42'385] local-lis/les=114/115 n=5 ec=46/35 lis/c=112/64 les/c/f=113/65/0 sis=114) [0] r=0 lpr=114 pi=[64,114)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:35:17 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 115 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=114/115 n=5 ec=46/35 lis/c=67/67 les/c/f=68/68/0 sis=114) [1]/[2] async=[1] r=0 lpr=114 pi=[67,114)/1 crt=42'385 mlcod 0'0 active+remapped mbc={255={(0+1)=5}}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:35:17 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.1d scrub starts
Oct  7 09:35:17 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.1d scrub ok
Oct  7 09:35:17 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.1c scrub starts
Oct  7 09:35:17 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.1c scrub ok
Oct  7 09:35:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e115 do_prune osdmap full prune enabled
Oct  7 09:35:18 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.12 scrub starts
Oct  7 09:35:18 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.12 scrub ok
Oct  7 09:35:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e116 e116: 3 total, 3 up, 3 in
Oct  7 09:35:18 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e116: 3 total, 3 up, 3 in
Oct  7 09:35:18 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 116 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=114/115 n=5 ec=46/35 lis/c=114/67 les/c/f=115/68/0 sis=116 pruub=14.947045326s) [1] async=[1] r=-1 lpr=116 pi=[67,116)/1 crt=42'385 mlcod 42'385 active pruub 190.932678223s@ mbc={255={}}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role 0 -> -1, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:18 np0005473739 ceph-osd[90092]: osd.2 pg_epoch: 116 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=114/115 n=5 ec=46/35 lis/c=114/67 les/c/f=115/68/0 sis=116 pruub=14.946779251s) [1] r=-1 lpr=116 pi=[67,116)/1 crt=42'385 mlcod 0'0 unknown NOTIFY pruub 190.932678223s@ mbc={}] state<Start>: transitioning to Stray
Oct  7 09:35:18 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 116 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=114/67 les/c/f=115/68/0 sis=116) [1] r=0 lpr=116 pi=[67,116)/1 luod=0'0 crt=42'385 mlcod 0'0 active mbc={}] start_peering_interval up [1] -> [1], acting [2] -> [1], acting_primary 2 -> 1, up_primary 1 -> 1, role -1 -> 0, features acting 4540138322906710015 upacting 4540138322906710015
Oct  7 09:35:18 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 116 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=0/0 n=5 ec=46/35 lis/c=114/67 les/c/f=115/68/0 sis=116) [1] r=0 lpr=116 pi=[67,116)/1 crt=42'385 mlcod 0'0 unknown mbc={}] state<Start>: transitioning to Primary
Oct  7 09:35:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v225: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 140 MiB used, 60 GiB / 60 GiB avail; 27 B/s, 0 objects/s recovering
Oct  7 09:35:19 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.14 scrub starts
Oct  7 09:35:19 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.14 scrub ok
Oct  7 09:35:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e116 do_prune osdmap full prune enabled
Oct  7 09:35:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 e117: 3 total, 3 up, 3 in
Oct  7 09:35:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e117: 3 total, 3 up, 3 in
Oct  7 09:35:19 np0005473739 ceph-osd[89062]: osd.1 pg_epoch: 117 pg[9.1f( v 42'385 (0'0,42'385] local-lis/les=116/117 n=5 ec=46/35 lis/c=114/67 les/c/f=115/68/0 sis=116) [1] r=0 lpr=116 pi=[67,116)/1 crt=42'385 mlcod 0'0 active mbc={}] state<Started/Primary/Active>: react AllReplicasActivated Activating complete
Oct  7 09:35:19 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.1d scrub starts
Oct  7 09:35:19 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.1d scrub ok
Oct  7 09:35:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:35:20 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.8 scrub starts
Oct  7 09:35:20 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.8 scrub ok
Oct  7 09:35:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v227: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 25 B/s, 2 objects/s recovering
Oct  7 09:35:21 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.15 scrub starts
Oct  7 09:35:21 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.15 scrub ok
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:35:22
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Some PGs (0.003279) are inactive; try again later
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v228: 305 pgs: 1 peering, 304 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 18 B/s, 1 objects/s recovering
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:35:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:35:23 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.9 scrub starts
Oct  7 09:35:23 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.9 scrub ok
Oct  7 09:35:23 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.17 scrub starts
Oct  7 09:35:23 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.17 scrub ok
Oct  7 09:35:24 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.1f scrub starts
Oct  7 09:35:24 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 10.1f scrub ok
Oct  7 09:35:24 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.7 scrub starts
Oct  7 09:35:24 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.7 scrub ok
Oct  7 09:35:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v229: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 15 B/s, 1 objects/s recovering
Oct  7 09:35:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:35:25 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.1 scrub starts
Oct  7 09:35:25 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.1 scrub ok
Oct  7 09:35:26 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.1b scrub starts
Oct  7 09:35:26 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.1b scrub ok
Oct  7 09:35:26 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.d scrub starts
Oct  7 09:35:26 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.d scrub ok
Oct  7 09:35:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v230: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 13 B/s, 1 objects/s recovering
Oct  7 09:35:26 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.1a scrub starts
Oct  7 09:35:26 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.1a scrub ok
Oct  7 09:35:27 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.f scrub starts
Oct  7 09:35:27 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 4.f scrub ok
Oct  7 09:35:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v231: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  7 09:35:28 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.18 scrub starts
Oct  7 09:35:28 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.18 scrub ok
Oct  7 09:35:29 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.e scrub starts
Oct  7 09:35:29 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.e scrub ok
Oct  7 09:35:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:35:30 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.1d deep-scrub starts
Oct  7 09:35:30 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.1d deep-scrub ok
Oct  7 09:35:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v232: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail; 0 B/s, 0 objects/s recovering
Oct  7 09:35:31 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.12 deep-scrub starts
Oct  7 09:35:31 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.12 deep-scrub ok
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:35:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:35:32 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.17 deep-scrub starts
Oct  7 09:35:32 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.17 deep-scrub ok
Oct  7 09:35:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v233: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:32 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.a scrub starts
Oct  7 09:35:32 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.a scrub ok
Oct  7 09:35:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v234: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:34 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.1f scrub starts
Oct  7 09:35:34 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 2.1f scrub ok
Oct  7 09:35:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:35:35 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.16 scrub starts
Oct  7 09:35:35 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.16 scrub ok
Oct  7 09:35:36 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.11 scrub starts
Oct  7 09:35:36 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.11 scrub ok
Oct  7 09:35:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v235: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:38 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.12 scrub starts
Oct  7 09:35:38 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.12 scrub ok
Oct  7 09:35:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v236: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:39 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.d scrub starts
Oct  7 09:35:39 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 10.d scrub ok
Oct  7 09:35:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:35:40 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.10 scrub starts
Oct  7 09:35:40 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.10 scrub ok
Oct  7 09:35:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v237: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:40 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.1c scrub starts
Oct  7 09:35:40 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.1c scrub ok
Oct  7 09:35:41 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.15 scrub starts
Oct  7 09:35:41 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.15 scrub ok
Oct  7 09:35:42 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.11 scrub starts
Oct  7 09:35:42 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.11 scrub ok
Oct  7 09:35:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v238: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:35:42 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0729e2cd-918d-4897-b781-4f192d540eaf does not exist
Oct  7 09:35:42 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c1c5bea0-01e4-4068-b521-b740c539cbd9 does not exist
Oct  7 09:35:42 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a0b836a7-d363-4c6c-b705-882a47cb9b72 does not exist
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:35:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:35:43 np0005473739 podman[107678]: 2025-10-07 13:35:43.228096196 +0000 UTC m=+0.040970860 container create 5234af9f34ff70706d791bdfb79511880f320170b03ad8c7db8c197c8c1867f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:35:43 np0005473739 systemd[1]: Started libpod-conmon-5234af9f34ff70706d791bdfb79511880f320170b03ad8c7db8c197c8c1867f9.scope.
Oct  7 09:35:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:35:43 np0005473739 podman[107678]: 2025-10-07 13:35:43.209018862 +0000 UTC m=+0.021893516 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:35:43 np0005473739 podman[107678]: 2025-10-07 13:35:43.322567253 +0000 UTC m=+0.135441897 container init 5234af9f34ff70706d791bdfb79511880f320170b03ad8c7db8c197c8c1867f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 09:35:43 np0005473739 podman[107678]: 2025-10-07 13:35:43.328016958 +0000 UTC m=+0.140891582 container start 5234af9f34ff70706d791bdfb79511880f320170b03ad8c7db8c197c8c1867f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 09:35:43 np0005473739 podman[107678]: 2025-10-07 13:35:43.330786977 +0000 UTC m=+0.143661601 container attach 5234af9f34ff70706d791bdfb79511880f320170b03ad8c7db8c197c8c1867f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:35:43 np0005473739 unruffled_jackson[107695]: 167 167
Oct  7 09:35:43 np0005473739 systemd[1]: libpod-5234af9f34ff70706d791bdfb79511880f320170b03ad8c7db8c197c8c1867f9.scope: Deactivated successfully.
Oct  7 09:35:43 np0005473739 podman[107678]: 2025-10-07 13:35:43.333709031 +0000 UTC m=+0.146583655 container died 5234af9f34ff70706d791bdfb79511880f320170b03ad8c7db8c197c8c1867f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:35:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b363429c3acc172fb420542ccd836a4e5971f129d877874f33507dfb8bcd064d-merged.mount: Deactivated successfully.
Oct  7 09:35:43 np0005473739 podman[107678]: 2025-10-07 13:35:43.384286274 +0000 UTC m=+0.197160918 container remove 5234af9f34ff70706d791bdfb79511880f320170b03ad8c7db8c197c8c1867f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_jackson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:35:43 np0005473739 systemd[1]: libpod-conmon-5234af9f34ff70706d791bdfb79511880f320170b03ad8c7db8c197c8c1867f9.scope: Deactivated successfully.
Oct  7 09:35:43 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.1a scrub starts
Oct  7 09:35:43 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.1a scrub ok
Oct  7 09:35:43 np0005473739 podman[107718]: 2025-10-07 13:35:43.600280308 +0000 UTC m=+0.068760404 container create 32fb994ac024a3dcab4530095dfb6ff0d84d46148fe5ce3f929f1813f53d540d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hellman, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 09:35:43 np0005473739 systemd[1]: Started libpod-conmon-32fb994ac024a3dcab4530095dfb6ff0d84d46148fe5ce3f929f1813f53d540d.scope.
Oct  7 09:35:43 np0005473739 podman[107718]: 2025-10-07 13:35:43.569775867 +0000 UTC m=+0.038256013 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:35:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:35:43 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512e5e0e254e9eb9f154f02ad68338d0a3f07dd2389e1aa860f2a4500e5b74e7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:43 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512e5e0e254e9eb9f154f02ad68338d0a3f07dd2389e1aa860f2a4500e5b74e7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:43 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512e5e0e254e9eb9f154f02ad68338d0a3f07dd2389e1aa860f2a4500e5b74e7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:43 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512e5e0e254e9eb9f154f02ad68338d0a3f07dd2389e1aa860f2a4500e5b74e7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:43 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/512e5e0e254e9eb9f154f02ad68338d0a3f07dd2389e1aa860f2a4500e5b74e7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:43 np0005473739 podman[107718]: 2025-10-07 13:35:43.705196422 +0000 UTC m=+0.173676568 container init 32fb994ac024a3dcab4530095dfb6ff0d84d46148fe5ce3f929f1813f53d540d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hellman, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:35:43 np0005473739 podman[107718]: 2025-10-07 13:35:43.712766428 +0000 UTC m=+0.181246524 container start 32fb994ac024a3dcab4530095dfb6ff0d84d46148fe5ce3f929f1813f53d540d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hellman, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 09:35:43 np0005473739 podman[107718]: 2025-10-07 13:35:43.716386521 +0000 UTC m=+0.184866607 container attach 32fb994ac024a3dcab4530095dfb6ff0d84d46148fe5ce3f929f1813f53d540d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:35:44 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.13 scrub starts
Oct  7 09:35:44 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 4.13 scrub ok
Oct  7 09:35:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v239: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:44 np0005473739 boring_hellman[107735]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:35:44 np0005473739 boring_hellman[107735]: --> relative data size: 1.0
Oct  7 09:35:44 np0005473739 boring_hellman[107735]: --> All data devices are unavailable
Oct  7 09:35:44 np0005473739 systemd[1]: libpod-32fb994ac024a3dcab4530095dfb6ff0d84d46148fe5ce3f929f1813f53d540d.scope: Deactivated successfully.
Oct  7 09:35:44 np0005473739 podman[107718]: 2025-10-07 13:35:44.733000653 +0000 UTC m=+1.201480719 container died 32fb994ac024a3dcab4530095dfb6ff0d84d46148fe5ce3f929f1813f53d540d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hellman, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 09:35:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay-512e5e0e254e9eb9f154f02ad68338d0a3f07dd2389e1aa860f2a4500e5b74e7-merged.mount: Deactivated successfully.
Oct  7 09:35:44 np0005473739 podman[107718]: 2025-10-07 13:35:44.788913229 +0000 UTC m=+1.257393285 container remove 32fb994ac024a3dcab4530095dfb6ff0d84d46148fe5ce3f929f1813f53d540d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_hellman, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:35:44 np0005473739 systemd[1]: libpod-conmon-32fb994ac024a3dcab4530095dfb6ff0d84d46148fe5ce3f929f1813f53d540d.scope: Deactivated successfully.
Oct  7 09:35:44 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.1f scrub starts
Oct  7 09:35:44 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.1f scrub ok
Oct  7 09:35:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:35:45 np0005473739 podman[107919]: 2025-10-07 13:35:45.412159775 +0000 UTC m=+0.039780516 container create f52b6d2a135ca04ab1830e33c0b2b966c1dc8e95ce28ef7890416c348b95aadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curie, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 09:35:45 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.16 scrub starts
Oct  7 09:35:45 np0005473739 systemd[1]: Started libpod-conmon-f52b6d2a135ca04ab1830e33c0b2b966c1dc8e95ce28ef7890416c348b95aadc.scope.
Oct  7 09:35:45 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.16 scrub ok
Oct  7 09:35:45 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:35:45 np0005473739 podman[107919]: 2025-10-07 13:35:45.39272763 +0000 UTC m=+0.020348391 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:35:45 np0005473739 podman[107919]: 2025-10-07 13:35:45.494727371 +0000 UTC m=+0.122348182 container init f52b6d2a135ca04ab1830e33c0b2b966c1dc8e95ce28ef7890416c348b95aadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curie, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:35:45 np0005473739 podman[107919]: 2025-10-07 13:35:45.506421736 +0000 UTC m=+0.134042517 container start f52b6d2a135ca04ab1830e33c0b2b966c1dc8e95ce28ef7890416c348b95aadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curie, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:35:45 np0005473739 elated_curie[107935]: 167 167
Oct  7 09:35:45 np0005473739 podman[107919]: 2025-10-07 13:35:45.510644416 +0000 UTC m=+0.138265237 container attach f52b6d2a135ca04ab1830e33c0b2b966c1dc8e95ce28ef7890416c348b95aadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:35:45 np0005473739 systemd[1]: libpod-f52b6d2a135ca04ab1830e33c0b2b966c1dc8e95ce28ef7890416c348b95aadc.scope: Deactivated successfully.
Oct  7 09:35:45 np0005473739 podman[107919]: 2025-10-07 13:35:45.511783408 +0000 UTC m=+0.139404179 container died f52b6d2a135ca04ab1830e33c0b2b966c1dc8e95ce28ef7890416c348b95aadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curie, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:35:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7773be4e5b3c38d0fe4661e97b9a130be6cfea28f6c44c3788ac6ea16d341a14-merged.mount: Deactivated successfully.
Oct  7 09:35:45 np0005473739 podman[107919]: 2025-10-07 13:35:45.551464601 +0000 UTC m=+0.179085352 container remove f52b6d2a135ca04ab1830e33c0b2b966c1dc8e95ce28ef7890416c348b95aadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_curie, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:35:45 np0005473739 systemd[1]: libpod-conmon-f52b6d2a135ca04ab1830e33c0b2b966c1dc8e95ce28ef7890416c348b95aadc.scope: Deactivated successfully.
Oct  7 09:35:45 np0005473739 podman[107958]: 2025-10-07 13:35:45.728189784 +0000 UTC m=+0.053829587 container create b51854ef5dd784f570587ba82f0a6f5f3aa51bc7da1ba43cc03cb32745e0920b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaplygin, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:35:45 np0005473739 systemd[1]: Started libpod-conmon-b51854ef5dd784f570587ba82f0a6f5f3aa51bc7da1ba43cc03cb32745e0920b.scope.
Oct  7 09:35:45 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:35:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94369d9690feb23578015ea62388b9368067ec1643de52e91dc36f777dd2b9dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94369d9690feb23578015ea62388b9368067ec1643de52e91dc36f777dd2b9dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94369d9690feb23578015ea62388b9368067ec1643de52e91dc36f777dd2b9dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94369d9690feb23578015ea62388b9368067ec1643de52e91dc36f777dd2b9dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:45 np0005473739 podman[107958]: 2025-10-07 13:35:45.710347145 +0000 UTC m=+0.035986968 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:35:45 np0005473739 podman[107958]: 2025-10-07 13:35:45.841185509 +0000 UTC m=+0.166825322 container init b51854ef5dd784f570587ba82f0a6f5f3aa51bc7da1ba43cc03cb32745e0920b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 09:35:45 np0005473739 podman[107958]: 2025-10-07 13:35:45.847577191 +0000 UTC m=+0.173217014 container start b51854ef5dd784f570587ba82f0a6f5f3aa51bc7da1ba43cc03cb32745e0920b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 09:35:45 np0005473739 podman[107958]: 2025-10-07 13:35:45.851100582 +0000 UTC m=+0.176740405 container attach b51854ef5dd784f570587ba82f0a6f5f3aa51bc7da1ba43cc03cb32745e0920b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaplygin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:35:46 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.6 deep-scrub starts
Oct  7 09:35:46 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.6 deep-scrub ok
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]: {
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:    "0": [
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:        {
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "devices": [
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "/dev/loop3"
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            ],
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_name": "ceph_lv0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_size": "21470642176",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "name": "ceph_lv0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "tags": {
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.cluster_name": "ceph",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.crush_device_class": "",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.encrypted": "0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.osd_id": "0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.type": "block",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.vdo": "0"
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            },
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "type": "block",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "vg_name": "ceph_vg0"
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:        }
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:    ],
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:    "1": [
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:        {
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "devices": [
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "/dev/loop4"
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            ],
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_name": "ceph_lv1",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_size": "21470642176",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "name": "ceph_lv1",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "tags": {
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.cluster_name": "ceph",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.crush_device_class": "",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.encrypted": "0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.osd_id": "1",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.type": "block",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.vdo": "0"
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            },
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "type": "block",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "vg_name": "ceph_vg1"
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:        }
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:    ],
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:    "2": [
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:        {
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "devices": [
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "/dev/loop5"
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            ],
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_name": "ceph_lv2",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_size": "21470642176",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "name": "ceph_lv2",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "tags": {
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.cluster_name": "ceph",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.crush_device_class": "",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.encrypted": "0",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.osd_id": "2",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.type": "block",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:                "ceph.vdo": "0"
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            },
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "type": "block",
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:            "vg_name": "ceph_vg2"
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:        }
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]:    ]
Oct  7 09:35:46 np0005473739 sad_chaplygin[107975]: }
Oct  7 09:35:46 np0005473739 systemd[1]: libpod-b51854ef5dd784f570587ba82f0a6f5f3aa51bc7da1ba43cc03cb32745e0920b.scope: Deactivated successfully.
Oct  7 09:35:46 np0005473739 podman[107958]: 2025-10-07 13:35:46.564829921 +0000 UTC m=+0.890469724 container died b51854ef5dd784f570587ba82f0a6f5f3aa51bc7da1ba43cc03cb32745e0920b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaplygin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True)
Oct  7 09:35:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v240: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:46 np0005473739 systemd[1]: var-lib-containers-storage-overlay-94369d9690feb23578015ea62388b9368067ec1643de52e91dc36f777dd2b9dd-merged.mount: Deactivated successfully.
Oct  7 09:35:46 np0005473739 podman[107958]: 2025-10-07 13:35:46.610769771 +0000 UTC m=+0.936409554 container remove b51854ef5dd784f570587ba82f0a6f5f3aa51bc7da1ba43cc03cb32745e0920b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_chaplygin, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:35:46 np0005473739 systemd[1]: libpod-conmon-b51854ef5dd784f570587ba82f0a6f5f3aa51bc7da1ba43cc03cb32745e0920b.scope: Deactivated successfully.
Oct  7 09:35:46 np0005473739 python3.9[108132]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:35:46 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.1e scrub starts
Oct  7 09:35:46 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.1e scrub ok
Oct  7 09:35:47 np0005473739 podman[108421]: 2025-10-07 13:35:47.192344099 +0000 UTC m=+0.041161877 container create 8b2a2ca41d4e2ce60fce0414266ce49fc7b83ed36f3b05ba063f941cbc87fb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:35:47 np0005473739 systemd[1]: Started libpod-conmon-8b2a2ca41d4e2ce60fce0414266ce49fc7b83ed36f3b05ba063f941cbc87fb2a.scope.
Oct  7 09:35:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:35:47 np0005473739 podman[108421]: 2025-10-07 13:35:47.17173588 +0000 UTC m=+0.020553678 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:35:47 np0005473739 podman[108421]: 2025-10-07 13:35:47.279027122 +0000 UTC m=+0.127844910 container init 8b2a2ca41d4e2ce60fce0414266ce49fc7b83ed36f3b05ba063f941cbc87fb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:35:47 np0005473739 podman[108421]: 2025-10-07 13:35:47.285381323 +0000 UTC m=+0.134199091 container start 8b2a2ca41d4e2ce60fce0414266ce49fc7b83ed36f3b05ba063f941cbc87fb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 09:35:47 np0005473739 elated_varahamihira[108437]: 167 167
Oct  7 09:35:47 np0005473739 systemd[1]: libpod-8b2a2ca41d4e2ce60fce0414266ce49fc7b83ed36f3b05ba063f941cbc87fb2a.scope: Deactivated successfully.
Oct  7 09:35:47 np0005473739 podman[108421]: 2025-10-07 13:35:47.294137503 +0000 UTC m=+0.142955281 container attach 8b2a2ca41d4e2ce60fce0414266ce49fc7b83ed36f3b05ba063f941cbc87fb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 09:35:47 np0005473739 podman[108421]: 2025-10-07 13:35:47.294463823 +0000 UTC m=+0.143281601 container died 8b2a2ca41d4e2ce60fce0414266ce49fc7b83ed36f3b05ba063f941cbc87fb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 09:35:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8d4df11c5bfadde09e13421e9aff79a6dac57ba66b0c015fefc28553624cdde6-merged.mount: Deactivated successfully.
Oct  7 09:35:47 np0005473739 podman[108421]: 2025-10-07 13:35:47.354917308 +0000 UTC m=+0.203735086 container remove 8b2a2ca41d4e2ce60fce0414266ce49fc7b83ed36f3b05ba063f941cbc87fb2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_varahamihira, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 09:35:47 np0005473739 systemd[1]: libpod-conmon-8b2a2ca41d4e2ce60fce0414266ce49fc7b83ed36f3b05ba063f941cbc87fb2a.scope: Deactivated successfully.
Oct  7 09:35:47 np0005473739 podman[108497]: 2025-10-07 13:35:47.495061277 +0000 UTC m=+0.036800561 container create 2e59c708e669cceff397c2ab9ac7d17bc761c23d19fe98a9da4f4ee9e4e9169b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 09:35:47 np0005473739 systemd[1]: Started libpod-conmon-2e59c708e669cceff397c2ab9ac7d17bc761c23d19fe98a9da4f4ee9e4e9169b.scope.
Oct  7 09:35:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:35:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df835670746cb21ed2de5aab1db27987926cf63b982a6e25ef561208aaf1f17/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df835670746cb21ed2de5aab1db27987926cf63b982a6e25ef561208aaf1f17/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df835670746cb21ed2de5aab1db27987926cf63b982a6e25ef561208aaf1f17/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0df835670746cb21ed2de5aab1db27987926cf63b982a6e25ef561208aaf1f17/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:35:47 np0005473739 podman[108497]: 2025-10-07 13:35:47.571465327 +0000 UTC m=+0.113204611 container init 2e59c708e669cceff397c2ab9ac7d17bc761c23d19fe98a9da4f4ee9e4e9169b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:35:47 np0005473739 podman[108497]: 2025-10-07 13:35:47.477724063 +0000 UTC m=+0.019463377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:35:47 np0005473739 podman[108497]: 2025-10-07 13:35:47.581450143 +0000 UTC m=+0.123189427 container start 2e59c708e669cceff397c2ab9ac7d17bc761c23d19fe98a9da4f4ee9e4e9169b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 09:35:47 np0005473739 podman[108497]: 2025-10-07 13:35:47.584207422 +0000 UTC m=+0.125946716 container attach 2e59c708e669cceff397c2ab9ac7d17bc761c23d19fe98a9da4f4ee9e4e9169b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:35:47 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.1b scrub starts
Oct  7 09:35:47 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.1b scrub ok
Oct  7 09:35:47 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.1a deep-scrub starts
Oct  7 09:35:47 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.1a deep-scrub ok
Oct  7 09:35:48 np0005473739 python3.9[108632]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Oct  7 09:35:48 np0005473739 charming_jang[108552]: {
Oct  7 09:35:48 np0005473739 charming_jang[108552]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "osd_id": 2,
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "type": "bluestore"
Oct  7 09:35:48 np0005473739 charming_jang[108552]:    },
Oct  7 09:35:48 np0005473739 charming_jang[108552]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "osd_id": 1,
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "type": "bluestore"
Oct  7 09:35:48 np0005473739 charming_jang[108552]:    },
Oct  7 09:35:48 np0005473739 charming_jang[108552]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "osd_id": 0,
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:35:48 np0005473739 charming_jang[108552]:        "type": "bluestore"
Oct  7 09:35:48 np0005473739 charming_jang[108552]:    }
Oct  7 09:35:48 np0005473739 charming_jang[108552]: }
Oct  7 09:35:48 np0005473739 systemd[1]: libpod-2e59c708e669cceff397c2ab9ac7d17bc761c23d19fe98a9da4f4ee9e4e9169b.scope: Deactivated successfully.
Oct  7 09:35:48 np0005473739 podman[108497]: 2025-10-07 13:35:48.536666583 +0000 UTC m=+1.078405867 container died 2e59c708e669cceff397c2ab9ac7d17bc761c23d19fe98a9da4f4ee9e4e9169b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:35:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0df835670746cb21ed2de5aab1db27987926cf63b982a6e25ef561208aaf1f17-merged.mount: Deactivated successfully.
Oct  7 09:35:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v241: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:48 np0005473739 podman[108497]: 2025-10-07 13:35:48.606603089 +0000 UTC m=+1.148342373 container remove 2e59c708e669cceff397c2ab9ac7d17bc761c23d19fe98a9da4f4ee9e4e9169b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 09:35:48 np0005473739 systemd[1]: libpod-conmon-2e59c708e669cceff397c2ab9ac7d17bc761c23d19fe98a9da4f4ee9e4e9169b.scope: Deactivated successfully.
Oct  7 09:35:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:35:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:35:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:35:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:35:48 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 91c4b9f5-fc96-439f-8628-2fd8acd67fd8 does not exist
Oct  7 09:35:48 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 6a91a8d1-00b5-40b9-a613-7e0c0b2d3669 does not exist
Oct  7 09:35:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:35:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:35:48 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.15 scrub starts
Oct  7 09:35:48 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.15 scrub ok
Oct  7 09:35:49 np0005473739 python3.9[108876]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Oct  7 09:35:49 np0005473739 python3.9[109028]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:35:49 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.11 scrub starts
Oct  7 09:35:49 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.11 scrub ok
Oct  7 09:35:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:35:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v242: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:50 np0005473739 python3.9[109180]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Oct  7 09:35:51 np0005473739 python3.9[109332]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:35:51 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.12 scrub starts
Oct  7 09:35:51 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.12 scrub ok
Oct  7 09:35:52 np0005473739 python3.9[109484]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:35:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v243: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:35:52 np0005473739 python3.9[109562]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem _original_basename=tls-ca-bundle.pem recurse=False state=file path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:35:52 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.14 scrub starts
Oct  7 09:35:52 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.14 scrub ok
Oct  7 09:35:53 np0005473739 python3.9[109714]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Oct  7 09:35:54 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.19 scrub starts
Oct  7 09:35:54 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.19 scrub ok
Oct  7 09:35:54 np0005473739 python3.9[109867]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Oct  7 09:35:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v244: 305 pgs: 305 active+clean; 455 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:54 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.18 scrub starts
Oct  7 09:35:54 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.18 scrub ok
Oct  7 09:35:54 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.1d deep-scrub starts
Oct  7 09:35:55 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.1d deep-scrub ok
Oct  7 09:35:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:35:55 np0005473739 python3.9[110020]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  7 09:35:55 np0005473739 python3.9[110172]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Oct  7 09:35:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v245: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:56 np0005473739 python3.9[110324]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:35:56 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.1b scrub starts
Oct  7 09:35:56 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.1b scrub ok
Oct  7 09:35:57 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.3 scrub starts
Oct  7 09:35:57 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.3 scrub ok
Oct  7 09:35:57 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.1c scrub starts
Oct  7 09:35:57 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.1c scrub ok
Oct  7 09:35:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v246: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:35:58 np0005473739 python3.9[110479]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:35:59 np0005473739 python3.9[110631]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:35:59 np0005473739 python3.9[110709]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/modules-load.d/99-edpm.conf _original_basename=edpm-modprobe.conf.j2 recurse=False state=file path=/etc/modules-load.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:36:00 np0005473739 python3.9[110861]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:36:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v247: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:00 np0005473739 python3.9[110939]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/sysctl.d/99-edpm.conf _original_basename=edpm-sysctl.conf.j2 recurse=False state=file path=/etc/sysctl.d/99-edpm.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:36:00 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.10 scrub starts
Oct  7 09:36:00 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.10 scrub ok
Oct  7 09:36:01 np0005473739 python3.9[111091]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:36:01 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.18 scrub starts
Oct  7 09:36:01 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.18 scrub ok
Oct  7 09:36:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v248: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:03 np0005473739 python3.9[111242]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:36:03 np0005473739 python3.9[111394]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Oct  7 09:36:03 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.c scrub starts
Oct  7 09:36:03 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.c scrub ok
Oct  7 09:36:04 np0005473739 python3.9[111544]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:36:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v249: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:05 np0005473739 python3.9[111696]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:36:05 np0005473739 systemd[1]: Stopping Dynamic System Tuning Daemon...
Oct  7 09:36:05 np0005473739 systemd[1]: tuned.service: Deactivated successfully.
Oct  7 09:36:05 np0005473739 systemd[1]: Stopped Dynamic System Tuning Daemon.
Oct  7 09:36:05 np0005473739 systemd[1]: Starting Dynamic System Tuning Daemon...
Oct  7 09:36:05 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.d scrub starts
Oct  7 09:36:05 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.d scrub ok
Oct  7 09:36:06 np0005473739 systemd[1]: Started Dynamic System Tuning Daemon.
Oct  7 09:36:06 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.11 scrub starts
Oct  7 09:36:06 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.11 scrub ok
Oct  7 09:36:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v250: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:06 np0005473739 python3.9[111858]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Oct  7 09:36:06 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.5 scrub starts
Oct  7 09:36:06 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.5 scrub ok
Oct  7 09:36:06 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.e deep-scrub starts
Oct  7 09:36:06 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.e deep-scrub ok
Oct  7 09:36:07 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.9 scrub starts
Oct  7 09:36:07 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.9 scrub ok
Oct  7 09:36:07 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.c scrub starts
Oct  7 09:36:07 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.c scrub ok
Oct  7 09:36:07 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.1f deep-scrub starts
Oct  7 09:36:07 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.1f deep-scrub ok
Oct  7 09:36:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v251: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:08 np0005473739 python3.9[112010]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:36:09 np0005473739 python3.9[112164]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:36:09 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.c scrub starts
Oct  7 09:36:09 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.c scrub ok
Oct  7 09:36:09 np0005473739 systemd-logind[801]: Session 36 logged out. Waiting for processes to exit.
Oct  7 09:36:09 np0005473739 systemd[1]: session-36.scope: Deactivated successfully.
Oct  7 09:36:09 np0005473739 systemd[1]: session-36.scope: Consumed 59.778s CPU time.
Oct  7 09:36:09 np0005473739 systemd-logind[801]: Removed session 36.
Oct  7 09:36:09 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.8 deep-scrub starts
Oct  7 09:36:09 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.8 deep-scrub ok
Oct  7 09:36:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:10 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.d scrub starts
Oct  7 09:36:10 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.d scrub ok
Oct  7 09:36:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v252: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:11 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.f scrub starts
Oct  7 09:36:11 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.f scrub ok
Oct  7 09:36:11 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.a scrub starts
Oct  7 09:36:12 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.a scrub ok
Oct  7 09:36:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v253: 305 pgs: 305 active+clean; 456 KiB data, 144 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:13 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.a scrub starts
Oct  7 09:36:13 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.a scrub ok
Oct  7 09:36:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v254: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:15 np0005473739 systemd-logind[801]: New session 37 of user zuul.
Oct  7 09:36:15 np0005473739 systemd[1]: Started Session 37 of User zuul.
Oct  7 09:36:16 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.7 scrub starts
Oct  7 09:36:16 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.7 scrub ok
Oct  7 09:36:16 np0005473739 python3.9[112344]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:36:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v255: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:17 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.4 scrub starts
Oct  7 09:36:17 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.4 scrub ok
Oct  7 09:36:17 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.13 scrub starts
Oct  7 09:36:17 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.13 scrub ok
Oct  7 09:36:17 np0005473739 python3.9[112500]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Oct  7 09:36:17 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.b scrub starts
Oct  7 09:36:17 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.b scrub ok
Oct  7 09:36:18 np0005473739 python3.9[112653]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:36:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v256: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:18 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.6 scrub starts
Oct  7 09:36:18 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.6 scrub ok
Oct  7 09:36:19 np0005473739 python3.9[112737]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  7 09:36:19 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.2 scrub starts
Oct  7 09:36:19 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.2 scrub ok
Oct  7 09:36:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:20 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.f scrub starts
Oct  7 09:36:20 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.f scrub ok
Oct  7 09:36:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v257: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:21 np0005473739 python3.9[112890]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:36:22 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.f deep-scrub starts
Oct  7 09:36:22 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.f deep-scrub ok
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:36:22
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'images', 'default.rgw.control', 'volumes']
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v258: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:36:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:36:22 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.f scrub starts
Oct  7 09:36:22 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.f scrub ok
Oct  7 09:36:23 np0005473739 python3.9[113043]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  7 09:36:24 np0005473739 python3.9[113196]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:36:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v259: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:24 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.e scrub starts
Oct  7 09:36:24 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.e scrub ok
Oct  7 09:36:25 np0005473739 python3.9[113348]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Oct  7 09:36:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:26 np0005473739 python3.9[113498]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:36:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v260: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:26 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.1 scrub starts
Oct  7 09:36:26 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.1 scrub ok
Oct  7 09:36:27 np0005473739 python3.9[113656]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:36:27 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.4 scrub starts
Oct  7 09:36:27 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.4 scrub ok
Oct  7 09:36:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v261: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:28 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.2 scrub starts
Oct  7 09:36:28 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.2 scrub ok
Oct  7 09:36:29 np0005473739 python3.9[113809]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:36:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:30 np0005473739 python3.9[114096]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  7 09:36:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v262: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:31 np0005473739 python3.9[114246]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:36:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:36:31 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.15 scrub starts
Oct  7 09:36:31 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.15 scrub ok
Oct  7 09:36:31 np0005473739 python3.9[114400]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:36:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v263: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:32 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.9 scrub starts
Oct  7 09:36:32 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.9 scrub ok
Oct  7 09:36:33 np0005473739 python3.9[114553]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:36:33 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.6 deep-scrub starts
Oct  7 09:36:33 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.6 deep-scrub ok
Oct  7 09:36:33 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.11 scrub starts
Oct  7 09:36:33 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.11 scrub ok
Oct  7 09:36:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v264: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:34 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.9 scrub starts
Oct  7 09:36:34 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.9 scrub ok
Oct  7 09:36:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:35 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.b scrub starts
Oct  7 09:36:35 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.b scrub ok
Oct  7 09:36:35 np0005473739 python3.9[114706]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:36:36 np0005473739 python3.9[114860]: ansible-ansible.builtin.slurp Invoked with path=/var/lib/edpm-config/os-net-config.returncode src=/var/lib/edpm-config/os-net-config.returncode
Oct  7 09:36:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v265: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:36 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.16 scrub starts
Oct  7 09:36:36 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.16 scrub ok
Oct  7 09:36:37 np0005473739 systemd-logind[801]: Session 37 logged out. Waiting for processes to exit.
Oct  7 09:36:37 np0005473739 systemd[1]: session-37.scope: Deactivated successfully.
Oct  7 09:36:37 np0005473739 systemd[1]: session-37.scope: Consumed 16.689s CPU time.
Oct  7 09:36:37 np0005473739 systemd-logind[801]: Removed session 37.
Oct  7 09:36:37 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.5 scrub starts
Oct  7 09:36:37 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.5 scrub ok
Oct  7 09:36:37 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.e deep-scrub starts
Oct  7 09:36:37 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 7.e deep-scrub ok
Oct  7 09:36:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v266: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:39 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.1b deep-scrub starts
Oct  7 09:36:39 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.1b deep-scrub ok
Oct  7 09:36:39 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.1 scrub starts
Oct  7 09:36:39 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.1 scrub ok
Oct  7 09:36:39 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.c scrub starts
Oct  7 09:36:39 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.c scrub ok
Oct  7 09:36:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:40 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.6 scrub starts
Oct  7 09:36:40 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.6 scrub ok
Oct  7 09:36:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v267: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:41 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.9 scrub starts
Oct  7 09:36:41 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.9 scrub ok
Oct  7 09:36:41 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.11 scrub starts
Oct  7 09:36:41 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 3.11 scrub ok
Oct  7 09:36:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v268: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:42 np0005473739 systemd-logind[801]: New session 38 of user zuul.
Oct  7 09:36:42 np0005473739 systemd[1]: Started Session 38 of User zuul.
Oct  7 09:36:43 np0005473739 python3.9[115038]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:36:43 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.12 scrub starts
Oct  7 09:36:43 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.12 scrub ok
Oct  7 09:36:44 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.13 scrub starts
Oct  7 09:36:44 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.13 scrub ok
Oct  7 09:36:44 np0005473739 python3.9[115192]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:36:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v269: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:44 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.1c scrub starts
Oct  7 09:36:44 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 8.1c scrub ok
Oct  7 09:36:44 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.18 scrub starts
Oct  7 09:36:45 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.18 scrub ok
Oct  7 09:36:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:45 np0005473739 python3.9[115385]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:36:45 np0005473739 systemd[1]: session-38.scope: Deactivated successfully.
Oct  7 09:36:45 np0005473739 systemd[1]: session-38.scope: Consumed 2.066s CPU time.
Oct  7 09:36:45 np0005473739 systemd-logind[801]: Session 38 logged out. Waiting for processes to exit.
Oct  7 09:36:45 np0005473739 systemd-logind[801]: Removed session 38.
Oct  7 09:36:46 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.1f scrub starts
Oct  7 09:36:46 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.1f scrub ok
Oct  7 09:36:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v270: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:48 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.14 scrub starts
Oct  7 09:36:48 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.14 scrub ok
Oct  7 09:36:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v271: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:49 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.15 scrub starts
Oct  7 09:36:49 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.15 scrub ok
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:36:49 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 80763968-0c3b-4ccd-b344-5fb78de4a4e4 does not exist
Oct  7 09:36:49 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9923af3d-34c3-4646-8f9b-2a3d6adafad7 does not exist
Oct  7 09:36:49 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c6c958fc-ae91-453e-ba9e-f0996f0bde54 does not exist
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:36:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:36:49 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.e scrub starts
Oct  7 09:36:49 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.e scrub ok
Oct  7 09:36:50 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.1d deep-scrub starts
Oct  7 09:36:50 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.1d deep-scrub ok
Oct  7 09:36:50 np0005473739 podman[115684]: 2025-10-07 13:36:50.02348576 +0000 UTC m=+0.019110824 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:36:50 np0005473739 podman[115684]: 2025-10-07 13:36:50.192320709 +0000 UTC m=+0.187945753 container create aa133722774d78dfe6d14155955b9cf1ef749e6d2694bf69a969ab9be4044c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:36:50 np0005473739 systemd[75908]: Created slice User Background Tasks Slice.
Oct  7 09:36:50 np0005473739 systemd[75908]: Starting Cleanup of User's Temporary Files and Directories...
Oct  7 09:36:50 np0005473739 systemd[75908]: Finished Cleanup of User's Temporary Files and Directories.
Oct  7 09:36:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:36:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:36:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:36:50 np0005473739 systemd[1]: Started libpod-conmon-aa133722774d78dfe6d14155955b9cf1ef749e6d2694bf69a969ab9be4044c76.scope.
Oct  7 09:36:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:36:50 np0005473739 podman[115684]: 2025-10-07 13:36:50.419548145 +0000 UTC m=+0.415173199 container init aa133722774d78dfe6d14155955b9cf1ef749e6d2694bf69a969ab9be4044c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:36:50 np0005473739 podman[115684]: 2025-10-07 13:36:50.433890251 +0000 UTC m=+0.429515295 container start aa133722774d78dfe6d14155955b9cf1ef749e6d2694bf69a969ab9be4044c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:36:50 np0005473739 busy_neumann[115702]: 167 167
Oct  7 09:36:50 np0005473739 systemd[1]: libpod-aa133722774d78dfe6d14155955b9cf1ef749e6d2694bf69a969ab9be4044c76.scope: Deactivated successfully.
Oct  7 09:36:50 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.a scrub starts
Oct  7 09:36:50 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.a scrub ok
Oct  7 09:36:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v272: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:50 np0005473739 podman[115684]: 2025-10-07 13:36:50.649692971 +0000 UTC m=+0.645318015 container attach aa133722774d78dfe6d14155955b9cf1ef749e6d2694bf69a969ab9be4044c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 09:36:50 np0005473739 podman[115684]: 2025-10-07 13:36:50.650360998 +0000 UTC m=+0.645986042 container died aa133722774d78dfe6d14155955b9cf1ef749e6d2694bf69a969ab9be4044c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 09:36:51 np0005473739 systemd-logind[801]: New session 39 of user zuul.
Oct  7 09:36:51 np0005473739 systemd[1]: Started Session 39 of User zuul.
Oct  7 09:36:51 np0005473739 systemd[1]: var-lib-containers-storage-overlay-811f00ab5904a1f2259eeee402665b3d60863f96548d2f462de577886e397d39-merged.mount: Deactivated successfully.
Oct  7 09:36:51 np0005473739 podman[115684]: 2025-10-07 13:36:51.148594929 +0000 UTC m=+1.144220013 container remove aa133722774d78dfe6d14155955b9cf1ef749e6d2694bf69a969ab9be4044c76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_neumann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 09:36:51 np0005473739 systemd[1]: libpod-conmon-aa133722774d78dfe6d14155955b9cf1ef749e6d2694bf69a969ab9be4044c76.scope: Deactivated successfully.
Oct  7 09:36:51 np0005473739 podman[115783]: 2025-10-07 13:36:51.30568547 +0000 UTC m=+0.030832459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:36:51 np0005473739 podman[115783]: 2025-10-07 13:36:51.477913229 +0000 UTC m=+0.203060168 container create b826218ddad52f7ad0104f03b3971342a0b755a6d97b41144af061130fb60ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True)
Oct  7 09:36:51 np0005473739 systemd[1]: Started libpod-conmon-b826218ddad52f7ad0104f03b3971342a0b755a6d97b41144af061130fb60ee3.scope.
Oct  7 09:36:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:36:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33cf5c95f0a166c53262a137b17a410b77a17bfeb774f5468d2ff8bc5d57b760/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33cf5c95f0a166c53262a137b17a410b77a17bfeb774f5468d2ff8bc5d57b760/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33cf5c95f0a166c53262a137b17a410b77a17bfeb774f5468d2ff8bc5d57b760/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33cf5c95f0a166c53262a137b17a410b77a17bfeb774f5468d2ff8bc5d57b760/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33cf5c95f0a166c53262a137b17a410b77a17bfeb774f5468d2ff8bc5d57b760/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:51 np0005473739 podman[115783]: 2025-10-07 13:36:51.729580613 +0000 UTC m=+0.454727562 container init b826218ddad52f7ad0104f03b3971342a0b755a6d97b41144af061130fb60ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3)
Oct  7 09:36:51 np0005473739 podman[115783]: 2025-10-07 13:36:51.74062342 +0000 UTC m=+0.465770359 container start b826218ddad52f7ad0104f03b3971342a0b755a6d97b41144af061130fb60ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  7 09:36:51 np0005473739 podman[115783]: 2025-10-07 13:36:51.759438415 +0000 UTC m=+0.484585394 container attach b826218ddad52f7ad0104f03b3971342a0b755a6d97b41144af061130fb60ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:36:52 np0005473739 python3.9[115901]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:36:52 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.2 scrub starts
Oct  7 09:36:52 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 10.2 scrub ok
Oct  7 09:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:36:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v273: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:36:52 np0005473739 nostalgic_lichterman[115802]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:36:52 np0005473739 nostalgic_lichterman[115802]: --> relative data size: 1.0
Oct  7 09:36:52 np0005473739 nostalgic_lichterman[115802]: --> All data devices are unavailable
Oct  7 09:36:52 np0005473739 systemd[1]: libpod-b826218ddad52f7ad0104f03b3971342a0b755a6d97b41144af061130fb60ee3.scope: Deactivated successfully.
Oct  7 09:36:52 np0005473739 podman[115783]: 2025-10-07 13:36:52.811378197 +0000 UTC m=+1.536525156 container died b826218ddad52f7ad0104f03b3971342a0b755a6d97b41144af061130fb60ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 09:36:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-33cf5c95f0a166c53262a137b17a410b77a17bfeb774f5468d2ff8bc5d57b760-merged.mount: Deactivated successfully.
Oct  7 09:36:53 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.f scrub starts
Oct  7 09:36:53 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.f scrub ok
Oct  7 09:36:53 np0005473739 podman[115783]: 2025-10-07 13:36:53.260712793 +0000 UTC m=+1.985859742 container remove b826218ddad52f7ad0104f03b3971342a0b755a6d97b41144af061130fb60ee3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 09:36:53 np0005473739 systemd[1]: libpod-conmon-b826218ddad52f7ad0104f03b3971342a0b755a6d97b41144af061130fb60ee3.scope: Deactivated successfully.
Oct  7 09:36:53 np0005473739 python3.9[116093]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:36:53 np0005473739 podman[116314]: 2025-10-07 13:36:53.796684167 +0000 UTC m=+0.037238421 container create 2e6cbf15ae659bdc5c6d21104f6b28ac889dbdcc1a2c5d13a555cb1e5801fe42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 09:36:53 np0005473739 systemd[1]: Started libpod-conmon-2e6cbf15ae659bdc5c6d21104f6b28ac889dbdcc1a2c5d13a555cb1e5801fe42.scope.
Oct  7 09:36:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:36:53 np0005473739 podman[116314]: 2025-10-07 13:36:53.866002721 +0000 UTC m=+0.106557005 container init 2e6cbf15ae659bdc5c6d21104f6b28ac889dbdcc1a2c5d13a555cb1e5801fe42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:36:53 np0005473739 podman[116314]: 2025-10-07 13:36:53.872056693 +0000 UTC m=+0.112610947 container start 2e6cbf15ae659bdc5c6d21104f6b28ac889dbdcc1a2c5d13a555cb1e5801fe42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:36:53 np0005473739 podman[116314]: 2025-10-07 13:36:53.779329891 +0000 UTC m=+0.019884175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:36:53 np0005473739 podman[116314]: 2025-10-07 13:36:53.875437475 +0000 UTC m=+0.115991729 container attach 2e6cbf15ae659bdc5c6d21104f6b28ac889dbdcc1a2c5d13a555cb1e5801fe42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:36:53 np0005473739 systemd[1]: libpod-2e6cbf15ae659bdc5c6d21104f6b28ac889dbdcc1a2c5d13a555cb1e5801fe42.scope: Deactivated successfully.
Oct  7 09:36:53 np0005473739 optimistic_hoover[116354]: 167 167
Oct  7 09:36:53 np0005473739 podman[116314]: 2025-10-07 13:36:53.877710945 +0000 UTC m=+0.118265199 container died 2e6cbf15ae659bdc5c6d21104f6b28ac889dbdcc1a2c5d13a555cb1e5801fe42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 09:36:53 np0005473739 conmon[116354]: conmon 2e6cbf15ae659bdc5c6d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2e6cbf15ae659bdc5c6d21104f6b28ac889dbdcc1a2c5d13a555cb1e5801fe42.scope/container/memory.events
Oct  7 09:36:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-df24b5dde13a54f99fde45ca0f7db4e0a679db836bc8953261fc6f70f7418601-merged.mount: Deactivated successfully.
Oct  7 09:36:53 np0005473739 podman[116314]: 2025-10-07 13:36:53.920386813 +0000 UTC m=+0.160941067 container remove 2e6cbf15ae659bdc5c6d21104f6b28ac889dbdcc1a2c5d13a555cb1e5801fe42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 09:36:53 np0005473739 systemd[1]: libpod-conmon-2e6cbf15ae659bdc5c6d21104f6b28ac889dbdcc1a2c5d13a555cb1e5801fe42.scope: Deactivated successfully.
Oct  7 09:36:54 np0005473739 podman[116430]: 2025-10-07 13:36:54.069201902 +0000 UTC m=+0.037071928 container create 8d0f191ba8beec794ca7807d28643d771d57b2923027263fe496a56178ef28cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pike, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 09:36:54 np0005473739 systemd[1]: Started libpod-conmon-8d0f191ba8beec794ca7807d28643d771d57b2923027263fe496a56178ef28cd.scope.
Oct  7 09:36:54 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:36:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5dc6d602a8872605afea07b8d256c8edc01102374ec7cc2054eb1546816db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5dc6d602a8872605afea07b8d256c8edc01102374ec7cc2054eb1546816db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5dc6d602a8872605afea07b8d256c8edc01102374ec7cc2054eb1546816db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50e5dc6d602a8872605afea07b8d256c8edc01102374ec7cc2054eb1546816db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:54 np0005473739 podman[116430]: 2025-10-07 13:36:54.140378465 +0000 UTC m=+0.108248511 container init 8d0f191ba8beec794ca7807d28643d771d57b2923027263fe496a56178ef28cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pike, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:36:54 np0005473739 podman[116430]: 2025-10-07 13:36:54.051764833 +0000 UTC m=+0.019634899 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:36:54 np0005473739 podman[116430]: 2025-10-07 13:36:54.150837585 +0000 UTC m=+0.118707611 container start 8d0f191ba8beec794ca7807d28643d771d57b2923027263fe496a56178ef28cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:36:54 np0005473739 podman[116430]: 2025-10-07 13:36:54.154404242 +0000 UTC m=+0.122274288 container attach 8d0f191ba8beec794ca7807d28643d771d57b2923027263fe496a56178ef28cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pike, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:36:54 np0005473739 python3.9[116424]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:36:54 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.18 scrub starts
Oct  7 09:36:54 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.18 scrub ok
Oct  7 09:36:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v274: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]: {
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:    "0": [
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:        {
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "devices": [
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "/dev/loop3"
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            ],
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_name": "ceph_lv0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_size": "21470642176",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "name": "ceph_lv0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "tags": {
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.cluster_name": "ceph",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.crush_device_class": "",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.encrypted": "0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.osd_id": "0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.type": "block",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.vdo": "0"
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            },
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "type": "block",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "vg_name": "ceph_vg0"
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:        }
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:    ],
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:    "1": [
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:        {
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "devices": [
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "/dev/loop4"
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            ],
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_name": "ceph_lv1",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_size": "21470642176",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "name": "ceph_lv1",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "tags": {
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.cluster_name": "ceph",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.crush_device_class": "",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.encrypted": "0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.osd_id": "1",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.type": "block",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.vdo": "0"
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            },
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "type": "block",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "vg_name": "ceph_vg1"
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:        }
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:    ],
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:    "2": [
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:        {
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "devices": [
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "/dev/loop5"
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            ],
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_name": "ceph_lv2",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_size": "21470642176",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "name": "ceph_lv2",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "tags": {
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.cluster_name": "ceph",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.crush_device_class": "",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.encrypted": "0",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.osd_id": "2",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.type": "block",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:                "ceph.vdo": "0"
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            },
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "type": "block",
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:            "vg_name": "ceph_vg2"
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:        }
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]:    ]
Oct  7 09:36:54 np0005473739 compassionate_pike[116447]: }
Oct  7 09:36:54 np0005473739 systemd[1]: libpod-8d0f191ba8beec794ca7807d28643d771d57b2923027263fe496a56178ef28cd.scope: Deactivated successfully.
Oct  7 09:36:54 np0005473739 podman[116430]: 2025-10-07 13:36:54.977165343 +0000 UTC m=+0.945035369 container died 8d0f191ba8beec794ca7807d28643d771d57b2923027263fe496a56178ef28cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pike, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:36:55 np0005473739 systemd[1]: var-lib-containers-storage-overlay-50e5dc6d602a8872605afea07b8d256c8edc01102374ec7cc2054eb1546816db-merged.mount: Deactivated successfully.
Oct  7 09:36:55 np0005473739 podman[116430]: 2025-10-07 13:36:55.032506871 +0000 UTC m=+1.000376897 container remove 8d0f191ba8beec794ca7807d28643d771d57b2923027263fe496a56178ef28cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_pike, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 09:36:55 np0005473739 systemd[1]: libpod-conmon-8d0f191ba8beec794ca7807d28643d771d57b2923027263fe496a56178ef28cd.scope: Deactivated successfully.
Oct  7 09:36:55 np0005473739 python3.9[116538]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:36:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:36:55 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.1a scrub starts
Oct  7 09:36:55 np0005473739 podman[116691]: 2025-10-07 13:36:55.568158276 +0000 UTC m=+0.035920575 container create a54ce724c5650865ccac723dcda090f554ebf9f9d0defec3fc566957a2cbae68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:36:55 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.1a scrub ok
Oct  7 09:36:55 np0005473739 systemd[1]: Started libpod-conmon-a54ce724c5650865ccac723dcda090f554ebf9f9d0defec3fc566957a2cbae68.scope.
Oct  7 09:36:55 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:36:55 np0005473739 podman[116691]: 2025-10-07 13:36:55.647492779 +0000 UTC m=+0.115255098 container init a54ce724c5650865ccac723dcda090f554ebf9f9d0defec3fc566957a2cbae68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:36:55 np0005473739 podman[116691]: 2025-10-07 13:36:55.552616379 +0000 UTC m=+0.020378688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:36:55 np0005473739 podman[116691]: 2025-10-07 13:36:55.65533978 +0000 UTC m=+0.123102099 container start a54ce724c5650865ccac723dcda090f554ebf9f9d0defec3fc566957a2cbae68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:36:55 np0005473739 infallible_chebyshev[116707]: 167 167
Oct  7 09:36:55 np0005473739 podman[116691]: 2025-10-07 13:36:55.660073207 +0000 UTC m=+0.127835596 container attach a54ce724c5650865ccac723dcda090f554ebf9f9d0defec3fc566957a2cbae68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:36:55 np0005473739 systemd[1]: libpod-a54ce724c5650865ccac723dcda090f554ebf9f9d0defec3fc566957a2cbae68.scope: Deactivated successfully.
Oct  7 09:36:55 np0005473739 conmon[116707]: conmon a54ce724c5650865ccac <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a54ce724c5650865ccac723dcda090f554ebf9f9d0defec3fc566957a2cbae68.scope/container/memory.events
Oct  7 09:36:55 np0005473739 podman[116691]: 2025-10-07 13:36:55.662741919 +0000 UTC m=+0.130504218 container died a54ce724c5650865ccac723dcda090f554ebf9f9d0defec3fc566957a2cbae68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:36:55 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4fedbd6012a340d43bf045663767d1730fafd217c539dc3add92fbce12c4b6df-merged.mount: Deactivated successfully.
Oct  7 09:36:55 np0005473739 podman[116691]: 2025-10-07 13:36:55.696247319 +0000 UTC m=+0.164009618 container remove a54ce724c5650865ccac723dcda090f554ebf9f9d0defec3fc566957a2cbae68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_chebyshev, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  7 09:36:55 np0005473739 systemd[1]: libpod-conmon-a54ce724c5650865ccac723dcda090f554ebf9f9d0defec3fc566957a2cbae68.scope: Deactivated successfully.
Oct  7 09:36:55 np0005473739 podman[116731]: 2025-10-07 13:36:55.859165788 +0000 UTC m=+0.059251614 container create dd498ad77b8ea73be9430e06ab87ca6664dc8d15d07ac0180b80a697c3104dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:36:55 np0005473739 systemd[1]: Started libpod-conmon-dd498ad77b8ea73be9430e06ab87ca6664dc8d15d07ac0180b80a697c3104dbe.scope.
Oct  7 09:36:55 np0005473739 podman[116731]: 2025-10-07 13:36:55.834086193 +0000 UTC m=+0.034172099 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:36:55 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:36:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86470c3df53525595b9853a8f744e40e53a171e099aead222856a699948eebb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86470c3df53525595b9853a8f744e40e53a171e099aead222856a699948eebb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86470c3df53525595b9853a8f744e40e53a171e099aead222856a699948eebb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86470c3df53525595b9853a8f744e40e53a171e099aead222856a699948eebb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:36:55 np0005473739 podman[116731]: 2025-10-07 13:36:55.950428801 +0000 UTC m=+0.150514647 container init dd498ad77b8ea73be9430e06ab87ca6664dc8d15d07ac0180b80a697c3104dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:36:55 np0005473739 podman[116731]: 2025-10-07 13:36:55.957725157 +0000 UTC m=+0.157810973 container start dd498ad77b8ea73be9430e06ab87ca6664dc8d15d07ac0180b80a697c3104dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:36:55 np0005473739 podman[116731]: 2025-10-07 13:36:55.964112378 +0000 UTC m=+0.164198224 container attach dd498ad77b8ea73be9430e06ab87ca6664dc8d15d07ac0180b80a697c3104dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 09:36:56 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.17 deep-scrub starts
Oct  7 09:36:56 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 3.17 deep-scrub ok
Oct  7 09:36:56 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.19 scrub starts
Oct  7 09:36:56 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 5.19 scrub ok
Oct  7 09:36:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v275: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]: {
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "osd_id": 2,
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "type": "bluestore"
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:    },
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "osd_id": 1,
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "type": "bluestore"
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:    },
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "osd_id": 0,
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:        "type": "bluestore"
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]:    }
Oct  7 09:36:56 np0005473739 flamboyant_murdock[116748]: }
Oct  7 09:36:56 np0005473739 systemd[1]: libpod-dd498ad77b8ea73be9430e06ab87ca6664dc8d15d07ac0180b80a697c3104dbe.scope: Deactivated successfully.
Oct  7 09:36:56 np0005473739 podman[116731]: 2025-10-07 13:36:56.876756536 +0000 UTC m=+1.076842372 container died dd498ad77b8ea73be9430e06ab87ca6664dc8d15d07ac0180b80a697c3104dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 09:36:56 np0005473739 systemd[1]: var-lib-containers-storage-overlay-86470c3df53525595b9853a8f744e40e53a171e099aead222856a699948eebb5-merged.mount: Deactivated successfully.
Oct  7 09:36:56 np0005473739 podman[116731]: 2025-10-07 13:36:56.934028005 +0000 UTC m=+1.134113831 container remove dd498ad77b8ea73be9430e06ab87ca6664dc8d15d07ac0180b80a697c3104dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_murdock, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:36:56 np0005473739 systemd[1]: libpod-conmon-dd498ad77b8ea73be9430e06ab87ca6664dc8d15d07ac0180b80a697c3104dbe.scope: Deactivated successfully.
Oct  7 09:36:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:36:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:36:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:36:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:36:56 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7da16dd4-7c89-4f65-b3f3-889642b5cb97 does not exist
Oct  7 09:36:56 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 5f8fc1e3-0b65-4af1-8f3b-3c098d159a80 does not exist
Oct  7 09:36:57 np0005473739 python3.9[116924]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:36:57 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:36:57 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:36:58 np0005473739 python3.9[117189]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:36:58 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.1b deep-scrub starts
Oct  7 09:36:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v276: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:36:58 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 2.1b deep-scrub ok
Oct  7 09:36:59 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.1a scrub starts
Oct  7 09:36:59 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 8.1a scrub ok
Oct  7 09:36:59 np0005473739 python3.9[117342]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:36:59 np0005473739 python3.9[117506]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:00 np0005473739 python3.9[117584]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v277: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:00 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.e scrub starts
Oct  7 09:37:00 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.e scrub ok
Oct  7 09:37:01 np0005473739 python3.9[117736]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:01 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.13 scrub starts
Oct  7 09:37:01 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 7.13 scrub ok
Oct  7 09:37:01 np0005473739 python3.9[117814]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:37:02 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.6 deep-scrub starts
Oct  7 09:37:02 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.6 deep-scrub ok
Oct  7 09:37:02 np0005473739 python3.9[117966]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:37:02 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.2 scrub starts
Oct  7 09:37:02 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.2 scrub ok
Oct  7 09:37:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v278: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:02 np0005473739 python3.9[118118]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:37:03 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.3 scrub starts
Oct  7 09:37:03 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.3 scrub ok
Oct  7 09:37:03 np0005473739 python3.9[118270]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:37:04 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.8 scrub starts
Oct  7 09:37:04 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.8 scrub ok
Oct  7 09:37:04 np0005473739 python3.9[118422]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:37:04 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.1 scrub starts
Oct  7 09:37:04 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.1 scrub ok
Oct  7 09:37:04 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.6 deep-scrub starts
Oct  7 09:37:04 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.6 deep-scrub ok
Oct  7 09:37:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v279: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:04 np0005473739 python3.9[118574]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:37:05 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.7 scrub starts
Oct  7 09:37:05 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.7 scrub ok
Oct  7 09:37:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:06 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.c scrub starts
Oct  7 09:37:06 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.c scrub ok
Oct  7 09:37:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v280: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:07 np0005473739 python3.9[118727]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:37:07 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.f deep-scrub starts
Oct  7 09:37:07 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.f deep-scrub ok
Oct  7 09:37:07 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.4 deep-scrub starts
Oct  7 09:37:07 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.4 deep-scrub ok
Oct  7 09:37:07 np0005473739 python3.9[118881]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:37:08 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.9 scrub starts
Oct  7 09:37:08 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.9 scrub ok
Oct  7 09:37:08 np0005473739 python3.9[119033]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:37:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v281: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:09 np0005473739 python3.9[119185]: ansible-service_facts Invoked
Oct  7 09:37:09 np0005473739 network[119202]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  7 09:37:09 np0005473739 network[119203]: 'network-scripts' will be removed from distribution in near future.
Oct  7 09:37:09 np0005473739 network[119204]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  7 09:37:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v282: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:11 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.5 deep-scrub starts
Oct  7 09:37:11 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.5 deep-scrub ok
Oct  7 09:37:12 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.b scrub starts
Oct  7 09:37:12 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.b scrub ok
Oct  7 09:37:12 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.b scrub starts
Oct  7 09:37:12 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.b scrub ok
Oct  7 09:37:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v283: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:14 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.1d scrub starts
Oct  7 09:37:14 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.1d scrub ok
Oct  7 09:37:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v284: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:14 np0005473739 python3.9[119659]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:37:15 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.17 scrub starts
Oct  7 09:37:15 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.17 scrub ok
Oct  7 09:37:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:16 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.8 scrub starts
Oct  7 09:37:16 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.8 scrub ok
Oct  7 09:37:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v285: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:17 np0005473739 python3.9[119812]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Oct  7 09:37:18 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.18 scrub starts
Oct  7 09:37:18 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.18 scrub ok
Oct  7 09:37:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v286: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:19 np0005473739 python3.9[119964]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:20 np0005473739 python3.9[120042]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/chrony.conf _original_basename=chrony.conf.j2 recurse=False state=file path=/etc/chrony.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:20 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.1b deep-scrub starts
Oct  7 09:37:20 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.1b deep-scrub ok
Oct  7 09:37:20 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.d scrub starts
Oct  7 09:37:20 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 6.d scrub ok
Oct  7 09:37:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v287: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:20 np0005473739 python3.9[120194]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:21 np0005473739 python3.9[120272]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/chronyd _original_basename=chronyd.sysconfig.j2 recurse=False state=file path=/etc/sysconfig/chronyd force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:21 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.d scrub starts
Oct  7 09:37:21 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.d scrub ok
Oct  7 09:37:21 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.5 deep-scrub starts
Oct  7 09:37:21 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.5 deep-scrub ok
Oct  7 09:37:22 np0005473739 python3.9[120424]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:37:22
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'backups', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.meta', '.mgr']
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v288: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:37:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:37:23 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.11 scrub starts
Oct  7 09:37:23 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.11 scrub ok
Oct  7 09:37:23 np0005473739 python3.9[120576]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:37:24 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.c scrub starts
Oct  7 09:37:24 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.c scrub ok
Oct  7 09:37:24 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 6.3 scrub starts
Oct  7 09:37:24 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 6.3 scrub ok
Oct  7 09:37:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v289: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:24 np0005473739 python3.9[120660]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:37:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:25 np0005473739 systemd[1]: session-39.scope: Deactivated successfully.
Oct  7 09:37:25 np0005473739 systemd[1]: session-39.scope: Consumed 22.407s CPU time.
Oct  7 09:37:25 np0005473739 systemd-logind[801]: Session 39 logged out. Waiting for processes to exit.
Oct  7 09:37:25 np0005473739 systemd-logind[801]: Removed session 39.
Oct  7 09:37:26 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.f scrub starts
Oct  7 09:37:26 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 6.f scrub ok
Oct  7 09:37:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v290: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:27 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.7 scrub starts
Oct  7 09:37:27 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.7 scrub ok
Oct  7 09:37:28 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.13 scrub starts
Oct  7 09:37:28 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.13 scrub ok
Oct  7 09:37:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v291: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:30 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 6.7 scrub starts
Oct  7 09:37:30 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 6.7 scrub ok
Oct  7 09:37:30 np0005473739 systemd-logind[801]: New session 40 of user zuul.
Oct  7 09:37:30 np0005473739 systemd[1]: Started Session 40 of User zuul.
Oct  7 09:37:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v292: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:31 np0005473739 python3.9[120842]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:37:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:37:32 np0005473739 python3.9[120994]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/ceph-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:32 np0005473739 python3.9[121072]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/ceph-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/ceph-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v293: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:32 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.a scrub starts
Oct  7 09:37:32 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.a scrub ok
Oct  7 09:37:32 np0005473739 systemd[1]: session-40.scope: Deactivated successfully.
Oct  7 09:37:32 np0005473739 systemd[1]: session-40.scope: Consumed 1.516s CPU time.
Oct  7 09:37:32 np0005473739 systemd-logind[801]: Session 40 logged out. Waiting for processes to exit.
Oct  7 09:37:32 np0005473739 systemd-logind[801]: Removed session 40.
Oct  7 09:37:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v294: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #18. Immutable memtables: 0.
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:37:35.207909) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 3] Flushing memtable with next log file: 18
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844255208001, "job": 3, "event": "flush_started", "num_memtables": 1, "num_entries": 7216, "num_deletes": 251, "total_data_size": 9276293, "memory_usage": 9483120, "flush_reason": "Manual Compaction"}
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 3] Level-0 flush table #19: started
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844255272201, "cf_name": "default", "job": 3, "event": "table_file_creation", "file_number": 19, "file_size": 7403161, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 141, "largest_seqno": 7354, "table_properties": {"data_size": 7376579, "index_size": 17315, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8197, "raw_key_size": 76068, "raw_average_key_size": 23, "raw_value_size": 7313750, "raw_average_value_size": 2237, "num_data_blocks": 761, "num_entries": 3268, "num_filter_entries": 3268, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843835, "oldest_key_time": 1759843835, "file_creation_time": 1759844255, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 19, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 3] Flush lasted 64334 microseconds, and 13743 cpu microseconds.
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:37:35.272258) [db/flush_job.cc:967] [default] [JOB 3] Level-0 flush table #19: 7403161 bytes OK
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:37:35.272278) [db/memtable_list.cc:519] [default] Level-0 commit table #19 started
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:37:35.275996) [db/memtable_list.cc:722] [default] Level-0 commit table #19: memtable #1 done
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:37:35.276011) EVENT_LOG_v1 {"time_micros": 1759844255276005, "job": 3, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [3, 0, 0, 0, 0, 0, 0], "immutable_memtables": 0}
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:37:35.276043) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: files[3 0 0 0 0 0 0] max score 0.75
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 3] Try to delete WAL files size 9244826, prev total WAL file size 9244826, number of live WAL files 2.
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000014.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:37:35.278385) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730030' seq:72057594037927935, type:22 .. '7061786F7300323532' seq:0, type:0; will stop at (end)
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 4] Compacting 3@0 files to L6, score -1.00
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 3 Base level 0, inputs: [19(7229KB) 13(53KB) 8(1944B)]
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844255278455, "job": 4, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [19, 13, 8], "score": -1, "input_data_size": 7460362, "oldest_snapshot_seqno": -1}
Oct  7 09:37:35 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.11 scrub starts
Oct  7 09:37:35 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.11 scrub ok
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 4] Generated table #20: 3084 keys, 7415674 bytes, temperature: kUnknown
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844255380911, "cf_name": "default", "job": 4, "event": "table_file_creation", "file_number": 20, "file_size": 7415674, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7389511, "index_size": 17349, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 7749, "raw_key_size": 74140, "raw_average_key_size": 24, "raw_value_size": 7328298, "raw_average_value_size": 2376, "num_data_blocks": 764, "num_entries": 3084, "num_filter_entries": 3084, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759844255, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 20, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:37:35.381178) [db/compaction/compaction_job.cc:1663] [default] [JOB 4] Compacted 3@0 files to L6 => 7415674 bytes
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:37:35.391960) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 72.7 rd, 72.3 wr, level 6, files in(3, 0) out(1 +0 blob) MB in(7.1, 0.0 +0.0 blob) out(7.1 +0.0 blob), read-write-amplify(2.0) write-amplify(1.0) OK, records in: 3374, records dropped: 290 output_compression: NoCompression
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:37:35.391997) EVENT_LOG_v1 {"time_micros": 1759844255391984, "job": 4, "event": "compaction_finished", "compaction_time_micros": 102551, "compaction_time_cpu_micros": 15386, "output_level": 6, "num_output_files": 1, "total_output_size": 7415674, "num_input_records": 3374, "num_output_records": 3084, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000019.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844255393193, "job": 4, "event": "table_file_deletion", "file_number": 19}
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000013.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844255393274, "job": 4, "event": "table_file_deletion", "file_number": 13}
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000008.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844255393312, "job": 4, "event": "table_file_deletion", "file_number": 8}
Oct  7 09:37:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:37:35.278290) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:37:36 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 6.5 scrub starts
Oct  7 09:37:36 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 6.5 scrub ok
Oct  7 09:37:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v295: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:37 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.1e scrub starts
Oct  7 09:37:37 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.1e scrub ok
Oct  7 09:37:37 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 6.9 deep-scrub starts
Oct  7 09:37:37 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 6.9 deep-scrub ok
Oct  7 09:37:38 np0005473739 systemd-logind[801]: New session 41 of user zuul.
Oct  7 09:37:38 np0005473739 systemd[1]: Started Session 41 of User zuul.
Oct  7 09:37:38 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 6.a deep-scrub starts
Oct  7 09:37:38 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 6.a deep-scrub ok
Oct  7 09:37:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v296: 305 pgs: 305 active+clean; 457 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:38 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.c scrub starts
Oct  7 09:37:38 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.c scrub ok
Oct  7 09:37:39 np0005473739 python3.9[121253]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:37:39 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.1 deep-scrub starts
Oct  7 09:37:39 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.1 deep-scrub ok
Oct  7 09:37:40 np0005473739 python3.9[121409]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:40 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.19 scrub starts
Oct  7 09:37:40 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.19 scrub ok
Oct  7 09:37:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v297: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:41 np0005473739 python3.9[121584]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:41 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.1f scrub starts
Oct  7 09:37:41 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.1f scrub ok
Oct  7 09:37:41 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.17 scrub starts
Oct  7 09:37:41 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.17 scrub ok
Oct  7 09:37:41 np0005473739 python3.9[121662]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.an4gegtd recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:42 np0005473739 python3.9[121814]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v298: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:42 np0005473739 python3.9[121892]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/etc/sysconfig/podman_drop_in _original_basename=.dbtf1m7i recurse=False state=file path=/etc/sysconfig/podman_drop_in force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:43 np0005473739 python3.9[122044]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:37:44 np0005473739 python3.9[122196]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v299: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:44 np0005473739 python3.9[122274]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:37:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:45 np0005473739 python3.9[122426]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:45 np0005473739 python3.9[122504]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:37:45 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.13 scrub starts
Oct  7 09:37:45 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.13 scrub ok
Oct  7 09:37:46 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.1a scrub starts
Oct  7 09:37:46 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.1a scrub ok
Oct  7 09:37:46 np0005473739 python3.9[122656]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v300: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:46 np0005473739 python3.9[122808]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:46 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.16 scrub starts
Oct  7 09:37:46 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.16 scrub ok
Oct  7 09:37:47 np0005473739 python3.9[122886]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:47 np0005473739 python3.9[123038]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:48 np0005473739 python3.9[123116]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v301: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:49 np0005473739 python3.9[123268]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:37:49 np0005473739 systemd[1]: Reloading.
Oct  7 09:37:49 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:37:49 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:37:49 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.f scrub starts
Oct  7 09:37:49 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.f scrub ok
Oct  7 09:37:49 np0005473739 systemd[1]: Starting dnf makecache...
Oct  7 09:37:49 np0005473739 dnf[123305]: Metadata cache refreshed recently.
Oct  7 09:37:49 np0005473739 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  7 09:37:49 np0005473739 systemd[1]: Finished dnf makecache.
Oct  7 09:37:50 np0005473739 python3.9[123459]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:50 np0005473739 python3.9[123537]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v302: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:51 np0005473739 python3.9[123689]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:51 np0005473739 python3.9[123767]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:52 np0005473739 python3.9[123919]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:37:52 np0005473739 systemd[1]: Reloading.
Oct  7 09:37:52 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:37:52 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:37:52 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.e deep-scrub starts
Oct  7 09:37:52 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.e deep-scrub ok
Oct  7 09:37:52 np0005473739 systemd[1]: Starting Create netns directory...
Oct  7 09:37:52 np0005473739 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  7 09:37:52 np0005473739 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  7 09:37:52 np0005473739 systemd[1]: Finished Create netns directory.
Oct  7 09:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:37:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v303: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:52 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.1d scrub starts
Oct  7 09:37:52 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 11.1d scrub ok
Oct  7 09:37:53 np0005473739 python3.9[124110]: ansible-ansible.builtin.service_facts Invoked
Oct  7 09:37:53 np0005473739 network[124127]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  7 09:37:53 np0005473739 network[124128]: 'network-scripts' will be removed from distribution in near future.
Oct  7 09:37:53 np0005473739 network[124129]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  7 09:37:53 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.15 deep-scrub starts
Oct  7 09:37:53 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.15 deep-scrub ok
Oct  7 09:37:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v304: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:37:55 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.6 scrub starts
Oct  7 09:37:55 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.6 scrub ok
Oct  7 09:37:56 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.1c scrub starts
Oct  7 09:37:56 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.1c scrub ok
Oct  7 09:37:56 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.4 deep-scrub starts
Oct  7 09:37:56 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.4 deep-scrub ok
Oct  7 09:37:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v305: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:37:57 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 6cf33bee-f8fb-4d96-84a1-7b1fe4b20c64 does not exist
Oct  7 09:37:57 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3b34515d-ab50-462f-84ae-84be54eff260 does not exist
Oct  7 09:37:57 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 78b29371-de44-4794-9d93-3e7c0570dc86 does not exist
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:37:57 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:37:57 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.1f deep-scrub starts
Oct  7 09:37:57 np0005473739 ceph-osd[89062]: log_channel(cluster) log [DBG] : 9.1f deep-scrub ok
Oct  7 09:37:58 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.1b scrub starts
Oct  7 09:37:58 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.1b scrub ok
Oct  7 09:37:58 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.10 scrub starts
Oct  7 09:37:58 np0005473739 podman[124540]: 2025-10-07 13:37:58.246340759 +0000 UTC m=+0.073303045 container create 8d96ab88eaf8b15adbd83ef6073f578eb54e3f7ced54a6292a20733fc9d36e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_payne, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 09:37:58 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.10 scrub ok
Oct  7 09:37:58 np0005473739 podman[124540]: 2025-10-07 13:37:58.201641211 +0000 UTC m=+0.028603537 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:37:58 np0005473739 systemd[1]: Started libpod-conmon-8d96ab88eaf8b15adbd83ef6073f578eb54e3f7ced54a6292a20733fc9d36e90.scope.
Oct  7 09:37:58 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:37:58 np0005473739 podman[124540]: 2025-10-07 13:37:58.367663116 +0000 UTC m=+0.194625432 container init 8d96ab88eaf8b15adbd83ef6073f578eb54e3f7ced54a6292a20733fc9d36e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_payne, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:37:58 np0005473739 podman[124540]: 2025-10-07 13:37:58.373994351 +0000 UTC m=+0.200956637 container start 8d96ab88eaf8b15adbd83ef6073f578eb54e3f7ced54a6292a20733fc9d36e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_payne, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 09:37:58 np0005473739 admiring_payne[124624]: 167 167
Oct  7 09:37:58 np0005473739 systemd[1]: libpod-8d96ab88eaf8b15adbd83ef6073f578eb54e3f7ced54a6292a20733fc9d36e90.scope: Deactivated successfully.
Oct  7 09:37:58 np0005473739 podman[124540]: 2025-10-07 13:37:58.380764468 +0000 UTC m=+0.207726774 container attach 8d96ab88eaf8b15adbd83ef6073f578eb54e3f7ced54a6292a20733fc9d36e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 09:37:58 np0005473739 podman[124540]: 2025-10-07 13:37:58.381325082 +0000 UTC m=+0.208287368 container died 8d96ab88eaf8b15adbd83ef6073f578eb54e3f7ced54a6292a20733fc9d36e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_payne, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:37:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c7285dd7e3e448367d3c129ad68ca32bea3f366976dadef4836085edd305f365-merged.mount: Deactivated successfully.
Oct  7 09:37:58 np0005473739 podman[124540]: 2025-10-07 13:37:58.517330184 +0000 UTC m=+0.344292470 container remove 8d96ab88eaf8b15adbd83ef6073f578eb54e3f7ced54a6292a20733fc9d36e90 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_payne, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:37:58 np0005473739 systemd[1]: libpod-conmon-8d96ab88eaf8b15adbd83ef6073f578eb54e3f7ced54a6292a20733fc9d36e90.scope: Deactivated successfully.
Oct  7 09:37:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v306: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:37:58 np0005473739 python3.9[124699]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:37:58 np0005473739 podman[124707]: 2025-10-07 13:37:58.677470054 +0000 UTC m=+0.038058584 container create 88a2a563b8a24fddc6f054b3b9611bb254be50b8bcf2b2438b6c19eee4f86c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:37:58 np0005473739 systemd[1]: Started libpod-conmon-88a2a563b8a24fddc6f054b3b9611bb254be50b8bcf2b2438b6c19eee4f86c16.scope.
Oct  7 09:37:58 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:37:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2b53c4e91054610d33cade97548fbff5ad2a3bfa252e0b2878fd106edb023e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:37:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2b53c4e91054610d33cade97548fbff5ad2a3bfa252e0b2878fd106edb023e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:37:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2b53c4e91054610d33cade97548fbff5ad2a3bfa252e0b2878fd106edb023e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:37:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2b53c4e91054610d33cade97548fbff5ad2a3bfa252e0b2878fd106edb023e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:37:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e2b53c4e91054610d33cade97548fbff5ad2a3bfa252e0b2878fd106edb023e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:37:58 np0005473739 podman[124707]: 2025-10-07 13:37:58.661733154 +0000 UTC m=+0.022321694 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:37:58 np0005473739 podman[124707]: 2025-10-07 13:37:58.769871147 +0000 UTC m=+0.130459697 container init 88a2a563b8a24fddc6f054b3b9611bb254be50b8bcf2b2438b6c19eee4f86c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 09:37:58 np0005473739 podman[124707]: 2025-10-07 13:37:58.776616353 +0000 UTC m=+0.137204883 container start 88a2a563b8a24fddc6f054b3b9611bb254be50b8bcf2b2438b6c19eee4f86c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:37:58 np0005473739 podman[124707]: 2025-10-07 13:37:58.788060712 +0000 UTC m=+0.148649252 container attach 88a2a563b8a24fddc6f054b3b9611bb254be50b8bcf2b2438b6c19eee4f86c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 09:37:59 np0005473739 python3.9[124806]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/etc/ssh/sshd_config _original_basename=sshd_config_block.j2 recurse=False state=file path=/etc/ssh/sshd_config force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:59 np0005473739 python3.9[124969]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:37:59 np0005473739 laughing_robinson[124726]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:37:59 np0005473739 laughing_robinson[124726]: --> relative data size: 1.0
Oct  7 09:37:59 np0005473739 laughing_robinson[124726]: --> All data devices are unavailable
Oct  7 09:37:59 np0005473739 systemd[1]: libpod-88a2a563b8a24fddc6f054b3b9611bb254be50b8bcf2b2438b6c19eee4f86c16.scope: Deactivated successfully.
Oct  7 09:37:59 np0005473739 conmon[124726]: conmon 88a2a563b8a24fddc6f0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-88a2a563b8a24fddc6f054b3b9611bb254be50b8bcf2b2438b6c19eee4f86c16.scope/container/memory.events
Oct  7 09:37:59 np0005473739 podman[124707]: 2025-10-07 13:37:59.817420076 +0000 UTC m=+1.178008606 container died 88a2a563b8a24fddc6f054b3b9611bb254be50b8bcf2b2438b6c19eee4f86c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 09:37:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7e2b53c4e91054610d33cade97548fbff5ad2a3bfa252e0b2878fd106edb023e-merged.mount: Deactivated successfully.
Oct  7 09:38:00 np0005473739 podman[124707]: 2025-10-07 13:38:00.000361092 +0000 UTC m=+1.360949622 container remove 88a2a563b8a24fddc6f054b3b9611bb254be50b8bcf2b2438b6c19eee4f86c16 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_robinson, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:38:00 np0005473739 systemd[1]: libpod-conmon-88a2a563b8a24fddc6f054b3b9611bb254be50b8bcf2b2438b6c19eee4f86c16.scope: Deactivated successfully.
Oct  7 09:38:00 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.2 scrub starts
Oct  7 09:38:00 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.2 scrub ok
Oct  7 09:38:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:00 np0005473739 python3.9[125221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:38:00 np0005473739 podman[125314]: 2025-10-07 13:38:00.5242903 +0000 UTC m=+0.035337913 container create 86e860b9e78c2261edb552a693769943259e271306e8530861f141279b7f9545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swanson, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:38:00 np0005473739 systemd[1]: Started libpod-conmon-86e860b9e78c2261edb552a693769943259e271306e8530861f141279b7f9545.scope.
Oct  7 09:38:00 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:38:00 np0005473739 podman[125314]: 2025-10-07 13:38:00.603295253 +0000 UTC m=+0.114342886 container init 86e860b9e78c2261edb552a693769943259e271306e8530861f141279b7f9545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swanson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:38:00 np0005473739 podman[125314]: 2025-10-07 13:38:00.507610055 +0000 UTC m=+0.018657688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:38:00 np0005473739 podman[125314]: 2025-10-07 13:38:00.610356097 +0000 UTC m=+0.121403710 container start 86e860b9e78c2261edb552a693769943259e271306e8530861f141279b7f9545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swanson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:38:00 np0005473739 podman[125314]: 2025-10-07 13:38:00.613752796 +0000 UTC m=+0.124800429 container attach 86e860b9e78c2261edb552a693769943259e271306e8530861f141279b7f9545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swanson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:38:00 np0005473739 sad_swanson[125378]: 167 167
Oct  7 09:38:00 np0005473739 systemd[1]: libpod-86e860b9e78c2261edb552a693769943259e271306e8530861f141279b7f9545.scope: Deactivated successfully.
Oct  7 09:38:00 np0005473739 podman[125314]: 2025-10-07 13:38:00.616249962 +0000 UTC m=+0.127297575 container died 86e860b9e78c2261edb552a693769943259e271306e8530861f141279b7f9545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:38:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v307: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-57a69a1b128ef26e53c42e0568f5112a9114a7a9f499f4ed18fd375b0fe1c511-merged.mount: Deactivated successfully.
Oct  7 09:38:00 np0005473739 podman[125314]: 2025-10-07 13:38:00.727795173 +0000 UTC m=+0.238842786 container remove 86e860b9e78c2261edb552a693769943259e271306e8530861f141279b7f9545 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_swanson, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:38:00 np0005473739 systemd[1]: libpod-conmon-86e860b9e78c2261edb552a693769943259e271306e8530861f141279b7f9545.scope: Deactivated successfully.
Oct  7 09:38:00 np0005473739 python3.9[125386]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/var/lib/edpm-config/firewall/sshd-networks.yaml _original_basename=firewall.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/sshd-networks.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:00 np0005473739 podman[125409]: 2025-10-07 13:38:00.880444779 +0000 UTC m=+0.050048027 container create ec05ec5a12d8af304c211a97949412de216682d280499336d6b4339dce5a5fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 09:38:00 np0005473739 systemd[1]: Started libpod-conmon-ec05ec5a12d8af304c211a97949412de216682d280499336d6b4339dce5a5fbc.scope.
Oct  7 09:38:00 np0005473739 podman[125409]: 2025-10-07 13:38:00.852682574 +0000 UTC m=+0.022285852 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:38:00 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:38:00 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aefb57eaec004eba3d5da2f12ba0d045b20379aaf3713eff4ed2da9374dadfe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:38:00 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aefb57eaec004eba3d5da2f12ba0d045b20379aaf3713eff4ed2da9374dadfe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:38:00 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aefb57eaec004eba3d5da2f12ba0d045b20379aaf3713eff4ed2da9374dadfe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:38:00 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0aefb57eaec004eba3d5da2f12ba0d045b20379aaf3713eff4ed2da9374dadfe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:38:00 np0005473739 podman[125409]: 2025-10-07 13:38:00.987389921 +0000 UTC m=+0.156993209 container init ec05ec5a12d8af304c211a97949412de216682d280499336d6b4339dce5a5fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 09:38:00 np0005473739 podman[125409]: 2025-10-07 13:38:00.993276325 +0000 UTC m=+0.162879573 container start ec05ec5a12d8af304c211a97949412de216682d280499336d6b4339dce5a5fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:38:01 np0005473739 podman[125409]: 2025-10-07 13:38:01.00346529 +0000 UTC m=+0.173068538 container attach ec05ec5a12d8af304c211a97949412de216682d280499336d6b4339dce5a5fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 09:38:01 np0005473739 serene_jang[125448]: {
Oct  7 09:38:01 np0005473739 serene_jang[125448]:    "0": [
Oct  7 09:38:01 np0005473739 serene_jang[125448]:        {
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "devices": [
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "/dev/loop3"
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            ],
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_name": "ceph_lv0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_size": "21470642176",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "name": "ceph_lv0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "tags": {
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.cluster_name": "ceph",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.crush_device_class": "",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.encrypted": "0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.osd_id": "0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.type": "block",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.vdo": "0"
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            },
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "type": "block",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "vg_name": "ceph_vg0"
Oct  7 09:38:01 np0005473739 serene_jang[125448]:        }
Oct  7 09:38:01 np0005473739 serene_jang[125448]:    ],
Oct  7 09:38:01 np0005473739 serene_jang[125448]:    "1": [
Oct  7 09:38:01 np0005473739 serene_jang[125448]:        {
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "devices": [
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "/dev/loop4"
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            ],
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_name": "ceph_lv1",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_size": "21470642176",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "name": "ceph_lv1",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "tags": {
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.cluster_name": "ceph",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.crush_device_class": "",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.encrypted": "0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.osd_id": "1",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.type": "block",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.vdo": "0"
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            },
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "type": "block",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "vg_name": "ceph_vg1"
Oct  7 09:38:01 np0005473739 serene_jang[125448]:        }
Oct  7 09:38:01 np0005473739 serene_jang[125448]:    ],
Oct  7 09:38:01 np0005473739 serene_jang[125448]:    "2": [
Oct  7 09:38:01 np0005473739 serene_jang[125448]:        {
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "devices": [
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "/dev/loop5"
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            ],
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_name": "ceph_lv2",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_size": "21470642176",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "name": "ceph_lv2",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "tags": {
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.cluster_name": "ceph",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.crush_device_class": "",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.encrypted": "0",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.osd_id": "2",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.type": "block",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:                "ceph.vdo": "0"
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            },
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "type": "block",
Oct  7 09:38:01 np0005473739 serene_jang[125448]:            "vg_name": "ceph_vg2"
Oct  7 09:38:01 np0005473739 serene_jang[125448]:        }
Oct  7 09:38:01 np0005473739 serene_jang[125448]:    ]
Oct  7 09:38:01 np0005473739 serene_jang[125448]: }
Oct  7 09:38:01 np0005473739 systemd[1]: libpod-ec05ec5a12d8af304c211a97949412de216682d280499336d6b4339dce5a5fbc.scope: Deactivated successfully.
Oct  7 09:38:01 np0005473739 podman[125409]: 2025-10-07 13:38:01.719184006 +0000 UTC m=+0.888787254 container died ec05ec5a12d8af304c211a97949412de216682d280499336d6b4339dce5a5fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:38:01 np0005473739 python3.9[125580]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Oct  7 09:38:01 np0005473739 systemd[1]: Starting Time & Date Service...
Oct  7 09:38:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0aefb57eaec004eba3d5da2f12ba0d045b20379aaf3713eff4ed2da9374dadfe-merged.mount: Deactivated successfully.
Oct  7 09:38:01 np0005473739 podman[125409]: 2025-10-07 13:38:01.863797922 +0000 UTC m=+1.033401180 container remove ec05ec5a12d8af304c211a97949412de216682d280499336d6b4339dce5a5fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_jang, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 09:38:01 np0005473739 systemd[1]: Started Time & Date Service.
Oct  7 09:38:01 np0005473739 systemd[1]: libpod-conmon-ec05ec5a12d8af304c211a97949412de216682d280499336d6b4339dce5a5fbc.scope: Deactivated successfully.
Oct  7 09:38:02 np0005473739 podman[125888]: 2025-10-07 13:38:02.609775337 +0000 UTC m=+0.103268696 container create fd3f5b519c479daad4642d22baa1f6cfb5ba34dc749704a584c57b6e42b77dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_franklin, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:38:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v308: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:02 np0005473739 podman[125888]: 2025-10-07 13:38:02.542777069 +0000 UTC m=+0.036270508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:38:02 np0005473739 python3.9[125880]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:02 np0005473739 systemd[1]: Started libpod-conmon-fd3f5b519c479daad4642d22baa1f6cfb5ba34dc749704a584c57b6e42b77dbd.scope.
Oct  7 09:38:02 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:38:02 np0005473739 podman[125888]: 2025-10-07 13:38:02.776307015 +0000 UTC m=+0.269800394 container init fd3f5b519c479daad4642d22baa1f6cfb5ba34dc749704a584c57b6e42b77dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:38:02 np0005473739 podman[125888]: 2025-10-07 13:38:02.788712779 +0000 UTC m=+0.282206138 container start fd3f5b519c479daad4642d22baa1f6cfb5ba34dc749704a584c57b6e42b77dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_franklin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:38:02 np0005473739 kind_franklin[125905]: 167 167
Oct  7 09:38:02 np0005473739 systemd[1]: libpod-fd3f5b519c479daad4642d22baa1f6cfb5ba34dc749704a584c57b6e42b77dbd.scope: Deactivated successfully.
Oct  7 09:38:02 np0005473739 podman[125888]: 2025-10-07 13:38:02.855643327 +0000 UTC m=+0.349136706 container attach fd3f5b519c479daad4642d22baa1f6cfb5ba34dc749704a584c57b6e42b77dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_franklin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:38:02 np0005473739 podman[125888]: 2025-10-07 13:38:02.85651862 +0000 UTC m=+0.350012009 container died fd3f5b519c479daad4642d22baa1f6cfb5ba34dc749704a584c57b6e42b77dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_franklin, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:38:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9d8e5fc4a7ec3fca55a7f9dfebadef4220a170e3ff3cee80ae43f6bfb30b872b-merged.mount: Deactivated successfully.
Oct  7 09:38:02 np0005473739 podman[125888]: 2025-10-07 13:38:02.992685654 +0000 UTC m=+0.486179043 container remove fd3f5b519c479daad4642d22baa1f6cfb5ba34dc749704a584c57b6e42b77dbd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_franklin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:38:03 np0005473739 systemd[1]: libpod-conmon-fd3f5b519c479daad4642d22baa1f6cfb5ba34dc749704a584c57b6e42b77dbd.scope: Deactivated successfully.
Oct  7 09:38:03 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.18 scrub starts
Oct  7 09:38:03 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.18 scrub ok
Oct  7 09:38:03 np0005473739 podman[126031]: 2025-10-07 13:38:03.172807987 +0000 UTC m=+0.059731800 container create a8d20cf0782ce058625e45d6ace028ce16187aa83ecba4b41c5834ea147cf301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:38:03 np0005473739 podman[126031]: 2025-10-07 13:38:03.133749177 +0000 UTC m=+0.020673000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:38:03 np0005473739 systemd[1]: Started libpod-conmon-a8d20cf0782ce058625e45d6ace028ce16187aa83ecba4b41c5834ea147cf301.scope.
Oct  7 09:38:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:38:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a86a1f0461fcac5b45199b942ffb2d4935707b7e34141de4c58c45246e2340/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:38:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a86a1f0461fcac5b45199b942ffb2d4935707b7e34141de4c58c45246e2340/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:38:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a86a1f0461fcac5b45199b942ffb2d4935707b7e34141de4c58c45246e2340/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:38:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/52a86a1f0461fcac5b45199b942ffb2d4935707b7e34141de4c58c45246e2340/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:38:03 np0005473739 podman[126031]: 2025-10-07 13:38:03.270470846 +0000 UTC m=+0.157394659 container init a8d20cf0782ce058625e45d6ace028ce16187aa83ecba4b41c5834ea147cf301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:38:03 np0005473739 podman[126031]: 2025-10-07 13:38:03.277038388 +0000 UTC m=+0.163962201 container start a8d20cf0782ce058625e45d6ace028ce16187aa83ecba4b41c5834ea147cf301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 09:38:03 np0005473739 podman[126031]: 2025-10-07 13:38:03.327898306 +0000 UTC m=+0.214822119 container attach a8d20cf0782ce058625e45d6ace028ce16187aa83ecba4b41c5834ea147cf301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  7 09:38:03 np0005473739 python3.9[126092]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:38:03 np0005473739 python3.9[126177]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]: {
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "osd_id": 2,
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "type": "bluestore"
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:    },
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "osd_id": 1,
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "type": "bluestore"
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:    },
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "osd_id": 0,
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:        "type": "bluestore"
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]:    }
Oct  7 09:38:04 np0005473739 eloquent_feistel[126095]: }
Oct  7 09:38:04 np0005473739 systemd[1]: libpod-a8d20cf0782ce058625e45d6ace028ce16187aa83ecba4b41c5834ea147cf301.scope: Deactivated successfully.
Oct  7 09:38:04 np0005473739 podman[126031]: 2025-10-07 13:38:04.227752459 +0000 UTC m=+1.114676282 container died a8d20cf0782ce058625e45d6ace028ce16187aa83ecba4b41c5834ea147cf301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:38:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-52a86a1f0461fcac5b45199b942ffb2d4935707b7e34141de4c58c45246e2340-merged.mount: Deactivated successfully.
Oct  7 09:38:04 np0005473739 podman[126031]: 2025-10-07 13:38:04.30669381 +0000 UTC m=+1.193617623 container remove a8d20cf0782ce058625e45d6ace028ce16187aa83ecba4b41c5834ea147cf301 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_feistel, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct  7 09:38:04 np0005473739 systemd[1]: libpod-conmon-a8d20cf0782ce058625e45d6ace028ce16187aa83ecba4b41c5834ea147cf301.scope: Deactivated successfully.
Oct  7 09:38:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:38:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:38:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:38:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:38:04 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f0b632ef-ef53-44e5-80d3-1ce7381c5b55 does not exist
Oct  7 09:38:04 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 60f97125-052f-4145-bbc9-c782fa0373e2 does not exist
Oct  7 09:38:04 np0005473739 python3.9[126368]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:38:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v309: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:04 np0005473739 python3.9[126499]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.2pvdknzg recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:38:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:38:05 np0005473739 python3.9[126651]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:38:06 np0005473739 python3.9[126729]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v310: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:06 np0005473739 python3.9[126881]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:38:07 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.14 scrub starts
Oct  7 09:38:07 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 11.14 scrub ok
Oct  7 09:38:07 np0005473739 python3[127034]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  7 09:38:08 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.8 scrub starts
Oct  7 09:38:08 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.8 scrub ok
Oct  7 09:38:08 np0005473739 python3.9[127186]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:38:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v311: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:08 np0005473739 python3.9[127264]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:09 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.d deep-scrub starts
Oct  7 09:38:09 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.d deep-scrub ok
Oct  7 09:38:09 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.16 scrub starts
Oct  7 09:38:09 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.16 scrub ok
Oct  7 09:38:09 np0005473739 python3.9[127416]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:38:09 np0005473739 python3.9[127494]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:10 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.9 scrub starts
Oct  7 09:38:10 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.9 scrub ok
Oct  7 09:38:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:10 np0005473739 python3.9[127646]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:38:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v312: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:10 np0005473739 python3.9[127724]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:11 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.1c deep-scrub starts
Oct  7 09:38:11 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.1c deep-scrub ok
Oct  7 09:38:11 np0005473739 python3.9[127876]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:38:12 np0005473739 python3.9[127954]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:12 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.3 scrub starts
Oct  7 09:38:12 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.3 scrub ok
Oct  7 09:38:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v313: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:12 np0005473739 python3.9[128106]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:38:13 np0005473739 python3.9[128184]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-rules.nft _original_basename=ruleset.j2 recurse=False state=file path=/etc/nftables/edpm-rules.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:13 np0005473739 python3.9[128336]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:38:14 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.1e scrub starts
Oct  7 09:38:14 np0005473739 ceph-osd[88039]: log_channel(cluster) log [DBG] : 9.1e scrub ok
Oct  7 09:38:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v314: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:14 np0005473739 python3.9[128491]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:15 np0005473739 python3.9[128643]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:15 np0005473739 python3.9[128795]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:16 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.b scrub starts
Oct  7 09:38:16 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.b scrub ok
Oct  7 09:38:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v315: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:16 np0005473739 python3.9[128947]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  7 09:38:17 np0005473739 python3.9[129099]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Oct  7 09:38:17 np0005473739 systemd[1]: session-41.scope: Deactivated successfully.
Oct  7 09:38:17 np0005473739 systemd[1]: session-41.scope: Consumed 27.881s CPU time.
Oct  7 09:38:17 np0005473739 systemd-logind[801]: Session 41 logged out. Waiting for processes to exit.
Oct  7 09:38:17 np0005473739 systemd-logind[801]: Removed session 41.
Oct  7 09:38:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v316: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v317: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:38:22
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.data', 'vms', 'cephfs.cephfs.meta', '.mgr', 'backups', 'volumes', 'default.rgw.control', 'images', 'default.rgw.log']
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:38:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v318: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:22 np0005473739 systemd-logind[801]: New session 42 of user zuul.
Oct  7 09:38:22 np0005473739 systemd[1]: Started Session 42 of User zuul.
Oct  7 09:38:23 np0005473739 python3.9[129279]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Oct  7 09:38:24 np0005473739 python3.9[129431]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:38:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v319: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:25 np0005473739 python3.9[129585]: ansible-ansible.builtin.slurp Invoked with src=/etc/ssh/ssh_known_hosts
Oct  7 09:38:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:25 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.12 scrub starts
Oct  7 09:38:25 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.12 scrub ok
Oct  7 09:38:25 np0005473739 python3.9[129737]: ansible-ansible.legacy.stat Invoked with path=/tmp/ansible._1a85_5e follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:38:26 np0005473739 python3.9[129862]: ansible-ansible.legacy.copy Invoked with dest=/tmp/ansible._1a85_5e mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844305.3162751-44-141828955078484/.source._1a85_5e _original_basename=.lg1f5bmd follow=False checksum=e9116497fc287c8f0fa97e85743158e1eb04f96d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v320: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:27 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.15 scrub starts
Oct  7 09:38:27 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 11.15 scrub ok
Oct  7 09:38:27 np0005473739 python3.9[130014]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:38:28 np0005473739 python3.9[130166]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCss9vtiNZQ+c6V8hur44yjQlLn3NTXkBVZKor2H62rtudT4XVEQzGxi7gUwyRBte1R7sx4lqPoaKdKJZvRNgQvGY2Lv2fyd8EaX2Wrg97CplWcR7GA/CJbrXqozq2dLKTmKZaHTua3ql5+RRXXYrh+uisLXnV9Q4/PZN1YT5upOOwkGP/XYKheCA9G+0S61h1r3AwCU+J9wev+nTZBNm9WtF7qUcsAxH9AB5+bC45hVFLxIzvmOgaQdHD5W5Ak9xuJGAuzBDm2yj9X+NS9k1lfL9n809pJghMIiQZrC+D9sXqTiQ0aE7Wk7P5urmzOE1ScviwZWTZXPp5U8n1KCwJJ+BeCrhZpZoeSjLGmueTVJ4EDkywA9GavBjsh/S/P8x1Jr3JwVRJu4zaR+Nb2LvdvrhZ2idwDWJvx8jGH0K+R5Dm9ZHNNsgINe8uptoo317TDeW0H/ArL32AgFdDypUijFC1vWCCcysYZ7TYDKiIOmHY3rmhEN4SbXLyiUAA1K5E=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICGuEdBfoHqnIxaSxhCoPMgT82A1vTxx/84MRgwXo+he#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBkThUzBWDevZ6qT2+zfe0gUZaoXo9kc5/FHJDMm+yQRRP0i/33XvNHPmBucX4ysll81rQvIQhwXZ0yEC0uwqxA=#012 create=True mode=0644 path=/tmp/ansible._1a85_5e state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v321: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:29 np0005473739 python3.9[130318]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible._1a85_5e' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:38:29 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.19 scrub starts
Oct  7 09:38:29 np0005473739 ceph-osd[90092]: log_channel(cluster) log [DBG] : 9.19 scrub ok
Oct  7 09:38:29 np0005473739 python3.9[130472]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible._1a85_5e state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:30 np0005473739 systemd[1]: session-42.scope: Deactivated successfully.
Oct  7 09:38:30 np0005473739 systemd[1]: session-42.scope: Consumed 4.874s CPU time.
Oct  7 09:38:30 np0005473739 systemd-logind[801]: Session 42 logged out. Waiting for processes to exit.
Oct  7 09:38:30 np0005473739 systemd-logind[801]: Removed session 42.
Oct  7 09:38:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v322: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:38:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:38:31 np0005473739 systemd[1]: systemd-timedated.service: Deactivated successfully.
Oct  7 09:38:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v323: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v324: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:36 np0005473739 systemd-logind[801]: New session 43 of user zuul.
Oct  7 09:38:36 np0005473739 systemd[1]: Started Session 43 of User zuul.
Oct  7 09:38:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v325: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:37 np0005473739 python3.9[130652]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:38:38 np0005473739 python3.9[130808]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  7 09:38:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v326: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:39 np0005473739 python3.9[130962]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:38:40 np0005473739 python3.9[131115]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:38:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v327: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:40 np0005473739 python3.9[131268]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:38:41 np0005473739 python3.9[131420]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:38:42 np0005473739 systemd[1]: session-43.scope: Deactivated successfully.
Oct  7 09:38:42 np0005473739 systemd[1]: session-43.scope: Consumed 3.726s CPU time.
Oct  7 09:38:42 np0005473739 systemd-logind[801]: Session 43 logged out. Waiting for processes to exit.
Oct  7 09:38:42 np0005473739 systemd-logind[801]: Removed session 43.
Oct  7 09:38:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v328: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v329: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v330: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:48 np0005473739 systemd-logind[801]: New session 44 of user zuul.
Oct  7 09:38:48 np0005473739 systemd[1]: Started Session 44 of User zuul.
Oct  7 09:38:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v331: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:49 np0005473739 python3.9[131598]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:38:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:50 np0005473739 python3.9[131754]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:38:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v332: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:51 np0005473739 python3.9[131838]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Oct  7 09:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:38:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v333: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:53 np0005473739 python3.9[131989]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:38:54 np0005473739 python3.9[132140]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  7 09:38:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v334: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:55 np0005473739 python3.9[132290]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:38:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:38:55 np0005473739 python3.9[132440]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:38:56 np0005473739 systemd[1]: session-44.scope: Deactivated successfully.
Oct  7 09:38:56 np0005473739 systemd[1]: session-44.scope: Consumed 5.771s CPU time.
Oct  7 09:38:56 np0005473739 systemd-logind[801]: Session 44 logged out. Waiting for processes to exit.
Oct  7 09:38:56 np0005473739 systemd-logind[801]: Removed session 44.
Oct  7 09:38:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v335: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:38:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v336: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v337: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:01 np0005473739 systemd-logind[801]: New session 45 of user zuul.
Oct  7 09:39:01 np0005473739 systemd[1]: Started Session 45 of User zuul.
Oct  7 09:39:02 np0005473739 python3.9[132618]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:39:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v338: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:03 np0005473739 python3.9[132774]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:04 np0005473739 python3.9[132926]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v339: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:39:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d8959416-6a4b-4780-8734-c757a5bb7bfd does not exist
Oct  7 09:39:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 23ea744d-c228-4e67-a237-1e3c8a5d1a5b does not exist
Oct  7 09:39:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0ce13c55-e4c8-414f-903e-8710069f3660 does not exist
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:39:05 np0005473739 python3.9[133233]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:39:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:39:05 np0005473739 podman[133397]: 2025-10-07 13:39:05.910895179 +0000 UTC m=+0.036245599 container create aa47bb2c85c58819a2f43ce12ebdf340e7430ebb03d40a09339814a41d8b4e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:39:05 np0005473739 systemd[1]: Started libpod-conmon-aa47bb2c85c58819a2f43ce12ebdf340e7430ebb03d40a09339814a41d8b4e68.scope.
Oct  7 09:39:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:39:05 np0005473739 podman[133397]: 2025-10-07 13:39:05.985140271 +0000 UTC m=+0.110490711 container init aa47bb2c85c58819a2f43ce12ebdf340e7430ebb03d40a09339814a41d8b4e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:39:05 np0005473739 podman[133397]: 2025-10-07 13:39:05.894213263 +0000 UTC m=+0.019563713 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:39:05 np0005473739 podman[133397]: 2025-10-07 13:39:05.994007338 +0000 UTC m=+0.119357758 container start aa47bb2c85c58819a2f43ce12ebdf340e7430ebb03d40a09339814a41d8b4e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:39:05 np0005473739 podman[133397]: 2025-10-07 13:39:05.996882884 +0000 UTC m=+0.122233304 container attach aa47bb2c85c58819a2f43ce12ebdf340e7430ebb03d40a09339814a41d8b4e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 09:39:05 np0005473739 jolly_ritchie[133414]: 167 167
Oct  7 09:39:06 np0005473739 podman[133397]: 2025-10-07 13:39:06.000251225 +0000 UTC m=+0.125601645 container died aa47bb2c85c58819a2f43ce12ebdf340e7430ebb03d40a09339814a41d8b4e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 09:39:06 np0005473739 systemd[1]: libpod-aa47bb2c85c58819a2f43ce12ebdf340e7430ebb03d40a09339814a41d8b4e68.scope: Deactivated successfully.
Oct  7 09:39:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c45f28768c7f9bab495e662a3d91fa51840e7f950ca933b01ab52b0245753d57-merged.mount: Deactivated successfully.
Oct  7 09:39:06 np0005473739 podman[133397]: 2025-10-07 13:39:06.038489225 +0000 UTC m=+0.163839645 container remove aa47bb2c85c58819a2f43ce12ebdf340e7430ebb03d40a09339814a41d8b4e68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_ritchie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:39:06 np0005473739 systemd[1]: libpod-conmon-aa47bb2c85c58819a2f43ce12ebdf340e7430ebb03d40a09339814a41d8b4e68.scope: Deactivated successfully.
Oct  7 09:39:06 np0005473739 podman[133484]: 2025-10-07 13:39:06.171499607 +0000 UTC m=+0.035514149 container create fec4dbe23a7fcf8f6e62b372734877da21c84f351a8082007a2342338bafadec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:39:06 np0005473739 systemd[1]: Started libpod-conmon-fec4dbe23a7fcf8f6e62b372734877da21c84f351a8082007a2342338bafadec.scope.
Oct  7 09:39:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:39:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb22abbf0c3c49c9810cab65e1280615126552a739baacbab9a15ba31ee86fb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb22abbf0c3c49c9810cab65e1280615126552a739baacbab9a15ba31ee86fb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb22abbf0c3c49c9810cab65e1280615126552a739baacbab9a15ba31ee86fb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb22abbf0c3c49c9810cab65e1280615126552a739baacbab9a15ba31ee86fb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb22abbf0c3c49c9810cab65e1280615126552a739baacbab9a15ba31ee86fb7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:06 np0005473739 podman[133484]: 2025-10-07 13:39:06.157075902 +0000 UTC m=+0.021090464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:39:06 np0005473739 podman[133484]: 2025-10-07 13:39:06.255159831 +0000 UTC m=+0.119174393 container init fec4dbe23a7fcf8f6e62b372734877da21c84f351a8082007a2342338bafadec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 09:39:06 np0005473739 podman[133484]: 2025-10-07 13:39:06.261363846 +0000 UTC m=+0.125378388 container start fec4dbe23a7fcf8f6e62b372734877da21c84f351a8082007a2342338bafadec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:39:06 np0005473739 podman[133484]: 2025-10-07 13:39:06.2648636 +0000 UTC m=+0.128878142 container attach fec4dbe23a7fcf8f6e62b372734877da21c84f351a8082007a2342338bafadec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct  7 09:39:06 np0005473739 python3.9[133527]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844344.806518-65-67959099320349/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=05eed9706ad3882b808ea07d5e04699fef576a05 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v340: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:06 np0005473739 python3.9[133686]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:07 np0005473739 quirky_lamport[133530]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:39:07 np0005473739 quirky_lamport[133530]: --> relative data size: 1.0
Oct  7 09:39:07 np0005473739 quirky_lamport[133530]: --> All data devices are unavailable
Oct  7 09:39:07 np0005473739 systemd[1]: libpod-fec4dbe23a7fcf8f6e62b372734877da21c84f351a8082007a2342338bafadec.scope: Deactivated successfully.
Oct  7 09:39:07 np0005473739 podman[133484]: 2025-10-07 13:39:07.272651287 +0000 UTC m=+1.136665849 container died fec4dbe23a7fcf8f6e62b372734877da21c84f351a8082007a2342338bafadec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:39:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay-fb22abbf0c3c49c9810cab65e1280615126552a739baacbab9a15ba31ee86fb7-merged.mount: Deactivated successfully.
Oct  7 09:39:07 np0005473739 podman[133484]: 2025-10-07 13:39:07.329510456 +0000 UTC m=+1.193524998 container remove fec4dbe23a7fcf8f6e62b372734877da21c84f351a8082007a2342338bafadec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 09:39:07 np0005473739 systemd[1]: libpod-conmon-fec4dbe23a7fcf8f6e62b372734877da21c84f351a8082007a2342338bafadec.scope: Deactivated successfully.
Oct  7 09:39:07 np0005473739 python3.9[133834]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844346.5164454-65-176498012466808/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=74a5eeb4c96311931de031452884185320cc772d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:07 np0005473739 podman[134119]: 2025-10-07 13:39:07.8663605 +0000 UTC m=+0.035401347 container create 04e8198a1c843832fa95c54db611268305ee56256b5f9563f320e1dc2ef26847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keldysh, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:39:07 np0005473739 systemd[1]: Started libpod-conmon-04e8198a1c843832fa95c54db611268305ee56256b5f9563f320e1dc2ef26847.scope.
Oct  7 09:39:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:39:07 np0005473739 podman[134119]: 2025-10-07 13:39:07.931740215 +0000 UTC m=+0.100781092 container init 04e8198a1c843832fa95c54db611268305ee56256b5f9563f320e1dc2ef26847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keldysh, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 09:39:07 np0005473739 podman[134119]: 2025-10-07 13:39:07.944249139 +0000 UTC m=+0.113289986 container start 04e8198a1c843832fa95c54db611268305ee56256b5f9563f320e1dc2ef26847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 09:39:07 np0005473739 podman[134119]: 2025-10-07 13:39:07.850208388 +0000 UTC m=+0.019249265 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:39:07 np0005473739 podman[134119]: 2025-10-07 13:39:07.948076021 +0000 UTC m=+0.117116898 container attach 04e8198a1c843832fa95c54db611268305ee56256b5f9563f320e1dc2ef26847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keldysh, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 09:39:07 np0005473739 boring_keldysh[134148]: 167 167
Oct  7 09:39:07 np0005473739 podman[134119]: 2025-10-07 13:39:07.950528827 +0000 UTC m=+0.119569674 container died 04e8198a1c843832fa95c54db611268305ee56256b5f9563f320e1dc2ef26847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keldysh, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:39:07 np0005473739 systemd[1]: libpod-04e8198a1c843832fa95c54db611268305ee56256b5f9563f320e1dc2ef26847.scope: Deactivated successfully.
Oct  7 09:39:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0db6be4d176328784e1a97ddf945914c202f8b65bc949953014641fc4e564324-merged.mount: Deactivated successfully.
Oct  7 09:39:07 np0005473739 podman[134119]: 2025-10-07 13:39:07.984183696 +0000 UTC m=+0.153224543 container remove 04e8198a1c843832fa95c54db611268305ee56256b5f9563f320e1dc2ef26847 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 09:39:07 np0005473739 systemd[1]: libpod-conmon-04e8198a1c843832fa95c54db611268305ee56256b5f9563f320e1dc2ef26847.scope: Deactivated successfully.
Oct  7 09:39:08 np0005473739 python3.9[134145]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:08 np0005473739 podman[134175]: 2025-10-07 13:39:08.134593481 +0000 UTC m=+0.050415037 container create d3b8ba70f42b1c786380df4f242c8c029dedabc2a4dbc725ad09c34de8b3b471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:39:08 np0005473739 systemd[1]: Started libpod-conmon-d3b8ba70f42b1c786380df4f242c8c029dedabc2a4dbc725ad09c34de8b3b471.scope.
Oct  7 09:39:08 np0005473739 podman[134175]: 2025-10-07 13:39:08.110337944 +0000 UTC m=+0.026159520 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:39:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:39:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4fd92447727aabdb2c0a376fbe390e3879fad1eabbcda6fac4748a9961210/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4fd92447727aabdb2c0a376fbe390e3879fad1eabbcda6fac4748a9961210/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4fd92447727aabdb2c0a376fbe390e3879fad1eabbcda6fac4748a9961210/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb4fd92447727aabdb2c0a376fbe390e3879fad1eabbcda6fac4748a9961210/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:08 np0005473739 podman[134175]: 2025-10-07 13:39:08.222532069 +0000 UTC m=+0.138353625 container init d3b8ba70f42b1c786380df4f242c8c029dedabc2a4dbc725ad09c34de8b3b471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:39:08 np0005473739 podman[134175]: 2025-10-07 13:39:08.2326682 +0000 UTC m=+0.148489756 container start d3b8ba70f42b1c786380df4f242c8c029dedabc2a4dbc725ad09c34de8b3b471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:39:08 np0005473739 podman[134175]: 2025-10-07 13:39:08.235600478 +0000 UTC m=+0.151422174 container attach d3b8ba70f42b1c786380df4f242c8c029dedabc2a4dbc725ad09c34de8b3b471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 09:39:08 np0005473739 python3.9[134315]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844347.620034-65-263757023923514/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=d7842d92833f03bdc293551c290efc17daed170a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v341: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]: {
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:    "0": [
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:        {
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "devices": [
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "/dev/loop3"
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            ],
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_name": "ceph_lv0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_size": "21470642176",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "name": "ceph_lv0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "tags": {
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.cluster_name": "ceph",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.crush_device_class": "",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.encrypted": "0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.osd_id": "0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.type": "block",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.vdo": "0"
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            },
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "type": "block",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "vg_name": "ceph_vg0"
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:        }
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:    ],
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:    "1": [
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:        {
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "devices": [
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "/dev/loop4"
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            ],
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_name": "ceph_lv1",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_size": "21470642176",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "name": "ceph_lv1",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "tags": {
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.cluster_name": "ceph",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.crush_device_class": "",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.encrypted": "0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.osd_id": "1",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.type": "block",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.vdo": "0"
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            },
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "type": "block",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "vg_name": "ceph_vg1"
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:        }
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:    ],
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:    "2": [
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:        {
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "devices": [
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "/dev/loop5"
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            ],
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_name": "ceph_lv2",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_size": "21470642176",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "name": "ceph_lv2",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "tags": {
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.cluster_name": "ceph",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.crush_device_class": "",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.encrypted": "0",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.osd_id": "2",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.type": "block",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:                "ceph.vdo": "0"
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            },
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "type": "block",
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:            "vg_name": "ceph_vg2"
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:        }
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]:    ]
Oct  7 09:39:08 np0005473739 gallant_satoshi[134235]: }
Oct  7 09:39:08 np0005473739 systemd[1]: libpod-d3b8ba70f42b1c786380df4f242c8c029dedabc2a4dbc725ad09c34de8b3b471.scope: Deactivated successfully.
Oct  7 09:39:08 np0005473739 podman[134175]: 2025-10-07 13:39:08.987108784 +0000 UTC m=+0.902930340 container died d3b8ba70f42b1c786380df4f242c8c029dedabc2a4dbc725ad09c34de8b3b471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:39:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0fb4fd92447727aabdb2c0a376fbe390e3879fad1eabbcda6fac4748a9961210-merged.mount: Deactivated successfully.
Oct  7 09:39:09 np0005473739 podman[134175]: 2025-10-07 13:39:09.034484689 +0000 UTC m=+0.950306245 container remove d3b8ba70f42b1c786380df4f242c8c029dedabc2a4dbc725ad09c34de8b3b471 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:39:09 np0005473739 systemd[1]: libpod-conmon-d3b8ba70f42b1c786380df4f242c8c029dedabc2a4dbc725ad09c34de8b3b471.scope: Deactivated successfully.
Oct  7 09:39:09 np0005473739 python3.9[134541]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:09 np0005473739 podman[134671]: 2025-10-07 13:39:09.552846299 +0000 UTC m=+0.036484826 container create aeb564c2c45f1901060cf752f2d8f0bc4ee5f7b788b228dbf56f24cb3b65d1aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:39:09 np0005473739 systemd[1]: Started libpod-conmon-aeb564c2c45f1901060cf752f2d8f0bc4ee5f7b788b228dbf56f24cb3b65d1aa.scope.
Oct  7 09:39:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:39:09 np0005473739 podman[134671]: 2025-10-07 13:39:09.608248838 +0000 UTC m=+0.091887385 container init aeb564c2c45f1901060cf752f2d8f0bc4ee5f7b788b228dbf56f24cb3b65d1aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:39:09 np0005473739 podman[134671]: 2025-10-07 13:39:09.616109628 +0000 UTC m=+0.099748145 container start aeb564c2c45f1901060cf752f2d8f0bc4ee5f7b788b228dbf56f24cb3b65d1aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 09:39:09 np0005473739 podman[134671]: 2025-10-07 13:39:09.619105387 +0000 UTC m=+0.102743914 container attach aeb564c2c45f1901060cf752f2d8f0bc4ee5f7b788b228dbf56f24cb3b65d1aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:39:09 np0005473739 inspiring_colden[134717]: 167 167
Oct  7 09:39:09 np0005473739 systemd[1]: libpod-aeb564c2c45f1901060cf752f2d8f0bc4ee5f7b788b228dbf56f24cb3b65d1aa.scope: Deactivated successfully.
Oct  7 09:39:09 np0005473739 podman[134671]: 2025-10-07 13:39:09.622281143 +0000 UTC m=+0.105919670 container died aeb564c2c45f1901060cf752f2d8f0bc4ee5f7b788b228dbf56f24cb3b65d1aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 09:39:09 np0005473739 podman[134671]: 2025-10-07 13:39:09.538013052 +0000 UTC m=+0.021651599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:39:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c3e5a58999c1fc36287eafd7a5077ea3664b263377f1b26cc87955c4d4eacd6d-merged.mount: Deactivated successfully.
Oct  7 09:39:09 np0005473739 podman[134671]: 2025-10-07 13:39:09.657295558 +0000 UTC m=+0.140934085 container remove aeb564c2c45f1901060cf752f2d8f0bc4ee5f7b788b228dbf56f24cb3b65d1aa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_colden, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:39:09 np0005473739 systemd[1]: libpod-conmon-aeb564c2c45f1901060cf752f2d8f0bc4ee5f7b788b228dbf56f24cb3b65d1aa.scope: Deactivated successfully.
Oct  7 09:39:09 np0005473739 podman[134804]: 2025-10-07 13:39:09.794996714 +0000 UTC m=+0.039161897 container create 927ff0d079b16ed6c4f72071f50959fa295a10d41d11038c43c76abc8a536997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:39:09 np0005473739 systemd[1]: Started libpod-conmon-927ff0d079b16ed6c4f72071f50959fa295a10d41d11038c43c76abc8a536997.scope.
Oct  7 09:39:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:39:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67bc0e440101379a8f2fafc18ea775fc435e20470cd93b4760dfd26ed66f2c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67bc0e440101379a8f2fafc18ea775fc435e20470cd93b4760dfd26ed66f2c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67bc0e440101379a8f2fafc18ea775fc435e20470cd93b4760dfd26ed66f2c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b67bc0e440101379a8f2fafc18ea775fc435e20470cd93b4760dfd26ed66f2c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:39:09 np0005473739 podman[134804]: 2025-10-07 13:39:09.779240994 +0000 UTC m=+0.023406187 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:39:09 np0005473739 podman[134804]: 2025-10-07 13:39:09.876609113 +0000 UTC m=+0.120774316 container init 927ff0d079b16ed6c4f72071f50959fa295a10d41d11038c43c76abc8a536997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:39:09 np0005473739 podman[134804]: 2025-10-07 13:39:09.882758998 +0000 UTC m=+0.126924181 container start 927ff0d079b16ed6c4f72071f50959fa295a10d41d11038c43c76abc8a536997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:39:09 np0005473739 podman[134804]: 2025-10-07 13:39:09.885790938 +0000 UTC m=+0.129956141 container attach 927ff0d079b16ed6c4f72071f50959fa295a10d41d11038c43c76abc8a536997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:39:09 np0005473739 python3.9[134831]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:10 np0005473739 python3.9[134991]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v342: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:10 np0005473739 cranky_booth[134835]: {
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "osd_id": 2,
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "type": "bluestore"
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:    },
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "osd_id": 1,
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "type": "bluestore"
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:    },
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "osd_id": 0,
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:        "type": "bluestore"
Oct  7 09:39:10 np0005473739 cranky_booth[134835]:    }
Oct  7 09:39:10 np0005473739 cranky_booth[134835]: }
Oct  7 09:39:10 np0005473739 systemd[1]: libpod-927ff0d079b16ed6c4f72071f50959fa295a10d41d11038c43c76abc8a536997.scope: Deactivated successfully.
Oct  7 09:39:10 np0005473739 podman[134804]: 2025-10-07 13:39:10.851559505 +0000 UTC m=+1.095724688 container died 927ff0d079b16ed6c4f72071f50959fa295a10d41d11038c43c76abc8a536997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef)
Oct  7 09:39:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b67bc0e440101379a8f2fafc18ea775fc435e20470cd93b4760dfd26ed66f2c6-merged.mount: Deactivated successfully.
Oct  7 09:39:10 np0005473739 podman[134804]: 2025-10-07 13:39:10.898438826 +0000 UTC m=+1.142604009 container remove 927ff0d079b16ed6c4f72071f50959fa295a10d41d11038c43c76abc8a536997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_booth, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 09:39:10 np0005473739 systemd[1]: libpod-conmon-927ff0d079b16ed6c4f72071f50959fa295a10d41d11038c43c76abc8a536997.scope: Deactivated successfully.
Oct  7 09:39:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:39:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:39:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:39:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:39:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 980935c3-77af-4ca3-9282-8af4d318521b does not exist
Oct  7 09:39:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a2af98ca-44fa-4d4e-b1fc-4393195ca571 does not exist
Oct  7 09:39:11 np0005473739 python3.9[135175]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844350.1555529-124-169667080801094/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=ec046c7f2c473ca38e7b43fa613ad1960eb21d1e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:11 np0005473739 python3.9[135354]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:39:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:39:12 np0005473739 python3.9[135477]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844351.313814-124-149208695934810/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=36328bf7b3971b686959456ac87a0357e1b6aabd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v343: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:12 np0005473739 python3.9[135629]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:13 np0005473739 python3.9[135752]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844352.4591267-124-10647627198031/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=1608fff30e2da5b63428e79793c344ecc40df76b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:14 np0005473739 python3.9[135904]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:14 np0005473739 systemd-logind[801]: Session 19 logged out. Waiting for processes to exit.
Oct  7 09:39:14 np0005473739 systemd[1]: session-19.scope: Deactivated successfully.
Oct  7 09:39:14 np0005473739 systemd[1]: session-19.scope: Consumed 1min 26.447s CPU time.
Oct  7 09:39:14 np0005473739 systemd-logind[801]: Removed session 19.
Oct  7 09:39:14 np0005473739 python3.9[136056]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v344: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:15 np0005473739 python3.9[136208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:15 np0005473739 python3.9[136331]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844354.8139544-183-156132196605178/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4dd25ac5c5e8d8c421705a525cb4474a30f74d7f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:16 np0005473739 python3.9[136483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v345: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:16 np0005473739 python3.9[136606]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844355.9226947-183-118244144571570/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=36328bf7b3971b686959456ac87a0357e1b6aabd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:17 np0005473739 python3.9[136758]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:18 np0005473739 python3.9[136881]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844357.065534-183-87072160529500/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2055f887497b1876585385ddbd1310835aadbd67 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v346: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:19 np0005473739 python3.9[137033]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:19 np0005473739 python3.9[137185]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:20 np0005473739 python3.9[137308]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844359.577008-251-21599310939050/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=21e54520d72c08ac2f6c098097c6f54e30e64e0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v347: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:21 np0005473739 python3.9[137460]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:21 np0005473739 python3.9[137612]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:22 np0005473739 python3.9[137735]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844361.3837585-275-128223369409362/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=21e54520d72c08ac2f6c098097c6f54e30e64e0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:39:22
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control', 'backups', 'cephfs.cephfs.data', 'images', '.rgw.root', 'volumes']
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:39:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v348: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:23 np0005473739 python3.9[137887]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:23 np0005473739 python3.9[138039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:24 np0005473739 python3.9[138162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844363.1835406-299-86947719027514/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=21e54520d72c08ac2f6c098097c6f54e30e64e0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v349: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:24 np0005473739 python3.9[138314]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:25 np0005473739 python3.9[138466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:25 np0005473739 python3.9[138589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844364.9711342-323-246643405913987/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=21e54520d72c08ac2f6c098097c6f54e30e64e0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:26 np0005473739 python3.9[138741]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v350: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:27 np0005473739 python3.9[138893]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:27 np0005473739 python3.9[139016]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844366.7194374-347-100872633618878/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=21e54520d72c08ac2f6c098097c6f54e30e64e0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:28 np0005473739 python3.9[139168]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v351: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:28 np0005473739 python3.9[139320]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:29 np0005473739 python3.9[139443]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844368.536277-371-205975903393520/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=21e54520d72c08ac2f6c098097c6f54e30e64e0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:29 np0005473739 systemd[1]: session-45.scope: Deactivated successfully.
Oct  7 09:39:29 np0005473739 systemd[1]: session-45.scope: Consumed 21.298s CPU time.
Oct  7 09:39:29 np0005473739 systemd-logind[801]: Session 45 logged out. Waiting for processes to exit.
Oct  7 09:39:29 np0005473739 systemd-logind[801]: Removed session 45.
Oct  7 09:39:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v352: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:39:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:39:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v353: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v354: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:35 np0005473739 systemd-logind[801]: New session 46 of user zuul.
Oct  7 09:39:35 np0005473739 systemd[1]: Started Session 46 of User zuul.
Oct  7 09:39:36 np0005473739 python3.9[139623]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v355: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:37 np0005473739 python3.9[139775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:37 np0005473739 python3.9[139898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844376.5667458-34-24059385738712/.source.conf _original_basename=ceph.conf follow=False checksum=91769f29d5600a967bc21cabfff57288aa7cd64c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:38 np0005473739 python3.9[140050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v356: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:39 np0005473739 python3.9[140173]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/ceph/ceph.client.openstack.keyring mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844378.0955827-34-160090684478066/.source.keyring _original_basename=ceph.client.openstack.keyring follow=False checksum=a5e69f9fe4e1ae5e1f06c4bb7cdaf69a79fd44f5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:39 np0005473739 systemd[1]: session-46.scope: Deactivated successfully.
Oct  7 09:39:39 np0005473739 systemd[1]: session-46.scope: Consumed 2.683s CPU time.
Oct  7 09:39:39 np0005473739 systemd-logind[801]: Session 46 logged out. Waiting for processes to exit.
Oct  7 09:39:39 np0005473739 systemd-logind[801]: Removed session 46.
Oct  7 09:39:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v357: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v358: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v359: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:44 np0005473739 systemd-logind[801]: New session 47 of user zuul.
Oct  7 09:39:44 np0005473739 systemd[1]: Started Session 47 of User zuul.
Oct  7 09:39:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:45 np0005473739 python3.9[140351]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:39:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v360: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:47 np0005473739 python3.9[140507]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:47 np0005473739 python3.9[140659]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:39:48 np0005473739 python3.9[140809]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:39:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v361: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:49 np0005473739 python3.9[140961]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  7 09:39:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v362: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:51 np0005473739 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Oct  7 09:39:51 np0005473739 python3.9[141117]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:39:52 np0005473739 python3.9[141201]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:39:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v363: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v364: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:54 np0005473739 python3.9[141354]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  7 09:39:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:39:55 np0005473739 python3[141509]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Oct  7 09:39:56 np0005473739 python3.9[141661]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v365: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:57 np0005473739 python3.9[141813]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:57 np0005473739 python3.9[141891]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:58 np0005473739 python3.9[142043]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v366: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:39:58 np0005473739 python3.9[142121]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.r_4uk8te recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:39:59 np0005473739 python3.9[142273]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:39:59 np0005473739 python3.9[142351]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:00 np0005473739 python3.9[142503]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:40:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v367: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:01 np0005473739 python3[142656]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  7 09:40:02 np0005473739 python3.9[142808]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v368: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:02 np0005473739 python3.9[142933]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844401.712969-157-131674072208461/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:03 np0005473739 python3.9[143085]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:04 np0005473739 python3.9[143210]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844403.1460898-172-3501299921732/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v369: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:05 np0005473739 python3.9[143362]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:05 np0005473739 python3.9[143487]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844404.5206718-187-216410367934798/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:06 np0005473739 python3.9[143639]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v370: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:06 np0005473739 python3.9[143764]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844405.7994998-202-7973530172167/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:07 np0005473739 python3.9[143916]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:08 np0005473739 python3.9[144041]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844407.1237395-217-193955714807332/.source.nft follow=False _original_basename=ruleset.j2 checksum=bdba38546f86123f1927359d89789bd211aba99d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v371: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:08 np0005473739 python3.9[144193]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:09 np0005473739 python3.9[144345]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:40:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:10 np0005473739 python3.9[144500]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v372: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:11 np0005473739 python3.9[144652]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:40:11 np0005473739 python3.9[144919]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:40:11 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 216e57d3-9574-4bcb-96f8-6afe9b2cfe16 does not exist
Oct  7 09:40:11 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 00118ff6-d6b8-4817-8596-bb0827367fab does not exist
Oct  7 09:40:11 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c0855972-35cf-4628-84c7-15dcd7929e9b does not exist
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:40:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:40:12 np0005473739 podman[145230]: 2025-10-07 13:40:12.398970173 +0000 UTC m=+0.051127121 container create 47100e09d0391dacaa4c6c4466922fd97a822281f1a4eecda597593ba12a431d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:40:12 np0005473739 python3.9[145203]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:40:12 np0005473739 systemd[1]: Started libpod-conmon-47100e09d0391dacaa4c6c4466922fd97a822281f1a4eecda597593ba12a431d.scope.
Oct  7 09:40:12 np0005473739 podman[145230]: 2025-10-07 13:40:12.373430649 +0000 UTC m=+0.025587617 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:40:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:40:12 np0005473739 podman[145230]: 2025-10-07 13:40:12.521049376 +0000 UTC m=+0.173206354 container init 47100e09d0391dacaa4c6c4466922fd97a822281f1a4eecda597593ba12a431d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:40:12 np0005473739 podman[145230]: 2025-10-07 13:40:12.53411562 +0000 UTC m=+0.186272568 container start 47100e09d0391dacaa4c6c4466922fd97a822281f1a4eecda597593ba12a431d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_torvalds, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:40:12 np0005473739 podman[145230]: 2025-10-07 13:40:12.539487502 +0000 UTC m=+0.191644470 container attach 47100e09d0391dacaa4c6c4466922fd97a822281f1a4eecda597593ba12a431d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_torvalds, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True)
Oct  7 09:40:12 np0005473739 goofy_torvalds[145249]: 167 167
Oct  7 09:40:12 np0005473739 systemd[1]: libpod-47100e09d0391dacaa4c6c4466922fd97a822281f1a4eecda597593ba12a431d.scope: Deactivated successfully.
Oct  7 09:40:12 np0005473739 conmon[145249]: conmon 47100e09d0391dacaa4c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-47100e09d0391dacaa4c6c4466922fd97a822281f1a4eecda597593ba12a431d.scope/container/memory.events
Oct  7 09:40:12 np0005473739 podman[145230]: 2025-10-07 13:40:12.544561296 +0000 UTC m=+0.196718244 container died 47100e09d0391dacaa4c6c4466922fd97a822281f1a4eecda597593ba12a431d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_torvalds, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:40:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-790a7908ea11b96a01ea374b21828c9512dcc99c6e8bb566a27b627a4873e865-merged.mount: Deactivated successfully.
Oct  7 09:40:12 np0005473739 podman[145230]: 2025-10-07 13:40:12.607671513 +0000 UTC m=+0.259828461 container remove 47100e09d0391dacaa4c6c4466922fd97a822281f1a4eecda597593ba12a431d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 09:40:12 np0005473739 systemd[1]: libpod-conmon-47100e09d0391dacaa4c6c4466922fd97a822281f1a4eecda597593ba12a431d.scope: Deactivated successfully.
Oct  7 09:40:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v373: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:12 np0005473739 podman[145347]: 2025-10-07 13:40:12.779798105 +0000 UTC m=+0.051235043 container create 90b6c52d50dfd35f10f4d7dd8ccd8f9f864c1742c09fe59452b4a9ce90e80f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:40:12 np0005473739 systemd[1]: Started libpod-conmon-90b6c52d50dfd35f10f4d7dd8ccd8f9f864c1742c09fe59452b4a9ce90e80f74.scope.
Oct  7 09:40:12 np0005473739 podman[145347]: 2025-10-07 13:40:12.754834337 +0000 UTC m=+0.026271295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:40:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:40:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1b3d988a110354064d64baee1d8dc2504f239226f9b45818c137fa0f10389a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1b3d988a110354064d64baee1d8dc2504f239226f9b45818c137fa0f10389a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1b3d988a110354064d64baee1d8dc2504f239226f9b45818c137fa0f10389a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1b3d988a110354064d64baee1d8dc2504f239226f9b45818c137fa0f10389a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e1b3d988a110354064d64baee1d8dc2504f239226f9b45818c137fa0f10389a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:12 np0005473739 podman[145347]: 2025-10-07 13:40:12.884357556 +0000 UTC m=+0.155794514 container init 90b6c52d50dfd35f10f4d7dd8ccd8f9f864c1742c09fe59452b4a9ce90e80f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct  7 09:40:12 np0005473739 podman[145347]: 2025-10-07 13:40:12.893171828 +0000 UTC m=+0.164608766 container start 90b6c52d50dfd35f10f4d7dd8ccd8f9f864c1742c09fe59452b4a9ce90e80f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 09:40:12 np0005473739 podman[145347]: 2025-10-07 13:40:12.896852316 +0000 UTC m=+0.168289264 container attach 90b6c52d50dfd35f10f4d7dd8ccd8f9f864c1742c09fe59452b4a9ce90e80f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  7 09:40:13 np0005473739 python3.9[145443]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:14 np0005473739 pensive_haibt[145410]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:40:14 np0005473739 pensive_haibt[145410]: --> relative data size: 1.0
Oct  7 09:40:14 np0005473739 pensive_haibt[145410]: --> All data devices are unavailable
Oct  7 09:40:14 np0005473739 systemd[1]: libpod-90b6c52d50dfd35f10f4d7dd8ccd8f9f864c1742c09fe59452b4a9ce90e80f74.scope: Deactivated successfully.
Oct  7 09:40:14 np0005473739 systemd[1]: libpod-90b6c52d50dfd35f10f4d7dd8ccd8f9f864c1742c09fe59452b4a9ce90e80f74.scope: Consumed 1.082s CPU time.
Oct  7 09:40:14 np0005473739 podman[145347]: 2025-10-07 13:40:14.064901379 +0000 UTC m=+1.336338337 container died 90b6c52d50dfd35f10f4d7dd8ccd8f9f864c1742c09fe59452b4a9ce90e80f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:40:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3e1b3d988a110354064d64baee1d8dc2504f239226f9b45818c137fa0f10389a-merged.mount: Deactivated successfully.
Oct  7 09:40:14 np0005473739 podman[145347]: 2025-10-07 13:40:14.138319837 +0000 UTC m=+1.409756775 container remove 90b6c52d50dfd35f10f4d7dd8ccd8f9f864c1742c09fe59452b4a9ce90e80f74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_haibt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:40:14 np0005473739 systemd[1]: libpod-conmon-90b6c52d50dfd35f10f4d7dd8ccd8f9f864c1742c09fe59452b4a9ce90e80f74.scope: Deactivated successfully.
Oct  7 09:40:14 np0005473739 python3.9[145617]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:40:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v374: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:14 np0005473739 podman[145794]: 2025-10-07 13:40:14.885800599 +0000 UTC m=+0.066445726 container create 36525af53bd60fe41dbaacb142748232851549d6b6cc432d2b55887ce47da875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cerf, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 09:40:14 np0005473739 systemd[1]: Started libpod-conmon-36525af53bd60fe41dbaacb142748232851549d6b6cc432d2b55887ce47da875.scope.
Oct  7 09:40:14 np0005473739 podman[145794]: 2025-10-07 13:40:14.844648922 +0000 UTC m=+0.025294069 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:40:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:40:15 np0005473739 podman[145794]: 2025-10-07 13:40:15.059654018 +0000 UTC m=+0.240299165 container init 36525af53bd60fe41dbaacb142748232851549d6b6cc432d2b55887ce47da875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 09:40:15 np0005473739 podman[145794]: 2025-10-07 13:40:15.082505051 +0000 UTC m=+0.263150178 container start 36525af53bd60fe41dbaacb142748232851549d6b6cc432d2b55887ce47da875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:40:15 np0005473739 silly_cerf[145856]: 167 167
Oct  7 09:40:15 np0005473739 systemd[1]: libpod-36525af53bd60fe41dbaacb142748232851549d6b6cc432d2b55887ce47da875.scope: Deactivated successfully.
Oct  7 09:40:15 np0005473739 podman[145794]: 2025-10-07 13:40:15.235115259 +0000 UTC m=+0.415760386 container attach 36525af53bd60fe41dbaacb142748232851549d6b6cc432d2b55887ce47da875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:40:15 np0005473739 podman[145794]: 2025-10-07 13:40:15.235969662 +0000 UTC m=+0.416614849 container died 36525af53bd60fe41dbaacb142748232851549d6b6cc432d2b55887ce47da875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cerf, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:40:15 np0005473739 systemd[1]: var-lib-containers-storage-overlay-84723ef561cc3d6788eb10490b422281eecf79deeb87220aae6d2b89ff7fc8e6-merged.mount: Deactivated successfully.
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #21. Immutable memtables: 0.
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.450726) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 5] Flushing memtable with next log file: 21
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844415450876, "job": 5, "event": "flush_started", "num_memtables": 1, "num_entries": 1603, "num_deletes": 251, "total_data_size": 2400280, "memory_usage": 2430304, "flush_reason": "Manual Compaction"}
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 5] Level-0 flush table #22: started
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844415511993, "cf_name": "default", "job": 5, "event": "table_file_creation", "file_number": 22, "file_size": 1393490, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 7355, "largest_seqno": 8957, "table_properties": {"data_size": 1388128, "index_size": 2438, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14933, "raw_average_key_size": 20, "raw_value_size": 1375738, "raw_average_value_size": 1889, "num_data_blocks": 115, "num_entries": 728, "num_filter_entries": 728, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759844256, "oldest_key_time": 1759844256, "file_creation_time": 1759844415, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 22, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 5] Flush lasted 61274 microseconds, and 5994 cpu microseconds.
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:40:15 np0005473739 python3.9[145954]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:d8:76:c8:90" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.512042) [db/flush_job.cc:967] [default] [JOB 5] Level-0 flush table #22: 1393490 bytes OK
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.512060) [db/memtable_list.cc:519] [default] Level-0 commit table #22 started
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.525877) [db/memtable_list.cc:722] [default] Level-0 commit table #22: memtable #1 done
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.525977) EVENT_LOG_v1 {"time_micros": 1759844415525955, "job": 5, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.526019) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 5] Try to delete WAL files size 2393152, prev total WAL file size 2393152, number of live WAL files 2.
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000018.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.527334) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740030' seq:72057594037927935, type:22 .. '6D67727374617400323532' seq:0, type:0; will stop at (end)
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 6] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 5 Base level 0, inputs: [22(1360KB)], [20(7241KB)]
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844415527386, "job": 6, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [22], "files_L6": [20], "score": -1, "input_data_size": 8809164, "oldest_snapshot_seqno": -1}
Oct  7 09:40:15 np0005473739 podman[145794]: 2025-10-07 13:40:15.526800158 +0000 UTC m=+0.707445295 container remove 36525af53bd60fe41dbaacb142748232851549d6b6cc432d2b55887ce47da875 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:40:15 np0005473739 ovs-vsctl[145956]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:d8:76:c8:90 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Oct  7 09:40:15 np0005473739 systemd[1]: libpod-conmon-36525af53bd60fe41dbaacb142748232851549d6b6cc432d2b55887ce47da875.scope: Deactivated successfully.
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 6] Generated table #23: 3372 keys, 6871166 bytes, temperature: kUnknown
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844415596909, "cf_name": "default", "job": 6, "event": "table_file_creation", "file_number": 23, "file_size": 6871166, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6845385, "index_size": 16264, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8453, "raw_key_size": 80764, "raw_average_key_size": 23, "raw_value_size": 6781235, "raw_average_value_size": 2011, "num_data_blocks": 722, "num_entries": 3372, "num_filter_entries": 3372, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759844415, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 23, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.597163) [db/compaction/compaction_job.cc:1663] [default] [JOB 6] Compacted 1@0 + 1@6 files to L6 => 6871166 bytes
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.605221) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 126.6 rd, 98.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 7.1 +0.0 blob) out(6.6 +0.0 blob), read-write-amplify(11.3) write-amplify(4.9) OK, records in: 3812, records dropped: 440 output_compression: NoCompression
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.605258) EVENT_LOG_v1 {"time_micros": 1759844415605245, "job": 6, "event": "compaction_finished", "compaction_time_micros": 69610, "compaction_time_cpu_micros": 19099, "output_level": 6, "num_output_files": 1, "total_output_size": 6871166, "num_input_records": 3812, "num_output_records": 3372, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000022.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844415605700, "job": 6, "event": "table_file_deletion", "file_number": 22}
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000020.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844415607075, "job": 6, "event": "table_file_deletion", "file_number": 20}
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.527222) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.607161) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.607172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.607175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.607177) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:40:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:40:15.607179) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:40:15 np0005473739 podman[145988]: 2025-10-07 13:40:15.714700939 +0000 UTC m=+0.054810028 container create cad5ad181ffbbe9232046d4766428f8071585562dffbe543dbb207aa2ed59ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:40:15 np0005473739 systemd[1]: Started libpod-conmon-cad5ad181ffbbe9232046d4766428f8071585562dffbe543dbb207aa2ed59ade.scope.
Oct  7 09:40:15 np0005473739 podman[145988]: 2025-10-07 13:40:15.691168558 +0000 UTC m=+0.031277677 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:40:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:40:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5734d0a6cd5e602d3a056aa7a4c6d52e0d67562bf1fbad80e728d9ff0865b434/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5734d0a6cd5e602d3a056aa7a4c6d52e0d67562bf1fbad80e728d9ff0865b434/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5734d0a6cd5e602d3a056aa7a4c6d52e0d67562bf1fbad80e728d9ff0865b434/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5734d0a6cd5e602d3a056aa7a4c6d52e0d67562bf1fbad80e728d9ff0865b434/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:15 np0005473739 podman[145988]: 2025-10-07 13:40:15.828983125 +0000 UTC m=+0.169092244 container init cad5ad181ffbbe9232046d4766428f8071585562dffbe543dbb207aa2ed59ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 09:40:15 np0005473739 podman[145988]: 2025-10-07 13:40:15.839046611 +0000 UTC m=+0.179155710 container start cad5ad181ffbbe9232046d4766428f8071585562dffbe543dbb207aa2ed59ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:40:15 np0005473739 podman[145988]: 2025-10-07 13:40:15.849551629 +0000 UTC m=+0.189660758 container attach cad5ad181ffbbe9232046d4766428f8071585562dffbe543dbb207aa2ed59ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 09:40:16 np0005473739 python3.9[146136]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]: {
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:    "0": [
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:        {
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "devices": [
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "/dev/loop3"
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            ],
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_name": "ceph_lv0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_size": "21470642176",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "name": "ceph_lv0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "tags": {
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.cluster_name": "ceph",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.crush_device_class": "",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.encrypted": "0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.osd_id": "0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.type": "block",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.vdo": "0"
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            },
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "type": "block",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "vg_name": "ceph_vg0"
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:        }
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:    ],
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:    "1": [
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:        {
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "devices": [
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "/dev/loop4"
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            ],
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_name": "ceph_lv1",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_size": "21470642176",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "name": "ceph_lv1",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "tags": {
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.cluster_name": "ceph",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.crush_device_class": "",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.encrypted": "0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.osd_id": "1",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.type": "block",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.vdo": "0"
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            },
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "type": "block",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "vg_name": "ceph_vg1"
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:        }
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:    ],
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:    "2": [
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:        {
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "devices": [
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "/dev/loop5"
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            ],
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_name": "ceph_lv2",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_size": "21470642176",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "name": "ceph_lv2",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "tags": {
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.cluster_name": "ceph",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.crush_device_class": "",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.encrypted": "0",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.osd_id": "2",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.type": "block",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:                "ceph.vdo": "0"
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            },
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "type": "block",
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:            "vg_name": "ceph_vg2"
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:        }
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]:    ]
Oct  7 09:40:16 np0005473739 naughty_cannon[146027]: }
Oct  7 09:40:16 np0005473739 systemd[1]: libpod-cad5ad181ffbbe9232046d4766428f8071585562dffbe543dbb207aa2ed59ade.scope: Deactivated successfully.
Oct  7 09:40:16 np0005473739 podman[145988]: 2025-10-07 13:40:16.664706536 +0000 UTC m=+1.004815645 container died cad5ad181ffbbe9232046d4766428f8071585562dffbe543dbb207aa2ed59ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 09:40:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v375: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5734d0a6cd5e602d3a056aa7a4c6d52e0d67562bf1fbad80e728d9ff0865b434-merged.mount: Deactivated successfully.
Oct  7 09:40:16 np0005473739 podman[145988]: 2025-10-07 13:40:16.76712008 +0000 UTC m=+1.107229179 container remove cad5ad181ffbbe9232046d4766428f8071585562dffbe543dbb207aa2ed59ade (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_cannon, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:40:16 np0005473739 systemd[1]: libpod-conmon-cad5ad181ffbbe9232046d4766428f8071585562dffbe543dbb207aa2ed59ade.scope: Deactivated successfully.
Oct  7 09:40:16 np0005473739 python3.9[146307]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:40:16 np0005473739 ovs-vsctl[146356]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Oct  7 09:40:17 np0005473739 podman[146579]: 2025-10-07 13:40:17.477121572 +0000 UTC m=+0.099113507 container create 73e1ba50eff8791bb2ec678fabb56b9e520bd30c1484597c6434ac95b47f6218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 09:40:17 np0005473739 podman[146579]: 2025-10-07 13:40:17.406660192 +0000 UTC m=+0.028652147 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:40:17 np0005473739 systemd[1]: Started libpod-conmon-73e1ba50eff8791bb2ec678fabb56b9e520bd30c1484597c6434ac95b47f6218.scope.
Oct  7 09:40:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:40:17 np0005473739 podman[146579]: 2025-10-07 13:40:17.577683356 +0000 UTC m=+0.199675311 container init 73e1ba50eff8791bb2ec678fabb56b9e520bd30c1484597c6434ac95b47f6218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 09:40:17 np0005473739 podman[146579]: 2025-10-07 13:40:17.585252427 +0000 UTC m=+0.207244362 container start 73e1ba50eff8791bb2ec678fabb56b9e520bd30c1484597c6434ac95b47f6218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:40:17 np0005473739 podman[146579]: 2025-10-07 13:40:17.591155532 +0000 UTC m=+0.213147467 container attach 73e1ba50eff8791bb2ec678fabb56b9e520bd30c1484597c6434ac95b47f6218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:40:17 np0005473739 agitated_lichterman[146615]: 167 167
Oct  7 09:40:17 np0005473739 systemd[1]: libpod-73e1ba50eff8791bb2ec678fabb56b9e520bd30c1484597c6434ac95b47f6218.scope: Deactivated successfully.
Oct  7 09:40:17 np0005473739 podman[146579]: 2025-10-07 13:40:17.597056728 +0000 UTC m=+0.219048663 container died 73e1ba50eff8791bb2ec678fabb56b9e520bd30c1484597c6434ac95b47f6218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:40:17 np0005473739 python3.9[146609]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:40:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay-60b680535815dc32c75b60e1d5001b47e59d8470f10dc79624c5042577806eba-merged.mount: Deactivated successfully.
Oct  7 09:40:17 np0005473739 podman[146579]: 2025-10-07 13:40:17.800666512 +0000 UTC m=+0.422658437 container remove 73e1ba50eff8791bb2ec678fabb56b9e520bd30c1484597c6434ac95b47f6218 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:40:17 np0005473739 systemd[1]: libpod-conmon-73e1ba50eff8791bb2ec678fabb56b9e520bd30c1484597c6434ac95b47f6218.scope: Deactivated successfully.
Oct  7 09:40:17 np0005473739 podman[146693]: 2025-10-07 13:40:17.977509011 +0000 UTC m=+0.041231130 container create a1582e44ef9d4fa397f73e341d0d35a659ddb1315e3419d985d7a187fd4319f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nobel, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:40:18 np0005473739 systemd[1]: Started libpod-conmon-a1582e44ef9d4fa397f73e341d0d35a659ddb1315e3419d985d7a187fd4319f6.scope.
Oct  7 09:40:18 np0005473739 podman[146693]: 2025-10-07 13:40:17.958511419 +0000 UTC m=+0.022233568 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:40:18 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:40:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ea2840b214c7c6b0bf7d51838f314819d880d92e0d6d37a216125805bde8eb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ea2840b214c7c6b0bf7d51838f314819d880d92e0d6d37a216125805bde8eb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ea2840b214c7c6b0bf7d51838f314819d880d92e0d6d37a216125805bde8eb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d9ea2840b214c7c6b0bf7d51838f314819d880d92e0d6d37a216125805bde8eb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:18 np0005473739 podman[146693]: 2025-10-07 13:40:18.163237953 +0000 UTC m=+0.226960092 container init a1582e44ef9d4fa397f73e341d0d35a659ddb1315e3419d985d7a187fd4319f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nobel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct  7 09:40:18 np0005473739 podman[146693]: 2025-10-07 13:40:18.171339677 +0000 UTC m=+0.235061796 container start a1582e44ef9d4fa397f73e341d0d35a659ddb1315e3419d985d7a187fd4319f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:40:18 np0005473739 podman[146693]: 2025-10-07 13:40:18.219430927 +0000 UTC m=+0.283153046 container attach a1582e44ef9d4fa397f73e341d0d35a659ddb1315e3419d985d7a187fd4319f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nobel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:40:18 np0005473739 python3.9[146816]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:40:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v376: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:19 np0005473739 python3.9[146970]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]: {
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "osd_id": 2,
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "type": "bluestore"
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:    },
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "osd_id": 1,
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "type": "bluestore"
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:    },
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "osd_id": 0,
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:        "type": "bluestore"
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]:    }
Oct  7 09:40:19 np0005473739 vigilant_nobel[146756]: }
Oct  7 09:40:19 np0005473739 systemd[1]: libpod-a1582e44ef9d4fa397f73e341d0d35a659ddb1315e3419d985d7a187fd4319f6.scope: Deactivated successfully.
Oct  7 09:40:19 np0005473739 podman[146693]: 2025-10-07 13:40:19.250472134 +0000 UTC m=+1.314194273 container died a1582e44ef9d4fa397f73e341d0d35a659ddb1315e3419d985d7a187fd4319f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nobel, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 09:40:19 np0005473739 systemd[1]: libpod-a1582e44ef9d4fa397f73e341d0d35a659ddb1315e3419d985d7a187fd4319f6.scope: Consumed 1.044s CPU time.
Oct  7 09:40:19 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d9ea2840b214c7c6b0bf7d51838f314819d880d92e0d6d37a216125805bde8eb-merged.mount: Deactivated successfully.
Oct  7 09:40:19 np0005473739 python3.9[147086]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:40:19 np0005473739 podman[146693]: 2025-10-07 13:40:19.714836742 +0000 UTC m=+1.778558861 container remove a1582e44ef9d4fa397f73e341d0d35a659ddb1315e3419d985d7a187fd4319f6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_nobel, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:40:19 np0005473739 systemd[1]: libpod-conmon-a1582e44ef9d4fa397f73e341d0d35a659ddb1315e3419d985d7a187fd4319f6.scope: Deactivated successfully.
Oct  7 09:40:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:40:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:40:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:40:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:40:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 798820e7-4e8c-4bc1-96bb-d33023a51246 does not exist
Oct  7 09:40:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 970680e5-159b-40db-a4b1-e10ee49cdeba does not exist
Oct  7 09:40:20 np0005473739 python3.9[147289]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:40:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:40:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:20 np0005473739 python3.9[147367]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:40:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v377: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:21 np0005473739 python3.9[147519]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:21 np0005473739 python3.9[147671]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:22 np0005473739 python3.9[147749]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:40:22
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta', '.rgw.root', '.mgr', 'backups', 'images', 'default.rgw.meta', 'default.rgw.log']
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:40:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v378: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:22 np0005473739 python3.9[147901]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:23 np0005473739 python3.9[147979]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:24 np0005473739 python3.9[148131]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:40:24 np0005473739 systemd[1]: Reloading.
Oct  7 09:40:24 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:40:24 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:40:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v379: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:24 np0005473739 python3.9[148321]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:25 np0005473739 python3.9[148399]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:25 np0005473739 python3.9[148551]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:26 np0005473739 python3.9[148629]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v380: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:27 np0005473739 python3.9[148781]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:40:27 np0005473739 systemd[1]: Reloading.
Oct  7 09:40:27 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:40:27 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:40:27 np0005473739 systemd[1]: Starting Create netns directory...
Oct  7 09:40:27 np0005473739 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  7 09:40:27 np0005473739 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  7 09:40:27 np0005473739 systemd[1]: Finished Create netns directory.
Oct  7 09:40:28 np0005473739 python3.9[148975]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:40:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v381: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:28 np0005473739 python3.9[149127]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:29 np0005473739 python3.9[149250]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759844428.430232-468-258883448057719/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:40:30 np0005473739 python3.9[149402]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:40:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v382: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:31 np0005473739 python3.9[149554]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:40:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:40:31 np0005473739 python3.9[149677]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844430.7208219-493-80391249325613/.source.json _original_basename=.egn1otom follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:32 np0005473739 python3.9[149829]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v383: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v384: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:34 np0005473739 python3.9[150256]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Oct  7 09:40:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:40:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 2013 writes, 8971 keys, 2013 commit groups, 1.0 writes per commit group, ingest: 0.01 GB, 0.02 MB/s#012Cumulative WAL: 2012 writes, 2012 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2013 writes, 8971 keys, 2013 commit groups, 1.0 writes per commit group, ingest: 11.39 MB, 0.02 MB/s#012Interval WAL: 2012 writes, 2012 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     65.3      0.13              0.02         3    0.043       0      0       0.0       0.0#012  L6      1/0    6.55 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.6     90.1     79.1      0.17              0.03         2    0.086    7186    730       0.0       0.0#012 Sum      1/0    6.55 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     51.5     73.2      0.30              0.05         5    0.060    7186    730       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6     52.1     73.9      0.30              0.05         4    0.074    7186    730       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0     90.1     79.1      0.17              0.03         2    0.086    7186    730       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     66.8      0.13              0.02         2    0.063       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.008, interval 0.008#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.3 seconds#012Interval compaction: 0.02 GB write, 0.04 MB/s write, 0.02 GB read, 0.03 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 308.00 MB usage: 577.91 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 0 last_secs: 7.8e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(37,490.19 KB,0.155422%) FilterBlock(6,28.55 KB,0.00905124%) IndexBlock(6,59.17 KB,0.0187614%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  7 09:40:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:35 np0005473739 python3.9[150408]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  7 09:40:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v385: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:36 np0005473739 python3.9[150560]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  7 09:40:38 np0005473739 python3[150738]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  7 09:40:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v386: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v387: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v388: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:44 np0005473739 podman[150752]: 2025-10-07 13:40:44.01321762 +0000 UTC m=+5.644076638 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  7 09:40:44 np0005473739 podman[150869]: 2025-10-07 13:40:44.178641217 +0000 UTC m=+0.076502421 container create 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  7 09:40:44 np0005473739 podman[150869]: 2025-10-07 13:40:44.126070968 +0000 UTC m=+0.023932272 image pull 70c92fb64e1eda6ef063d34e60e9a541e44edbaa51e757e8304331202c76a3a7 quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  7 09:40:44 np0005473739 python3[150738]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857
Oct  7 09:40:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v389: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:44 np0005473739 python3.9[151059]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:40:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:45 np0005473739 python3.9[151213]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:46 np0005473739 python3.9[151289]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:40:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v390: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:46 np0005473739 python3.9[151440]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759844446.1871614-581-147104339237151/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:40:47 np0005473739 python3.9[151516]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  7 09:40:47 np0005473739 systemd[1]: Reloading.
Oct  7 09:40:47 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:40:47 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:40:48 np0005473739 python3.9[151628]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:40:48 np0005473739 systemd[1]: Reloading.
Oct  7 09:40:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v391: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:48 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:40:48 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:40:49 np0005473739 systemd[1]: Starting ovn_controller container...
Oct  7 09:40:49 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:40:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b05c6a318dc077b97f549084bd8a3795a926d8dc2052dd0f99088707fef00507/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Oct  7 09:40:49 np0005473739 systemd[1]: Started /usr/bin/podman healthcheck run 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf.
Oct  7 09:40:49 np0005473739 podman[151668]: 2025-10-07 13:40:49.319642805 +0000 UTC m=+0.230259190 container init 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: + sudo -E kolla_set_configs
Oct  7 09:40:49 np0005473739 podman[151668]: 2025-10-07 13:40:49.35622338 +0000 UTC m=+0.266839695 container start 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 09:40:49 np0005473739 edpm-start-podman-container[151668]: ovn_controller
Oct  7 09:40:49 np0005473739 systemd[1]: Created slice User Slice of UID 0.
Oct  7 09:40:49 np0005473739 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  7 09:40:49 np0005473739 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  7 09:40:49 np0005473739 systemd[1]: Starting User Manager for UID 0...
Oct  7 09:40:49 np0005473739 edpm-start-podman-container[151667]: Creating additional drop-in dependency for "ovn_controller" (43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf)
Oct  7 09:40:49 np0005473739 podman[151691]: 2025-10-07 13:40:49.482441022 +0000 UTC m=+0.104371737 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 09:40:49 np0005473739 systemd[1]: 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf-1d61b9cb947f4a3a.service: Main process exited, code=exited, status=1/FAILURE
Oct  7 09:40:49 np0005473739 systemd[1]: 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf-1d61b9cb947f4a3a.service: Failed with result 'exit-code'.
Oct  7 09:40:49 np0005473739 systemd[1]: Reloading.
Oct  7 09:40:49 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:40:49 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:40:49 np0005473739 systemd[151718]: Queued start job for default target Main User Target.
Oct  7 09:40:49 np0005473739 systemd[151718]: Created slice User Application Slice.
Oct  7 09:40:49 np0005473739 systemd[151718]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  7 09:40:49 np0005473739 systemd[151718]: Started Daily Cleanup of User's Temporary Directories.
Oct  7 09:40:49 np0005473739 systemd[151718]: Reached target Paths.
Oct  7 09:40:49 np0005473739 systemd[151718]: Reached target Timers.
Oct  7 09:40:49 np0005473739 systemd[151718]: Starting D-Bus User Message Bus Socket...
Oct  7 09:40:49 np0005473739 systemd[151718]: Starting Create User's Volatile Files and Directories...
Oct  7 09:40:49 np0005473739 systemd[151718]: Listening on D-Bus User Message Bus Socket.
Oct  7 09:40:49 np0005473739 systemd[151718]: Reached target Sockets.
Oct  7 09:40:49 np0005473739 systemd[151718]: Finished Create User's Volatile Files and Directories.
Oct  7 09:40:49 np0005473739 systemd[151718]: Reached target Basic System.
Oct  7 09:40:49 np0005473739 systemd[151718]: Reached target Main User Target.
Oct  7 09:40:49 np0005473739 systemd[151718]: Startup finished in 165ms.
Oct  7 09:40:49 np0005473739 systemd[1]: Started User Manager for UID 0.
Oct  7 09:40:49 np0005473739 systemd[1]: Started ovn_controller container.
Oct  7 09:40:49 np0005473739 systemd[1]: Started Session c1 of User root.
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: INFO:__main__:Validating config file
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: INFO:__main__:Writing out command to execute
Oct  7 09:40:49 np0005473739 systemd[1]: session-c1.scope: Deactivated successfully.
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: ++ cat /run_command
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: + ARGS=
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: + sudo kolla_copy_cacerts
Oct  7 09:40:49 np0005473739 systemd[1]: Started Session c2 of User root.
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: + [[ ! -n '' ]]
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: + . kolla_extend_start
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: + umask 0022
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Oct  7 09:40:49 np0005473739 systemd[1]: session-c2.scope: Deactivated successfully.
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:49Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:49Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:49Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:49Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:49Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Oct  7 09:40:49 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:49Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Oct  7 09:40:49 np0005473739 NetworkManager[44949]: <info>  [1759844449.9972] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/16)
Oct  7 09:40:49 np0005473739 NetworkManager[44949]: <info>  [1759844449.9981] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 09:40:49 np0005473739 NetworkManager[44949]: <info>  [1759844449.9994] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Oct  7 09:40:50 np0005473739 NetworkManager[44949]: <info>  [1759844450.0001] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/18)
Oct  7 09:40:50 np0005473739 NetworkManager[44949]: <info>  [1759844450.0005] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Oct  7 09:40:50 np0005473739 kernel: br-int: entered promiscuous mode
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00014|main|INFO|OVS feature set changed, force recompute.
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00019|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00021|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00022|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00023|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00024|main|INFO|OVS feature set changed, force recompute.
Oct  7 09:40:50 np0005473739 systemd-udevd[151839]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  7 09:40:50 np0005473739 ovn_controller[151684]: 2025-10-07T13:40:50Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Oct  7 09:40:50 np0005473739 NetworkManager[44949]: <info>  [1759844450.1608] manager: (ovn-37f025-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Oct  7 09:40:50 np0005473739 systemd-udevd[151845]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 09:40:50 np0005473739 kernel: genev_sys_6081: entered promiscuous mode
Oct  7 09:40:50 np0005473739 NetworkManager[44949]: <info>  [1759844450.1908] device (genev_sys_6081): carrier: link connected
Oct  7 09:40:50 np0005473739 NetworkManager[44949]: <info>  [1759844450.1914] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/20)
Oct  7 09:40:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:50 np0005473739 python3.9[151948]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:40:50 np0005473739 ovs-vsctl[151949]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Oct  7 09:40:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v392: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:51 np0005473739 python3.9[152101]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:40:51 np0005473739 ovs-vsctl[152103]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Oct  7 09:40:52 np0005473739 python3.9[152256]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:40:52 np0005473739 ovs-vsctl[152257]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Oct  7 09:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:40:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v393: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:52 np0005473739 systemd[1]: session-47.scope: Deactivated successfully.
Oct  7 09:40:52 np0005473739 systemd[1]: session-47.scope: Consumed 59.482s CPU time.
Oct  7 09:40:52 np0005473739 systemd-logind[801]: Session 47 logged out. Waiting for processes to exit.
Oct  7 09:40:52 np0005473739 systemd-logind[801]: Removed session 47.
Oct  7 09:40:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v394: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:40:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v395: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:58 np0005473739 systemd-logind[801]: New session 49 of user zuul.
Oct  7 09:40:58 np0005473739 systemd[1]: Started Session 49 of User zuul.
Oct  7 09:40:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v396: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:40:59 np0005473739 python3.9[152435]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:40:59 np0005473739 systemd[1]: Stopping User Manager for UID 0...
Oct  7 09:40:59 np0005473739 systemd[151718]: Activating special unit Exit the Session...
Oct  7 09:40:59 np0005473739 systemd[151718]: Stopped target Main User Target.
Oct  7 09:40:59 np0005473739 systemd[151718]: Stopped target Basic System.
Oct  7 09:40:59 np0005473739 systemd[151718]: Stopped target Paths.
Oct  7 09:40:59 np0005473739 systemd[151718]: Stopped target Sockets.
Oct  7 09:40:59 np0005473739 systemd[151718]: Stopped target Timers.
Oct  7 09:40:59 np0005473739 systemd[151718]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  7 09:40:59 np0005473739 systemd[151718]: Closed D-Bus User Message Bus Socket.
Oct  7 09:40:59 np0005473739 systemd[151718]: Stopped Create User's Volatile Files and Directories.
Oct  7 09:40:59 np0005473739 systemd[151718]: Removed slice User Application Slice.
Oct  7 09:40:59 np0005473739 systemd[151718]: Reached target Shutdown.
Oct  7 09:40:59 np0005473739 systemd[151718]: Finished Exit the Session.
Oct  7 09:40:59 np0005473739 systemd[151718]: Reached target Exit the Session.
Oct  7 09:41:00 np0005473739 systemd[1]: user@0.service: Deactivated successfully.
Oct  7 09:41:00 np0005473739 systemd[1]: Stopped User Manager for UID 0.
Oct  7 09:41:00 np0005473739 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  7 09:41:00 np0005473739 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  7 09:41:00 np0005473739 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  7 09:41:00 np0005473739 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  7 09:41:00 np0005473739 systemd[1]: Removed slice User Slice of UID 0.
Oct  7 09:41:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v397: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:00 np0005473739 python3.9[152593]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:01 np0005473739 python3.9[152745]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:02 np0005473739 python3.9[152897]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v398: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:02 np0005473739 python3.9[153049]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:03 np0005473739 python3.9[153201]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:04 np0005473739 python3.9[153351]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:41:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v399: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:06 np0005473739 python3.9[153503]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Oct  7 09:41:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v400: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:07 np0005473739 python3.9[153653]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:08 np0005473739 python3.9[153775]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759844467.132193-86-64592065353059/.source follow=False _original_basename=haproxy.j2 checksum=4bca74f6ee0b6450624d22997e2f90c414d58b44 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v401: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:08 np0005473739 python3.9[153925]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:09 np0005473739 python3.9[154046]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759844468.55386-101-154371455868029/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:10 np0005473739 python3.9[154198]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:41:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v402: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:11 np0005473739 python3.9[154282]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:41:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v403: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:13 np0005473739 python3.9[154435]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  7 09:41:14 np0005473739 python3.9[154588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:14 np0005473739 python3.9[154709]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759844473.7805614-138-66880280766808/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v404: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:15 np0005473739 python3.9[154859]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:15 np0005473739 python3.9[154980]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759844474.8491812-138-195167931674261/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v405: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:16 np0005473739 python3.9[155130]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:17 np0005473739 python3.9[155251]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759844476.4854472-182-228550901350106/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:18 np0005473739 python3.9[155401]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v406: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:18 np0005473739 python3.9[155522]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759844477.6921604-182-103411836950687/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:19 np0005473739 python3.9[155672]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:41:20 np0005473739 ovn_controller[151684]: 2025-10-07T13:41:20Z|00025|memory|INFO|17152 kB peak resident set size after 30.1 seconds
Oct  7 09:41:20 np0005473739 ovn_controller[151684]: 2025-10-07T13:41:20Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:528 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Oct  7 09:41:20 np0005473739 podman[155798]: 2025-10-07 13:41:20.144647462 +0000 UTC m=+0.130288967 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:41:20 np0005473739 python3.9[155932]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:41:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 88bf19f9-023d-44da-8c6c-478e60806c1f does not exist
Oct  7 09:41:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 41e7a54f-8a0e-4f19-acdb-483dbb812ca9 does not exist
Oct  7 09:41:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e8ae3fc-e958-4c98-b839-4555c542e806 does not exist
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:41:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:41:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v407: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:21 np0005473739 python3.9[156213]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:41:21 np0005473739 podman[156292]: 2025-10-07 13:41:21.329820436 +0000 UTC m=+0.051019825 container create df501e312b83cd56e0277bd717f7722e4ca043cd535680b4570e7fbcadef05b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lehmann, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #24. Immutable memtables: 0.
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.332628) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 7] Flushing memtable with next log file: 24
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844481332706, "job": 7, "event": "flush_started", "num_memtables": 1, "num_entries": 748, "num_deletes": 251, "total_data_size": 955682, "memory_usage": 969336, "flush_reason": "Manual Compaction"}
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 7] Level-0 flush table #25: started
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844481350156, "cf_name": "default", "job": 7, "event": "table_file_creation", "file_number": 25, "file_size": 947106, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 8958, "largest_seqno": 9705, "table_properties": {"data_size": 943242, "index_size": 1644, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8241, "raw_average_key_size": 18, "raw_value_size": 935540, "raw_average_value_size": 2107, "num_data_blocks": 76, "num_entries": 444, "num_filter_entries": 444, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759844415, "oldest_key_time": 1759844415, "file_creation_time": 1759844481, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 25, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 7] Flush lasted 17572 microseconds, and 3765 cpu microseconds.
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.350203) [db/flush_job.cc:967] [default] [JOB 7] Level-0 flush table #25: 947106 bytes OK
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.350223) [db/memtable_list.cc:519] [default] Level-0 commit table #25 started
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.354119) [db/memtable_list.cc:722] [default] Level-0 commit table #25: memtable #1 done
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.354133) EVENT_LOG_v1 {"time_micros": 1759844481354129, "job": 7, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.354152) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 7] Try to delete WAL files size 951875, prev total WAL file size 951875, number of live WAL files 2.
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000021.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.354996) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300323531' seq:72057594037927935, type:22 .. '7061786F7300353033' seq:0, type:0; will stop at (end)
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 8] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 7 Base level 0, inputs: [25(924KB)], [23(6710KB)]
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844481355087, "job": 8, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [25], "files_L6": [23], "score": -1, "input_data_size": 7818272, "oldest_snapshot_seqno": -1}
Oct  7 09:41:21 np0005473739 systemd[1]: Started libpod-conmon-df501e312b83cd56e0277bd717f7722e4ca043cd535680b4570e7fbcadef05b5.scope.
Oct  7 09:41:21 np0005473739 podman[156292]: 2025-10-07 13:41:21.309746689 +0000 UTC m=+0.030946108 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 8] Generated table #26: 3302 keys, 6106755 bytes, temperature: kUnknown
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844481422525, "cf_name": "default", "job": 8, "event": "table_file_creation", "file_number": 26, "file_size": 6106755, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6082802, "index_size": 14628, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 8261, "raw_key_size": 80080, "raw_average_key_size": 24, "raw_value_size": 6021190, "raw_average_value_size": 1823, "num_data_blocks": 638, "num_entries": 3302, "num_filter_entries": 3302, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759844481, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 26, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:41:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.424142) [db/compaction/compaction_job.cc:1663] [default] [JOB 8] Compacted 1@0 + 1@6 files to L6 => 6106755 bytes
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.427807) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 115.0 rd, 89.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 6.6 +0.0 blob) out(5.8 +0.0 blob), read-write-amplify(14.7) write-amplify(6.4) OK, records in: 3816, records dropped: 514 output_compression: NoCompression
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.427854) EVENT_LOG_v1 {"time_micros": 1759844481427834, "job": 8, "event": "compaction_finished", "compaction_time_micros": 67957, "compaction_time_cpu_micros": 21229, "output_level": 6, "num_output_files": 1, "total_output_size": 6106755, "num_input_records": 3816, "num_output_records": 3302, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000025.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844481428421, "job": 8, "event": "table_file_deletion", "file_number": 25}
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000023.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844481430588, "job": 8, "event": "table_file_deletion", "file_number": 23}
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.354853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.430640) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.430646) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.430647) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.430649) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:41:21 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:41:21.430651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:41:21 np0005473739 podman[156292]: 2025-10-07 13:41:21.447409135 +0000 UTC m=+0.168608554 container init df501e312b83cd56e0277bd717f7722e4ca043cd535680b4570e7fbcadef05b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lehmann, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:41:21 np0005473739 podman[156292]: 2025-10-07 13:41:21.456821187 +0000 UTC m=+0.178020576 container start df501e312b83cd56e0277bd717f7722e4ca043cd535680b4570e7fbcadef05b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:41:21 np0005473739 podman[156292]: 2025-10-07 13:41:21.462783291 +0000 UTC m=+0.183982710 container attach df501e312b83cd56e0277bd717f7722e4ca043cd535680b4570e7fbcadef05b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lehmann, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:41:21 np0005473739 systemd[1]: libpod-df501e312b83cd56e0277bd717f7722e4ca043cd535680b4570e7fbcadef05b5.scope: Deactivated successfully.
Oct  7 09:41:21 np0005473739 pensive_lehmann[156335]: 167 167
Oct  7 09:41:21 np0005473739 conmon[156335]: conmon df501e312b83cd56e027 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-df501e312b83cd56e0277bd717f7722e4ca043cd535680b4570e7fbcadef05b5.scope/container/memory.events
Oct  7 09:41:21 np0005473739 podman[156292]: 2025-10-07 13:41:21.46628001 +0000 UTC m=+0.187479399 container died df501e312b83cd56e0277bd717f7722e4ca043cd535680b4570e7fbcadef05b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 09:41:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-71395c57ec149c9490fe15416eb985cbb763eef1ec9299e23e005b8a9a845be9-merged.mount: Deactivated successfully.
Oct  7 09:41:21 np0005473739 podman[156292]: 2025-10-07 13:41:21.61612111 +0000 UTC m=+0.337320499 container remove df501e312b83cd56e0277bd717f7722e4ca043cd535680b4570e7fbcadef05b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_lehmann, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 09:41:21 np0005473739 systemd[1]: libpod-conmon-df501e312b83cd56e0277bd717f7722e4ca043cd535680b4570e7fbcadef05b5.scope: Deactivated successfully.
Oct  7 09:41:21 np0005473739 python3.9[156369]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:21 np0005473739 podman[156391]: 2025-10-07 13:41:21.836428124 +0000 UTC m=+0.098589441 container create e1ab7fbbc1ba3cb4e7dce7af1450bd10701c52cc2d912f96255aa5d639fe5bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:41:21 np0005473739 podman[156391]: 2025-10-07 13:41:21.764783848 +0000 UTC m=+0.026945155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:41:21 np0005473739 systemd[1]: Started libpod-conmon-e1ab7fbbc1ba3cb4e7dce7af1450bd10701c52cc2d912f96255aa5d639fe5bf6.scope.
Oct  7 09:41:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:41:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7f3613710b072c5590b719f1c198a88c4c8936cf68666107bbf985ab34fc83/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7f3613710b072c5590b719f1c198a88c4c8936cf68666107bbf985ab34fc83/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7f3613710b072c5590b719f1c198a88c4c8936cf68666107bbf985ab34fc83/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7f3613710b072c5590b719f1c198a88c4c8936cf68666107bbf985ab34fc83/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f7f3613710b072c5590b719f1c198a88c4c8936cf68666107bbf985ab34fc83/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:21 np0005473739 podman[156391]: 2025-10-07 13:41:21.934346406 +0000 UTC m=+0.196507683 container init e1ab7fbbc1ba3cb4e7dce7af1450bd10701c52cc2d912f96255aa5d639fe5bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:41:21 np0005473739 podman[156391]: 2025-10-07 13:41:21.943162693 +0000 UTC m=+0.205323970 container start e1ab7fbbc1ba3cb4e7dce7af1450bd10701c52cc2d912f96255aa5d639fe5bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:41:21 np0005473739 podman[156391]: 2025-10-07 13:41:21.95119242 +0000 UTC m=+0.213353727 container attach e1ab7fbbc1ba3cb4e7dce7af1450bd10701c52cc2d912f96255aa5d639fe5bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 09:41:22 np0005473739 python3.9[156558]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:41:22
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', 'volumes', 'vms', '.rgw.root', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'images', 'default.rgw.log']
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:41:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v408: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:22 np0005473739 python3.9[156643]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:23 np0005473739 elated_hypatia[156457]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:41:23 np0005473739 elated_hypatia[156457]: --> relative data size: 1.0
Oct  7 09:41:23 np0005473739 elated_hypatia[156457]: --> All data devices are unavailable
Oct  7 09:41:23 np0005473739 systemd[1]: libpod-e1ab7fbbc1ba3cb4e7dce7af1450bd10701c52cc2d912f96255aa5d639fe5bf6.scope: Deactivated successfully.
Oct  7 09:41:23 np0005473739 podman[156391]: 2025-10-07 13:41:23.100441278 +0000 UTC m=+1.362602565 container died e1ab7fbbc1ba3cb4e7dce7af1450bd10701c52cc2d912f96255aa5d639fe5bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:41:23 np0005473739 systemd[1]: libpod-e1ab7fbbc1ba3cb4e7dce7af1450bd10701c52cc2d912f96255aa5d639fe5bf6.scope: Consumed 1.092s CPU time.
Oct  7 09:41:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8f7f3613710b072c5590b719f1c198a88c4c8936cf68666107bbf985ab34fc83-merged.mount: Deactivated successfully.
Oct  7 09:41:23 np0005473739 podman[156391]: 2025-10-07 13:41:23.179225298 +0000 UTC m=+1.441386575 container remove e1ab7fbbc1ba3cb4e7dce7af1450bd10701c52cc2d912f96255aa5d639fe5bf6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_hypatia, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:41:23 np0005473739 systemd[1]: libpod-conmon-e1ab7fbbc1ba3cb4e7dce7af1450bd10701c52cc2d912f96255aa5d639fe5bf6.scope: Deactivated successfully.
Oct  7 09:41:23 np0005473739 python3.9[156924]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:41:24 np0005473739 podman[156994]: 2025-10-07 13:41:23.999806882 +0000 UTC m=+0.047984128 container create 94789991b48c6fd4e832be3d0c63d89bf987bc4220589d9fe2028bfddee09b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chatelet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:41:24 np0005473739 systemd[1]: Started libpod-conmon-94789991b48c6fd4e832be3d0c63d89bf987bc4220589d9fe2028bfddee09b12.scope.
Oct  7 09:41:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:41:24 np0005473739 podman[156994]: 2025-10-07 13:41:23.98188039 +0000 UTC m=+0.030057656 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:41:24 np0005473739 podman[156994]: 2025-10-07 13:41:24.088401553 +0000 UTC m=+0.136578819 container init 94789991b48c6fd4e832be3d0c63d89bf987bc4220589d9fe2028bfddee09b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chatelet, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:41:24 np0005473739 podman[156994]: 2025-10-07 13:41:24.097924939 +0000 UTC m=+0.146102195 container start 94789991b48c6fd4e832be3d0c63d89bf987bc4220589d9fe2028bfddee09b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chatelet, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:41:24 np0005473739 podman[156994]: 2025-10-07 13:41:24.101711645 +0000 UTC m=+0.149888921 container attach 94789991b48c6fd4e832be3d0c63d89bf987bc4220589d9fe2028bfddee09b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chatelet, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:41:24 np0005473739 affectionate_chatelet[157053]: 167 167
Oct  7 09:41:24 np0005473739 systemd[1]: libpod-94789991b48c6fd4e832be3d0c63d89bf987bc4220589d9fe2028bfddee09b12.scope: Deactivated successfully.
Oct  7 09:41:24 np0005473739 podman[156994]: 2025-10-07 13:41:24.105678008 +0000 UTC m=+0.153855274 container died 94789991b48c6fd4e832be3d0c63d89bf987bc4220589d9fe2028bfddee09b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:41:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3e28826043f26914e59540bd72ca0e03992a67c74cadf478eebbae1a587168f8-merged.mount: Deactivated successfully.
Oct  7 09:41:24 np0005473739 podman[156994]: 2025-10-07 13:41:24.15314684 +0000 UTC m=+0.201324106 container remove 94789991b48c6fd4e832be3d0c63d89bf987bc4220589d9fe2028bfddee09b12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_chatelet, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:41:24 np0005473739 systemd[1]: libpod-conmon-94789991b48c6fd4e832be3d0c63d89bf987bc4220589d9fe2028bfddee09b12.scope: Deactivated successfully.
Oct  7 09:41:24 np0005473739 podman[157155]: 2025-10-07 13:41:24.336508063 +0000 UTC m=+0.062474739 container create 37f7dba347abdc310e1969a2fff442dcf9a74e1b38ac747d1b79166ebf5c5e7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 09:41:24 np0005473739 systemd[1]: Started libpod-conmon-37f7dba347abdc310e1969a2fff442dcf9a74e1b38ac747d1b79166ebf5c5e7e.scope.
Oct  7 09:41:24 np0005473739 podman[157155]: 2025-10-07 13:41:24.302351094 +0000 UTC m=+0.028317780 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:41:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:41:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a92bac64782e9ab643b7f62c93c6320afe9e682e3df8500ecb2605c0aa76aef2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a92bac64782e9ab643b7f62c93c6320afe9e682e3df8500ecb2605c0aa76aef2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a92bac64782e9ab643b7f62c93c6320afe9e682e3df8500ecb2605c0aa76aef2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a92bac64782e9ab643b7f62c93c6320afe9e682e3df8500ecb2605c0aa76aef2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:24 np0005473739 podman[157155]: 2025-10-07 13:41:24.44238383 +0000 UTC m=+0.168350516 container init 37f7dba347abdc310e1969a2fff442dcf9a74e1b38ac747d1b79166ebf5c5e7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:41:24 np0005473739 podman[157155]: 2025-10-07 13:41:24.456215557 +0000 UTC m=+0.182182223 container start 37f7dba347abdc310e1969a2fff442dcf9a74e1b38ac747d1b79166ebf5c5e7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:41:24 np0005473739 podman[157155]: 2025-10-07 13:41:24.461199054 +0000 UTC m=+0.187165780 container attach 37f7dba347abdc310e1969a2fff442dcf9a74e1b38ac747d1b79166ebf5c5e7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:41:24 np0005473739 python3.9[157168]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v409: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:25 np0005473739 python3.9[157255]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:41:25 np0005473739 lucid_newton[157173]: {
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:    "0": [
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:        {
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "devices": [
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "/dev/loop3"
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            ],
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_name": "ceph_lv0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_size": "21470642176",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "name": "ceph_lv0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "tags": {
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.cluster_name": "ceph",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.crush_device_class": "",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.encrypted": "0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.osd_id": "0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.type": "block",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.vdo": "0"
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            },
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "type": "block",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "vg_name": "ceph_vg0"
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:        }
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:    ],
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:    "1": [
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:        {
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "devices": [
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "/dev/loop4"
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            ],
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_name": "ceph_lv1",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_size": "21470642176",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "name": "ceph_lv1",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "tags": {
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.cluster_name": "ceph",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.crush_device_class": "",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.encrypted": "0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.osd_id": "1",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.type": "block",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.vdo": "0"
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            },
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "type": "block",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "vg_name": "ceph_vg1"
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:        }
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:    ],
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:    "2": [
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:        {
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "devices": [
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "/dev/loop5"
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            ],
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_name": "ceph_lv2",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_size": "21470642176",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "name": "ceph_lv2",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "tags": {
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.cluster_name": "ceph",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.crush_device_class": "",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.encrypted": "0",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.osd_id": "2",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.type": "block",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:                "ceph.vdo": "0"
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            },
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "type": "block",
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:            "vg_name": "ceph_vg2"
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:        }
Oct  7 09:41:25 np0005473739 lucid_newton[157173]:    ]
Oct  7 09:41:25 np0005473739 lucid_newton[157173]: }
Oct  7 09:41:25 np0005473739 systemd[1]: libpod-37f7dba347abdc310e1969a2fff442dcf9a74e1b38ac747d1b79166ebf5c5e7e.scope: Deactivated successfully.
Oct  7 09:41:25 np0005473739 podman[157155]: 2025-10-07 13:41:25.343000855 +0000 UTC m=+1.068967541 container died 37f7dba347abdc310e1969a2fff442dcf9a74e1b38ac747d1b79166ebf5c5e7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 09:41:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a92bac64782e9ab643b7f62c93c6320afe9e682e3df8500ecb2605c0aa76aef2-merged.mount: Deactivated successfully.
Oct  7 09:41:25 np0005473739 podman[157155]: 2025-10-07 13:41:25.407794624 +0000 UTC m=+1.133761280 container remove 37f7dba347abdc310e1969a2fff442dcf9a74e1b38ac747d1b79166ebf5c5e7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_newton, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:41:25 np0005473739 systemd[1]: libpod-conmon-37f7dba347abdc310e1969a2fff442dcf9a74e1b38ac747d1b79166ebf5c5e7e.scope: Deactivated successfully.
Oct  7 09:41:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:25 np0005473739 python3.9[157464]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:26 np0005473739 podman[157617]: 2025-10-07 13:41:26.082788358 +0000 UTC m=+0.036024748 container create bc6a458272b1525c8ee730da81d6a52e3a9ff56ef40b8788564dd5fa09981751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 09:41:26 np0005473739 systemd[1]: Started libpod-conmon-bc6a458272b1525c8ee730da81d6a52e3a9ff56ef40b8788564dd5fa09981751.scope.
Oct  7 09:41:26 np0005473739 podman[157617]: 2025-10-07 13:41:26.066266073 +0000 UTC m=+0.019502483 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:41:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:41:26 np0005473739 podman[157617]: 2025-10-07 13:41:26.198254422 +0000 UTC m=+0.151490822 container init bc6a458272b1525c8ee730da81d6a52e3a9ff56ef40b8788564dd5fa09981751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 09:41:26 np0005473739 podman[157617]: 2025-10-07 13:41:26.206502735 +0000 UTC m=+0.159739125 container start bc6a458272b1525c8ee730da81d6a52e3a9ff56ef40b8788564dd5fa09981751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:41:26 np0005473739 podman[157617]: 2025-10-07 13:41:26.209521412 +0000 UTC m=+0.162757852 container attach bc6a458272b1525c8ee730da81d6a52e3a9ff56ef40b8788564dd5fa09981751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:41:26 np0005473739 cranky_jang[157656]: 167 167
Oct  7 09:41:26 np0005473739 systemd[1]: libpod-bc6a458272b1525c8ee730da81d6a52e3a9ff56ef40b8788564dd5fa09981751.scope: Deactivated successfully.
Oct  7 09:41:26 np0005473739 podman[157617]: 2025-10-07 13:41:26.214356627 +0000 UTC m=+0.167593047 container died bc6a458272b1525c8ee730da81d6a52e3a9ff56ef40b8788564dd5fa09981751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:41:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-099cfc9da86eea4497216858fc3cd8129d82f76865446b0014154ffecb4b7626-merged.mount: Deactivated successfully.
Oct  7 09:41:26 np0005473739 podman[157617]: 2025-10-07 13:41:26.273282714 +0000 UTC m=+0.226519144 container remove bc6a458272b1525c8ee730da81d6a52e3a9ff56ef40b8788564dd5fa09981751 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_jang, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 09:41:26 np0005473739 systemd[1]: libpod-conmon-bc6a458272b1525c8ee730da81d6a52e3a9ff56ef40b8788564dd5fa09981751.scope: Deactivated successfully.
Oct  7 09:41:26 np0005473739 python3.9[157653]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:41:26 np0005473739 podman[157700]: 2025-10-07 13:41:26.481455026 +0000 UTC m=+0.059888544 container create c42cb685c661b687cf9ee0e8f84716298630122d057284a037bfc943c4c77e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:41:26 np0005473739 systemd[1]: Started libpod-conmon-c42cb685c661b687cf9ee0e8f84716298630122d057284a037bfc943c4c77e1d.scope.
Oct  7 09:41:26 np0005473739 podman[157700]: 2025-10-07 13:41:26.453111356 +0000 UTC m=+0.031544894 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:41:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:41:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3915d62fff965e6a8d07f4d9c8f753603c567a26fac14a22d961f4c3f150a6ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3915d62fff965e6a8d07f4d9c8f753603c567a26fac14a22d961f4c3f150a6ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3915d62fff965e6a8d07f4d9c8f753603c567a26fac14a22d961f4c3f150a6ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3915d62fff965e6a8d07f4d9c8f753603c567a26fac14a22d961f4c3f150a6ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:26 np0005473739 podman[157700]: 2025-10-07 13:41:26.627464506 +0000 UTC m=+0.205898014 container init c42cb685c661b687cf9ee0e8f84716298630122d057284a037bfc943c4c77e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:41:26 np0005473739 podman[157700]: 2025-10-07 13:41:26.634824676 +0000 UTC m=+0.213258154 container start c42cb685c661b687cf9ee0e8f84716298630122d057284a037bfc943c4c77e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:41:26 np0005473739 podman[157700]: 2025-10-07 13:41:26.640988025 +0000 UTC m=+0.219421583 container attach c42cb685c661b687cf9ee0e8f84716298630122d057284a037bfc943c4c77e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:41:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v410: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:27 np0005473739 python3.9[157851]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:41:27 np0005473739 systemd[1]: Reloading.
Oct  7 09:41:27 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:41:27 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]: {
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "osd_id": 2,
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "type": "bluestore"
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:    },
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "osd_id": 1,
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "type": "bluestore"
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:    },
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "osd_id": 0,
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:        "type": "bluestore"
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]:    }
Oct  7 09:41:27 np0005473739 optimistic_cannon[157731]: }
Oct  7 09:41:27 np0005473739 systemd[1]: libpod-c42cb685c661b687cf9ee0e8f84716298630122d057284a037bfc943c4c77e1d.scope: Deactivated successfully.
Oct  7 09:41:27 np0005473739 podman[157700]: 2025-10-07 13:41:27.756990597 +0000 UTC m=+1.335424075 container died c42cb685c661b687cf9ee0e8f84716298630122d057284a037bfc943c4c77e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 09:41:27 np0005473739 systemd[1]: libpod-c42cb685c661b687cf9ee0e8f84716298630122d057284a037bfc943c4c77e1d.scope: Consumed 1.120s CPU time.
Oct  7 09:41:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3915d62fff965e6a8d07f4d9c8f753603c567a26fac14a22d961f4c3f150a6ee-merged.mount: Deactivated successfully.
Oct  7 09:41:27 np0005473739 podman[157700]: 2025-10-07 13:41:27.827014721 +0000 UTC m=+1.405448229 container remove c42cb685c661b687cf9ee0e8f84716298630122d057284a037bfc943c4c77e1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:41:27 np0005473739 systemd[1]: libpod-conmon-c42cb685c661b687cf9ee0e8f84716298630122d057284a037bfc943c4c77e1d.scope: Deactivated successfully.
Oct  7 09:41:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:41:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:41:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:41:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:41:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 68d49878-1d7a-4eaf-b1e7-e9acced678d0 does not exist
Oct  7 09:41:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3dd47f84-8689-4f7f-aced-3fd16a6543b6 does not exist
Oct  7 09:41:28 np0005473739 python3.9[158133]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:41:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:41:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v411: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:28 np0005473739 python3.9[158211]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:41:29 np0005473739 python3.9[158363]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:30 np0005473739 python3.9[158441]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:41:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v412: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:31 np0005473739 python3.9[158593]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:41:31 np0005473739 systemd[1]: Reloading.
Oct  7 09:41:31 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:41:31 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:41:31 np0005473739 systemd[1]: Starting Create netns directory...
Oct  7 09:41:31 np0005473739 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  7 09:41:31 np0005473739 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  7 09:41:31 np0005473739 systemd[1]: Finished Create netns directory.
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:41:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:41:32 np0005473739 python3.9[158788]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v413: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:33 np0005473739 python3.9[158940]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:33 np0005473739 python3.9[159063]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759844492.6207688-333-32716573443397/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v414: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:34 np0005473739 python3.9[159215]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:41:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:35 np0005473739 python3.9[159367]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:41:36 np0005473739 python3.9[159490]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844495.159292-358-190351019725681/.source.json _original_basename=.fy7exsa0 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:41:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v415: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:37 np0005473739 python3.9[159642]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:41:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v416: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:39 np0005473739 python3.9[160069]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Oct  7 09:41:40 np0005473739 python3.9[160221]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  7 09:41:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v417: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:41 np0005473739 python3.9[160373]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  7 09:41:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v418: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:42 np0005473739 python3[160552]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  7 09:41:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v419: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v420: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v421: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v422: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:51 np0005473739 podman[160646]: 2025-10-07 13:41:51.618391416 +0000 UTC m=+1.073560040 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 09:41:52 np0005473739 podman[160565]: 2025-10-07 13:41:52.133804631 +0000 UTC m=+9.115413618 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 09:41:52 np0005473739 podman[160714]: 2025-10-07 13:41:52.323675381 +0000 UTC m=+0.071129833 container create 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  7 09:41:52 np0005473739 podman[160714]: 2025-10-07 13:41:52.279787241 +0000 UTC m=+0.027241763 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 09:41:52 np0005473739 python3[160552]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 09:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:41:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v423: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:53 np0005473739 python3.9[160904]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:41:54 np0005473739 python3.9[161058]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:41:54 np0005473739 python3.9[161134]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:41:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v424: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:55 np0005473739 python3.9[161285]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759844514.6043668-446-214861892961758/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:41:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:41:55 np0005473739 python3.9[161361]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  7 09:41:55 np0005473739 systemd[1]: Reloading.
Oct  7 09:41:56 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:41:56 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:41:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v425: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:56 np0005473739 python3.9[161472]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:41:57 np0005473739 systemd[1]: Reloading.
Oct  7 09:41:57 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:41:57 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:41:57 np0005473739 systemd[1]: Starting ovn_metadata_agent container...
Oct  7 09:41:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:41:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4a47635b60b7d33e1afadff6c368df31163bac365f196fa40a7c86cf7f518/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1de4a47635b60b7d33e1afadff6c368df31163bac365f196fa40a7c86cf7f518/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 09:41:57 np0005473739 systemd[1]: Started /usr/bin/podman healthcheck run 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6.
Oct  7 09:41:57 np0005473739 podman[161514]: 2025-10-07 13:41:57.737545176 +0000 UTC m=+0.293156872 container init 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: + sudo -E kolla_set_configs
Oct  7 09:41:57 np0005473739 podman[161514]: 2025-10-07 13:41:57.78315698 +0000 UTC m=+0.338768646 container start 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 09:41:57 np0005473739 edpm-start-podman-container[161514]: ovn_metadata_agent
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Validating config file
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Copying service configuration files
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Writing out command to execute
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Setting permission for /var/lib/neutron
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Setting permission for /var/lib/neutron/external
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Oct  7 09:41:57 np0005473739 edpm-start-podman-container[161513]: Creating additional drop-in dependency for "ovn_metadata_agent" (2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6)
Oct  7 09:41:57 np0005473739 podman[161537]: 2025-10-07 13:41:57.896222632 +0000 UTC m=+0.092740969 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: ++ cat /run_command
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: + CMD=neutron-ovn-metadata-agent
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: + ARGS=
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: + sudo kolla_copy_cacerts
Oct  7 09:41:57 np0005473739 systemd[1]: Reloading.
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: + [[ ! -n '' ]]
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: + . kolla_extend_start
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: Running command: 'neutron-ovn-metadata-agent'
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: + umask 0022
Oct  7 09:41:57 np0005473739 ovn_metadata_agent[161531]: + exec neutron-ovn-metadata-agent
Oct  7 09:41:58 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:41:58 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:41:58 np0005473739 systemd[1]: Started ovn_metadata_agent container.
Oct  7 09:41:58 np0005473739 systemd[1]: session-49.scope: Deactivated successfully.
Oct  7 09:41:58 np0005473739 systemd[1]: session-49.scope: Consumed 1min 52ms CPU time.
Oct  7 09:41:58 np0005473739 systemd-logind[801]: Session 49 logged out. Waiting for processes to exit.
Oct  7 09:41:58 np0005473739 systemd-logind[801]: Removed session 49.
Oct  7 09:41:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v426: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.975 161536 INFO neutron.common.config [-] Logging enabled!#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.976 161536 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.976 161536 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.976 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.976 161536 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.976 161536 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.977 161536 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.977 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.977 161536 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.977 161536 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.977 161536 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.977 161536 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.977 161536 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.977 161536 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.977 161536 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.978 161536 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.978 161536 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.978 161536 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.978 161536 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.978 161536 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.978 161536 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.978 161536 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.978 161536 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.978 161536 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.979 161536 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.979 161536 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.979 161536 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.979 161536 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.979 161536 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.979 161536 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.979 161536 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.979 161536 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.979 161536 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.979 161536 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.979 161536 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.980 161536 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.980 161536 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.980 161536 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.980 161536 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.980 161536 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.980 161536 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.980 161536 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.981 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.981 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.981 161536 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.981 161536 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.981 161536 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.981 161536 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.981 161536 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.981 161536 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.981 161536 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.981 161536 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.982 161536 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.982 161536 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.982 161536 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.982 161536 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.982 161536 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.982 161536 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.982 161536 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.982 161536 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.982 161536 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.983 161536 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.983 161536 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.983 161536 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.983 161536 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.983 161536 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.983 161536 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.983 161536 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.983 161536 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.983 161536 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.983 161536 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.984 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.984 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.984 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.984 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.984 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.984 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.984 161536 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.984 161536 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.984 161536 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.985 161536 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.985 161536 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.985 161536 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.985 161536 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.985 161536 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.985 161536 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.985 161536 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.985 161536 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.985 161536 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.985 161536 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.986 161536 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.986 161536 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.986 161536 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.986 161536 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.986 161536 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.986 161536 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.986 161536 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.986 161536 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.986 161536 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.986 161536 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.987 161536 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.987 161536 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.987 161536 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.987 161536 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.987 161536 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.987 161536 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.987 161536 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.987 161536 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.987 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.987 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.988 161536 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.988 161536 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.988 161536 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.988 161536 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.988 161536 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.988 161536 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.988 161536 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.988 161536 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.988 161536 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.989 161536 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.989 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.989 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.989 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.989 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.989 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.989 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.989 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.989 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.990 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.990 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.990 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.990 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.990 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.990 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.990 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.991 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.991 161536 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.991 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.991 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.991 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.991 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.991 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.991 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.991 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.992 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.992 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.992 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.992 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.992 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.992 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.992 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.992 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.992 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.993 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.993 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.993 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.993 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.993 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.993 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.993 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.993 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.993 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.993 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.994 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.994 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.994 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.994 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.994 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.994 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.994 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.994 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.995 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.995 161536 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.995 161536 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.995 161536 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.995 161536 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.995 161536 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.995 161536 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.995 161536 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.995 161536 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.995 161536 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.996 161536 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.996 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.996 161536 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.996 161536 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.996 161536 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.996 161536 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.996 161536 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.996 161536 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.997 161536 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.997 161536 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.997 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.997 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.997 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.997 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.997 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.997 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.997 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.997 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.998 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.998 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.998 161536 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.998 161536 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.998 161536 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.998 161536 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.998 161536 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.998 161536 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.998 161536 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.999 161536 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.999 161536 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.999 161536 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.999 161536 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.999 161536 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.999 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.999 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.999 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:41:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.999 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:41:59.999 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.000 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.000 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.000 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.000 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.000 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.000 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.000 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.000 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.000 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.000 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.001 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.001 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.001 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.001 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.001 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.001 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.001 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.001 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.001 161536 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.001 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.002 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.002 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.002 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.002 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.002 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.002 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.002 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.002 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.003 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.003 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.003 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.003 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.003 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.003 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.003 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.003 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.003 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.003 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.004 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.004 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.004 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.004 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.004 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.004 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.004 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.004 161536 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.004 161536 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.005 161536 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.005 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.005 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.005 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.005 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.005 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.005 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.005 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.006 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.006 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.006 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.006 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.006 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.006 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.006 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.006 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.006 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.006 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.007 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.007 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.007 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.007 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.007 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.007 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.007 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.007 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.007 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.008 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.008 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.008 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.008 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.008 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.008 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.008 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.008 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.008 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.008 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.009 161536 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.009 161536 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.022 161536 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.022 161536 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.023 161536 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.023 161536 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.023 161536 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.040 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 52903340-c961-4e65-8ffc-97dd01d2b2e2 (UUID: 52903340-c961-4e65-8ffc-97dd01d2b2e2) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.070 161536 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.070 161536 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.070 161536 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.070 161536 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.074 161536 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.080 161536 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.086 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '52903340-c961-4e65-8ffc-97dd01d2b2e2'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], external_ids={}, name=52903340-c961-4e65-8ffc-97dd01d2b2e2, nb_cfg_timestamp=1759844458024, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.087 161536 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7ff3e5cb6e20>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.088 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.088 161536 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.088 161536 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.088 161536 INFO oslo_service.service [-] Starting 1 workers#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.094 161536 DEBUG oslo_service.service [-] Started child 161642 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.099 161536 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpwypecvx9/privsep.sock']#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.099 161642 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-236276'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.126 161642 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.127 161642 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.127 161642 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.131 161642 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.138 161642 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.145 161642 INFO eventlet.wsgi.server [-] (161642) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
Oct  7 09:42:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:00 np0005473739 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Oct  7 09:42:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v427: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.860 161536 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.861 161536 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpwypecvx9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.702 161647 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.709 161647 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.713 161647 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.714 161647 INFO oslo.privsep.daemon [-] privsep daemon running as pid 161647#033[00m
Oct  7 09:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:00.864 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[767914d3-b8b8-4def-8d61-4095554802ca]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 09:42:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:01.393 161647 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:42:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:01.393 161647 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:42:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:01.393 161647 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:42:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:01.938 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[d523c94f-0732-4200-b346-edfa4abe55f3]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 09:42:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:01.943 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, column=external_ids, values=({'neutron:ovn-metadata-id': '0145fcbf-5c3d-5eb1-a7e5-f4b4ab75448f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.097 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.110 161536 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.111 161536 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.111 161536 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.111 161536 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.111 161536 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.111 161536 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.112 161536 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.112 161536 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.112 161536 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.112 161536 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.113 161536 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.113 161536 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.113 161536 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.113 161536 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.113 161536 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.114 161536 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.114 161536 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.114 161536 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.114 161536 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.114 161536 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.115 161536 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.115 161536 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.115 161536 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.115 161536 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.116 161536 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.116 161536 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.116 161536 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.116 161536 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.117 161536 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.117 161536 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.117 161536 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.117 161536 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.117 161536 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.118 161536 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.118 161536 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.118 161536 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.118 161536 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.119 161536 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.119 161536 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.119 161536 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.119 161536 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.120 161536 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.120 161536 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.120 161536 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.120 161536 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.120 161536 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.121 161536 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.121 161536 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.121 161536 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.121 161536 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.121 161536 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.122 161536 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.122 161536 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.122 161536 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.122 161536 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.122 161536 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.122 161536 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.123 161536 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.123 161536 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.123 161536 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.124 161536 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.124 161536 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.124 161536 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.124 161536 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.125 161536 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.125 161536 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.125 161536 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.125 161536 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.125 161536 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.126 161536 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.126 161536 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.126 161536 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.126 161536 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.126 161536 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.127 161536 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.127 161536 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.127 161536 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.127 161536 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.127 161536 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.128 161536 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.128 161536 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.128 161536 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.128 161536 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.128 161536 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.129 161536 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.129 161536 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.129 161536 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.129 161536 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.129 161536 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.130 161536 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.130 161536 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.130 161536 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.130 161536 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.130 161536 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.131 161536 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.131 161536 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.131 161536 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.131 161536 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.131 161536 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.132 161536 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.132 161536 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.132 161536 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.132 161536 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.132 161536 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.133 161536 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.133 161536 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.133 161536 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.133 161536 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.134 161536 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.134 161536 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.134 161536 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.134 161536 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.134 161536 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.135 161536 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.135 161536 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.135 161536 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.135 161536 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.136 161536 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.136 161536 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.136 161536 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.136 161536 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.136 161536 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.137 161536 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.137 161536 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.137 161536 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.137 161536 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.138 161536 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.138 161536 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.138 161536 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.138 161536 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.138 161536 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.139 161536 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.139 161536 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.139 161536 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.139 161536 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.139 161536 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.140 161536 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.140 161536 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.140 161536 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.140 161536 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.140 161536 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.141 161536 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.141 161536 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.141 161536 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.141 161536 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.141 161536 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.141 161536 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.142 161536 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.142 161536 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.142 161536 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.142 161536 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.142 161536 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.143 161536 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.143 161536 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.143 161536 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.144 161536 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.144 161536 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.144 161536 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.144 161536 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.144 161536 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.145 161536 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.145 161536 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.145 161536 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.145 161536 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.146 161536 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.146 161536 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.146 161536 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.146 161536 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.147 161536 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.147 161536 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.147 161536 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.147 161536 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.147 161536 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.148 161536 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.148 161536 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.148 161536 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.148 161536 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.148 161536 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.148 161536 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.149 161536 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.149 161536 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.149 161536 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.149 161536 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.149 161536 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.149 161536 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.149 161536 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.150 161536 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.150 161536 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.150 161536 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.150 161536 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.150 161536 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.150 161536 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.150 161536 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.151 161536 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.151 161536 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.151 161536 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.151 161536 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.151 161536 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.151 161536 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.151 161536 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.152 161536 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.152 161536 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.152 161536 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.152 161536 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.152 161536 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.152 161536 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.152 161536 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.152 161536 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.153 161536 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.153 161536 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.153 161536 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.153 161536 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.153 161536 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.153 161536 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.153 161536 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.154 161536 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.154 161536 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.154 161536 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.154 161536 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.154 161536 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.154 161536 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.154 161536 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.154 161536 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.155 161536 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.155 161536 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.155 161536 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.155 161536 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.155 161536 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.155 161536 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.155 161536 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.156 161536 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.156 161536 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.156 161536 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.156 161536 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.156 161536 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.156 161536 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.156 161536 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.157 161536 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.157 161536 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.157 161536 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.157 161536 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.157 161536 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.157 161536 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.157 161536 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.158 161536 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.158 161536 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.158 161536 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.158 161536 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.158 161536 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.158 161536 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.158 161536 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.159 161536 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.159 161536 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.159 161536 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.159 161536 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.159 161536 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.159 161536 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.159 161536 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.160 161536 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.160 161536 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.160 161536 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.160 161536 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.160 161536 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.160 161536 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.160 161536 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.161 161536 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.161 161536 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.161 161536 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.161 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.161 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.161 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.162 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.162 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.162 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.162 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.162 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.162 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.162 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.163 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.163 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.163 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.163 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.163 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.163 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.164 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.164 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.164 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.164 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.164 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.164 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.164 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.165 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.165 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.165 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.165 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.165 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.165 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.165 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.166 161536 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.166 161536 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.166 161536 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.166 161536 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.166 161536 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:42:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:42:02.166 161536 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  7 09:42:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v428: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v429: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:05 np0005473739 systemd-logind[801]: New session 50 of user zuul.
Oct  7 09:42:05 np0005473739 systemd[1]: Started Session 50 of User zuul.
Oct  7 09:42:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:06 np0005473739 python3.9[161805]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:42:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v430: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:07 np0005473739 python3.9[161961]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:42:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v431: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:09 np0005473739 python3.9[162126]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  7 09:42:09 np0005473739 systemd[1]: Reloading.
Oct  7 09:42:09 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:42:09 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:42:10 np0005473739 python3.9[162312]: ansible-ansible.builtin.service_facts Invoked
Oct  7 09:42:10 np0005473739 network[162329]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  7 09:42:10 np0005473739 network[162330]: 'network-scripts' will be removed from distribution in near future.
Oct  7 09:42:10 np0005473739 network[162331]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  7 09:42:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v432: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v433: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:42:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.0 total, 600.0 interval#012Cumulative writes: 5478 writes, 23K keys, 5478 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5478 writes, 824 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5478 writes, 23K keys, 5478 commit groups, 1.0 writes per commit group, ingest: 18.46 MB, 0.03 MB/s#012Interval WAL: 5478 writes, 824 syncs, 6.65 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 4.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 
seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  7 09:42:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v434: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v435: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:16 np0005473739 python3.9[162596]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:42:17 np0005473739 python3.9[162749]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:42:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:42:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 6633 writes, 27K keys, 6633 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 6633 writes, 1193 syncs, 5.56 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 6633 writes, 27K keys, 6633 commit groups, 1.0 writes per commit group, ingest: 19.37 MB, 0.03 MB/s#012Interval WAL: 6633 writes, 1193 syncs, 5.56 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 0.000114 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 2 last_copies: 8 last_secs: 0.000114 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Oct  7 09:42:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v436: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:18 np0005473739 python3.9[162902]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:42:19 np0005473739 python3.9[163055]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:42:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:20 np0005473739 python3.9[163208]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:42:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v437: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:21 np0005473739 python3.9[163361]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:42:22 np0005473739 podman[163486]: 2025-10-07 13:42:22.068837376 +0000 UTC m=+0.129990310 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 09:42:22 np0005473739 python3.9[163535]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:42:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:42:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 600.1 total, 600.0 interval#012Cumulative writes: 5387 writes, 23K keys, 5387 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.03 MB/s#012Cumulative WAL: 5387 writes, 767 syncs, 7.02 writes per sync, written: 0.02 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5387 writes, 23K keys, 5387 commit groups, 1.0 writes per commit group, ingest: 18.23 MB, 0.03 MB/s#012Interval WAL: 5387 writes, 767 syncs, 7.02 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 2 last_copies: 8 last_secs: 5.2e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 600.1 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:42:22
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'backups', '.rgw.root', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'vms', '.mgr']
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:42:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v438: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:23 np0005473739 python3.9[163695]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:23 np0005473739 python3.9[163847]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:24 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 09:42:24 np0005473739 python3.9[163999]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v439: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:25 np0005473739 python3.9[164153]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:26 np0005473739 python3.9[164305]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v440: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:27 np0005473739 python3.9[164457]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:27 np0005473739 python3.9[164610]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:28 np0005473739 podman[164612]: 2025-10-07 13:42:28.088887246 +0000 UTC m=+0.071527176 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 09:42:28 np0005473739 python3.9[164881]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:42:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:42:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:42:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:42:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v441: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:29 np0005473739 python3.9[165154]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:42:29 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev aef5b82b-2154-4fbd-bc6d-ab1efbd53ad1 does not exist
Oct  7 09:42:29 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9357203c-a1d8-4972-9ffc-b5b403bfb5a1 does not exist
Oct  7 09:42:29 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 91d643c0-3708-424f-a1fd-040a9bb8d7ea does not exist
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:42:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:42:29 np0005473739 python3.9[165429]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:30 np0005473739 podman[165513]: 2025-10-07 13:42:30.144239906 +0000 UTC m=+0.050848107 container create c3348def8b976f422ebad199ba508e38fbc5e2624fe8ba6c07fd49d2ee43e93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:42:30 np0005473739 systemd[1]: Started libpod-conmon-c3348def8b976f422ebad199ba508e38fbc5e2624fe8ba6c07fd49d2ee43e93c.scope.
Oct  7 09:42:30 np0005473739 podman[165513]: 2025-10-07 13:42:30.1214015 +0000 UTC m=+0.028009741 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:42:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:42:30 np0005473739 podman[165513]: 2025-10-07 13:42:30.274182605 +0000 UTC m=+0.180790826 container init c3348def8b976f422ebad199ba508e38fbc5e2624fe8ba6c07fd49d2ee43e93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:42:30 np0005473739 podman[165513]: 2025-10-07 13:42:30.283081627 +0000 UTC m=+0.189689828 container start c3348def8b976f422ebad199ba508e38fbc5e2624fe8ba6c07fd49d2ee43e93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:42:30 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 09:42:30 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 09:42:30 np0005473739 podman[165513]: 2025-10-07 13:42:30.294277749 +0000 UTC m=+0.200885950 container attach c3348def8b976f422ebad199ba508e38fbc5e2624fe8ba6c07fd49d2ee43e93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:42:30 np0005473739 systemd[1]: libpod-c3348def8b976f422ebad199ba508e38fbc5e2624fe8ba6c07fd49d2ee43e93c.scope: Deactivated successfully.
Oct  7 09:42:30 np0005473739 unruffled_bose[165572]: 167 167
Oct  7 09:42:30 np0005473739 conmon[165572]: conmon c3348def8b976f422eba <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c3348def8b976f422ebad199ba508e38fbc5e2624fe8ba6c07fd49d2ee43e93c.scope/container/memory.events
Oct  7 09:42:30 np0005473739 podman[165513]: 2025-10-07 13:42:30.309224809 +0000 UTC m=+0.215833010 container died c3348def8b976f422ebad199ba508e38fbc5e2624fe8ba6c07fd49d2ee43e93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:42:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-80122b03dd3b2b08a91f88e104718be5f2e5cac39ebe8d27148663963978ed3c-merged.mount: Deactivated successfully.
Oct  7 09:42:30 np0005473739 podman[165513]: 2025-10-07 13:42:30.368779722 +0000 UTC m=+0.275387923 container remove c3348def8b976f422ebad199ba508e38fbc5e2624fe8ba6c07fd49d2ee43e93c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_bose, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 09:42:30 np0005473739 systemd[1]: libpod-conmon-c3348def8b976f422ebad199ba508e38fbc5e2624fe8ba6c07fd49d2ee43e93c.scope: Deactivated successfully.
Oct  7 09:42:30 np0005473739 podman[165670]: 2025-10-07 13:42:30.559344712 +0000 UTC m=+0.064728069 container create 395575d7a6deb4531cb734829530f7d334c961fc68b6be8535e05503b05b7d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_yalow, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:42:30 np0005473739 systemd[1]: Started libpod-conmon-395575d7a6deb4531cb734829530f7d334c961fc68b6be8535e05503b05b7d9c.scope.
Oct  7 09:42:30 np0005473739 podman[165670]: 2025-10-07 13:42:30.529625647 +0000 UTC m=+0.035009034 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:42:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:42:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d363a6a4266f37d342937fca64c22f9334f8788059b7f0b84cd459437ca2a1c5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:30 np0005473739 python3.9[165664]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d363a6a4266f37d342937fca64c22f9334f8788059b7f0b84cd459437ca2a1c5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d363a6a4266f37d342937fca64c22f9334f8788059b7f0b84cd459437ca2a1c5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d363a6a4266f37d342937fca64c22f9334f8788059b7f0b84cd459437ca2a1c5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d363a6a4266f37d342937fca64c22f9334f8788059b7f0b84cd459437ca2a1c5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:30 np0005473739 podman[165670]: 2025-10-07 13:42:30.665503701 +0000 UTC m=+0.170887108 container init 395575d7a6deb4531cb734829530f7d334c961fc68b6be8535e05503b05b7d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_yalow, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct  7 09:42:30 np0005473739 podman[165670]: 2025-10-07 13:42:30.675133813 +0000 UTC m=+0.180517130 container start 395575d7a6deb4531cb734829530f7d334c961fc68b6be8535e05503b05b7d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:42:30 np0005473739 podman[165670]: 2025-10-07 13:42:30.679179558 +0000 UTC m=+0.184562915 container attach 395575d7a6deb4531cb734829530f7d334c961fc68b6be8535e05503b05b7d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:42:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v442: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:31 np0005473739 python3.9[165843]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:31 np0005473739 sad_yalow[165687]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:42:31 np0005473739 sad_yalow[165687]: --> relative data size: 1.0
Oct  7 09:42:31 np0005473739 sad_yalow[165687]: --> All data devices are unavailable
Oct  7 09:42:31 np0005473739 systemd[1]: libpod-395575d7a6deb4531cb734829530f7d334c961fc68b6be8535e05503b05b7d9c.scope: Deactivated successfully.
Oct  7 09:42:31 np0005473739 systemd[1]: libpod-395575d7a6deb4531cb734829530f7d334c961fc68b6be8535e05503b05b7d9c.scope: Consumed 1.113s CPU time.
Oct  7 09:42:31 np0005473739 podman[165670]: 2025-10-07 13:42:31.901916851 +0000 UTC m=+1.407300198 container died 395575d7a6deb4531cb734829530f7d334c961fc68b6be8535e05503b05b7d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:42:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:42:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d363a6a4266f37d342937fca64c22f9334f8788059b7f0b84cd459437ca2a1c5-merged.mount: Deactivated successfully.
Oct  7 09:42:31 np0005473739 python3.9[166015]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:31 np0005473739 podman[165670]: 2025-10-07 13:42:31.988496789 +0000 UTC m=+1.493880116 container remove 395575d7a6deb4531cb734829530f7d334c961fc68b6be8535e05503b05b7d9c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_yalow, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:42:32 np0005473739 systemd[1]: libpod-conmon-395575d7a6deb4531cb734829530f7d334c961fc68b6be8535e05503b05b7d9c.scope: Deactivated successfully.
Oct  7 09:42:32 np0005473739 python3.9[166285]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:42:32 np0005473739 podman[166326]: 2025-10-07 13:42:32.75568746 +0000 UTC m=+0.057323427 container create f037ae614feefa8098b320cc1892e272f66d3721a0c589ba0359dcecc1639b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldstine, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:42:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v443: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:32 np0005473739 systemd[1]: Started libpod-conmon-f037ae614feefa8098b320cc1892e272f66d3721a0c589ba0359dcecc1639b41.scope.
Oct  7 09:42:32 np0005473739 podman[166326]: 2025-10-07 13:42:32.73041759 +0000 UTC m=+0.032053608 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:42:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:42:32 np0005473739 podman[166326]: 2025-10-07 13:42:32.852482784 +0000 UTC m=+0.154118801 container init f037ae614feefa8098b320cc1892e272f66d3721a0c589ba0359dcecc1639b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldstine, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:42:32 np0005473739 podman[166326]: 2025-10-07 13:42:32.867407024 +0000 UTC m=+0.169043011 container start f037ae614feefa8098b320cc1892e272f66d3721a0c589ba0359dcecc1639b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldstine, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 09:42:32 np0005473739 gracious_goldstine[166366]: 167 167
Oct  7 09:42:32 np0005473739 systemd[1]: libpod-f037ae614feefa8098b320cc1892e272f66d3721a0c589ba0359dcecc1639b41.scope: Deactivated successfully.
Oct  7 09:42:32 np0005473739 podman[166326]: 2025-10-07 13:42:32.875184806 +0000 UTC m=+0.176820783 container attach f037ae614feefa8098b320cc1892e272f66d3721a0c589ba0359dcecc1639b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldstine, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:42:32 np0005473739 podman[166326]: 2025-10-07 13:42:32.876887421 +0000 UTC m=+0.178523398 container died f037ae614feefa8098b320cc1892e272f66d3721a0c589ba0359dcecc1639b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:42:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-de99152f8f56dc42996200fe3a2cff560f3d40084d4c452c316ad4d748acae33-merged.mount: Deactivated successfully.
Oct  7 09:42:32 np0005473739 podman[166326]: 2025-10-07 13:42:32.929109273 +0000 UTC m=+0.230745250 container remove f037ae614feefa8098b320cc1892e272f66d3721a0c589ba0359dcecc1639b41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 09:42:32 np0005473739 systemd[1]: libpod-conmon-f037ae614feefa8098b320cc1892e272f66d3721a0c589ba0359dcecc1639b41.scope: Deactivated successfully.
Oct  7 09:42:33 np0005473739 podman[166459]: 2025-10-07 13:42:33.149358127 +0000 UTC m=+0.066452643 container create 2b61be1205d579769fef8e8b78ef2ddbb64600c7220b20a833725efe87518161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:42:33 np0005473739 systemd[1]: Started libpod-conmon-2b61be1205d579769fef8e8b78ef2ddbb64600c7220b20a833725efe87518161.scope.
Oct  7 09:42:33 np0005473739 podman[166459]: 2025-10-07 13:42:33.128124234 +0000 UTC m=+0.045218840 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:42:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:42:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c51b22aad7e7534783b4b8f434d6a2c31a405f902ee69c19e1f50a2f616af6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c51b22aad7e7534783b4b8f434d6a2c31a405f902ee69c19e1f50a2f616af6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c51b22aad7e7534783b4b8f434d6a2c31a405f902ee69c19e1f50a2f616af6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11c51b22aad7e7534783b4b8f434d6a2c31a405f902ee69c19e1f50a2f616af6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:33 np0005473739 podman[166459]: 2025-10-07 13:42:33.262747375 +0000 UTC m=+0.179841971 container init 2b61be1205d579769fef8e8b78ef2ddbb64600c7220b20a833725efe87518161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:42:33 np0005473739 podman[166459]: 2025-10-07 13:42:33.272896119 +0000 UTC m=+0.189990635 container start 2b61be1205d579769fef8e8b78ef2ddbb64600c7220b20a833725efe87518161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 09:42:33 np0005473739 podman[166459]: 2025-10-07 13:42:33.277775297 +0000 UTC m=+0.194869893 container attach 2b61be1205d579769fef8e8b78ef2ddbb64600c7220b20a833725efe87518161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:42:33 np0005473739 python3.9[166536]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]: {
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:    "0": [
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:        {
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "devices": [
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "/dev/loop3"
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            ],
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_name": "ceph_lv0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_size": "21470642176",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "name": "ceph_lv0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "tags": {
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.cluster_name": "ceph",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.crush_device_class": "",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.encrypted": "0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.osd_id": "0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.type": "block",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.vdo": "0"
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            },
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "type": "block",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "vg_name": "ceph_vg0"
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:        }
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:    ],
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:    "1": [
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:        {
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "devices": [
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "/dev/loop4"
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            ],
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_name": "ceph_lv1",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_size": "21470642176",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "name": "ceph_lv1",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "tags": {
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.cluster_name": "ceph",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.crush_device_class": "",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.encrypted": "0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.osd_id": "1",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.type": "block",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.vdo": "0"
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            },
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "type": "block",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "vg_name": "ceph_vg1"
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:        }
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:    ],
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:    "2": [
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:        {
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "devices": [
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "/dev/loop5"
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            ],
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_name": "ceph_lv2",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_size": "21470642176",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "name": "ceph_lv2",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "tags": {
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.cluster_name": "ceph",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.crush_device_class": "",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.encrypted": "0",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.osd_id": "2",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.type": "block",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:                "ceph.vdo": "0"
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            },
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "type": "block",
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:            "vg_name": "ceph_vg2"
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:        }
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]:    ]
Oct  7 09:42:34 np0005473739 stoic_shannon[166504]: }
Oct  7 09:42:34 np0005473739 systemd[1]: libpod-2b61be1205d579769fef8e8b78ef2ddbb64600c7220b20a833725efe87518161.scope: Deactivated successfully.
Oct  7 09:42:34 np0005473739 podman[166459]: 2025-10-07 13:42:34.149425092 +0000 UTC m=+1.066519648 container died 2b61be1205d579769fef8e8b78ef2ddbb64600c7220b20a833725efe87518161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct  7 09:42:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-11c51b22aad7e7534783b4b8f434d6a2c31a405f902ee69c19e1f50a2f616af6-merged.mount: Deactivated successfully.
Oct  7 09:42:34 np0005473739 podman[166459]: 2025-10-07 13:42:34.23063117 +0000 UTC m=+1.147725686 container remove 2b61be1205d579769fef8e8b78ef2ddbb64600c7220b20a833725efe87518161 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_shannon, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:42:34 np0005473739 systemd[1]: libpod-conmon-2b61be1205d579769fef8e8b78ef2ddbb64600c7220b20a833725efe87518161.scope: Deactivated successfully.
Oct  7 09:42:34 np0005473739 python3.9[166693]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  7 09:42:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v444: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:35 np0005473739 podman[166996]: 2025-10-07 13:42:34.943420062 +0000 UTC m=+0.029143431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:42:35 np0005473739 podman[166996]: 2025-10-07 13:42:35.075784094 +0000 UTC m=+0.161507433 container create 87cd27223d9ad9b5237c686cca4ae32c48395bb2d01837e55efdc6d3ae01ee9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 09:42:35 np0005473739 systemd[1]: Started libpod-conmon-87cd27223d9ad9b5237c686cca4ae32c48395bb2d01837e55efdc6d3ae01ee9b.scope.
Oct  7 09:42:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:42:35 np0005473739 python3.9[166997]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  7 09:42:35 np0005473739 podman[166996]: 2025-10-07 13:42:35.275451632 +0000 UTC m=+0.361174991 container init 87cd27223d9ad9b5237c686cca4ae32c48395bb2d01837e55efdc6d3ae01ee9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nightingale, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  7 09:42:35 np0005473739 podman[166996]: 2025-10-07 13:42:35.289673903 +0000 UTC m=+0.375397252 container start 87cd27223d9ad9b5237c686cca4ae32c48395bb2d01837e55efdc6d3ae01ee9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nightingale, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 09:42:35 np0005473739 systemd[1]: Reloading.
Oct  7 09:42:35 np0005473739 affectionate_nightingale[167014]: 167 167
Oct  7 09:42:35 np0005473739 podman[166996]: 2025-10-07 13:42:35.346237508 +0000 UTC m=+0.431960867 container attach 87cd27223d9ad9b5237c686cca4ae32c48395bb2d01837e55efdc6d3ae01ee9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nightingale, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:42:35 np0005473739 podman[166996]: 2025-10-07 13:42:35.34898068 +0000 UTC m=+0.434704079 container died 87cd27223d9ad9b5237c686cca4ae32c48395bb2d01837e55efdc6d3ae01ee9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:42:35 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:42:35 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:42:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:35 np0005473739 systemd[1]: libpod-87cd27223d9ad9b5237c686cca4ae32c48395bb2d01837e55efdc6d3ae01ee9b.scope: Deactivated successfully.
Oct  7 09:42:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-08f819145d60985c52032b1c51598c4c004055a6f53dea5101d5cf65fe931926-merged.mount: Deactivated successfully.
Oct  7 09:42:35 np0005473739 podman[166996]: 2025-10-07 13:42:35.708415775 +0000 UTC m=+0.794139154 container remove 87cd27223d9ad9b5237c686cca4ae32c48395bb2d01837e55efdc6d3ae01ee9b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 09:42:35 np0005473739 systemd[1]: libpod-conmon-87cd27223d9ad9b5237c686cca4ae32c48395bb2d01837e55efdc6d3ae01ee9b.scope: Deactivated successfully.
Oct  7 09:42:35 np0005473739 podman[167118]: 2025-10-07 13:42:35.946668989 +0000 UTC m=+0.077976015 container create 1526e2800b38c7a0fd150929068eb8ea946a2c36e1edf58255358d3fe15359a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 09:42:36 np0005473739 podman[167118]: 2025-10-07 13:42:35.910173957 +0000 UTC m=+0.041481033 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:42:36 np0005473739 systemd[1]: Started libpod-conmon-1526e2800b38c7a0fd150929068eb8ea946a2c36e1edf58255358d3fe15359a7.scope.
Oct  7 09:42:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:42:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/536ef47dad347bc0535145d4fcb0481d1cf0126e9061075e05f1df14b923be36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/536ef47dad347bc0535145d4fcb0481d1cf0126e9061075e05f1df14b923be36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/536ef47dad347bc0535145d4fcb0481d1cf0126e9061075e05f1df14b923be36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/536ef47dad347bc0535145d4fcb0481d1cf0126e9061075e05f1df14b923be36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:42:36 np0005473739 podman[167118]: 2025-10-07 13:42:36.076971028 +0000 UTC m=+0.208278034 container init 1526e2800b38c7a0fd150929068eb8ea946a2c36e1edf58255358d3fe15359a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:42:36 np0005473739 podman[167118]: 2025-10-07 13:42:36.087278687 +0000 UTC m=+0.218585693 container start 1526e2800b38c7a0fd150929068eb8ea946a2c36e1edf58255358d3fe15359a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct  7 09:42:36 np0005473739 podman[167118]: 2025-10-07 13:42:36.095050149 +0000 UTC m=+0.226357185 container attach 1526e2800b38c7a0fd150929068eb8ea946a2c36e1edf58255358d3fe15359a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 09:42:36 np0005473739 python3.9[167248]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:42:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v445: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]: {
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "osd_id": 2,
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "type": "bluestore"
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:    },
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "osd_id": 1,
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "type": "bluestore"
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:    },
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "osd_id": 0,
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:        "type": "bluestore"
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]:    }
Oct  7 09:42:37 np0005473739 flamboyant_meninsky[167176]: }
Oct  7 09:42:37 np0005473739 systemd[1]: libpod-1526e2800b38c7a0fd150929068eb8ea946a2c36e1edf58255358d3fe15359a7.scope: Deactivated successfully.
Oct  7 09:42:37 np0005473739 systemd[1]: libpod-1526e2800b38c7a0fd150929068eb8ea946a2c36e1edf58255358d3fe15359a7.scope: Consumed 1.051s CPU time.
Oct  7 09:42:37 np0005473739 podman[167118]: 2025-10-07 13:42:37.155780347 +0000 UTC m=+1.287087403 container died 1526e2800b38c7a0fd150929068eb8ea946a2c36e1edf58255358d3fe15359a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 09:42:37 np0005473739 python3.9[167409]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:42:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-536ef47dad347bc0535145d4fcb0481d1cf0126e9061075e05f1df14b923be36-merged.mount: Deactivated successfully.
Oct  7 09:42:37 np0005473739 podman[167118]: 2025-10-07 13:42:37.34456749 +0000 UTC m=+1.475874476 container remove 1526e2800b38c7a0fd150929068eb8ea946a2c36e1edf58255358d3fe15359a7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_meninsky, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:42:37 np0005473739 systemd[1]: libpod-conmon-1526e2800b38c7a0fd150929068eb8ea946a2c36e1edf58255358d3fe15359a7.scope: Deactivated successfully.
Oct  7 09:42:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:42:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:42:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:42:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:42:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b8a60245-c6c9-4259-a0e6-e45d1f81b847 does not exist
Oct  7 09:42:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9bf1c5b1-da88-4070-b329-35704bce60a8 does not exist
Oct  7 09:42:37 np0005473739 python3.9[167646]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:42:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:42:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:42:38 np0005473739 python3.9[167799]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:42:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v446: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:39 np0005473739 python3.9[167952]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:42:39 np0005473739 python3.9[168105]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:42:40 np0005473739 python3.9[168258]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:42:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v447: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:41 np0005473739 python3.9[168411]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Oct  7 09:42:42 np0005473739 python3.9[168564]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  7 09:42:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v448: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:43 np0005473739 python3.9[168722]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  7 09:42:44 np0005473739 python3.9[168882]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:42:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v449: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:45 np0005473739 python3.9[168966]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:42:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v450: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v451: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v452: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:42:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v453: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:53 np0005473739 podman[168978]: 2025-10-07 13:42:53.12854772 +0000 UTC m=+0.105192855 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  7 09:42:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v454: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:42:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v455: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v456: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:42:59 np0005473739 podman[169004]: 2025-10-07 13:42:59.078197013 +0000 UTC m=+0.059373840 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct  7 09:43:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:43:00.014 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:43:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:43:00.014 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:43:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:43:00.015 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:43:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v457: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v458: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v459: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v460: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v461: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v462: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v463: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v464: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v465: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v466: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v467: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:43:22
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.log', 'volumes', '.rgw.root', 'backups', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta']
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:43:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v468: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:24 np0005473739 podman[169196]: 2025-10-07 13:43:24.105299726 +0000 UTC m=+0.093020650 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct  7 09:43:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v469: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v470: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v471: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 2.7 KiB/s rd, 0 B/s wr, 4 op/s
Oct  7 09:43:30 np0005473739 podman[169222]: 2025-10-07 13:43:30.096979149 +0000 UTC m=+0.071942333 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:43:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v472: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s
Oct  7 09:43:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:43:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:43:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v473: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 5.0 KiB/s rd, 0 B/s wr, 8 op/s
Oct  7 09:43:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v474: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 9.2 KiB/s rd, 0 B/s wr, 15 op/s
Oct  7 09:43:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v475: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 14 KiB/s rd, 0 B/s wr, 23 op/s
Oct  7 09:43:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v476: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 28 KiB/s rd, 0 B/s wr, 46 op/s
Oct  7 09:43:38 np0005473739 podman[169423]: 2025-10-07 13:43:38.928035026 +0000 UTC m=+0.500715439 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:43:39 np0005473739 podman[169423]: 2025-10-07 13:43:39.15739994 +0000 UTC m=+0.730080363 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:43:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:43:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:43:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:43:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v477: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Oct  7 09:43:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:43:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:43:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:43:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:43:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:43:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:43:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:43:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:43:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v478: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 29 KiB/s rd, 0 B/s wr, 48 op/s
Oct  7 09:43:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:43:43 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 66779a1c-76ee-441c-ab46-37ed9920bd43 does not exist
Oct  7 09:43:43 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e0e5c997-5dcc-43f1-8933-56782d100aee does not exist
Oct  7 09:43:43 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 33d2d104-50ec-43b9-bcee-ae4363dddf47 does not exist
Oct  7 09:43:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:43:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:43:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:43:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:43:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:43:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:43:43 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:43:44 np0005473739 podman[169853]: 2025-10-07 13:43:43.999184052 +0000 UTC m=+0.029787719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:43:44 np0005473739 podman[169853]: 2025-10-07 13:43:44.199059157 +0000 UTC m=+0.229662804 container create 789a91f85d71330328992d629f88d286548cc836734345bf22f15331714acaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 09:43:44 np0005473739 systemd[1]: Started libpod-conmon-789a91f85d71330328992d629f88d286548cc836734345bf22f15331714acaa2.scope.
Oct  7 09:43:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:43:44 np0005473739 podman[169853]: 2025-10-07 13:43:44.755743855 +0000 UTC m=+0.786347552 container init 789a91f85d71330328992d629f88d286548cc836734345bf22f15331714acaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:43:44 np0005473739 podman[169853]: 2025-10-07 13:43:44.764772216 +0000 UTC m=+0.795375863 container start 789a91f85d71330328992d629f88d286548cc836734345bf22f15331714acaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 09:43:44 np0005473739 dreamy_stonebraker[169870]: 167 167
Oct  7 09:43:44 np0005473739 systemd[1]: libpod-789a91f85d71330328992d629f88d286548cc836734345bf22f15331714acaa2.scope: Deactivated successfully.
Oct  7 09:43:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v479: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 51 op/s
Oct  7 09:43:44 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:43:44 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:43:45 np0005473739 podman[169853]: 2025-10-07 13:43:45.104883689 +0000 UTC m=+1.135487376 container attach 789a91f85d71330328992d629f88d286548cc836734345bf22f15331714acaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:43:45 np0005473739 podman[169853]: 2025-10-07 13:43:45.107653299 +0000 UTC m=+1.138256956 container died 789a91f85d71330328992d629f88d286548cc836734345bf22f15331714acaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:43:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v480: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 0 B/s wr, 44 op/s
Oct  7 09:43:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v481: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 0 B/s wr, 36 op/s
Oct  7 09:43:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-dd1a33af8e35fc13c18d1027e265c13bce6b0d47785491846e6ae40ec3572d30-merged.mount: Deactivated successfully.
Oct  7 09:43:50 np0005473739 podman[169853]: 2025-10-07 13:43:50.562517501 +0000 UTC m=+6.593121168 container remove 789a91f85d71330328992d629f88d286548cc836734345bf22f15331714acaa2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_stonebraker, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:43:50 np0005473739 systemd[1]: libpod-conmon-789a91f85d71330328992d629f88d286548cc836734345bf22f15331714acaa2.scope: Deactivated successfully.
Oct  7 09:43:50 np0005473739 kernel: SELinux:  Converting 2765 SID table entries...
Oct  7 09:43:50 np0005473739 kernel: SELinux:  policy capability network_peer_controls=1
Oct  7 09:43:50 np0005473739 kernel: SELinux:  policy capability open_perms=1
Oct  7 09:43:50 np0005473739 kernel: SELinux:  policy capability extended_socket_class=1
Oct  7 09:43:50 np0005473739 kernel: SELinux:  policy capability always_check_network=0
Oct  7 09:43:50 np0005473739 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  7 09:43:50 np0005473739 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  7 09:43:50 np0005473739 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  7 09:43:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v482: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 8.0 KiB/s rd, 0 B/s wr, 13 op/s
Oct  7 09:43:50 np0005473739 podman[169901]: 2025-10-07 13:43:50.767902314 +0000 UTC m=+0.033640719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:43:51 np0005473739 podman[169901]: 2025-10-07 13:43:51.180638249 +0000 UTC m=+0.446376674 container create 4fd6e3c9404f5446906e563932459c1407d4f9ee67c9e8ac7df09ff463f9726f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 09:43:51 np0005473739 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Oct  7 09:43:51 np0005473739 systemd[1]: Started libpod-conmon-4fd6e3c9404f5446906e563932459c1407d4f9ee67c9e8ac7df09ff463f9726f.scope.
Oct  7 09:43:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:43:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd4cdf62f8252e9814d412fe868a8369087eb9e552293e17016da2ef4bf84a0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd4cdf62f8252e9814d412fe868a8369087eb9e552293e17016da2ef4bf84a0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd4cdf62f8252e9814d412fe868a8369087eb9e552293e17016da2ef4bf84a0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd4cdf62f8252e9814d412fe868a8369087eb9e552293e17016da2ef4bf84a0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cd4cdf62f8252e9814d412fe868a8369087eb9e552293e17016da2ef4bf84a0/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:52 np0005473739 podman[169901]: 2025-10-07 13:43:52.450024349 +0000 UTC m=+1.715762824 container init 4fd6e3c9404f5446906e563932459c1407d4f9ee67c9e8ac7df09ff463f9726f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:43:52 np0005473739 podman[169901]: 2025-10-07 13:43:52.458841013 +0000 UTC m=+1.724579438 container start 4fd6e3c9404f5446906e563932459c1407d4f9ee67c9e8ac7df09ff463f9726f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keller, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 09:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:43:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v483: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct  7 09:43:53 np0005473739 podman[169901]: 2025-10-07 13:43:53.218572258 +0000 UTC m=+2.484310683 container attach 4fd6e3c9404f5446906e563932459c1407d4f9ee67c9e8ac7df09ff463f9726f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keller, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 09:43:53 np0005473739 intelligent_keller[169918]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:43:53 np0005473739 intelligent_keller[169918]: --> relative data size: 1.0
Oct  7 09:43:53 np0005473739 intelligent_keller[169918]: --> All data devices are unavailable
Oct  7 09:43:53 np0005473739 systemd[1]: libpod-4fd6e3c9404f5446906e563932459c1407d4f9ee67c9e8ac7df09ff463f9726f.scope: Deactivated successfully.
Oct  7 09:43:53 np0005473739 podman[169901]: 2025-10-07 13:43:53.707782614 +0000 UTC m=+2.973521059 container died 4fd6e3c9404f5446906e563932459c1407d4f9ee67c9e8ac7df09ff463f9726f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keller, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:43:53 np0005473739 systemd[1]: libpod-4fd6e3c9404f5446906e563932459c1407d4f9ee67c9e8ac7df09ff463f9726f.scope: Consumed 1.160s CPU time.
Oct  7 09:43:54 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4cd4cdf62f8252e9814d412fe868a8369087eb9e552293e17016da2ef4bf84a0-merged.mount: Deactivated successfully.
Oct  7 09:43:54 np0005473739 podman[169901]: 2025-10-07 13:43:54.473345098 +0000 UTC m=+3.739083493 container remove 4fd6e3c9404f5446906e563932459c1407d4f9ee67c9e8ac7df09ff463f9726f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:43:54 np0005473739 systemd[1]: libpod-conmon-4fd6e3c9404f5446906e563932459c1407d4f9ee67c9e8ac7df09ff463f9726f.scope: Deactivated successfully.
Oct  7 09:43:54 np0005473739 podman[169961]: 2025-10-07 13:43:54.633074357 +0000 UTC m=+0.343854073 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:43:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v484: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct  7 09:43:55 np0005473739 podman[170127]: 2025-10-07 13:43:55.213417536 +0000 UTC m=+0.052560554 container create 6b5275a0ef45d877024bb15dd45034e82c9bf3dbae0af2a5aa084abdc235706c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:43:55 np0005473739 systemd[1]: Started libpod-conmon-6b5275a0ef45d877024bb15dd45034e82c9bf3dbae0af2a5aa084abdc235706c.scope.
Oct  7 09:43:55 np0005473739 podman[170127]: 2025-10-07 13:43:55.191590108 +0000 UTC m=+0.030733136 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:43:55 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:43:55 np0005473739 podman[170127]: 2025-10-07 13:43:55.367273526 +0000 UTC m=+0.206416594 container init 6b5275a0ef45d877024bb15dd45034e82c9bf3dbae0af2a5aa084abdc235706c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:43:55 np0005473739 podman[170127]: 2025-10-07 13:43:55.379421316 +0000 UTC m=+0.218564294 container start 6b5275a0ef45d877024bb15dd45034e82c9bf3dbae0af2a5aa084abdc235706c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:43:55 np0005473739 optimistic_chatelet[170144]: 167 167
Oct  7 09:43:55 np0005473739 systemd[1]: libpod-6b5275a0ef45d877024bb15dd45034e82c9bf3dbae0af2a5aa084abdc235706c.scope: Deactivated successfully.
Oct  7 09:43:55 np0005473739 podman[170127]: 2025-10-07 13:43:55.436437828 +0000 UTC m=+0.275580826 container attach 6b5275a0ef45d877024bb15dd45034e82c9bf3dbae0af2a5aa084abdc235706c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 09:43:55 np0005473739 podman[170127]: 2025-10-07 13:43:55.437078116 +0000 UTC m=+0.276221114 container died 6b5275a0ef45d877024bb15dd45034e82c9bf3dbae0af2a5aa084abdc235706c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 09:43:55 np0005473739 systemd[1]: var-lib-containers-storage-overlay-dd8b9964c8b0c0ee385dc13630dcf273019324c0ede17e05d47d6af6e007f610-merged.mount: Deactivated successfully.
Oct  7 09:43:55 np0005473739 podman[170127]: 2025-10-07 13:43:55.589705701 +0000 UTC m=+0.428848679 container remove 6b5275a0ef45d877024bb15dd45034e82c9bf3dbae0af2a5aa084abdc235706c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:43:55 np0005473739 systemd[1]: libpod-conmon-6b5275a0ef45d877024bb15dd45034e82c9bf3dbae0af2a5aa084abdc235706c.scope: Deactivated successfully.
Oct  7 09:43:55 np0005473739 podman[170170]: 2025-10-07 13:43:55.810123857 +0000 UTC m=+0.087015836 container create de716a05bfa7e44dffa5ec4e9025322bb0a4f35a9ecf7456b24693de430226c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_swartz, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:43:55 np0005473739 podman[170170]: 2025-10-07 13:43:55.751990784 +0000 UTC m=+0.028882783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:43:55 np0005473739 systemd[1]: Started libpod-conmon-de716a05bfa7e44dffa5ec4e9025322bb0a4f35a9ecf7456b24693de430226c5.scope.
Oct  7 09:43:55 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:43:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5cf6c93f96ab19d088c1213795e571f81f3ae9e131cd91482b36e7217d9f96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5cf6c93f96ab19d088c1213795e571f81f3ae9e131cd91482b36e7217d9f96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5cf6c93f96ab19d088c1213795e571f81f3ae9e131cd91482b36e7217d9f96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2d5cf6c93f96ab19d088c1213795e571f81f3ae9e131cd91482b36e7217d9f96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:55 np0005473739 podman[170170]: 2025-10-07 13:43:55.99044646 +0000 UTC m=+0.267338439 container init de716a05bfa7e44dffa5ec4e9025322bb0a4f35a9ecf7456b24693de430226c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 09:43:55 np0005473739 podman[170170]: 2025-10-07 13:43:55.999610444 +0000 UTC m=+0.276502423 container start de716a05bfa7e44dffa5ec4e9025322bb0a4f35a9ecf7456b24693de430226c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 09:43:56 np0005473739 podman[170170]: 2025-10-07 13:43:56.034769596 +0000 UTC m=+0.311661655 container attach de716a05bfa7e44dffa5ec4e9025322bb0a4f35a9ecf7456b24693de430226c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_swartz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]: {
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:    "0": [
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:        {
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "devices": [
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "/dev/loop3"
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            ],
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_name": "ceph_lv0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_size": "21470642176",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "name": "ceph_lv0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "tags": {
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.cluster_name": "ceph",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.crush_device_class": "",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.encrypted": "0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.osd_id": "0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.type": "block",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.vdo": "0"
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            },
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "type": "block",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "vg_name": "ceph_vg0"
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:        }
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:    ],
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:    "1": [
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:        {
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "devices": [
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "/dev/loop4"
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            ],
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_name": "ceph_lv1",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_size": "21470642176",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "name": "ceph_lv1",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "tags": {
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.cluster_name": "ceph",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.crush_device_class": "",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.encrypted": "0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.osd_id": "1",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.type": "block",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.vdo": "0"
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            },
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "type": "block",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "vg_name": "ceph_vg1"
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:        }
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:    ],
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:    "2": [
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:        {
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "devices": [
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "/dev/loop5"
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            ],
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_name": "ceph_lv2",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_size": "21470642176",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "name": "ceph_lv2",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "tags": {
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.cluster_name": "ceph",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.crush_device_class": "",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.encrypted": "0",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.osd_id": "2",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.type": "block",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:                "ceph.vdo": "0"
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            },
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "type": "block",
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:            "vg_name": "ceph_vg2"
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:        }
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]:    ]
Oct  7 09:43:56 np0005473739 quirky_swartz[170187]: }
Oct  7 09:43:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v485: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:56 np0005473739 systemd[1]: libpod-de716a05bfa7e44dffa5ec4e9025322bb0a4f35a9ecf7456b24693de430226c5.scope: Deactivated successfully.
Oct  7 09:43:56 np0005473739 podman[170170]: 2025-10-07 13:43:56.814495297 +0000 UTC m=+1.091387286 container died de716a05bfa7e44dffa5ec4e9025322bb0a4f35a9ecf7456b24693de430226c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_swartz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  7 09:43:56 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2d5cf6c93f96ab19d088c1213795e571f81f3ae9e131cd91482b36e7217d9f96-merged.mount: Deactivated successfully.
Oct  7 09:43:57 np0005473739 podman[170170]: 2025-10-07 13:43:57.02055364 +0000 UTC m=+1.297445639 container remove de716a05bfa7e44dffa5ec4e9025322bb0a4f35a9ecf7456b24693de430226c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_swartz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:43:57 np0005473739 systemd[1]: libpod-conmon-de716a05bfa7e44dffa5ec4e9025322bb0a4f35a9ecf7456b24693de430226c5.scope: Deactivated successfully.
Oct  7 09:43:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:43:57 np0005473739 podman[170347]: 2025-10-07 13:43:57.735476036 +0000 UTC m=+0.099748644 container create 96758cbe4d7cb3b2cf61bca3f9c6ceee4c4ed80452ccb3cadaaec2615e0de921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True)
Oct  7 09:43:57 np0005473739 podman[170347]: 2025-10-07 13:43:57.660448545 +0000 UTC m=+0.024721233 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:43:57 np0005473739 systemd[1]: Started libpod-conmon-96758cbe4d7cb3b2cf61bca3f9c6ceee4c4ed80452ccb3cadaaec2615e0de921.scope.
Oct  7 09:43:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:43:57 np0005473739 podman[170347]: 2025-10-07 13:43:57.900855247 +0000 UTC m=+0.265127885 container init 96758cbe4d7cb3b2cf61bca3f9c6ceee4c4ed80452ccb3cadaaec2615e0de921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_galois, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:43:57 np0005473739 podman[170347]: 2025-10-07 13:43:57.917844226 +0000 UTC m=+0.282116834 container start 96758cbe4d7cb3b2cf61bca3f9c6ceee4c4ed80452ccb3cadaaec2615e0de921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 09:43:57 np0005473739 jovial_galois[170363]: 167 167
Oct  7 09:43:57 np0005473739 systemd[1]: libpod-96758cbe4d7cb3b2cf61bca3f9c6ceee4c4ed80452ccb3cadaaec2615e0de921.scope: Deactivated successfully.
Oct  7 09:43:57 np0005473739 podman[170347]: 2025-10-07 13:43:57.960982629 +0000 UTC m=+0.325255237 container attach 96758cbe4d7cb3b2cf61bca3f9c6ceee4c4ed80452ccb3cadaaec2615e0de921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 09:43:57 np0005473739 podman[170347]: 2025-10-07 13:43:57.96174412 +0000 UTC m=+0.326016728 container died 96758cbe4d7cb3b2cf61bca3f9c6ceee4c4ed80452ccb3cadaaec2615e0de921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_galois, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:43:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0785955396ca0a346ce0b17fefa3383fdef86ce29e4a2bceee9f324a31b1c0a7-merged.mount: Deactivated successfully.
Oct  7 09:43:58 np0005473739 podman[170347]: 2025-10-07 13:43:58.260847133 +0000 UTC m=+0.625119741 container remove 96758cbe4d7cb3b2cf61bca3f9c6ceee4c4ed80452ccb3cadaaec2615e0de921 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 09:43:58 np0005473739 systemd[1]: libpod-conmon-96758cbe4d7cb3b2cf61bca3f9c6ceee4c4ed80452ccb3cadaaec2615e0de921.scope: Deactivated successfully.
Oct  7 09:43:58 np0005473739 podman[170387]: 2025-10-07 13:43:58.459048109 +0000 UTC m=+0.067617078 container create ef708469a6fc345b663f44891c34427901c184da0a6d4a934881812f9199dd8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:43:58 np0005473739 podman[170387]: 2025-10-07 13:43:58.415702631 +0000 UTC m=+0.024271620 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:43:58 np0005473739 systemd[1]: Started libpod-conmon-ef708469a6fc345b663f44891c34427901c184da0a6d4a934881812f9199dd8a.scope.
Oct  7 09:43:58 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:43:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08f46c0c0ef3dec46f8401f2c25df9ea4c2f4de09921201f61382517f34627e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08f46c0c0ef3dec46f8401f2c25df9ea4c2f4de09921201f61382517f34627e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08f46c0c0ef3dec46f8401f2c25df9ea4c2f4de09921201f61382517f34627e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d08f46c0c0ef3dec46f8401f2c25df9ea4c2f4de09921201f61382517f34627e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:43:58 np0005473739 podman[170387]: 2025-10-07 13:43:58.594987603 +0000 UTC m=+0.203556572 container init ef708469a6fc345b663f44891c34427901c184da0a6d4a934881812f9199dd8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 09:43:58 np0005473739 podman[170387]: 2025-10-07 13:43:58.603076516 +0000 UTC m=+0.211645505 container start ef708469a6fc345b663f44891c34427901c184da0a6d4a934881812f9199dd8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:43:58 np0005473739 podman[170387]: 2025-10-07 13:43:58.663477565 +0000 UTC m=+0.272046534 container attach ef708469a6fc345b663f44891c34427901c184da0a6d4a934881812f9199dd8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:43:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v486: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]: {
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "osd_id": 2,
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "type": "bluestore"
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:    },
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "osd_id": 1,
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "type": "bluestore"
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:    },
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "osd_id": 0,
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:        "type": "bluestore"
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]:    }
Oct  7 09:43:59 np0005473739 lucid_leavitt[170404]: }
Oct  7 09:43:59 np0005473739 podman[170387]: 2025-10-07 13:43:59.625668541 +0000 UTC m=+1.234237550 container died ef708469a6fc345b663f44891c34427901c184da0a6d4a934881812f9199dd8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:43:59 np0005473739 systemd[1]: libpod-ef708469a6fc345b663f44891c34427901c184da0a6d4a934881812f9199dd8a.scope: Deactivated successfully.
Oct  7 09:43:59 np0005473739 systemd[1]: libpod-ef708469a6fc345b663f44891c34427901c184da0a6d4a934881812f9199dd8a.scope: Consumed 1.007s CPU time.
Oct  7 09:43:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d08f46c0c0ef3dec46f8401f2c25df9ea4c2f4de09921201f61382517f34627e-merged.mount: Deactivated successfully.
Oct  7 09:43:59 np0005473739 podman[170387]: 2025-10-07 13:43:59.976889243 +0000 UTC m=+1.585458212 container remove ef708469a6fc345b663f44891c34427901c184da0a6d4a934881812f9199dd8a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:43:59 np0005473739 systemd[1]: libpod-conmon-ef708469a6fc345b663f44891c34427901c184da0a6d4a934881812f9199dd8a.scope: Deactivated successfully.
Oct  7 09:44:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:44:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:44:00.015 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:44:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:44:00.016 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:44:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:44:00.016 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:44:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:44:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:44:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:44:00 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e9bd117e-c69a-4194-9580-e77d65e95bbf does not exist
Oct  7 09:44:00 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1274d5ea-7f0d-484c-8299-a8b4cb4631ec does not exist
Oct  7 09:44:00 np0005473739 podman[170476]: 2025-10-07 13:44:00.389889354 +0000 UTC m=+0.092532015 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 09:44:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v487: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:44:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:44:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v488: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:03 np0005473739 kernel: SELinux:  Converting 2765 SID table entries...
Oct  7 09:44:03 np0005473739 kernel: SELinux:  policy capability network_peer_controls=1
Oct  7 09:44:03 np0005473739 kernel: SELinux:  policy capability open_perms=1
Oct  7 09:44:03 np0005473739 kernel: SELinux:  policy capability extended_socket_class=1
Oct  7 09:44:03 np0005473739 kernel: SELinux:  policy capability always_check_network=0
Oct  7 09:44:03 np0005473739 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  7 09:44:03 np0005473739 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  7 09:44:03 np0005473739 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  7 09:44:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v489: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v490: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v491: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v492: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v493: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v494: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v495: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v496: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v497: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:44:22
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'images', 'default.rgw.meta', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'volumes', 'backups', '.mgr']
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:44:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v498: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v499: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:24 np0005473739 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Oct  7 09:44:25 np0005473739 podman[174883]: 2025-10-07 13:44:25.144005821 +0000 UTC m=+0.115003219 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  7 09:44:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v500: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v501: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v502: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:31 np0005473739 podman[177929]: 2025-10-07 13:44:31.072007563 +0000 UTC m=+0.059831562 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:44:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:44:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v503: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v504: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v505: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v506: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v507: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v508: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v509: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v510: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v511: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v512: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:44:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v513: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v514: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:56 np0005473739 podman[187323]: 2025-10-07 13:44:56.16999886 +0000 UTC m=+0.131475048 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:44:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v515: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:44:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:44:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v516: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:45:00.016 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:45:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:45:00.017 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:45:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:45:00.017 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:45:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v517: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:45:01 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 784408bd-abe7-406c-84b2-f00d93871463 does not exist
Oct  7 09:45:01 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 58ae163a-fdb4-460c-a59b-364eb2c785ad does not exist
Oct  7 09:45:01 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 6414de0c-70a1-4f80-a99e-06aa56e81a8a does not exist
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:45:01 np0005473739 podman[187504]: 2025-10-07 13:45:01.32060622 +0000 UTC m=+0.084370764 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:45:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:45:01 np0005473739 podman[187637]: 2025-10-07 13:45:01.822555468 +0000 UTC m=+0.044994958 container create 9a00a2a8500d09648d48879e9c2b80b49655a09be3b77f664ce81ce069af4534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:45:01 np0005473739 systemd[1]: Started libpod-conmon-9a00a2a8500d09648d48879e9c2b80b49655a09be3b77f664ce81ce069af4534.scope.
Oct  7 09:45:01 np0005473739 podman[187637]: 2025-10-07 13:45:01.80342744 +0000 UTC m=+0.025866950 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:45:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:45:01 np0005473739 podman[187637]: 2025-10-07 13:45:01.920441121 +0000 UTC m=+0.142880631 container init 9a00a2a8500d09648d48879e9c2b80b49655a09be3b77f664ce81ce069af4534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:45:01 np0005473739 podman[187637]: 2025-10-07 13:45:01.930380575 +0000 UTC m=+0.152820065 container start 9a00a2a8500d09648d48879e9c2b80b49655a09be3b77f664ce81ce069af4534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:45:01 np0005473739 podman[187637]: 2025-10-07 13:45:01.934353861 +0000 UTC m=+0.156793381 container attach 9a00a2a8500d09648d48879e9c2b80b49655a09be3b77f664ce81ce069af4534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:45:01 np0005473739 hardcore_stonebraker[187655]: 167 167
Oct  7 09:45:01 np0005473739 systemd[1]: libpod-9a00a2a8500d09648d48879e9c2b80b49655a09be3b77f664ce81ce069af4534.scope: Deactivated successfully.
Oct  7 09:45:01 np0005473739 conmon[187655]: conmon 9a00a2a8500d09648d48 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9a00a2a8500d09648d48879e9c2b80b49655a09be3b77f664ce81ce069af4534.scope/container/memory.events
Oct  7 09:45:01 np0005473739 podman[187637]: 2025-10-07 13:45:01.944995124 +0000 UTC m=+0.167434674 container died 9a00a2a8500d09648d48879e9c2b80b49655a09be3b77f664ce81ce069af4534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:45:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3d0b6d84ad729ac102e5a76599434677318d666f71b9cd36b84f35afa332536c-merged.mount: Deactivated successfully.
Oct  7 09:45:02 np0005473739 podman[187637]: 2025-10-07 13:45:02.005062142 +0000 UTC m=+0.227501662 container remove 9a00a2a8500d09648d48879e9c2b80b49655a09be3b77f664ce81ce069af4534 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:45:02 np0005473739 systemd[1]: libpod-conmon-9a00a2a8500d09648d48879e9c2b80b49655a09be3b77f664ce81ce069af4534.scope: Deactivated successfully.
Oct  7 09:45:02 np0005473739 podman[187680]: 2025-10-07 13:45:02.235391886 +0000 UTC m=+0.053158874 container create 9cd328a345b34fb9d690cb3c47fe335ee73d6821b2e8aa5333bf6ad856d6a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:45:02 np0005473739 systemd[1]: Started libpod-conmon-9cd328a345b34fb9d690cb3c47fe335ee73d6821b2e8aa5333bf6ad856d6a851.scope.
Oct  7 09:45:02 np0005473739 podman[187680]: 2025-10-07 13:45:02.213964447 +0000 UTC m=+0.031731485 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:45:02 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:45:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128f9e26c3e4c052251a5b0820dcbf521b485c5136731240b5703bf635f199a1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128f9e26c3e4c052251a5b0820dcbf521b485c5136731240b5703bf635f199a1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128f9e26c3e4c052251a5b0820dcbf521b485c5136731240b5703bf635f199a1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128f9e26c3e4c052251a5b0820dcbf521b485c5136731240b5703bf635f199a1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/128f9e26c3e4c052251a5b0820dcbf521b485c5136731240b5703bf635f199a1/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:02 np0005473739 podman[187680]: 2025-10-07 13:45:02.34420296 +0000 UTC m=+0.161969978 container init 9cd328a345b34fb9d690cb3c47fe335ee73d6821b2e8aa5333bf6ad856d6a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 09:45:02 np0005473739 podman[187680]: 2025-10-07 13:45:02.35358459 +0000 UTC m=+0.171351578 container start 9cd328a345b34fb9d690cb3c47fe335ee73d6821b2e8aa5333bf6ad856d6a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 09:45:02 np0005473739 podman[187680]: 2025-10-07 13:45:02.358597233 +0000 UTC m=+0.176364261 container attach 9cd328a345b34fb9d690cb3c47fe335ee73d6821b2e8aa5333bf6ad856d6a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:45:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v518: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:03 np0005473739 distracted_khayyam[187697]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:45:03 np0005473739 distracted_khayyam[187697]: --> relative data size: 1.0
Oct  7 09:45:03 np0005473739 distracted_khayyam[187697]: --> All data devices are unavailable
Oct  7 09:45:03 np0005473739 systemd[1]: libpod-9cd328a345b34fb9d690cb3c47fe335ee73d6821b2e8aa5333bf6ad856d6a851.scope: Deactivated successfully.
Oct  7 09:45:03 np0005473739 podman[187680]: 2025-10-07 13:45:03.470534502 +0000 UTC m=+1.288301500 container died 9cd328a345b34fb9d690cb3c47fe335ee73d6821b2e8aa5333bf6ad856d6a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True)
Oct  7 09:45:03 np0005473739 systemd[1]: libpod-9cd328a345b34fb9d690cb3c47fe335ee73d6821b2e8aa5333bf6ad856d6a851.scope: Consumed 1.059s CPU time.
Oct  7 09:45:03 np0005473739 systemd[1]: var-lib-containers-storage-overlay-128f9e26c3e4c052251a5b0820dcbf521b485c5136731240b5703bf635f199a1-merged.mount: Deactivated successfully.
Oct  7 09:45:03 np0005473739 podman[187680]: 2025-10-07 13:45:03.532594723 +0000 UTC m=+1.350361711 container remove 9cd328a345b34fb9d690cb3c47fe335ee73d6821b2e8aa5333bf6ad856d6a851 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:45:03 np0005473739 systemd[1]: libpod-conmon-9cd328a345b34fb9d690cb3c47fe335ee73d6821b2e8aa5333bf6ad856d6a851.scope: Deactivated successfully.
Oct  7 09:45:04 np0005473739 podman[187883]: 2025-10-07 13:45:04.200567826 +0000 UTC m=+0.025778007 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:45:04 np0005473739 podman[187883]: 2025-10-07 13:45:04.428876498 +0000 UTC m=+0.254086689 container create 1ac5a13e59577af3a8da0c1af68048c89852c177bbbf32216c10e92a8bdac414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cohen, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 09:45:04 np0005473739 systemd[1]: Started libpod-conmon-1ac5a13e59577af3a8da0c1af68048c89852c177bbbf32216c10e92a8bdac414.scope.
Oct  7 09:45:04 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:45:04 np0005473739 podman[187883]: 2025-10-07 13:45:04.587185318 +0000 UTC m=+0.412395509 container init 1ac5a13e59577af3a8da0c1af68048c89852c177bbbf32216c10e92a8bdac414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cohen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 09:45:04 np0005473739 podman[187883]: 2025-10-07 13:45:04.596017682 +0000 UTC m=+0.421227843 container start 1ac5a13e59577af3a8da0c1af68048c89852c177bbbf32216c10e92a8bdac414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cohen, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:45:04 np0005473739 sweet_cohen[187901]: 167 167
Oct  7 09:45:04 np0005473739 systemd[1]: libpod-1ac5a13e59577af3a8da0c1af68048c89852c177bbbf32216c10e92a8bdac414.scope: Deactivated successfully.
Oct  7 09:45:04 np0005473739 podman[187883]: 2025-10-07 13:45:04.608573986 +0000 UTC m=+0.433784197 container attach 1ac5a13e59577af3a8da0c1af68048c89852c177bbbf32216c10e92a8bdac414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cohen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct  7 09:45:04 np0005473739 podman[187883]: 2025-10-07 13:45:04.609955863 +0000 UTC m=+0.435166024 container died 1ac5a13e59577af3a8da0c1af68048c89852c177bbbf32216c10e92a8bdac414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cohen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:45:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0cf7c27432f98ff1f1adf386226e9365ece808533e877edbd22f41554d135d70-merged.mount: Deactivated successfully.
Oct  7 09:45:04 np0005473739 podman[187883]: 2025-10-07 13:45:04.672918077 +0000 UTC m=+0.498128238 container remove 1ac5a13e59577af3a8da0c1af68048c89852c177bbbf32216c10e92a8bdac414 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_cohen, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 09:45:04 np0005473739 systemd[1]: libpod-conmon-1ac5a13e59577af3a8da0c1af68048c89852c177bbbf32216c10e92a8bdac414.scope: Deactivated successfully.
Oct  7 09:45:04 np0005473739 kernel: SELinux:  Converting 2766 SID table entries...
Oct  7 09:45:04 np0005473739 kernel: SELinux:  policy capability network_peer_controls=1
Oct  7 09:45:04 np0005473739 kernel: SELinux:  policy capability open_perms=1
Oct  7 09:45:04 np0005473739 kernel: SELinux:  policy capability extended_socket_class=1
Oct  7 09:45:04 np0005473739 kernel: SELinux:  policy capability always_check_network=0
Oct  7 09:45:04 np0005473739 kernel: SELinux:  policy capability cgroup_seclabel=1
Oct  7 09:45:04 np0005473739 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Oct  7 09:45:04 np0005473739 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Oct  7 09:45:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v519: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:04 np0005473739 podman[187925]: 2025-10-07 13:45:04.899828252 +0000 UTC m=+0.064248870 container create c2bc9d36e8a9816dcb17c0d4b25288010567027275be4e7379fac778705ed5bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wright, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 09:45:04 np0005473739 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Oct  7 09:45:04 np0005473739 podman[187925]: 2025-10-07 13:45:04.869150046 +0000 UTC m=+0.033570674 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:45:04 np0005473739 systemd[1]: Started libpod-conmon-c2bc9d36e8a9816dcb17c0d4b25288010567027275be4e7379fac778705ed5bd.scope.
Oct  7 09:45:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:45:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09da3c863d4eab779b36752614265d08b1de98ef090a1544f797ad61f88ebe2f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09da3c863d4eab779b36752614265d08b1de98ef090a1544f797ad61f88ebe2f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09da3c863d4eab779b36752614265d08b1de98ef090a1544f797ad61f88ebe2f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/09da3c863d4eab779b36752614265d08b1de98ef090a1544f797ad61f88ebe2f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:05 np0005473739 podman[187925]: 2025-10-07 13:45:05.038337305 +0000 UTC m=+0.202757903 container init c2bc9d36e8a9816dcb17c0d4b25288010567027275be4e7379fac778705ed5bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wright, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:45:05 np0005473739 podman[187925]: 2025-10-07 13:45:05.052021439 +0000 UTC m=+0.216442087 container start c2bc9d36e8a9816dcb17c0d4b25288010567027275be4e7379fac778705ed5bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wright, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:45:05 np0005473739 podman[187925]: 2025-10-07 13:45:05.057815773 +0000 UTC m=+0.222236381 container attach c2bc9d36e8a9816dcb17c0d4b25288010567027275be4e7379fac778705ed5bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wright, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:45:05 np0005473739 elegant_wright[187941]: {
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:    "0": [
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:        {
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "devices": [
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "/dev/loop3"
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            ],
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_name": "ceph_lv0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_size": "21470642176",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "name": "ceph_lv0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "tags": {
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.cluster_name": "ceph",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.crush_device_class": "",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.encrypted": "0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.osd_id": "0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.type": "block",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.vdo": "0"
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            },
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "type": "block",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "vg_name": "ceph_vg0"
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:        }
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:    ],
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:    "1": [
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:        {
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "devices": [
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "/dev/loop4"
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            ],
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_name": "ceph_lv1",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_size": "21470642176",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "name": "ceph_lv1",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "tags": {
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.cluster_name": "ceph",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.crush_device_class": "",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.encrypted": "0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.osd_id": "1",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.type": "block",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.vdo": "0"
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            },
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "type": "block",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "vg_name": "ceph_vg1"
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:        }
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:    ],
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:    "2": [
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:        {
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "devices": [
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "/dev/loop5"
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            ],
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_name": "ceph_lv2",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_size": "21470642176",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "name": "ceph_lv2",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "tags": {
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.cluster_name": "ceph",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.crush_device_class": "",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.encrypted": "0",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.osd_id": "2",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.type": "block",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:                "ceph.vdo": "0"
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            },
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "type": "block",
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:            "vg_name": "ceph_vg2"
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:        }
Oct  7 09:45:05 np0005473739 elegant_wright[187941]:    ]
Oct  7 09:45:05 np0005473739 elegant_wright[187941]: }
Oct  7 09:45:05 np0005473739 systemd[1]: libpod-c2bc9d36e8a9816dcb17c0d4b25288010567027275be4e7379fac778705ed5bd.scope: Deactivated successfully.
Oct  7 09:45:05 np0005473739 podman[187925]: 2025-10-07 13:45:05.952022583 +0000 UTC m=+1.116443191 container died c2bc9d36e8a9816dcb17c0d4b25288010567027275be4e7379fac778705ed5bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wright, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 09:45:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-09da3c863d4eab779b36752614265d08b1de98ef090a1544f797ad61f88ebe2f-merged.mount: Deactivated successfully.
Oct  7 09:45:06 np0005473739 podman[187925]: 2025-10-07 13:45:06.014039062 +0000 UTC m=+1.178459660 container remove c2bc9d36e8a9816dcb17c0d4b25288010567027275be4e7379fac778705ed5bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_wright, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:45:06 np0005473739 systemd[1]: libpod-conmon-c2bc9d36e8a9816dcb17c0d4b25288010567027275be4e7379fac778705ed5bd.scope: Deactivated successfully.
Oct  7 09:45:06 np0005473739 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Oct  7 09:45:06 np0005473739 dbus-broker-launch[762]: Noticed file-system modification, trigger reload.
Oct  7 09:45:06 np0005473739 podman[188124]: 2025-10-07 13:45:06.721468114 +0000 UTC m=+0.057201742 container create 4bad5315eef5fb2644f2e76da7a179c2c74f8d6526ec7eea22eb3e57f0211e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:45:06 np0005473739 systemd[1]: Started libpod-conmon-4bad5315eef5fb2644f2e76da7a179c2c74f8d6526ec7eea22eb3e57f0211e42.scope.
Oct  7 09:45:06 np0005473739 podman[188124]: 2025-10-07 13:45:06.695680469 +0000 UTC m=+0.031414067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:45:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:45:06 np0005473739 podman[188124]: 2025-10-07 13:45:06.830043222 +0000 UTC m=+0.165776850 container init 4bad5315eef5fb2644f2e76da7a179c2c74f8d6526ec7eea22eb3e57f0211e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:45:06 np0005473739 podman[188124]: 2025-10-07 13:45:06.840180141 +0000 UTC m=+0.175913739 container start 4bad5315eef5fb2644f2e76da7a179c2c74f8d6526ec7eea22eb3e57f0211e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 09:45:06 np0005473739 podman[188124]: 2025-10-07 13:45:06.84389202 +0000 UTC m=+0.179625648 container attach 4bad5315eef5fb2644f2e76da7a179c2c74f8d6526ec7eea22eb3e57f0211e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:45:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v520: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:06 np0005473739 systemd[1]: libpod-4bad5315eef5fb2644f2e76da7a179c2c74f8d6526ec7eea22eb3e57f0211e42.scope: Deactivated successfully.
Oct  7 09:45:06 np0005473739 wonderful_hypatia[188140]: 167 167
Oct  7 09:45:06 np0005473739 conmon[188140]: conmon 4bad5315eef5fb2644f2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4bad5315eef5fb2644f2e76da7a179c2c74f8d6526ec7eea22eb3e57f0211e42.scope/container/memory.events
Oct  7 09:45:06 np0005473739 podman[188124]: 2025-10-07 13:45:06.851511333 +0000 UTC m=+0.187244931 container died 4bad5315eef5fb2644f2e76da7a179c2c74f8d6526ec7eea22eb3e57f0211e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 09:45:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9c63f30f0a23504da24dfcad5f51a49083b204083ee31aaeeff4045630a6d063-merged.mount: Deactivated successfully.
Oct  7 09:45:06 np0005473739 podman[188124]: 2025-10-07 13:45:06.89838637 +0000 UTC m=+0.234119958 container remove 4bad5315eef5fb2644f2e76da7a179c2c74f8d6526ec7eea22eb3e57f0211e42 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_hypatia, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 09:45:06 np0005473739 systemd[1]: libpod-conmon-4bad5315eef5fb2644f2e76da7a179c2c74f8d6526ec7eea22eb3e57f0211e42.scope: Deactivated successfully.
Oct  7 09:45:07 np0005473739 podman[188164]: 2025-10-07 13:45:07.115046771 +0000 UTC m=+0.068454612 container create 9d2c6ac856e7918816d58a0c98b49287f374d732b8331a58ac57f54e25ad5c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mahavira, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:45:07 np0005473739 systemd[1]: Started libpod-conmon-9d2c6ac856e7918816d58a0c98b49287f374d732b8331a58ac57f54e25ad5c13.scope.
Oct  7 09:45:07 np0005473739 podman[188164]: 2025-10-07 13:45:07.078016066 +0000 UTC m=+0.031423957 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:45:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:45:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2decb43fb7d586fedb56e1b4f07c67ef29a6a2abc43fd1a4790afed5b0f0a8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2decb43fb7d586fedb56e1b4f07c67ef29a6a2abc43fd1a4790afed5b0f0a8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2decb43fb7d586fedb56e1b4f07c67ef29a6a2abc43fd1a4790afed5b0f0a8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9b2decb43fb7d586fedb56e1b4f07c67ef29a6a2abc43fd1a4790afed5b0f0a8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:45:07 np0005473739 podman[188164]: 2025-10-07 13:45:07.221419279 +0000 UTC m=+0.174827110 container init 9d2c6ac856e7918816d58a0c98b49287f374d732b8331a58ac57f54e25ad5c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mahavira, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 09:45:07 np0005473739 podman[188164]: 2025-10-07 13:45:07.23233693 +0000 UTC m=+0.185744731 container start 9d2c6ac856e7918816d58a0c98b49287f374d732b8331a58ac57f54e25ad5c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:45:07 np0005473739 podman[188164]: 2025-10-07 13:45:07.236750057 +0000 UTC m=+0.190157868 container attach 9d2c6ac856e7918816d58a0c98b49287f374d732b8331a58ac57f54e25ad5c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mahavira, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:45:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]: {
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "osd_id": 2,
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "type": "bluestore"
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:    },
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "osd_id": 1,
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "type": "bluestore"
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:    },
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "osd_id": 0,
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:        "type": "bluestore"
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]:    }
Oct  7 09:45:08 np0005473739 compassionate_mahavira[188184]: }
Oct  7 09:45:08 np0005473739 systemd[1]: libpod-9d2c6ac856e7918816d58a0c98b49287f374d732b8331a58ac57f54e25ad5c13.scope: Deactivated successfully.
Oct  7 09:45:08 np0005473739 systemd[1]: libpod-9d2c6ac856e7918816d58a0c98b49287f374d732b8331a58ac57f54e25ad5c13.scope: Consumed 1.064s CPU time.
Oct  7 09:45:08 np0005473739 podman[188164]: 2025-10-07 13:45:08.289091942 +0000 UTC m=+1.242499773 container died 9d2c6ac856e7918816d58a0c98b49287f374d732b8331a58ac57f54e25ad5c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mahavira, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:45:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9b2decb43fb7d586fedb56e1b4f07c67ef29a6a2abc43fd1a4790afed5b0f0a8-merged.mount: Deactivated successfully.
Oct  7 09:45:08 np0005473739 podman[188164]: 2025-10-07 13:45:08.37400621 +0000 UTC m=+1.327414031 container remove 9d2c6ac856e7918816d58a0c98b49287f374d732b8331a58ac57f54e25ad5c13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mahavira, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 09:45:08 np0005473739 systemd[1]: libpod-conmon-9d2c6ac856e7918816d58a0c98b49287f374d732b8331a58ac57f54e25ad5c13.scope: Deactivated successfully.
Oct  7 09:45:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:45:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:45:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:45:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:45:08 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4a3d9348-33fb-4acd-a3c5-97d1580a690b does not exist
Oct  7 09:45:08 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 36cc799b-6f12-4d2d-94f9-6fc9be45ce4b does not exist
Oct  7 09:45:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v521: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:09 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:45:09 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:45:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v522: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v523: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:14 np0005473739 systemd[1]: Stopping OpenSSH server daemon...
Oct  7 09:45:14 np0005473739 systemd[1]: sshd.service: Deactivated successfully.
Oct  7 09:45:14 np0005473739 systemd[1]: Stopped OpenSSH server daemon.
Oct  7 09:45:14 np0005473739 systemd[1]: sshd.service: Consumed 2.207s CPU time, read 0B from disk, written 4.0K to disk.
Oct  7 09:45:14 np0005473739 systemd[1]: Stopped target sshd-keygen.target.
Oct  7 09:45:14 np0005473739 systemd[1]: Stopping sshd-keygen.target...
Oct  7 09:45:14 np0005473739 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  7 09:45:14 np0005473739 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  7 09:45:14 np0005473739 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Oct  7 09:45:14 np0005473739 systemd[1]: Reached target sshd-keygen.target.
Oct  7 09:45:14 np0005473739 systemd[1]: Starting OpenSSH server daemon...
Oct  7 09:45:14 np0005473739 systemd[1]: Started OpenSSH server daemon.
Oct  7 09:45:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v524: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #27. Immutable memtables: 0.
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:14.936564) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 9] Flushing memtable with next log file: 27
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844714936732, "job": 9, "event": "flush_started", "num_memtables": 1, "num_entries": 2042, "num_deletes": 251, "total_data_size": 3555458, "memory_usage": 3608208, "flush_reason": "Manual Compaction"}
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 9] Level-0 flush table #28: started
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844714959789, "cf_name": "default", "job": 9, "event": "table_file_creation", "file_number": 28, "file_size": 3469639, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 9706, "largest_seqno": 11747, "table_properties": {"data_size": 3460348, "index_size": 5913, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2309, "raw_key_size": 17785, "raw_average_key_size": 19, "raw_value_size": 3441995, "raw_average_value_size": 3761, "num_data_blocks": 268, "num_entries": 915, "num_filter_entries": 915, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759844481, "oldest_key_time": 1759844481, "file_creation_time": 1759844714, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 28, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 9] Flush lasted 23251 microseconds, and 9063 cpu microseconds.
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:14.959841) [db/flush_job.cc:967] [default] [JOB 9] Level-0 flush table #28: 3469639 bytes OK
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:14.959865) [db/memtable_list.cc:519] [default] Level-0 commit table #28 started
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:14.961398) [db/memtable_list.cc:722] [default] Level-0 commit table #28: memtable #1 done
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:14.961413) EVENT_LOG_v1 {"time_micros": 1759844714961408, "job": 9, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:14.961429) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 9] Try to delete WAL files size 3546937, prev total WAL file size 3546937, number of live WAL files 2.
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000024.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:14.962470) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F7300353032' seq:72057594037927935, type:22 .. '7061786F7300373534' seq:0, type:0; will stop at (end)
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 10] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 9 Base level 0, inputs: [28(3388KB)], [26(5963KB)]
Oct  7 09:45:14 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844714962517, "job": 10, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [28], "files_L6": [26], "score": -1, "input_data_size": 9576394, "oldest_snapshot_seqno": -1}
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 10] Generated table #29: 3703 keys, 7983948 bytes, temperature: kUnknown
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844715030413, "cf_name": "default", "job": 10, "event": "table_file_creation", "file_number": 29, "file_size": 7983948, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7955433, "index_size": 18165, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9285, "raw_key_size": 88940, "raw_average_key_size": 24, "raw_value_size": 7884857, "raw_average_value_size": 2129, "num_data_blocks": 787, "num_entries": 3703, "num_filter_entries": 3703, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759844714, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 29, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:15.030691) [db/compaction/compaction_job.cc:1663] [default] [JOB 10] Compacted 1@0 + 1@6 files to L6 => 7983948 bytes
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:15.032051) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 140.8 rd, 117.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.3, 5.8 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(5.1) write-amplify(2.3) OK, records in: 4217, records dropped: 514 output_compression: NoCompression
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:15.032073) EVENT_LOG_v1 {"time_micros": 1759844715032062, "job": 10, "event": "compaction_finished", "compaction_time_micros": 68007, "compaction_time_cpu_micros": 19193, "output_level": 6, "num_output_files": 1, "total_output_size": 7983948, "num_input_records": 4217, "num_output_records": 3703, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000028.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844715033167, "job": 10, "event": "table_file_deletion", "file_number": 28}
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000026.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844715034749, "job": 10, "event": "table_file_deletion", "file_number": 26}
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:14.962391) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:15.034834) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:15.034838) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:15.034840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:15.034842) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:45:15 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:45:15.034844) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:45:16 np0005473739 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  7 09:45:16 np0005473739 systemd[1]: Starting man-db-cache-update.service...
Oct  7 09:45:16 np0005473739 systemd[1]: Reloading.
Oct  7 09:45:16 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:45:16 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:45:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v525: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:17 np0005473739 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  7 09:45:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v526: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:19 np0005473739 systemd[1]: Starting PackageKit Daemon...
Oct  7 09:45:19 np0005473739 systemd[1]: Started PackageKit Daemon.
Oct  7 09:45:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v527: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:21 np0005473739 python3.9[192967]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  7 09:45:21 np0005473739 systemd[1]: Reloading.
Oct  7 09:45:21 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:45:21 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:45:22 np0005473739 python3.9[194153]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:45:22
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'default.rgw.control', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.meta', 'volumes']
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:45:22 np0005473739 systemd[1]: Reloading.
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:45:22 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:45:22 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:45:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v528: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:23 np0005473739 python3.9[195227]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  7 09:45:23 np0005473739 systemd[1]: Reloading.
Oct  7 09:45:23 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:45:23 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:45:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v529: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:24 np0005473739 python3.9[196292]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  7 09:45:24 np0005473739 systemd[1]: Reloading.
Oct  7 09:45:25 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:45:25 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:45:25 np0005473739 auditd[701]: Audit daemon rotating log files
Oct  7 09:45:26 np0005473739 python3.9[197525]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:26 np0005473739 systemd[1]: Reloading.
Oct  7 09:45:26 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:45:26 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:45:26 np0005473739 podman[197821]: 2025-10-07 13:45:26.380952549 +0000 UTC m=+0.151421116 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  7 09:45:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v530: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:27 np0005473739 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  7 09:45:27 np0005473739 systemd[1]: Finished man-db-cache-update.service.
Oct  7 09:45:27 np0005473739 systemd[1]: man-db-cache-update.service: Consumed 13.330s CPU time.
Oct  7 09:45:27 np0005473739 systemd[1]: run-r5452616609c2469180ee18ed1a769e4a.service: Deactivated successfully.
Oct  7 09:45:27 np0005473739 python3.9[198671]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:27 np0005473739 systemd[1]: Reloading.
Oct  7 09:45:27 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:45:27 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:45:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:28 np0005473739 python3.9[198934]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:28 np0005473739 systemd[1]: Reloading.
Oct  7 09:45:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v531: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:28 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:45:28 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:45:29 np0005473739 python3.9[199124]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:30 np0005473739 python3.9[199279]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:30 np0005473739 systemd[1]: Reloading.
Oct  7 09:45:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v532: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:30 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:45:30 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:45:31 np0005473739 podman[199444]: 2025-10-07 13:45:31.720060131 +0000 UTC m=+0.081507559 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:45:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:45:31 np0005473739 python3.9[199483]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Oct  7 09:45:32 np0005473739 systemd[1]: Reloading.
Oct  7 09:45:32 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:45:32 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:45:32 np0005473739 systemd[1]: Listening on libvirt proxy daemon socket.
Oct  7 09:45:32 np0005473739 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Oct  7 09:45:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v533: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:33 np0005473739 python3.9[199689]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:34 np0005473739 python3.9[199844]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v534: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:35 np0005473739 python3.9[199999]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:36 np0005473739 python3.9[200154]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v535: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:36 np0005473739 python3.9[200309]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:37 np0005473739 python3.9[200464]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:38 np0005473739 python3.9[200619]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v536: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:39 np0005473739 python3.9[200774]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:40 np0005473739 python3.9[200929]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v537: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:42 np0005473739 python3.9[201084]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v538: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:43 np0005473739 python3.9[201239]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:43 np0005473739 python3.9[201394]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:44 np0005473739 python3.9[201549]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v539: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:45 np0005473739 python3.9[201704]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Oct  7 09:45:46 np0005473739 python3.9[201859]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:45:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v540: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:46 np0005473739 python3.9[202011]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:45:47 np0005473739 python3.9[202163]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:45:48 np0005473739 python3.9[202315]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:45:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:48 np0005473739 python3.9[202467]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:45:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v541: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:49 np0005473739 python3.9[202619]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:45:50 np0005473739 python3.9[202771]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:45:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v542: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:51 np0005473739 python3.9[202896]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759844749.67607-554-248826753697090/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:45:51 np0005473739 python3.9[203048]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:45:52 np0005473739 python3.9[203173]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759844751.2615874-554-99200634465598/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:45:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v543: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:53 np0005473739 python3.9[203325]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:45:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:53 np0005473739 python3.9[203450]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759844752.488426-554-225452332089273/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:45:54 np0005473739 python3.9[203602]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:45:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v544: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:54 np0005473739 python3.9[203727]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759844753.772546-554-179622189004129/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:45:55 np0005473739 python3.9[203879]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:45:56 np0005473739 python3.9[204004]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759844755.0573297-554-14616299109218/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:45:56 np0005473739 podman[204128]: 2025-10-07 13:45:56.733289819 +0000 UTC m=+0.118619879 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 09:45:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v545: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:56 np0005473739 python3.9[204176]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:45:57 np0005473739 python3.9[204307]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759844756.3343391-554-221909658248120/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:45:58 np0005473739 python3.9[204459]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:45:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:45:58 np0005473739 python3.9[204582]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759844757.723367-554-105855884786951/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:45:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v546: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:45:59 np0005473739 python3.9[204734]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:46:00.018 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:46:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:46:00.018 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:46:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:46:00.018 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:46:00 np0005473739 python3.9[204859]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1759844759.0024571-554-130595269616148/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:00 np0005473739 python3.9[205011]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Oct  7 09:46:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v547: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:01 np0005473739 python3.9[205164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:02 np0005473739 podman[205288]: 2025-10-07 13:46:02.024117381 +0000 UTC m=+0.048399674 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct  7 09:46:02 np0005473739 python3.9[205333]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:02 np0005473739 python3.9[205485]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v548: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:03 np0005473739 python3.9[205637]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:04 np0005473739 python3.9[205789]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v549: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:05 np0005473739 python3.9[205941]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:05 np0005473739 python3.9[206093]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:06 np0005473739 python3.9[206245]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v550: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:07 np0005473739 python3.9[206397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:07 np0005473739 python3.9[206549]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:08 np0005473739 python3.9[206701]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v551: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:09 np0005473739 python3.9[206932]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:46:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c879e208-898f-4a5b-87cb-781b8722f331 does not exist
Oct  7 09:46:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 59579e34-01b9-4381-8471-70712261d72e does not exist
Oct  7 09:46:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f39bddd8-8e8c-407f-aa7a-b7c640e62585 does not exist
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:46:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:46:09 np0005473739 python3.9[207164]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:10 np0005473739 podman[207374]: 2025-10-07 13:46:10.157697861 +0000 UTC m=+0.061807072 container create 29f32deda778fea7f4f8d95f78cff0ca6c88f3254cae985dd6e7201f69fef095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:46:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:46:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:46:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:46:10 np0005473739 systemd[1]: Started libpod-conmon-29f32deda778fea7f4f8d95f78cff0ca6c88f3254cae985dd6e7201f69fef095.scope.
Oct  7 09:46:10 np0005473739 podman[207374]: 2025-10-07 13:46:10.124117573 +0000 UTC m=+0.028226804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:46:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:46:10 np0005473739 podman[207374]: 2025-10-07 13:46:10.276144545 +0000 UTC m=+0.180253776 container init 29f32deda778fea7f4f8d95f78cff0ca6c88f3254cae985dd6e7201f69fef095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:46:10 np0005473739 podman[207374]: 2025-10-07 13:46:10.286306496 +0000 UTC m=+0.190415707 container start 29f32deda778fea7f4f8d95f78cff0ca6c88f3254cae985dd6e7201f69fef095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:46:10 np0005473739 zen_montalcini[207440]: 167 167
Oct  7 09:46:10 np0005473739 systemd[1]: libpod-29f32deda778fea7f4f8d95f78cff0ca6c88f3254cae985dd6e7201f69fef095.scope: Deactivated successfully.
Oct  7 09:46:10 np0005473739 podman[207374]: 2025-10-07 13:46:10.299534579 +0000 UTC m=+0.203643820 container attach 29f32deda778fea7f4f8d95f78cff0ca6c88f3254cae985dd6e7201f69fef095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 09:46:10 np0005473739 podman[207374]: 2025-10-07 13:46:10.301505742 +0000 UTC m=+0.205614993 container died 29f32deda778fea7f4f8d95f78cff0ca6c88f3254cae985dd6e7201f69fef095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:46:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9fd9208d4b5db6ccdd59259d0f4fb2f01779013bac8a7b4428c34000f5a901c9-merged.mount: Deactivated successfully.
Oct  7 09:46:10 np0005473739 podman[207374]: 2025-10-07 13:46:10.348837077 +0000 UTC m=+0.252946298 container remove 29f32deda778fea7f4f8d95f78cff0ca6c88f3254cae985dd6e7201f69fef095 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_montalcini, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507)
Oct  7 09:46:10 np0005473739 systemd[1]: libpod-conmon-29f32deda778fea7f4f8d95f78cff0ca6c88f3254cae985dd6e7201f69fef095.scope: Deactivated successfully.
Oct  7 09:46:10 np0005473739 python3.9[207445]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:10 np0005473739 podman[207467]: 2025-10-07 13:46:10.53538587 +0000 UTC m=+0.052661428 container create 975ebe7c416f771d279437cefecceccd5229461e58125e98bee07e2adcda7a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banzai, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 09:46:10 np0005473739 systemd[1]: Started libpod-conmon-975ebe7c416f771d279437cefecceccd5229461e58125e98bee07e2adcda7a6e.scope.
Oct  7 09:46:10 np0005473739 podman[207467]: 2025-10-07 13:46:10.511072091 +0000 UTC m=+0.028347669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:46:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:46:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66eb4a0ca495b58cfab1bddd042f408c7be06873ff5060bb637a18ec897a0e74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66eb4a0ca495b58cfab1bddd042f408c7be06873ff5060bb637a18ec897a0e74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66eb4a0ca495b58cfab1bddd042f408c7be06873ff5060bb637a18ec897a0e74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66eb4a0ca495b58cfab1bddd042f408c7be06873ff5060bb637a18ec897a0e74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/66eb4a0ca495b58cfab1bddd042f408c7be06873ff5060bb637a18ec897a0e74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:10 np0005473739 podman[207467]: 2025-10-07 13:46:10.653881065 +0000 UTC m=+0.171156623 container init 975ebe7c416f771d279437cefecceccd5229461e58125e98bee07e2adcda7a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banzai, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:46:10 np0005473739 podman[207467]: 2025-10-07 13:46:10.661150499 +0000 UTC m=+0.178426057 container start 975ebe7c416f771d279437cefecceccd5229461e58125e98bee07e2adcda7a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banzai, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:46:10 np0005473739 podman[207467]: 2025-10-07 13:46:10.664583541 +0000 UTC m=+0.181859099 container attach 975ebe7c416f771d279437cefecceccd5229461e58125e98bee07e2adcda7a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banzai, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:46:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v552: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:11 np0005473739 python3.9[207640]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:11 np0005473739 youthful_banzai[207504]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:46:11 np0005473739 youthful_banzai[207504]: --> relative data size: 1.0
Oct  7 09:46:11 np0005473739 youthful_banzai[207504]: --> All data devices are unavailable
Oct  7 09:46:11 np0005473739 systemd[1]: libpod-975ebe7c416f771d279437cefecceccd5229461e58125e98bee07e2adcda7a6e.scope: Deactivated successfully.
Oct  7 09:46:11 np0005473739 systemd[1]: libpod-975ebe7c416f771d279437cefecceccd5229461e58125e98bee07e2adcda7a6e.scope: Consumed 1.030s CPU time.
Oct  7 09:46:11 np0005473739 podman[207788]: 2025-10-07 13:46:11.802486487 +0000 UTC m=+0.036006573 container died 975ebe7c416f771d279437cefecceccd5229461e58125e98bee07e2adcda7a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banzai, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:46:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-66eb4a0ca495b58cfab1bddd042f408c7be06873ff5060bb637a18ec897a0e74-merged.mount: Deactivated successfully.
Oct  7 09:46:11 np0005473739 podman[207788]: 2025-10-07 13:46:11.904716848 +0000 UTC m=+0.138236934 container remove 975ebe7c416f771d279437cefecceccd5229461e58125e98bee07e2adcda7a6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 09:46:11 np0005473739 systemd[1]: libpod-conmon-975ebe7c416f771d279437cefecceccd5229461e58125e98bee07e2adcda7a6e.scope: Deactivated successfully.
Oct  7 09:46:11 np0005473739 python3.9[207787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844770.7659774-775-47359178590902/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:12 np0005473739 podman[208097]: 2025-10-07 13:46:12.577679395 +0000 UTC m=+0.043247946 container create 462e518ebce824c46b976ad604f9a255da7a33a62d064d17db0167cadf118e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hodgkin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:46:12 np0005473739 python3.9[208069]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:12 np0005473739 systemd[1]: Started libpod-conmon-462e518ebce824c46b976ad604f9a255da7a33a62d064d17db0167cadf118e09.scope.
Oct  7 09:46:12 np0005473739 podman[208097]: 2025-10-07 13:46:12.559791977 +0000 UTC m=+0.025360518 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:46:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:46:12 np0005473739 podman[208097]: 2025-10-07 13:46:12.685480625 +0000 UTC m=+0.151049196 container init 462e518ebce824c46b976ad604f9a255da7a33a62d064d17db0167cadf118e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:46:12 np0005473739 podman[208097]: 2025-10-07 13:46:12.693675883 +0000 UTC m=+0.159244444 container start 462e518ebce824c46b976ad604f9a255da7a33a62d064d17db0167cadf118e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hodgkin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:46:12 np0005473739 strange_hodgkin[208114]: 167 167
Oct  7 09:46:12 np0005473739 systemd[1]: libpod-462e518ebce824c46b976ad604f9a255da7a33a62d064d17db0167cadf118e09.scope: Deactivated successfully.
Oct  7 09:46:12 np0005473739 podman[208097]: 2025-10-07 13:46:12.713600346 +0000 UTC m=+0.179168907 container attach 462e518ebce824c46b976ad604f9a255da7a33a62d064d17db0167cadf118e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hodgkin, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 09:46:12 np0005473739 podman[208097]: 2025-10-07 13:46:12.715113796 +0000 UTC m=+0.180682337 container died 462e518ebce824c46b976ad604f9a255da7a33a62d064d17db0167cadf118e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hodgkin, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 09:46:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ee4450e24c960cf2bbb76b3323c66c309dade31b9f47f57b9f2e5b874046cb12-merged.mount: Deactivated successfully.
Oct  7 09:46:12 np0005473739 podman[208097]: 2025-10-07 13:46:12.764258239 +0000 UTC m=+0.229826790 container remove 462e518ebce824c46b976ad604f9a255da7a33a62d064d17db0167cadf118e09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_hodgkin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 09:46:12 np0005473739 systemd[1]: libpod-conmon-462e518ebce824c46b976ad604f9a255da7a33a62d064d17db0167cadf118e09.scope: Deactivated successfully.
Oct  7 09:46:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v553: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:13 np0005473739 podman[208232]: 2025-10-07 13:46:12.952336133 +0000 UTC m=+0.042440995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:46:13 np0005473739 podman[208232]: 2025-10-07 13:46:13.051344098 +0000 UTC m=+0.141449000 container create 48f582bc6e3d52cb0a9eacd947c27bb663d7b412ddb7f77518b7123c7a9110d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:46:13 np0005473739 systemd[1]: Started libpod-conmon-48f582bc6e3d52cb0a9eacd947c27bb663d7b412ddb7f77518b7123c7a9110d9.scope.
Oct  7 09:46:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:46:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7155f5fa08a52d2879e9cc8f223e8bd1108425cde7fd64b5b57d050cb60c7e3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7155f5fa08a52d2879e9cc8f223e8bd1108425cde7fd64b5b57d050cb60c7e3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7155f5fa08a52d2879e9cc8f223e8bd1108425cde7fd64b5b57d050cb60c7e3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7155f5fa08a52d2879e9cc8f223e8bd1108425cde7fd64b5b57d050cb60c7e3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:13 np0005473739 podman[208232]: 2025-10-07 13:46:13.184477964 +0000 UTC m=+0.274582816 container init 48f582bc6e3d52cb0a9eacd947c27bb663d7b412ddb7f77518b7123c7a9110d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:46:13 np0005473739 podman[208232]: 2025-10-07 13:46:13.193181666 +0000 UTC m=+0.283286518 container start 48f582bc6e3d52cb0a9eacd947c27bb663d7b412ddb7f77518b7123c7a9110d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:46:13 np0005473739 podman[208232]: 2025-10-07 13:46:13.197577964 +0000 UTC m=+0.287682806 container attach 48f582bc6e3d52cb0a9eacd947c27bb663d7b412ddb7f77518b7123c7a9110d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:46:13 np0005473739 python3.9[208275]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844772.1269615-775-100128009735699/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:13 np0005473739 python3.9[208434]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:13 np0005473739 keen_ride[208278]: {
Oct  7 09:46:14 np0005473739 keen_ride[208278]:    "0": [
Oct  7 09:46:14 np0005473739 keen_ride[208278]:        {
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "devices": [
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "/dev/loop3"
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            ],
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_name": "ceph_lv0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_size": "21470642176",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "name": "ceph_lv0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "tags": {
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.cluster_name": "ceph",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.crush_device_class": "",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.encrypted": "0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.osd_id": "0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.type": "block",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.vdo": "0"
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            },
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "type": "block",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "vg_name": "ceph_vg0"
Oct  7 09:46:14 np0005473739 keen_ride[208278]:        }
Oct  7 09:46:14 np0005473739 keen_ride[208278]:    ],
Oct  7 09:46:14 np0005473739 keen_ride[208278]:    "1": [
Oct  7 09:46:14 np0005473739 keen_ride[208278]:        {
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "devices": [
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "/dev/loop4"
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            ],
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_name": "ceph_lv1",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_size": "21470642176",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "name": "ceph_lv1",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "tags": {
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.cluster_name": "ceph",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.crush_device_class": "",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.encrypted": "0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.osd_id": "1",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.type": "block",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.vdo": "0"
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            },
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "type": "block",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "vg_name": "ceph_vg1"
Oct  7 09:46:14 np0005473739 keen_ride[208278]:        }
Oct  7 09:46:14 np0005473739 keen_ride[208278]:    ],
Oct  7 09:46:14 np0005473739 keen_ride[208278]:    "2": [
Oct  7 09:46:14 np0005473739 keen_ride[208278]:        {
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "devices": [
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "/dev/loop5"
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            ],
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_name": "ceph_lv2",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_size": "21470642176",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "name": "ceph_lv2",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "tags": {
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.cluster_name": "ceph",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.crush_device_class": "",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.encrypted": "0",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.osd_id": "2",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.type": "block",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:                "ceph.vdo": "0"
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            },
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "type": "block",
Oct  7 09:46:14 np0005473739 keen_ride[208278]:            "vg_name": "ceph_vg2"
Oct  7 09:46:14 np0005473739 keen_ride[208278]:        }
Oct  7 09:46:14 np0005473739 keen_ride[208278]:    ]
Oct  7 09:46:14 np0005473739 keen_ride[208278]: }
Oct  7 09:46:14 np0005473739 systemd[1]: libpod-48f582bc6e3d52cb0a9eacd947c27bb663d7b412ddb7f77518b7123c7a9110d9.scope: Deactivated successfully.
Oct  7 09:46:14 np0005473739 podman[208232]: 2025-10-07 13:46:14.048630078 +0000 UTC m=+1.138734970 container died 48f582bc6e3d52cb0a9eacd947c27bb663d7b412ddb7f77518b7123c7a9110d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 09:46:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7155f5fa08a52d2879e9cc8f223e8bd1108425cde7fd64b5b57d050cb60c7e3d-merged.mount: Deactivated successfully.
Oct  7 09:46:14 np0005473739 podman[208232]: 2025-10-07 13:46:14.138516868 +0000 UTC m=+1.228621770 container remove 48f582bc6e3d52cb0a9eacd947c27bb663d7b412ddb7f77518b7123c7a9110d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_ride, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:46:14 np0005473739 systemd[1]: libpod-conmon-48f582bc6e3d52cb0a9eacd947c27bb663d7b412ddb7f77518b7123c7a9110d9.scope: Deactivated successfully.
Oct  7 09:46:14 np0005473739 python3.9[208620]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844773.4168804-775-182213345873812/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:14 np0005473739 podman[208791]: 2025-10-07 13:46:14.847642752 +0000 UTC m=+0.050672125 container create f4c34887862478bf89d06b98b8929906ba118e488421781ac480e9771dc04d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:46:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v554: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:14 np0005473739 systemd[1]: Started libpod-conmon-f4c34887862478bf89d06b98b8929906ba118e488421781ac480e9771dc04d48.scope.
Oct  7 09:46:14 np0005473739 podman[208791]: 2025-10-07 13:46:14.824735709 +0000 UTC m=+0.027765082 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:46:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:46:14 np0005473739 podman[208791]: 2025-10-07 13:46:14.949158843 +0000 UTC m=+0.152188206 container init f4c34887862478bf89d06b98b8929906ba118e488421781ac480e9771dc04d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:46:14 np0005473739 podman[208791]: 2025-10-07 13:46:14.958130433 +0000 UTC m=+0.161159766 container start f4c34887862478bf89d06b98b8929906ba118e488421781ac480e9771dc04d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 09:46:14 np0005473739 podman[208791]: 2025-10-07 13:46:14.961972275 +0000 UTC m=+0.165001678 container attach f4c34887862478bf89d06b98b8929906ba118e488421781ac480e9771dc04d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:46:14 np0005473739 clever_agnesi[208843]: 167 167
Oct  7 09:46:14 np0005473739 systemd[1]: libpod-f4c34887862478bf89d06b98b8929906ba118e488421781ac480e9771dc04d48.scope: Deactivated successfully.
Oct  7 09:46:14 np0005473739 podman[208791]: 2025-10-07 13:46:14.967975436 +0000 UTC m=+0.171004779 container died f4c34887862478bf89d06b98b8929906ba118e488421781ac480e9771dc04d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:46:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f43fe1096f94a37e378705cb6ac622b2a4bd6368c8282a59339e6a1c0961fb0b-merged.mount: Deactivated successfully.
Oct  7 09:46:15 np0005473739 podman[208791]: 2025-10-07 13:46:15.011331354 +0000 UTC m=+0.214360687 container remove f4c34887862478bf89d06b98b8929906ba118e488421781ac480e9771dc04d48 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_agnesi, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:46:15 np0005473739 systemd[1]: libpod-conmon-f4c34887862478bf89d06b98b8929906ba118e488421781ac480e9771dc04d48.scope: Deactivated successfully.
Oct  7 09:46:15 np0005473739 podman[208906]: 2025-10-07 13:46:15.17927309 +0000 UTC m=+0.048543318 container create f3162d5c8f684b343cd5b76539acece6f23624e762bcf3c84c3118872a0773a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  7 09:46:15 np0005473739 python3.9[208900]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:15 np0005473739 systemd[1]: Started libpod-conmon-f3162d5c8f684b343cd5b76539acece6f23624e762bcf3c84c3118872a0773a9.scope.
Oct  7 09:46:15 np0005473739 podman[208906]: 2025-10-07 13:46:15.160878388 +0000 UTC m=+0.030148646 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:46:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:46:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88957683f5acf6626907624c84b7ecac348c8f6b13c7dcdab834832f7185ecc7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88957683f5acf6626907624c84b7ecac348c8f6b13c7dcdab834832f7185ecc7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88957683f5acf6626907624c84b7ecac348c8f6b13c7dcdab834832f7185ecc7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88957683f5acf6626907624c84b7ecac348c8f6b13c7dcdab834832f7185ecc7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:46:15 np0005473739 podman[208906]: 2025-10-07 13:46:15.28071008 +0000 UTC m=+0.149980308 container init f3162d5c8f684b343cd5b76539acece6f23624e762bcf3c84c3118872a0773a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:46:15 np0005473739 podman[208906]: 2025-10-07 13:46:15.289378511 +0000 UTC m=+0.158648739 container start f3162d5c8f684b343cd5b76539acece6f23624e762bcf3c84c3118872a0773a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rosalind, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 09:46:15 np0005473739 podman[208906]: 2025-10-07 13:46:15.293285176 +0000 UTC m=+0.162555404 container attach f3162d5c8f684b343cd5b76539acece6f23624e762bcf3c84c3118872a0773a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rosalind, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 09:46:15 np0005473739 python3.9[209050]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844774.69644-775-71294263149679/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]: {
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "osd_id": 2,
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "type": "bluestore"
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:    },
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "osd_id": 1,
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "type": "bluestore"
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:    },
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "osd_id": 0,
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:        "type": "bluestore"
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]:    }
Oct  7 09:46:16 np0005473739 relaxed_rosalind[208922]: }
Oct  7 09:46:16 np0005473739 systemd[1]: libpod-f3162d5c8f684b343cd5b76539acece6f23624e762bcf3c84c3118872a0773a9.scope: Deactivated successfully.
Oct  7 09:46:16 np0005473739 systemd[1]: libpod-f3162d5c8f684b343cd5b76539acece6f23624e762bcf3c84c3118872a0773a9.scope: Consumed 1.023s CPU time.
Oct  7 09:46:16 np0005473739 podman[208906]: 2025-10-07 13:46:16.307777526 +0000 UTC m=+1.177047764 container died f3162d5c8f684b343cd5b76539acece6f23624e762bcf3c84c3118872a0773a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:46:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-88957683f5acf6626907624c84b7ecac348c8f6b13c7dcdab834832f7185ecc7-merged.mount: Deactivated successfully.
Oct  7 09:46:16 np0005473739 podman[208906]: 2025-10-07 13:46:16.377762545 +0000 UTC m=+1.247032783 container remove f3162d5c8f684b343cd5b76539acece6f23624e762bcf3c84c3118872a0773a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True)
Oct  7 09:46:16 np0005473739 systemd[1]: libpod-conmon-f3162d5c8f684b343cd5b76539acece6f23624e762bcf3c84c3118872a0773a9.scope: Deactivated successfully.
Oct  7 09:46:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:46:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:46:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:46:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:46:16 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 034a39b3-6cd8-4e5f-ae05-b6955d899296 does not exist
Oct  7 09:46:16 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 5bb12ca0-95cd-41b2-b6da-6e83e98d68e7 does not exist
Oct  7 09:46:16 np0005473739 python3.9[209230]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v555: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:17 np0005473739 python3.9[209412]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844775.9556725-775-121225673229865/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:46:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:46:17 np0005473739 python3.9[209564]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:18 np0005473739 python3.9[209687]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844777.2277706-775-99830896217897/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v556: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:18 np0005473739 python3.9[209839]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:19 np0005473739 python3.9[209962]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844778.5455327-775-39537034713748/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:20 np0005473739 python3.9[210114]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v557: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:20 np0005473739 python3.9[210237]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844779.8049355-775-16277546562251/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:21 np0005473739 python3.9[210389]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:22 np0005473739 python3.9[210512]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844781.0973282-775-274444160025635/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:46:22
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.meta', 'backups', '.rgw.root', '.mgr', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.log', 'vms', 'cephfs.cephfs.meta']
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:46:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v558: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:22 np0005473739 python3.9[210664]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:23 np0005473739 python3.9[210787]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844782.4248326-775-59036198251048/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:24 np0005473739 python3.9[210939]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v559: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:25 np0005473739 python3.9[211062]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844783.7799482-775-3857411559452/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:25 np0005473739 python3.9[211214]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:26 np0005473739 python3.9[211337]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844785.2584596-775-189993095015309/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v560: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:27 np0005473739 podman[211461]: 2025-10-07 13:46:27.050209626 +0000 UTC m=+0.131203288 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  7 09:46:27 np0005473739 python3.9[211505]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:27 np0005473739 python3.9[211637]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844786.6227381-775-221598792265319/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:28 np0005473739 python3.9[211789]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v561: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:29 np0005473739 python3.9[211912]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844788.0122936-775-200295096371619/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:29 np0005473739 python3.9[212062]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:46:30 np0005473739 python3.9[212217]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Oct  7 09:46:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v562: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:46:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:46:32 np0005473739 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Oct  7 09:46:32 np0005473739 podman[212345]: 2025-10-07 13:46:32.495999256 +0000 UTC m=+0.088146678 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  7 09:46:32 np0005473739 python3.9[212390]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v563: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:33 np0005473739 python3.9[212544]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:33 np0005473739 python3.9[212696]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:34 np0005473739 python3.9[212848]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v564: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:35 np0005473739 python3.9[213000]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:36 np0005473739 python3.9[213152]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v565: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:36 np0005473739 python3.9[213304]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:37 np0005473739 python3.9[213456]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:38 np0005473739 python3.9[213608]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v566: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:38 np0005473739 python3.9[213760]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:39 np0005473739 python3.9[213912]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:46:39 np0005473739 systemd[1]: Reloading.
Oct  7 09:46:39 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:46:39 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:46:40 np0005473739 systemd[1]: Starting libvirt logging daemon socket...
Oct  7 09:46:40 np0005473739 systemd[1]: Listening on libvirt logging daemon socket.
Oct  7 09:46:40 np0005473739 systemd[1]: Starting libvirt logging daemon admin socket...
Oct  7 09:46:40 np0005473739 systemd[1]: Listening on libvirt logging daemon admin socket.
Oct  7 09:46:40 np0005473739 systemd[1]: Starting libvirt logging daemon...
Oct  7 09:46:40 np0005473739 systemd[1]: Started libvirt logging daemon.
Oct  7 09:46:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v567: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:41 np0005473739 python3.9[214104]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:46:41 np0005473739 systemd[1]: Reloading.
Oct  7 09:46:41 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:46:41 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:46:41 np0005473739 systemd[1]: Starting libvirt nodedev daemon socket...
Oct  7 09:46:41 np0005473739 systemd[1]: Listening on libvirt nodedev daemon socket.
Oct  7 09:46:41 np0005473739 systemd[1]: Starting libvirt nodedev daemon admin socket...
Oct  7 09:46:41 np0005473739 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Oct  7 09:46:41 np0005473739 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Oct  7 09:46:41 np0005473739 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Oct  7 09:46:41 np0005473739 systemd[1]: Starting libvirt nodedev daemon...
Oct  7 09:46:41 np0005473739 systemd[1]: Started libvirt nodedev daemon.
Oct  7 09:46:42 np0005473739 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Oct  7 09:46:42 np0005473739 python3.9[214322]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:46:42 np0005473739 systemd[1]: Reloading.
Oct  7 09:46:42 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:46:42 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:46:42 np0005473739 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Oct  7 09:46:42 np0005473739 systemd[1]: Starting libvirt proxy daemon admin socket...
Oct  7 09:46:42 np0005473739 systemd[1]: Starting libvirt proxy daemon read-only socket...
Oct  7 09:46:42 np0005473739 systemd[1]: Listening on libvirt proxy daemon admin socket.
Oct  7 09:46:42 np0005473739 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Oct  7 09:46:42 np0005473739 systemd[1]: Starting libvirt proxy daemon...
Oct  7 09:46:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v568: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:42 np0005473739 systemd[1]: Started libvirt proxy daemon.
Oct  7 09:46:43 np0005473739 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Oct  7 09:46:43 np0005473739 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Oct  7 09:46:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:43 np0005473739 python3.9[214540]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:46:43 np0005473739 systemd[1]: Reloading.
Oct  7 09:46:43 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:46:43 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:46:43 np0005473739 setroubleshoot[214297]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l fb3e7a82-ee2c-4975-af5c-5224d22dbf79
Oct  7 09:46:44 np0005473739 setroubleshoot[214297]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  7 09:46:44 np0005473739 setroubleshoot[214297]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l fb3e7a82-ee2c-4975-af5c-5224d22dbf79
Oct  7 09:46:44 np0005473739 setroubleshoot[214297]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Oct  7 09:46:44 np0005473739 systemd[1]: Listening on libvirt locking daemon socket.
Oct  7 09:46:44 np0005473739 systemd[1]: Starting libvirt QEMU daemon socket...
Oct  7 09:46:44 np0005473739 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Oct  7 09:46:44 np0005473739 systemd[1]: Starting Virtual Machine and Container Registration Service...
Oct  7 09:46:44 np0005473739 systemd[1]: Listening on libvirt QEMU daemon socket.
Oct  7 09:46:44 np0005473739 systemd[1]: Starting libvirt QEMU daemon admin socket...
Oct  7 09:46:44 np0005473739 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Oct  7 09:46:44 np0005473739 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Oct  7 09:46:44 np0005473739 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Oct  7 09:46:44 np0005473739 systemd[1]: Started Virtual Machine and Container Registration Service.
Oct  7 09:46:44 np0005473739 systemd[1]: Starting libvirt QEMU daemon...
Oct  7 09:46:44 np0005473739 systemd[1]: Started libvirt QEMU daemon.
Oct  7 09:46:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v569: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:45 np0005473739 python3.9[214753]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:46:45 np0005473739 systemd[1]: Reloading.
Oct  7 09:46:45 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:46:45 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:46:45 np0005473739 systemd[1]: Starting libvirt secret daemon socket...
Oct  7 09:46:45 np0005473739 systemd[1]: Listening on libvirt secret daemon socket.
Oct  7 09:46:45 np0005473739 systemd[1]: Starting libvirt secret daemon admin socket...
Oct  7 09:46:45 np0005473739 systemd[1]: Starting libvirt secret daemon read-only socket...
Oct  7 09:46:45 np0005473739 systemd[1]: Listening on libvirt secret daemon admin socket.
Oct  7 09:46:45 np0005473739 systemd[1]: Listening on libvirt secret daemon read-only socket.
Oct  7 09:46:45 np0005473739 systemd[1]: Starting libvirt secret daemon...
Oct  7 09:46:45 np0005473739 systemd[1]: Started libvirt secret daemon.
Oct  7 09:46:46 np0005473739 python3.9[214963]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v570: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:47 np0005473739 python3.9[215115]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  7 09:46:48 np0005473739 python3.9[215267]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail;#012echo ceph#012awk -F '=' '/fsid/ {print $2}' /var/lib/openstack/config/ceph/ceph.conf | xargs#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:46:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:48 np0005473739 python3.9[215421]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.keyring'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  7 09:46:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v571: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:49 np0005473739 python3.9[215571]: ansible-ansible.legacy.stat Invoked with path=/tmp/secret.xml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:50 np0005473739 python3.9[215692]: ansible-ansible.legacy.copy Invoked with dest=/tmp/secret.xml mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844809.1171346-1133-62197337548349/.source.xml follow=False _original_basename=secret.xml.j2 checksum=51c061b391a2a73a2724f3e5cbfa5882f0c9580f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:50 np0005473739 python3.9[215844]: ansible-ansible.legacy.command Invoked with _raw_params=virsh secret-undefine 82044f27-a8da-5b2a-a297-ff6afc620e1f#012virsh secret-define --file /tmp/secret.xml#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:46:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v572: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:51 np0005473739 python3.9[216006]: ansible-ansible.builtin.file Invoked with path=/tmp/secret.xml state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:46:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v573: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:54 np0005473739 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Oct  7 09:46:54 np0005473739 systemd[1]: setroubleshootd.service: Deactivated successfully.
Oct  7 09:46:54 np0005473739 python3.9[216469]: ansible-ansible.legacy.copy Invoked with dest=/etc/ceph/ceph.conf group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/config/ceph/ceph.conf backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v574: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:55 np0005473739 python3.9[216621]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:55 np0005473739 python3.9[216744]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844814.4832127-1188-69463809967903/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:56 np0005473739 python3.9[216896]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v575: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:57 np0005473739 podman[217048]: 2025-10-07 13:46:57.247998255 +0000 UTC m=+0.129530430 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:46:57 np0005473739 python3.9[217049]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:57 np0005473739 python3.9[217152]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:58 np0005473739 python3.9[217304]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:46:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:46:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v576: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:46:59 np0005473739 python3.9[217382]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.zmsaoue1 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:46:59 np0005473739 python3.9[217534]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:47:00.019 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:47:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:47:00.019 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:47:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:47:00.020 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:47:00 np0005473739 python3.9[217612]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v577: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:01 np0005473739 python3.9[217764]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:47:02 np0005473739 python3[217917]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Oct  7 09:47:02 np0005473739 podman[218041]: 2025-10-07 13:47:02.622198331 +0000 UTC m=+0.073724887 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  7 09:47:02 np0005473739 python3.9[218088]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v578: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:03 np0005473739 python3.9[218167]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:03 np0005473739 python3.9[218319]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:04 np0005473739 python3.9[218397]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v579: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:05 np0005473739 python3.9[218549]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:05 np0005473739 python3.9[218627]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:06 np0005473739 python3.9[218779]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v580: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:07 np0005473739 python3.9[218857]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:08 np0005473739 python3.9[219009]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:08 np0005473739 python3.9[219134]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1759844827.348767-1313-160112656564604/.source.nft follow=False _original_basename=ruleset.j2 checksum=ac3ce8ce2d33fa5fe0a79b0c811c97734ce43fa5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v581: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:09 np0005473739 python3.9[219286]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:10 np0005473739 python3.9[219438]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
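The command logged above concatenates the EDPM fragment files in dependency order (chains, flushes, rules, update-jumps, jumps) and dry-runs the combined ruleset with `nft -c -f -` before anything is applied. A minimal sketch of that assemble-and-check step, using scratch files instead of /etc/nftables; the fragment contents here are illustrative placeholders, not the rules the role actually renders:

```shell
#!/bin/sh
# Sketch of the edpm_nftables assemble-and-check step seen in the log.
# Fragment bodies are placeholders; the real files are rendered from
# Ansible templates (chains.j2, flush-chain.j2, ruleset.j2, jump-chain.j2).
set -eu
dir=$(mktemp -d)
trap 'rm -rf "$dir"' EXIT

printf 'add table inet edpm\nadd chain inet edpm EDPM_INPUT\n' \
    > "$dir/edpm-chains.nft"
printf 'flush chain inet edpm EDPM_INPUT\n' > "$dir/edpm-flushes.nft"
printf 'add rule inet edpm EDPM_INPUT ct state established,related accept\n' \
    > "$dir/edpm-rules.nft"
printf '# jump rules into EDPM_INPUT would go here\n' \
    > "$dir/edpm-update-jumps.nft"
cp "$dir/edpm-update-jumps.nft" "$dir/edpm-jumps.nft"

# Concatenate in the same order as the logged command.
combined="$dir/combined.nft"
cat "$dir/edpm-chains.nft" "$dir/edpm-flushes.nft" "$dir/edpm-rules.nft" \
    "$dir/edpm-update-jumps.nft" "$dir/edpm-jumps.nft" > "$combined"

# Dry-run check, as in the log; only attempted when nft is present and
# we have the privileges it needs (an assumption about the environment).
if command -v nft >/dev/null 2>&1 && [ "$(id -u)" = 0 ]; then
    nft -c -f "$combined"
fi
grep -c '^add' "$combined"
```

The check-then-apply split matters: `nft -c` validates the whole transaction without touching the live ruleset, so a template error fails the Ansible task instead of leaving the firewall half-programmed.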
Oct  7 09:47:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v582: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:11 np0005473739 python3.9[219593]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
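The blockinfile task above maintains the four `include` lines between `# BEGIN ANSIBLE MANAGED BLOCK` / `# END ANSIBLE MANAGED BLOCK` markers in /etc/sysconfig/nftables.conf, validating the result with `nft -c -f %s`. A minimal sketch of that idempotent marker-block edit, against a scratch file rather than the real config (the `awk`-based helper is an illustration of the technique, not Ansible's implementation):

```shell
#!/bin/sh
# Sketch of the blockinfile edit: keep the EDPM include lines between
# ANSIBLE MANAGED BLOCK markers, replacing any previous copy so the
# task is safe to re-run. (Real task also validates via `nft -c -f`.)
set -eu
conf=$(mktemp)   # stand-in for /etc/sysconfig/nftables.conf
trap 'rm -f "$conf" "$conf.tmp"' EXIT
printf '# existing config\n' > "$conf"

begin='# BEGIN ANSIBLE MANAGED BLOCK'
end='# END ANSIBLE MANAGED BLOCK'
block='include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"'

update_block() {
    # Drop any previous managed block, then append the current one.
    awk -v b="$begin" -v e="$end" '
        $0 == b {skip = 1; next}
        $0 == e {skip = 0; next}
        !skip' "$conf" > "$conf.tmp"
    { cat "$conf.tmp"; printf '%s\n%s\n%s\n' "$begin" "$block" "$end"; } > "$conf"
}

update_block
update_block   # second run must not duplicate the block
grep -c 'edpm-jumps.nft' "$conf"
```

Running the helper twice leaves exactly one copy of the block, which is the property blockinfile relies on so repeated deployments do not grow the config file.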
Oct  7 09:47:11 np0005473739 python3.9[219745]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:47:12 np0005473739 python3.9[219898]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:47:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v583: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:13 np0005473739 python3.9[220052]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:47:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:14 np0005473739 python3.9[220207]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
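The touch/stat/apply/remove sequence around edpm-rules.nft.changed in the lines above is a change-flag handshake: the copy task raises the flag when it rewrites the rules, the apply task (`cat … | nft -f -`) runs only when the flag exists, and the flag is deleted afterwards so an unchanged re-run skips the reload. A minimal sketch of that guard, with a scratch path standing in for the real flag file and a counter standing in for the `nft` reload:

```shell
#!/bin/sh
# Sketch of the edpm-rules.nft.changed handshake: apply the ruleset
# only when the change flag exists, then clear the flag.
set -eu
flag=$(mktemp -u)   # stand-in for /etc/nftables/edpm-rules.nft.changed
applied=0

apply_if_changed() {
    if [ -e "$flag" ]; then
        # Real flow: cat edpm-flushes.nft edpm-rules.nft \
        #     edpm-update-jumps.nft | nft -f -
        applied=$((applied + 1))
        rm -f "$flag"
    fi
}

touch "$flag"      # rules were rewritten, so the flag is raised
apply_if_changed   # applies once and clears the flag
apply_if_changed   # flag absent: no-op on an unchanged re-run
echo "$applied"
```

The pattern keeps the expensive, disruptive step (reloading live firewall rules) conditional on an explicit marker rather than on re-diffing the files, which is why the flag is created by the same task that wrote the new ruleset.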
Oct  7 09:47:14 np0005473739 python3.9[220359]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v584: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:15 np0005473739 python3.9[220482]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844834.3499093-1385-251814995970470/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:16 np0005473739 python3.9[220634]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:16 np0005473739 python3.9[220781]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844835.6705792-1400-53537687412701/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v585: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:47:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 2e070769-7501-453c-9eac-4ee1b01e2c7b does not exist
Oct  7 09:47:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1c45256e-7e06-4183-a261-58a961fb7a26 does not exist
Oct  7 09:47:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c92e5574-c29b-413b-8e10-8cb96d2928a8 does not exist
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:47:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:47:17 np0005473739 python3.9[221033]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:47:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:47:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:47:18 np0005473739 podman[221303]: 2025-10-07 13:47:18.107043277 +0000 UTC m=+0.062692091 container create 256fa1e0448be2a55b93c774338373824d38197f9f02cd5ba67d7176b4587206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ride, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 09:47:18 np0005473739 python3.9[221274]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844837.0407808-1415-197703847311077/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:18 np0005473739 podman[221303]: 2025-10-07 13:47:18.070432231 +0000 UTC m=+0.026081075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:47:18 np0005473739 systemd[1]: Started libpod-conmon-256fa1e0448be2a55b93c774338373824d38197f9f02cd5ba67d7176b4587206.scope.
Oct  7 09:47:18 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:47:18 np0005473739 podman[221303]: 2025-10-07 13:47:18.246247542 +0000 UTC m=+0.201896376 container init 256fa1e0448be2a55b93c774338373824d38197f9f02cd5ba67d7176b4587206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ride, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 09:47:18 np0005473739 podman[221303]: 2025-10-07 13:47:18.259268634 +0000 UTC m=+0.214917428 container start 256fa1e0448be2a55b93c774338373824d38197f9f02cd5ba67d7176b4587206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:47:18 np0005473739 podman[221303]: 2025-10-07 13:47:18.26334886 +0000 UTC m=+0.218997644 container attach 256fa1e0448be2a55b93c774338373824d38197f9f02cd5ba67d7176b4587206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ride, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:47:18 np0005473739 adoring_ride[221323]: 167 167
Oct  7 09:47:18 np0005473739 systemd[1]: libpod-256fa1e0448be2a55b93c774338373824d38197f9f02cd5ba67d7176b4587206.scope: Deactivated successfully.
Oct  7 09:47:18 np0005473739 podman[221303]: 2025-10-07 13:47:18.26613898 +0000 UTC m=+0.221787774 container died 256fa1e0448be2a55b93c774338373824d38197f9f02cd5ba67d7176b4587206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ride, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 09:47:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0c2aa74b31064c647fd387db3dc710348921b9b48544f2a96ada75d479952f6d-merged.mount: Deactivated successfully.
Oct  7 09:47:18 np0005473739 podman[221303]: 2025-10-07 13:47:18.312094863 +0000 UTC m=+0.267743647 container remove 256fa1e0448be2a55b93c774338373824d38197f9f02cd5ba67d7176b4587206 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_ride, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 09:47:18 np0005473739 systemd[1]: libpod-conmon-256fa1e0448be2a55b93c774338373824d38197f9f02cd5ba67d7176b4587206.scope: Deactivated successfully.
Oct  7 09:47:18 np0005473739 podman[221422]: 2025-10-07 13:47:18.472138193 +0000 UTC m=+0.048747774 container create 8328188d2e1525d147d52331a7e404f4cf85c0003280fe665c2df9d05600567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jennings, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:47:18 np0005473739 systemd[1]: Started libpod-conmon-8328188d2e1525d147d52331a7e404f4cf85c0003280fe665c2df9d05600567e.scope.
Oct  7 09:47:18 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:47:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7026906aa99d9fcda7db540b1800926d2f299f5619cbaba0a1847b77466ebc9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7026906aa99d9fcda7db540b1800926d2f299f5619cbaba0a1847b77466ebc9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7026906aa99d9fcda7db540b1800926d2f299f5619cbaba0a1847b77466ebc9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7026906aa99d9fcda7db540b1800926d2f299f5619cbaba0a1847b77466ebc9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7026906aa99d9fcda7db540b1800926d2f299f5619cbaba0a1847b77466ebc9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:18 np0005473739 podman[221422]: 2025-10-07 13:47:18.447438567 +0000 UTC m=+0.024048168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:47:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:18 np0005473739 podman[221422]: 2025-10-07 13:47:18.562295797 +0000 UTC m=+0.138905398 container init 8328188d2e1525d147d52331a7e404f4cf85c0003280fe665c2df9d05600567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jennings, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:47:18 np0005473739 podman[221422]: 2025-10-07 13:47:18.573529327 +0000 UTC m=+0.150138908 container start 8328188d2e1525d147d52331a7e404f4cf85c0003280fe665c2df9d05600567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jennings, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 09:47:18 np0005473739 podman[221422]: 2025-10-07 13:47:18.576863653 +0000 UTC m=+0.153473234 container attach 8328188d2e1525d147d52331a7e404f4cf85c0003280fe665c2df9d05600567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jennings, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:47:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v586: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:18 np0005473739 python3.9[221514]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:47:18 np0005473739 systemd[1]: Reloading.
Oct  7 09:47:19 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:47:19 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:47:19 np0005473739 systemd[1]: Reached target edpm_libvirt.target.
Oct  7 09:47:19 np0005473739 nervous_jennings[221474]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:47:19 np0005473739 nervous_jennings[221474]: --> relative data size: 1.0
Oct  7 09:47:19 np0005473739 nervous_jennings[221474]: --> All data devices are unavailable
Oct  7 09:47:19 np0005473739 systemd[1]: libpod-8328188d2e1525d147d52331a7e404f4cf85c0003280fe665c2df9d05600567e.scope: Deactivated successfully.
Oct  7 09:47:19 np0005473739 systemd[1]: libpod-8328188d2e1525d147d52331a7e404f4cf85c0003280fe665c2df9d05600567e.scope: Consumed 1.043s CPU time.
Oct  7 09:47:19 np0005473739 podman[221422]: 2025-10-07 13:47:19.704339399 +0000 UTC m=+1.280948990 container died 8328188d2e1525d147d52331a7e404f4cf85c0003280fe665c2df9d05600567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  7 09:47:19 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c7026906aa99d9fcda7db540b1800926d2f299f5619cbaba0a1847b77466ebc9-merged.mount: Deactivated successfully.
Oct  7 09:47:19 np0005473739 podman[221422]: 2025-10-07 13:47:19.774876803 +0000 UTC m=+1.351486384 container remove 8328188d2e1525d147d52331a7e404f4cf85c0003280fe665c2df9d05600567e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_jennings, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:47:19 np0005473739 systemd[1]: libpod-conmon-8328188d2e1525d147d52331a7e404f4cf85c0003280fe665c2df9d05600567e.scope: Deactivated successfully.
Oct  7 09:47:20 np0005473739 python3.9[221820]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Oct  7 09:47:20 np0005473739 systemd[1]: Reloading.
Oct  7 09:47:20 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:47:20 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:47:20 np0005473739 podman[221889]: 2025-10-07 13:47:20.521054291 +0000 UTC m=+0.060502328 container create 389f0853e5000bd0c77ed9aa3e5ac8d90e17135575f6e50c35f95dcc4b9b83a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:47:20 np0005473739 podman[221889]: 2025-10-07 13:47:20.491711243 +0000 UTC m=+0.031159310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:47:20 np0005473739 systemd[1]: Started libpod-conmon-389f0853e5000bd0c77ed9aa3e5ac8d90e17135575f6e50c35f95dcc4b9b83a6.scope.
Oct  7 09:47:20 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:47:20 np0005473739 systemd[1]: Reloading.
Oct  7 09:47:20 np0005473739 podman[221889]: 2025-10-07 13:47:20.913086906 +0000 UTC m=+0.452535023 container init 389f0853e5000bd0c77ed9aa3e5ac8d90e17135575f6e50c35f95dcc4b9b83a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shockley, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:47:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v587: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:20 np0005473739 podman[221889]: 2025-10-07 13:47:20.922986759 +0000 UTC m=+0.462434816 container start 389f0853e5000bd0c77ed9aa3e5ac8d90e17135575f6e50c35f95dcc4b9b83a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shockley, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:47:20 np0005473739 funny_shockley[221939]: 167 167
Oct  7 09:47:20 np0005473739 podman[221889]: 2025-10-07 13:47:20.940653014 +0000 UTC m=+0.480101071 container attach 389f0853e5000bd0c77ed9aa3e5ac8d90e17135575f6e50c35f95dcc4b9b83a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shockley, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 09:47:20 np0005473739 podman[221889]: 2025-10-07 13:47:20.942462535 +0000 UTC m=+0.481910562 container died 389f0853e5000bd0c77ed9aa3e5ac8d90e17135575f6e50c35f95dcc4b9b83a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shockley, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:47:20 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:47:20 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:47:21 np0005473739 systemd[1]: libpod-389f0853e5000bd0c77ed9aa3e5ac8d90e17135575f6e50c35f95dcc4b9b83a6.scope: Deactivated successfully.
Oct  7 09:47:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-84948a20d511a70607a65c011d795753f9eccaf7b09d8a72c0dd2838c8893cdf-merged.mount: Deactivated successfully.
Oct  7 09:47:21 np0005473739 podman[221889]: 2025-10-07 13:47:21.302685032 +0000 UTC m=+0.842133059 container remove 389f0853e5000bd0c77ed9aa3e5ac8d90e17135575f6e50c35f95dcc4b9b83a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 09:47:21 np0005473739 systemd[1]: libpod-conmon-389f0853e5000bd0c77ed9aa3e5ac8d90e17135575f6e50c35f95dcc4b9b83a6.scope: Deactivated successfully.
Oct  7 09:47:21 np0005473739 podman[222025]: 2025-10-07 13:47:21.497054092 +0000 UTC m=+0.084228216 container create a5bdaeef429cd7c74f06b9594a995187415dac8ddc43ada71afafe1cb667a1be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 09:47:21 np0005473739 podman[222025]: 2025-10-07 13:47:21.442881556 +0000 UTC m=+0.030055690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:47:21 np0005473739 systemd[1]: Started libpod-conmon-a5bdaeef429cd7c74f06b9594a995187415dac8ddc43ada71afafe1cb667a1be.scope.
Oct  7 09:47:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:47:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33c4c36c5ada5e8604dd283983ed61ad6c7a2ccde08c3d5bdf79d05183f4b74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33c4c36c5ada5e8604dd283983ed61ad6c7a2ccde08c3d5bdf79d05183f4b74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33c4c36c5ada5e8604dd283983ed61ad6c7a2ccde08c3d5bdf79d05183f4b74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b33c4c36c5ada5e8604dd283983ed61ad6c7a2ccde08c3d5bdf79d05183f4b74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:21 np0005473739 systemd[1]: session-50.scope: Deactivated successfully.
Oct  7 09:47:21 np0005473739 systemd[1]: session-50.scope: Consumed 3min 50.400s CPU time.
Oct  7 09:47:21 np0005473739 systemd-logind[801]: Session 50 logged out. Waiting for processes to exit.
Oct  7 09:47:21 np0005473739 systemd-logind[801]: Removed session 50.
Oct  7 09:47:21 np0005473739 podman[222025]: 2025-10-07 13:47:21.752791785 +0000 UTC m=+0.339966009 container init a5bdaeef429cd7c74f06b9594a995187415dac8ddc43ada71afafe1cb667a1be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 09:47:21 np0005473739 podman[222025]: 2025-10-07 13:47:21.766484377 +0000 UTC m=+0.353658531 container start a5bdaeef429cd7c74f06b9594a995187415dac8ddc43ada71afafe1cb667a1be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 09:47:21 np0005473739 podman[222025]: 2025-10-07 13:47:21.844575447 +0000 UTC m=+0.431749611 container attach a5bdaeef429cd7c74f06b9594a995187415dac8ddc43ada71afafe1cb667a1be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:47:22
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'images', '.mgr', 'default.rgw.meta', 'backups', '.rgw.root']
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:47:22 np0005473739 festive_haslett[222042]: {
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:    "0": [
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:        {
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "devices": [
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "/dev/loop3"
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            ],
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_name": "ceph_lv0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_size": "21470642176",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "name": "ceph_lv0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "tags": {
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.cluster_name": "ceph",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.crush_device_class": "",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.encrypted": "0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.osd_id": "0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.type": "block",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.vdo": "0"
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            },
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "type": "block",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "vg_name": "ceph_vg0"
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:        }
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:    ],
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:    "1": [
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:        {
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "devices": [
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "/dev/loop4"
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            ],
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_name": "ceph_lv1",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_size": "21470642176",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "name": "ceph_lv1",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "tags": {
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.cluster_name": "ceph",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.crush_device_class": "",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.encrypted": "0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.osd_id": "1",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.type": "block",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.vdo": "0"
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            },
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "type": "block",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "vg_name": "ceph_vg1"
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:        }
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:    ],
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:    "2": [
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:        {
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "devices": [
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "/dev/loop5"
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            ],
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_name": "ceph_lv2",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_size": "21470642176",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "name": "ceph_lv2",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "tags": {
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.cluster_name": "ceph",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.crush_device_class": "",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.encrypted": "0",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.osd_id": "2",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.type": "block",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:                "ceph.vdo": "0"
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            },
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "type": "block",
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:            "vg_name": "ceph_vg2"
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:        }
Oct  7 09:47:22 np0005473739 festive_haslett[222042]:    ]
Oct  7 09:47:22 np0005473739 festive_haslett[222042]: }
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:47:22 np0005473739 systemd[1]: libpod-a5bdaeef429cd7c74f06b9594a995187415dac8ddc43ada71afafe1cb667a1be.scope: Deactivated successfully.
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:47:22 np0005473739 podman[222051]: 2025-10-07 13:47:22.778841496 +0000 UTC m=+0.052917153 container died a5bdaeef429cd7c74f06b9594a995187415dac8ddc43ada71afafe1cb667a1be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 09:47:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b33c4c36c5ada5e8604dd283983ed61ad6c7a2ccde08c3d5bdf79d05183f4b74-merged.mount: Deactivated successfully.
Oct  7 09:47:22 np0005473739 podman[222051]: 2025-10-07 13:47:22.853397034 +0000 UTC m=+0.127472611 container remove a5bdaeef429cd7c74f06b9594a995187415dac8ddc43ada71afafe1cb667a1be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_haslett, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 09:47:22 np0005473739 systemd[1]: libpod-conmon-a5bdaeef429cd7c74f06b9594a995187415dac8ddc43ada71afafe1cb667a1be.scope: Deactivated successfully.
Oct  7 09:47:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v588: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:23 np0005473739 podman[222206]: 2025-10-07 13:47:23.578801949 +0000 UTC m=+0.057914864 container create c92eba3abc05e6a3fc5a3573f9eca748c057bdd3b80c1ff748b371001e0453ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 09:47:23 np0005473739 systemd[1]: Started libpod-conmon-c92eba3abc05e6a3fc5a3573f9eca748c057bdd3b80c1ff748b371001e0453ab.scope.
Oct  7 09:47:23 np0005473739 podman[222206]: 2025-10-07 13:47:23.553426995 +0000 UTC m=+0.032540000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:47:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:47:23 np0005473739 podman[222206]: 2025-10-07 13:47:23.668158121 +0000 UTC m=+0.147271056 container init c92eba3abc05e6a3fc5a3573f9eca748c057bdd3b80c1ff748b371001e0453ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 09:47:23 np0005473739 podman[222206]: 2025-10-07 13:47:23.679718501 +0000 UTC m=+0.158831426 container start c92eba3abc05e6a3fc5a3573f9eca748c057bdd3b80c1ff748b371001e0453ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 09:47:23 np0005473739 podman[222206]: 2025-10-07 13:47:23.684039734 +0000 UTC m=+0.163152649 container attach c92eba3abc05e6a3fc5a3573f9eca748c057bdd3b80c1ff748b371001e0453ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:47:23 np0005473739 quizzical_heisenberg[222221]: 167 167
Oct  7 09:47:23 np0005473739 systemd[1]: libpod-c92eba3abc05e6a3fc5a3573f9eca748c057bdd3b80c1ff748b371001e0453ab.scope: Deactivated successfully.
Oct  7 09:47:23 np0005473739 podman[222206]: 2025-10-07 13:47:23.689770609 +0000 UTC m=+0.168883524 container died c92eba3abc05e6a3fc5a3573f9eca748c057bdd3b80c1ff748b371001e0453ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:47:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-76d6809150b3e602759ee89369ba408dab6b5a751073801cde3cffe0cd236218-merged.mount: Deactivated successfully.
Oct  7 09:47:23 np0005473739 podman[222206]: 2025-10-07 13:47:23.73079811 +0000 UTC m=+0.209911055 container remove c92eba3abc05e6a3fc5a3573f9eca748c057bdd3b80c1ff748b371001e0453ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_heisenberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:47:23 np0005473739 systemd[1]: libpod-conmon-c92eba3abc05e6a3fc5a3573f9eca748c057bdd3b80c1ff748b371001e0453ab.scope: Deactivated successfully.
Oct  7 09:47:24 np0005473739 podman[222246]: 2025-10-07 13:47:24.001916081 +0000 UTC m=+0.075562667 container create c207a4bef512836ef9176fafb3b88c5042cd51f13d8ba40b3fd74532a14480d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:47:24 np0005473739 systemd[1]: Started libpod-conmon-c207a4bef512836ef9176fafb3b88c5042cd51f13d8ba40b3fd74532a14480d9.scope.
Oct  7 09:47:24 np0005473739 podman[222246]: 2025-10-07 13:47:23.971174825 +0000 UTC m=+0.044821501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:47:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:47:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942c08eb7b796d2a28a9e814391245ca7ffc23a5e6510b8f5b7c0eed795d41c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942c08eb7b796d2a28a9e814391245ca7ffc23a5e6510b8f5b7c0eed795d41c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942c08eb7b796d2a28a9e814391245ca7ffc23a5e6510b8f5b7c0eed795d41c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/942c08eb7b796d2a28a9e814391245ca7ffc23a5e6510b8f5b7c0eed795d41c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:47:24 np0005473739 podman[222246]: 2025-10-07 13:47:24.118978045 +0000 UTC m=+0.192624701 container init c207a4bef512836ef9176fafb3b88c5042cd51f13d8ba40b3fd74532a14480d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:47:24 np0005473739 podman[222246]: 2025-10-07 13:47:24.131973725 +0000 UTC m=+0.205620321 container start c207a4bef512836ef9176fafb3b88c5042cd51f13d8ba40b3fd74532a14480d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 09:47:24 np0005473739 podman[222246]: 2025-10-07 13:47:24.136632229 +0000 UTC m=+0.210278895 container attach c207a4bef512836ef9176fafb3b88c5042cd51f13d8ba40b3fd74532a14480d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:47:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v589: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]: {
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "osd_id": 2,
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "type": "bluestore"
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:    },
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "osd_id": 1,
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "type": "bluestore"
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:    },
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "osd_id": 0,
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:        "type": "bluestore"
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]:    }
Oct  7 09:47:25 np0005473739 beautiful_keldysh[222262]: }
Oct  7 09:47:25 np0005473739 podman[222246]: 2025-10-07 13:47:25.207416526 +0000 UTC m=+1.281063122 container died c207a4bef512836ef9176fafb3b88c5042cd51f13d8ba40b3fd74532a14480d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 09:47:25 np0005473739 systemd[1]: libpod-c207a4bef512836ef9176fafb3b88c5042cd51f13d8ba40b3fd74532a14480d9.scope: Deactivated successfully.
Oct  7 09:47:25 np0005473739 systemd[1]: libpod-c207a4bef512836ef9176fafb3b88c5042cd51f13d8ba40b3fd74532a14480d9.scope: Consumed 1.083s CPU time.
Oct  7 09:47:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-942c08eb7b796d2a28a9e814391245ca7ffc23a5e6510b8f5b7c0eed795d41c4-merged.mount: Deactivated successfully.
Oct  7 09:47:25 np0005473739 podman[222246]: 2025-10-07 13:47:25.295076759 +0000 UTC m=+1.368723335 container remove c207a4bef512836ef9176fafb3b88c5042cd51f13d8ba40b3fd74532a14480d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:47:25 np0005473739 systemd[1]: libpod-conmon-c207a4bef512836ef9176fafb3b88c5042cd51f13d8ba40b3fd74532a14480d9.scope: Deactivated successfully.
Oct  7 09:47:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:47:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:47:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:47:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:47:25 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ef6c1b26-ec35-432d-80bb-086696a03305 does not exist
Oct  7 09:47:25 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ced626bf-2bd8-40cb-8295-24e4ca0ed65b does not exist
Oct  7 09:47:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:47:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:47:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v590: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:27 np0005473739 systemd-logind[801]: New session 51 of user zuul.
Oct  7 09:47:27 np0005473739 systemd[1]: Started Session 51 of User zuul.
Oct  7 09:47:27 np0005473739 podman[222358]: 2025-10-07 13:47:27.876966908 +0000 UTC m=+0.103777644 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 09:47:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:28 np0005473739 python3.9[222535]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:47:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v591: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:30 np0005473739 python3.9[222691]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:47:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v592: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:30 np0005473739 python3.9[222843]: ansible-ansible.builtin.file Invoked with path=/etc/target setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:47:31 np0005473739 python3.9[222995]: ansible-ansible.builtin.file Invoked with path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:47:31 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:47:32 np0005473739 python3.9[223147]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  7 09:47:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v593: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:33 np0005473739 podman[223271]: 2025-10-07 13:47:33.023868892 +0000 UTC m=+0.113769979 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:47:33 np0005473739 python3.9[223318]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/config-data/ansible-generated/iscsid setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:47:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:34 np0005473739 python3.9[223470]: ansible-ansible.builtin.stat Invoked with path=/lib/systemd/system/iscsid.socket follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:47:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v594: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:35 np0005473739 python3.9[223624]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsid.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:47:36 np0005473739 systemd[1]: Reloading.
Oct  7 09:47:36 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:47:36 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:47:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v595: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:37 np0005473739 python3.9[223812]: ansible-ansible.builtin.service_facts Invoked
Oct  7 09:47:37 np0005473739 network[223829]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  7 09:47:37 np0005473739 network[223830]: 'network-scripts' will be removed from distribution in near future.
Oct  7 09:47:37 np0005473739 network[223831]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  7 09:47:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v596: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v597: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v598: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:43 np0005473739 python3.9[224105]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iscsi-starter.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:47:43 np0005473739 systemd[1]: Reloading.
Oct  7 09:47:43 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:47:43 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:47:44 np0005473739 python3.9[224291]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:47:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v599: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:45 np0005473739 python3.9[224443]: ansible-containers.podman.podman_container Invoked with command=/usr/sbin/iscsi-iname detach=False image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f name=iscsid_config rm=True tty=True executable=podman state=started debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  7 09:47:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v600: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:46 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 09:47:47 np0005473739 podman[224456]: 2025-10-07 13:47:47.174408007 +0000 UTC m=+1.259453397 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  7 09:47:47 np0005473739 podman[224516]: 2025-10-07 13:47:47.37414619 +0000 UTC m=+0.054001472 container create a5c9fca3de4cb962cc746e4199eca8a43b3e0c6f0801d0501f1513db6cb11878 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.4247] manager: (podman0): new Bridge device (/org/freedesktop/NetworkManager/Devices/21)
Oct  7 09:47:47 np0005473739 podman[224516]: 2025-10-07 13:47:47.346163831 +0000 UTC m=+0.026019213 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  7 09:47:47 np0005473739 kernel: podman0: port 1(veth0) entered blocking state
Oct  7 09:47:47 np0005473739 kernel: podman0: port 1(veth0) entered disabled state
Oct  7 09:47:47 np0005473739 kernel: veth0: entered allmulticast mode
Oct  7 09:47:47 np0005473739 kernel: veth0: entered promiscuous mode
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.4522] manager: (veth0): new Veth device (/org/freedesktop/NetworkManager/Devices/22)
Oct  7 09:47:47 np0005473739 kernel: podman0: port 1(veth0) entered blocking state
Oct  7 09:47:47 np0005473739 kernel: podman0: port 1(veth0) entered forwarding state
Oct  7 09:47:47 np0005473739 systemd-udevd[224537]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 09:47:47 np0005473739 systemd-udevd[224534]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.4572] device (veth0): carrier: link connected
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.4608] device (podman0): carrier: link connected
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.4762] device (podman0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.4780] device (podman0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.4797] device (podman0): Activation: starting connection 'podman0' (702d1ecd-f22c-4a9e-af41-ef865076f3d1)
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.4800] device (podman0): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.4807] device (podman0): state change: prepare -> config (reason 'none', managed-type: 'external')
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.4823] device (podman0): state change: config -> ip-config (reason 'none', managed-type: 'external')
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.4828] device (podman0): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Oct  7 09:47:47 np0005473739 systemd[1]: Starting Network Manager Script Dispatcher Service...
Oct  7 09:47:47 np0005473739 systemd[1]: Started Network Manager Script Dispatcher Service.
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.5311] device (podman0): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.5317] device (podman0): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.5329] device (podman0): Activation: successful, device activated.
Oct  7 09:47:47 np0005473739 systemd[1]: iscsi.service: Unit cannot be reloaded because it is inactive.
Oct  7 09:47:47 np0005473739 systemd[1]: Started libpod-conmon-a5c9fca3de4cb962cc746e4199eca8a43b3e0c6f0801d0501f1513db6cb11878.scope.
Oct  7 09:47:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:47:47 np0005473739 podman[224516]: 2025-10-07 13:47:47.787849284 +0000 UTC m=+0.467704626 container init a5c9fca3de4cb962cc746e4199eca8a43b3e0c6f0801d0501f1513db6cb11878 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:47:47 np0005473739 podman[224516]: 2025-10-07 13:47:47.803376488 +0000 UTC m=+0.483231780 container start a5c9fca3de4cb962cc746e4199eca8a43b3e0c6f0801d0501f1513db6cb11878 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  7 09:47:47 np0005473739 podman[224516]: 2025-10-07 13:47:47.807328421 +0000 UTC m=+0.487183763 container attach a5c9fca3de4cb962cc746e4199eca8a43b3e0c6f0801d0501f1513db6cb11878 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:47:47 np0005473739 iscsid_config[224673]: iqn.1994-05.com.redhat:ca85e68997f5#015
Oct  7 09:47:47 np0005473739 systemd[1]: libpod-a5c9fca3de4cb962cc746e4199eca8a43b3e0c6f0801d0501f1513db6cb11878.scope: Deactivated successfully.
Oct  7 09:47:47 np0005473739 podman[224516]: 2025-10-07 13:47:47.811067867 +0000 UTC m=+0.490923169 container died a5c9fca3de4cb962cc746e4199eca8a43b3e0c6f0801d0501f1513db6cb11878 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:47:47 np0005473739 kernel: podman0: port 1(veth0) entered disabled state
Oct  7 09:47:47 np0005473739 kernel: veth0 (unregistering): left allmulticast mode
Oct  7 09:47:47 np0005473739 kernel: veth0 (unregistering): left promiscuous mode
Oct  7 09:47:47 np0005473739 kernel: podman0: port 1(veth0) entered disabled state
Oct  7 09:47:47 np0005473739 NetworkManager[44949]: <info>  [1759844867.8710] device (podman0): state change: activated -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 09:47:48 np0005473739 systemd[1]: run-netns-netns\x2db6ab8928\x2d29cf\x2da224\x2d7588\x2d109a379cc80e.mount: Deactivated successfully.
Oct  7 09:47:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e337712932a4cb3efddeffe45808ea8539dc27367a58ca85a5d5c52b70f88b5b-merged.mount: Deactivated successfully.
Oct  7 09:47:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5c9fca3de4cb962cc746e4199eca8a43b3e0c6f0801d0501f1513db6cb11878-userdata-shm.mount: Deactivated successfully.
Oct  7 09:47:48 np0005473739 podman[224516]: 2025-10-07 13:47:48.240181101 +0000 UTC m=+0.920036433 container remove a5c9fca3de4cb962cc746e4199eca8a43b3e0c6f0801d0501f1513db6cb11878 (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid_config, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 09:47:48 np0005473739 python3.9[224443]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman run --name iscsid_config --detach=False --rm --tty=True quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f /usr/sbin/iscsi-iname
Oct  7 09:47:48 np0005473739 systemd[1]: libpod-conmon-a5c9fca3de4cb962cc746e4199eca8a43b3e0c6f0801d0501f1513db6cb11878.scope: Deactivated successfully.
Oct  7 09:47:48 np0005473739 python3.9[224443]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: Error generating systemd: #012DEPRECATED command:#012It is recommended to use Quadlets for running containers and pods under systemd.#012#012Please refer to podman-systemd.unit(5) for details.#012Error: iscsid_config does not refer to a container or pod: no pod with name or ID iscsid_config found: no such pod: no container with name or ID "iscsid_config" found: no such container
Oct  7 09:47:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v601: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:49 np0005473739 python3.9[224913]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:49 np0005473739 python3.9[225036]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844868.5607429-119-153439488941604/.source.iscsi _original_basename=.g62cmvfa follow=False checksum=23f25cf914c2512cfd7c47a46615279c343d57de backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:50 np0005473739 python3.9[225188]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v602: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:51 np0005473739 python3.9[225338]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/iscsid.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:47:52 np0005473739 python3.9[225492]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:47:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v603: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:52 np0005473739 python3.9[225644]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:47:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:53 np0005473739 python3.9[225796]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:54 np0005473739 python3.9[225874]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:47:54 np0005473739 python3.9[226026]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v604: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:55 np0005473739 python3.9[226104]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:47:56 np0005473739 python3.9[226256]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:56 np0005473739 python3.9[226408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v605: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:57 np0005473739 python3.9[226486]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:57 np0005473739 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Oct  7 09:47:57 np0005473739 python3.9[226638]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:47:58 np0005473739 podman[226639]: 2025-10-07 13:47:58.071072433 +0000 UTC m=+0.121406418 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:47:58 np0005473739 python3.9[226741]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:47:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:47:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v606: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:47:59 np0005473739 python3.9[226893]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:47:59 np0005473739 systemd[1]: Reloading.
Oct  7 09:47:59 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:47:59 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:48:00.020 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:48:00.021 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:48:00.021 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:48:00 np0005473739 python3.9[227082]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:00 np0005473739 python3.9[227160]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v607: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:01 np0005473739 python3.9[227312]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:02 np0005473739 python3.9[227390]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:02 np0005473739 python3.9[227542]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:48:02 np0005473739 systemd[1]: Reloading.
Oct  7 09:48:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v608: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:03 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:48:03 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:48:03 np0005473739 systemd[1]: Starting Create netns directory...
Oct  7 09:48:03 np0005473739 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  7 09:48:03 np0005473739 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  7 09:48:03 np0005473739 systemd[1]: Finished Create netns directory.
Oct  7 09:48:03 np0005473739 podman[227579]: 2025-10-07 13:48:03.392159212 +0000 UTC m=+0.098250637 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent)
Oct  7 09:48:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:04 np0005473739 python3.9[227752]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:48:04 np0005473739 python3.9[227904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/iscsid/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v609: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:05 np0005473739 python3.9[228027]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/iscsid/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759844884.3804243-273-202732763681484/.source _original_basename=healthcheck follow=False checksum=2e1237e7fe015c809b173c52e24cfb87132f4344 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:48:06 np0005473739 python3.9[228179]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:48:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v610: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:07 np0005473739 python3.9[228331]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/iscsid.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:07 np0005473739 python3.9[228454]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/iscsid.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844886.5220814-298-143856167113994/.source.json _original_basename=.yej2s40e follow=False checksum=80e4f97460718c7e5c66b21ef8b846eba0e0dbc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:08 np0005473739 python3.9[228606]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/iscsid state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v611: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:10 np0005473739 python3.9[229033]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/iscsid config_pattern=*.json debug=False
Oct  7 09:48:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v612: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:11 np0005473739 python3.9[229185]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  7 09:48:12 np0005473739 python3.9[229337]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  7 09:48:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v613: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:13 np0005473739 python3[229516]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/iscsid config_id=iscsid config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  7 09:48:14 np0005473739 podman[229551]: 2025-10-07 13:48:14.149455586 +0000 UTC m=+0.053617468 container create fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 09:48:14 np0005473739 podman[229551]: 2025-10-07 13:48:14.120887421 +0000 UTC m=+0.025049343 image pull 74877095db294c27659f24e7f86074178a6f28eee68561c30e3ce4d18519e09c quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  7 09:48:14 np0005473739 python3[229516]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name iscsid --conmon-pidfile /run/iscsid.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=iscsid --label container_name=iscsid --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run:/run --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:z --volume /etc/target:/etc/target:z --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /var/lib/openstack/healthchecks/iscsid:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f
Oct  7 09:48:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v614: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:14 np0005473739 python3.9[229739]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:48:15 np0005473739 python3.9[229893]: ansible-file Invoked with path=/etc/systemd/system/edpm_iscsid.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:16 np0005473739 python3.9[229969]: ansible-stat Invoked with path=/etc/systemd/system/edpm_iscsid_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:48:16 np0005473739 python3.9[230120]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759844896.2556455-386-82952878663928/source dest=/etc/systemd/system/edpm_iscsid.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v615: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:17 np0005473739 python3.9[230196]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  7 09:48:17 np0005473739 systemd[1]: Reloading.
Oct  7 09:48:17 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:48:17 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:48:18 np0005473739 python3.9[230308]: ansible-systemd Invoked with state=restarted name=edpm_iscsid.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:48:18 np0005473739 systemd[1]: Reloading.
Oct  7 09:48:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:18 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:48:18 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:48:18 np0005473739 systemd[1]: Starting iscsid container...
Oct  7 09:48:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v616: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:19 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:48:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f07570ee50af560688030a4239a76b547f82fd7ef89ec1b53b62e5ae4c4a025/merged/etc/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f07570ee50af560688030a4239a76b547f82fd7ef89ec1b53b62e5ae4c4a025/merged/etc/target supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1f07570ee50af560688030a4239a76b547f82fd7ef89ec1b53b62e5ae4c4a025/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:19 np0005473739 systemd[1]: Started /usr/bin/podman healthcheck run fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c.
Oct  7 09:48:19 np0005473739 podman[230348]: 2025-10-07 13:48:19.112916278 +0000 UTC m=+0.158390527 container init fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 09:48:19 np0005473739 iscsid[230363]: + sudo -E kolla_set_configs
Oct  7 09:48:19 np0005473739 podman[230348]: 2025-10-07 13:48:19.143011194 +0000 UTC m=+0.188485333 container start fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 09:48:19 np0005473739 podman[230348]: iscsid
Oct  7 09:48:19 np0005473739 systemd[1]: Started iscsid container.
Oct  7 09:48:19 np0005473739 systemd[1]: Created slice User Slice of UID 0.
Oct  7 09:48:19 np0005473739 systemd[1]: Starting User Runtime Directory /run/user/0...
Oct  7 09:48:19 np0005473739 podman[230370]: 2025-10-07 13:48:19.242988356 +0000 UTC m=+0.089687651 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:48:19 np0005473739 systemd[1]: Finished User Runtime Directory /run/user/0.
Oct  7 09:48:19 np0005473739 systemd[1]: fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c-2baa887f93426dc0.service: Main process exited, code=exited, status=1/FAILURE
Oct  7 09:48:19 np0005473739 systemd[1]: fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c-2baa887f93426dc0.service: Failed with result 'exit-code'.
Oct  7 09:48:19 np0005473739 systemd[1]: Starting User Manager for UID 0...
Oct  7 09:48:19 np0005473739 systemd[230398]: Queued start job for default target Main User Target.
Oct  7 09:48:19 np0005473739 systemd[230398]: Created slice User Application Slice.
Oct  7 09:48:19 np0005473739 systemd[230398]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Oct  7 09:48:19 np0005473739 systemd[230398]: Started Daily Cleanup of User's Temporary Directories.
Oct  7 09:48:19 np0005473739 systemd[230398]: Reached target Paths.
Oct  7 09:48:19 np0005473739 systemd[230398]: Reached target Timers.
Oct  7 09:48:19 np0005473739 systemd[230398]: Starting D-Bus User Message Bus Socket...
Oct  7 09:48:19 np0005473739 systemd[230398]: Starting Create User's Volatile Files and Directories...
Oct  7 09:48:19 np0005473739 systemd[230398]: Finished Create User's Volatile Files and Directories.
Oct  7 09:48:19 np0005473739 systemd[230398]: Listening on D-Bus User Message Bus Socket.
Oct  7 09:48:19 np0005473739 systemd[230398]: Reached target Sockets.
Oct  7 09:48:19 np0005473739 systemd[230398]: Reached target Basic System.
Oct  7 09:48:19 np0005473739 systemd[230398]: Reached target Main User Target.
Oct  7 09:48:19 np0005473739 systemd[230398]: Startup finished in 153ms.
Oct  7 09:48:19 np0005473739 systemd[1]: Started User Manager for UID 0.
Oct  7 09:48:19 np0005473739 systemd[1]: Started Session c3 of User root.
Oct  7 09:48:19 np0005473739 iscsid[230363]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  7 09:48:19 np0005473739 iscsid[230363]: INFO:__main__:Validating config file
Oct  7 09:48:19 np0005473739 iscsid[230363]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  7 09:48:19 np0005473739 iscsid[230363]: INFO:__main__:Writing out command to execute
Oct  7 09:48:19 np0005473739 systemd[1]: session-c3.scope: Deactivated successfully.
Oct  7 09:48:19 np0005473739 iscsid[230363]: ++ cat /run_command
Oct  7 09:48:19 np0005473739 iscsid[230363]: + CMD='/usr/sbin/iscsid -f'
Oct  7 09:48:19 np0005473739 iscsid[230363]: + ARGS=
Oct  7 09:48:19 np0005473739 iscsid[230363]: + sudo kolla_copy_cacerts
Oct  7 09:48:19 np0005473739 systemd[1]: Started Session c4 of User root.
Oct  7 09:48:19 np0005473739 iscsid[230363]: + [[ ! -n '' ]]
Oct  7 09:48:19 np0005473739 iscsid[230363]: + . kolla_extend_start
Oct  7 09:48:19 np0005473739 iscsid[230363]: ++ [[ ! -f /etc/iscsi/initiatorname.iscsi ]]
Oct  7 09:48:19 np0005473739 iscsid[230363]: + echo 'Running command: '\''/usr/sbin/iscsid -f'\'''
Oct  7 09:48:19 np0005473739 iscsid[230363]: + umask 0022
Oct  7 09:48:19 np0005473739 iscsid[230363]: + exec /usr/sbin/iscsid -f
Oct  7 09:48:19 np0005473739 iscsid[230363]: Running command: '/usr/sbin/iscsid -f'
Oct  7 09:48:19 np0005473739 systemd[1]: session-c4.scope: Deactivated successfully.
Oct  7 09:48:19 np0005473739 kernel: Loading iSCSI transport class v2.0-870.
Oct  7 09:48:19 np0005473739 python3.9[230562]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.iscsid_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:48:20 np0005473739 python3.9[230719]: ansible-ansible.builtin.file Invoked with path=/etc/iscsi/.iscsid_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v617: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:21 np0005473739 python3.9[230871]: ansible-ansible.builtin.service_facts Invoked
Oct  7 09:48:21 np0005473739 network[230888]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  7 09:48:21 np0005473739 network[230889]: 'network-scripts' will be removed from distribution in near future.
Oct  7 09:48:21 np0005473739 network[230890]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:48:22
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'vms', 'volumes', '.rgw.root', '.mgr']
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:48:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v618: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v619: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:25 np0005473739 python3.9[231165]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:48:26 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 57c74cd4-aa7c-4f8f-88d5-02eec62f2f0a does not exist
Oct  7 09:48:26 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev dd47b55b-bb59-42f5-acc9-bbb3d37aecf9 does not exist
Oct  7 09:48:26 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 30da63b3-7418-4b28-b261-0f3640f6e34d does not exist
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:48:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:48:26 np0005473739 python3.9[231460]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Oct  7 09:48:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v620: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:27 np0005473739 podman[231667]: 2025-10-07 13:48:27.006436547 +0000 UTC m=+0.040067170 container create 2b82339db1e61a4b43b8534464a975dd998995b9e1e27a4302263f46f2fae0a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:48:27 np0005473739 systemd[1]: Started libpod-conmon-2b82339db1e61a4b43b8534464a975dd998995b9e1e27a4302263f46f2fae0a5.scope.
Oct  7 09:48:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:48:27 np0005473739 podman[231667]: 2025-10-07 13:48:26.987961819 +0000 UTC m=+0.021592462 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:48:27 np0005473739 podman[231667]: 2025-10-07 13:48:27.091640629 +0000 UTC m=+0.125271262 container init 2b82339db1e61a4b43b8534464a975dd998995b9e1e27a4302263f46f2fae0a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:48:27 np0005473739 podman[231667]: 2025-10-07 13:48:27.100923815 +0000 UTC m=+0.134554478 container start 2b82339db1e61a4b43b8534464a975dd998995b9e1e27a4302263f46f2fae0a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 09:48:27 np0005473739 podman[231667]: 2025-10-07 13:48:27.105018102 +0000 UTC m=+0.138648745 container attach 2b82339db1e61a4b43b8534464a975dd998995b9e1e27a4302263f46f2fae0a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 09:48:27 np0005473739 naughty_chatelet[231711]: 167 167
Oct  7 09:48:27 np0005473739 systemd[1]: libpod-2b82339db1e61a4b43b8534464a975dd998995b9e1e27a4302263f46f2fae0a5.scope: Deactivated successfully.
Oct  7 09:48:27 np0005473739 conmon[231711]: conmon 2b82339db1e61a4b43b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-2b82339db1e61a4b43b8534464a975dd998995b9e1e27a4302263f46f2fae0a5.scope/container/memory.events
Oct  7 09:48:27 np0005473739 podman[231667]: 2025-10-07 13:48:27.111495194 +0000 UTC m=+0.145125797 container died 2b82339db1e61a4b43b8534464a975dd998995b9e1e27a4302263f46f2fae0a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:48:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c19dec01614e2fdd9816de458fa4ae1aaf8540e1d1efdf035b1a748a1cbf9c1f-merged.mount: Deactivated successfully.
Oct  7 09:48:27 np0005473739 podman[231667]: 2025-10-07 13:48:27.160556351 +0000 UTC m=+0.194186964 container remove 2b82339db1e61a4b43b8534464a975dd998995b9e1e27a4302263f46f2fae0a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_chatelet, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:48:27 np0005473739 systemd[1]: libpod-conmon-2b82339db1e61a4b43b8534464a975dd998995b9e1e27a4302263f46f2fae0a5.scope: Deactivated successfully.
Oct  7 09:48:27 np0005473739 podman[231781]: 2025-10-07 13:48:27.350883661 +0000 UTC m=+0.052067207 container create 1bb9e8b94cf81b86b781a2d1df03384474acb45cf18a28a04a802a31ec132c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:48:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:48:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:48:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:48:27 np0005473739 python3.9[231775]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:27 np0005473739 systemd[1]: Started libpod-conmon-1bb9e8b94cf81b86b781a2d1df03384474acb45cf18a28a04a802a31ec132c52.scope.
Oct  7 09:48:27 np0005473739 podman[231781]: 2025-10-07 13:48:27.330610296 +0000 UTC m=+0.031793872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:48:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:48:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fa56da19d20bfba014821783eb64709ca846ff043d15145202c826e8d0158e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fa56da19d20bfba014821783eb64709ca846ff043d15145202c826e8d0158e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fa56da19d20bfba014821783eb64709ca846ff043d15145202c826e8d0158e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fa56da19d20bfba014821783eb64709ca846ff043d15145202c826e8d0158e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7fa56da19d20bfba014821783eb64709ca846ff043d15145202c826e8d0158e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:27 np0005473739 podman[231781]: 2025-10-07 13:48:27.44882251 +0000 UTC m=+0.150006086 container init 1bb9e8b94cf81b86b781a2d1df03384474acb45cf18a28a04a802a31ec132c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2)
Oct  7 09:48:27 np0005473739 podman[231781]: 2025-10-07 13:48:27.457838758 +0000 UTC m=+0.159022344 container start 1bb9e8b94cf81b86b781a2d1df03384474acb45cf18a28a04a802a31ec132c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 09:48:27 np0005473739 podman[231781]: 2025-10-07 13:48:27.462154323 +0000 UTC m=+0.163337899 container attach 1bb9e8b94cf81b86b781a2d1df03384474acb45cf18a28a04a802a31ec132c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:48:27 np0005473739 python3.9[231925]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844906.8812935-460-98462293002824/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:28 np0005473739 youthful_volhard[231798]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:48:28 np0005473739 youthful_volhard[231798]: --> relative data size: 1.0
Oct  7 09:48:28 np0005473739 youthful_volhard[231798]: --> All data devices are unavailable
Oct  7 09:48:28 np0005473739 systemd[1]: libpod-1bb9e8b94cf81b86b781a2d1df03384474acb45cf18a28a04a802a31ec132c52.scope: Deactivated successfully.
Oct  7 09:48:28 np0005473739 podman[231781]: 2025-10-07 13:48:28.516422283 +0000 UTC m=+1.217605839 container died 1bb9e8b94cf81b86b781a2d1df03384474acb45cf18a28a04a802a31ec132c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:48:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c7fa56da19d20bfba014821783eb64709ca846ff043d15145202c826e8d0158e-merged.mount: Deactivated successfully.
Oct  7 09:48:28 np0005473739 podman[232072]: 2025-10-07 13:48:28.584601884 +0000 UTC m=+0.098418877 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  7 09:48:28 np0005473739 podman[231781]: 2025-10-07 13:48:28.591081159 +0000 UTC m=+1.292264705 container remove 1bb9e8b94cf81b86b781a2d1df03384474acb45cf18a28a04a802a31ec132c52 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 09:48:28 np0005473739 systemd[1]: libpod-conmon-1bb9e8b94cf81b86b781a2d1df03384474acb45cf18a28a04a802a31ec132c52.scope: Deactivated successfully.
Oct  7 09:48:28 np0005473739 python3.9[232126]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v621: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:29 np0005473739 podman[232377]: 2025-10-07 13:48:29.183142178 +0000 UTC m=+0.043753693 container create cdce182f2880677e0adf35f670ba875e85c3a9e71459e9abd917c3e7705eb342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermi, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:48:29 np0005473739 systemd[1]: Started libpod-conmon-cdce182f2880677e0adf35f670ba875e85c3a9e71459e9abd917c3e7705eb342.scope.
Oct  7 09:48:29 np0005473739 podman[232377]: 2025-10-07 13:48:29.164146625 +0000 UTC m=+0.024758160 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:48:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:48:29 np0005473739 podman[232377]: 2025-10-07 13:48:29.297583206 +0000 UTC m=+0.158194781 container init cdce182f2880677e0adf35f670ba875e85c3a9e71459e9abd917c3e7705eb342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:48:29 np0005473739 podman[232377]: 2025-10-07 13:48:29.304695887 +0000 UTC m=+0.165307422 container start cdce182f2880677e0adf35f670ba875e85c3a9e71459e9abd917c3e7705eb342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 09:48:29 np0005473739 podman[232377]: 2025-10-07 13:48:29.308630984 +0000 UTC m=+0.169242549 container attach cdce182f2880677e0adf35f670ba875e85c3a9e71459e9abd917c3e7705eb342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermi, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:48:29 np0005473739 confident_fermi[232428]: 167 167
Oct  7 09:48:29 np0005473739 systemd[1]: libpod-cdce182f2880677e0adf35f670ba875e85c3a9e71459e9abd917c3e7705eb342.scope: Deactivated successfully.
Oct  7 09:48:29 np0005473739 podman[232377]: 2025-10-07 13:48:29.309771635 +0000 UTC m=+0.170383160 container died cdce182f2880677e0adf35f670ba875e85c3a9e71459e9abd917c3e7705eb342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:48:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-471e2af8edf47cb790d76538803cec2ac644e76953d234a7d60b8da26f96f8ea-merged.mount: Deactivated successfully.
Oct  7 09:48:29 np0005473739 podman[232377]: 2025-10-07 13:48:29.35702584 +0000 UTC m=+0.217637365 container remove cdce182f2880677e0adf35f670ba875e85c3a9e71459e9abd917c3e7705eb342 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:48:29 np0005473739 systemd[1]: libpod-conmon-cdce182f2880677e0adf35f670ba875e85c3a9e71459e9abd917c3e7705eb342.scope: Deactivated successfully.
Oct  7 09:48:29 np0005473739 podman[232470]: 2025-10-07 13:48:29.529290839 +0000 UTC m=+0.042035265 container create 08e10a9b047cd48c5cf450b7348b7ea8246f22f7f5c81377d8d6e96805c1acb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rubin, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 09:48:29 np0005473739 systemd[1]: Started libpod-conmon-08e10a9b047cd48c5cf450b7348b7ea8246f22f7f5c81377d8d6e96805c1acb8.scope.
Oct  7 09:48:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:48:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88e4ebe5b52f087f8b439b76994ab2e02366d0cc7c9afadb1439b3575d11d2c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88e4ebe5b52f087f8b439b76994ab2e02366d0cc7c9afadb1439b3575d11d2c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88e4ebe5b52f087f8b439b76994ab2e02366d0cc7c9afadb1439b3575d11d2c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b88e4ebe5b52f087f8b439b76994ab2e02366d0cc7c9afadb1439b3575d11d2c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:29 np0005473739 podman[232470]: 2025-10-07 13:48:29.512231849 +0000 UTC m=+0.024976295 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:48:29 np0005473739 python3.9[232448]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:48:29 np0005473739 podman[232470]: 2025-10-07 13:48:29.618681802 +0000 UTC m=+0.131426268 container init 08e10a9b047cd48c5cf450b7348b7ea8246f22f7f5c81377d8d6e96805c1acb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rubin, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 09:48:29 np0005473739 podman[232470]: 2025-10-07 13:48:29.62938733 +0000 UTC m=+0.142131766 container start 08e10a9b047cd48c5cf450b7348b7ea8246f22f7f5c81377d8d6e96805c1acb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rubin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3)
Oct  7 09:48:29 np0005473739 podman[232470]: 2025-10-07 13:48:29.633541093 +0000 UTC m=+0.146285529 container attach 08e10a9b047cd48c5cf450b7348b7ea8246f22f7f5c81377d8d6e96805c1acb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 09:48:29 np0005473739 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  7 09:48:29 np0005473739 systemd[1]: Stopped Load Kernel Modules.
Oct  7 09:48:29 np0005473739 systemd[1]: Stopping Load Kernel Modules...
Oct  7 09:48:29 np0005473739 systemd[1]: Starting Load Kernel Modules...
Oct  7 09:48:29 np0005473739 systemd[1]: Stopping User Manager for UID 0...
Oct  7 09:48:29 np0005473739 systemd[230398]: Activating special unit Exit the Session...
Oct  7 09:48:29 np0005473739 systemd[230398]: Stopped target Main User Target.
Oct  7 09:48:29 np0005473739 systemd[230398]: Stopped target Basic System.
Oct  7 09:48:29 np0005473739 systemd[230398]: Stopped target Paths.
Oct  7 09:48:29 np0005473739 systemd[230398]: Stopped target Sockets.
Oct  7 09:48:29 np0005473739 systemd[230398]: Stopped target Timers.
Oct  7 09:48:29 np0005473739 systemd[230398]: Stopped Daily Cleanup of User's Temporary Directories.
Oct  7 09:48:29 np0005473739 systemd[230398]: Closed D-Bus User Message Bus Socket.
Oct  7 09:48:29 np0005473739 systemd[230398]: Stopped Create User's Volatile Files and Directories.
Oct  7 09:48:29 np0005473739 systemd[230398]: Removed slice User Application Slice.
Oct  7 09:48:29 np0005473739 systemd[230398]: Reached target Shutdown.
Oct  7 09:48:29 np0005473739 systemd[230398]: Finished Exit the Session.
Oct  7 09:48:29 np0005473739 systemd[230398]: Reached target Exit the Session.
Oct  7 09:48:29 np0005473739 systemd[1]: Finished Load Kernel Modules.
Oct  7 09:48:29 np0005473739 systemd[1]: user@0.service: Deactivated successfully.
Oct  7 09:48:29 np0005473739 systemd[1]: Stopped User Manager for UID 0.
Oct  7 09:48:29 np0005473739 systemd[1]: Stopping User Runtime Directory /run/user/0...
Oct  7 09:48:29 np0005473739 systemd[1]: run-user-0.mount: Deactivated successfully.
Oct  7 09:48:29 np0005473739 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Oct  7 09:48:29 np0005473739 systemd[1]: Stopped User Runtime Directory /run/user/0.
Oct  7 09:48:29 np0005473739 systemd[1]: Removed slice User Slice of UID 0.
Oct  7 09:48:30 np0005473739 python3.9[232649]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]: {
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:    "0": [
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:        {
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "devices": [
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "/dev/loop3"
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            ],
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_name": "ceph_lv0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_size": "21470642176",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "name": "ceph_lv0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "tags": {
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.cluster_name": "ceph",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.crush_device_class": "",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.encrypted": "0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.osd_id": "0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.type": "block",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.vdo": "0"
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            },
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "type": "block",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "vg_name": "ceph_vg0"
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:        }
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:    ],
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:    "1": [
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:        {
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "devices": [
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "/dev/loop4"
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            ],
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_name": "ceph_lv1",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_size": "21470642176",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "name": "ceph_lv1",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "tags": {
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.cluster_name": "ceph",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.crush_device_class": "",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.encrypted": "0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.osd_id": "1",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.type": "block",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.vdo": "0"
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            },
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "type": "block",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "vg_name": "ceph_vg1"
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:        }
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:    ],
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:    "2": [
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:        {
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "devices": [
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "/dev/loop5"
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            ],
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_name": "ceph_lv2",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_size": "21470642176",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "name": "ceph_lv2",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "tags": {
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.cluster_name": "ceph",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.crush_device_class": "",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.encrypted": "0",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.osd_id": "2",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.type": "block",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:                "ceph.vdo": "0"
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            },
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "type": "block",
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:            "vg_name": "ceph_vg2"
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:        }
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]:    ]
Oct  7 09:48:30 np0005473739 wizardly_rubin[232487]: }
Oct  7 09:48:30 np0005473739 systemd[1]: libpod-08e10a9b047cd48c5cf450b7348b7ea8246f22f7f5c81377d8d6e96805c1acb8.scope: Deactivated successfully.
Oct  7 09:48:30 np0005473739 podman[232470]: 2025-10-07 13:48:30.548658791 +0000 UTC m=+1.061403227 container died 08e10a9b047cd48c5cf450b7348b7ea8246f22f7f5c81377d8d6e96805c1acb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rubin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:48:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b88e4ebe5b52f087f8b439b76994ab2e02366d0cc7c9afadb1439b3575d11d2c-merged.mount: Deactivated successfully.
Oct  7 09:48:30 np0005473739 podman[232470]: 2025-10-07 13:48:30.618010952 +0000 UTC m=+1.130755378 container remove 08e10a9b047cd48c5cf450b7348b7ea8246f22f7f5c81377d8d6e96805c1acb8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_rubin, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:48:30 np0005473739 systemd[1]: libpod-conmon-08e10a9b047cd48c5cf450b7348b7ea8246f22f7f5c81377d8d6e96805c1acb8.scope: Deactivated successfully.
Oct  7 09:48:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v622: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:31 np0005473739 python3.9[232917]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:48:31 np0005473739 podman[232980]: 2025-10-07 13:48:31.273065171 +0000 UTC m=+0.036494656 container create 1494bdd5e84ac600db03c98e53b1b1d17e47123df1c176f89c3c2aaf276850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bassi, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:48:31 np0005473739 systemd[1]: Started libpod-conmon-1494bdd5e84ac600db03c98e53b1b1d17e47123df1c176f89c3c2aaf276850b3.scope.
Oct  7 09:48:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:48:31 np0005473739 podman[232980]: 2025-10-07 13:48:31.258038996 +0000 UTC m=+0.021468501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:48:31 np0005473739 podman[232980]: 2025-10-07 13:48:31.367847259 +0000 UTC m=+0.131276804 container init 1494bdd5e84ac600db03c98e53b1b1d17e47123df1c176f89c3c2aaf276850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bassi, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 09:48:31 np0005473739 podman[232980]: 2025-10-07 13:48:31.375000392 +0000 UTC m=+0.138429887 container start 1494bdd5e84ac600db03c98e53b1b1d17e47123df1c176f89c3c2aaf276850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bassi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 09:48:31 np0005473739 podman[232980]: 2025-10-07 13:48:31.378087125 +0000 UTC m=+0.141516670 container attach 1494bdd5e84ac600db03c98e53b1b1d17e47123df1c176f89c3c2aaf276850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bassi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:48:31 np0005473739 vigilant_bassi[233019]: 167 167
Oct  7 09:48:31 np0005473739 systemd[1]: libpod-1494bdd5e84ac600db03c98e53b1b1d17e47123df1c176f89c3c2aaf276850b3.scope: Deactivated successfully.
Oct  7 09:48:31 np0005473739 podman[232980]: 2025-10-07 13:48:31.38233827 +0000 UTC m=+0.145767785 container died 1494bdd5e84ac600db03c98e53b1b1d17e47123df1c176f89c3c2aaf276850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bassi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 09:48:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e2f45fa444f96a3b90bf5d6669b41cab2c48f1a4790b16cdf325140b63115edc-merged.mount: Deactivated successfully.
Oct  7 09:48:31 np0005473739 podman[232980]: 2025-10-07 13:48:31.426708878 +0000 UTC m=+0.190138383 container remove 1494bdd5e84ac600db03c98e53b1b1d17e47123df1c176f89c3c2aaf276850b3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_bassi, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 09:48:31 np0005473739 systemd[1]: libpod-conmon-1494bdd5e84ac600db03c98e53b1b1d17e47123df1c176f89c3c2aaf276850b3.scope: Deactivated successfully.
Oct  7 09:48:31 np0005473739 podman[233124]: 2025-10-07 13:48:31.642208254 +0000 UTC m=+0.054157703 container create eb4492c784bc39aa5505a92e921edd8ce9a40d6e80d11145d8d373fd0680079f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:48:31 np0005473739 systemd[1]: Started libpod-conmon-eb4492c784bc39aa5505a92e921edd8ce9a40d6e80d11145d8d373fd0680079f.scope.
Oct  7 09:48:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:48:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd8bc289cb9afb99fe346c7af6c82fbe26bf58c3351638b7783f0fe288d4326/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd8bc289cb9afb99fe346c7af6c82fbe26bf58c3351638b7783f0fe288d4326/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd8bc289cb9afb99fe346c7af6c82fbe26bf58c3351638b7783f0fe288d4326/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebd8bc289cb9afb99fe346c7af6c82fbe26bf58c3351638b7783f0fe288d4326/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:48:31 np0005473739 podman[233124]: 2025-10-07 13:48:31.621810413 +0000 UTC m=+0.033759832 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:48:31 np0005473739 podman[233124]: 2025-10-07 13:48:31.726968591 +0000 UTC m=+0.138918100 container init eb4492c784bc39aa5505a92e921edd8ce9a40d6e80d11145d8d373fd0680079f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:48:31 np0005473739 podman[233124]: 2025-10-07 13:48:31.733304082 +0000 UTC m=+0.145253491 container start eb4492c784bc39aa5505a92e921edd8ce9a40d6e80d11145d8d373fd0680079f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:48:31 np0005473739 podman[233124]: 2025-10-07 13:48:31.746369784 +0000 UTC m=+0.158319273 container attach eb4492c784bc39aa5505a92e921edd8ce9a40d6e80d11145d8d373fd0680079f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 09:48:31 np0005473739 python3.9[233161]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:48:32 np0005473739 python3.9[233321]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:32 np0005473739 silly_nash[233164]: {
Oct  7 09:48:32 np0005473739 silly_nash[233164]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "osd_id": 2,
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "type": "bluestore"
Oct  7 09:48:32 np0005473739 silly_nash[233164]:    },
Oct  7 09:48:32 np0005473739 silly_nash[233164]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "osd_id": 1,
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "type": "bluestore"
Oct  7 09:48:32 np0005473739 silly_nash[233164]:    },
Oct  7 09:48:32 np0005473739 silly_nash[233164]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "osd_id": 0,
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:48:32 np0005473739 silly_nash[233164]:        "type": "bluestore"
Oct  7 09:48:32 np0005473739 silly_nash[233164]:    }
Oct  7 09:48:32 np0005473739 silly_nash[233164]: }
Oct  7 09:48:32 np0005473739 systemd[1]: libpod-eb4492c784bc39aa5505a92e921edd8ce9a40d6e80d11145d8d373fd0680079f.scope: Deactivated successfully.
Oct  7 09:48:32 np0005473739 systemd[1]: libpod-eb4492c784bc39aa5505a92e921edd8ce9a40d6e80d11145d8d373fd0680079f.scope: Consumed 1.035s CPU time.
Oct  7 09:48:32 np0005473739 podman[233124]: 2025-10-07 13:48:32.771494572 +0000 UTC m=+1.183444011 container died eb4492c784bc39aa5505a92e921edd8ce9a40d6e80d11145d8d373fd0680079f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:48:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ebd8bc289cb9afb99fe346c7af6c82fbe26bf58c3351638b7783f0fe288d4326-merged.mount: Deactivated successfully.
Oct  7 09:48:32 np0005473739 podman[233124]: 2025-10-07 13:48:32.892859326 +0000 UTC m=+1.304808755 container remove eb4492c784bc39aa5505a92e921edd8ce9a40d6e80d11145d8d373fd0680079f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_nash, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:48:32 np0005473739 systemd[1]: libpod-conmon-eb4492c784bc39aa5505a92e921edd8ce9a40d6e80d11145d8d373fd0680079f.scope: Deactivated successfully.
Oct  7 09:48:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v623: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:48:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:48:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 5f441d42-247b-4ad2-a6fd-143693316815 does not exist
Oct  7 09:48:32 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c9709364-3987-4b2b-97c3-ca2e2c622c2d does not exist
Oct  7 09:48:33 np0005473739 python3.9[233484]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844912.042497-518-232694786668919/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:33 np0005473739 podman[233657]: 2025-10-07 13:48:33.935003952 +0000 UTC m=+0.092847837 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct  7 09:48:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:48:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:48:34 np0005473739 python3.9[233699]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:48:34 np0005473739 python3.9[233854]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v624: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:35 np0005473739 python3.9[234006]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:36 np0005473739 python3.9[234158]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v625: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:37 np0005473739 python3.9[234310]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:37 np0005473739 python3.9[234462]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:38 np0005473739 python3.9[234614]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #30. Immutable memtables: 0.
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.593540) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 11] Flushing memtable with next log file: 30
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844918593615, "job": 11, "event": "flush_started", "num_memtables": 1, "num_entries": 1779, "num_deletes": 250, "total_data_size": 2989050, "memory_usage": 3033984, "flush_reason": "Manual Compaction"}
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 11] Level-0 flush table #31: started
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844918649785, "cf_name": "default", "job": 11, "event": "table_file_creation", "file_number": 31, "file_size": 1686450, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 11748, "largest_seqno": 13526, "table_properties": {"data_size": 1680621, "index_size": 2904, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14565, "raw_average_key_size": 20, "raw_value_size": 1667763, "raw_average_value_size": 2303, "num_data_blocks": 134, "num_entries": 724, "num_filter_entries": 724, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759844715, "oldest_key_time": 1759844715, "file_creation_time": 1759844918, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 31, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 11] Flush lasted 56296 microseconds, and 5856 cpu microseconds.
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.649844) [db/flush_job.cc:967] [default] [JOB 11] Level-0 flush table #31: 1686450 bytes OK
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.649867) [db/memtable_list.cc:519] [default] Level-0 commit table #31 started
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.665557) [db/memtable_list.cc:722] [default] Level-0 commit table #31: memtable #1 done
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.665619) EVENT_LOG_v1 {"time_micros": 1759844918665605, "job": 11, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.665650) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 11] Try to delete WAL files size 2981507, prev total WAL file size 2981507, number of live WAL files 2.
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000027.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.667011) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400323531' seq:72057594037927935, type:22 .. '6D67727374617400353032' seq:0, type:0; will stop at (end)
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 12] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 11 Base level 0, inputs: [31(1646KB)], [29(7796KB)]
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844918667078, "job": 12, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [31], "files_L6": [29], "score": -1, "input_data_size": 9670398, "oldest_snapshot_seqno": -1}
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 12] Generated table #32: 4010 keys, 7619058 bytes, temperature: kUnknown
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844918724083, "cf_name": "default", "job": 12, "event": "table_file_creation", "file_number": 32, "file_size": 7619058, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7590422, "index_size": 17518, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 95417, "raw_average_key_size": 23, "raw_value_size": 7516302, "raw_average_value_size": 1874, "num_data_blocks": 764, "num_entries": 4010, "num_filter_entries": 4010, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759844918, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 32, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.724391) [db/compaction/compaction_job.cc:1663] [default] [JOB 12] Compacted 1@0 + 1@6 files to L6 => 7619058 bytes
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.726008) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 169.4 rd, 133.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 7.6 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(10.3) write-amplify(4.5) OK, records in: 4427, records dropped: 417 output_compression: NoCompression
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.726030) EVENT_LOG_v1 {"time_micros": 1759844918726018, "job": 12, "event": "compaction_finished", "compaction_time_micros": 57096, "compaction_time_cpu_micros": 25831, "output_level": 6, "num_output_files": 1, "total_output_size": 7619058, "num_input_records": 4427, "num_output_records": 4010, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000031.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844918726545, "job": 12, "event": "table_file_deletion", "file_number": 31}
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000029.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759844918728608, "job": 12, "event": "table_file_deletion", "file_number": 29}
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.666840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.728645) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.728651) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.728653) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.728655) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:48:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:48:38.728657) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:48:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v626: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:39 np0005473739 python3.9[234766]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:39 np0005473739 python3.9[234918]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:48:40 np0005473739 python3.9[235072]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v627: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:41 np0005473739 python3.9[235224]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:48:41 np0005473739 systemd[1]: virtnodedevd.service: Deactivated successfully.
Oct  7 09:48:42 np0005473739 python3.9[235377]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:42 np0005473739 python3.9[235455]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:48:42 np0005473739 systemd[1]: virtproxyd.service: Deactivated successfully.
Oct  7 09:48:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v628: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:43 np0005473739 python3.9[235608]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:43 np0005473739 python3.9[235686]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:48:44 np0005473739 python3.9[235838]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v629: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:44 np0005473739 python3.9[235990]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:45 np0005473739 python3.9[236068]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:46 np0005473739 python3.9[236220]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:46 np0005473739 python3.9[236298]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v630: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:47 np0005473739 python3.9[236450]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:48:47 np0005473739 systemd[1]: Reloading.
Oct  7 09:48:47 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:48:47 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:48:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:48 np0005473739 python3.9[236639]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v631: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:49 np0005473739 python3.9[236717]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:49 np0005473739 podman[236841]: 2025-10-07 13:48:49.982111446 +0000 UTC m=+0.095723725 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3)
Oct  7 09:48:50 np0005473739 python3.9[236886]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:50 np0005473739 python3.9[236968]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v632: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:51 np0005473739 python3.9[237120]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:48:51 np0005473739 systemd[1]: Reloading.
Oct  7 09:48:51 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:48:51 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:48:51 np0005473739 systemd[1]: Starting Create netns directory...
Oct  7 09:48:51 np0005473739 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Oct  7 09:48:51 np0005473739 systemd[1]: netns-placeholder.service: Deactivated successfully.
Oct  7 09:48:51 np0005473739 systemd[1]: Finished Create netns directory.
Oct  7 09:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:48:52 np0005473739 python3.9[237314]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:48:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v633: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:53 np0005473739 python3.9[237466]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:53 np0005473739 systemd[1]: virtqemud.service: Deactivated successfully.
Oct  7 09:48:53 np0005473739 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  7 09:48:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:54 np0005473739 python3.9[237591]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759844932.9712846-725-188607353391428/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:48:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v634: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:55 np0005473739 python3.9[237743]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:48:55 np0005473739 python3.9[237895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:48:56 np0005473739 python3.9[238018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844935.2982347-750-18731066237013/.source.json _original_basename=.sxl3_50a follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v635: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:57 np0005473739 python3.9[238170]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:48:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:48:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v636: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:48:59 np0005473739 podman[238470]: 2025-10-07 13:48:59.147464393 +0000 UTC m=+0.130411231 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  7 09:48:59 np0005473739 python3.9[238623]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Oct  7 09:49:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:49:00.021 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:49:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:49:00.021 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:49:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:49:00.021 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:49:00 np0005473739 python3.9[238775]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  7 09:49:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v637: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:01 np0005473739 python3.9[238927]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Oct  7 09:49:02 np0005473739 python3[239106]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Oct  7 09:49:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v638: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:04 np0005473739 podman[239133]: 2025-10-07 13:49:04.078885934 +0000 UTC m=+0.064769919 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 09:49:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v639: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:05 np0005473739 podman[239120]: 2025-10-07 13:49:05.268415897 +0000 UTC m=+2.292052010 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  7 09:49:05 np0005473739 podman[239197]: 2025-10-07 13:49:05.432695771 +0000 UTC m=+0.062623861 container create 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct  7 09:49:05 np0005473739 podman[239197]: 2025-10-07 13:49:05.392323502 +0000 UTC m=+0.022251592 image pull f541ff382622bd8bc9ad206129d2a8e74c239ff4503fa3b67d3bdf6d5b50b511 quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  7 09:49:05 np0005473739 python3[239106]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume 
/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43
Oct  7 09:49:06 np0005473739 python3.9[239387]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:49:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v640: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:07 np0005473739 python3.9[239541]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:07 np0005473739 python3.9[239617]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:49:08 np0005473739 python3.9[239768]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759844947.9166698-838-66640177909947/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v641: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:09 np0005473739 python3.9[239844]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  7 09:49:09 np0005473739 systemd[1]: Reloading.
Oct  7 09:49:09 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:49:09 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:49:10 np0005473739 python3.9[239954]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:49:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v642: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:11 np0005473739 systemd[1]: Reloading.
Oct  7 09:49:11 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:49:11 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:49:11 np0005473739 systemd[1]: Starting multipathd container...
Oct  7 09:49:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:49:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61272b010c70d0d83d2a8a3a0ac5f67f885e010cbe0fd6d2f62d4df381e4dfe7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61272b010c70d0d83d2a8a3a0ac5f67f885e010cbe0fd6d2f62d4df381e4dfe7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:11 np0005473739 systemd[1]: Started /usr/bin/podman healthcheck run 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f.
Oct  7 09:49:11 np0005473739 podman[239994]: 2025-10-07 13:49:11.826120199 +0000 UTC m=+0.165282591 container init 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  7 09:49:11 np0005473739 multipathd[240009]: + sudo -E kolla_set_configs
Oct  7 09:49:11 np0005473739 podman[239994]: 2025-10-07 13:49:11.858735919 +0000 UTC m=+0.197898311 container start 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:49:11 np0005473739 podman[239994]: multipathd
Oct  7 09:49:11 np0005473739 systemd[1]: Started multipathd container.
Oct  7 09:49:11 np0005473739 multipathd[240009]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  7 09:49:11 np0005473739 multipathd[240009]: INFO:__main__:Validating config file
Oct  7 09:49:11 np0005473739 multipathd[240009]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  7 09:49:11 np0005473739 multipathd[240009]: INFO:__main__:Writing out command to execute
Oct  7 09:49:11 np0005473739 multipathd[240009]: ++ cat /run_command
Oct  7 09:49:11 np0005473739 multipathd[240009]: + CMD='/usr/sbin/multipathd -d'
Oct  7 09:49:11 np0005473739 multipathd[240009]: + ARGS=
Oct  7 09:49:11 np0005473739 multipathd[240009]: + sudo kolla_copy_cacerts
Oct  7 09:49:11 np0005473739 multipathd[240009]: + [[ ! -n '' ]]
Oct  7 09:49:11 np0005473739 multipathd[240009]: + . kolla_extend_start
Oct  7 09:49:11 np0005473739 multipathd[240009]: Running command: '/usr/sbin/multipathd -d'
Oct  7 09:49:11 np0005473739 multipathd[240009]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  7 09:49:11 np0005473739 multipathd[240009]: + umask 0022
Oct  7 09:49:11 np0005473739 multipathd[240009]: + exec /usr/sbin/multipathd -d
Oct  7 09:49:11 np0005473739 podman[240017]: 2025-10-07 13:49:11.976818786 +0000 UTC m=+0.106488745 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 09:49:11 np0005473739 systemd[1]: 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f-78c571e5c9e884f6.service: Main process exited, code=exited, status=1/FAILURE
Oct  7 09:49:11 np0005473739 multipathd[240009]: 5535.609398 | --------start up--------
Oct  7 09:49:11 np0005473739 multipathd[240009]: 5535.609412 | read /etc/multipath.conf
Oct  7 09:49:11 np0005473739 systemd[1]: 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f-78c571e5c9e884f6.service: Failed with result 'exit-code'.
Oct  7 09:49:11 np0005473739 multipathd[240009]: 5535.616714 | path checkers start up
Oct  7 09:49:12 np0005473739 python3.9[240200]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:49:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v643: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:13 np0005473739 python3.9[240354]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:49:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:14 np0005473739 python3.9[240519]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:49:14 np0005473739 systemd[1]: Stopping multipathd container...
Oct  7 09:49:14 np0005473739 multipathd[240009]: 5538.129698 | exit (signal)
Oct  7 09:49:14 np0005473739 multipathd[240009]: 5538.129765 | --------shut down-------
Oct  7 09:49:14 np0005473739 systemd[1]: libpod-819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f.scope: Deactivated successfully.
Oct  7 09:49:14 np0005473739 podman[240523]: 2025-10-07 13:49:14.537297499 +0000 UTC m=+0.068078047 container died 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 09:49:14 np0005473739 systemd[1]: 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f-78c571e5c9e884f6.timer: Deactivated successfully.
Oct  7 09:49:14 np0005473739 systemd[1]: Stopped /usr/bin/podman healthcheck run 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f.
Oct  7 09:49:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f-userdata-shm.mount: Deactivated successfully.
Oct  7 09:49:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-61272b010c70d0d83d2a8a3a0ac5f67f885e010cbe0fd6d2f62d4df381e4dfe7-merged.mount: Deactivated successfully.
Oct  7 09:49:14 np0005473739 podman[240523]: 2025-10-07 13:49:14.586148308 +0000 UTC m=+0.116928856 container cleanup 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:49:14 np0005473739 podman[240523]: multipathd
Oct  7 09:49:14 np0005473739 podman[240550]: multipathd
Oct  7 09:49:14 np0005473739 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Oct  7 09:49:14 np0005473739 systemd[1]: Stopped multipathd container.
Oct  7 09:49:14 np0005473739 systemd[1]: Starting multipathd container...
Oct  7 09:49:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:49:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61272b010c70d0d83d2a8a3a0ac5f67f885e010cbe0fd6d2f62d4df381e4dfe7/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/61272b010c70d0d83d2a8a3a0ac5f67f885e010cbe0fd6d2f62d4df381e4dfe7/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:14 np0005473739 systemd[1]: Started /usr/bin/podman healthcheck run 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f.
Oct  7 09:49:14 np0005473739 podman[240563]: 2025-10-07 13:49:14.897300815 +0000 UTC m=+0.215475786 container init 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 09:49:14 np0005473739 multipathd[240579]: + sudo -E kolla_set_configs
Oct  7 09:49:14 np0005473739 podman[240563]: 2025-10-07 13:49:14.934766826 +0000 UTC m=+0.252941807 container start 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:49:14 np0005473739 podman[240563]: multipathd
Oct  7 09:49:14 np0005473739 systemd[1]: Started multipathd container.
Oct  7 09:49:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v644: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:15 np0005473739 multipathd[240579]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  7 09:49:15 np0005473739 multipathd[240579]: INFO:__main__:Validating config file
Oct  7 09:49:15 np0005473739 multipathd[240579]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  7 09:49:15 np0005473739 multipathd[240579]: INFO:__main__:Writing out command to execute
Oct  7 09:49:15 np0005473739 multipathd[240579]: ++ cat /run_command
Oct  7 09:49:15 np0005473739 multipathd[240579]: + CMD='/usr/sbin/multipathd -d'
Oct  7 09:49:15 np0005473739 multipathd[240579]: + ARGS=
Oct  7 09:49:15 np0005473739 multipathd[240579]: + sudo kolla_copy_cacerts
Oct  7 09:49:15 np0005473739 podman[240586]: 2025-10-07 13:49:15.035087334 +0000 UTC m=+0.085817607 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 09:49:15 np0005473739 multipathd[240579]: + [[ ! -n '' ]]
Oct  7 09:49:15 np0005473739 multipathd[240579]: + . kolla_extend_start
Oct  7 09:49:15 np0005473739 multipathd[240579]: Running command: '/usr/sbin/multipathd -d'
Oct  7 09:49:15 np0005473739 multipathd[240579]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Oct  7 09:49:15 np0005473739 multipathd[240579]: + umask 0022
Oct  7 09:49:15 np0005473739 multipathd[240579]: + exec /usr/sbin/multipathd -d
Oct  7 09:49:15 np0005473739 systemd[1]: 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f-61738b21a29f8fc5.service: Main process exited, code=exited, status=1/FAILURE
Oct  7 09:49:15 np0005473739 systemd[1]: 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f-61738b21a29f8fc5.service: Failed with result 'exit-code'.
Oct  7 09:49:15 np0005473739 multipathd[240579]: 5538.682961 | --------start up--------
Oct  7 09:49:15 np0005473739 multipathd[240579]: 5538.682989 | read /etc/multipath.conf
Oct  7 09:49:15 np0005473739 multipathd[240579]: 5538.690200 | path checkers start up
Oct  7 09:49:15 np0005473739 python3.9[240769]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:16 np0005473739 python3.9[240921]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Oct  7 09:49:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v645: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:17 np0005473739 python3.9[241073]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Oct  7 09:49:17 np0005473739 kernel: Key type psk registered
Oct  7 09:49:18 np0005473739 python3.9[241236]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:49:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v646: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:19 np0005473739 python3.9[241359]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1759844957.8260112-918-224950771437433/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:19 np0005473739 python3.9[241511]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:20 np0005473739 podman[241635]: 2025-10-07 13:49:20.368438522 +0000 UTC m=+0.068174541 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:49:20 np0005473739 python3.9[241682]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:49:20 np0005473739 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Oct  7 09:49:20 np0005473739 systemd[1]: Stopped Load Kernel Modules.
Oct  7 09:49:20 np0005473739 systemd[1]: Stopping Load Kernel Modules...
Oct  7 09:49:20 np0005473739 systemd[1]: Starting Load Kernel Modules...
Oct  7 09:49:20 np0005473739 systemd[1]: Finished Load Kernel Modules.
Oct  7 09:49:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v647: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:21 np0005473739 python3.9[241839]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:49:22
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.meta', '.mgr', 'default.rgw.log', 'default.rgw.control', 'backups', 'vms', '.rgw.root', 'cephfs.cephfs.data', 'images', 'volumes']
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:49:22 np0005473739 python3.9[241923]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:49:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v648: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v649: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v650: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v651: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:29 np0005473739 systemd[1]: Reloading.
Oct  7 09:49:29 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:49:29 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:49:29 np0005473739 podman[241930]: 2025-10-07 13:49:29.472701894 +0000 UTC m=+0.156229277 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  7 09:49:29 np0005473739 systemd[1]: Reloading.
Oct  7 09:49:29 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:49:29 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:49:30 np0005473739 systemd-logind[801]: Watching system buttons on /dev/input/event0 (Power Button)
Oct  7 09:49:30 np0005473739 systemd-logind[801]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Oct  7 09:49:30 np0005473739 lvm[242063]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  7 09:49:30 np0005473739 lvm[242063]: VG ceph_vg0 finished
Oct  7 09:49:30 np0005473739 lvm[242065]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  7 09:49:30 np0005473739 lvm[242065]: VG ceph_vg2 finished
Oct  7 09:49:30 np0005473739 lvm[242064]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  7 09:49:30 np0005473739 lvm[242064]: VG ceph_vg1 finished
Oct  7 09:49:30 np0005473739 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Oct  7 09:49:30 np0005473739 systemd[1]: Starting man-db-cache-update.service...
Oct  7 09:49:30 np0005473739 systemd[1]: Reloading.
Oct  7 09:49:30 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:49:30 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:49:30 np0005473739 systemd[1]: Queuing reload/restart jobs for marked units…
Oct  7 09:49:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v652: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:31 np0005473739 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Oct  7 09:49:31 np0005473739 systemd[1]: Finished man-db-cache-update.service.
Oct  7 09:49:31 np0005473739 systemd[1]: man-db-cache-update.service: Consumed 1.541s CPU time.
Oct  7 09:49:31 np0005473739 systemd[1]: run-r826a04a61f234135821f5a621deb12dd.service: Deactivated successfully.
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:49:32 np0005473739 python3.9[243407]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.iscsid_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v653: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:33 np0005473739 python3.9[243557]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Oct  7 09:49:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:49:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 2a45e0ce-afc9-42b2-a39c-f22da577a018 does not exist
Oct  7 09:49:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f8f0dd1c-a391-4dec-8b75-a9f86fd47932 does not exist
Oct  7 09:49:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a062e51e-d4b7-479d-967e-b42dedb3fe49 does not exist
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:49:34 np0005473739 python3.9[243845]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:34 np0005473739 podman[243870]: 2025-10-07 13:49:34.298144065 +0000 UTC m=+0.090130574 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:49:34 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:49:34 np0005473739 podman[244081]: 2025-10-07 13:49:34.777596164 +0000 UTC m=+0.043940887 container create 23b37bbc3b80c4969418bf47275e9b66c1711f3e938b221071ad9ec4adb2e492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wing, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:49:34 np0005473739 systemd[1]: Started libpod-conmon-23b37bbc3b80c4969418bf47275e9b66c1711f3e938b221071ad9ec4adb2e492.scope.
Oct  7 09:49:34 np0005473739 podman[244081]: 2025-10-07 13:49:34.75524391 +0000 UTC m=+0.021588663 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:49:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:49:34 np0005473739 podman[244081]: 2025-10-07 13:49:34.880968024 +0000 UTC m=+0.147312757 container init 23b37bbc3b80c4969418bf47275e9b66c1711f3e938b221071ad9ec4adb2e492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wing, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:49:34 np0005473739 podman[244081]: 2025-10-07 13:49:34.892819943 +0000 UTC m=+0.159164656 container start 23b37bbc3b80c4969418bf47275e9b66c1711f3e938b221071ad9ec4adb2e492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:49:34 np0005473739 podman[244081]: 2025-10-07 13:49:34.896682368 +0000 UTC m=+0.163027111 container attach 23b37bbc3b80c4969418bf47275e9b66c1711f3e938b221071ad9ec4adb2e492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:49:34 np0005473739 systemd[1]: libpod-23b37bbc3b80c4969418bf47275e9b66c1711f3e938b221071ad9ec4adb2e492.scope: Deactivated successfully.
Oct  7 09:49:34 np0005473739 loving_wing[244097]: 167 167
Oct  7 09:49:34 np0005473739 conmon[244097]: conmon 23b37bbc3b80c4969418 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23b37bbc3b80c4969418bf47275e9b66c1711f3e938b221071ad9ec4adb2e492.scope/container/memory.events
Oct  7 09:49:34 np0005473739 podman[244081]: 2025-10-07 13:49:34.903921913 +0000 UTC m=+0.170266616 container died 23b37bbc3b80c4969418bf47275e9b66c1711f3e938b221071ad9ec4adb2e492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wing, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:49:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-eb76179c98d55eb42da43b951f47e14fd9cd214d5de491fc0ac4b8856fea5573-merged.mount: Deactivated successfully.
Oct  7 09:49:34 np0005473739 podman[244081]: 2025-10-07 13:49:34.963403059 +0000 UTC m=+0.229747772 container remove 23b37bbc3b80c4969418bf47275e9b66c1711f3e938b221071ad9ec4adb2e492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_wing, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True)
Oct  7 09:49:34 np0005473739 systemd[1]: libpod-conmon-23b37bbc3b80c4969418bf47275e9b66c1711f3e938b221071ad9ec4adb2e492.scope: Deactivated successfully.
Oct  7 09:49:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v654: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:35 np0005473739 podman[244133]: 2025-10-07 13:49:35.178059672 +0000 UTC m=+0.053321870 container create 951a82e18c1274fcaa1f728a6c987092b531153b4ef340cf3d9ad84347620d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 09:49:35 np0005473739 systemd[1]: Started libpod-conmon-951a82e18c1274fcaa1f728a6c987092b531153b4ef340cf3d9ad84347620d1d.scope.
Oct  7 09:49:35 np0005473739 podman[244133]: 2025-10-07 13:49:35.1546549 +0000 UTC m=+0.029917128 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:49:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:49:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d0d668c46b5a90a5d4e8dcca5599cd567f492247be6ee43926c79587378e892/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d0d668c46b5a90a5d4e8dcca5599cd567f492247be6ee43926c79587378e892/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d0d668c46b5a90a5d4e8dcca5599cd567f492247be6ee43926c79587378e892/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d0d668c46b5a90a5d4e8dcca5599cd567f492247be6ee43926c79587378e892/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d0d668c46b5a90a5d4e8dcca5599cd567f492247be6ee43926c79587378e892/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:35 np0005473739 podman[244133]: 2025-10-07 13:49:35.296770776 +0000 UTC m=+0.172033034 container init 951a82e18c1274fcaa1f728a6c987092b531153b4ef340cf3d9ad84347620d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:49:35 np0005473739 podman[244133]: 2025-10-07 13:49:35.305587503 +0000 UTC m=+0.180849701 container start 951a82e18c1274fcaa1f728a6c987092b531153b4ef340cf3d9ad84347620d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 09:49:35 np0005473739 podman[244133]: 2025-10-07 13:49:35.309077218 +0000 UTC m=+0.184339466 container attach 951a82e18c1274fcaa1f728a6c987092b531153b4ef340cf3d9ad84347620d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 09:49:35 np0005473739 python3.9[244217]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  7 09:49:35 np0005473739 systemd[1]: Reloading.
Oct  7 09:49:35 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:49:35 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:49:36 np0005473739 inspiring_curran[244184]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:49:36 np0005473739 inspiring_curran[244184]: --> relative data size: 1.0
Oct  7 09:49:36 np0005473739 inspiring_curran[244184]: --> All data devices are unavailable
Oct  7 09:49:36 np0005473739 systemd[1]: libpod-951a82e18c1274fcaa1f728a6c987092b531153b4ef340cf3d9ad84347620d1d.scope: Deactivated successfully.
Oct  7 09:49:36 np0005473739 systemd[1]: libpod-951a82e18c1274fcaa1f728a6c987092b531153b4ef340cf3d9ad84347620d1d.scope: Consumed 1.144s CPU time.
Oct  7 09:49:36 np0005473739 podman[244133]: 2025-10-07 13:49:36.529688919 +0000 UTC m=+1.404951127 container died 951a82e18c1274fcaa1f728a6c987092b531153b4ef340cf3d9ad84347620d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:49:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9d0d668c46b5a90a5d4e8dcca5599cd567f492247be6ee43926c79587378e892-merged.mount: Deactivated successfully.
Oct  7 09:49:36 np0005473739 podman[244133]: 2025-10-07 13:49:36.595726653 +0000 UTC m=+1.470988851 container remove 951a82e18c1274fcaa1f728a6c987092b531153b4ef340cf3d9ad84347620d1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_curran, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:49:36 np0005473739 systemd[1]: libpod-conmon-951a82e18c1274fcaa1f728a6c987092b531153b4ef340cf3d9ad84347620d1d.scope: Deactivated successfully.
Oct  7 09:49:36 np0005473739 python3.9[244437]: ansible-ansible.builtin.service_facts Invoked
Oct  7 09:49:36 np0005473739 network[244527]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Oct  7 09:49:36 np0005473739 network[244528]: 'network-scripts' will be removed from distribution in near future.
Oct  7 09:49:36 np0005473739 network[244530]: It is advised to switch to 'NetworkManager' instead for network management.
Oct  7 09:49:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v655: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:38 np0005473739 podman[244603]: 2025-10-07 13:49:38.231057427 +0000 UTC m=+0.081400638 container create 7393f59150e29b8a9b69466f66c7a6a2d715ab00acdb2f34862416c705c3c8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:49:38 np0005473739 podman[244603]: 2025-10-07 13:49:38.189857065 +0000 UTC m=+0.040200316 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:49:38 np0005473739 systemd[1]: Started libpod-conmon-7393f59150e29b8a9b69466f66c7a6a2d715ab00acdb2f34862416c705c3c8fe.scope.
Oct  7 09:49:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:49:38 np0005473739 podman[244603]: 2025-10-07 13:49:38.328876996 +0000 UTC m=+0.179220197 container init 7393f59150e29b8a9b69466f66c7a6a2d715ab00acdb2f34862416c705c3c8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 09:49:38 np0005473739 podman[244603]: 2025-10-07 13:49:38.341740064 +0000 UTC m=+0.192083245 container start 7393f59150e29b8a9b69466f66c7a6a2d715ab00acdb2f34862416c705c3c8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:49:38 np0005473739 podman[244603]: 2025-10-07 13:49:38.345242408 +0000 UTC m=+0.195585589 container attach 7393f59150e29b8a9b69466f66c7a6a2d715ab00acdb2f34862416c705c3c8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:49:38 np0005473739 xenodochial_mendel[244619]: 167 167
Oct  7 09:49:38 np0005473739 systemd[1]: libpod-7393f59150e29b8a9b69466f66c7a6a2d715ab00acdb2f34862416c705c3c8fe.scope: Deactivated successfully.
Oct  7 09:49:38 np0005473739 conmon[244619]: conmon 7393f59150e29b8a9b69 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7393f59150e29b8a9b69466f66c7a6a2d715ab00acdb2f34862416c705c3c8fe.scope/container/memory.events
Oct  7 09:49:38 np0005473739 podman[244603]: 2025-10-07 13:49:38.348534747 +0000 UTC m=+0.198877958 container died 7393f59150e29b8a9b69466f66c7a6a2d715ab00acdb2f34862416c705c3c8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:49:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1f06c66d428f156ef42afbd9eca155a3dfa4dde9b00683cd5c3e10433f99466f-merged.mount: Deactivated successfully.
Oct  7 09:49:38 np0005473739 podman[244603]: 2025-10-07 13:49:38.396952484 +0000 UTC m=+0.247295665 container remove 7393f59150e29b8a9b69466f66c7a6a2d715ab00acdb2f34862416c705c3c8fe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_mendel, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:49:38 np0005473739 systemd[1]: libpod-conmon-7393f59150e29b8a9b69466f66c7a6a2d715ab00acdb2f34862416c705c3c8fe.scope: Deactivated successfully.
Oct  7 09:49:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:38 np0005473739 podman[244643]: 2025-10-07 13:49:38.636352835 +0000 UTC m=+0.070860644 container create fcba5fd7a01517f13ebe8e5549663bfc532c136fefed537d9aa784bf9d91eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:49:38 np0005473739 systemd[1]: Started libpod-conmon-fcba5fd7a01517f13ebe8e5549663bfc532c136fefed537d9aa784bf9d91eb6a.scope.
Oct  7 09:49:38 np0005473739 podman[244643]: 2025-10-07 13:49:38.6068849 +0000 UTC m=+0.041392789 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:49:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:49:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74238f9958f6103ba9085d500892fbeea1f845ec586dc6ff825e4a9a63c314b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74238f9958f6103ba9085d500892fbeea1f845ec586dc6ff825e4a9a63c314b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74238f9958f6103ba9085d500892fbeea1f845ec586dc6ff825e4a9a63c314b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e74238f9958f6103ba9085d500892fbeea1f845ec586dc6ff825e4a9a63c314b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:38 np0005473739 podman[244643]: 2025-10-07 13:49:38.727248888 +0000 UTC m=+0.161756777 container init fcba5fd7a01517f13ebe8e5549663bfc532c136fefed537d9aa784bf9d91eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:49:38 np0005473739 podman[244643]: 2025-10-07 13:49:38.735066819 +0000 UTC m=+0.169574658 container start fcba5fd7a01517f13ebe8e5549663bfc532c136fefed537d9aa784bf9d91eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:49:38 np0005473739 podman[244643]: 2025-10-07 13:49:38.738659826 +0000 UTC m=+0.173167665 container attach fcba5fd7a01517f13ebe8e5549663bfc532c136fefed537d9aa784bf9d91eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:49:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v656: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]: {
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:    "0": [
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:        {
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "devices": [
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "/dev/loop3"
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            ],
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_name": "ceph_lv0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_size": "21470642176",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "name": "ceph_lv0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "tags": {
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.cluster_name": "ceph",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.crush_device_class": "",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.encrypted": "0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.osd_id": "0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.type": "block",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.vdo": "0"
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            },
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "type": "block",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "vg_name": "ceph_vg0"
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:        }
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:    ],
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:    "1": [
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:        {
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "devices": [
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "/dev/loop4"
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            ],
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_name": "ceph_lv1",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_size": "21470642176",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "name": "ceph_lv1",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "tags": {
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.cluster_name": "ceph",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.crush_device_class": "",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.encrypted": "0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.osd_id": "1",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.type": "block",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.vdo": "0"
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            },
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "type": "block",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "vg_name": "ceph_vg1"
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:        }
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:    ],
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:    "2": [
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:        {
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "devices": [
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "/dev/loop5"
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            ],
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_name": "ceph_lv2",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_size": "21470642176",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "name": "ceph_lv2",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "tags": {
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.cluster_name": "ceph",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.crush_device_class": "",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.encrypted": "0",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.osd_id": "2",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.type": "block",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:                "ceph.vdo": "0"
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            },
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "type": "block",
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:            "vg_name": "ceph_vg2"
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:        }
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]:    ]
Oct  7 09:49:39 np0005473739 nostalgic_beaver[244660]: }
Oct  7 09:49:39 np0005473739 systemd[1]: libpod-fcba5fd7a01517f13ebe8e5549663bfc532c136fefed537d9aa784bf9d91eb6a.scope: Deactivated successfully.
Oct  7 09:49:39 np0005473739 podman[244701]: 2025-10-07 13:49:39.590206358 +0000 UTC m=+0.028989613 container died fcba5fd7a01517f13ebe8e5549663bfc532c136fefed537d9aa784bf9d91eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:49:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e74238f9958f6103ba9085d500892fbeea1f845ec586dc6ff825e4a9a63c314b-merged.mount: Deactivated successfully.
Oct  7 09:49:39 np0005473739 podman[244701]: 2025-10-07 13:49:39.658865751 +0000 UTC m=+0.097648976 container remove fcba5fd7a01517f13ebe8e5549663bfc532c136fefed537d9aa784bf9d91eb6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_beaver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:49:39 np0005473739 systemd[1]: libpod-conmon-fcba5fd7a01517f13ebe8e5549663bfc532c136fefed537d9aa784bf9d91eb6a.scope: Deactivated successfully.
Oct  7 09:49:40 np0005473739 podman[244893]: 2025-10-07 13:49:40.381892565 +0000 UTC m=+0.038418489 container create 9ae86b56f41008184216d16cab3c2fc4aff39ab5bdd36024bc3469e67595494b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jepsen, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:49:40 np0005473739 systemd[1]: Started libpod-conmon-9ae86b56f41008184216d16cab3c2fc4aff39ab5bdd36024bc3469e67595494b.scope.
Oct  7 09:49:40 np0005473739 podman[244893]: 2025-10-07 13:49:40.365034 +0000 UTC m=+0.021559974 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:49:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:49:40 np0005473739 podman[244893]: 2025-10-07 13:49:40.493338882 +0000 UTC m=+0.149864816 container init 9ae86b56f41008184216d16cab3c2fc4aff39ab5bdd36024bc3469e67595494b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:49:40 np0005473739 podman[244893]: 2025-10-07 13:49:40.502163541 +0000 UTC m=+0.158689465 container start 9ae86b56f41008184216d16cab3c2fc4aff39ab5bdd36024bc3469e67595494b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jepsen, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 09:49:40 np0005473739 podman[244893]: 2025-10-07 13:49:40.505390897 +0000 UTC m=+0.161916821 container attach 9ae86b56f41008184216d16cab3c2fc4aff39ab5bdd36024bc3469e67595494b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:49:40 np0005473739 gallant_jepsen[244913]: 167 167
Oct  7 09:49:40 np0005473739 podman[244893]: 2025-10-07 13:49:40.510131165 +0000 UTC m=+0.166657089 container died 9ae86b56f41008184216d16cab3c2fc4aff39ab5bdd36024bc3469e67595494b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jepsen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 09:49:40 np0005473739 systemd[1]: libpod-9ae86b56f41008184216d16cab3c2fc4aff39ab5bdd36024bc3469e67595494b.scope: Deactivated successfully.
Oct  7 09:49:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-82c160fd24332315c523b4f3959bbd6f11f8c09447b70ba2782e1c4f54f296ae-merged.mount: Deactivated successfully.
Oct  7 09:49:40 np0005473739 podman[244893]: 2025-10-07 13:49:40.562151289 +0000 UTC m=+0.218677213 container remove 9ae86b56f41008184216d16cab3c2fc4aff39ab5bdd36024bc3469e67595494b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_jepsen, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 09:49:40 np0005473739 systemd[1]: libpod-conmon-9ae86b56f41008184216d16cab3c2fc4aff39ab5bdd36024bc3469e67595494b.scope: Deactivated successfully.
Oct  7 09:49:40 np0005473739 podman[244948]: 2025-10-07 13:49:40.748402195 +0000 UTC m=+0.050873543 container create 18a45e0158a0c9ce9864a8fb013fb7405dd19f8eca8793d03bf27d7bcf4f93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 09:49:40 np0005473739 systemd[1]: Started libpod-conmon-18a45e0158a0c9ce9864a8fb013fb7405dd19f8eca8793d03bf27d7bcf4f93f5.scope.
Oct  7 09:49:40 np0005473739 podman[244948]: 2025-10-07 13:49:40.721859819 +0000 UTC m=+0.024331197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:49:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:49:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31aa884de90185eb8731bb40173beba70c91f932d38442211bd8fa596ff3526/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31aa884de90185eb8731bb40173beba70c91f932d38442211bd8fa596ff3526/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31aa884de90185eb8731bb40173beba70c91f932d38442211bd8fa596ff3526/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a31aa884de90185eb8731bb40173beba70c91f932d38442211bd8fa596ff3526/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:49:40 np0005473739 podman[244948]: 2025-10-07 13:49:40.860198203 +0000 UTC m=+0.162669591 container init 18a45e0158a0c9ce9864a8fb013fb7405dd19f8eca8793d03bf27d7bcf4f93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:49:40 np0005473739 podman[244948]: 2025-10-07 13:49:40.86674421 +0000 UTC m=+0.169215558 container start 18a45e0158a0c9ce9864a8fb013fb7405dd19f8eca8793d03bf27d7bcf4f93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:49:40 np0005473739 podman[244948]: 2025-10-07 13:49:40.878293311 +0000 UTC m=+0.180764709 container attach 18a45e0158a0c9ce9864a8fb013fb7405dd19f8eca8793d03bf27d7bcf4f93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 09:49:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v657: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]: {
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "osd_id": 2,
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "type": "bluestore"
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:    },
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "osd_id": 1,
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "type": "bluestore"
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:    },
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "osd_id": 0,
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:        "type": "bluestore"
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]:    }
Oct  7 09:49:41 np0005473739 boring_rhodes[244969]: }
Oct  7 09:49:42 np0005473739 systemd[1]: libpod-18a45e0158a0c9ce9864a8fb013fb7405dd19f8eca8793d03bf27d7bcf4f93f5.scope: Deactivated successfully.
Oct  7 09:49:42 np0005473739 systemd[1]: libpod-18a45e0158a0c9ce9864a8fb013fb7405dd19f8eca8793d03bf27d7bcf4f93f5.scope: Consumed 1.159s CPU time.
Oct  7 09:49:42 np0005473739 podman[244948]: 2025-10-07 13:49:42.018189455 +0000 UTC m=+1.320660913 container died 18a45e0158a0c9ce9864a8fb013fb7405dd19f8eca8793d03bf27d7bcf4f93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:49:42 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a31aa884de90185eb8731bb40173beba70c91f932d38442211bd8fa596ff3526-merged.mount: Deactivated successfully.
Oct  7 09:49:42 np0005473739 podman[244948]: 2025-10-07 13:49:42.083170959 +0000 UTC m=+1.385642307 container remove 18a45e0158a0c9ce9864a8fb013fb7405dd19f8eca8793d03bf27d7bcf4f93f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:49:42 np0005473739 systemd[1]: libpod-conmon-18a45e0158a0c9ce9864a8fb013fb7405dd19f8eca8793d03bf27d7bcf4f93f5.scope: Deactivated successfully.
Oct  7 09:49:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:49:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:49:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:49:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:49:42 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 324a7524-3564-4ff3-a74b-6f036629be7b does not exist
Oct  7 09:49:42 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e7e54712-20c0-4614-93fc-0d31c63f5b20 does not exist
Oct  7 09:49:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:49:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:49:42 np0005473739 python3.9[245223]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:49:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v658: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:43 np0005473739 python3.9[245402]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:49:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:44 np0005473739 python3.9[245555]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:49:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v659: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:45 np0005473739 python3.9[245708]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:49:45 np0005473739 podman[245710]: 2025-10-07 13:49:45.37609903 +0000 UTC m=+0.087420661 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Oct  7 09:49:46 np0005473739 python3.9[245881]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:49:46 np0005473739 python3.9[246034]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:49:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v660: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:47 np0005473739 python3.9[246187]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:49:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:48 np0005473739 python3.9[246341]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:49:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v661: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:49 np0005473739 python3.9[246494]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:50 np0005473739 python3.9[246646]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:50 np0005473739 podman[246770]: 2025-10-07 13:49:50.54610234 +0000 UTC m=+0.057713409 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001)
Oct  7 09:49:50 np0005473739 python3.9[246818]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v662: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:51 np0005473739 python3.9[246971]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:52 np0005473739 python3.9[247123]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:49:52 np0005473739 python3.9[247275]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v663: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:53 np0005473739 python3.9[247427]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:53 np0005473739 python3.9[247579]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:54 np0005473739 python3.9[247731]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v664: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:55 np0005473739 python3.9[247883]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:55 np0005473739 python3.9[248035]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:56 np0005473739 python3.9[248187]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v665: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:57 np0005473739 python3.9[248339]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:57 np0005473739 python3.9[248491]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:58 np0005473739 python3.9[248643]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:49:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v666: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:49:59 np0005473739 python3.9[248795]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:49:59 np0005473739 podman[248947]: 2025-10-07 13:49:59.823920112 +0000 UTC m=+0.099756933 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller)
Oct  7 09:49:59 np0005473739 python3.9[248948]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:50:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:50:00.022 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:50:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:50:00.023 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:50:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:50:00.023 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:50:00 np0005473739 python3.9[249125]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Oct  7 09:50:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v667: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:01 np0005473739 python3.9[249277]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  7 09:50:01 np0005473739 systemd[1]: Reloading.
Oct  7 09:50:01 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:50:01 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:50:02 np0005473739 python3.9[249464]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:50:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v668: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:03 np0005473739 python3.9[249617]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:50:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:04 np0005473739 python3.9[249770]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:50:04 np0005473739 podman[249895]: 2025-10-07 13:50:04.592262762 +0000 UTC m=+0.075149820 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:50:04 np0005473739 python3.9[249936]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:50:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v669: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:05 np0005473739 python3.9[250096]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:50:06 np0005473739 python3.9[250249]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:50:06 np0005473739 python3.9[250402]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:50:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v670: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:07 np0005473739 python3.9[250555]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Oct  7 09:50:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:08 np0005473739 python3.9[250708]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v671: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:09 np0005473739 python3.9[250860]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:10 np0005473739 python3.9[251012]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:10 np0005473739 python3.9[251164]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v672: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:11 np0005473739 python3.9[251316]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:12 np0005473739 python3.9[251468]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:12 np0005473739 python3.9[251620]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v673: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:13 np0005473739 python3.9[251772]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:14 np0005473739 python3.9[251924]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:14 np0005473739 python3.9[252076]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/iscsi setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v674: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:15 np0005473739 python3.9[252228]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:15 np0005473739 podman[252229]: 2025-10-07 13:50:15.660736392 +0000 UTC m=+0.075976022 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 09:50:16 np0005473739 python3.9[252400]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v675: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v676: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v677: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:21 np0005473739 podman[252477]: 2025-10-07 13:50:21.12709555 +0000 UTC m=+0.109538787 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:50:21 np0005473739 python3.9[252572]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Oct  7 09:50:22 np0005473739 python3.9[252725]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:50:22
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'backups', 'default.rgw.log', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'cephfs.cephfs.meta']
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:50:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:50:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v678: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:23 np0005473739 python3.9[252883]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Oct  7 09:50:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:24 np0005473739 systemd-logind[801]: New session 53 of user zuul.
Oct  7 09:50:24 np0005473739 systemd[1]: Started Session 53 of User zuul.
Oct  7 09:50:24 np0005473739 systemd[1]: session-53.scope: Deactivated successfully.
Oct  7 09:50:24 np0005473739 systemd-logind[801]: Session 53 logged out. Waiting for processes to exit.
Oct  7 09:50:24 np0005473739 systemd-logind[801]: Removed session 53.
Oct  7 09:50:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v679: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:25 np0005473739 python3.9[253069]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:50:25 np0005473739 python3.9[253190]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759845024.679691-1555-253070769491727/.source.json follow=False _original_basename=config.json.j2 checksum=2c2474b5f24ef7c9ed37f49680082593e0d1100b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:26 np0005473739 python3.9[253340]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:50:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v680: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:27 np0005473739 python3.9[253416]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:28 np0005473739 python3.9[253566]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #33. Immutable memtables: 0.
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.630348) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 13] Flushing memtable with next log file: 33
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845028630656, "job": 13, "event": "flush_started", "num_memtables": 1, "num_entries": 1317, "num_deletes": 506, "total_data_size": 1608954, "memory_usage": 1640880, "flush_reason": "Manual Compaction"}
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 13] Level-0 flush table #34: started
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845028644194, "cf_name": "default", "job": 13, "event": "table_file_creation", "file_number": 34, "file_size": 1593867, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 13527, "largest_seqno": 14843, "table_properties": {"data_size": 1588033, "index_size": 2718, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 14551, "raw_average_key_size": 17, "raw_value_size": 1574453, "raw_average_value_size": 1941, "num_data_blocks": 125, "num_entries": 811, "num_filter_entries": 811, "num_deletions": 506, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759844919, "oldest_key_time": 1759844919, "file_creation_time": 1759845028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 34, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 13] Flush lasted 13904 microseconds, and 6126 cpu microseconds.
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.644249) [db/flush_job.cc:967] [default] [JOB 13] Level-0 flush table #34: 1593867 bytes OK
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.644268) [db/memtable_list.cc:519] [default] Level-0 commit table #34 started
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.646937) [db/memtable_list.cc:722] [default] Level-0 commit table #34: memtable #1 done
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.646968) EVENT_LOG_v1 {"time_micros": 1759845028646960, "job": 13, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.646991) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 13] Try to delete WAL files size 1601989, prev total WAL file size 1601989, number of live WAL files 2.
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000030.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.648076) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0030' seq:72057594037927935, type:22 .. '6C6F676D00323532' seq:0, type:0; will stop at (end)
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 14] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 13 Base level 0, inputs: [34(1556KB)], [32(7440KB)]
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845028648111, "job": 14, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [34], "files_L6": [32], "score": -1, "input_data_size": 9212925, "oldest_snapshot_seqno": -1}
Oct  7 09:50:28 np0005473739 python3.9[253687]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759845027.464906-1555-185156657169249/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 14] Generated table #35: 3796 keys, 7235205 bytes, temperature: kUnknown
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845028707801, "cf_name": "default", "job": 14, "event": "table_file_creation", "file_number": 35, "file_size": 7235205, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7208033, "index_size": 16566, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 9541, "raw_key_size": 93023, "raw_average_key_size": 24, "raw_value_size": 7137520, "raw_average_value_size": 1880, "num_data_blocks": 703, "num_entries": 3796, "num_filter_entries": 3796, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759845028, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 35, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.708168) [db/compaction/compaction_job.cc:1663] [default] [JOB 14] Compacted 1@0 + 1@6 files to L6 => 7235205 bytes
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.709897) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.1 rd, 121.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 7.3 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(10.3) write-amplify(4.5) OK, records in: 4821, records dropped: 1025 output_compression: NoCompression
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.709921) EVENT_LOG_v1 {"time_micros": 1759845028709908, "job": 14, "event": "compaction_finished", "compaction_time_micros": 59783, "compaction_time_cpu_micros": 25903, "output_level": 6, "num_output_files": 1, "total_output_size": 7235205, "num_input_records": 4821, "num_output_records": 3796, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000034.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845028710465, "job": 14, "event": "table_file_deletion", "file_number": 34}
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000032.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845028712210, "job": 14, "event": "table_file_deletion", "file_number": 32}
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.648018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.712306) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.712313) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.712315) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.712317) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:50:28 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:50:28.712319) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:50:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v681: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:29 np0005473739 python3.9[253837]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:50:29 np0005473739 python3.9[253958]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759845028.8279972-1555-98903487340255/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:30 np0005473739 podman[253959]: 2025-10-07 13:50:30.022827642 +0000 UTC m=+0.089924238 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 09:50:30 np0005473739 python3.9[254134]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:50:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v682: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:31 np0005473739 python3.9[254255]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759845030.0823646-1555-120790960390734/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:31 np0005473739 python3.9[254407]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:50:32 np0005473739 python3.9[254559]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:50:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v683: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:33 np0005473739 python3.9[254711]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:50:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:34 np0005473739 python3.9[254863]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:50:34 np0005473739 python3.9[254986]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1759845033.5106864-1648-9006406977097/.source _original_basename=.a2uz74r6 follow=False checksum=6c9194524d5491400a424666d7f1a5f43faabc1b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Oct  7 09:50:34 np0005473739 podman[254989]: 2025-10-07 13:50:34.725685034 +0000 UTC m=+0.078490299 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 09:50:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v684: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:50:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 3284 writes, 14K keys, 3284 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 3284 writes, 3284 syncs, 1.00 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1271 writes, 5776 keys, 1271 commit groups, 1.0 writes per commit group, ingest: 8.45 MB, 0.01 MB/s#012Interval WAL: 1272 writes, 1272 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     65.7      0.24              0.04         7    0.034       0      0       0.0       0.0#012  L6      1/0    6.90 MB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   2.6    117.9     97.0      0.43              0.13         6    0.071     24K   3200       0.0       0.0#012 Sum      1/0    6.90 MB   0.0      0.0     0.0      0.0       0.1      0.0       0.0   3.6     75.3     85.7      0.67              0.17        13    0.051     24K   3200       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.8     95.1     96.0      0.36              0.12         8    0.045     17K   2470       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0    117.9     97.0      0.43              0.13         6    0.071     24K   3200       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     66.5      0.24              0.04         6    0.039       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.015, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.06 GB write, 0.05 MB/s write, 0.05 GB read, 0.04 MB/s read, 0.7 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 308.00 MB usage: 1.49 MB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 0 last_secs: 8.4e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(102,1.27 MB,0.411284%) FilterBlock(14,75.80 KB,0.0240326%) IndexBlock(14,149.30 KB,0.0473369%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  7 09:50:35 np0005473739 python3.9[255157]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:50:36 np0005473739 python3.9[255309]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:50:36 np0005473739 python3.9[255430]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759845035.6596916-1674-148062486884975/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=837ffd9c004e5987a2e117698c56827ebbfeb5b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v685: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:37 np0005473739 python3.9[255580]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True
Oct  7 09:50:38 np0005473739 python3.9[255701]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1759845037.028882-1689-108781133647408/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=722ab36345f3375cbdcf911ce8f6e1a8083d7e59 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Oct  7 09:50:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:38 np0005473739 python3.9[255853]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Oct  7 09:50:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v686: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:39 np0005473739 python3.9[256005]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  7 09:50:40 np0005473739 python3[256157]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Oct  7 09:50:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v687: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v688: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:50:43 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev edc6322c-9611-426e-a099-06a64c931ae7 does not exist
Oct  7 09:50:43 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f5e58a95-63ee-4e49-917b-3d51a5953bf7 does not exist
Oct  7 09:50:43 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bad49ec6-ae33-40d7-a687-c4d40f19366a does not exist
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:50:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:44 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:50:44 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:50:44 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:50:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v689: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v690: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:47 np0005473739 podman[256452]: 2025-10-07 13:50:47.03927142 +0000 UTC m=+1.020257603 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  7 09:50:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v691: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v692: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:51 np0005473739 podman[256171]: 2025-10-07 13:50:51.123907461 +0000 UTC m=+10.391854027 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  7 09:50:51 np0005473739 podman[256528]: 2025-10-07 13:50:51.282263797 +0000 UTC m=+0.058559871 container create 249fa07886faa92166a7937e546cfcac63691f48fcb49d0d4f1ab457b8cce15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 09:50:51 np0005473739 systemd[1]: Started libpod-conmon-249fa07886faa92166a7937e546cfcac63691f48fcb49d0d4f1ab457b8cce15f.scope.
Oct  7 09:50:51 np0005473739 podman[256550]: 2025-10-07 13:50:51.341586047 +0000 UTC m=+0.076818921 container create ffeead5bf7be88b42c7f35032f2d462f5bc55b73fcdab750bdfd9f448084fee9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2)
Oct  7 09:50:51 np0005473739 podman[256550]: 2025-10-07 13:50:51.308311155 +0000 UTC m=+0.043544049 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  7 09:50:51 np0005473739 podman[256528]: 2025-10-07 13:50:51.254154813 +0000 UTC m=+0.030450907 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:50:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:50:51 np0005473739 python3[256157]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Oct  7 09:50:51 np0005473739 podman[256528]: 2025-10-07 13:50:51.370972455 +0000 UTC m=+0.147268539 container init 249fa07886faa92166a7937e546cfcac63691f48fcb49d0d4f1ab457b8cce15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 09:50:51 np0005473739 podman[256528]: 2025-10-07 13:50:51.380715676 +0000 UTC m=+0.157011760 container start 249fa07886faa92166a7937e546cfcac63691f48fcb49d0d4f1ab457b8cce15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 09:50:51 np0005473739 podman[256528]: 2025-10-07 13:50:51.386441539 +0000 UTC m=+0.162737603 container attach 249fa07886faa92166a7937e546cfcac63691f48fcb49d0d4f1ab457b8cce15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:50:51 np0005473739 crazy_northcutt[256566]: 167 167
Oct  7 09:50:51 np0005473739 systemd[1]: libpod-249fa07886faa92166a7937e546cfcac63691f48fcb49d0d4f1ab457b8cce15f.scope: Deactivated successfully.
Oct  7 09:50:51 np0005473739 podman[256528]: 2025-10-07 13:50:51.390992891 +0000 UTC m=+0.167288955 container died 249fa07886faa92166a7937e546cfcac63691f48fcb49d0d4f1ab457b8cce15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 09:50:51 np0005473739 systemd[1]: var-lib-containers-storage-overlay-20ae4762ebb5aa196109c71fae697aaffb0e564734475b3898066135cc4a8400-merged.mount: Deactivated successfully.
Oct  7 09:50:51 np0005473739 podman[256528]: 2025-10-07 13:50:51.438162626 +0000 UTC m=+0.214458690 container remove 249fa07886faa92166a7937e546cfcac63691f48fcb49d0d4f1ab457b8cce15f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_northcutt, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 09:50:51 np0005473739 podman[256563]: 2025-10-07 13:50:51.444968449 +0000 UTC m=+0.119385962 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  7 09:50:51 np0005473739 systemd[1]: libpod-conmon-249fa07886faa92166a7937e546cfcac63691f48fcb49d0d4f1ab457b8cce15f.scope: Deactivated successfully.
Oct  7 09:50:51 np0005473739 podman[256633]: 2025-10-07 13:50:51.638407434 +0000 UTC m=+0.057210415 container create eb8dd2ceb6869b24f0bc5345e964ebd1ffc059d23d94bc91ebe8c34323ac12e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:50:51 np0005473739 systemd[1]: Started libpod-conmon-eb8dd2ceb6869b24f0bc5345e964ebd1ffc059d23d94bc91ebe8c34323ac12e5.scope.
Oct  7 09:50:51 np0005473739 podman[256633]: 2025-10-07 13:50:51.613660481 +0000 UTC m=+0.032463552 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:50:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:50:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b323d0665e698737a1f90bc8a475c556b2f6e9f17b47911b29a27604c47474ec/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b323d0665e698737a1f90bc8a475c556b2f6e9f17b47911b29a27604c47474ec/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b323d0665e698737a1f90bc8a475c556b2f6e9f17b47911b29a27604c47474ec/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b323d0665e698737a1f90bc8a475c556b2f6e9f17b47911b29a27604c47474ec/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b323d0665e698737a1f90bc8a475c556b2f6e9f17b47911b29a27604c47474ec/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:51 np0005473739 podman[256633]: 2025-10-07 13:50:51.731357696 +0000 UTC m=+0.150160697 container init eb8dd2ceb6869b24f0bc5345e964ebd1ffc059d23d94bc91ebe8c34323ac12e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_neumann, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 09:50:51 np0005473739 podman[256633]: 2025-10-07 13:50:51.744456827 +0000 UTC m=+0.163259828 container start eb8dd2ceb6869b24f0bc5345e964ebd1ffc059d23d94bc91ebe8c34323ac12e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_neumann, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:50:51 np0005473739 podman[256633]: 2025-10-07 13:50:51.748160476 +0000 UTC m=+0.166963467 container attach eb8dd2ceb6869b24f0bc5345e964ebd1ffc059d23d94bc91ebe8c34323ac12e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_neumann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 09:50:52 np0005473739 python3.9[256806]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:50:52 np0005473739 keen_neumann[256674]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:50:52 np0005473739 keen_neumann[256674]: --> relative data size: 1.0
Oct  7 09:50:52 np0005473739 keen_neumann[256674]: --> All data devices are unavailable
Oct  7 09:50:52 np0005473739 systemd[1]: libpod-eb8dd2ceb6869b24f0bc5345e964ebd1ffc059d23d94bc91ebe8c34323ac12e5.scope: Deactivated successfully.
Oct  7 09:50:52 np0005473739 systemd[1]: libpod-eb8dd2ceb6869b24f0bc5345e964ebd1ffc059d23d94bc91ebe8c34323ac12e5.scope: Consumed 1.051s CPU time.
Oct  7 09:50:52 np0005473739 podman[256979]: 2025-10-07 13:50:52.901329411 +0000 UTC m=+0.023839370 container died eb8dd2ceb6869b24f0bc5345e964ebd1ffc059d23d94bc91ebe8c34323ac12e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_neumann, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  7 09:50:52 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b323d0665e698737a1f90bc8a475c556b2f6e9f17b47911b29a27604c47474ec-merged.mount: Deactivated successfully.
Oct  7 09:50:52 np0005473739 podman[256979]: 2025-10-07 13:50:52.969003625 +0000 UTC m=+0.091513574 container remove eb8dd2ceb6869b24f0bc5345e964ebd1ffc059d23d94bc91ebe8c34323ac12e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:50:52 np0005473739 systemd[1]: libpod-conmon-eb8dd2ceb6869b24f0bc5345e964ebd1ffc059d23d94bc91ebe8c34323ac12e5.scope: Deactivated successfully.
Oct  7 09:50:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v693: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:53 np0005473739 python3.9[256994]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Oct  7 09:50:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:53 np0005473739 podman[257289]: 2025-10-07 13:50:53.7144879 +0000 UTC m=+0.112336363 container create 443faebdfef9ba9524b9bce8a027cfff04b387ed817656c154f691a813c19989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:50:53 np0005473739 podman[257289]: 2025-10-07 13:50:53.630764506 +0000 UTC m=+0.028612979 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:50:53 np0005473739 systemd[1]: Started libpod-conmon-443faebdfef9ba9524b9bce8a027cfff04b387ed817656c154f691a813c19989.scope.
Oct  7 09:50:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:50:53 np0005473739 podman[257289]: 2025-10-07 13:50:53.851667678 +0000 UTC m=+0.249516161 container init 443faebdfef9ba9524b9bce8a027cfff04b387ed817656c154f691a813c19989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 09:50:53 np0005473739 podman[257289]: 2025-10-07 13:50:53.860265358 +0000 UTC m=+0.258113841 container start 443faebdfef9ba9524b9bce8a027cfff04b387ed817656c154f691a813c19989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:50:53 np0005473739 podman[257289]: 2025-10-07 13:50:53.864647836 +0000 UTC m=+0.262496329 container attach 443faebdfef9ba9524b9bce8a027cfff04b387ed817656c154f691a813c19989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:50:53 np0005473739 exciting_feistel[257310]: 167 167
Oct  7 09:50:53 np0005473739 systemd[1]: libpod-443faebdfef9ba9524b9bce8a027cfff04b387ed817656c154f691a813c19989.scope: Deactivated successfully.
Oct  7 09:50:53 np0005473739 podman[257289]: 2025-10-07 13:50:53.867121792 +0000 UTC m=+0.264970255 container died 443faebdfef9ba9524b9bce8a027cfff04b387ed817656c154f691a813c19989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:50:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6ce398749b11877918963599971879dab460f1e1292affef755878b54472ffc7-merged.mount: Deactivated successfully.
Oct  7 09:50:53 np0005473739 python3.9[257304]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Oct  7 09:50:53 np0005473739 podman[257289]: 2025-10-07 13:50:53.907250417 +0000 UTC m=+0.305098880 container remove 443faebdfef9ba9524b9bce8a027cfff04b387ed817656c154f691a813c19989 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 09:50:53 np0005473739 systemd[1]: libpod-conmon-443faebdfef9ba9524b9bce8a027cfff04b387ed817656c154f691a813c19989.scope: Deactivated successfully.
Oct  7 09:50:54 np0005473739 podman[257357]: 2025-10-07 13:50:54.100281222 +0000 UTC m=+0.042921861 container create bad890054ddc97d69131ae9b6834171eecde90886805e0091e35bf7525e2e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 09:50:54 np0005473739 systemd[1]: Started libpod-conmon-bad890054ddc97d69131ae9b6834171eecde90886805e0091e35bf7525e2e259.scope.
Oct  7 09:50:54 np0005473739 podman[257357]: 2025-10-07 13:50:54.081752826 +0000 UTC m=+0.024393465 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:50:54 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:50:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b76b353b4f0dab5a5a1e8f3e27504cd81ef470755295163e64430d8aedb32f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b76b353b4f0dab5a5a1e8f3e27504cd81ef470755295163e64430d8aedb32f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b76b353b4f0dab5a5a1e8f3e27504cd81ef470755295163e64430d8aedb32f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d5b76b353b4f0dab5a5a1e8f3e27504cd81ef470755295163e64430d8aedb32f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:54 np0005473739 podman[257357]: 2025-10-07 13:50:54.21281487 +0000 UTC m=+0.155455579 container init bad890054ddc97d69131ae9b6834171eecde90886805e0091e35bf7525e2e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_agnesi, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:50:54 np0005473739 podman[257357]: 2025-10-07 13:50:54.228689945 +0000 UTC m=+0.171330564 container start bad890054ddc97d69131ae9b6834171eecde90886805e0091e35bf7525e2e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_agnesi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:50:54 np0005473739 podman[257357]: 2025-10-07 13:50:54.234223953 +0000 UTC m=+0.176864582 container attach bad890054ddc97d69131ae9b6834171eecde90886805e0091e35bf7525e2e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_agnesi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:50:54 np0005473739 python3[257508]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Oct  7 09:50:54 np0005473739 podman[257545]: 2025-10-07 13:50:54.960350059 +0000 UTC m=+0.063545814 container create 43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  7 09:50:54 np0005473739 podman[257545]: 2025-10-07 13:50:54.922025232 +0000 UTC m=+0.025221077 image pull 7ac362f4e836cf46e10a309acb4abf774df9481a1d6404c213437495cfb42f5d quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844
Oct  7 09:50:54 np0005473739 python3[257508]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi:z --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844 kolla_start
Oct  7 09:50:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v694: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]: {
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:    "0": [
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:        {
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "devices": [
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "/dev/loop3"
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            ],
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_name": "ceph_lv0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_size": "21470642176",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "name": "ceph_lv0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "tags": {
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.cluster_name": "ceph",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.crush_device_class": "",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.encrypted": "0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.osd_id": "0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.type": "block",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.vdo": "0"
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            },
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "type": "block",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "vg_name": "ceph_vg0"
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:        }
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:    ],
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:    "1": [
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:        {
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "devices": [
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "/dev/loop4"
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            ],
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_name": "ceph_lv1",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_size": "21470642176",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "name": "ceph_lv1",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "tags": {
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.cluster_name": "ceph",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.crush_device_class": "",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.encrypted": "0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.osd_id": "1",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.type": "block",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.vdo": "0"
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            },
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "type": "block",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "vg_name": "ceph_vg1"
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:        }
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:    ],
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:    "2": [
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:        {
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "devices": [
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "/dev/loop5"
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            ],
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_name": "ceph_lv2",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_size": "21470642176",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "name": "ceph_lv2",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "tags": {
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.cluster_name": "ceph",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.crush_device_class": "",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.encrypted": "0",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.osd_id": "2",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.type": "block",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:                "ceph.vdo": "0"
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            },
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "type": "block",
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:            "vg_name": "ceph_vg2"
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:        }
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]:    ]
Oct  7 09:50:55 np0005473739 kind_agnesi[257388]: }
Oct  7 09:50:55 np0005473739 systemd[1]: libpod-bad890054ddc97d69131ae9b6834171eecde90886805e0091e35bf7525e2e259.scope: Deactivated successfully.
Oct  7 09:50:55 np0005473739 podman[257357]: 2025-10-07 13:50:55.068082087 +0000 UTC m=+1.010722706 container died bad890054ddc97d69131ae9b6834171eecde90886805e0091e35bf7525e2e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 09:50:55 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d5b76b353b4f0dab5a5a1e8f3e27504cd81ef470755295163e64430d8aedb32f-merged.mount: Deactivated successfully.
Oct  7 09:50:55 np0005473739 podman[257357]: 2025-10-07 13:50:55.128020504 +0000 UTC m=+1.070661123 container remove bad890054ddc97d69131ae9b6834171eecde90886805e0091e35bf7525e2e259 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_agnesi, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:50:55 np0005473739 systemd[1]: libpod-conmon-bad890054ddc97d69131ae9b6834171eecde90886805e0091e35bf7525e2e259.scope: Deactivated successfully.
Oct  7 09:50:55 np0005473739 python3.9[257851]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:50:55 np0005473739 podman[257890]: 2025-10-07 13:50:55.794767228 +0000 UTC m=+0.041252386 container create 3dc79843fd970af442b9f876566e05d178a8a23948a6a4ad3b6a58a7ccd66a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:50:55 np0005473739 systemd[1]: Started libpod-conmon-3dc79843fd970af442b9f876566e05d178a8a23948a6a4ad3b6a58a7ccd66a2e.scope.
Oct  7 09:50:55 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:50:55 np0005473739 podman[257890]: 2025-10-07 13:50:55.775268876 +0000 UTC m=+0.021754064 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:50:55 np0005473739 podman[257890]: 2025-10-07 13:50:55.972646196 +0000 UTC m=+0.219131364 container init 3dc79843fd970af442b9f876566e05d178a8a23948a6a4ad3b6a58a7ccd66a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 09:50:55 np0005473739 podman[257890]: 2025-10-07 13:50:55.985032508 +0000 UTC m=+0.231517706 container start 3dc79843fd970af442b9f876566e05d178a8a23948a6a4ad3b6a58a7ccd66a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  7 09:50:56 np0005473739 compassionate_easley[257908]: 167 167
Oct  7 09:50:56 np0005473739 systemd[1]: libpod-3dc79843fd970af442b9f876566e05d178a8a23948a6a4ad3b6a58a7ccd66a2e.scope: Deactivated successfully.
Oct  7 09:50:56 np0005473739 podman[257890]: 2025-10-07 13:50:56.074695652 +0000 UTC m=+0.321180840 container attach 3dc79843fd970af442b9f876566e05d178a8a23948a6a4ad3b6a58a7ccd66a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 09:50:56 np0005473739 podman[257890]: 2025-10-07 13:50:56.076319897 +0000 UTC m=+0.322805045 container died 3dc79843fd970af442b9f876566e05d178a8a23948a6a4ad3b6a58a7ccd66a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:50:56 np0005473739 systemd[1]: var-lib-containers-storage-overlay-dfecdc4efb4dfae25347cb90517b09dcbbeec9ccbe04926f34ce05a687e91a47-merged.mount: Deactivated successfully.
Oct  7 09:50:56 np0005473739 podman[257890]: 2025-10-07 13:50:56.12420071 +0000 UTC m=+0.370685868 container remove 3dc79843fd970af442b9f876566e05d178a8a23948a6a4ad3b6a58a7ccd66a2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 09:50:56 np0005473739 systemd[1]: libpod-conmon-3dc79843fd970af442b9f876566e05d178a8a23948a6a4ad3b6a58a7ccd66a2e.scope: Deactivated successfully.
Oct  7 09:50:56 np0005473739 podman[258032]: 2025-10-07 13:50:56.301443151 +0000 UTC m=+0.044718000 container create 1a2f0713d845fe2505a8a167dab223d92120ec5f59a4e176f08d20528d670d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:50:56 np0005473739 systemd[1]: Started libpod-conmon-1a2f0713d845fe2505a8a167dab223d92120ec5f59a4e176f08d20528d670d3e.scope.
Oct  7 09:50:56 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:50:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e3fc593cc70ffbe30d2780c5761cca845b0afc3a416d29ea9f2e178478c1bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e3fc593cc70ffbe30d2780c5761cca845b0afc3a416d29ea9f2e178478c1bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e3fc593cc70ffbe30d2780c5761cca845b0afc3a416d29ea9f2e178478c1bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11e3fc593cc70ffbe30d2780c5761cca845b0afc3a416d29ea9f2e178478c1bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:56 np0005473739 podman[258032]: 2025-10-07 13:50:56.282839303 +0000 UTC m=+0.026114172 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:50:56 np0005473739 podman[258032]: 2025-10-07 13:50:56.394182908 +0000 UTC m=+0.137457787 container init 1a2f0713d845fe2505a8a167dab223d92120ec5f59a4e176f08d20528d670d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:50:56 np0005473739 podman[258032]: 2025-10-07 13:50:56.403568079 +0000 UTC m=+0.146842928 container start 1a2f0713d845fe2505a8a167dab223d92120ec5f59a4e176f08d20528d670d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 09:50:56 np0005473739 podman[258032]: 2025-10-07 13:50:56.409081097 +0000 UTC m=+0.152355976 container attach 1a2f0713d845fe2505a8a167dab223d92120ec5f59a4e176f08d20528d670d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:50:56 np0005473739 python3.9[258103]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:50:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v695: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:57 np0005473739 python3.9[258261]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1759845056.6893094-1781-249814259809087/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]: {
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "osd_id": 2,
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "type": "bluestore"
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:    },
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "osd_id": 1,
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "type": "bluestore"
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:    },
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "osd_id": 0,
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:        "type": "bluestore"
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]:    }
Oct  7 09:50:57 np0005473739 dazzling_bose[258098]: }
Oct  7 09:50:57 np0005473739 systemd[1]: libpod-1a2f0713d845fe2505a8a167dab223d92120ec5f59a4e176f08d20528d670d3e.scope: Deactivated successfully.
Oct  7 09:50:57 np0005473739 systemd[1]: libpod-1a2f0713d845fe2505a8a167dab223d92120ec5f59a4e176f08d20528d670d3e.scope: Consumed 1.041s CPU time.
Oct  7 09:50:57 np0005473739 podman[258032]: 2025-10-07 13:50:57.453804454 +0000 UTC m=+1.197079313 container died 1a2f0713d845fe2505a8a167dab223d92120ec5f59a4e176f08d20528d670d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:50:57 np0005473739 systemd[1]: var-lib-containers-storage-overlay-11e3fc593cc70ffbe30d2780c5761cca845b0afc3a416d29ea9f2e178478c1bd-merged.mount: Deactivated successfully.
Oct  7 09:50:57 np0005473739 podman[258032]: 2025-10-07 13:50:57.50962857 +0000 UTC m=+1.252903419 container remove 1a2f0713d845fe2505a8a167dab223d92120ec5f59a4e176f08d20528d670d3e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_bose, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 09:50:57 np0005473739 systemd[1]: libpod-conmon-1a2f0713d845fe2505a8a167dab223d92120ec5f59a4e176f08d20528d670d3e.scope: Deactivated successfully.
Oct  7 09:50:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:50:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:50:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:50:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:50:57 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f36e7686-67a4-4ce4-975c-16c5f7186da6 does not exist
Oct  7 09:50:57 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 2e5c19ed-5dda-4287-852c-e6a4d8a2f412 does not exist
Oct  7 09:50:57 np0005473739 python3.9[258398]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Oct  7 09:50:57 np0005473739 systemd[1]: Reloading.
Oct  7 09:50:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:50:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:50:58 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:50:58 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:50:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:50:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v696: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:50:59 np0005473739 python3.9[258535]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Oct  7 09:50:59 np0005473739 systemd[1]: Reloading.
Oct  7 09:50:59 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 09:50:59 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 09:50:59 np0005473739 systemd[1]: Starting nova_compute container...
Oct  7 09:50:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:50:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33741128ba94aef69daba965ac2b6cb9f2254da37db05f0f6f63a2d192b52e50/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33741128ba94aef69daba965ac2b6cb9f2254da37db05f0f6f63a2d192b52e50/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33741128ba94aef69daba965ac2b6cb9f2254da37db05f0f6f63a2d192b52e50/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33741128ba94aef69daba965ac2b6cb9f2254da37db05f0f6f63a2d192b52e50/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33741128ba94aef69daba965ac2b6cb9f2254da37db05f0f6f63a2d192b52e50/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  7 09:50:59 np0005473739 podman[258574]: 2025-10-07 13:50:59.667348955 +0000 UTC m=+0.107007869 container init 43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible)
Oct  7 09:50:59 np0005473739 podman[258574]: 2025-10-07 13:50:59.674477326 +0000 UTC m=+0.114136210 container start 43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:50:59 np0005473739 podman[258574]: nova_compute
Oct  7 09:50:59 np0005473739 nova_compute[258589]: + sudo -E kolla_set_configs
Oct  7 09:50:59 np0005473739 systemd[1]: Started nova_compute container.
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Validating config file
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Copying service configuration files
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Deleting /etc/ceph
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Creating directory /etc/ceph
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /etc/ceph
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Writing out command to execute
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  7 09:50:59 np0005473739 nova_compute[258589]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  7 09:50:59 np0005473739 nova_compute[258589]: ++ cat /run_command
Oct  7 09:50:59 np0005473739 nova_compute[258589]: + CMD=nova-compute
Oct  7 09:50:59 np0005473739 nova_compute[258589]: + ARGS=
Oct  7 09:50:59 np0005473739 nova_compute[258589]: + sudo kolla_copy_cacerts
Oct  7 09:50:59 np0005473739 nova_compute[258589]: + [[ ! -n '' ]]
Oct  7 09:50:59 np0005473739 nova_compute[258589]: + . kolla_extend_start
Oct  7 09:50:59 np0005473739 nova_compute[258589]: + echo 'Running command: '\''nova-compute'\'''
Oct  7 09:50:59 np0005473739 nova_compute[258589]: Running command: 'nova-compute'
Oct  7 09:50:59 np0005473739 nova_compute[258589]: + umask 0022
Oct  7 09:50:59 np0005473739 nova_compute[258589]: + exec nova-compute
Oct  7 09:51:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:51:00.024 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:51:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:51:00.024 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:51:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:51:00.025 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:51:00 np0005473739 podman[258724]: 2025-10-07 13:51:00.436719921 +0000 UTC m=+0.095554012 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 09:51:00 np0005473739 python3.9[258766]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:51:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v697: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:01 np0005473739 python3.9[258927]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:51:02 np0005473739 python3.9[259077]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Oct  7 09:51:02 np0005473739 nova_compute[258589]: 2025-10-07 13:51:02.342 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  7 09:51:02 np0005473739 nova_compute[258589]: 2025-10-07 13:51:02.343 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  7 09:51:02 np0005473739 nova_compute[258589]: 2025-10-07 13:51:02.344 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  7 09:51:02 np0005473739 nova_compute[258589]: 2025-10-07 13:51:02.345 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  7 09:51:02 np0005473739 nova_compute[258589]: 2025-10-07 13:51:02.531 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:51:02 np0005473739 nova_compute[258589]: 2025-10-07 13:51:02.566 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.035s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:51:02 np0005473739 python3.9[259232]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  7 09:51:02 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 09:51:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v698: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.197 2 INFO nova.virt.driver [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.356 2 INFO nova.compute.provider_config [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.379 2 DEBUG oslo_concurrency.lockutils [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.379 2 DEBUG oslo_concurrency.lockutils [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.379 2 DEBUG oslo_concurrency.lockutils [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.380 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.380 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.380 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.381 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.381 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.381 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.382 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.382 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.382 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.382 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.383 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.383 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.383 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.383 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.383 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.384 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.384 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.384 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.384 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.385 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.385 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.385 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.385 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.386 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.386 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.386 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.386 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.386 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.387 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.387 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.388 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.388 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.388 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.389 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.389 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.389 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.389 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.389 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.390 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.390 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.390 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.390 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.391 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.391 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.391 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.391 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.392 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.392 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.392 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.392 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.393 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.393 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.393 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.394 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.394 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.394 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.394 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.394 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.395 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.395 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.395 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.395 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.396 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.396 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.396 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.396 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.396 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.397 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.397 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.397 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.397 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.398 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.398 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.398 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.398 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.399 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.399 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.399 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.399 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.400 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.400 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.400 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.400 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.401 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.401 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.401 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.401 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.401 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.402 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.402 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.402 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.402 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.402 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.403 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.403 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.403 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.403 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.403 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.404 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.404 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.404 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.404 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.405 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.405 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.405 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.405 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.405 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.406 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.406 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.406 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.406 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.406 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.407 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.407 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.407 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.407 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.408 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.408 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.408 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.408 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.408 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.409 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.409 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.409 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.409 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.409 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.410 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.410 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.410 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.410 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.411 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.411 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.411 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.411 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.411 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.412 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.412 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.412 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.412 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.412 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.412 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.412 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.413 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.413 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.413 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.413 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.413 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.413 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.413 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.414 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.414 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.414 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.414 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.414 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.414 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.415 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.415 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.415 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.415 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.415 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.415 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.416 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.416 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.416 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.416 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.416 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.416 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.416 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.417 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.417 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.417 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.417 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.417 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.417 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.418 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.418 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.418 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.418 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.418 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.418 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.419 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.419 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.419 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.419 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.419 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.419 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.419 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.420 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.420 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.420 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.420 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.420 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.420 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.421 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.421 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.421 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.421 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.421 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.421 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.421 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.422 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.422 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.422 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.422 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.422 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.422 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.422 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.423 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.423 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.423 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.423 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.423 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.423 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.424 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.424 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.424 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.424 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.424 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.424 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.424 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.424 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.425 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.425 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.425 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.425 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.425 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.425 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.425 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.426 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.426 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.426 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.426 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.426 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.426 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.426 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.427 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.427 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.427 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.427 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.427 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.427 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.428 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.428 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.428 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.428 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.428 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.428 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.429 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.429 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.429 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.429 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.429 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.429 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.429 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.430 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.430 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.430 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.430 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.430 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.430 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.430 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.431 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.431 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.431 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.431 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.431 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.431 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.431 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.432 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.432 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.432 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.432 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.432 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.432 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.433 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.433 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.433 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.433 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.433 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.433 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.433 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.434 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.434 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.434 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.434 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.434 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.434 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.435 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.435 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.435 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.435 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.435 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.435 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.435 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.436 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.436 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.436 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.436 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.436 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.436 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.436 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.437 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.437 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.437 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.437 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.437 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.437 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.437 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.438 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.438 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.438 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.438 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.438 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.438 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.438 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.439 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.439 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.439 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.439 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.439 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.439 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.439 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.440 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.440 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.440 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.440 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.440 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.440 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.440 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.441 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.441 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.441 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.441 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.441 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.441 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.441 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.442 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.442 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.442 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.442 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.442 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.442 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.442 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.443 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.443 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.443 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.443 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.444 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.444 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.444 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.444 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.444 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.444 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.445 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.445 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.445 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.445 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.445 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.445 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.446 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.446 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.446 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.446 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.446 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.446 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.447 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.447 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.447 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.447 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.447 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.447 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.448 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.448 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.448 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.448 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.448 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.448 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.449 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.449 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.449 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.449 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.449 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.449 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.449 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.450 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.450 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.450 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.450 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.450 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.450 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.450 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.451 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.451 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.451 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.451 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.451 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.451 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.451 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.452 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.452 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.452 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.452 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.452 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.452 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.452 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.453 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.453 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.453 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.453 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.453 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.453 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.453 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.454 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.454 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.454 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.454 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.454 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.454 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.454 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.455 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.455 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.455 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.455 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.455 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.455 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.455 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.456 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.456 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.456 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.456 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.456 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.456 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.456 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.457 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.457 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.457 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.457 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.457 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.457 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.457 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.458 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.458 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.458 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.458 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.458 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.458 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.458 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.459 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.459 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.459 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.459 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.459 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.459 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.459 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.460 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.460 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.460 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.460 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.460 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.460 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.461 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.461 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.461 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.461 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.461 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.461 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.461 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.462 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.462 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.462 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.462 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.462 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.462 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.463 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.463 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.463 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.463 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.463 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.463 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.463 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.464 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.464 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.464 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.464 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.464 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.464 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.464 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.465 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.465 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.465 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.465 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.465 2 WARNING oslo_config.cfg [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  7 09:51:03 np0005473739 nova_compute[258589]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  7 09:51:03 np0005473739 nova_compute[258589]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  7 09:51:03 np0005473739 nova_compute[258589]: and ``live_migration_inbound_addr`` respectively.
Oct  7 09:51:03 np0005473739 nova_compute[258589]: ).  Its value may be silently ignored in the future.#033[00m
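The deprecation notice above names the two options that replace `live_migration_uri`. A minimal `nova.conf` sketch of that migration, assuming the TLS scheme implied by the `qemu+tls://%s/system` value logged below; the inbound address shown is a placeholder, not taken from this host's configuration:

```ini
[libvirt]
# Instead of the deprecated URI template:
#   live_migration_uri = qemu+tls://%s/system
# set the scheme and inbound-address options named in the warning:
live_migration_scheme = tls
# Placeholder address; use this compute host's migration-network address.
live_migration_inbound_addr = 192.0.2.10
```

With these two options set, Nova constructs the migration target URI itself and the deprecated template option can be removed.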
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.466 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.466 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.466 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.466 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.466 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.466 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.467 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.467 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.467 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.467 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.467 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.467 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.468 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.468 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.468 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.468 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.468 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.468 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.469 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.rbd_secret_uuid        = 82044f27-a8da-5b2a-a297-ff6afc620e1f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.469 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.469 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.469 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.469 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.469 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.470 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.470 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.470 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.470 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.470 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.471 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.471 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.471 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.471 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.471 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.472 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.472 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.472 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.472 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.472 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.472 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.472 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.473 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.473 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.473 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.473 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.473 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.473 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.474 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.474 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.474 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.474 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.474 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.474 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.474 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.475 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.475 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.475 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.475 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.475 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.475 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.476 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.476 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.476 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.476 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.477 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.477 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.477 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.477 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.477 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.478 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.478 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.478 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.478 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.478 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.479 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.479 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.479 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.479 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.479 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.480 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.480 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.480 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.480 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.480 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.480 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.481 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.481 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.481 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.481 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.481 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.482 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.482 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.482 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.482 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.482 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.482 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.483 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.483 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.483 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.483 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.483 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.483 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.484 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.484 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.484 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.484 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.484 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.484 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.484 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.485 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.485 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.485 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.485 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.485 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.485 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.486 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.486 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.486 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.486 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.486 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.486 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.486 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.487 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.487 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.487 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.487 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.487 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.487 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.488 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.488 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.488 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.488 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.488 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.488 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.488 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.489 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.489 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.489 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.489 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.489 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.490 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.490 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.490 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.490 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.490 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.490 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.491 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.491 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.491 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.491 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.491 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.491 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.491 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.492 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.492 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.492 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.492 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.492 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.492 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.493 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.493 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.493 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.493 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.493 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.493 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.493 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.494 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.494 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.494 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.494 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.494 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.495 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.495 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.495 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.495 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.495 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.495 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.495 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.496 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.496 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.496 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.496 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.496 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.497 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.497 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.497 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.497 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.497 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.497 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.497 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.498 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.498 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.498 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.498 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.498 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.498 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.499 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.499 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.499 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.499 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.499 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.499 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.500 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.500 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.500 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.500 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.500 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.500 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.501 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.501 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.501 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.501 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.501 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.501 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.501 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.502 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.502 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.502 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.502 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.502 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.502 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.503 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.503 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.503 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.503 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.503 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.504 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.504 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.504 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.504 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.504 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.504 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.505 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.505 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.505 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.505 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.505 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.506 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.506 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.506 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.506 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.506 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.506 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.507 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.507 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.507 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.507 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.507 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.508 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.508 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.508 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.508 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.508 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.509 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.509 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.509 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.509 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.509 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.510 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.510 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.510 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.510 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.510 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.510 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.511 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.511 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.511 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.511 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.511 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.511 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.512 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.512 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.512 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.512 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.512 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.512 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.512 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.513 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.513 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.513 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.513 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.513 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.513 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.514 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.514 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.514 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.514 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.514 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.514 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.515 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.515 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.515 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.515 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.515 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.516 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.516 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.516 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.516 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.516 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.516 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.516 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.517 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.517 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.517 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.517 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.517 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.517 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.518 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.518 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.518 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.518 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.518 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.518 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.518 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.519 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.519 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.519 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.519 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.519 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.520 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.520 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.520 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.520 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.520 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.520 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.521 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.521 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.521 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.521 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.521 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.521 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.521 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.522 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.522 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.522 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.522 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.522 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.522 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.523 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.523 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.523 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.523 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.523 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.523 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.523 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.524 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.524 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.524 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.524 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.524 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.524 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.524 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.525 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.525 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.525 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.525 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.525 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.525 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.526 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.526 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.526 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.526 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.526 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.526 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.526 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.527 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.527 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.527 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.527 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.527 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.527 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.527 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.528 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.528 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.528 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.528 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.528 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.529 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.529 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.529 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.529 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.529 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.529 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.529 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.530 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.530 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.530 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.530 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.530 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.530 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.531 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.531 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.531 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.531 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.531 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.531 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.531 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.532 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.532 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.532 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.532 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.532 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.532 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.532 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.533 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.533 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.533 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.533 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.533 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.533 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.534 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.534 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.534 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.534 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.534 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.534 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.534 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.535 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.535 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.535 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.535 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.535 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.535 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.536 2 DEBUG oslo_service.service [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.537 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.555 2 DEBUG nova.virt.libvirt.host [None req-44e391fd-022e-4cf2-9f2a-505e886859e5 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.556 2 DEBUG nova.virt.libvirt.host [None req-44e391fd-022e-4cf2-9f2a-505e886859e5 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.556 2 DEBUG nova.virt.libvirt.host [None req-44e391fd-022e-4cf2-9f2a-505e886859e5 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.557 2 DEBUG nova.virt.libvirt.host [None req-44e391fd-022e-4cf2-9f2a-505e886859e5 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  7 09:51:03 np0005473739 systemd[1]: Starting libvirt QEMU daemon...
Oct  7 09:51:03 np0005473739 systemd[1]: Started libvirt QEMU daemon.
Oct  7 09:51:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.649 2 DEBUG nova.virt.libvirt.host [None req-44e391fd-022e-4cf2-9f2a-505e886859e5 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f3e5fe0b7f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.654 2 DEBUG nova.virt.libvirt.host [None req-44e391fd-022e-4cf2-9f2a-505e886859e5 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f3e5fe0b7f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.655 2 INFO nova.virt.libvirt.driver [None req-44e391fd-022e-4cf2-9f2a-505e886859e5 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.680 2 WARNING nova.virt.libvirt.driver [None req-44e391fd-022e-4cf2-9f2a-505e886859e5 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.680 2 DEBUG nova.virt.libvirt.volume.mount [None req-44e391fd-022e-4cf2-9f2a-505e886859e5 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
Oct  7 09:51:03 np0005473739 python3.9[259408]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Oct  7 09:51:03 np0005473739 systemd[1]: Stopping nova_compute container...
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.912 2 DEBUG oslo_concurrency.lockutils [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.912 2 DEBUG oslo_concurrency.lockutils [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 09:51:03 np0005473739 nova_compute[258589]: 2025-10-07 13:51:03.913 2 DEBUG oslo_concurrency.lockutils [None req-591adac8-b72c-48bb-b8fb-3041462eee1a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 09:51:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v699: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:05 np0005473739 virtqemud[259430]: libvirt version: 10.10.0, package: 15.el9 (builder@centos.org, 2025-08-18-13:22:20, )
Oct  7 09:51:05 np0005473739 virtqemud[259430]: hostname: compute-0
Oct  7 09:51:05 np0005473739 virtqemud[259430]: End of file while reading data: Input/output error
Oct  7 09:51:05 np0005473739 systemd[1]: libpod-43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e.scope: Deactivated successfully.
Oct  7 09:51:05 np0005473739 systemd[1]: libpod-43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e.scope: Consumed 3.332s CPU time.
Oct  7 09:51:05 np0005473739 podman[259464]: 2025-10-07 13:51:05.071970223 +0000 UTC m=+1.207280376 container stop 43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3)
Oct  7 09:51:05 np0005473739 conmon[258589]: conmon 43156265e92920731265 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e.scope/container/memory.events
Oct  7 09:51:05 np0005473739 podman[259464]: 2025-10-07 13:51:05.078094588 +0000 UTC m=+1.213404741 container died 43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 09:51:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e-userdata-shm.mount: Deactivated successfully.
Oct  7 09:51:05 np0005473739 podman[259488]: 2025-10-07 13:51:05.112391327 +0000 UTC m=+0.094962197 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 09:51:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-33741128ba94aef69daba965ac2b6cb9f2254da37db05f0f6f63a2d192b52e50-merged.mount: Deactivated successfully.
Oct  7 09:51:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v700: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:07 np0005473739 podman[259464]: 2025-10-07 13:51:07.41780644 +0000 UTC m=+3.553116553 container cleanup 43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 09:51:07 np0005473739 podman[259464]: nova_compute
Oct  7 09:51:07 np0005473739 podman[259521]: nova_compute
Oct  7 09:51:07 np0005473739 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Oct  7 09:51:07 np0005473739 systemd[1]: Stopped nova_compute container.
Oct  7 09:51:07 np0005473739 systemd[1]: Starting nova_compute container...
Oct  7 09:51:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:51:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33741128ba94aef69daba965ac2b6cb9f2254da37db05f0f6f63a2d192b52e50/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33741128ba94aef69daba965ac2b6cb9f2254da37db05f0f6f63a2d192b52e50/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33741128ba94aef69daba965ac2b6cb9f2254da37db05f0f6f63a2d192b52e50/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33741128ba94aef69daba965ac2b6cb9f2254da37db05f0f6f63a2d192b52e50/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33741128ba94aef69daba965ac2b6cb9f2254da37db05f0f6f63a2d192b52e50/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:07 np0005473739 podman[259534]: 2025-10-07 13:51:07.625380315 +0000 UTC m=+0.097163776 container init 43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  7 09:51:07 np0005473739 podman[259534]: 2025-10-07 13:51:07.635083535 +0000 UTC m=+0.106867006 container start 43156265e92920731265fe6c35291362797a04936fad4018b99e44485024040e (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0)
Oct  7 09:51:07 np0005473739 podman[259534]: nova_compute
Oct  7 09:51:07 np0005473739 nova_compute[259550]: + sudo -E kolla_set_configs
Oct  7 09:51:07 np0005473739 systemd[1]: Started nova_compute container.
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Validating config file
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Copying service configuration files
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Deleting /etc/nova/nova.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Copying /var/lib/kolla/config_files/03-ceph-nova.conf to /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/03-ceph-nova.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Deleting /etc/ceph
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Creating directory /etc/ceph
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /etc/ceph
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.conf to /etc/ceph/ceph.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Copying /var/lib/kolla/config_files/ceph/ceph.client.openstack.keyring to /etc/ceph/ceph.client.openstack.keyring
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Writing out command to execute
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /etc/ceph/ceph.conf
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /etc/ceph/ceph.client.openstack.keyring
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Oct  7 09:51:07 np0005473739 nova_compute[259550]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Oct  7 09:51:07 np0005473739 nova_compute[259550]: ++ cat /run_command
Oct  7 09:51:07 np0005473739 nova_compute[259550]: + CMD=nova-compute
Oct  7 09:51:07 np0005473739 nova_compute[259550]: + ARGS=
Oct  7 09:51:07 np0005473739 nova_compute[259550]: + sudo kolla_copy_cacerts
Oct  7 09:51:07 np0005473739 nova_compute[259550]: + [[ ! -n '' ]]
Oct  7 09:51:07 np0005473739 nova_compute[259550]: + . kolla_extend_start
Oct  7 09:51:07 np0005473739 nova_compute[259550]: Running command: 'nova-compute'
Oct  7 09:51:07 np0005473739 nova_compute[259550]: + echo 'Running command: '\''nova-compute'\'''
Oct  7 09:51:07 np0005473739 nova_compute[259550]: + umask 0022
Oct  7 09:51:07 np0005473739 nova_compute[259550]: + exec nova-compute
Oct  7 09:51:08 np0005473739 python3.9[259713]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Oct  7 09:51:08 np0005473739 systemd[1]: Started libpod-conmon-ffeead5bf7be88b42c7f35032f2d462f5bc55b73fcdab750bdfd9f448084fee9.scope.
Oct  7 09:51:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:51:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db320d6ede9b64b3e0ee5031e178257514780ec16e815fb0fd434579f26125f0/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db320d6ede9b64b3e0ee5031e178257514780ec16e815fb0fd434579f26125f0/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db320d6ede9b64b3e0ee5031e178257514780ec16e815fb0fd434579f26125f0/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:08 np0005473739 podman[259741]: 2025-10-07 13:51:08.673048901 +0000 UTC m=+0.153172387 container init ffeead5bf7be88b42c7f35032f2d462f5bc55b73fcdab750bdfd9f448084fee9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:51:08 np0005473739 podman[259741]: 2025-10-07 13:51:08.684643102 +0000 UTC m=+0.164766588 container start ffeead5bf7be88b42c7f35032f2d462f5bc55b73fcdab750bdfd9f448084fee9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:51:08 np0005473739 python3.9[259713]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Applying nova statedir ownership
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Oct  7 09:51:08 np0005473739 nova_compute_init[259762]: INFO:nova_statedir:Nova statedir ownership complete
Oct  7 09:51:08 np0005473739 systemd[1]: libpod-ffeead5bf7be88b42c7f35032f2d462f5bc55b73fcdab750bdfd9f448084fee9.scope: Deactivated successfully.
Oct  7 09:51:08 np0005473739 podman[259777]: 2025-10-07 13:51:08.771206642 +0000 UTC m=+0.023531481 container died ffeead5bf7be88b42c7f35032f2d462f5bc55b73fcdab750bdfd9f448084fee9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 09:51:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ffeead5bf7be88b42c7f35032f2d462f5bc55b73fcdab750bdfd9f448084fee9-userdata-shm.mount: Deactivated successfully.
Oct  7 09:51:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-db320d6ede9b64b3e0ee5031e178257514780ec16e815fb0fd434579f26125f0-merged.mount: Deactivated successfully.
Oct  7 09:51:08 np0005473739 podman[259777]: 2025-10-07 13:51:08.813198928 +0000 UTC m=+0.065523737 container cleanup ffeead5bf7be88b42c7f35032f2d462f5bc55b73fcdab750bdfd9f448084fee9 (image=quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844, name=nova_compute_init, container_name=nova_compute_init, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute@sha256:5f179b847f2dc32d9110b8f2be9fe65f1aeada1e18105dffdaf052981215d844', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:51:08 np0005473739 systemd[1]: libpod-conmon-ffeead5bf7be88b42c7f35032f2d462f5bc55b73fcdab750bdfd9f448084fee9.scope: Deactivated successfully.
Oct  7 09:51:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v701: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:09 np0005473739 systemd[1]: session-51.scope: Deactivated successfully.
Oct  7 09:51:09 np0005473739 systemd[1]: session-51.scope: Consumed 3min 2.597s CPU time.
Oct  7 09:51:09 np0005473739 systemd-logind[801]: Session 51 logged out. Waiting for processes to exit.
Oct  7 09:51:09 np0005473739 systemd-logind[801]: Removed session 51.
Oct  7 09:51:09 np0005473739 nova_compute[259550]: 2025-10-07 13:51:09.782 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  7 09:51:09 np0005473739 nova_compute[259550]: 2025-10-07 13:51:09.783 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  7 09:51:09 np0005473739 nova_compute[259550]: 2025-10-07 13:51:09.783 2 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Oct  7 09:51:09 np0005473739 nova_compute[259550]: 2025-10-07 13:51:09.783 2 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
Oct  7 09:51:09 np0005473739 nova_compute[259550]: 2025-10-07 13:51:09.946 2 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:51:09 np0005473739 nova_compute[259550]: 2025-10-07 13:51:09.972 2 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 0 in 0.026s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.426 2 INFO nova.virt.driver [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.566 2 INFO nova.compute.provider_config [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.579 2 DEBUG oslo_concurrency.lockutils [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.579 2 DEBUG oslo_concurrency.lockutils [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.579 2 DEBUG oslo_concurrency.lockutils [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.580 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.580 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.580 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.580 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.580 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.580 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.580 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.581 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.581 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.581 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.581 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.581 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.581 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.582 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.582 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.582 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.582 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.582 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.583 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.583 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.583 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.583 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.583 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.583 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.584 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.584 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.584 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.584 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.584 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.585 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.585 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.585 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.585 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.585 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.585 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.586 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.586 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.586 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.586 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.586 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.587 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.587 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.587 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.587 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.587 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.588 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.588 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.588 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.588 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.588 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.589 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.589 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.589 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.589 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.589 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.589 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.589 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.590 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.590 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.590 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.590 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.590 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.590 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.590 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.591 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.591 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.591 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.591 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.591 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.591 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.591 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.591 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.592 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.592 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.592 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.592 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.592 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.592 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.592 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.593 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.593 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.593 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.593 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.593 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.593 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.593 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.594 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.594 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.594 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.594 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.594 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.594 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.594 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.595 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.595 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.595 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.595 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.595 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.595 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.596 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.596 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.596 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.596 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.596 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.596 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.596 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.597 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.597 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.597 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.597 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.597 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.597 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.597 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.597 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.598 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.598 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.598 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.598 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.598 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.598 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.598 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.599 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.599 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.599 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.599 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.599 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.599 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.599 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.600 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.600 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.600 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.600 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.600 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.600 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.600 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.601 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.601 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.601 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.601 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.601 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.601 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.601 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.602 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.602 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.602 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.602 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.602 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.602 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.602 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.603 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.603 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.603 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.603 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.603 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.603 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.603 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.604 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.604 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.604 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.604 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.604 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.604 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.605 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.605 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.605 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.605 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.605 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.605 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.605 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.606 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.606 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.606 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.606 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.606 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.606 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.607 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.607 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.607 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.607 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.607 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.607 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.607 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.608 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.608 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.608 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.608 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.608 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.608 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.608 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.609 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.609 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.609 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.609 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.609 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.609 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.609 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.610 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.610 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.610 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.610 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.610 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.610 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.610 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.611 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.611 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.611 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.611 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.611 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.611 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.612 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.612 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.612 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.612 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.612 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.612 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.612 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.613 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.613 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.613 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.613 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.613 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.613 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.613 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.613 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.614 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.614 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.614 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.614 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.614 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.614 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.614 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.615 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.615 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.615 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.615 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.615 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.615 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.615 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.616 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.616 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.616 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.616 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.616 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.616 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.616 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.617 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.617 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.617 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.617 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.617 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.617 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.617 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.618 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.618 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.618 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.618 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.618 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.618 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.618 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.619 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.619 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.619 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.619 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.619 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.619 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.620 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.620 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.620 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.620 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.620 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.620 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.621 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.621 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.621 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.621 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.621 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.621 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.622 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.622 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.622 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.622 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.622 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.623 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.623 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.623 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.623 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.623 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.623 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.624 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.624 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.624 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.624 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.624 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.625 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.625 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.625 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.625 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.625 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.625 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.626 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.626 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.626 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.626 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.626 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.626 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.627 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.627 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.627 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.627 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.627 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.628 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.628 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.628 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.628 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.628 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.628 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.628 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.629 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.629 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.629 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.629 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.629 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.629 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.629 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.630 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.630 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.630 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.630 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.630 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.630 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.630 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.631 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.631 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.631 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.631 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.631 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.631 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.632 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.632 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.632 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.632 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.632 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.632 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.632 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.633 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.633 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.633 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.633 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.633 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.633 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.634 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.634 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.634 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.634 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.634 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.634 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.635 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.635 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.635 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.635 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.635 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.635 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.636 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.636 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.636 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.636 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.636 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.637 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.637 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.637 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.637 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.637 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.637 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.637 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.638 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.638 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.638 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.638 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.638 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.638 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.638 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.639 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.639 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.639 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.639 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.639 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.639 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.640 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.640 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.640 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.640 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.640 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.640 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.641 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.641 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.641 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.641 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.641 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.641 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.641 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.642 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.642 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.642 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.642 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.642 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.642 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.642 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.643 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.643 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.643 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.643 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.643 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.643 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.644 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.644 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.644 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.644 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.644 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.644 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.644 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.645 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.645 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.645 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.645 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.645 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.645 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.645 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.646 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.646 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.646 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.646 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.646 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.646 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.646 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.647 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.647 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.647 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.647 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.647 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.647 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.647 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.648 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.648 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.648 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.648 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.648 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.648 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.648 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.649 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.649 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.649 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.649 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.649 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.649 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.650 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.650 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.650 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.650 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.650 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.650 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.650 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.651 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.651 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.651 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.651 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.651 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.651 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.651 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.652 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.images_rbd_ceph_conf   = /etc/ceph/ceph.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.652 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.652 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.652 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.images_rbd_glance_store_name = default_backend log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.652 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.images_rbd_pool        = vms log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.652 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.images_type            = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.653 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.653 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.653 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.653 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.653 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.653 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.654 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.654 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.654 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.654 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.654 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.654 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.654 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.655 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.655 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.655 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.655 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.656 2 WARNING oslo_config.cfg [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Oct  7 09:51:10 np0005473739 nova_compute[259550]: live_migration_uri is deprecated for removal in favor of two other options that
Oct  7 09:51:10 np0005473739 nova_compute[259550]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Oct  7 09:51:10 np0005473739 nova_compute[259550]: and ``live_migration_inbound_addr`` respectively.
Oct  7 09:51:10 np0005473739 nova_compute[259550]: ).  Its value may be silently ignored in the future.#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.656 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.656 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.656 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.656 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.657 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.657 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.657 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.657 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.657 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.657 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.658 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.658 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.658 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.658 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.658 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.659 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.659 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.659 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.659 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.rbd_secret_uuid        = 82044f27-a8da-5b2a-a297-ff6afc620e1f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.659 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.rbd_user               = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.659 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.659 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.660 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.660 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.660 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.660 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.660 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.660 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.660 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.661 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.661 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.661 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.661 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.661 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.661 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.662 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.662 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.662 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.662 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.662 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.662 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.662 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.663 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.663 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.663 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.663 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.663 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.663 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.664 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.664 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.664 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.664 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.664 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.664 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.664 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.665 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.665 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.665 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.665 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.665 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.665 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.665 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.666 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.666 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.666 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.666 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.666 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.666 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.666 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.667 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.667 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.667 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.667 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.667 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.667 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.668 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.668 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.668 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.668 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.668 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.668 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.668 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.669 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.669 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.669 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.669 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.669 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.669 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.669 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.670 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.670 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.670 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.670 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.670 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.670 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.670 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.671 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.671 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.671 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.671 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.671 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.671 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.671 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.672 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.672 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.672 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.672 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.672 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.672 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.673 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.673 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.673 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.673 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.673 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.674 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.674 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.674 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.674 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.674 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.674 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.674 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.675 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.675 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.675 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.675 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.675 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.676 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.676 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.676 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.676 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.676 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.676 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.677 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.677 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.677 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.677 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.677 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.677 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.677 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.678 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.678 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.678 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.678 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.678 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.679 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.679 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.679 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.679 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.679 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.679 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.679 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.680 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.680 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.680 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.680 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.680 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.680 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.680 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.681 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.681 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.681 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.681 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.681 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.681 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.682 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.682 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.682 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.682 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.682 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.682 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.682 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.683 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.683 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.683 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.683 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.683 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.683 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.683 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.684 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.684 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.684 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.684 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.684 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.685 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.685 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.685 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.685 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.685 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.685 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.686 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.686 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.686 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.686 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.686 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.686 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.686 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.687 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.687 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.687 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.687 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.687 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.687 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.688 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.688 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.688 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.688 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.688 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.688 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.688 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.688 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.689 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.689 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.689 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.689 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.689 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.689 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.690 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.690 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.690 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.690 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.690 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.690 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.690 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.691 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.691 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.691 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.691 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.691 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.691 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.691 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.692 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.692 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.692 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.692 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.692 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.692 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.692 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.692 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.693 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.693 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.693 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.693 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.693 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.693 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.693 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.694 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.694 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.694 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.694 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.694 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.694 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.695 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.695 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.695 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.695 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.695 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.695 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.696 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.696 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.696 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.696 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.696 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.696 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.696 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.696 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.697 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.697 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.697 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.697 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.697 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.697 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.697 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.698 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.698 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.698 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.698 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.698 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.698 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.698 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.699 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.699 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.699 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.699 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.699 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.699 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.699 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.700 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.700 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.700 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.700 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.700 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.700 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.701 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.701 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.701 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.701 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.701 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.701 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.701 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.702 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.702 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.702 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.702 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.702 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.702 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.703 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.703 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.703 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.703 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.703 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.703 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.703 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.704 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.704 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.704 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.704 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.704 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.704 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.704 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.705 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.705 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.705 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.705 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.705 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.705 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.705 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.706 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.706 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.706 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.706 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.706 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.706 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.706 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.707 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.707 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.707 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.707 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.707 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.707 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.707 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.708 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.708 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.708 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.708 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.708 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.708 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.708 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.709 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.709 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.709 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.709 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.709 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.709 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.709 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.710 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.710 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.710 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.710 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.710 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.710 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.710 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.710 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.711 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.711 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.711 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.711 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.711 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.711 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.712 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.712 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.712 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.712 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.712 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.712 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.712 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.712 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.713 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.713 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.713 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.713 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.713 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.713 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.713 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.714 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.714 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.714 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.714 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.714 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.714 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.714 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.715 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.715 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.715 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.715 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.715 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.715 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.715 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.716 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.716 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.716 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.716 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.716 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.716 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.716 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.717 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.717 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.717 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.717 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.717 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.717 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.717 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.718 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.718 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.718 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.718 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.718 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.718 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.718 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.719 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.719 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.719 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.719 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.719 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.719 2 DEBUG oslo_service.service [None req-db0b25bb-700f-4fb8-8f42-eba5ce59d47a - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.720 2 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.732 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.733 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.733 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.733 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.745 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f6d19343100> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.748 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f6d19343100> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.749 2 INFO nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Connection event '1' reason 'None'#033[00m
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.755 2 INFO nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Libvirt host capabilities <capabilities>
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <host>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <uuid>955d3c0d-1dc5-4415-a876-b62089e34180</uuid>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <cpu>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <arch>x86_64</arch>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model>EPYC-Rome-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <vendor>AMD</vendor>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <microcode version='16777317'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <signature family='23' model='49' stepping='0'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <maxphysaddr mode='emulate' bits='40'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='x2apic'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='tsc-deadline'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='osxsave'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='hypervisor'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='tsc_adjust'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='spec-ctrl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='stibp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='arch-capabilities'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='ssbd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='cmp_legacy'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='topoext'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='virt-ssbd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='lbrv'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='tsc-scale'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='vmcb-clean'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='pause-filter'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='pfthreshold'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='svme-addr-chk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='rdctl-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='skip-l1dfl-vmentry'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='mds-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature name='pschange-mc-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <pages unit='KiB' size='4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <pages unit='KiB' size='2048'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <pages unit='KiB' size='1048576'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </cpu>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <power_management>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <suspend_mem/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </power_management>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <iommu support='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <migration_features>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <live/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <uri_transports>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <uri_transport>tcp</uri_transport>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <uri_transport>rdma</uri_transport>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </uri_transports>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </migration_features>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <topology>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <cells num='1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <cell id='0'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:          <memory unit='KiB'>7864108</memory>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:          <pages unit='KiB' size='4'>1966027</pages>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:          <pages unit='KiB' size='2048'>0</pages>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:          <pages unit='KiB' size='1048576'>0</pages>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:          <distances>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:            <sibling id='0' value='10'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:          </distances>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:          <cpus num='8'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:          </cpus>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        </cell>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </cells>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </topology>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <cache>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </cache>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <secmodel>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model>selinux</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <doi>0</doi>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </secmodel>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <secmodel>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model>dac</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <doi>0</doi>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <baselabel type='kvm'>+107:+107</baselabel>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <baselabel type='qemu'>+107:+107</baselabel>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </secmodel>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </host>
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <guest>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <os_type>hvm</os_type>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <arch name='i686'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <wordsize>32</wordsize>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <domain type='qemu'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <domain type='kvm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </arch>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <features>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <pae/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <nonpae/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <acpi default='on' toggle='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <apic default='on' toggle='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <cpuselection/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <deviceboot/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <disksnapshot default='on' toggle='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <externalSnapshot/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </features>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </guest>
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <guest>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <os_type>hvm</os_type>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <arch name='x86_64'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <wordsize>64</wordsize>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine canonical='pc-q35-rhel9.6.0' maxCpus='4096'>q35</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <domain type='qemu'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <domain type='kvm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </arch>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <features>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <acpi default='on' toggle='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <apic default='on' toggle='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <cpuselection/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <deviceboot/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <disksnapshot default='on' toggle='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <externalSnapshot/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </features>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </guest>
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 
Oct  7 09:51:10 np0005473739 nova_compute[259550]: </capabilities>
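The `<capabilities>` document that ends above is what libvirt hands back to nova-compute; the driver parses details such as the host NUMA topology (one cell, 7864108 KiB, 8 CPUs here) out of it. A minimal illustrative sketch of that parsing, using only the standard library and an abridged copy of the XML from this log (the function name and the trimmed XML are assumptions for the example, not nova's actual code path):

```python
# Sketch: extract host NUMA cell info from libvirt capabilities XML.
# CAPS_XML is abridged from the capabilities payload logged above.
import xml.etree.ElementTree as ET

CAPS_XML = """
<capabilities>
  <host>
    <topology>
      <cells num='1'>
        <cell id='0'>
          <memory unit='KiB'>7864108</memory>
          <cpus num='8'/>
        </cell>
      </cells>
    </topology>
  </host>
</capabilities>
"""

def numa_cells(caps_xml):
    """Return a list of (cell_id, memory_kib, num_cpus) per NUMA cell."""
    root = ET.fromstring(caps_xml)
    cells = []
    for cell in root.findall('./host/topology/cells/cell'):
        mem_kib = int(cell.findtext('memory'))
        num_cpus = int(cell.find('cpus').get('num'))
        cells.append((int(cell.get('id')), mem_kib, num_cpus))
    return cells

print(numa_cells(CAPS_XML))  # [(0, 7864108, 8)]
```

In a live deployment the same XML string would come from `virConnectGetCapabilities` over the libvirt connection rather than a literal.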
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.760 2 WARNING nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.760 2 DEBUG nova.virt.libvirt.volume.mount [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.763 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.793 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Oct  7 09:51:10 np0005473739 nova_compute[259550]: <domainCapabilities>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <path>/usr/libexec/qemu-kvm</path>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <domain>kvm</domain>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <arch>i686</arch>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <vcpu max='240'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <iothreads supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <os supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <enum name='firmware'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <loader supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>rom</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>pflash</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='readonly'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>yes</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>no</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='secure'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>no</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </loader>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </os>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <cpu>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='host-passthrough' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='hostPassthroughMigratable'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>on</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>off</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='maximum' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='maximumMigratable'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>on</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>off</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='host-model' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <vendor>AMD</vendor>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='x2apic'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc-deadline'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='hypervisor'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc_adjust'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='spec-ctrl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='stibp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='arch-capabilities'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='ssbd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='cmp_legacy'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='overflow-recov'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='succor'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='ibrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='amd-ssbd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='virt-ssbd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='lbrv'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc-scale'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='vmcb-clean'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='flushbyasid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='pause-filter'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='pfthreshold'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='svme-addr-chk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='rdctl-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='mds-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='pschange-mc-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='gds-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='rfds-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='disable' name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='custom' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v5'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Dhyana-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Genoa'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='auto-ibrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Genoa-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='auto-ibrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10-128'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10-256'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10-512'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v5'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v6'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v7'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='KnightsMill'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4fmaps'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4vnniw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512er'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512pf'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='KnightsMill-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4fmaps'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4vnniw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512er'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512pf'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G4-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G5'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tbm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G5-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tbm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SierraForest'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ne-convert'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cmpccxadd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SierraForest-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ne-convert'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cmpccxadd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v5'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Snowridge'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='athlon'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='athlon-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='core2duo'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='core2duo-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='coreduo'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='coreduo-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='n270'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='n270-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='phenom'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='phenom-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <memoryBacking supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <enum name='sourceType'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <value>file</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <value>anonymous</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <value>memfd</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </memoryBacking>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <devices>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <disk supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='diskDevice'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>disk</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>cdrom</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>floppy</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>lun</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='bus'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>ide</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>fdc</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>scsi</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>sata</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio-transitional</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio-non-transitional</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </disk>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <graphics supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>vnc</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>egl-headless</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>dbus</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <video supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='modelType'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>vga</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>cirrus</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>none</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>bochs</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>ramfb</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </video>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <hostdev supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='mode'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>subsystem</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='startupPolicy'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>default</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>mandatory</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>requisite</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>optional</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='subsysType'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>pci</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>scsi</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='capsType'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='pciBackend'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </hostdev>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <rng supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio-transitional</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio-non-transitional</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>random</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>egd</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>builtin</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </rng>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <filesystem supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='driverType'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>path</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>handle</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtiofs</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </filesystem>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <tpm supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>tpm-tis</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>tpm-crb</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>emulator</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>external</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='backendVersion'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>2.0</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </tpm>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <redirdev supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='bus'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </redirdev>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <channel supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>pty</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>unix</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </channel>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <crypto supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='model'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>qemu</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>builtin</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </crypto>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <interface supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='backendType'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>default</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>passt</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </interface>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <panic supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>isa</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>hyperv</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </panic>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </devices>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <features>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <gic supported='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <vmcoreinfo supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <genid supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <backingStoreInput supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <backup supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <async-teardown supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <ps2 supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <sev supported='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <sgx supported='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <hyperv supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='features'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>relaxed</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>vapic</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>spinlocks</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>vpindex</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>runtime</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>synic</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>stimer</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>reset</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>vendor_id</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>frequencies</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>reenlightenment</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>tlbflush</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>ipi</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>avic</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>emsr_bitmap</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>xmm_input</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </hyperv>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <launchSecurity supported='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </features>
Oct  7 09:51:10 np0005473739 nova_compute[259550]: </domainCapabilities>
Oct  7 09:51:10 np0005473739 nova_compute[259550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.800 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Oct  7 09:51:10 np0005473739 nova_compute[259550]: <domainCapabilities>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <path>/usr/libexec/qemu-kvm</path>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <domain>kvm</domain>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <arch>i686</arch>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <vcpu max='4096'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <iothreads supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <os supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <enum name='firmware'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <loader supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>rom</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>pflash</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='readonly'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>yes</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>no</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='secure'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>no</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </loader>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </os>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <cpu>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='host-passthrough' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='hostPassthroughMigratable'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>on</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>off</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='maximum' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='maximumMigratable'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>on</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>off</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='host-model' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <vendor>AMD</vendor>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='x2apic'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc-deadline'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='hypervisor'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc_adjust'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='spec-ctrl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='stibp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='arch-capabilities'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='ssbd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='cmp_legacy'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='overflow-recov'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='succor'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='ibrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='amd-ssbd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='virt-ssbd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='lbrv'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc-scale'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='vmcb-clean'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='flushbyasid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='pause-filter'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='pfthreshold'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='svme-addr-chk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='rdctl-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='mds-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='pschange-mc-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='gds-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='rfds-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='disable' name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='custom' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v5'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Dhyana-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Genoa'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='auto-ibrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Genoa-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='auto-ibrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10-128'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10-256'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10-512'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v5'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v6'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v7'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='KnightsMill'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4fmaps'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4vnniw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512er'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512pf'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='KnightsMill-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4fmaps'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4vnniw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512er'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512pf'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G4-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G5'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tbm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G5-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tbm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SierraForest'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ne-convert'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cmpccxadd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SierraForest-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ne-convert'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cmpccxadd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v5'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Snowridge'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='athlon'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='athlon-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='core2duo'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='core2duo-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='coreduo'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='coreduo-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='n270'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='n270-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='phenom'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='phenom-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <memoryBacking supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <enum name='sourceType'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <value>file</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <value>anonymous</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <value>memfd</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </memoryBacking>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <devices>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <disk supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='diskDevice'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>disk</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>cdrom</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>floppy</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>lun</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='bus'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>fdc</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>scsi</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>sata</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio-transitional</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio-non-transitional</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </disk>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <graphics supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>vnc</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>egl-headless</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>dbus</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <video supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='modelType'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>vga</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>cirrus</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>none</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>bochs</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>ramfb</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </video>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <hostdev supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='mode'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>subsystem</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='startupPolicy'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>default</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>mandatory</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>requisite</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>optional</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='subsysType'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>pci</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>scsi</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='capsType'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='pciBackend'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </hostdev>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <rng supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio-transitional</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtio-non-transitional</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>random</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>egd</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>builtin</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </rng>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <filesystem supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='driverType'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>path</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>handle</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>virtiofs</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </filesystem>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <tpm supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>tpm-tis</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>tpm-crb</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>emulator</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>external</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='backendVersion'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>2.0</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </tpm>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <redirdev supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='bus'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </redirdev>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <channel supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>pty</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>unix</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </channel>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <crypto supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='model'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>qemu</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>builtin</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </crypto>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <interface supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='backendType'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>default</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>passt</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </interface>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <panic supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>isa</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>hyperv</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </panic>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </devices>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <features>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <gic supported='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <vmcoreinfo supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <genid supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <backingStoreInput supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <backup supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <async-teardown supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <ps2 supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <sev supported='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <sgx supported='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <hyperv supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='features'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>relaxed</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>vapic</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>spinlocks</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>vpindex</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>runtime</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>synic</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>stimer</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>reset</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>vendor_id</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>frequencies</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>reenlightenment</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>tlbflush</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>ipi</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>avic</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>emsr_bitmap</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>xmm_input</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </hyperv>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <launchSecurity supported='no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </features>
Oct  7 09:51:10 np0005473739 nova_compute[259550]: </domainCapabilities>
Oct  7 09:51:10 np0005473739 nova_compute[259550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.843 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Oct  7 09:51:10 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.848 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Oct  7 09:51:10 np0005473739 nova_compute[259550]: <domainCapabilities>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <path>/usr/libexec/qemu-kvm</path>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <domain>kvm</domain>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <machine>pc-i440fx-rhel7.6.0</machine>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <arch>x86_64</arch>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <vcpu max='240'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <iothreads supported='yes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <os supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <enum name='firmware'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <loader supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>rom</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>pflash</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='readonly'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>yes</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>no</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='secure'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>no</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </loader>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  </os>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:  <cpu>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='host-passthrough' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='hostPassthroughMigratable'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>on</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>off</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='maximum' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <enum name='maximumMigratable'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>on</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <value>off</value>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='host-model' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <vendor>AMD</vendor>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='x2apic'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc-deadline'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='hypervisor'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc_adjust'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='spec-ctrl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='stibp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='arch-capabilities'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='ssbd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='cmp_legacy'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='overflow-recov'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='succor'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='ibrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='amd-ssbd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='virt-ssbd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='lbrv'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc-scale'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='vmcb-clean'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='flushbyasid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='pause-filter'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='pfthreshold'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='svme-addr-chk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='rdctl-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='mds-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='pschange-mc-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='gds-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='require' name='rfds-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <feature policy='disable' name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:    <mode name='custom' supported='yes'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v5'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Dhyana-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Genoa'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='auto-ibrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Genoa-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='auto-ibrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='EPYC-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10-128'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10-256'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx10-512'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-noTSX'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v5'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v6'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v7'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='KnightsMill'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4fmaps'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4vnniw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512er'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512pf'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='KnightsMill-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4fmaps'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-4vnniw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512er'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512pf'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G4'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G4-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G5'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tbm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G5-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tbm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v2'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v3'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SierraForest'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ne-convert'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cmpccxadd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='SierraForest-v1'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ifma'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-ne-convert'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='avx-vnni-int8'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='cmpccxadd'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-IBRS'>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:10 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v5'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Snowridge'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='athlon'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='athlon-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='core2duo'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='core2duo-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='coreduo'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='coreduo-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='n270'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='n270-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='phenom'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='phenom-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <memoryBacking supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <enum name='sourceType'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <value>file</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <value>anonymous</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <value>memfd</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  </memoryBacking>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <devices>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <disk supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='diskDevice'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>disk</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>cdrom</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>floppy</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>lun</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='bus'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>ide</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>fdc</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>scsi</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>sata</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio-transitional</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio-non-transitional</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </disk>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <graphics supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>vnc</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>egl-headless</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>dbus</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <video supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='modelType'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>vga</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>cirrus</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>none</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>bochs</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>ramfb</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </video>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <hostdev supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='mode'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>subsystem</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='startupPolicy'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>default</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>mandatory</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>requisite</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>optional</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='subsysType'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>pci</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>scsi</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='capsType'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='pciBackend'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </hostdev>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <rng supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio-transitional</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio-non-transitional</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>random</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>egd</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>builtin</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </rng>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <filesystem supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='driverType'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>path</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>handle</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtiofs</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </filesystem>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <tpm supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>tpm-tis</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>tpm-crb</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>emulator</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>external</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='backendVersion'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>2.0</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </tpm>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <redirdev supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='bus'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </redirdev>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <channel supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>pty</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>unix</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </channel>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <crypto supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='model'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>qemu</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>builtin</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </crypto>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <interface supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='backendType'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>default</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>passt</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </interface>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <panic supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>isa</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>hyperv</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </panic>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  </devices>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <features>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <gic supported='no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <vmcoreinfo supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <genid supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <backingStoreInput supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <backup supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <async-teardown supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <ps2 supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <sev supported='no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <sgx supported='no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <hyperv supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='features'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>relaxed</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>vapic</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>spinlocks</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>vpindex</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>runtime</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>synic</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>stimer</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>reset</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>vendor_id</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>frequencies</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>reenlightenment</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>tlbflush</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>ipi</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>avic</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>emsr_bitmap</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>xmm_input</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </hyperv>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <launchSecurity supported='no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  </features>
Oct  7 09:51:11 np0005473739 nova_compute[259550]: </domainCapabilities>
Oct  7 09:51:11 np0005473739 nova_compute[259550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.918 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Oct  7 09:51:11 np0005473739 nova_compute[259550]: <domainCapabilities>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <path>/usr/libexec/qemu-kvm</path>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <domain>kvm</domain>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <machine>pc-q35-rhel9.6.0</machine>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <arch>x86_64</arch>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <vcpu max='4096'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <iothreads supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <os supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <enum name='firmware'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <value>efi</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <loader supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>rom</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>pflash</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='readonly'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>yes</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>no</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='secure'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>yes</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>no</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </loader>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  </os>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <cpu>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <mode name='host-passthrough' supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='hostPassthroughMigratable'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>on</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>off</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <mode name='maximum' supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='maximumMigratable'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>on</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>off</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <mode name='host-model' supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model fallback='forbid'>EPYC-Rome</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <vendor>AMD</vendor>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <maxphysaddr mode='passthrough' limit='40'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='x2apic'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc-deadline'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='hypervisor'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc_adjust'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='spec-ctrl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='stibp'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='arch-capabilities'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='ssbd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='cmp_legacy'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='overflow-recov'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='succor'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='ibrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='amd-ssbd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='virt-ssbd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='lbrv'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='tsc-scale'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='vmcb-clean'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='flushbyasid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='pause-filter'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='pfthreshold'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='svme-addr-chk'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='lfence-always-serializing'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='rdctl-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='mds-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='pschange-mc-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='gds-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='require' name='rfds-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <feature policy='disable' name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <mode name='custom' supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Broadwell'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-noTSX'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-noTSX-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Broadwell-v4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-noTSX'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Cascadelake-Server-v5'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Cooperlake-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Denverton'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Denverton-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Dhyana-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Genoa'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='auto-ibrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Genoa-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='auto-ibrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Milan-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amd-psfd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='no-nested-data-bp'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='null-sel-clr-base'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='stibp-always-on'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='EPYC-Rome-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='EPYC-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='EPYC-v4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='GraniteRapids-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-fp16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx10'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx10-128'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx10-256'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx10-512'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='prefetchiti'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Haswell'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Haswell-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Haswell-noTSX'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Haswell-noTSX-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Haswell-v4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-noTSX'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v702: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v5'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v6'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Icelake-Server-v7'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='IvyBridge-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='KnightsMill'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-4fmaps'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-4vnniw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512er'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512pf'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='KnightsMill-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-4fmaps'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-4vnniw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512er'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512pf'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G4-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G5'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='tbm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Opteron_G5-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fma4'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='tbm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xop'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='SapphireRapids-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-int8'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='amx-tile'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-bf16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-fp16'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512-vpopcntdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bitalg'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vbmi2'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrc'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fzrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='la57'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='taa-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='tsx-ldtrk'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xfd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='SierraForest'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-ne-convert'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-vnni-int8'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cmpccxadd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='SierraForest-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-ifma'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-ne-convert'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-vnni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx-vnni-int8'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='bus-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cmpccxadd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fbsdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='fsrs'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ibrs-all'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='mcdt-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pbrsb-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='psdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='sbdr-ssdp-no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='serialize'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vaes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='vpclmulqdq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Client-v4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='hle'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='rtm'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Skylake-Server-v5'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512bw'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512cd'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512dq'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512f'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='avx512vl'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='invpcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pcid'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='pku'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Snowridge'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='mpx'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v2'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v3'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='core-capability'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='split-lock-detect'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='Snowridge-v4'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='cldemote'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='erms'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='gfni'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdir64b'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='movdiri'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='xsaves'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='athlon'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='athlon-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='core2duo'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='core2duo-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='coreduo'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='coreduo-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='n270'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='n270-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='ss'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='phenom'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <blockers model='phenom-v1'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnow'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <feature name='3dnowext'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </blockers>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </mode>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <memoryBacking supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <enum name='sourceType'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <value>file</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <value>anonymous</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <value>memfd</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  </memoryBacking>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <devices>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <disk supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='diskDevice'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>disk</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>cdrom</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>floppy</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>lun</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='bus'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>fdc</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>scsi</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>sata</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio-transitional</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio-non-transitional</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </disk>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <graphics supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>vnc</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>egl-headless</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>dbus</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <video supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='modelType'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>vga</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>cirrus</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>none</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>bochs</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>ramfb</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </video>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <hostdev supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='mode'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>subsystem</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='startupPolicy'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>default</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>mandatory</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>requisite</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>optional</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='subsysType'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>pci</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>scsi</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='capsType'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='pciBackend'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </hostdev>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <rng supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio-transitional</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtio-non-transitional</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>random</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>egd</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>builtin</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </rng>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <filesystem supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='driverType'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>path</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>handle</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>virtiofs</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </filesystem>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <tpm supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>tpm-tis</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>tpm-crb</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>emulator</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>external</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='backendVersion'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>2.0</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </tpm>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <redirdev supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='bus'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>usb</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </redirdev>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <channel supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>pty</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>unix</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </channel>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <crypto supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='model'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='type'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>qemu</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='backendModel'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>builtin</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </crypto>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <interface supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='backendType'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>default</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>passt</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </interface>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <panic supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='model'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>isa</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>hyperv</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </panic>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  </devices>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  <features>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <gic supported='no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <vmcoreinfo supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <genid supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <backingStoreInput supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <backup supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <async-teardown supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <ps2 supported='yes'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <sev supported='no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <sgx supported='no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <hyperv supported='yes'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      <enum name='features'>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>relaxed</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>vapic</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>spinlocks</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>vpindex</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>runtime</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>synic</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>stimer</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>reset</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>vendor_id</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>frequencies</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>reenlightenment</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>tlbflush</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>ipi</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>avic</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>emsr_bitmap</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:        <value>xmm_input</value>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:      </enum>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    </hyperv>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:    <launchSecurity supported='no'/>
Oct  7 09:51:11 np0005473739 nova_compute[259550]:  </features>
Oct  7 09:51:11 np0005473739 nova_compute[259550]: </domainCapabilities>
Oct  7 09:51:11 np0005473739 nova_compute[259550]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.979 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.980 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.980 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.980 2 INFO nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Secure Boot support detected#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.982 2 INFO nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.982 2 INFO nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:10.992 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.021 2 INFO nova.virt.node [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Determined node identity cc5ee907-7908-4ad9-99df-64935eda6bff from /var/lib/nova/compute_id#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.052 2 WARNING nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Compute nodes ['cc5ee907-7908-4ad9-99df-64935eda6bff'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.098 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.139 2 WARNING nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.139 2 DEBUG oslo_concurrency.lockutils [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.140 2 DEBUG oslo_concurrency.lockutils [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.140 2 DEBUG oslo_concurrency.lockutils [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.140 2 DEBUG nova.compute.resource_tracker [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.141 2 DEBUG oslo_concurrency.processutils [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:51:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:51:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985413771' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.557 2 DEBUG oslo_concurrency.processutils [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:51:11 np0005473739 systemd[1]: Starting libvirt nodedev daemon...
Oct  7 09:51:11 np0005473739 systemd[1]: Started libvirt nodedev daemon.
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.878 2 WARNING nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.880 2 DEBUG nova.compute.resource_tracker [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5187MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.881 2 DEBUG oslo_concurrency.lockutils [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.881 2 DEBUG oslo_concurrency.lockutils [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.895 2 WARNING nova.compute.resource_tracker [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] No compute node record for compute-0.ctlplane.example.com:cc5ee907-7908-4ad9-99df-64935eda6bff: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host cc5ee907-7908-4ad9-99df-64935eda6bff could not be found.#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.912 2 INFO nova.compute.resource_tracker [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: cc5ee907-7908-4ad9-99df-64935eda6bff#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.972 2 DEBUG nova.compute.resource_tracker [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 09:51:11 np0005473739 nova_compute[259550]: 2025-10-07 13:51:11.972 2 DEBUG nova.compute.resource_tracker [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 09:51:12 np0005473739 nova_compute[259550]: 2025-10-07 13:51:12.789 2 INFO nova.scheduler.client.report [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [req-37d3c54f-2347-4d31-b551-05e6fa920c76] Created resource provider record via placement API for resource provider with UUID cc5ee907-7908-4ad9-99df-64935eda6bff and name compute-0.ctlplane.example.com.#033[00m
Oct  7 09:51:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v703: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.156 2 DEBUG oslo_concurrency.processutils [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:51:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:51:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3063873845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.598 2 DEBUG oslo_concurrency.processutils [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.605 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Oct  7 09:51:13 np0005473739 nova_compute[259550]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.606 2 INFO nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] kernel doesn't support AMD SEV#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.607 2 DEBUG nova.compute.provider_tree [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.607 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 09:51:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.665 2 DEBUG nova.scheduler.client.report [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Updated inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.666 2 DEBUG nova.compute.provider_tree [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Updating resource provider cc5ee907-7908-4ad9-99df-64935eda6bff generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.666 2 DEBUG nova.compute.provider_tree [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.755 2 DEBUG nova.compute.provider_tree [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Updating resource provider cc5ee907-7908-4ad9-99df-64935eda6bff generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.779 2 DEBUG nova.compute.resource_tracker [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.779 2 DEBUG oslo_concurrency.lockutils [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.780 2 DEBUG nova.service [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.892 2 DEBUG nova.service [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Oct  7 09:51:13 np0005473739 nova_compute[259550]: 2025-10-07 13:51:13.893 2 DEBUG nova.servicegroup.drivers.db [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Oct  7 09:51:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v704: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v705: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v706: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v707: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:22 np0005473739 podman[259923]: 2025-10-07 13:51:22.07492227 +0000 UTC m=+0.057528303 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  7 09:51:22 np0005473739 podman[259922]: 2025-10-07 13:51:22.101051191 +0000 UTC m=+0.085937835 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:51:22
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'vms', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.data', 'images', 'volumes', 'default.rgw.log', 'default.rgw.control', 'cephfs.cephfs.meta', 'backups']
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:51:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:51:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v708: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v709: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v710: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:28 np0005473739 nova_compute[259550]: 2025-10-07 13:51:28.899 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:51:28 np0005473739 nova_compute[259550]: 2025-10-07 13:51:28.935 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:51:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v711: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v712: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:31 np0005473739 podman[259960]: 2025-10-07 13:51:31.157471495 +0000 UTC m=+0.140410574 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  7 09:51:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 09:51:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463226962' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 09:51:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 09:51:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3463226962' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:51:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 09:51:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2297686712' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 09:51:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 09:51:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2297686712' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 09:51:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 09:51:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1329340428' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 09:51:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 09:51:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1329340428' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 09:51:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v713: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v714: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:36 np0005473739 podman[259986]: 2025-10-07 13:51:36.102176253 +0000 UTC m=+0.082374139 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 09:51:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v715: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v716: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v717: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v718: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v719: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v720: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v721: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v722: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:51:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v723: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:53 np0005473739 podman[260007]: 2025-10-07 13:51:53.101432731 +0000 UTC m=+0.074968921 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 09:51:53 np0005473739 podman[260006]: 2025-10-07 13:51:53.109330532 +0000 UTC m=+0.087095035 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:51:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v724: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v725: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:51:58 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a9fef3e6-209c-4f5c-a6e0-03c3519febd6 does not exist
Oct  7 09:51:58 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 03e2efa4-a42a-4c7a-ae48-0dc7bb8efb87 does not exist
Oct  7 09:51:58 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7d9463ee-c120-4416-abbd-2dae83e7a8a4 does not exist
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:51:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:51:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v726: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:51:59 np0005473739 podman[260319]: 2025-10-07 13:51:59.167594755 +0000 UTC m=+0.042424418 container create 8703ab561043d09e645d245bedb07d6fe627b0b0c063bb5ab1d83fbd67c37735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 09:51:59 np0005473739 systemd[1]: Started libpod-conmon-8703ab561043d09e645d245bedb07d6fe627b0b0c063bb5ab1d83fbd67c37735.scope.
Oct  7 09:51:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:51:59 np0005473739 podman[260319]: 2025-10-07 13:51:59.151353319 +0000 UTC m=+0.026183012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:51:59 np0005473739 podman[260319]: 2025-10-07 13:51:59.25060154 +0000 UTC m=+0.125431213 container init 8703ab561043d09e645d245bedb07d6fe627b0b0c063bb5ab1d83fbd67c37735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noyce, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:51:59 np0005473739 podman[260319]: 2025-10-07 13:51:59.258113162 +0000 UTC m=+0.132942825 container start 8703ab561043d09e645d245bedb07d6fe627b0b0c063bb5ab1d83fbd67c37735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noyce, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 09:51:59 np0005473739 intelligent_noyce[260335]: 167 167
Oct  7 09:51:59 np0005473739 podman[260319]: 2025-10-07 13:51:59.264533283 +0000 UTC m=+0.139362966 container attach 8703ab561043d09e645d245bedb07d6fe627b0b0c063bb5ab1d83fbd67c37735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noyce, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:51:59 np0005473739 systemd[1]: libpod-8703ab561043d09e645d245bedb07d6fe627b0b0c063bb5ab1d83fbd67c37735.scope: Deactivated successfully.
Oct  7 09:51:59 np0005473739 conmon[260335]: conmon 8703ab561043d09e645d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8703ab561043d09e645d245bedb07d6fe627b0b0c063bb5ab1d83fbd67c37735.scope/container/memory.events
Oct  7 09:51:59 np0005473739 podman[260319]: 2025-10-07 13:51:59.266278961 +0000 UTC m=+0.141108624 container died 8703ab561043d09e645d245bedb07d6fe627b0b0c063bb5ab1d83fbd67c37735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 09:51:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-01be5e3dbf7288377b3f917b1a179218ec0a142e4eb967c43a7031992777fe0d-merged.mount: Deactivated successfully.
Oct  7 09:51:59 np0005473739 podman[260319]: 2025-10-07 13:51:59.310924237 +0000 UTC m=+0.185753900 container remove 8703ab561043d09e645d245bedb07d6fe627b0b0c063bb5ab1d83fbd67c37735 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_noyce, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 09:51:59 np0005473739 systemd[1]: libpod-conmon-8703ab561043d09e645d245bedb07d6fe627b0b0c063bb5ab1d83fbd67c37735.scope: Deactivated successfully.
Oct  7 09:51:59 np0005473739 podman[260359]: 2025-10-07 13:51:59.508841233 +0000 UTC m=+0.051611314 container create 0a34b9f6f4ddf183b3495370ae193cb0840e838a367e52923c951e02a6d43599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 09:51:59 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:51:59 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:51:59 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:51:59 np0005473739 systemd[1]: Started libpod-conmon-0a34b9f6f4ddf183b3495370ae193cb0840e838a367e52923c951e02a6d43599.scope.
Oct  7 09:51:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:51:59 np0005473739 podman[260359]: 2025-10-07 13:51:59.489256438 +0000 UTC m=+0.032026509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:51:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c046b6953e87a411c3fe494b1eb671207c680a53a95aca99fe4c22458ead82d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c046b6953e87a411c3fe494b1eb671207c680a53a95aca99fe4c22458ead82d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c046b6953e87a411c3fe494b1eb671207c680a53a95aca99fe4c22458ead82d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c046b6953e87a411c3fe494b1eb671207c680a53a95aca99fe4c22458ead82d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c046b6953e87a411c3fe494b1eb671207c680a53a95aca99fe4c22458ead82d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:51:59 np0005473739 podman[260359]: 2025-10-07 13:51:59.604424575 +0000 UTC m=+0.147194626 container init 0a34b9f6f4ddf183b3495370ae193cb0840e838a367e52923c951e02a6d43599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:51:59 np0005473739 podman[260359]: 2025-10-07 13:51:59.618117693 +0000 UTC m=+0.160887734 container start 0a34b9f6f4ddf183b3495370ae193cb0840e838a367e52923c951e02a6d43599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:51:59 np0005473739 podman[260359]: 2025-10-07 13:51:59.621942036 +0000 UTC m=+0.164712077 container attach 0a34b9f6f4ddf183b3495370ae193cb0840e838a367e52923c951e02a6d43599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 09:52:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:52:00.025 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:52:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:52:00.026 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:52:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:52:00.026 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:52:00 np0005473739 vigilant_aryabhata[260375]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:52:00 np0005473739 vigilant_aryabhata[260375]: --> relative data size: 1.0
Oct  7 09:52:00 np0005473739 vigilant_aryabhata[260375]: --> All data devices are unavailable
Oct  7 09:52:00 np0005473739 systemd[1]: libpod-0a34b9f6f4ddf183b3495370ae193cb0840e838a367e52923c951e02a6d43599.scope: Deactivated successfully.
Oct  7 09:52:00 np0005473739 podman[260359]: 2025-10-07 13:52:00.814787664 +0000 UTC m=+1.357557695 container died 0a34b9f6f4ddf183b3495370ae193cb0840e838a367e52923c951e02a6d43599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:52:00 np0005473739 systemd[1]: libpod-0a34b9f6f4ddf183b3495370ae193cb0840e838a367e52923c951e02a6d43599.scope: Consumed 1.147s CPU time.
Oct  7 09:52:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4c046b6953e87a411c3fe494b1eb671207c680a53a95aca99fe4c22458ead82d-merged.mount: Deactivated successfully.
Oct  7 09:52:00 np0005473739 podman[260359]: 2025-10-07 13:52:00.87063887 +0000 UTC m=+1.413408911 container remove 0a34b9f6f4ddf183b3495370ae193cb0840e838a367e52923c951e02a6d43599 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:52:00 np0005473739 systemd[1]: libpod-conmon-0a34b9f6f4ddf183b3495370ae193cb0840e838a367e52923c951e02a6d43599.scope: Deactivated successfully.
Oct  7 09:52:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v727: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:01 np0005473739 podman[260553]: 2025-10-07 13:52:01.453160106 +0000 UTC m=+0.037946268 container create 5b667ad4107cdb19fc4b2f3206fdda77b799b227b64e688bcf5b60848aa44bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ardinghelli, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:52:01 np0005473739 systemd[1]: Started libpod-conmon-5b667ad4107cdb19fc4b2f3206fdda77b799b227b64e688bcf5b60848aa44bb1.scope.
Oct  7 09:52:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:52:01 np0005473739 podman[260553]: 2025-10-07 13:52:01.509085156 +0000 UTC m=+0.093871338 container init 5b667ad4107cdb19fc4b2f3206fdda77b799b227b64e688bcf5b60848aa44bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ardinghelli, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 09:52:01 np0005473739 podman[260553]: 2025-10-07 13:52:01.517978805 +0000 UTC m=+0.102764967 container start 5b667ad4107cdb19fc4b2f3206fdda77b799b227b64e688bcf5b60848aa44bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ardinghelli, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:52:01 np0005473739 podman[260553]: 2025-10-07 13:52:01.522511336 +0000 UTC m=+0.107297498 container attach 5b667ad4107cdb19fc4b2f3206fdda77b799b227b64e688bcf5b60848aa44bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ardinghelli, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 09:52:01 np0005473739 gifted_ardinghelli[260571]: 167 167
Oct  7 09:52:01 np0005473739 systemd[1]: libpod-5b667ad4107cdb19fc4b2f3206fdda77b799b227b64e688bcf5b60848aa44bb1.scope: Deactivated successfully.
Oct  7 09:52:01 np0005473739 podman[260553]: 2025-10-07 13:52:01.5241725 +0000 UTC m=+0.108958672 container died 5b667ad4107cdb19fc4b2f3206fdda77b799b227b64e688bcf5b60848aa44bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ardinghelli, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:52:01 np0005473739 podman[260553]: 2025-10-07 13:52:01.436360206 +0000 UTC m=+0.021146388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:52:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-52591df00adfbd25d439230a4aaee57ba9eace1634a52386489bcf82be3baf72-merged.mount: Deactivated successfully.
Oct  7 09:52:01 np0005473739 podman[260553]: 2025-10-07 13:52:01.559557289 +0000 UTC m=+0.144343451 container remove 5b667ad4107cdb19fc4b2f3206fdda77b799b227b64e688bcf5b60848aa44bb1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_ardinghelli, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:52:01 np0005473739 systemd[1]: libpod-conmon-5b667ad4107cdb19fc4b2f3206fdda77b799b227b64e688bcf5b60848aa44bb1.scope: Deactivated successfully.
Oct  7 09:52:01 np0005473739 podman[260567]: 2025-10-07 13:52:01.656522229 +0000 UTC m=+0.162157229 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS)
Oct  7 09:52:01 np0005473739 podman[260619]: 2025-10-07 13:52:01.734435297 +0000 UTC m=+0.044581045 container create 37716cb693819f975eb28c27e707fd659e8b8a0442ad450ba2fd7186dc94a842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 09:52:01 np0005473739 systemd[1]: Started libpod-conmon-37716cb693819f975eb28c27e707fd659e8b8a0442ad450ba2fd7186dc94a842.scope.
Oct  7 09:52:01 np0005473739 podman[260619]: 2025-10-07 13:52:01.714727439 +0000 UTC m=+0.024873217 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:52:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:52:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9be15644e31d0d2baa1dc049689c5d704478fbde269f0ee19ff64a26c3ffbd2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:52:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9be15644e31d0d2baa1dc049689c5d704478fbde269f0ee19ff64a26c3ffbd2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:52:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9be15644e31d0d2baa1dc049689c5d704478fbde269f0ee19ff64a26c3ffbd2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:52:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9be15644e31d0d2baa1dc049689c5d704478fbde269f0ee19ff64a26c3ffbd2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:52:01 np0005473739 podman[260619]: 2025-10-07 13:52:01.833758299 +0000 UTC m=+0.143904137 container init 37716cb693819f975eb28c27e707fd659e8b8a0442ad450ba2fd7186dc94a842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:52:01 np0005473739 podman[260619]: 2025-10-07 13:52:01.841099727 +0000 UTC m=+0.151245475 container start 37716cb693819f975eb28c27e707fd659e8b8a0442ad450ba2fd7186dc94a842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 09:52:01 np0005473739 podman[260619]: 2025-10-07 13:52:01.845723481 +0000 UTC m=+0.155869259 container attach 37716cb693819f975eb28c27e707fd659e8b8a0442ad450ba2fd7186dc94a842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]: {
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:    "0": [
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:        {
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "devices": [
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "/dev/loop3"
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            ],
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_name": "ceph_lv0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_size": "21470642176",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "name": "ceph_lv0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "tags": {
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.cluster_name": "ceph",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.crush_device_class": "",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.encrypted": "0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.osd_id": "0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.type": "block",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.vdo": "0"
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            },
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "type": "block",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "vg_name": "ceph_vg0"
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:        }
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:    ],
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:    "1": [
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:        {
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "devices": [
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "/dev/loop4"
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            ],
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_name": "ceph_lv1",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_size": "21470642176",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "name": "ceph_lv1",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "tags": {
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.cluster_name": "ceph",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.crush_device_class": "",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.encrypted": "0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.osd_id": "1",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.type": "block",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.vdo": "0"
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            },
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "type": "block",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "vg_name": "ceph_vg1"
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:        }
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:    ],
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:    "2": [
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:        {
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "devices": [
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "/dev/loop5"
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            ],
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_name": "ceph_lv2",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_size": "21470642176",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "name": "ceph_lv2",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "tags": {
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.cluster_name": "ceph",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.crush_device_class": "",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.encrypted": "0",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.osd_id": "2",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.type": "block",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:                "ceph.vdo": "0"
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            },
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "type": "block",
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:            "vg_name": "ceph_vg2"
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:        }
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]:    ]
Oct  7 09:52:02 np0005473739 pedantic_villani[260637]: }
Oct  7 09:52:02 np0005473739 systemd[1]: libpod-37716cb693819f975eb28c27e707fd659e8b8a0442ad450ba2fd7186dc94a842.scope: Deactivated successfully.
Oct  7 09:52:02 np0005473739 podman[260619]: 2025-10-07 13:52:02.645982704 +0000 UTC m=+0.956128452 container died 37716cb693819f975eb28c27e707fd659e8b8a0442ad450ba2fd7186dc94a842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:52:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e9be15644e31d0d2baa1dc049689c5d704478fbde269f0ee19ff64a26c3ffbd2-merged.mount: Deactivated successfully.
Oct  7 09:52:02 np0005473739 podman[260619]: 2025-10-07 13:52:02.710633378 +0000 UTC m=+1.020779126 container remove 37716cb693819f975eb28c27e707fd659e8b8a0442ad450ba2fd7186dc94a842 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 09:52:02 np0005473739 systemd[1]: libpod-conmon-37716cb693819f975eb28c27e707fd659e8b8a0442ad450ba2fd7186dc94a842.scope: Deactivated successfully.
Oct  7 09:52:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v728: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:03 np0005473739 podman[260801]: 2025-10-07 13:52:03.322827679 +0000 UTC m=+0.039534790 container create 61e2b4bd4abb5ea95ffff71a86c3ac34eb0198dcba3666f1dd4344492b8881d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nash, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 09:52:03 np0005473739 systemd[1]: Started libpod-conmon-61e2b4bd4abb5ea95ffff71a86c3ac34eb0198dcba3666f1dd4344492b8881d3.scope.
Oct  7 09:52:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:52:03 np0005473739 podman[260801]: 2025-10-07 13:52:03.397250814 +0000 UTC m=+0.113957935 container init 61e2b4bd4abb5ea95ffff71a86c3ac34eb0198dcba3666f1dd4344492b8881d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nash, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:52:03 np0005473739 podman[260801]: 2025-10-07 13:52:03.304232401 +0000 UTC m=+0.020939522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:52:03 np0005473739 podman[260801]: 2025-10-07 13:52:03.405829454 +0000 UTC m=+0.122536555 container start 61e2b4bd4abb5ea95ffff71a86c3ac34eb0198dcba3666f1dd4344492b8881d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nash, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:52:03 np0005473739 podman[260801]: 2025-10-07 13:52:03.409660727 +0000 UTC m=+0.126367858 container attach 61e2b4bd4abb5ea95ffff71a86c3ac34eb0198dcba3666f1dd4344492b8881d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nash, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:52:03 np0005473739 beautiful_nash[260817]: 167 167
Oct  7 09:52:03 np0005473739 systemd[1]: libpod-61e2b4bd4abb5ea95ffff71a86c3ac34eb0198dcba3666f1dd4344492b8881d3.scope: Deactivated successfully.
Oct  7 09:52:03 np0005473739 podman[260801]: 2025-10-07 13:52:03.411576858 +0000 UTC m=+0.128283969 container died 61e2b4bd4abb5ea95ffff71a86c3ac34eb0198dcba3666f1dd4344492b8881d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nash, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:52:03 np0005473739 systemd[1]: var-lib-containers-storage-overlay-541c6442a1c585f7472440e42dfb4c73cf86eece23cedee6635e5c35789db290-merged.mount: Deactivated successfully.
Oct  7 09:52:03 np0005473739 podman[260801]: 2025-10-07 13:52:03.44372917 +0000 UTC m=+0.160436281 container remove 61e2b4bd4abb5ea95ffff71a86c3ac34eb0198dcba3666f1dd4344492b8881d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_nash, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:52:03 np0005473739 systemd[1]: libpod-conmon-61e2b4bd4abb5ea95ffff71a86c3ac34eb0198dcba3666f1dd4344492b8881d3.scope: Deactivated successfully.
Oct  7 09:52:03 np0005473739 podman[260842]: 2025-10-07 13:52:03.620918611 +0000 UTC m=+0.045177103 container create 3ddc6256fe34f35b9f69c2da5bdaef2737d72b5935a08afc23dc7ec0cf2f7d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 09:52:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:03 np0005473739 systemd[1]: Started libpod-conmon-3ddc6256fe34f35b9f69c2da5bdaef2737d72b5935a08afc23dc7ec0cf2f7d12.scope.
Oct  7 09:52:03 np0005473739 podman[260842]: 2025-10-07 13:52:03.603096453 +0000 UTC m=+0.027354965 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:52:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:52:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66865096c4700bbb8620d53ae95a4cc8ce4edb406e605d525d0c920610a01bd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:52:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66865096c4700bbb8620d53ae95a4cc8ce4edb406e605d525d0c920610a01bd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:52:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66865096c4700bbb8620d53ae95a4cc8ce4edb406e605d525d0c920610a01bd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:52:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f66865096c4700bbb8620d53ae95a4cc8ce4edb406e605d525d0c920610a01bd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:52:03 np0005473739 podman[260842]: 2025-10-07 13:52:03.733148329 +0000 UTC m=+0.157406851 container init 3ddc6256fe34f35b9f69c2da5bdaef2737d72b5935a08afc23dc7ec0cf2f7d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:52:03 np0005473739 podman[260842]: 2025-10-07 13:52:03.743517246 +0000 UTC m=+0.167775738 container start 3ddc6256fe34f35b9f69c2da5bdaef2737d72b5935a08afc23dc7ec0cf2f7d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:52:03 np0005473739 podman[260842]: 2025-10-07 13:52:03.747242487 +0000 UTC m=+0.171500979 container attach 3ddc6256fe34f35b9f69c2da5bdaef2737d72b5935a08afc23dc7ec0cf2f7d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:52:04 np0005473739 keen_wilson[260859]: {
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "osd_id": 2,
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "type": "bluestore"
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:    },
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "osd_id": 1,
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "type": "bluestore"
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:    },
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "osd_id": 0,
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:        "type": "bluestore"
Oct  7 09:52:04 np0005473739 keen_wilson[260859]:    }
Oct  7 09:52:04 np0005473739 keen_wilson[260859]: }
Oct  7 09:52:04 np0005473739 systemd[1]: libpod-3ddc6256fe34f35b9f69c2da5bdaef2737d72b5935a08afc23dc7ec0cf2f7d12.scope: Deactivated successfully.
Oct  7 09:52:04 np0005473739 systemd[1]: libpod-3ddc6256fe34f35b9f69c2da5bdaef2737d72b5935a08afc23dc7ec0cf2f7d12.scope: Consumed 1.046s CPU time.
Oct  7 09:52:04 np0005473739 podman[260842]: 2025-10-07 13:52:04.782009457 +0000 UTC m=+1.206267949 container died 3ddc6256fe34f35b9f69c2da5bdaef2737d72b5935a08afc23dc7ec0cf2f7d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilson, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:52:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f66865096c4700bbb8620d53ae95a4cc8ce4edb406e605d525d0c920610a01bd-merged.mount: Deactivated successfully.
Oct  7 09:52:04 np0005473739 podman[260842]: 2025-10-07 13:52:04.839254672 +0000 UTC m=+1.263513164 container remove 3ddc6256fe34f35b9f69c2da5bdaef2737d72b5935a08afc23dc7ec0cf2f7d12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_wilson, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:52:04 np0005473739 systemd[1]: libpod-conmon-3ddc6256fe34f35b9f69c2da5bdaef2737d72b5935a08afc23dc7ec0cf2f7d12.scope: Deactivated successfully.
Oct  7 09:52:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:52:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:52:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:52:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:52:04 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 210bd662-828c-4c05-8591-9e2c80f65005 does not exist
Oct  7 09:52:04 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c7abc328-1048-43f7-8641-f73e3b054858 does not exist
Oct  7 09:52:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v729: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:52:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:52:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v730: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:07 np0005473739 podman[260956]: 2025-10-07 13:52:07.094883281 +0000 UTC m=+0.070370357 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:52:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v731: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:09 np0005473739 nova_compute[259550]: 2025-10-07 13:52:09.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:52:09 np0005473739 nova_compute[259550]: 2025-10-07 13:52:09.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:52:09 np0005473739 nova_compute[259550]: 2025-10-07 13:52:09.985 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 09:52:09 np0005473739 nova_compute[259550]: 2025-10-07 13:52:09.985 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.009 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.009 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.010 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.010 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.010 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.011 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.011 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.011 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.011 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.041 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.041 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.042 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.042 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.042 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:52:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:52:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3208270340' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.459 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.417s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.614 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.616 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5128MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.616 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.616 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.706 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.707 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 09:52:10 np0005473739 nova_compute[259550]: 2025-10-07 13:52:10.925 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:52:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v732: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:52:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1104240852' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:52:11 np0005473739 nova_compute[259550]: 2025-10-07 13:52:11.361 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:52:11 np0005473739 nova_compute[259550]: 2025-10-07 13:52:11.369 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 09:52:11 np0005473739 nova_compute[259550]: 2025-10-07 13:52:11.385 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 09:52:11 np0005473739 nova_compute[259550]: 2025-10-07 13:52:11.388 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 09:52:11 np0005473739 nova_compute[259550]: 2025-10-07 13:52:11.389 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:52:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v733: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:52:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.0 total, 600.0 interval#012Cumulative writes: 5690 writes, 23K keys, 5690 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 5690 writes, 930 syncs, 6.12 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 212 writes, 318 keys, 212 commit groups, 1.0 writes per commit group, ingest: 0.11 MB, 0.00 MB/s#012Interval WAL: 212 writes, 106 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 3 last_copies: 8 last_secs: 4.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1200.0 total, 600.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  7 09:52:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v734: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct  7 09:52:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/85756590' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  7 09:52:15 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14351 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  7 09:52:15 np0005473739 ceph-mgr[74587]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  7 09:52:15 np0005473739 ceph-mgr[74587]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  7 09:52:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v735: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:52:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval#012Cumulative writes: 6813 writes, 28K keys, 6813 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s#012Cumulative WAL: 6813 writes, 1283 syncs, 5.31 writes per sync, written: 0.02 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 180 writes, 278 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s#012Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.6e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_s
Oct  7 09:52:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v736: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v737: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 09:52:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 1200.1 total, 600.0 interval
Cumulative writes: 5567 writes, 23K keys, 5567 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.02 MB/s
Cumulative WAL: 5567 writes, 857 syncs, 6.50 writes per sync, written: 0.02 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 180 writes, 270 keys, 180 commit groups, 1.0 writes per commit group, ingest: 0.09 MB, 0.00 MB/s
Interval WAL: 180 writes, 90 syncs, 2.00 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 3 last_copies: 8 last_secs: 4.8e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 1200.1 total, 600.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_sl
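Editor's note: multi-line dumps such as the RocksDB stats records above are emitted through syslog, which escapes each embedded newline as `#012` (octal 012 = LF) and may truncate very long records mid-word. A minimal sketch (function name is illustrative, not part of any Ceph tooling) for unescaping such records when post-processing raw journal output:

```python
def unescape_syslog(record: str) -> list[str]:
    """Expand syslog's #012 newline escapes and return the record's lines.

    Syslog replaces control characters with #NNN octal escapes; #012 is
    the linefeed, so splitting on it recovers the original line layout.
    """
    return record.replace("#012", "\n").split("\n")

# A fragment of a DB Stats record becomes two readable lines.
lines = unescape_syslog("** DB Stats **#012Uptime(secs): 1200.1 total, 600.0 interval")
```

Note that truncated records (those cut off mid-token by the syslog line-length limit) stay truncated; unescaping cannot recover the lost tail.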
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:52:22
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.log', '.mgr', 'vms', 'images']
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:52:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:52:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v738: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:24 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 09:52:24 np0005473739 podman[261019]: 2025-10-07 13:52:24.110892398 +0000 UTC m=+0.091452442 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:52:24 np0005473739 podman[261020]: 2025-10-07 13:52:24.127106532 +0000 UTC m=+0.108449568 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:52:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v739: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v740: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v741: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "version", "format": "json"} v 0) v1
Oct  7 09:52:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/224468923' entity='client.openstack' cmd=[{"prefix": "version", "format": "json"}]: dispatch
Oct  7 09:52:30 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.14353 -' entity='client.openstack' cmd=[{"prefix": "fs volume ls", "format": "json"}]: dispatch
Oct  7 09:52:30 np0005473739 ceph-mgr[74587]: [volumes INFO volumes.module] Starting _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  7 09:52:30 np0005473739 ceph-mgr[74587]: [volumes INFO volumes.module] Finishing _cmd_fs_volume_ls(format:json, prefix:fs volume ls) < ""
Oct  7 09:52:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v742: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:52:32 np0005473739 podman[261057]: 2025-10-07 13:52:32.106750683 +0000 UTC m=+0.094570377 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:52:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 09:52:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345440457' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 09:52:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 09:52:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/345440457' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 09:52:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v743: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v744: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v745: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:38 np0005473739 podman[261084]: 2025-10-07 13:52:38.072161215 +0000 UTC m=+0.054563564 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  7 09:52:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v746: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v747: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v748: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v749: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v750: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v751: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v752: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:52:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v753: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v754: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:55 np0005473739 podman[261104]: 2025-10-07 13:52:55.088951496 +0000 UTC m=+0.071400983 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:52:55 np0005473739 podman[261105]: 2025-10-07 13:52:55.089445629 +0000 UTC m=+0.070685073 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid)
Oct  7 09:52:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v755: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:52:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:52:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v756: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:53:00.026 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:53:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:53:00.026 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:53:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:53:00.027 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:53:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v757: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v758: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:03 np0005473739 podman[261145]: 2025-10-07 13:53:03.108226905 +0000 UTC m=+0.098437997 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 09:53:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v759: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:53:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:53:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:53:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #36. Immutable memtables: 0.
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.681260) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 15] Flushing memtable with next log file: 36
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845186681310, "job": 15, "event": "flush_started", "num_memtables": 1, "num_entries": 1486, "num_deletes": 251, "total_data_size": 2376007, "memory_usage": 2413336, "flush_reason": "Manual Compaction"}
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 15] Level-0 flush table #37: started
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845186720998, "cf_name": "default", "job": 15, "event": "table_file_creation", "file_number": 37, "file_size": 2332285, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 14844, "largest_seqno": 16329, "table_properties": {"data_size": 2325311, "index_size": 4045, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 14213, "raw_average_key_size": 19, "raw_value_size": 2311419, "raw_average_value_size": 3205, "num_data_blocks": 185, "num_entries": 721, "num_filter_entries": 721, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759845029, "oldest_key_time": 1759845029, "file_creation_time": 1759845186, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 37, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 15] Flush lasted 39796 microseconds, and 7501 cpu microseconds.
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.721054) [db/flush_job.cc:967] [default] [JOB 15] Level-0 flush table #37: 2332285 bytes OK
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.721079) [db/memtable_list.cc:519] [default] Level-0 commit table #37 started
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.729194) [db/memtable_list.cc:722] [default] Level-0 commit table #37: memtable #1 done
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.729228) EVENT_LOG_v1 {"time_micros": 1759845186729218, "job": 15, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.729250) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 15] Try to delete WAL files size 2369482, prev total WAL file size 2369943, number of live WAL files 2.
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000033.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.730430) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031303034' seq:72057594037927935, type:22 .. '7061786F730031323536' seq:0, type:0; will stop at (end)
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 16] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 15 Base level 0, inputs: [37(2277KB)], [35(7065KB)]
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845186730498, "job": 16, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [37], "files_L6": [35], "score": -1, "input_data_size": 9567490, "oldest_snapshot_seqno": -1}
Oct  7 09:53:06 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8c9e13b2-e044-4df7-9e8c-52c92fa6076a does not exist
Oct  7 09:53:06 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0a707a18-cbd3-425d-8c58-0d6b300bfac4 does not exist
Oct  7 09:53:06 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 29dbf40f-1b7b-461c-bc83-26101e762c04 does not exist
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 16] Generated table #38: 4003 keys, 7770825 bytes, temperature: kUnknown
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845186846852, "cf_name": "default", "job": 16, "event": "table_file_creation", "file_number": 38, "file_size": 7770825, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7741791, "index_size": 17916, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10053, "raw_key_size": 97809, "raw_average_key_size": 24, "raw_value_size": 7667091, "raw_average_value_size": 1915, "num_data_blocks": 759, "num_entries": 4003, "num_filter_entries": 4003, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759845186, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 38, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.847253) [db/compaction/compaction_job.cc:1663] [default] [JOB 16] Compacted 1@0 + 1@6 files to L6 => 7770825 bytes
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.861271) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 82.1 rd, 66.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.2, 6.9 +0.0 blob) out(7.4 +0.0 blob), read-write-amplify(7.4) write-amplify(3.3) OK, records in: 4517, records dropped: 514 output_compression: NoCompression
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.861320) EVENT_LOG_v1 {"time_micros": 1759845186861301, "job": 16, "event": "compaction_finished", "compaction_time_micros": 116519, "compaction_time_cpu_micros": 24812, "output_level": 6, "num_output_files": 1, "total_output_size": 7770825, "num_input_records": 4517, "num_output_records": 4003, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000037.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845186862035, "job": 16, "event": "table_file_deletion", "file_number": 37}
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000035.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845186863834, "job": 16, "event": "table_file_deletion", "file_number": 35}
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.730178) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.863908) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.863912) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.863914) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.863915) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:53:06 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:53:06.863916) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:53:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v760: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:07 np0005473739 podman[261564]: 2025-10-07 13:53:07.470035768 +0000 UTC m=+0.097253677 container create 08955442d35fb57d86b1002193206dd4d68d464b790e5677dd29190a47bba1d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 09:53:07 np0005473739 podman[261564]: 2025-10-07 13:53:07.399171449 +0000 UTC m=+0.026389388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:53:07 np0005473739 systemd[1]: Started libpod-conmon-08955442d35fb57d86b1002193206dd4d68d464b790e5677dd29190a47bba1d8.scope.
Oct  7 09:53:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:53:07 np0005473739 podman[261564]: 2025-10-07 13:53:07.623408385 +0000 UTC m=+0.250626314 container init 08955442d35fb57d86b1002193206dd4d68d464b790e5677dd29190a47bba1d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:53:07 np0005473739 podman[261564]: 2025-10-07 13:53:07.630959437 +0000 UTC m=+0.258177336 container start 08955442d35fb57d86b1002193206dd4d68d464b790e5677dd29190a47bba1d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 09:53:07 np0005473739 brave_keldysh[261581]: 167 167
Oct  7 09:53:07 np0005473739 systemd[1]: libpod-08955442d35fb57d86b1002193206dd4d68d464b790e5677dd29190a47bba1d8.scope: Deactivated successfully.
Oct  7 09:53:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 09:53:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:53:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:53:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:53:07 np0005473739 podman[261564]: 2025-10-07 13:53:07.675982664 +0000 UTC m=+0.303200573 container attach 08955442d35fb57d86b1002193206dd4d68d464b790e5677dd29190a47bba1d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 09:53:07 np0005473739 podman[261564]: 2025-10-07 13:53:07.676403225 +0000 UTC m=+0.303621134 container died 08955442d35fb57d86b1002193206dd4d68d464b790e5677dd29190a47bba1d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:53:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a974b7fd69b052fb1a8ee58a7bf21bf3bf8851a89fc1ed8b67b73943dea7ed68-merged.mount: Deactivated successfully.
Oct  7 09:53:07 np0005473739 podman[261564]: 2025-10-07 13:53:07.850002744 +0000 UTC m=+0.477220643 container remove 08955442d35fb57d86b1002193206dd4d68d464b790e5677dd29190a47bba1d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_keldysh, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:53:07 np0005473739 systemd[1]: libpod-conmon-08955442d35fb57d86b1002193206dd4d68d464b790e5677dd29190a47bba1d8.scope: Deactivated successfully.
Oct  7 09:53:08 np0005473739 podman[261606]: 2025-10-07 13:53:08.066290266 +0000 UTC m=+0.093513904 container create 5f8fc3b93dcb409f89850c645df7a933e909174fdc1c64a4e3a14fb846e7a99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_varahamihira, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 09:53:08 np0005473739 podman[261606]: 2025-10-07 13:53:08.000124635 +0000 UTC m=+0.027348293 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:53:08 np0005473739 systemd[1]: Started libpod-conmon-5f8fc3b93dcb409f89850c645df7a933e909174fdc1c64a4e3a14fb846e7a99f.scope.
Oct  7 09:53:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:53:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d967fc7559668a97113e2229d10e655b73c578bde99169acbac4ca59f439f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d967fc7559668a97113e2229d10e655b73c578bde99169acbac4ca59f439f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d967fc7559668a97113e2229d10e655b73c578bde99169acbac4ca59f439f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d967fc7559668a97113e2229d10e655b73c578bde99169acbac4ca59f439f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c5d967fc7559668a97113e2229d10e655b73c578bde99169acbac4ca59f439f8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:08 np0005473739 podman[261606]: 2025-10-07 13:53:08.221412701 +0000 UTC m=+0.248636359 container init 5f8fc3b93dcb409f89850c645df7a933e909174fdc1c64a4e3a14fb846e7a99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_varahamihira, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 09:53:08 np0005473739 podman[261606]: 2025-10-07 13:53:08.23108197 +0000 UTC m=+0.258305608 container start 5f8fc3b93dcb409f89850c645df7a933e909174fdc1c64a4e3a14fb846e7a99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_varahamihira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:53:08 np0005473739 podman[261606]: 2025-10-07 13:53:08.285341614 +0000 UTC m=+0.312565282 container attach 5f8fc3b93dcb409f89850c645df7a933e909174fdc1c64a4e3a14fb846e7a99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_varahamihira, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:53:08 np0005473739 podman[261624]: 2025-10-07 13:53:08.329735003 +0000 UTC m=+0.177422313 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  7 09:53:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v761: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:09 np0005473739 jolly_varahamihira[261622]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:53:09 np0005473739 jolly_varahamihira[261622]: --> relative data size: 1.0
Oct  7 09:53:09 np0005473739 jolly_varahamihira[261622]: --> All data devices are unavailable
Oct  7 09:53:09 np0005473739 systemd[1]: libpod-5f8fc3b93dcb409f89850c645df7a933e909174fdc1c64a4e3a14fb846e7a99f.scope: Deactivated successfully.
Oct  7 09:53:09 np0005473739 podman[261606]: 2025-10-07 13:53:09.457126458 +0000 UTC m=+1.484350106 container died 5f8fc3b93dcb409f89850c645df7a933e909174fdc1c64a4e3a14fb846e7a99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_varahamihira, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 09:53:09 np0005473739 systemd[1]: libpod-5f8fc3b93dcb409f89850c645df7a933e909174fdc1c64a4e3a14fb846e7a99f.scope: Consumed 1.167s CPU time.
Oct  7 09:53:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c5d967fc7559668a97113e2229d10e655b73c578bde99169acbac4ca59f439f8-merged.mount: Deactivated successfully.
Oct  7 09:53:09 np0005473739 podman[261606]: 2025-10-07 13:53:09.580375678 +0000 UTC m=+1.607599316 container remove 5f8fc3b93dcb409f89850c645df7a933e909174fdc1c64a4e3a14fb846e7a99f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 09:53:09 np0005473739 systemd[1]: libpod-conmon-5f8fc3b93dcb409f89850c645df7a933e909174fdc1c64a4e3a14fb846e7a99f.scope: Deactivated successfully.
Oct  7 09:53:10 np0005473739 podman[261823]: 2025-10-07 13:53:10.219318631 +0000 UTC m=+0.022830902 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:53:10 np0005473739 podman[261823]: 2025-10-07 13:53:10.451656684 +0000 UTC m=+0.255168925 container create e579e6c693c48930511efa300d2988d79bd283577f98f00c04267560292c051b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wiles, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 09:53:10 np0005473739 systemd[1]: Started libpod-conmon-e579e6c693c48930511efa300d2988d79bd283577f98f00c04267560292c051b.scope.
Oct  7 09:53:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:53:11 np0005473739 podman[261823]: 2025-10-07 13:53:11.025284807 +0000 UTC m=+0.828797088 container init e579e6c693c48930511efa300d2988d79bd283577f98f00c04267560292c051b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wiles, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 09:53:11 np0005473739 podman[261823]: 2025-10-07 13:53:11.034847343 +0000 UTC m=+0.838359604 container start e579e6c693c48930511efa300d2988d79bd283577f98f00c04267560292c051b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wiles, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:53:11 np0005473739 magical_wiles[261840]: 167 167
Oct  7 09:53:11 np0005473739 systemd[1]: libpod-e579e6c693c48930511efa300d2988d79bd283577f98f00c04267560292c051b.scope: Deactivated successfully.
Oct  7 09:53:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v762: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:11 np0005473739 podman[261823]: 2025-10-07 13:53:11.150610533 +0000 UTC m=+0.954122814 container attach e579e6c693c48930511efa300d2988d79bd283577f98f00c04267560292c051b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wiles, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:53:11 np0005473739 podman[261823]: 2025-10-07 13:53:11.151428755 +0000 UTC m=+0.954941056 container died e579e6c693c48930511efa300d2988d79bd283577f98f00c04267560292c051b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wiles, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.381 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.383 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.412 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.412 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.412 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.413 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.413 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.413 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.413 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 09:53:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e0db2a14e89d943a5da27d74834ebeed1470c6acd5b98ea9322751dbe4ad9b28-merged.mount: Deactivated successfully.
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.996 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 09:53:11 np0005473739 nova_compute[259550]: 2025-10-07 13:53:11.997 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.017 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.017 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.017 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.017 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.018 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:53:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:53:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3867599739' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.474 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:53:12 np0005473739 podman[261823]: 2025-10-07 13:53:12.571482259 +0000 UTC m=+2.374994500 container remove e579e6c693c48930511efa300d2988d79bd283577f98f00c04267560292c051b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_wiles, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 09:53:12 np0005473739 systemd[1]: libpod-conmon-e579e6c693c48930511efa300d2988d79bd283577f98f00c04267560292c051b.scope: Deactivated successfully.
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.662 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.663 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5124MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.663 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.664 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.738 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.738 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 09:53:12 np0005473739 podman[261886]: 2025-10-07 13:53:12.74515286 +0000 UTC m=+0.052392224 container create 804c2009ceb3fb6f1cd578b0196dc5cd64cb04e7164e618ed245e7eb9a0a304e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:53:12 np0005473739 nova_compute[259550]: 2025-10-07 13:53:12.757 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:53:12 np0005473739 systemd[1]: Started libpod-conmon-804c2009ceb3fb6f1cd578b0196dc5cd64cb04e7164e618ed245e7eb9a0a304e.scope.
Oct  7 09:53:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:53:12 np0005473739 podman[261886]: 2025-10-07 13:53:12.720464808 +0000 UTC m=+0.027704192 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:53:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c8cb41de1d4739c0b5c137c0a5650a796bf346e0fabb636b1f948583b63121/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c8cb41de1d4739c0b5c137c0a5650a796bf346e0fabb636b1f948583b63121/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c8cb41de1d4739c0b5c137c0a5650a796bf346e0fabb636b1f948583b63121/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31c8cb41de1d4739c0b5c137c0a5650a796bf346e0fabb636b1f948583b63121/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:12 np0005473739 podman[261886]: 2025-10-07 13:53:12.867298121 +0000 UTC m=+0.174537485 container init 804c2009ceb3fb6f1cd578b0196dc5cd64cb04e7164e618ed245e7eb9a0a304e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:53:12 np0005473739 podman[261886]: 2025-10-07 13:53:12.87470732 +0000 UTC m=+0.181946684 container start 804c2009ceb3fb6f1cd578b0196dc5cd64cb04e7164e618ed245e7eb9a0a304e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:53:12 np0005473739 podman[261886]: 2025-10-07 13:53:12.886479245 +0000 UTC m=+0.193718609 container attach 804c2009ceb3fb6f1cd578b0196dc5cd64cb04e7164e618ed245e7eb9a0a304e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:53:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v763: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:53:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3776144494' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:53:13 np0005473739 nova_compute[259550]: 2025-10-07 13:53:13.208 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:53:13 np0005473739 nova_compute[259550]: 2025-10-07 13:53:13.214 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 09:53:13 np0005473739 nova_compute[259550]: 2025-10-07 13:53:13.240 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 09:53:13 np0005473739 nova_compute[259550]: 2025-10-07 13:53:13.242 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 09:53:13 np0005473739 nova_compute[259550]: 2025-10-07 13:53:13.243 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.579s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]: {
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:    "0": [
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:        {
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "devices": [
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "/dev/loop3"
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            ],
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_name": "ceph_lv0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_size": "21470642176",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "name": "ceph_lv0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "tags": {
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.cluster_name": "ceph",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.crush_device_class": "",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.encrypted": "0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.osd_id": "0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.type": "block",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.vdo": "0"
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            },
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "type": "block",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "vg_name": "ceph_vg0"
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:        }
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:    ],
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:    "1": [
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:        {
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "devices": [
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "/dev/loop4"
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            ],
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_name": "ceph_lv1",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_size": "21470642176",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "name": "ceph_lv1",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "tags": {
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.cluster_name": "ceph",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.crush_device_class": "",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.encrypted": "0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.osd_id": "1",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.type": "block",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.vdo": "0"
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            },
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "type": "block",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "vg_name": "ceph_vg1"
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:        }
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:    ],
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:    "2": [
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:        {
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "devices": [
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "/dev/loop5"
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            ],
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_name": "ceph_lv2",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_size": "21470642176",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "name": "ceph_lv2",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "tags": {
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.cluster_name": "ceph",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.crush_device_class": "",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.encrypted": "0",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.osd_id": "2",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.type": "block",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:                "ceph.vdo": "0"
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            },
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "type": "block",
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:            "vg_name": "ceph_vg2"
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:        }
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]:    ]
Oct  7 09:53:13 np0005473739 bold_chandrasekhar[261903]: }
Oct  7 09:53:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:13 np0005473739 systemd[1]: libpod-804c2009ceb3fb6f1cd578b0196dc5cd64cb04e7164e618ed245e7eb9a0a304e.scope: Deactivated successfully.
Oct  7 09:53:13 np0005473739 podman[261886]: 2025-10-07 13:53:13.823028969 +0000 UTC m=+1.130268333 container died 804c2009ceb3fb6f1cd578b0196dc5cd64cb04e7164e618ed245e7eb9a0a304e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:53:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-31c8cb41de1d4739c0b5c137c0a5650a796bf346e0fabb636b1f948583b63121-merged.mount: Deactivated successfully.
Oct  7 09:53:13 np0005473739 podman[261886]: 2025-10-07 13:53:13.996233678 +0000 UTC m=+1.303473042 container remove 804c2009ceb3fb6f1cd578b0196dc5cd64cb04e7164e618ed245e7eb9a0a304e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_chandrasekhar, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 09:53:14 np0005473739 systemd[1]: libpod-conmon-804c2009ceb3fb6f1cd578b0196dc5cd64cb04e7164e618ed245e7eb9a0a304e.scope: Deactivated successfully.
Oct  7 09:53:14 np0005473739 podman[262086]: 2025-10-07 13:53:14.675358807 +0000 UTC m=+0.058134929 container create 508d9407fb7ca86505a7078cb5c9b3fbc04db29b4760b41e22af12bb6e942710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:53:14 np0005473739 systemd[1]: Started libpod-conmon-508d9407fb7ca86505a7078cb5c9b3fbc04db29b4760b41e22af12bb6e942710.scope.
Oct  7 09:53:14 np0005473739 podman[262086]: 2025-10-07 13:53:14.646179915 +0000 UTC m=+0.028956107 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:53:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:53:14 np0005473739 podman[262086]: 2025-10-07 13:53:14.77329998 +0000 UTC m=+0.156076112 container init 508d9407fb7ca86505a7078cb5c9b3fbc04db29b4760b41e22af12bb6e942710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:53:14 np0005473739 podman[262086]: 2025-10-07 13:53:14.783115203 +0000 UTC m=+0.165891315 container start 508d9407fb7ca86505a7078cb5c9b3fbc04db29b4760b41e22af12bb6e942710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:53:14 np0005473739 serene_edison[262103]: 167 167
Oct  7 09:53:14 np0005473739 systemd[1]: libpod-508d9407fb7ca86505a7078cb5c9b3fbc04db29b4760b41e22af12bb6e942710.scope: Deactivated successfully.
Oct  7 09:53:14 np0005473739 podman[262086]: 2025-10-07 13:53:14.790457469 +0000 UTC m=+0.173233581 container attach 508d9407fb7ca86505a7078cb5c9b3fbc04db29b4760b41e22af12bb6e942710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct  7 09:53:14 np0005473739 podman[262086]: 2025-10-07 13:53:14.79309737 +0000 UTC m=+0.175873492 container died 508d9407fb7ca86505a7078cb5c9b3fbc04db29b4760b41e22af12bb6e942710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 09:53:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5fa85d462226bddd16816387b4bc2d6cb5751a40be9bc9a4cc82679b933fb3c2-merged.mount: Deactivated successfully.
Oct  7 09:53:14 np0005473739 podman[262086]: 2025-10-07 13:53:14.851845083 +0000 UTC m=+0.234621195 container remove 508d9407fb7ca86505a7078cb5c9b3fbc04db29b4760b41e22af12bb6e942710 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_edison, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:53:14 np0005473739 systemd[1]: libpod-conmon-508d9407fb7ca86505a7078cb5c9b3fbc04db29b4760b41e22af12bb6e942710.scope: Deactivated successfully.
Oct  7 09:53:15 np0005473739 podman[262127]: 2025-10-07 13:53:15.034988998 +0000 UTC m=+0.054303695 container create f382c04aa13dfee68720e265e98d40114034a0abca912965643ce24db5cc52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_albattani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  7 09:53:15 np0005473739 systemd[1]: Started libpod-conmon-f382c04aa13dfee68720e265e98d40114034a0abca912965643ce24db5cc52ba.scope.
Oct  7 09:53:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v764: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:15 np0005473739 podman[262127]: 2025-10-07 13:53:15.009083634 +0000 UTC m=+0.028398351 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:53:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:53:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f846fa0a437aba78c1f4fa52e5b92cc36e49437131e9df47952f62eae6b602e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f846fa0a437aba78c1f4fa52e5b92cc36e49437131e9df47952f62eae6b602e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f846fa0a437aba78c1f4fa52e5b92cc36e49437131e9df47952f62eae6b602e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f846fa0a437aba78c1f4fa52e5b92cc36e49437131e9df47952f62eae6b602e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:53:15 np0005473739 podman[262127]: 2025-10-07 13:53:15.163182401 +0000 UTC m=+0.182497098 container init f382c04aa13dfee68720e265e98d40114034a0abca912965643ce24db5cc52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_albattani, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:53:15 np0005473739 podman[262127]: 2025-10-07 13:53:15.172008107 +0000 UTC m=+0.191322824 container start f382c04aa13dfee68720e265e98d40114034a0abca912965643ce24db5cc52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_albattani, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:53:15 np0005473739 podman[262127]: 2025-10-07 13:53:15.200585553 +0000 UTC m=+0.219900270 container attach f382c04aa13dfee68720e265e98d40114034a0abca912965643ce24db5cc52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]: {
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "osd_id": 2,
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "type": "bluestore"
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:    },
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "osd_id": 1,
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "type": "bluestore"
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:    },
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "osd_id": 0,
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:        "type": "bluestore"
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]:    }
Oct  7 09:53:16 np0005473739 affectionate_albattani[262145]: }
Oct  7 09:53:16 np0005473739 systemd[1]: libpod-f382c04aa13dfee68720e265e98d40114034a0abca912965643ce24db5cc52ba.scope: Deactivated successfully.
Oct  7 09:53:16 np0005473739 podman[262127]: 2025-10-07 13:53:16.284653848 +0000 UTC m=+1.303968555 container died f382c04aa13dfee68720e265e98d40114034a0abca912965643ce24db5cc52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_albattani, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:53:16 np0005473739 systemd[1]: libpod-f382c04aa13dfee68720e265e98d40114034a0abca912965643ce24db5cc52ba.scope: Consumed 1.117s CPU time.
Oct  7 09:53:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f846fa0a437aba78c1f4fa52e5b92cc36e49437131e9df47952f62eae6b602e2-merged.mount: Deactivated successfully.
Oct  7 09:53:17 np0005473739 podman[262127]: 2025-10-07 13:53:17.077717107 +0000 UTC m=+2.097031804 container remove f382c04aa13dfee68720e265e98d40114034a0abca912965643ce24db5cc52ba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_albattani, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:53:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v765: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:53:17 np0005473739 systemd[1]: libpod-conmon-f382c04aa13dfee68720e265e98d40114034a0abca912965643ce24db5cc52ba.scope: Deactivated successfully.
Oct  7 09:53:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:53:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:53:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:53:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b258a6b2-8325-4b99-8ebf-b76a9e15ad38 does not exist
Oct  7 09:53:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 66dbcd6a-bc04-4bd7-aac8-f2813a5a3712 does not exist
Oct  7 09:53:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:53:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:53:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v766: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v767: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:53:22
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'vms', '.rgw.root', 'volumes', 'default.rgw.control', 'backups', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.meta']
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:53:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:53:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v768: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v769: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:26 np0005473739 podman[262243]: 2025-10-07 13:53:26.09487403 +0000 UTC m=+0.081488894 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd)
Oct  7 09:53:26 np0005473739 podman[262244]: 2025-10-07 13:53:26.105359291 +0000 UTC m=+0.080939359 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 09:53:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v770: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v771: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 0 B/s wr, 30 op/s
Oct  7 09:53:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v772: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:53:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v773: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 31 KiB/s rd, 0 B/s wr, 52 op/s
Oct  7 09:53:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:34 np0005473739 podman[262282]: 2025-10-07 13:53:34.14641299 +0000 UTC m=+0.125379599 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller)
Oct  7 09:53:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v774: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  7 09:53:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v775: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  7 09:53:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:39 np0005473739 podman[262308]: 2025-10-07 13:53:39.074393364 +0000 UTC m=+0.061194150 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  7 09:53:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v776: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  7 09:53:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v777: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 0 B/s wr, 29 op/s
Oct  7 09:53:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v778: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 0 B/s wr, 7 op/s
Oct  7 09:53:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v779: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 0 B/s wr, 7 op/s
Oct  7 09:53:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v780: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct  7 09:53:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v781: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v782: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:53:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v783: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:53:54.563 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 09:53:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:53:54.566 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 09:53:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:53:54.568 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 09:53:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v784: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:57 np0005473739 podman[262328]: 2025-10-07 13:53:57.075107982 +0000 UTC m=+0.057776988 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  7 09:53:57 np0005473739 podman[262327]: 2025-10-07 13:53:57.081885924 +0000 UTC m=+0.067207961 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct  7 09:53:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v785: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:53:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:53:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v786: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:54:00.026 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:54:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:54:00.027 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:54:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:54:00.027 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:54:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v787: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v788: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:05 np0005473739 podman[262368]: 2025-10-07 13:54:05.110007297 +0000 UTC m=+0.092112478 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_managed=true)
Oct  7 09:54:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v789: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v790: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v791: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:10 np0005473739 podman[262394]: 2025-10-07 13:54:10.067950504 +0000 UTC m=+0.056387282 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:54:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v792: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:12 np0005473739 nova_compute[259550]: 2025-10-07 13:54:12.228 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:54:12 np0005473739 nova_compute[259550]: 2025-10-07 13:54:12.229 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:54:12 np0005473739 nova_compute[259550]: 2025-10-07 13:54:12.229 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:54:12 np0005473739 nova_compute[259550]: 2025-10-07 13:54:12.229 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:54:12 np0005473739 nova_compute[259550]: 2025-10-07 13:54:12.229 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:54:12 np0005473739 nova_compute[259550]: 2025-10-07 13:54:12.230 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:54:12 np0005473739 nova_compute[259550]: 2025-10-07 13:54:12.230 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:54:12 np0005473739 nova_compute[259550]: 2025-10-07 13:54:12.230 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 09:54:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v793: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:13 np0005473739 nova_compute[259550]: 2025-10-07 13:54:13.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:54:13 np0005473739 nova_compute[259550]: 2025-10-07 13:54:13.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 09:54:13 np0005473739 nova_compute[259550]: 2025-10-07 13:54:13.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.027 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.027 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.075 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.075 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.076 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.076 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.076 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:54:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:54:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864507618' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.549 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.719 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.721 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5196MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.721 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.721 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.836 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.836 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 09:54:14 np0005473739 nova_compute[259550]: 2025-10-07 13:54:14.859 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:54:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v794: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:54:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3952402735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:54:15 np0005473739 nova_compute[259550]: 2025-10-07 13:54:15.310 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:54:15 np0005473739 nova_compute[259550]: 2025-10-07 13:54:15.315 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 09:54:15 np0005473739 nova_compute[259550]: 2025-10-07 13:54:15.339 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 09:54:15 np0005473739 nova_compute[259550]: 2025-10-07 13:54:15.341 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 09:54:15 np0005473739 nova_compute[259550]: 2025-10-07 13:54:15.342 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:54:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v795: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:18 np0005473739 podman[262628]: 2025-10-07 13:54:18.350907405 +0000 UTC m=+0.064694754 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:54:18 np0005473739 podman[262628]: 2025-10-07 13:54:18.448530049 +0000 UTC m=+0.162317408 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 09:54:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:54:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v796: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:54:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:54:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:54:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 43587441-b473-4d61-95a8-a72518560612 does not exist
Oct  7 09:54:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 41408839-a3a3-4844-bb74-0a34623004ec does not exist
Oct  7 09:54:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 84b2972e-7b73-464e-87cf-1c5292d18273 does not exist
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:54:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:54:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v797: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:21 np0005473739 podman[263058]: 2025-10-07 13:54:21.174399067 +0000 UTC m=+0.050511304 container create f535684374bd2ec93242396d8cb41ede4307daab4d28e5c9e80732c8133ecd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 09:54:21 np0005473739 systemd[1]: Started libpod-conmon-f535684374bd2ec93242396d8cb41ede4307daab4d28e5c9e80732c8133ecd2a.scope.
Oct  7 09:54:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:54:21 np0005473739 podman[263058]: 2025-10-07 13:54:21.159411045 +0000 UTC m=+0.035523292 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:54:21 np0005473739 podman[263058]: 2025-10-07 13:54:21.25777695 +0000 UTC m=+0.133889207 container init f535684374bd2ec93242396d8cb41ede4307daab4d28e5c9e80732c8133ecd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:54:21 np0005473739 podman[263058]: 2025-10-07 13:54:21.266039932 +0000 UTC m=+0.142152169 container start f535684374bd2ec93242396d8cb41ede4307daab4d28e5c9e80732c8133ecd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:54:21 np0005473739 competent_banach[263074]: 167 167
Oct  7 09:54:21 np0005473739 systemd[1]: libpod-f535684374bd2ec93242396d8cb41ede4307daab4d28e5c9e80732c8133ecd2a.scope: Deactivated successfully.
Oct  7 09:54:21 np0005473739 conmon[263074]: conmon f535684374bd2ec93242 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f535684374bd2ec93242396d8cb41ede4307daab4d28e5c9e80732c8133ecd2a.scope/container/memory.events
Oct  7 09:54:21 np0005473739 podman[263058]: 2025-10-07 13:54:21.276386418 +0000 UTC m=+0.152498795 container attach f535684374bd2ec93242396d8cb41ede4307daab4d28e5c9e80732c8133ecd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:54:21 np0005473739 podman[263058]: 2025-10-07 13:54:21.277734154 +0000 UTC m=+0.153846391 container died f535684374bd2ec93242396d8cb41ede4307daab4d28e5c9e80732c8133ecd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 09:54:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ea2b4d3c55072aec8b8ee4ef2564e8e575bf555e39d36d3b20b1dd58ca583704-merged.mount: Deactivated successfully.
Oct  7 09:54:21 np0005473739 podman[263058]: 2025-10-07 13:54:21.321065625 +0000 UTC m=+0.197177862 container remove f535684374bd2ec93242396d8cb41ede4307daab4d28e5c9e80732c8133ecd2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:54:21 np0005473739 systemd[1]: libpod-conmon-f535684374bd2ec93242396d8cb41ede4307daab4d28e5c9e80732c8133ecd2a.scope: Deactivated successfully.
Oct  7 09:54:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:54:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:54:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:54:21 np0005473739 podman[263100]: 2025-10-07 13:54:21.531234384 +0000 UTC m=+0.042054868 container create 0ad3720f3bf86f9b9ef844b299b6faa02f42b1b7c4cc2564abb2115bf35c5ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pascal, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:54:21 np0005473739 systemd[1]: Started libpod-conmon-0ad3720f3bf86f9b9ef844b299b6faa02f42b1b7c4cc2564abb2115bf35c5ca9.scope.
Oct  7 09:54:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:54:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0582e1255312ee99b006d1e3baf113d503c424222e7402c554c4b234e9dfbef8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:21 np0005473739 podman[263100]: 2025-10-07 13:54:21.515096891 +0000 UTC m=+0.025917395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:54:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0582e1255312ee99b006d1e3baf113d503c424222e7402c554c4b234e9dfbef8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0582e1255312ee99b006d1e3baf113d503c424222e7402c554c4b234e9dfbef8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0582e1255312ee99b006d1e3baf113d503c424222e7402c554c4b234e9dfbef8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0582e1255312ee99b006d1e3baf113d503c424222e7402c554c4b234e9dfbef8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:21 np0005473739 podman[263100]: 2025-10-07 13:54:21.634602763 +0000 UTC m=+0.145423267 container init 0ad3720f3bf86f9b9ef844b299b6faa02f42b1b7c4cc2564abb2115bf35c5ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pascal, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 09:54:21 np0005473739 podman[263100]: 2025-10-07 13:54:21.643354717 +0000 UTC m=+0.154175201 container start 0ad3720f3bf86f9b9ef844b299b6faa02f42b1b7c4cc2564abb2115bf35c5ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pascal, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:54:21 np0005473739 podman[263100]: 2025-10-07 13:54:21.684472137 +0000 UTC m=+0.195292611 container attach 0ad3720f3bf86f9b9ef844b299b6faa02f42b1b7c4cc2564abb2115bf35c5ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pascal, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:54:22
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['images', '.rgw.root', 'backups', 'default.rgw.control', 'vms', 'volumes', 'default.rgw.log', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta']
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:54:22 np0005473739 dazzling_pascal[263116]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:54:22 np0005473739 dazzling_pascal[263116]: --> relative data size: 1.0
Oct  7 09:54:22 np0005473739 dazzling_pascal[263116]: --> All data devices are unavailable
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:54:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:54:22 np0005473739 systemd[1]: libpod-0ad3720f3bf86f9b9ef844b299b6faa02f42b1b7c4cc2564abb2115bf35c5ca9.scope: Deactivated successfully.
Oct  7 09:54:22 np0005473739 systemd[1]: libpod-0ad3720f3bf86f9b9ef844b299b6faa02f42b1b7c4cc2564abb2115bf35c5ca9.scope: Consumed 1.072s CPU time.
Oct  7 09:54:22 np0005473739 podman[263145]: 2025-10-07 13:54:22.836309657 +0000 UTC m=+0.037210398 container died 0ad3720f3bf86f9b9ef844b299b6faa02f42b1b7c4cc2564abb2115bf35c5ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pascal, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:54:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0582e1255312ee99b006d1e3baf113d503c424222e7402c554c4b234e9dfbef8-merged.mount: Deactivated successfully.
Oct  7 09:54:22 np0005473739 podman[263145]: 2025-10-07 13:54:22.895178994 +0000 UTC m=+0.096079705 container remove 0ad3720f3bf86f9b9ef844b299b6faa02f42b1b7c4cc2564abb2115bf35c5ca9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_pascal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:54:22 np0005473739 systemd[1]: libpod-conmon-0ad3720f3bf86f9b9ef844b299b6faa02f42b1b7c4cc2564abb2115bf35c5ca9.scope: Deactivated successfully.
Oct  7 09:54:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v798: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:23 np0005473739 podman[263300]: 2025-10-07 13:54:23.532869074 +0000 UTC m=+0.033210081 container create fc17715f889a0811b9c6979f3b06fd37b9abb50be8b13c4f095b927a68b826fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bardeen, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 09:54:23 np0005473739 systemd[1]: Started libpod-conmon-fc17715f889a0811b9c6979f3b06fd37b9abb50be8b13c4f095b927a68b826fa.scope.
Oct  7 09:54:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:54:23 np0005473739 podman[263300]: 2025-10-07 13:54:23.603133435 +0000 UTC m=+0.103474452 container init fc17715f889a0811b9c6979f3b06fd37b9abb50be8b13c4f095b927a68b826fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bardeen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:54:23 np0005473739 podman[263300]: 2025-10-07 13:54:23.518462277 +0000 UTC m=+0.018803304 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:54:23 np0005473739 podman[263300]: 2025-10-07 13:54:23.615196558 +0000 UTC m=+0.115537595 container start fc17715f889a0811b9c6979f3b06fd37b9abb50be8b13c4f095b927a68b826fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bardeen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 09:54:23 np0005473739 podman[263300]: 2025-10-07 13:54:23.619327139 +0000 UTC m=+0.119668176 container attach fc17715f889a0811b9c6979f3b06fd37b9abb50be8b13c4f095b927a68b826fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bardeen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 09:54:23 np0005473739 quirky_bardeen[263316]: 167 167
Oct  7 09:54:23 np0005473739 systemd[1]: libpod-fc17715f889a0811b9c6979f3b06fd37b9abb50be8b13c4f095b927a68b826fa.scope: Deactivated successfully.
Oct  7 09:54:23 np0005473739 podman[263321]: 2025-10-07 13:54:23.664674394 +0000 UTC m=+0.029593664 container died fc17715f889a0811b9c6979f3b06fd37b9abb50be8b13c4f095b927a68b826fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bardeen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 09:54:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cbae23815ab79cb0447fe9400d1d5415f1f24db0004198101428e70c7742f1f7-merged.mount: Deactivated successfully.
Oct  7 09:54:23 np0005473739 podman[263321]: 2025-10-07 13:54:23.704567082 +0000 UTC m=+0.069486362 container remove fc17715f889a0811b9c6979f3b06fd37b9abb50be8b13c4f095b927a68b826fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_bardeen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:54:23 np0005473739 systemd[1]: libpod-conmon-fc17715f889a0811b9c6979f3b06fd37b9abb50be8b13c4f095b927a68b826fa.scope: Deactivated successfully.
Oct  7 09:54:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:23 np0005473739 podman[263344]: 2025-10-07 13:54:23.924262166 +0000 UTC m=+0.046702862 container create b1773e8d006f2a50146504f9fb3975571440cad704fde9463859425188f8b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banach, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:54:23 np0005473739 systemd[1]: Started libpod-conmon-b1773e8d006f2a50146504f9fb3975571440cad704fde9463859425188f8b4ec.scope.
Oct  7 09:54:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:54:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbba3680bee5c7a416e04307fdd0aabd47112560a92fc7ba0eb6ecc06afaff6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbba3680bee5c7a416e04307fdd0aabd47112560a92fc7ba0eb6ecc06afaff6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbba3680bee5c7a416e04307fdd0aabd47112560a92fc7ba0eb6ecc06afaff6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4fbba3680bee5c7a416e04307fdd0aabd47112560a92fc7ba0eb6ecc06afaff6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:23 np0005473739 podman[263344]: 2025-10-07 13:54:23.903414377 +0000 UTC m=+0.025855123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:54:23 np0005473739 podman[263344]: 2025-10-07 13:54:23.998452613 +0000 UTC m=+0.120893319 container init b1773e8d006f2a50146504f9fb3975571440cad704fde9463859425188f8b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 09:54:24 np0005473739 podman[263344]: 2025-10-07 13:54:24.008224845 +0000 UTC m=+0.130665541 container start b1773e8d006f2a50146504f9fb3975571440cad704fde9463859425188f8b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:54:24 np0005473739 podman[263344]: 2025-10-07 13:54:24.011490802 +0000 UTC m=+0.133931518 container attach b1773e8d006f2a50146504f9fb3975571440cad704fde9463859425188f8b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:54:24 np0005473739 elastic_banach[263361]: {
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:    "0": [
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:        {
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "devices": [
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "/dev/loop3"
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            ],
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_name": "ceph_lv0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_size": "21470642176",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "name": "ceph_lv0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "tags": {
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.cluster_name": "ceph",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.crush_device_class": "",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.encrypted": "0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.osd_id": "0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.type": "block",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.vdo": "0"
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            },
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "type": "block",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "vg_name": "ceph_vg0"
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:        }
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:    ],
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:    "1": [
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:        {
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "devices": [
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "/dev/loop4"
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            ],
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_name": "ceph_lv1",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_size": "21470642176",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "name": "ceph_lv1",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "tags": {
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.cluster_name": "ceph",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.crush_device_class": "",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.encrypted": "0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.osd_id": "1",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.type": "block",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.vdo": "0"
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            },
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "type": "block",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "vg_name": "ceph_vg1"
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:        }
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:    ],
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:    "2": [
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:        {
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "devices": [
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "/dev/loop5"
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            ],
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_name": "ceph_lv2",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_size": "21470642176",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "name": "ceph_lv2",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "tags": {
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.cluster_name": "ceph",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.crush_device_class": "",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.encrypted": "0",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.osd_id": "2",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.type": "block",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:                "ceph.vdo": "0"
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            },
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "type": "block",
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:            "vg_name": "ceph_vg2"
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:        }
Oct  7 09:54:24 np0005473739 elastic_banach[263361]:    ]
Oct  7 09:54:24 np0005473739 elastic_banach[263361]: }
Oct  7 09:54:24 np0005473739 systemd[1]: libpod-b1773e8d006f2a50146504f9fb3975571440cad704fde9463859425188f8b4ec.scope: Deactivated successfully.
Oct  7 09:54:24 np0005473739 podman[263344]: 2025-10-07 13:54:24.804586954 +0000 UTC m=+0.927027700 container died b1773e8d006f2a50146504f9fb3975571440cad704fde9463859425188f8b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banach, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:54:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4fbba3680bee5c7a416e04307fdd0aabd47112560a92fc7ba0eb6ecc06afaff6-merged.mount: Deactivated successfully.
Oct  7 09:54:24 np0005473739 podman[263344]: 2025-10-07 13:54:24.870197981 +0000 UTC m=+0.992638677 container remove b1773e8d006f2a50146504f9fb3975571440cad704fde9463859425188f8b4ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_banach, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:54:24 np0005473739 systemd[1]: libpod-conmon-b1773e8d006f2a50146504f9fb3975571440cad704fde9463859425188f8b4ec.scope: Deactivated successfully.
Oct  7 09:54:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v799: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:25 np0005473739 podman[263524]: 2025-10-07 13:54:25.475229795 +0000 UTC m=+0.041411360 container create 2f7e3f596e6214920cb0471a477930bf5d12e74f2a64dd4825c12b63db8c5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swartz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 09:54:25 np0005473739 systemd[1]: Started libpod-conmon-2f7e3f596e6214920cb0471a477930bf5d12e74f2a64dd4825c12b63db8c5b4d.scope.
Oct  7 09:54:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:54:25 np0005473739 podman[263524]: 2025-10-07 13:54:25.457301855 +0000 UTC m=+0.023483440 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:54:25 np0005473739 podman[263524]: 2025-10-07 13:54:25.558617298 +0000 UTC m=+0.124798913 container init 2f7e3f596e6214920cb0471a477930bf5d12e74f2a64dd4825c12b63db8c5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:54:25 np0005473739 podman[263524]: 2025-10-07 13:54:25.566986072 +0000 UTC m=+0.133167657 container start 2f7e3f596e6214920cb0471a477930bf5d12e74f2a64dd4825c12b63db8c5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swartz, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 09:54:25 np0005473739 wonderful_swartz[263540]: 167 167
Oct  7 09:54:25 np0005473739 systemd[1]: libpod-2f7e3f596e6214920cb0471a477930bf5d12e74f2a64dd4825c12b63db8c5b4d.scope: Deactivated successfully.
Oct  7 09:54:25 np0005473739 podman[263524]: 2025-10-07 13:54:25.580813753 +0000 UTC m=+0.146995318 container attach 2f7e3f596e6214920cb0471a477930bf5d12e74f2a64dd4825c12b63db8c5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swartz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 09:54:25 np0005473739 podman[263524]: 2025-10-07 13:54:25.581406249 +0000 UTC m=+0.147587814 container died 2f7e3f596e6214920cb0471a477930bf5d12e74f2a64dd4825c12b63db8c5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swartz, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 09:54:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-32a0cfa158da6ed0f4d4aee8ff4795905775f4e0e2e0f6c9fdc7be6e17101260-merged.mount: Deactivated successfully.
Oct  7 09:54:25 np0005473739 podman[263524]: 2025-10-07 13:54:25.631663175 +0000 UTC m=+0.197844760 container remove 2f7e3f596e6214920cb0471a477930bf5d12e74f2a64dd4825c12b63db8c5b4d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_swartz, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:54:25 np0005473739 systemd[1]: libpod-conmon-2f7e3f596e6214920cb0471a477930bf5d12e74f2a64dd4825c12b63db8c5b4d.scope: Deactivated successfully.
Oct  7 09:54:25 np0005473739 podman[263564]: 2025-10-07 13:54:25.825297691 +0000 UTC m=+0.047324028 container create a8f7837bee42ba9e6c96cdf52e4bcb2f1c3b8987ae750df112283c782c7396c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 09:54:25 np0005473739 systemd[1]: Started libpod-conmon-a8f7837bee42ba9e6c96cdf52e4bcb2f1c3b8987ae750df112283c782c7396c5.scope.
Oct  7 09:54:25 np0005473739 podman[263564]: 2025-10-07 13:54:25.804993027 +0000 UTC m=+0.027019394 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:54:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:54:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee99745050ca8dc44c2f028765132c7170dd6534d84433054d7ecce04e2d227/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee99745050ca8dc44c2f028765132c7170dd6534d84433054d7ecce04e2d227/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee99745050ca8dc44c2f028765132c7170dd6534d84433054d7ecce04e2d227/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bee99745050ca8dc44c2f028765132c7170dd6534d84433054d7ecce04e2d227/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:54:25 np0005473739 podman[263564]: 2025-10-07 13:54:25.920365678 +0000 UTC m=+0.142392045 container init a8f7837bee42ba9e6c96cdf52e4bcb2f1c3b8987ae750df112283c782c7396c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mayer, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:54:25 np0005473739 podman[263564]: 2025-10-07 13:54:25.927043096 +0000 UTC m=+0.149069433 container start a8f7837bee42ba9e6c96cdf52e4bcb2f1c3b8987ae750df112283c782c7396c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 09:54:25 np0005473739 podman[263564]: 2025-10-07 13:54:25.931072244 +0000 UTC m=+0.153098601 container attach a8f7837bee42ba9e6c96cdf52e4bcb2f1c3b8987ae750df112283c782c7396c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]: {
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "osd_id": 2,
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "type": "bluestore"
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:    },
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "osd_id": 1,
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "type": "bluestore"
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:    },
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "osd_id": 0,
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:        "type": "bluestore"
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]:    }
Oct  7 09:54:26 np0005473739 youthful_mayer[263580]: }
Oct  7 09:54:26 np0005473739 systemd[1]: libpod-a8f7837bee42ba9e6c96cdf52e4bcb2f1c3b8987ae750df112283c782c7396c5.scope: Deactivated successfully.
Oct  7 09:54:26 np0005473739 systemd[1]: libpod-a8f7837bee42ba9e6c96cdf52e4bcb2f1c3b8987ae750df112283c782c7396c5.scope: Consumed 1.080s CPU time.
Oct  7 09:54:26 np0005473739 podman[263564]: 2025-10-07 13:54:26.998440662 +0000 UTC m=+1.220466999 container died a8f7837bee42ba9e6c96cdf52e4bcb2f1c3b8987ae750df112283c782c7396c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mayer, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:54:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-bee99745050ca8dc44c2f028765132c7170dd6534d84433054d7ecce04e2d227-merged.mount: Deactivated successfully.
Oct  7 09:54:27 np0005473739 podman[263564]: 2025-10-07 13:54:27.059728062 +0000 UTC m=+1.281754429 container remove a8f7837bee42ba9e6c96cdf52e4bcb2f1c3b8987ae750df112283c782c7396c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_mayer, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:54:27 np0005473739 systemd[1]: libpod-conmon-a8f7837bee42ba9e6c96cdf52e4bcb2f1c3b8987ae750df112283c782c7396c5.scope: Deactivated successfully.
Oct  7 09:54:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:54:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:54:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:54:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v800: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:54:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f612ccf3-c10c-4f46-8570-344567c8dec1 does not exist
Oct  7 09:54:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ab695a06-8cec-4960-a2fe-83529d79f3c2 does not exist
Oct  7 09:54:27 np0005473739 podman[263651]: 2025-10-07 13:54:27.334372759 +0000 UTC m=+0.084031452 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 09:54:27 np0005473739 podman[263652]: 2025-10-07 13:54:27.37141666 +0000 UTC m=+0.113854360 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 09:54:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:54:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:54:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v801: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v802: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:54:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 09:54:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1461332024' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 09:54:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 09:54:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1461332024' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 09:54:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v803: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:34 np0005473739 systemd[1]: packagekit.service: Deactivated successfully.
Oct  7 09:54:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v804: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:36 np0005473739 podman[263717]: 2025-10-07 13:54:36.106223193 +0000 UTC m=+0.097065061 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, config_id=ovn_controller)
Oct  7 09:54:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v805: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v806: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:41 np0005473739 podman[263744]: 2025-10-07 13:54:41.093074524 +0000 UTC m=+0.072396629 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  7 09:54:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v807: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v808: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v809: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v810: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v811: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v812: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:54:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v813: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v814: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v815: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:54:58 np0005473739 podman[263764]: 2025-10-07 13:54:58.074015012 +0000 UTC m=+0.058669056 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd)
Oct  7 09:54:58 np0005473739 podman[263765]: 2025-10-07 13:54:58.080038417 +0000 UTC m=+0.059824848 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 09:54:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:54:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v816: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:55:00.027 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:55:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:55:00.028 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:55:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:55:00.028 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:55:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v817: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v818: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v819: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:07 np0005473739 podman[263801]: 2025-10-07 13:55:07.103951945 +0000 UTC m=+0.085427745 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Oct  7 09:55:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v820: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v821: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v822: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:12 np0005473739 podman[263828]: 2025-10-07 13:55:12.101388391 +0000 UTC m=+0.078057494 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct  7 09:55:12 np0005473739 nova_compute[259550]: 2025-10-07 13:55:12.297 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:55:12 np0005473739 nova_compute[259550]: 2025-10-07 13:55:12.315 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:55:12 np0005473739 nova_compute[259550]: 2025-10-07 13:55:12.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:55:12 np0005473739 nova_compute[259550]: 2025-10-07 13:55:12.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:55:12 np0005473739 nova_compute[259550]: 2025-10-07 13:55:12.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:55:12 np0005473739 nova_compute[259550]: 2025-10-07 13:55:12.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:55:12 np0005473739 nova_compute[259550]: 2025-10-07 13:55:12.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:55:12 np0005473739 nova_compute[259550]: 2025-10-07 13:55:12.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 09:55:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v823: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:13 np0005473739 nova_compute[259550]: 2025-10-07 13:55:13.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:55:14 np0005473739 nova_compute[259550]: 2025-10-07 13:55:14.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.035 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.036 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.036 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.036 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.036 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:55:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v824: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:55:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/548261045' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.492 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.683 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.685 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5174MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.686 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.686 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.764 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 09:55:15 np0005473739 nova_compute[259550]: 2025-10-07 13:55:15.792 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:55:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:55:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2055267087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:55:16 np0005473739 nova_compute[259550]: 2025-10-07 13:55:16.246 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:55:16 np0005473739 nova_compute[259550]: 2025-10-07 13:55:16.253 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 09:55:16 np0005473739 nova_compute[259550]: 2025-10-07 13:55:16.275 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 09:55:16 np0005473739 nova_compute[259550]: 2025-10-07 13:55:16.278 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 09:55:16 np0005473739 nova_compute[259550]: 2025-10-07 13:55:16.278 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:55:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v825: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:17 np0005473739 nova_compute[259550]: 2025-10-07 13:55:17.279 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:55:17 np0005473739 nova_compute[259550]: 2025-10-07 13:55:17.280 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 09:55:17 np0005473739 nova_compute[259550]: 2025-10-07 13:55:17.280 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 09:55:17 np0005473739 nova_compute[259550]: 2025-10-07 13:55:17.299 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 09:55:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v826: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v827: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:55:22
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'vms', 'cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'volumes', '.rgw.root', 'default.rgw.meta', 'images', '.mgr', 'default.rgw.log']
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:55:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:55:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v828: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v829: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v830: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:55:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev da412500-5669-4e0c-9032-09f347f06c9f does not exist
Oct  7 09:55:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev eb50805c-c95c-4194-8193-76b392ca115a does not exist
Oct  7 09:55:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e8d355f-243c-44e4-8167-3ee6e36848b1 does not exist
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:55:28 np0005473739 podman[264048]: 2025-10-07 13:55:28.445766063 +0000 UTC m=+0.079868663 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:55:28 np0005473739 podman[264049]: 2025-10-07 13:55:28.467804752 +0000 UTC m=+0.097180954 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 09:55:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:28 np0005473739 podman[264206]: 2025-10-07 13:55:28.902274797 +0000 UTC m=+0.042570528 container create d2520438c3c7a194f49ee7b4122a3729353c3eea0b6ba4174ac9b6ec1967cf7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:55:28 np0005473739 systemd[1]: Started libpod-conmon-d2520438c3c7a194f49ee7b4122a3729353c3eea0b6ba4174ac9b6ec1967cf7e.scope.
Oct  7 09:55:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:55:28 np0005473739 podman[264206]: 2025-10-07 13:55:28.88396876 +0000 UTC m=+0.024264491 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:55:28 np0005473739 podman[264206]: 2025-10-07 13:55:28.987097675 +0000 UTC m=+0.127393406 container init d2520438c3c7a194f49ee7b4122a3729353c3eea0b6ba4174ac9b6ec1967cf7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:55:28 np0005473739 podman[264206]: 2025-10-07 13:55:28.996657795 +0000 UTC m=+0.136953506 container start d2520438c3c7a194f49ee7b4122a3729353c3eea0b6ba4174ac9b6ec1967cf7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 09:55:29 np0005473739 podman[264206]: 2025-10-07 13:55:29.001124106 +0000 UTC m=+0.141419907 container attach d2520438c3c7a194f49ee7b4122a3729353c3eea0b6ba4174ac9b6ec1967cf7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:55:29 np0005473739 blissful_fermi[264223]: 167 167
Oct  7 09:55:29 np0005473739 systemd[1]: libpod-d2520438c3c7a194f49ee7b4122a3729353c3eea0b6ba4174ac9b6ec1967cf7e.scope: Deactivated successfully.
Oct  7 09:55:29 np0005473739 podman[264206]: 2025-10-07 13:55:29.004359614 +0000 UTC m=+0.144655325 container died d2520438c3c7a194f49ee7b4122a3729353c3eea0b6ba4174ac9b6ec1967cf7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 09:55:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e945149a94ecb63ecedd0db543cf6660487863d52468cd7882eca0dc00bba492-merged.mount: Deactivated successfully.
Oct  7 09:55:29 np0005473739 podman[264206]: 2025-10-07 13:55:29.054040395 +0000 UTC m=+0.194336106 container remove d2520438c3c7a194f49ee7b4122a3729353c3eea0b6ba4174ac9b6ec1967cf7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:55:29 np0005473739 systemd[1]: libpod-conmon-d2520438c3c7a194f49ee7b4122a3729353c3eea0b6ba4174ac9b6ec1967cf7e.scope: Deactivated successfully.
Oct  7 09:55:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v831: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:55:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:55:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:55:29 np0005473739 podman[264247]: 2025-10-07 13:55:29.237886505 +0000 UTC m=+0.050507145 container create 028377e35b5ccd7c189b57b6f6cb85844daecb1716547c6e5fc470ae3b4ce973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_keldysh, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 09:55:29 np0005473739 systemd[1]: Started libpod-conmon-028377e35b5ccd7c189b57b6f6cb85844daecb1716547c6e5fc470ae3b4ce973.scope.
Oct  7 09:55:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:55:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27961a36c0dc2087c545471e89c26b87034fc07fbe5b54b161bb305f59059c72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27961a36c0dc2087c545471e89c26b87034fc07fbe5b54b161bb305f59059c72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27961a36c0dc2087c545471e89c26b87034fc07fbe5b54b161bb305f59059c72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27961a36c0dc2087c545471e89c26b87034fc07fbe5b54b161bb305f59059c72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/27961a36c0dc2087c545471e89c26b87034fc07fbe5b54b161bb305f59059c72/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:29 np0005473739 podman[264247]: 2025-10-07 13:55:29.219847374 +0000 UTC m=+0.032468044 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:55:29 np0005473739 podman[264247]: 2025-10-07 13:55:29.322896537 +0000 UTC m=+0.135517337 container init 028377e35b5ccd7c189b57b6f6cb85844daecb1716547c6e5fc470ae3b4ce973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_keldysh, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:55:29 np0005473739 podman[264247]: 2025-10-07 13:55:29.333740881 +0000 UTC m=+0.146361511 container start 028377e35b5ccd7c189b57b6f6cb85844daecb1716547c6e5fc470ae3b4ce973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_keldysh, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 09:55:29 np0005473739 podman[264247]: 2025-10-07 13:55:29.337037951 +0000 UTC m=+0.149658591 container attach 028377e35b5ccd7c189b57b6f6cb85844daecb1716547c6e5fc470ae3b4ce973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_keldysh, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 09:55:30 np0005473739 wonderful_keldysh[264264]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:55:30 np0005473739 wonderful_keldysh[264264]: --> relative data size: 1.0
Oct  7 09:55:30 np0005473739 wonderful_keldysh[264264]: --> All data devices are unavailable
Oct  7 09:55:30 np0005473739 systemd[1]: libpod-028377e35b5ccd7c189b57b6f6cb85844daecb1716547c6e5fc470ae3b4ce973.scope: Deactivated successfully.
Oct  7 09:55:30 np0005473739 podman[264247]: 2025-10-07 13:55:30.404529002 +0000 UTC m=+1.217149642 container died 028377e35b5ccd7c189b57b6f6cb85844daecb1716547c6e5fc470ae3b4ce973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 09:55:30 np0005473739 systemd[1]: libpod-028377e35b5ccd7c189b57b6f6cb85844daecb1716547c6e5fc470ae3b4ce973.scope: Consumed 1.023s CPU time.
Oct  7 09:55:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-27961a36c0dc2087c545471e89c26b87034fc07fbe5b54b161bb305f59059c72-merged.mount: Deactivated successfully.
Oct  7 09:55:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v832: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:31 np0005473739 podman[264247]: 2025-10-07 13:55:31.308147456 +0000 UTC m=+2.120768106 container remove 028377e35b5ccd7c189b57b6f6cb85844daecb1716547c6e5fc470ae3b4ce973 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:55:31 np0005473739 systemd[1]: libpod-conmon-028377e35b5ccd7c189b57b6f6cb85844daecb1716547c6e5fc470ae3b4ce973.scope: Deactivated successfully.
Oct  7 09:55:32 np0005473739 podman[264445]: 2025-10-07 13:55:31.930106971 +0000 UTC m=+0.026714348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:55:32 np0005473739 podman[264445]: 2025-10-07 13:55:32.033270165 +0000 UTC m=+0.129877562 container create bbeaec8d9c36f4d251ba338f7776d4a8d3a88dd77a376321748bfd86adef5430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lumiere, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:55:32 np0005473739 systemd[1]: Started libpod-conmon-bbeaec8d9c36f4d251ba338f7776d4a8d3a88dd77a376321748bfd86adef5430.scope.
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:55:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:55:32 np0005473739 podman[264445]: 2025-10-07 13:55:32.15880059 +0000 UTC m=+0.255407967 container init bbeaec8d9c36f4d251ba338f7776d4a8d3a88dd77a376321748bfd86adef5430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lumiere, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:55:32 np0005473739 podman[264445]: 2025-10-07 13:55:32.167018603 +0000 UTC m=+0.263625960 container start bbeaec8d9c36f4d251ba338f7776d4a8d3a88dd77a376321748bfd86adef5430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 09:55:32 np0005473739 determined_lumiere[264461]: 167 167
Oct  7 09:55:32 np0005473739 systemd[1]: libpod-bbeaec8d9c36f4d251ba338f7776d4a8d3a88dd77a376321748bfd86adef5430.scope: Deactivated successfully.
Oct  7 09:55:32 np0005473739 podman[264445]: 2025-10-07 13:55:32.193203826 +0000 UTC m=+0.289811203 container attach bbeaec8d9c36f4d251ba338f7776d4a8d3a88dd77a376321748bfd86adef5430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lumiere, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct  7 09:55:32 np0005473739 podman[264445]: 2025-10-07 13:55:32.195519248 +0000 UTC m=+0.292126605 container died bbeaec8d9c36f4d251ba338f7776d4a8d3a88dd77a376321748bfd86adef5430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lumiere, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct  7 09:55:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e45f5bf68d08b18449e225fa31e5de12008adfc4f5f39ea55e51644e9bd6d57c-merged.mount: Deactivated successfully.
Oct  7 09:55:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 09:55:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1246107535' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 09:55:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 09:55:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1246107535' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 09:55:32 np0005473739 podman[264445]: 2025-10-07 13:55:32.719295312 +0000 UTC m=+0.815902669 container remove bbeaec8d9c36f4d251ba338f7776d4a8d3a88dd77a376321748bfd86adef5430 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_lumiere, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 09:55:32 np0005473739 systemd[1]: libpod-conmon-bbeaec8d9c36f4d251ba338f7776d4a8d3a88dd77a376321748bfd86adef5430.scope: Deactivated successfully.
Oct  7 09:55:32 np0005473739 podman[264483]: 2025-10-07 13:55:32.902750081 +0000 UTC m=+0.024977350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:55:33 np0005473739 podman[264483]: 2025-10-07 13:55:33.009426393 +0000 UTC m=+0.131653642 container create 0fcb228382e79954edc2ae45440268934fd0f7465e7ab834abbb758196e43a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mclaren, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:55:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v833: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:33 np0005473739 systemd[1]: Started libpod-conmon-0fcb228382e79954edc2ae45440268934fd0f7465e7ab834abbb758196e43a91.scope.
Oct  7 09:55:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:55:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb39e3feb5d004e213c24a4d6cf3541b181091363cb43371e67980a7c248409a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb39e3feb5d004e213c24a4d6cf3541b181091363cb43371e67980a7c248409a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb39e3feb5d004e213c24a4d6cf3541b181091363cb43371e67980a7c248409a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb39e3feb5d004e213c24a4d6cf3541b181091363cb43371e67980a7c248409a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:33 np0005473739 podman[264483]: 2025-10-07 13:55:33.330394521 +0000 UTC m=+0.452621790 container init 0fcb228382e79954edc2ae45440268934fd0f7465e7ab834abbb758196e43a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mclaren, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 09:55:33 np0005473739 podman[264483]: 2025-10-07 13:55:33.337859654 +0000 UTC m=+0.460086903 container start 0fcb228382e79954edc2ae45440268934fd0f7465e7ab834abbb758196e43a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mclaren, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:55:33 np0005473739 podman[264483]: 2025-10-07 13:55:33.372449995 +0000 UTC m=+0.494677254 container attach 0fcb228382e79954edc2ae45440268934fd0f7465e7ab834abbb758196e43a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 09:55:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]: {
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:    "0": [
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:        {
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "devices": [
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "/dev/loop3"
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            ],
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_name": "ceph_lv0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_size": "21470642176",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "name": "ceph_lv0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "tags": {
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.cluster_name": "ceph",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.crush_device_class": "",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.encrypted": "0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.osd_id": "0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.type": "block",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.vdo": "0"
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            },
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "type": "block",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "vg_name": "ceph_vg0"
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:        }
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:    ],
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:    "1": [
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:        {
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "devices": [
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "/dev/loop4"
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            ],
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_name": "ceph_lv1",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_size": "21470642176",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "name": "ceph_lv1",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "tags": {
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.cluster_name": "ceph",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.crush_device_class": "",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.encrypted": "0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.osd_id": "1",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.type": "block",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.vdo": "0"
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            },
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "type": "block",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "vg_name": "ceph_vg1"
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:        }
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:    ],
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:    "2": [
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:        {
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "devices": [
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "/dev/loop5"
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            ],
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_name": "ceph_lv2",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_size": "21470642176",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "name": "ceph_lv2",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "tags": {
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.cluster_name": "ceph",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.crush_device_class": "",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.encrypted": "0",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.osd_id": "2",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.type": "block",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:                "ceph.vdo": "0"
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            },
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "type": "block",
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:            "vg_name": "ceph_vg2"
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:        }
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]:    ]
Oct  7 09:55:34 np0005473739 blissful_mclaren[264500]: }
Oct  7 09:55:34 np0005473739 systemd[1]: libpod-0fcb228382e79954edc2ae45440268934fd0f7465e7ab834abbb758196e43a91.scope: Deactivated successfully.
Oct  7 09:55:34 np0005473739 podman[264483]: 2025-10-07 13:55:34.160509457 +0000 UTC m=+1.282736746 container died 0fcb228382e79954edc2ae45440268934fd0f7465e7ab834abbb758196e43a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 09:55:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-bb39e3feb5d004e213c24a4d6cf3541b181091363cb43371e67980a7c248409a-merged.mount: Deactivated successfully.
Oct  7 09:55:34 np0005473739 podman[264483]: 2025-10-07 13:55:34.590920052 +0000 UTC m=+1.713147301 container remove 0fcb228382e79954edc2ae45440268934fd0f7465e7ab834abbb758196e43a91 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 09:55:34 np0005473739 systemd[1]: libpod-conmon-0fcb228382e79954edc2ae45440268934fd0f7465e7ab834abbb758196e43a91.scope: Deactivated successfully.
Oct  7 09:55:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v834: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:35 np0005473739 podman[264665]: 2025-10-07 13:55:35.26048167 +0000 UTC m=+0.071357061 container create 690f8699e6edf55e2bc69d6283a30110c224dae2614464bf43b4bb33c0e5cee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 09:55:35 np0005473739 podman[264665]: 2025-10-07 13:55:35.212057334 +0000 UTC m=+0.022932725 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:55:35 np0005473739 systemd[1]: Started libpod-conmon-690f8699e6edf55e2bc69d6283a30110c224dae2614464bf43b4bb33c0e5cee1.scope.
Oct  7 09:55:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:55:35 np0005473739 podman[264665]: 2025-10-07 13:55:35.558624589 +0000 UTC m=+0.369499980 container init 690f8699e6edf55e2bc69d6283a30110c224dae2614464bf43b4bb33c0e5cee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  7 09:55:35 np0005473739 podman[264665]: 2025-10-07 13:55:35.569079453 +0000 UTC m=+0.379954804 container start 690f8699e6edf55e2bc69d6283a30110c224dae2614464bf43b4bb33c0e5cee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True)
Oct  7 09:55:35 np0005473739 goofy_carson[264681]: 167 167
Oct  7 09:55:35 np0005473739 systemd[1]: libpod-690f8699e6edf55e2bc69d6283a30110c224dae2614464bf43b4bb33c0e5cee1.scope: Deactivated successfully.
Oct  7 09:55:35 np0005473739 podman[264665]: 2025-10-07 13:55:35.654048233 +0000 UTC m=+0.464923704 container attach 690f8699e6edf55e2bc69d6283a30110c224dae2614464bf43b4bb33c0e5cee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:55:35 np0005473739 podman[264665]: 2025-10-07 13:55:35.655139553 +0000 UTC m=+0.466014904 container died 690f8699e6edf55e2bc69d6283a30110c224dae2614464bf43b4bb33c0e5cee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 09:55:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-136d51da0a596bde9191273dde11db675c8bb0b083c94a4e532ff3ed0e1d019c-merged.mount: Deactivated successfully.
Oct  7 09:55:36 np0005473739 podman[264665]: 2025-10-07 13:55:36.106253161 +0000 UTC m=+0.917128512 container remove 690f8699e6edf55e2bc69d6283a30110c224dae2614464bf43b4bb33c0e5cee1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_carson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:55:36 np0005473739 systemd[1]: libpod-conmon-690f8699e6edf55e2bc69d6283a30110c224dae2614464bf43b4bb33c0e5cee1.scope: Deactivated successfully.
Oct  7 09:55:36 np0005473739 podman[264706]: 2025-10-07 13:55:36.275235167 +0000 UTC m=+0.032009482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:55:36 np0005473739 podman[264706]: 2025-10-07 13:55:36.431144487 +0000 UTC m=+0.187918812 container create 54aaeb6c645c46f08f73a6c36104f8ca390dbc651f9aa9e8929f3c85138f22bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 09:55:36 np0005473739 systemd[1]: Started libpod-conmon-54aaeb6c645c46f08f73a6c36104f8ca390dbc651f9aa9e8929f3c85138f22bd.scope.
Oct  7 09:55:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:55:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da3535336e651539532c1f4b74decb659b128703a077b69f74d5383b9adfd14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da3535336e651539532c1f4b74decb659b128703a077b69f74d5383b9adfd14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da3535336e651539532c1f4b74decb659b128703a077b69f74d5383b9adfd14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da3535336e651539532c1f4b74decb659b128703a077b69f74d5383b9adfd14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:55:36 np0005473739 podman[264706]: 2025-10-07 13:55:36.843277995 +0000 UTC m=+0.600052370 container init 54aaeb6c645c46f08f73a6c36104f8ca390dbc651f9aa9e8929f3c85138f22bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 09:55:36 np0005473739 podman[264706]: 2025-10-07 13:55:36.851098987 +0000 UTC m=+0.607873282 container start 54aaeb6c645c46f08f73a6c36104f8ca390dbc651f9aa9e8929f3c85138f22bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 09:55:37 np0005473739 podman[264706]: 2025-10-07 13:55:37.07402327 +0000 UTC m=+0.830797565 container attach 54aaeb6c645c46f08f73a6c36104f8ca390dbc651f9aa9e8929f3c85138f22bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:55:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v835: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]: {
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "osd_id": 2,
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "type": "bluestore"
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:    },
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "osd_id": 1,
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "type": "bluestore"
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:    },
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "osd_id": 0,
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:        "type": "bluestore"
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]:    }
Oct  7 09:55:37 np0005473739 sharp_chaplygin[264722]: }
Oct  7 09:55:38 np0005473739 systemd[1]: libpod-54aaeb6c645c46f08f73a6c36104f8ca390dbc651f9aa9e8929f3c85138f22bd.scope: Deactivated successfully.
Oct  7 09:55:38 np0005473739 podman[264706]: 2025-10-07 13:55:38.0252994 +0000 UTC m=+1.782073695 container died 54aaeb6c645c46f08f73a6c36104f8ca390dbc651f9aa9e8929f3c85138f22bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:55:38 np0005473739 systemd[1]: libpod-54aaeb6c645c46f08f73a6c36104f8ca390dbc651f9aa9e8929f3c85138f22bd.scope: Consumed 1.170s CPU time.
Oct  7 09:55:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8da3535336e651539532c1f4b74decb659b128703a077b69f74d5383b9adfd14-merged.mount: Deactivated successfully.
Oct  7 09:55:38 np0005473739 podman[264706]: 2025-10-07 13:55:38.393470213 +0000 UTC m=+2.150244488 container remove 54aaeb6c645c46f08f73a6c36104f8ca390dbc651f9aa9e8929f3c85138f22bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_chaplygin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:55:38 np0005473739 systemd[1]: libpod-conmon-54aaeb6c645c46f08f73a6c36104f8ca390dbc651f9aa9e8929f3c85138f22bd.scope: Deactivated successfully.
Oct  7 09:55:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:55:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:55:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:55:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:55:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d58ac738-311d-4267-9176-da0cf9903de7 does not exist
Oct  7 09:55:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 087fe1bf-15d9-43df-997c-3b5b9ac3af6e does not exist
Oct  7 09:55:38 np0005473739 podman[264755]: 2025-10-07 13:55:38.568547864 +0000 UTC m=+0.544068797 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 09:55:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:55:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v836: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:55:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v837: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:43 np0005473739 podman[264843]: 2025-10-07 13:55:43.093149141 +0000 UTC m=+0.074237970 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  7 09:55:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v838: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v839: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v840: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v841: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v842: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:55:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v843: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v844: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v845: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:55:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:55:59 np0005473739 podman[264863]: 2025-10-07 13:55:59.083104593 +0000 UTC m=+0.069774689 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:55:59 np0005473739 podman[264864]: 2025-10-07 13:55:59.113997173 +0000 UTC m=+0.093196106 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 09:55:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v846: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:56:00.028 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:56:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:56:00.029 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:56:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:56:00.029 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:56:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v847: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v848: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v849: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v850: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:09 np0005473739 podman[264903]: 2025-10-07 13:56:09.166811572 +0000 UTC m=+0.146538726 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, container_name=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:56:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v851: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:09 np0005473739 nova_compute[259550]: 2025-10-07 13:56:09.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:09 np0005473739 nova_compute[259550]: 2025-10-07 13:56:09.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 09:56:10 np0005473739 nova_compute[259550]: 2025-10-07 13:56:10.010 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 09:56:10 np0005473739 nova_compute[259550]: 2025-10-07 13:56:10.012 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:10 np0005473739 nova_compute[259550]: 2025-10-07 13:56:10.012 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 09:56:10 np0005473739 nova_compute[259550]: 2025-10-07 13:56:10.026 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v852: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:13 np0005473739 nova_compute[259550]: 2025-10-07 13:56:13.040 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:13 np0005473739 nova_compute[259550]: 2025-10-07 13:56:13.041 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:13 np0005473739 nova_compute[259550]: 2025-10-07 13:56:13.041 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 09:56:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v853: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:13 np0005473739 nova_compute[259550]: 2025-10-07 13:56:13.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:13 np0005473739 nova_compute[259550]: 2025-10-07 13:56:13.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:13 np0005473739 nova_compute[259550]: 2025-10-07 13:56:13.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:14 np0005473739 podman[264927]: 2025-10-07 13:56:14.090998796 +0000 UTC m=+0.076446120 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  7 09:56:14 np0005473739 nova_compute[259550]: 2025-10-07 13:56:14.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:14 np0005473739 nova_compute[259550]: 2025-10-07 13:56:14.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:14 np0005473739 nova_compute[259550]: 2025-10-07 13:56:14.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.017 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.018 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.019 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:56:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v854: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:56:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694918667' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.483 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.659 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.662 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5212MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.662 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.663 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.875 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.875 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 09:56:15 np0005473739 nova_compute[259550]: 2025-10-07 13:56:15.948 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 09:56:16 np0005473739 nova_compute[259550]: 2025-10-07 13:56:16.039 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 09:56:16 np0005473739 nova_compute[259550]: 2025-10-07 13:56:16.039 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 09:56:16 np0005473739 nova_compute[259550]: 2025-10-07 13:56:16.057 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 09:56:16 np0005473739 nova_compute[259550]: 2025-10-07 13:56:16.080 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 09:56:16 np0005473739 nova_compute[259550]: 2025-10-07 13:56:16.099 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:56:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:56:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3693279582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:56:16 np0005473739 nova_compute[259550]: 2025-10-07 13:56:16.550 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:56:16 np0005473739 nova_compute[259550]: 2025-10-07 13:56:16.559 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 09:56:16 np0005473739 nova_compute[259550]: 2025-10-07 13:56:16.592 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 09:56:16 np0005473739 nova_compute[259550]: 2025-10-07 13:56:16.593 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 09:56:16 np0005473739 nova_compute[259550]: 2025-10-07 13:56:16.594 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.931s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:56:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v855: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:18 np0005473739 nova_compute[259550]: 2025-10-07 13:56:18.592 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:56:18 np0005473739 nova_compute[259550]: 2025-10-07 13:56:18.593 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 09:56:18 np0005473739 nova_compute[259550]: 2025-10-07 13:56:18.593 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 09:56:18 np0005473739 nova_compute[259550]: 2025-10-07 13:56:18.615 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 09:56:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v856: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v857: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:56:22
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', '.rgw.root', 'images', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.meta', 'backups', 'volumes']
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:56:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:56:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v858: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v859: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v860: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v861: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:30 np0005473739 podman[264991]: 2025-10-07 13:56:30.083042843 +0000 UTC m=+0.067332302 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct  7 09:56:30 np0005473739 podman[264990]: 2025-10-07 13:56:30.107141028 +0000 UTC m=+0.084912459 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  7 09:56:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v862: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:56:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 09:56:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/859695933' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 09:56:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 09:56:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/859695933' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 09:56:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v863: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v864: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v865: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v866: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:56:39 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cdc698eb-39d4-4e2b-a087-685eb2711cae does not exist
Oct  7 09:56:39 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev fc9bd9fc-712b-451e-989f-c4c306f3ffbd does not exist
Oct  7 09:56:39 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0d90c027-1638-4bfc-aa91-1e473ce6624e does not exist
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:56:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:56:39 np0005473739 podman[265185]: 2025-10-07 13:56:39.731940298 +0000 UTC m=+0.088462978 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Oct  7 09:56:40 np0005473739 podman[265327]: 2025-10-07 13:56:40.195233976 +0000 UTC m=+0.048144380 container create bc88fb8c95aa7fd4d8ade8d78d17ce715177f08002b7800e9705b87eb5b319bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ellis, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:56:40 np0005473739 systemd[1]: Started libpod-conmon-bc88fb8c95aa7fd4d8ade8d78d17ce715177f08002b7800e9705b87eb5b319bd.scope.
Oct  7 09:56:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:56:40 np0005473739 podman[265327]: 2025-10-07 13:56:40.175817729 +0000 UTC m=+0.028728153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:56:40 np0005473739 podman[265327]: 2025-10-07 13:56:40.284257058 +0000 UTC m=+0.137167482 container init bc88fb8c95aa7fd4d8ade8d78d17ce715177f08002b7800e9705b87eb5b319bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:56:40 np0005473739 podman[265327]: 2025-10-07 13:56:40.29792019 +0000 UTC m=+0.150830594 container start bc88fb8c95aa7fd4d8ade8d78d17ce715177f08002b7800e9705b87eb5b319bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ellis, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True)
Oct  7 09:56:40 np0005473739 podman[265327]: 2025-10-07 13:56:40.301408334 +0000 UTC m=+0.154318738 container attach bc88fb8c95aa7fd4d8ade8d78d17ce715177f08002b7800e9705b87eb5b319bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ellis, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  7 09:56:40 np0005473739 nifty_ellis[265344]: 167 167
Oct  7 09:56:40 np0005473739 systemd[1]: libpod-bc88fb8c95aa7fd4d8ade8d78d17ce715177f08002b7800e9705b87eb5b319bd.scope: Deactivated successfully.
Oct  7 09:56:40 np0005473739 conmon[265344]: conmon bc88fb8c95aa7fd4d8ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc88fb8c95aa7fd4d8ade8d78d17ce715177f08002b7800e9705b87eb5b319bd.scope/container/memory.events
Oct  7 09:56:40 np0005473739 podman[265327]: 2025-10-07 13:56:40.305407383 +0000 UTC m=+0.158317787 container died bc88fb8c95aa7fd4d8ade8d78d17ce715177f08002b7800e9705b87eb5b319bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ellis, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct  7 09:56:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8b9486ad457f9bc6c021ce6482b88dc07c949b24989330f58a2811657527177a-merged.mount: Deactivated successfully.
Oct  7 09:56:40 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:56:40 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:56:40 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:56:40 np0005473739 podman[265327]: 2025-10-07 13:56:40.354674842 +0000 UTC m=+0.207585246 container remove bc88fb8c95aa7fd4d8ade8d78d17ce715177f08002b7800e9705b87eb5b319bd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_ellis, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 09:56:40 np0005473739 systemd[1]: libpod-conmon-bc88fb8c95aa7fd4d8ade8d78d17ce715177f08002b7800e9705b87eb5b319bd.scope: Deactivated successfully.
Oct  7 09:56:40 np0005473739 podman[265368]: 2025-10-07 13:56:40.536290662 +0000 UTC m=+0.047697188 container create 31a448a0588b73b1363018971b4c6b75cb5eddb257762572bf87897f18b3a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:56:40 np0005473739 systemd[1]: Started libpod-conmon-31a448a0588b73b1363018971b4c6b75cb5eddb257762572bf87897f18b3a59c.scope.
Oct  7 09:56:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:56:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918de6dbc0140cfe582ebfef30214ded3d1180b87a359b8d34a115ef9a53343f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918de6dbc0140cfe582ebfef30214ded3d1180b87a359b8d34a115ef9a53343f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918de6dbc0140cfe582ebfef30214ded3d1180b87a359b8d34a115ef9a53343f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918de6dbc0140cfe582ebfef30214ded3d1180b87a359b8d34a115ef9a53343f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/918de6dbc0140cfe582ebfef30214ded3d1180b87a359b8d34a115ef9a53343f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:40 np0005473739 podman[265368]: 2025-10-07 13:56:40.517790709 +0000 UTC m=+0.029197255 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:56:40 np0005473739 podman[265368]: 2025-10-07 13:56:40.621401767 +0000 UTC m=+0.132808293 container init 31a448a0588b73b1363018971b4c6b75cb5eddb257762572bf87897f18b3a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 09:56:40 np0005473739 podman[265368]: 2025-10-07 13:56:40.63036507 +0000 UTC m=+0.141771596 container start 31a448a0588b73b1363018971b4c6b75cb5eddb257762572bf87897f18b3a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:56:40 np0005473739 podman[265368]: 2025-10-07 13:56:40.634600565 +0000 UTC m=+0.146007091 container attach 31a448a0588b73b1363018971b4c6b75cb5eddb257762572bf87897f18b3a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:56:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v867: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:41 np0005473739 zealous_shaw[265386]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:56:41 np0005473739 zealous_shaw[265386]: --> relative data size: 1.0
Oct  7 09:56:41 np0005473739 zealous_shaw[265386]: --> All data devices are unavailable
Oct  7 09:56:41 np0005473739 systemd[1]: libpod-31a448a0588b73b1363018971b4c6b75cb5eddb257762572bf87897f18b3a59c.scope: Deactivated successfully.
Oct  7 09:56:41 np0005473739 systemd[1]: libpod-31a448a0588b73b1363018971b4c6b75cb5eddb257762572bf87897f18b3a59c.scope: Consumed 1.102s CPU time.
Oct  7 09:56:41 np0005473739 podman[265368]: 2025-10-07 13:56:41.790302845 +0000 UTC m=+1.301709431 container died 31a448a0588b73b1363018971b4c6b75cb5eddb257762572bf87897f18b3a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:56:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-918de6dbc0140cfe582ebfef30214ded3d1180b87a359b8d34a115ef9a53343f-merged.mount: Deactivated successfully.
Oct  7 09:56:41 np0005473739 podman[265368]: 2025-10-07 13:56:41.857008349 +0000 UTC m=+1.368414875 container remove 31a448a0588b73b1363018971b4c6b75cb5eddb257762572bf87897f18b3a59c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_shaw, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 09:56:41 np0005473739 systemd[1]: libpod-conmon-31a448a0588b73b1363018971b4c6b75cb5eddb257762572bf87897f18b3a59c.scope: Deactivated successfully.
Oct  7 09:56:42 np0005473739 podman[265573]: 2025-10-07 13:56:42.491440732 +0000 UTC m=+0.042094376 container create 64eac40eaed634f799f7f5b06e72006f01fe7ecec9d2b26baed3cacea448daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 09:56:42 np0005473739 systemd[1]: Started libpod-conmon-64eac40eaed634f799f7f5b06e72006f01fe7ecec9d2b26baed3cacea448daec.scope.
Oct  7 09:56:42 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:56:42 np0005473739 podman[265573]: 2025-10-07 13:56:42.475257363 +0000 UTC m=+0.025911037 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:56:42 np0005473739 podman[265573]: 2025-10-07 13:56:42.586810617 +0000 UTC m=+0.137464281 container init 64eac40eaed634f799f7f5b06e72006f01fe7ecec9d2b26baed3cacea448daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:56:42 np0005473739 podman[265573]: 2025-10-07 13:56:42.593443956 +0000 UTC m=+0.144097600 container start 64eac40eaed634f799f7f5b06e72006f01fe7ecec9d2b26baed3cacea448daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 09:56:42 np0005473739 podman[265573]: 2025-10-07 13:56:42.598819423 +0000 UTC m=+0.149473087 container attach 64eac40eaed634f799f7f5b06e72006f01fe7ecec9d2b26baed3cacea448daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 09:56:42 np0005473739 laughing_germain[265590]: 167 167
Oct  7 09:56:42 np0005473739 systemd[1]: libpod-64eac40eaed634f799f7f5b06e72006f01fe7ecec9d2b26baed3cacea448daec.scope: Deactivated successfully.
Oct  7 09:56:42 np0005473739 conmon[265590]: conmon 64eac40eaed634f799f7 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-64eac40eaed634f799f7f5b06e72006f01fe7ecec9d2b26baed3cacea448daec.scope/container/memory.events
Oct  7 09:56:42 np0005473739 podman[265573]: 2025-10-07 13:56:42.602418031 +0000 UTC m=+0.153071705 container died 64eac40eaed634f799f7f5b06e72006f01fe7ecec9d2b26baed3cacea448daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:56:42 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ccec370219c6391c9890178e991fd14fcc668612757f8ad37f5a9994c0749d65-merged.mount: Deactivated successfully.
Oct  7 09:56:42 np0005473739 podman[265573]: 2025-10-07 13:56:42.654680972 +0000 UTC m=+0.205334616 container remove 64eac40eaed634f799f7f5b06e72006f01fe7ecec9d2b26baed3cacea448daec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_germain, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:56:42 np0005473739 systemd[1]: libpod-conmon-64eac40eaed634f799f7f5b06e72006f01fe7ecec9d2b26baed3cacea448daec.scope: Deactivated successfully.
Oct  7 09:56:42 np0005473739 podman[265614]: 2025-10-07 13:56:42.840736242 +0000 UTC m=+0.046178127 container create 907a87a646a7e48df879f28eda2f4b37dde2415f242f333b8e1fd3a471ae73f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 09:56:42 np0005473739 systemd[1]: Started libpod-conmon-907a87a646a7e48df879f28eda2f4b37dde2415f242f333b8e1fd3a471ae73f7.scope.
Oct  7 09:56:42 np0005473739 podman[265614]: 2025-10-07 13:56:42.822411334 +0000 UTC m=+0.027853229 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:56:42 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:56:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0f3adbc3be25fa7f35687d550cab9cb96686572d91d71f23901f586987598d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0f3adbc3be25fa7f35687d550cab9cb96686572d91d71f23901f586987598d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0f3adbc3be25fa7f35687d550cab9cb96686572d91d71f23901f586987598d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb0f3adbc3be25fa7f35687d550cab9cb96686572d91d71f23901f586987598d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:42 np0005473739 podman[265614]: 2025-10-07 13:56:42.946423236 +0000 UTC m=+0.151865181 container init 907a87a646a7e48df879f28eda2f4b37dde2415f242f333b8e1fd3a471ae73f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:56:42 np0005473739 podman[265614]: 2025-10-07 13:56:42.958917645 +0000 UTC m=+0.164359510 container start 907a87a646a7e48df879f28eda2f4b37dde2415f242f333b8e1fd3a471ae73f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 09:56:42 np0005473739 podman[265614]: 2025-10-07 13:56:42.963020627 +0000 UTC m=+0.168462582 container attach 907a87a646a7e48df879f28eda2f4b37dde2415f242f333b8e1fd3a471ae73f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 09:56:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v868: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:43 np0005473739 gallant_pike[265631]: {
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:    "0": [
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:        {
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "devices": [
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "/dev/loop3"
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            ],
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_name": "ceph_lv0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_size": "21470642176",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "name": "ceph_lv0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "tags": {
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.cluster_name": "ceph",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.crush_device_class": "",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.encrypted": "0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.osd_id": "0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.type": "block",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.vdo": "0"
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            },
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "type": "block",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "vg_name": "ceph_vg0"
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:        }
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:    ],
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:    "1": [
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:        {
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "devices": [
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "/dev/loop4"
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            ],
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_name": "ceph_lv1",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_size": "21470642176",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "name": "ceph_lv1",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "tags": {
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.cluster_name": "ceph",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.crush_device_class": "",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.encrypted": "0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.osd_id": "1",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.type": "block",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.vdo": "0"
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            },
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "type": "block",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "vg_name": "ceph_vg1"
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:        }
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:    ],
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:    "2": [
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:        {
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "devices": [
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "/dev/loop5"
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            ],
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_name": "ceph_lv2",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_size": "21470642176",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "name": "ceph_lv2",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "tags": {
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.cluster_name": "ceph",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.crush_device_class": "",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.encrypted": "0",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.osd_id": "2",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.type": "block",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:                "ceph.vdo": "0"
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            },
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "type": "block",
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:            "vg_name": "ceph_vg2"
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:        }
Oct  7 09:56:43 np0005473739 gallant_pike[265631]:    ]
Oct  7 09:56:43 np0005473739 gallant_pike[265631]: }
Oct  7 09:56:43 np0005473739 systemd[1]: libpod-907a87a646a7e48df879f28eda2f4b37dde2415f242f333b8e1fd3a471ae73f7.scope: Deactivated successfully.
Oct  7 09:56:43 np0005473739 podman[265614]: 2025-10-07 13:56:43.812390376 +0000 UTC m=+1.017832251 container died 907a87a646a7e48df879f28eda2f4b37dde2415f242f333b8e1fd3a471ae73f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:56:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-bb0f3adbc3be25fa7f35687d550cab9cb96686572d91d71f23901f586987598d-merged.mount: Deactivated successfully.
Oct  7 09:56:43 np0005473739 podman[265614]: 2025-10-07 13:56:43.863614369 +0000 UTC m=+1.069056244 container remove 907a87a646a7e48df879f28eda2f4b37dde2415f242f333b8e1fd3a471ae73f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_pike, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:56:43 np0005473739 systemd[1]: libpod-conmon-907a87a646a7e48df879f28eda2f4b37dde2415f242f333b8e1fd3a471ae73f7.scope: Deactivated successfully.
Oct  7 09:56:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:44 np0005473739 podman[265726]: 2025-10-07 13:56:44.257271565 +0000 UTC m=+0.066132190 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  7 09:56:44 np0005473739 podman[265807]: 2025-10-07 13:56:44.596247574 +0000 UTC m=+0.046210648 container create 73d16960af80861298796c4aa04618b1d36e3a443bf7e037db220fdb38716ad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 09:56:44 np0005473739 systemd[1]: Started libpod-conmon-73d16960af80861298796c4aa04618b1d36e3a443bf7e037db220fdb38716ad9.scope.
Oct  7 09:56:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:56:44 np0005473739 podman[265807]: 2025-10-07 13:56:44.577850013 +0000 UTC m=+0.027813097 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:56:44 np0005473739 podman[265807]: 2025-10-07 13:56:44.683739923 +0000 UTC m=+0.133703037 container init 73d16960af80861298796c4aa04618b1d36e3a443bf7e037db220fdb38716ad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 09:56:44 np0005473739 podman[265807]: 2025-10-07 13:56:44.692413438 +0000 UTC m=+0.142376512 container start 73d16960af80861298796c4aa04618b1d36e3a443bf7e037db220fdb38716ad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:56:44 np0005473739 podman[265807]: 2025-10-07 13:56:44.696059648 +0000 UTC m=+0.146022782 container attach 73d16960af80861298796c4aa04618b1d36e3a443bf7e037db220fdb38716ad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 09:56:44 np0005473739 dreamy_pasteur[265823]: 167 167
Oct  7 09:56:44 np0005473739 systemd[1]: libpod-73d16960af80861298796c4aa04618b1d36e3a443bf7e037db220fdb38716ad9.scope: Deactivated successfully.
Oct  7 09:56:44 np0005473739 podman[265807]: 2025-10-07 13:56:44.700695084 +0000 UTC m=+0.150658158 container died 73d16960af80861298796c4aa04618b1d36e3a443bf7e037db220fdb38716ad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 09:56:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7f5cdf7510daf41eacdfd9db6e68435e7465c3a5468b28fd8ecb6e2ae2951d99-merged.mount: Deactivated successfully.
Oct  7 09:56:44 np0005473739 podman[265807]: 2025-10-07 13:56:44.73766988 +0000 UTC m=+0.187632954 container remove 73d16960af80861298796c4aa04618b1d36e3a443bf7e037db220fdb38716ad9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:56:44 np0005473739 systemd[1]: libpod-conmon-73d16960af80861298796c4aa04618b1d36e3a443bf7e037db220fdb38716ad9.scope: Deactivated successfully.
Oct  7 09:56:44 np0005473739 podman[265847]: 2025-10-07 13:56:44.959859432 +0000 UTC m=+0.078097035 container create a6fb614f2919ab3a5d5724950e6ab68f08892494adaa3823f0483e473b66a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_carson, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:56:45 np0005473739 systemd[1]: Started libpod-conmon-a6fb614f2919ab3a5d5724950e6ab68f08892494adaa3823f0483e473b66a54b.scope.
Oct  7 09:56:45 np0005473739 podman[265847]: 2025-10-07 13:56:44.925843017 +0000 UTC m=+0.044080630 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:56:45 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:56:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85beb09835435b49e7b825bdd28db180808f71a257dbff942eb65b5212b486b5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85beb09835435b49e7b825bdd28db180808f71a257dbff942eb65b5212b486b5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85beb09835435b49e7b825bdd28db180808f71a257dbff942eb65b5212b486b5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85beb09835435b49e7b825bdd28db180808f71a257dbff942eb65b5212b486b5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:56:45 np0005473739 podman[265847]: 2025-10-07 13:56:45.069184605 +0000 UTC m=+0.187422278 container init a6fb614f2919ab3a5d5724950e6ab68f08892494adaa3823f0483e473b66a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_carson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:56:45 np0005473739 podman[265847]: 2025-10-07 13:56:45.077502881 +0000 UTC m=+0.195740474 container start a6fb614f2919ab3a5d5724950e6ab68f08892494adaa3823f0483e473b66a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_carson, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:56:45 np0005473739 podman[265847]: 2025-10-07 13:56:45.081558291 +0000 UTC m=+0.199795924 container attach a6fb614f2919ab3a5d5724950e6ab68f08892494adaa3823f0483e473b66a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_carson, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 09:56:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v869: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]: {
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "osd_id": 2,
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "type": "bluestore"
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:    },
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "osd_id": 1,
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "type": "bluestore"
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:    },
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "osd_id": 0,
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:        "type": "bluestore"
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]:    }
Oct  7 09:56:46 np0005473739 stupefied_carson[265864]: }
Oct  7 09:56:46 np0005473739 systemd[1]: libpod-a6fb614f2919ab3a5d5724950e6ab68f08892494adaa3823f0483e473b66a54b.scope: Deactivated successfully.
Oct  7 09:56:46 np0005473739 systemd[1]: libpod-a6fb614f2919ab3a5d5724950e6ab68f08892494adaa3823f0483e473b66a54b.scope: Consumed 1.143s CPU time.
Oct  7 09:56:46 np0005473739 podman[265847]: 2025-10-07 13:56:46.211876971 +0000 UTC m=+1.330114604 container died a6fb614f2919ab3a5d5724950e6ab68f08892494adaa3823f0483e473b66a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_carson, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:56:46 np0005473739 systemd[1]: var-lib-containers-storage-overlay-85beb09835435b49e7b825bdd28db180808f71a257dbff942eb65b5212b486b5-merged.mount: Deactivated successfully.
Oct  7 09:56:46 np0005473739 podman[265847]: 2025-10-07 13:56:46.273232959 +0000 UTC m=+1.391470542 container remove a6fb614f2919ab3a5d5724950e6ab68f08892494adaa3823f0483e473b66a54b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_carson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:56:46 np0005473739 systemd[1]: libpod-conmon-a6fb614f2919ab3a5d5724950e6ab68f08892494adaa3823f0483e473b66a54b.scope: Deactivated successfully.
Oct  7 09:56:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:56:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:56:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:56:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:56:46 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a1ac4245-d548-4f13-9d2d-a2a3a6cf01b1 does not exist
Oct  7 09:56:46 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 77f39833-5bdf-4caf-bb37-2f87a03a02e1 does not exist
Oct  7 09:56:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:56:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:56:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v870: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v871: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v872: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:56:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v873: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #39. Immutable memtables: 0.
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.456558) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 17] Flushing memtable with next log file: 39
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845413456585, "job": 17, "event": "flush_started", "num_memtables": 1, "num_entries": 2050, "num_deletes": 251, "total_data_size": 3451043, "memory_usage": 3510712, "flush_reason": "Manual Compaction"}
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 17] Level-0 flush table #40: started
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845413477275, "cf_name": "default", "job": 17, "event": "table_file_creation", "file_number": 40, "file_size": 3375507, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 16330, "largest_seqno": 18379, "table_properties": {"data_size": 3366200, "index_size": 5865, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18383, "raw_average_key_size": 19, "raw_value_size": 3347706, "raw_average_value_size": 3607, "num_data_blocks": 266, "num_entries": 928, "num_filter_entries": 928, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759845186, "oldest_key_time": 1759845186, "file_creation_time": 1759845413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 40, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 17] Flush lasted 20765 microseconds, and 6581 cpu microseconds.
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.477321) [db/flush_job.cc:967] [default] [JOB 17] Level-0 flush table #40: 3375507 bytes OK
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.477340) [db/memtable_list.cc:519] [default] Level-0 commit table #40 started
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.479311) [db/memtable_list.cc:722] [default] Level-0 commit table #40: memtable #1 done
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.479364) EVENT_LOG_v1 {"time_micros": 1759845413479353, "job": 17, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.479388) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 17] Try to delete WAL files size 3442466, prev total WAL file size 3442466, number of live WAL files 2.
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000036.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.480584) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031323535' seq:72057594037927935, type:22 .. '7061786F730031353037' seq:0, type:0; will stop at (end)
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 18] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 17 Base level 0, inputs: [40(3296KB)], [38(7588KB)]
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845413480618, "job": 18, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [40], "files_L6": [38], "score": -1, "input_data_size": 11146332, "oldest_snapshot_seqno": -1}
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 18] Generated table #41: 4417 keys, 9373852 bytes, temperature: kUnknown
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845413531077, "cf_name": "default", "job": 18, "event": "table_file_creation", "file_number": 41, "file_size": 9373852, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9340668, "index_size": 21042, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11077, "raw_key_size": 106765, "raw_average_key_size": 24, "raw_value_size": 9257306, "raw_average_value_size": 2095, "num_data_blocks": 894, "num_entries": 4417, "num_filter_entries": 4417, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759845413, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 41, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.531337) [db/compaction/compaction_job.cc:1663] [default] [JOB 18] Compacted 1@0 + 1@6 files to L6 => 9373852 bytes
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.532881) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 220.5 rd, 185.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.4 +0.0 blob) out(8.9 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 4931, records dropped: 514 output_compression: NoCompression
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.532898) EVENT_LOG_v1 {"time_micros": 1759845413532888, "job": 18, "event": "compaction_finished", "compaction_time_micros": 50540, "compaction_time_cpu_micros": 19976, "output_level": 6, "num_output_files": 1, "total_output_size": 9373852, "num_input_records": 4931, "num_output_records": 4417, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000040.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845413533591, "job": 18, "event": "table_file_deletion", "file_number": 40}
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000038.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845413535272, "job": 18, "event": "table_file_deletion", "file_number": 38}
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.480493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.535364) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.535370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.535372) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.535373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:53.535374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v874: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v875: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #42. Immutable memtables: 0.
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:58.936808) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 19] Flushing memtable with next log file: 42
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845418936852, "job": 19, "event": "flush_started", "num_memtables": 1, "num_entries": 290, "num_deletes": 250, "total_data_size": 70390, "memory_usage": 75824, "flush_reason": "Manual Compaction"}
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 19] Level-0 flush table #43: started
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845418939973, "cf_name": "default", "job": 19, "event": "table_file_creation", "file_number": 43, "file_size": 69633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18380, "largest_seqno": 18669, "table_properties": {"data_size": 67706, "index_size": 155, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 709, "raw_key_size": 5216, "raw_average_key_size": 19, "raw_value_size": 63945, "raw_average_value_size": 235, "num_data_blocks": 7, "num_entries": 271, "num_filter_entries": 271, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759845414, "oldest_key_time": 1759845414, "file_creation_time": 1759845418, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 43, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 19] Flush lasted 3243 microseconds, and 1321 cpu microseconds.
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:58.940051) [db/flush_job.cc:967] [default] [JOB 19] Level-0 flush table #43: 69633 bytes OK
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:58.940073) [db/memtable_list.cc:519] [default] Level-0 commit table #43 started
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:58.941871) [db/memtable_list.cc:722] [default] Level-0 commit table #43: memtable #1 done
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:58.941895) EVENT_LOG_v1 {"time_micros": 1759845418941887, "job": 19, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:58.941913) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 19] Try to delete WAL files size 68254, prev total WAL file size 68254, number of live WAL files 2.
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000039.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:58.942410) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D67727374617400353031' seq:72057594037927935, type:22 .. '6D67727374617400373532' seq:0, type:0; will stop at (end)
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 20] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 19 Base level 0, inputs: [43(68KB)], [41(9154KB)]
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845418942487, "job": 20, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [43], "files_L6": [41], "score": -1, "input_data_size": 9443485, "oldest_snapshot_seqno": -1}
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 20] Generated table #44: 4181 keys, 6152762 bytes, temperature: kUnknown
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845418994531, "cf_name": "default", "job": 20, "event": "table_file_creation", "file_number": 44, "file_size": 6152762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 6125772, "index_size": 15425, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10501, "raw_key_size": 102296, "raw_average_key_size": 24, "raw_value_size": 6051042, "raw_average_value_size": 1447, "num_data_blocks": 650, "num_entries": 4181, "num_filter_entries": 4181, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759845418, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 44, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:58.994910) [db/compaction/compaction_job.cc:1663] [default] [JOB 20] Compacted 1@0 + 1@6 files to L6 => 6152762 bytes
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:58.998377) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 181.1 rd, 118.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.1, 8.9 +0.0 blob) out(5.9 +0.0 blob), read-write-amplify(224.0) write-amplify(88.4) OK, records in: 4688, records dropped: 507 output_compression: NoCompression
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:58.998409) EVENT_LOG_v1 {"time_micros": 1759845418998394, "job": 20, "event": "compaction_finished", "compaction_time_micros": 52152, "compaction_time_cpu_micros": 25752, "output_level": 6, "num_output_files": 1, "total_output_size": 6152762, "num_input_records": 4688, "num_output_records": 4181, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000043.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:56:58 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845418998589, "job": 20, "event": "table_file_deletion", "file_number": 43}
Oct  7 09:56:59 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000041.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:56:59 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845419001910, "job": 20, "event": "table_file_deletion", "file_number": 41}
Oct  7 09:56:59 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:58.942310) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:59 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:59.002018) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:59 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:59.002025) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:59 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:59.002028) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:59 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:59.002031) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:59 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:56:59.002034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:56:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v876: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:57:00.029 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:57:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:57:00.031 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:57:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:57:00.031 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:57:01 np0005473739 podman[265962]: 2025-10-07 13:57:01.088836632 +0000 UTC m=+0.062076209 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 09:57:01 np0005473739 podman[265961]: 2025-10-07 13:57:01.099685027 +0000 UTC m=+0.072857732 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:57:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v877: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v878: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v879: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v880: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v881: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:10 np0005473739 podman[265999]: 2025-10-07 13:57:10.118405017 +0000 UTC m=+0.102432583 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 09:57:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v882: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:12 np0005473739 nova_compute[259550]: 2025-10-07 13:57:12.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:57:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v883: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:13 np0005473739 nova_compute[259550]: 2025-10-07 13:57:13.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:57:13 np0005473739 nova_compute[259550]: 2025-10-07 13:57:13.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:57:13 np0005473739 nova_compute[259550]: 2025-10-07 13:57:13.998 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:57:14 np0005473739 nova_compute[259550]: 2025-10-07 13:57:14.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:57:14 np0005473739 nova_compute[259550]: 2025-10-07 13:57:14.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:57:14 np0005473739 nova_compute[259550]: 2025-10-07 13:57:14.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:57:14 np0005473739 nova_compute[259550]: 2025-10-07 13:57:14.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 09:57:14 np0005473739 nova_compute[259550]: 2025-10-07 13:57:14.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:57:15 np0005473739 podman[266026]: 2025-10-07 13:57:15.066803778 +0000 UTC m=+0.055280566 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 09:57:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v884: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:15 np0005473739 nova_compute[259550]: 2025-10-07 13:57:15.686 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:57:15 np0005473739 nova_compute[259550]: 2025-10-07 13:57:15.686 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:57:15 np0005473739 nova_compute[259550]: 2025-10-07 13:57:15.687 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:57:15 np0005473739 nova_compute[259550]: 2025-10-07 13:57:15.687 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 09:57:15 np0005473739 nova_compute[259550]: 2025-10-07 13:57:15.687 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:57:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:57:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3684116855' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:57:16 np0005473739 nova_compute[259550]: 2025-10-07 13:57:16.145 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:57:16 np0005473739 nova_compute[259550]: 2025-10-07 13:57:16.290 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 09:57:16 np0005473739 nova_compute[259550]: 2025-10-07 13:57:16.291 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5186MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 09:57:16 np0005473739 nova_compute[259550]: 2025-10-07 13:57:16.291 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:57:16 np0005473739 nova_compute[259550]: 2025-10-07 13:57:16.292 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:57:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v885: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:17 np0005473739 nova_compute[259550]: 2025-10-07 13:57:17.369 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 09:57:17 np0005473739 nova_compute[259550]: 2025-10-07 13:57:17.370 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 09:57:17 np0005473739 nova_compute[259550]: 2025-10-07 13:57:17.391 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:57:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:57:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3570622204' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:57:17 np0005473739 nova_compute[259550]: 2025-10-07 13:57:17.858 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:57:17 np0005473739 nova_compute[259550]: 2025-10-07 13:57:17.869 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 09:57:17 np0005473739 nova_compute[259550]: 2025-10-07 13:57:17.885 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 09:57:17 np0005473739 nova_compute[259550]: 2025-10-07 13:57:17.888 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 09:57:17 np0005473739 nova_compute[259550]: 2025-10-07 13:57:17.888 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:57:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v886: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:19 np0005473739 nova_compute[259550]: 2025-10-07 13:57:19.888 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:57:19 np0005473739 nova_compute[259550]: 2025-10-07 13:57:19.889 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 09:57:19 np0005473739 nova_compute[259550]: 2025-10-07 13:57:19.889 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 09:57:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v887: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:21 np0005473739 nova_compute[259550]: 2025-10-07 13:57:21.226 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 09:57:21 np0005473739 nova_compute[259550]: 2025-10-07 13:57:21.227 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:57:22
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.log', '.mgr', 'images', 'volumes', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:57:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:57:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v888: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v889: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v890: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v891: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v892: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:32 np0005473739 podman[266090]: 2025-10-07 13:57:32.078803834 +0000 UTC m=+0.063485036 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  7 09:57:32 np0005473739 podman[266089]: 2025-10-07 13:57:32.104800093 +0000 UTC m=+0.091095038 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:57:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 09:57:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019945314' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 09:57:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 09:57:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3019945314' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 09:57:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v893: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v894: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v895: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v896: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:41 np0005473739 podman[266126]: 2025-10-07 13:57:41.113757747 +0000 UTC m=+0.102015642 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  7 09:57:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v897: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v898: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v899: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:46 np0005473739 podman[266152]: 2025-10-07 13:57:46.074652404 +0000 UTC m=+0.060120686 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 09:57:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v900: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:57:47 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9f6b1ba1-015c-4956-9ffb-6c2f8cf62966 does not exist
Oct  7 09:57:47 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9551329d-9551-4e55-b13c-3798b57e725b does not exist
Oct  7 09:57:47 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 81c17556-6c40-4494-a6bd-58e458d2172a does not exist
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:57:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:57:48 np0005473739 podman[266444]: 2025-10-07 13:57:48.035174772 +0000 UTC m=+0.050466288 container create 20b65a06ccd3a1aaa800cbd128975c8f936dad9c81d232fd9e28467507d7d196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_leavitt, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 09:57:48 np0005473739 systemd[1]: Started libpod-conmon-20b65a06ccd3a1aaa800cbd128975c8f936dad9c81d232fd9e28467507d7d196.scope.
Oct  7 09:57:48 np0005473739 podman[266444]: 2025-10-07 13:57:48.015133193 +0000 UTC m=+0.030424729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:57:48 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:57:48 np0005473739 podman[266444]: 2025-10-07 13:57:48.131601371 +0000 UTC m=+0.146892907 container init 20b65a06ccd3a1aaa800cbd128975c8f936dad9c81d232fd9e28467507d7d196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 09:57:48 np0005473739 podman[266444]: 2025-10-07 13:57:48.139773461 +0000 UTC m=+0.155064977 container start 20b65a06ccd3a1aaa800cbd128975c8f936dad9c81d232fd9e28467507d7d196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_leavitt, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  7 09:57:48 np0005473739 podman[266444]: 2025-10-07 13:57:48.143713887 +0000 UTC m=+0.159005423 container attach 20b65a06ccd3a1aaa800cbd128975c8f936dad9c81d232fd9e28467507d7d196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_leavitt, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 09:57:48 np0005473739 great_leavitt[266460]: 167 167
Oct  7 09:57:48 np0005473739 podman[266444]: 2025-10-07 13:57:48.148100865 +0000 UTC m=+0.163392381 container died 20b65a06ccd3a1aaa800cbd128975c8f936dad9c81d232fd9e28467507d7d196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:57:48 np0005473739 systemd[1]: libpod-20b65a06ccd3a1aaa800cbd128975c8f936dad9c81d232fd9e28467507d7d196.scope: Deactivated successfully.
Oct  7 09:57:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a9d9150d5f3d13fa01360a9b73fd8f32a38839faf5a17276995c224934e8d6b6-merged.mount: Deactivated successfully.
Oct  7 09:57:48 np0005473739 podman[266444]: 2025-10-07 13:57:48.19108314 +0000 UTC m=+0.206374656 container remove 20b65a06ccd3a1aaa800cbd128975c8f936dad9c81d232fd9e28467507d7d196 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 09:57:48 np0005473739 systemd[1]: libpod-conmon-20b65a06ccd3a1aaa800cbd128975c8f936dad9c81d232fd9e28467507d7d196.scope: Deactivated successfully.
Oct  7 09:57:48 np0005473739 podman[266486]: 2025-10-07 13:57:48.381238148 +0000 UTC m=+0.053057186 container create 28d9b2d5ea26bc028fbc98f6805e27c47c57182335566becda09b14ce16a3c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 09:57:48 np0005473739 systemd[1]: Started libpod-conmon-28d9b2d5ea26bc028fbc98f6805e27c47c57182335566becda09b14ce16a3c14.scope.
Oct  7 09:57:48 np0005473739 podman[266486]: 2025-10-07 13:57:48.356526074 +0000 UTC m=+0.028345202 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:57:48 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:57:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7ae50f30e707445762bcb1bf3997d8d41d04ff41a2b560bc6aac96f9a0ee9f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7ae50f30e707445762bcb1bf3997d8d41d04ff41a2b560bc6aac96f9a0ee9f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7ae50f30e707445762bcb1bf3997d8d41d04ff41a2b560bc6aac96f9a0ee9f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7ae50f30e707445762bcb1bf3997d8d41d04ff41a2b560bc6aac96f9a0ee9f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f7ae50f30e707445762bcb1bf3997d8d41d04ff41a2b560bc6aac96f9a0ee9f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:48 np0005473739 podman[266486]: 2025-10-07 13:57:48.487502683 +0000 UTC m=+0.159321741 container init 28d9b2d5ea26bc028fbc98f6805e27c47c57182335566becda09b14ce16a3c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 09:57:48 np0005473739 podman[266486]: 2025-10-07 13:57:48.495216059 +0000 UTC m=+0.167035087 container start 28d9b2d5ea26bc028fbc98f6805e27c47c57182335566becda09b14ce16a3c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 09:57:48 np0005473739 podman[266486]: 2025-10-07 13:57:48.566461704 +0000 UTC m=+0.238280732 container attach 28d9b2d5ea26bc028fbc98f6805e27c47c57182335566becda09b14ce16a3c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 09:57:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v901: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:49 np0005473739 hopeful_cartwright[266503]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:57:49 np0005473739 hopeful_cartwright[266503]: --> relative data size: 1.0
Oct  7 09:57:49 np0005473739 hopeful_cartwright[266503]: --> All data devices are unavailable
Oct  7 09:57:49 np0005473739 systemd[1]: libpod-28d9b2d5ea26bc028fbc98f6805e27c47c57182335566becda09b14ce16a3c14.scope: Deactivated successfully.
Oct  7 09:57:49 np0005473739 systemd[1]: libpod-28d9b2d5ea26bc028fbc98f6805e27c47c57182335566becda09b14ce16a3c14.scope: Consumed 1.072s CPU time.
Oct  7 09:57:49 np0005473739 conmon[266503]: conmon 28d9b2d5ea26bc028fbc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-28d9b2d5ea26bc028fbc98f6805e27c47c57182335566becda09b14ce16a3c14.scope/container/memory.events
Oct  7 09:57:49 np0005473739 podman[266486]: 2025-10-07 13:57:49.607776977 +0000 UTC m=+1.279596015 container died 28d9b2d5ea26bc028fbc98f6805e27c47c57182335566becda09b14ce16a3c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:57:49 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4f7ae50f30e707445762bcb1bf3997d8d41d04ff41a2b560bc6aac96f9a0ee9f-merged.mount: Deactivated successfully.
Oct  7 09:57:50 np0005473739 podman[266486]: 2025-10-07 13:57:50.228448701 +0000 UTC m=+1.900267729 container remove 28d9b2d5ea26bc028fbc98f6805e27c47c57182335566becda09b14ce16a3c14 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_cartwright, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 09:57:50 np0005473739 systemd[1]: libpod-conmon-28d9b2d5ea26bc028fbc98f6805e27c47c57182335566becda09b14ce16a3c14.scope: Deactivated successfully.
Oct  7 09:57:50 np0005473739 podman[266690]: 2025-10-07 13:57:50.96139671 +0000 UTC m=+0.071909692 container create 363f945ea1b730a69f01981790b5c1f4b62a76063026bab005b5a6e6e2572642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:57:51 np0005473739 podman[266690]: 2025-10-07 13:57:50.915426246 +0000 UTC m=+0.025939208 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:57:51 np0005473739 systemd[1]: Started libpod-conmon-363f945ea1b730a69f01981790b5c1f4b62a76063026bab005b5a6e6e2572642.scope.
Oct  7 09:57:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:57:51 np0005473739 podman[266690]: 2025-10-07 13:57:51.077460518 +0000 UTC m=+0.187973470 container init 363f945ea1b730a69f01981790b5c1f4b62a76063026bab005b5a6e6e2572642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:57:51 np0005473739 podman[266690]: 2025-10-07 13:57:51.087098387 +0000 UTC m=+0.197611329 container start 363f945ea1b730a69f01981790b5c1f4b62a76063026bab005b5a6e6e2572642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:57:51 np0005473739 great_knuth[266706]: 167 167
Oct  7 09:57:51 np0005473739 systemd[1]: libpod-363f945ea1b730a69f01981790b5c1f4b62a76063026bab005b5a6e6e2572642.scope: Deactivated successfully.
Oct  7 09:57:51 np0005473739 podman[266690]: 2025-10-07 13:57:51.099380007 +0000 UTC m=+0.209893059 container attach 363f945ea1b730a69f01981790b5c1f4b62a76063026bab005b5a6e6e2572642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 09:57:51 np0005473739 podman[266690]: 2025-10-07 13:57:51.100093357 +0000 UTC m=+0.210606309 container died 363f945ea1b730a69f01981790b5c1f4b62a76063026bab005b5a6e6e2572642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:57:51 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3941ff12654aaf0b480e167e43e2b9623af3fdff59719f77431df558e2d13fb2-merged.mount: Deactivated successfully.
Oct  7 09:57:51 np0005473739 podman[266690]: 2025-10-07 13:57:51.187516505 +0000 UTC m=+0.298029487 container remove 363f945ea1b730a69f01981790b5c1f4b62a76063026bab005b5a6e6e2572642 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_knuth, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 09:57:51 np0005473739 systemd[1]: libpod-conmon-363f945ea1b730a69f01981790b5c1f4b62a76063026bab005b5a6e6e2572642.scope: Deactivated successfully.
Oct  7 09:57:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v902: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:51 np0005473739 podman[266730]: 2025-10-07 13:57:51.397413364 +0000 UTC m=+0.082933369 container create 49edf8a59882c535517593bc211e87cb7858f1dfefe9a4dc2ac882c22dad09d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:57:51 np0005473739 podman[266730]: 2025-10-07 13:57:51.345896829 +0000 UTC m=+0.031416924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:57:51 np0005473739 systemd[1]: Started libpod-conmon-49edf8a59882c535517593bc211e87cb7858f1dfefe9a4dc2ac882c22dad09d4.scope.
Oct  7 09:57:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:57:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab05671b8b7e0166224be9614e7cbd9ea6715a7b0bbc23d28bb775812dd4322f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab05671b8b7e0166224be9614e7cbd9ea6715a7b0bbc23d28bb775812dd4322f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab05671b8b7e0166224be9614e7cbd9ea6715a7b0bbc23d28bb775812dd4322f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ab05671b8b7e0166224be9614e7cbd9ea6715a7b0bbc23d28bb775812dd4322f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:51 np0005473739 podman[266730]: 2025-10-07 13:57:51.526873491 +0000 UTC m=+0.212393506 container init 49edf8a59882c535517593bc211e87cb7858f1dfefe9a4dc2ac882c22dad09d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 09:57:51 np0005473739 podman[266730]: 2025-10-07 13:57:51.534167268 +0000 UTC m=+0.219687263 container start 49edf8a59882c535517593bc211e87cb7858f1dfefe9a4dc2ac882c22dad09d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:57:51 np0005473739 podman[266730]: 2025-10-07 13:57:51.553319111 +0000 UTC m=+0.238839126 container attach 49edf8a59882c535517593bc211e87cb7858f1dfefe9a4dc2ac882c22dad09d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 09:57:52 np0005473739 silly_joliot[266747]: {
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:    "0": [
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:        {
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "devices": [
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "/dev/loop3"
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            ],
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_name": "ceph_lv0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_size": "21470642176",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "name": "ceph_lv0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "tags": {
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.cluster_name": "ceph",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.crush_device_class": "",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.encrypted": "0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.osd_id": "0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.type": "block",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.vdo": "0"
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            },
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "type": "block",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "vg_name": "ceph_vg0"
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:        }
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:    ],
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:    "1": [
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:        {
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "devices": [
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "/dev/loop4"
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            ],
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_name": "ceph_lv1",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_size": "21470642176",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "name": "ceph_lv1",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "tags": {
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.cluster_name": "ceph",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.crush_device_class": "",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.encrypted": "0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.osd_id": "1",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.type": "block",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.vdo": "0"
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            },
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "type": "block",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "vg_name": "ceph_vg1"
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:        }
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:    ],
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:    "2": [
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:        {
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "devices": [
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "/dev/loop5"
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            ],
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_name": "ceph_lv2",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_size": "21470642176",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "name": "ceph_lv2",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "tags": {
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.cluster_name": "ceph",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.crush_device_class": "",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.encrypted": "0",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.osd_id": "2",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.type": "block",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:                "ceph.vdo": "0"
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            },
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "type": "block",
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:            "vg_name": "ceph_vg2"
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:        }
Oct  7 09:57:52 np0005473739 silly_joliot[266747]:    ]
Oct  7 09:57:52 np0005473739 silly_joliot[266747]: }
Oct  7 09:57:52 np0005473739 systemd[1]: libpod-49edf8a59882c535517593bc211e87cb7858f1dfefe9a4dc2ac882c22dad09d4.scope: Deactivated successfully.
Oct  7 09:57:52 np0005473739 podman[266730]: 2025-10-07 13:57:52.304596644 +0000 UTC m=+0.990116639 container died 49edf8a59882c535517593bc211e87cb7858f1dfefe9a4dc2ac882c22dad09d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:57:52 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ab05671b8b7e0166224be9614e7cbd9ea6715a7b0bbc23d28bb775812dd4322f-merged.mount: Deactivated successfully.
Oct  7 09:57:52 np0005473739 podman[266730]: 2025-10-07 13:57:52.521613104 +0000 UTC m=+1.207133099 container remove 49edf8a59882c535517593bc211e87cb7858f1dfefe9a4dc2ac882c22dad09d4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_joliot, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:57:52 np0005473739 systemd[1]: libpod-conmon-49edf8a59882c535517593bc211e87cb7858f1dfefe9a4dc2ac882c22dad09d4.scope: Deactivated successfully.
Oct  7 09:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:57:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v903: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:53 np0005473739 podman[266909]: 2025-10-07 13:57:53.198806365 +0000 UTC m=+0.031484066 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:57:53 np0005473739 podman[266909]: 2025-10-07 13:57:53.30246719 +0000 UTC m=+0.135144831 container create 45fe86f9ab9126cb3bde4a99fe19481fb75f8f360bfed236087390ae13049a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:57:53 np0005473739 systemd[1]: Started libpod-conmon-45fe86f9ab9126cb3bde4a99fe19481fb75f8f360bfed236087390ae13049a5d.scope.
Oct  7 09:57:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:57:53 np0005473739 podman[266909]: 2025-10-07 13:57:53.401692686 +0000 UTC m=+0.234370357 container init 45fe86f9ab9126cb3bde4a99fe19481fb75f8f360bfed236087390ae13049a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 09:57:53 np0005473739 podman[266909]: 2025-10-07 13:57:53.409046004 +0000 UTC m=+0.241723655 container start 45fe86f9ab9126cb3bde4a99fe19481fb75f8f360bfed236087390ae13049a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 09:57:53 np0005473739 blissful_cartwright[266926]: 167 167
Oct  7 09:57:53 np0005473739 systemd[1]: libpod-45fe86f9ab9126cb3bde4a99fe19481fb75f8f360bfed236087390ae13049a5d.scope: Deactivated successfully.
Oct  7 09:57:53 np0005473739 podman[266909]: 2025-10-07 13:57:53.43944019 +0000 UTC m=+0.272117861 container attach 45fe86f9ab9126cb3bde4a99fe19481fb75f8f360bfed236087390ae13049a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:57:53 np0005473739 podman[266909]: 2025-10-07 13:57:53.439947434 +0000 UTC m=+0.272625085 container died 45fe86f9ab9126cb3bde4a99fe19481fb75f8f360bfed236087390ae13049a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 09:57:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c89cee3c197d640f96dd9f68ad07228f1bbecbb958f9dbea5998fc21613de7f9-merged.mount: Deactivated successfully.
Oct  7 09:57:53 np0005473739 podman[266909]: 2025-10-07 13:57:53.560505032 +0000 UTC m=+0.393182683 container remove 45fe86f9ab9126cb3bde4a99fe19481fb75f8f360bfed236087390ae13049a5d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_cartwright, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 09:57:53 np0005473739 systemd[1]: libpod-conmon-45fe86f9ab9126cb3bde4a99fe19481fb75f8f360bfed236087390ae13049a5d.scope: Deactivated successfully.
Oct  7 09:57:53 np0005473739 podman[266950]: 2025-10-07 13:57:53.751650527 +0000 UTC m=+0.049577423 container create 288bbd06a3904c31839dad4d93db4ce8d042848b7234457c5111187948cf96ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 09:57:53 np0005473739 systemd[1]: Started libpod-conmon-288bbd06a3904c31839dad4d93db4ce8d042848b7234457c5111187948cf96ab.scope.
Oct  7 09:57:53 np0005473739 podman[266950]: 2025-10-07 13:57:53.732149543 +0000 UTC m=+0.030076459 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:57:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:57:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841545ac5d72730507e990cc87fe072c01258f21476904c5b2c6e1a5f67db687/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841545ac5d72730507e990cc87fe072c01258f21476904c5b2c6e1a5f67db687/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841545ac5d72730507e990cc87fe072c01258f21476904c5b2c6e1a5f67db687/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/841545ac5d72730507e990cc87fe072c01258f21476904c5b2c6e1a5f67db687/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:57:53 np0005473739 podman[266950]: 2025-10-07 13:57:53.921647854 +0000 UTC m=+0.219574780 container init 288bbd06a3904c31839dad4d93db4ce8d042848b7234457c5111187948cf96ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:57:53 np0005473739 podman[266950]: 2025-10-07 13:57:53.929521835 +0000 UTC m=+0.227448731 container start 288bbd06a3904c31839dad4d93db4ce8d042848b7234457c5111187948cf96ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:57:53 np0005473739 podman[266950]: 2025-10-07 13:57:53.934105199 +0000 UTC m=+0.232032145 container attach 288bbd06a3904c31839dad4d93db4ce8d042848b7234457c5111187948cf96ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 09:57:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]: {
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "osd_id": 2,
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "type": "bluestore"
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:    },
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "osd_id": 1,
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "type": "bluestore"
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:    },
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "osd_id": 0,
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:        "type": "bluestore"
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]:    }
Oct  7 09:57:54 np0005473739 amazing_ganguly[266966]: }
Oct  7 09:57:54 np0005473739 systemd[1]: libpod-288bbd06a3904c31839dad4d93db4ce8d042848b7234457c5111187948cf96ab.scope: Deactivated successfully.
Oct  7 09:57:54 np0005473739 podman[266950]: 2025-10-07 13:57:54.884657593 +0000 UTC m=+1.182584529 container died 288bbd06a3904c31839dad4d93db4ce8d042848b7234457c5111187948cf96ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:57:54 np0005473739 systemd[1]: var-lib-containers-storage-overlay-841545ac5d72730507e990cc87fe072c01258f21476904c5b2c6e1a5f67db687-merged.mount: Deactivated successfully.
Oct  7 09:57:54 np0005473739 podman[266950]: 2025-10-07 13:57:54.949873075 +0000 UTC m=+1.247799971 container remove 288bbd06a3904c31839dad4d93db4ce8d042848b7234457c5111187948cf96ab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=amazing_ganguly, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 09:57:54 np0005473739 systemd[1]: libpod-conmon-288bbd06a3904c31839dad4d93db4ce8d042848b7234457c5111187948cf96ab.scope: Deactivated successfully.
Oct  7 09:57:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:57:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:57:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:57:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:57:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1b1a32f8-b071-4b59-a828-10891694e26b does not exist
Oct  7 09:57:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 57ae3716-cd27-41ea-9a1e-c2b4fbbedd7c does not exist
Oct  7 09:57:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v904: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:55 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:57:55 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:57:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v905: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:57:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:57:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v906: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:58:00.030 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:58:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:58:00.032 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:58:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:58:00.032 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:58:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v907: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:03 np0005473739 podman[267064]: 2025-10-07 13:58:03.095376755 +0000 UTC m=+0.074496533 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:58:03 np0005473739 podman[267063]: 2025-10-07 13:58:03.125134494 +0000 UTC m=+0.106781429 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 09:58:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v908: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v909: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v910: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v911: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v912: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:12 np0005473739 podman[267103]: 2025-10-07 13:58:12.152857762 +0000 UTC m=+0.141154683 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:58:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v913: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:13 np0005473739 nova_compute[259550]: 2025-10-07 13:58:13.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:58:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:14 np0005473739 nova_compute[259550]: 2025-10-07 13:58:14.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:58:14 np0005473739 nova_compute[259550]: 2025-10-07 13:58:14.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:58:14 np0005473739 nova_compute[259550]: 2025-10-07 13:58:14.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:58:14 np0005473739 nova_compute[259550]: 2025-10-07 13:58:14.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:58:14 np0005473739 nova_compute[259550]: 2025-10-07 13:58:14.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 09:58:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v914: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:15 np0005473739 nova_compute[259550]: 2025-10-07 13:58:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:58:15 np0005473739 nova_compute[259550]: 2025-10-07 13:58:15.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.047 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.048 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.048 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.048 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.048 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:58:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:58:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3123402210' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.465 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.416s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.637 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.638 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5153MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.638 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.638 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.772 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.772 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 09:58:16 np0005473739 nova_compute[259550]: 2025-10-07 13:58:16.788 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:58:17 np0005473739 podman[267171]: 2025-10-07 13:58:17.065377917 +0000 UTC m=+0.057918776 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, 
tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:58:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:58:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1402762658' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:58:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v915: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:17 np0005473739 nova_compute[259550]: 2025-10-07 13:58:17.248 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:58:17 np0005473739 nova_compute[259550]: 2025-10-07 13:58:17.254 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 09:58:17 np0005473739 nova_compute[259550]: 2025-10-07 13:58:17.285 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 09:58:17 np0005473739 nova_compute[259550]: 2025-10-07 13:58:17.287 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 09:58:17 np0005473739 nova_compute[259550]: 2025-10-07 13:58:17.287 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #45. Immutable memtables: 0.
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.136380) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 21] Flushing memtable with next log file: 45
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845499136439, "job": 21, "event": "flush_started", "num_memtables": 1, "num_entries": 877, "num_deletes": 257, "total_data_size": 1183349, "memory_usage": 1210296, "flush_reason": "Manual Compaction"}
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 21] Level-0 flush table #46: started
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845499144485, "cf_name": "default", "job": 21, "event": "table_file_creation", "file_number": 46, "file_size": 1172640, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 18670, "largest_seqno": 19546, "table_properties": {"data_size": 1168252, "index_size": 2040, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1285, "raw_key_size": 9147, "raw_average_key_size": 18, "raw_value_size": 1159438, "raw_average_value_size": 2332, "num_data_blocks": 93, "num_entries": 497, "num_filter_entries": 497, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759845419, "oldest_key_time": 1759845419, "file_creation_time": 1759845499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 46, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 21] Flush lasted 8164 microseconds, and 4256 cpu microseconds.
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.144539) [db/flush_job.cc:967] [default] [JOB 21] Level-0 flush table #46: 1172640 bytes OK
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.144568) [db/memtable_list.cc:519] [default] Level-0 commit table #46 started
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.145966) [db/memtable_list.cc:722] [default] Level-0 commit table #46: memtable #1 done
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.145990) EVENT_LOG_v1 {"time_micros": 1759845499145982, "job": 21, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.146017) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 21] Try to delete WAL files size 1179041, prev total WAL file size 1179041, number of live WAL files 2.
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000042.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.146605) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00323531' seq:72057594037927935, type:22 .. '6C6F676D00353034' seq:0, type:0; will stop at (end)
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 22] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 21 Base level 0, inputs: [46(1145KB)], [44(6008KB)]
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845499146641, "job": 22, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [46], "files_L6": [44], "score": -1, "input_data_size": 7325402, "oldest_snapshot_seqno": -1}
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 22] Generated table #47: 4152 keys, 7200850 bytes, temperature: kUnknown
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845499190189, "cf_name": "default", "job": 22, "event": "table_file_creation", "file_number": 47, "file_size": 7200850, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7172367, "index_size": 16981, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10437, "raw_key_size": 102779, "raw_average_key_size": 24, "raw_value_size": 7096465, "raw_average_value_size": 1709, "num_data_blocks": 714, "num_entries": 4152, "num_filter_entries": 4152, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759845499, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 47, "seqno_to_time_mapping": "N/A"}}
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.190480) [db/compaction/compaction_job.cc:1663] [default] [JOB 22] Compacted 1@0 + 1@6 files to L6 => 7200850 bytes
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.193497) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 167.9 rd, 165.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 5.9 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(12.4) write-amplify(6.1) OK, records in: 4678, records dropped: 526 output_compression: NoCompression
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.193531) EVENT_LOG_v1 {"time_micros": 1759845499193513, "job": 22, "event": "compaction_finished", "compaction_time_micros": 43642, "compaction_time_cpu_micros": 16233, "output_level": 6, "num_output_files": 1, "total_output_size": 7200850, "num_input_records": 4678, "num_output_records": 4152, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000046.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845499193881, "job": 22, "event": "table_file_deletion", "file_number": 46}
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000044.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845499195197, "job": 22, "event": "table_file_deletion", "file_number": 44}
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.146523) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.195352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.195359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.195361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.195362) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:58:19 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-13:58:19.195363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 09:58:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v916: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:20 np0005473739 nova_compute[259550]: 2025-10-07 13:58:20.288 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:58:20 np0005473739 nova_compute[259550]: 2025-10-07 13:58:20.288 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 09:58:20 np0005473739 nova_compute[259550]: 2025-10-07 13:58:20.288 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 09:58:20 np0005473739 nova_compute[259550]: 2025-10-07 13:58:20.302 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 09:58:20 np0005473739 nova_compute[259550]: 2025-10-07 13:58:20.302 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:58:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v917: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:58:22
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'vms', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', '.rgw.root']
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:58:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:58:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v918: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v919: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v920: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v921: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v922: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:58:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 09:58:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/274705654' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 09:58:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 09:58:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/274705654' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 09:58:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v923: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:34 np0005473739 podman[267193]: 2025-10-07 13:58:34.079876631 +0000 UTC m=+0.062820829 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:58:34 np0005473739 podman[267192]: 2025-10-07 13:58:34.115491078 +0000 UTC m=+0.092083025 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 09:58:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v924: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v925: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v926: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v927: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:43 np0005473739 podman[267233]: 2025-10-07 13:58:43.09796392 +0000 UTC m=+0.084709106 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  7 09:58:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v928: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v929: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v930: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:48 np0005473739 podman[267260]: 2025-10-07 13:58:48.100847015 +0000 UTC m=+0.086655659 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  7 09:58:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v931: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v932: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:58:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v933: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v934: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:58:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 011742b5-ea0c-44f6-bca6-7664d279f83b does not exist
Oct  7 09:58:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bec715fc-2948-40f7-8a29-5ffb5b124852 does not exist
Oct  7 09:58:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev fee27d56-cf20-4ccb-a9ae-8dd8be5bb999 does not exist
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 09:58:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 09:58:56 np0005473739 podman[267549]: 2025-10-07 13:58:56.558952292 +0000 UTC m=+0.026879753 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:58:56 np0005473739 podman[267549]: 2025-10-07 13:58:56.742956485 +0000 UTC m=+0.210883926 container create 0c889ac5939eee57c8011defbc07391f3c18ae98bffda8d96db4d3af3ce98c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_easley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:58:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 09:58:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:58:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 09:58:56 np0005473739 systemd[1]: Started libpod-conmon-0c889ac5939eee57c8011defbc07391f3c18ae98bffda8d96db4d3af3ce98c2a.scope.
Oct  7 09:58:56 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:58:56 np0005473739 podman[267549]: 2025-10-07 13:58:56.851341177 +0000 UTC m=+0.319268638 container init 0c889ac5939eee57c8011defbc07391f3c18ae98bffda8d96db4d3af3ce98c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_easley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:58:56 np0005473739 podman[267549]: 2025-10-07 13:58:56.859539998 +0000 UTC m=+0.327467439 container start 0c889ac5939eee57c8011defbc07391f3c18ae98bffda8d96db4d3af3ce98c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_easley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 09:58:56 np0005473739 gifted_easley[267565]: 167 167
Oct  7 09:58:56 np0005473739 systemd[1]: libpod-0c889ac5939eee57c8011defbc07391f3c18ae98bffda8d96db4d3af3ce98c2a.scope: Deactivated successfully.
Oct  7 09:58:56 np0005473739 podman[267549]: 2025-10-07 13:58:56.867976924 +0000 UTC m=+0.335904385 container attach 0c889ac5939eee57c8011defbc07391f3c18ae98bffda8d96db4d3af3ce98c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 09:58:56 np0005473739 podman[267549]: 2025-10-07 13:58:56.869147486 +0000 UTC m=+0.337074937 container died 0c889ac5939eee57c8011defbc07391f3c18ae98bffda8d96db4d3af3ce98c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_easley, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:58:56 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ee6592605d3c0d20827216f9fcdd6828f8e6f5d6c6f280edfb72d412b8da123d-merged.mount: Deactivated successfully.
Oct  7 09:58:56 np0005473739 podman[267549]: 2025-10-07 13:58:56.917971896 +0000 UTC m=+0.385899337 container remove 0c889ac5939eee57c8011defbc07391f3c18ae98bffda8d96db4d3af3ce98c2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_easley, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:58:56 np0005473739 systemd[1]: libpod-conmon-0c889ac5939eee57c8011defbc07391f3c18ae98bffda8d96db4d3af3ce98c2a.scope: Deactivated successfully.
Oct  7 09:58:57 np0005473739 podman[267590]: 2025-10-07 13:58:57.086737201 +0000 UTC m=+0.046986044 container create 4298e702bbfb4a02df3a692a08f12429963631a49711ebd0582fcf9ad2522ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_curran, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:58:57 np0005473739 systemd[1]: Started libpod-conmon-4298e702bbfb4a02df3a692a08f12429963631a49711ebd0582fcf9ad2522ecc.scope.
Oct  7 09:58:57 np0005473739 podman[267590]: 2025-10-07 13:58:57.065653754 +0000 UTC m=+0.025902627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:58:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:58:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/318d9d1440767dbf6e71142d6062d8212710218100ce660f0b42c8a5846cfccd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:58:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/318d9d1440767dbf6e71142d6062d8212710218100ce660f0b42c8a5846cfccd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:58:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/318d9d1440767dbf6e71142d6062d8212710218100ce660f0b42c8a5846cfccd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:58:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/318d9d1440767dbf6e71142d6062d8212710218100ce660f0b42c8a5846cfccd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:58:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/318d9d1440767dbf6e71142d6062d8212710218100ce660f0b42c8a5846cfccd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 09:58:57 np0005473739 podman[267590]: 2025-10-07 13:58:57.191480025 +0000 UTC m=+0.151728888 container init 4298e702bbfb4a02df3a692a08f12429963631a49711ebd0582fcf9ad2522ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_curran, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 09:58:57 np0005473739 podman[267590]: 2025-10-07 13:58:57.199294614 +0000 UTC m=+0.159543457 container start 4298e702bbfb4a02df3a692a08f12429963631a49711ebd0582fcf9ad2522ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_curran, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:58:57 np0005473739 podman[267590]: 2025-10-07 13:58:57.203908269 +0000 UTC m=+0.164157122 container attach 4298e702bbfb4a02df3a692a08f12429963631a49711ebd0582fcf9ad2522ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_curran, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 09:58:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v935: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:58 np0005473739 happy_curran[267607]: --> passed data devices: 0 physical, 3 LVM
Oct  7 09:58:58 np0005473739 happy_curran[267607]: --> relative data size: 1.0
Oct  7 09:58:58 np0005473739 happy_curran[267607]: --> All data devices are unavailable
Oct  7 09:58:58 np0005473739 systemd[1]: libpod-4298e702bbfb4a02df3a692a08f12429963631a49711ebd0582fcf9ad2522ecc.scope: Deactivated successfully.
Oct  7 09:58:58 np0005473739 podman[267590]: 2025-10-07 13:58:58.388949903 +0000 UTC m=+1.349198756 container died 4298e702bbfb4a02df3a692a08f12429963631a49711ebd0582fcf9ad2522ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_curran, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 09:58:58 np0005473739 systemd[1]: libpod-4298e702bbfb4a02df3a692a08f12429963631a49711ebd0582fcf9ad2522ecc.scope: Consumed 1.147s CPU time.
Oct  7 09:58:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay-318d9d1440767dbf6e71142d6062d8212710218100ce660f0b42c8a5846cfccd-merged.mount: Deactivated successfully.
Oct  7 09:58:58 np0005473739 podman[267590]: 2025-10-07 13:58:58.455053829 +0000 UTC m=+1.415302672 container remove 4298e702bbfb4a02df3a692a08f12429963631a49711ebd0582fcf9ad2522ecc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_curran, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:58:58 np0005473739 systemd[1]: libpod-conmon-4298e702bbfb4a02df3a692a08f12429963631a49711ebd0582fcf9ad2522ecc.scope: Deactivated successfully.
Oct  7 09:58:59 np0005473739 podman[267789]: 2025-10-07 13:58:59.089166353 +0000 UTC m=+0.049040918 container create 345b16684181b5c53765400a7432ac390753140eba1c8348d8b5ead5b9d20421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:58:59 np0005473739 systemd[1]: Started libpod-conmon-345b16684181b5c53765400a7432ac390753140eba1c8348d8b5ead5b9d20421.scope.
Oct  7 09:58:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:58:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:58:59 np0005473739 podman[267789]: 2025-10-07 13:58:59.064945603 +0000 UTC m=+0.024820198 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:58:59 np0005473739 podman[267789]: 2025-10-07 13:58:59.174365862 +0000 UTC m=+0.134240427 container init 345b16684181b5c53765400a7432ac390753140eba1c8348d8b5ead5b9d20421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 09:58:59 np0005473739 podman[267789]: 2025-10-07 13:58:59.181326649 +0000 UTC m=+0.141201214 container start 345b16684181b5c53765400a7432ac390753140eba1c8348d8b5ead5b9d20421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_williams, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:58:59 np0005473739 vigilant_williams[267805]: 167 167
Oct  7 09:58:59 np0005473739 systemd[1]: libpod-345b16684181b5c53765400a7432ac390753140eba1c8348d8b5ead5b9d20421.scope: Deactivated successfully.
Oct  7 09:58:59 np0005473739 podman[267789]: 2025-10-07 13:58:59.188332878 +0000 UTC m=+0.148207473 container attach 345b16684181b5c53765400a7432ac390753140eba1c8348d8b5ead5b9d20421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_williams, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 09:58:59 np0005473739 podman[267789]: 2025-10-07 13:58:59.188847961 +0000 UTC m=+0.148722536 container died 345b16684181b5c53765400a7432ac390753140eba1c8348d8b5ead5b9d20421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_williams, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 09:58:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3ceef1a62773d68e5e9e7aa5d23119ba5067dd69c9c3e0e87a9b90dc94349c33-merged.mount: Deactivated successfully.
Oct  7 09:58:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v936: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:58:59 np0005473739 podman[267789]: 2025-10-07 13:58:59.258860862 +0000 UTC m=+0.218735427 container remove 345b16684181b5c53765400a7432ac390753140eba1c8348d8b5ead5b9d20421 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_williams, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 09:58:59 np0005473739 systemd[1]: libpod-conmon-345b16684181b5c53765400a7432ac390753140eba1c8348d8b5ead5b9d20421.scope: Deactivated successfully.
Oct  7 09:58:59 np0005473739 podman[267832]: 2025-10-07 13:58:59.438906919 +0000 UTC m=+0.048594627 container create 1447c65b0c44c176ad49dac0dfbc10295d317f651414a3900ac774e9709c1c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 09:58:59 np0005473739 systemd[1]: Started libpod-conmon-1447c65b0c44c176ad49dac0dfbc10295d317f651414a3900ac774e9709c1c4e.scope.
Oct  7 09:58:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:58:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4d88b795abdd05ad2da587a60ee1c4542b68b6f6353bc21e5271236561f2fe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:58:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4d88b795abdd05ad2da587a60ee1c4542b68b6f6353bc21e5271236561f2fe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:58:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4d88b795abdd05ad2da587a60ee1c4542b68b6f6353bc21e5271236561f2fe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:58:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5a4d88b795abdd05ad2da587a60ee1c4542b68b6f6353bc21e5271236561f2fe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:58:59 np0005473739 podman[267832]: 2025-10-07 13:58:59.414853683 +0000 UTC m=+0.024541401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:58:59 np0005473739 podman[267832]: 2025-10-07 13:58:59.521461637 +0000 UTC m=+0.131149365 container init 1447c65b0c44c176ad49dac0dfbc10295d317f651414a3900ac774e9709c1c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:58:59 np0005473739 podman[267832]: 2025-10-07 13:58:59.528494786 +0000 UTC m=+0.138182494 container start 1447c65b0c44c176ad49dac0dfbc10295d317f651414a3900ac774e9709c1c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:58:59 np0005473739 podman[267832]: 2025-10-07 13:58:59.532964636 +0000 UTC m=+0.142652334 container attach 1447c65b0c44c176ad49dac0dfbc10295d317f651414a3900ac774e9709c1c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 09:59:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:59:00.032 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:59:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:59:00.033 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:59:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 13:59:00.033 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]: {
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:    "0": [
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:        {
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "devices": [
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "/dev/loop3"
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            ],
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_name": "ceph_lv0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_size": "21470642176",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "name": "ceph_lv0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "tags": {
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.cluster_name": "ceph",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.crush_device_class": "",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.encrypted": "0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.osd_id": "0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.type": "block",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.vdo": "0"
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            },
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "type": "block",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "vg_name": "ceph_vg0"
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:        }
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:    ],
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:    "1": [
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:        {
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "devices": [
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "/dev/loop4"
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            ],
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_name": "ceph_lv1",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_size": "21470642176",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "name": "ceph_lv1",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "tags": {
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.cluster_name": "ceph",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.crush_device_class": "",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.encrypted": "0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.osd_id": "1",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.type": "block",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.vdo": "0"
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            },
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "type": "block",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "vg_name": "ceph_vg1"
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:        }
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:    ],
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:    "2": [
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:        {
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "devices": [
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "/dev/loop5"
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            ],
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_name": "ceph_lv2",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_size": "21470642176",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "name": "ceph_lv2",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "tags": {
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.cephx_lockbox_secret": "",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.cluster_name": "ceph",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.crush_device_class": "",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.encrypted": "0",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.osd_id": "2",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.type": "block",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:                "ceph.vdo": "0"
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            },
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "type": "block",
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:            "vg_name": "ceph_vg2"
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:        }
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]:    ]
Oct  7 09:59:00 np0005473739 admiring_mahavira[267848]: }
Oct  7 09:59:00 np0005473739 systemd[1]: libpod-1447c65b0c44c176ad49dac0dfbc10295d317f651414a3900ac774e9709c1c4e.scope: Deactivated successfully.
Oct  7 09:59:00 np0005473739 podman[267832]: 2025-10-07 13:59:00.341824395 +0000 UTC m=+0.951512103 container died 1447c65b0c44c176ad49dac0dfbc10295d317f651414a3900ac774e9709c1c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:59:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5a4d88b795abdd05ad2da587a60ee1c4542b68b6f6353bc21e5271236561f2fe-merged.mount: Deactivated successfully.
Oct  7 09:59:00 np0005473739 podman[267832]: 2025-10-07 13:59:00.399780551 +0000 UTC m=+1.009468259 container remove 1447c65b0c44c176ad49dac0dfbc10295d317f651414a3900ac774e9709c1c4e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_mahavira, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:59:00 np0005473739 systemd[1]: libpod-conmon-1447c65b0c44c176ad49dac0dfbc10295d317f651414a3900ac774e9709c1c4e.scope: Deactivated successfully.
Oct  7 09:59:01 np0005473739 podman[268009]: 2025-10-07 13:59:01.117359839 +0000 UTC m=+0.054285160 container create ed95903583ff5b8ef961de397adc21a438cbf9973561147cc83f7ded90a7a167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pike, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:59:01 np0005473739 systemd[1]: Started libpod-conmon-ed95903583ff5b8ef961de397adc21a438cbf9973561147cc83f7ded90a7a167.scope.
Oct  7 09:59:01 np0005473739 podman[268009]: 2025-10-07 13:59:01.092798318 +0000 UTC m=+0.029723689 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:59:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:59:01 np0005473739 podman[268009]: 2025-10-07 13:59:01.204264263 +0000 UTC m=+0.141189594 container init ed95903583ff5b8ef961de397adc21a438cbf9973561147cc83f7ded90a7a167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pike, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:59:01 np0005473739 podman[268009]: 2025-10-07 13:59:01.211005674 +0000 UTC m=+0.147930995 container start ed95903583ff5b8ef961de397adc21a438cbf9973561147cc83f7ded90a7a167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pike, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 09:59:01 np0005473739 podman[268009]: 2025-10-07 13:59:01.214830706 +0000 UTC m=+0.151756027 container attach ed95903583ff5b8ef961de397adc21a438cbf9973561147cc83f7ded90a7a167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pike, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 09:59:01 np0005473739 festive_pike[268025]: 167 167
Oct  7 09:59:01 np0005473739 systemd[1]: libpod-ed95903583ff5b8ef961de397adc21a438cbf9973561147cc83f7ded90a7a167.scope: Deactivated successfully.
Oct  7 09:59:01 np0005473739 podman[268009]: 2025-10-07 13:59:01.219014129 +0000 UTC m=+0.155939490 container died ed95903583ff5b8ef961de397adc21a438cbf9973561147cc83f7ded90a7a167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pike, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 09:59:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7676548bbd4188302f6896026e6e2f5a6f1509d7144b50d4672e91e9a64ea40c-merged.mount: Deactivated successfully.
Oct  7 09:59:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v937: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:01 np0005473739 podman[268009]: 2025-10-07 13:59:01.291320372 +0000 UTC m=+0.228245723 container remove ed95903583ff5b8ef961de397adc21a438cbf9973561147cc83f7ded90a7a167 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_pike, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 09:59:01 np0005473739 systemd[1]: libpod-conmon-ed95903583ff5b8ef961de397adc21a438cbf9973561147cc83f7ded90a7a167.scope: Deactivated successfully.
Oct  7 09:59:01 np0005473739 podman[268047]: 2025-10-07 13:59:01.502534416 +0000 UTC m=+0.066717054 container create 0702184797a832fd9cee1f4e733765dd651aca50cae9e7782dc499fc495c4148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 09:59:01 np0005473739 systemd[1]: Started libpod-conmon-0702184797a832fd9cee1f4e733765dd651aca50cae9e7782dc499fc495c4148.scope.
Oct  7 09:59:01 np0005473739 podman[268047]: 2025-10-07 13:59:01.473605778 +0000 UTC m=+0.037788466 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 09:59:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 09:59:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573891a7c22e969fd82732f531283fa9a333ec92e0e79f15f622495ffcae025d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 09:59:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573891a7c22e969fd82732f531283fa9a333ec92e0e79f15f622495ffcae025d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 09:59:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573891a7c22e969fd82732f531283fa9a333ec92e0e79f15f622495ffcae025d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 09:59:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/573891a7c22e969fd82732f531283fa9a333ec92e0e79f15f622495ffcae025d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 09:59:01 np0005473739 podman[268047]: 2025-10-07 13:59:01.605146772 +0000 UTC m=+0.169329430 container init 0702184797a832fd9cee1f4e733765dd651aca50cae9e7782dc499fc495c4148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 09:59:01 np0005473739 podman[268047]: 2025-10-07 13:59:01.61215183 +0000 UTC m=+0.176334468 container start 0702184797a832fd9cee1f4e733765dd651aca50cae9e7782dc499fc495c4148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:59:01 np0005473739 podman[268047]: 2025-10-07 13:59:01.615772468 +0000 UTC m=+0.179955156 container attach 0702184797a832fd9cee1f4e733765dd651aca50cae9e7782dc499fc495c4148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]: {
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "osd_id": 2,
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "type": "bluestore"
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:    },
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "osd_id": 1,
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "type": "bluestore"
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:    },
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "osd_id": 0,
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:        "type": "bluestore"
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]:    }
Oct  7 09:59:02 np0005473739 intelligent_mclaren[268064]: }
Oct  7 09:59:02 np0005473739 systemd[1]: libpod-0702184797a832fd9cee1f4e733765dd651aca50cae9e7782dc499fc495c4148.scope: Deactivated successfully.
Oct  7 09:59:02 np0005473739 systemd[1]: libpod-0702184797a832fd9cee1f4e733765dd651aca50cae9e7782dc499fc495c4148.scope: Consumed 1.054s CPU time.
Oct  7 09:59:02 np0005473739 podman[268047]: 2025-10-07 13:59:02.66485532 +0000 UTC m=+1.229037988 container died 0702184797a832fd9cee1f4e733765dd651aca50cae9e7782dc499fc495c4148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:59:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-573891a7c22e969fd82732f531283fa9a333ec92e0e79f15f622495ffcae025d-merged.mount: Deactivated successfully.
Oct  7 09:59:03 np0005473739 podman[268047]: 2025-10-07 13:59:03.207318792 +0000 UTC m=+1.771501440 container remove 0702184797a832fd9cee1f4e733765dd651aca50cae9e7782dc499fc495c4148 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mclaren, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 09:59:03 np0005473739 systemd[1]: libpod-conmon-0702184797a832fd9cee1f4e733765dd651aca50cae9e7782dc499fc495c4148.scope: Deactivated successfully.
Oct  7 09:59:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 09:59:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v938: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:59:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 09:59:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:59:03 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ec02a1b1-e44b-4433-b892-11db7c139fce does not exist
Oct  7 09:59:03 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ef9da8cf-a82a-43ec-972d-ed31f19e6b28 does not exist
Oct  7 09:59:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:04 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:59:04 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 09:59:05 np0005473739 podman[268163]: 2025-10-07 13:59:05.104077817 +0000 UTC m=+0.083723381 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  7 09:59:05 np0005473739 podman[268162]: 2025-10-07 13:59:05.109273037 +0000 UTC m=+0.085782176 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:59:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v939: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v940: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v941: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v942: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v943: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:13 np0005473739 nova_compute[259550]: 2025-10-07 13:59:13.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:59:14 np0005473739 podman[268201]: 2025-10-07 13:59:14.125890828 +0000 UTC m=+0.113579874 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Oct  7 09:59:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:14 np0005473739 nova_compute[259550]: 2025-10-07 13:59:14.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:59:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v944: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:15 np0005473739 nova_compute[259550]: 2025-10-07 13:59:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:59:15 np0005473739 nova_compute[259550]: 2025-10-07 13:59:15.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:59:15 np0005473739 nova_compute[259550]: 2025-10-07 13:59:15.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:59:15 np0005473739 nova_compute[259550]: 2025-10-07 13:59:15.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 09:59:16 np0005473739 nova_compute[259550]: 2025-10-07 13:59:16.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:59:16 np0005473739 nova_compute[259550]: 2025-10-07 13:59:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.009 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.009 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.010 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.010 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:59:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v945: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:59:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1962321582' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.423 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.412s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.598 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.599 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5146MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.600 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.600 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.655 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.656 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 09:59:17 np0005473739 nova_compute[259550]: 2025-10-07 13:59:17.671 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 09:59:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 09:59:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1061488621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 09:59:18 np0005473739 nova_compute[259550]: 2025-10-07 13:59:18.115 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 09:59:18 np0005473739 nova_compute[259550]: 2025-10-07 13:59:18.121 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 09:59:18 np0005473739 nova_compute[259550]: 2025-10-07 13:59:18.137 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 09:59:18 np0005473739 nova_compute[259550]: 2025-10-07 13:59:18.139 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 09:59:18 np0005473739 nova_compute[259550]: 2025-10-07 13:59:18.139 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.539s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 09:59:19 np0005473739 podman[268271]: 2025-10-07 13:59:19.099141568 +0000 UTC m=+0.076932117 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  7 09:59:19 np0005473739 nova_compute[259550]: 2025-10-07 13:59:19.140 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:59:19 np0005473739 nova_compute[259550]: 2025-10-07 13:59:19.140 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:59:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v946: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:19 np0005473739 nova_compute[259550]: 2025-10-07 13:59:19.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 09:59:19 np0005473739 nova_compute[259550]: 2025-10-07 13:59:19.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 09:59:19 np0005473739 nova_compute[259550]: 2025-10-07 13:59:19.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 09:59:20 np0005473739 nova_compute[259550]: 2025-10-07 13:59:20.001 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 09:59:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v947: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_13:59:22
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.meta', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control', 'backups', '.rgw.root', 'images', 'default.rgw.log', '.mgr', 'vms']
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 09:59:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 09:59:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v948: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v949: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v950: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v951: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v952: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 09:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 09:59:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 09:59:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4280087547' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 09:59:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 09:59:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4280087547' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 09:59:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v953: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v954: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:36 np0005473739 podman[268293]: 2025-10-07 13:59:36.09312578 +0000 UTC m=+0.074897223 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 09:59:36 np0005473739 podman[268292]: 2025-10-07 13:59:36.107127026 +0000 UTC m=+0.083265268 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 09:59:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v955: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v956: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v957: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v958: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:45 np0005473739 podman[268328]: 2025-10-07 13:59:45.146360466 +0000 UTC m=+0.123713005 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 09:59:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v959: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v960: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e117 do_prune osdmap full prune enabled
Oct  7 09:59:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e118 e118: 3 total, 3 up, 3 in
Oct  7 09:59:49 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e118: 3 total, 3 up, 3 in
Oct  7 09:59:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e118 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v962: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:50 np0005473739 podman[268354]: 2025-10-07 13:59:50.136008245 +0000 UTC m=+0.115131133 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  7 09:59:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e118 do_prune osdmap full prune enabled
Oct  7 09:59:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e119 e119: 3 total, 3 up, 3 in
Oct  7 09:59:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v963: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:51 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e119: 3 total, 3 up, 3 in
Oct  7 09:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 09:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 09:59:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v965: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 09:59:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v966: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.2 KiB/s rd, 639 B/s wr, 4 op/s
Oct  7 09:59:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v967: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 3.5 KiB/s rd, 870 B/s wr, 5 op/s
Oct  7 09:59:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e119 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 09:59:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e119 do_prune osdmap full prune enabled
Oct  7 09:59:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v968: 305 pgs: 305 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 6.1 KiB/s rd, 1.6 MiB/s wr, 9 op/s
Oct  7 09:59:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e120 e120: 3 total, 3 up, 3 in
Oct  7 09:59:59 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e120: 3 total, 3 up, 3 in
Oct  7 10:00:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:00:00.033 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:00:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:00:00.033 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:00:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:00:00.034 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:00:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e120 do_prune osdmap full prune enabled
Oct  7 10:00:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e121 e121: 3 total, 3 up, 3 in
Oct  7 10:00:01 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e121: 3 total, 3 up, 3 in
Oct  7 10:00:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v971: 305 pgs: 305 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Oct  7 10:00:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v972: 305 pgs: 305 active+clean; 16 MiB data, 164 MiB used, 60 GiB / 60 GiB avail; 6.7 KiB/s rd, 2.0 MiB/s wr, 10 op/s
Oct  7 10:00:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:00:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:00:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:00:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:00:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:00:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:00:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:00:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 66811d3a-fb6f-43a6-bc54-645d0b3aa082 does not exist
Oct  7 10:00:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a29c0d21-d228-4842-afa5-ae31939cdac4 does not exist
Oct  7 10:00:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a46da250-8d62-4575-acf1-01ee895bb994 does not exist
Oct  7 10:00:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:00:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:00:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:00:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:00:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:00:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:00:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v973: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 2.6 MiB/s wr, 16 op/s
Oct  7 10:00:05 np0005473739 podman[268647]: 2025-10-07 14:00:05.790112542 +0000 UTC m=+0.026141093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:00:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:00:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:00:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:00:06 np0005473739 podman[268647]: 2025-10-07 14:00:06.579297002 +0000 UTC m=+0.815325523 container create c77f8f40236afdcecae0871c2d82c4c2112688695259b914cec9fee621e9d2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_golick, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:00:07 np0005473739 systemd[1]: Started libpod-conmon-c77f8f40236afdcecae0871c2d82c4c2112688695259b914cec9fee621e9d2a6.scope.
Oct  7 10:00:07 np0005473739 podman[268661]: 2025-10-07 14:00:07.238314786 +0000 UTC m=+0.619263317 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 10:00:07 np0005473739 podman[268662]: 2025-10-07 14:00:07.24031948 +0000 UTC m=+0.621150647 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:00:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:00:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v974: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 8.2 KiB/s rd, 569 KiB/s wr, 11 op/s
Oct  7 10:00:07 np0005473739 podman[268647]: 2025-10-07 14:00:07.650733015 +0000 UTC m=+1.886761566 container init c77f8f40236afdcecae0871c2d82c4c2112688695259b914cec9fee621e9d2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_golick, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 10:00:07 np0005473739 podman[268647]: 2025-10-07 14:00:07.661305298 +0000 UTC m=+1.897333829 container start c77f8f40236afdcecae0871c2d82c4c2112688695259b914cec9fee621e9d2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_golick, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:00:07 np0005473739 adoring_golick[268692]: 167 167
Oct  7 10:00:07 np0005473739 systemd[1]: libpod-c77f8f40236afdcecae0871c2d82c4c2112688695259b914cec9fee621e9d2a6.scope: Deactivated successfully.
Oct  7 10:00:08 np0005473739 podman[268647]: 2025-10-07 14:00:08.185342625 +0000 UTC m=+2.421371186 container attach c77f8f40236afdcecae0871c2d82c4c2112688695259b914cec9fee621e9d2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_golick, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:00:08 np0005473739 podman[268647]: 2025-10-07 14:00:08.186883777 +0000 UTC m=+2.422912338 container died c77f8f40236afdcecae0871c2d82c4c2112688695259b914cec9fee621e9d2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_golick, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:00:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v975: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 477 KiB/s wr, 14 op/s
Oct  7 10:00:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:00:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cc68ee41f704df40e8ebf32cafb247a7c5b996550d874753ce99a0a911161258-merged.mount: Deactivated successfully.
Oct  7 10:00:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v976: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 7.7 KiB/s rd, 448 KiB/s wr, 10 op/s
Oct  7 10:00:11 np0005473739 podman[268647]: 2025-10-07 14:00:11.724710025 +0000 UTC m=+5.960738556 container remove c77f8f40236afdcecae0871c2d82c4c2112688695259b914cec9fee621e9d2a6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_golick, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:00:11 np0005473739 systemd[1]: libpod-conmon-c77f8f40236afdcecae0871c2d82c4c2112688695259b914cec9fee621e9d2a6.scope: Deactivated successfully.
Oct  7 10:00:11 np0005473739 podman[268723]: 2025-10-07 14:00:11.88791642 +0000 UTC m=+0.028627340 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:00:12 np0005473739 podman[268723]: 2025-10-07 14:00:12.252044223 +0000 UTC m=+0.392755143 container create 8182016f6160e3f46149d6b87f12ae966e17956bbd075bcdb9846d93a444d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_villani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 10:00:12 np0005473739 systemd[1]: Started libpod-conmon-8182016f6160e3f46149d6b87f12ae966e17956bbd075bcdb9846d93a444d98e.scope.
Oct  7 10:00:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:00:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ea466bc65f44ec57901bab0f6c083b51a937f99ceaaae86b8b064776fb9d0e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ea466bc65f44ec57901bab0f6c083b51a937f99ceaaae86b8b064776fb9d0e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ea466bc65f44ec57901bab0f6c083b51a937f99ceaaae86b8b064776fb9d0e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ea466bc65f44ec57901bab0f6c083b51a937f99ceaaae86b8b064776fb9d0e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/30ea466bc65f44ec57901bab0f6c083b51a937f99ceaaae86b8b064776fb9d0e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:13 np0005473739 podman[268723]: 2025-10-07 14:00:13.277900563 +0000 UTC m=+1.418611443 container init 8182016f6160e3f46149d6b87f12ae966e17956bbd075bcdb9846d93a444d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_villani, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:00:13 np0005473739 podman[268723]: 2025-10-07 14:00:13.285667561 +0000 UTC m=+1.426378451 container start 8182016f6160e3f46149d6b87f12ae966e17956bbd075bcdb9846d93a444d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_villani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 10:00:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v977: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 6.5 KiB/s rd, 379 KiB/s wr, 9 op/s
Oct  7 10:00:13 np0005473739 podman[268723]: 2025-10-07 14:00:13.70925165 +0000 UTC m=+1.849962570 container attach 8182016f6160e3f46149d6b87f12ae966e17956bbd075bcdb9846d93a444d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_villani, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:00:14 np0005473739 angry_villani[268739]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:00:14 np0005473739 angry_villani[268739]: --> relative data size: 1.0
Oct  7 10:00:14 np0005473739 angry_villani[268739]: --> All data devices are unavailable
Oct  7 10:00:14 np0005473739 systemd[1]: libpod-8182016f6160e3f46149d6b87f12ae966e17956bbd075bcdb9846d93a444d98e.scope: Deactivated successfully.
Oct  7 10:00:14 np0005473739 podman[268723]: 2025-10-07 14:00:14.460852861 +0000 UTC m=+2.601563741 container died 8182016f6160e3f46149d6b87f12ae966e17956bbd075bcdb9846d93a444d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_villani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:00:14 np0005473739 systemd[1]: libpod-8182016f6160e3f46149d6b87f12ae966e17956bbd075bcdb9846d93a444d98e.scope: Consumed 1.076s CPU time.
Oct  7 10:00:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v978: 305 pgs: 305 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.0 MiB/s wr, 14 op/s
Oct  7 10:00:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e121 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:00:15 np0005473739 nova_compute[259550]: 2025-10-07 14:00:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:00:15 np0005473739 nova_compute[259550]: 2025-10-07 14:00:15.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:00:15 np0005473739 nova_compute[259550]: 2025-10-07 14:00:15.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:00:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-30ea466bc65f44ec57901bab0f6c083b51a937f99ceaaae86b8b064776fb9d0e-merged.mount: Deactivated successfully.
Oct  7 10:00:16 np0005473739 nova_compute[259550]: 2025-10-07 14:00:16.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:00:16 np0005473739 nova_compute[259550]: 2025-10-07 14:00:16.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:00:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v979: 305 pgs: 305 active+clean; 29 MiB data, 177 MiB used, 60 GiB / 60 GiB avail; 6.4 KiB/s rd, 683 KiB/s wr, 9 op/s
Oct  7 10:00:17 np0005473739 nova_compute[259550]: 2025-10-07 14:00:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:00:18 np0005473739 nova_compute[259550]: 2025-10-07 14:00:18.123 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:00:18 np0005473739 nova_compute[259550]: 2025-10-07 14:00:18.124 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:00:18 np0005473739 nova_compute[259550]: 2025-10-07 14:00:18.124 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:00:18 np0005473739 nova_compute[259550]: 2025-10-07 14:00:18.125 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:00:18 np0005473739 nova_compute[259550]: 2025-10-07 14:00:18.125 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:00:18 np0005473739 podman[268723]: 2025-10-07 14:00:18.183122445 +0000 UTC m=+6.323833325 container remove 8182016f6160e3f46149d6b87f12ae966e17956bbd075bcdb9846d93a444d98e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_villani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct  7 10:00:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e121 do_prune osdmap full prune enabled
Oct  7 10:00:18 np0005473739 systemd[1]: libpod-conmon-8182016f6160e3f46149d6b87f12ae966e17956bbd075bcdb9846d93a444d98e.scope: Deactivated successfully.
Oct  7 10:00:18 np0005473739 podman[268780]: 2025-10-07 14:00:18.333769222 +0000 UTC m=+2.355013405 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:00:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e122 e122: 3 total, 3 up, 3 in
Oct  7 10:00:18 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e122: 3 total, 3 up, 3 in
Oct  7 10:00:19 np0005473739 podman[268968]: 2025-10-07 14:00:18.955589986 +0000 UTC m=+0.040518599 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:00:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:00:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2957937913' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.088 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.963s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.262 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.263 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5126MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.263 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.264 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:00:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v981: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 2.0 MiB/s wr, 10 op/s
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.326 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.327 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.344 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:00:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:00:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1569831325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.826 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.836 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.853 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.855 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:00:19 np0005473739 nova_compute[259550]: 2025-10-07 14:00:19.855 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.592s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:00:19 np0005473739 podman[268968]: 2025-10-07 14:00:19.875527649 +0000 UTC m=+0.960456272 container create ea8fbc9faa78c1826cff32c2949c51c7d6cf5e7c560889a48d754377ee385d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_allen, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:00:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:00:20 np0005473739 systemd[1]: Started libpod-conmon-ea8fbc9faa78c1826cff32c2949c51c7d6cf5e7c560889a48d754377ee385d2a.scope.
Oct  7 10:00:20 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:00:20 np0005473739 podman[268968]: 2025-10-07 14:00:20.629658527 +0000 UTC m=+1.714587130 container init ea8fbc9faa78c1826cff32c2949c51c7d6cf5e7c560889a48d754377ee385d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_allen, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:00:20 np0005473739 podman[268968]: 2025-10-07 14:00:20.641707041 +0000 UTC m=+1.726635624 container start ea8fbc9faa78c1826cff32c2949c51c7d6cf5e7c560889a48d754377ee385d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_allen, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:00:20 np0005473739 gracious_allen[269008]: 167 167
Oct  7 10:00:20 np0005473739 systemd[1]: libpod-ea8fbc9faa78c1826cff32c2949c51c7d6cf5e7c560889a48d754377ee385d2a.scope: Deactivated successfully.
Oct  7 10:00:20 np0005473739 conmon[269008]: conmon ea8fbc9faa78c1826cff <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea8fbc9faa78c1826cff32c2949c51c7d6cf5e7c560889a48d754377ee385d2a.scope/container/memory.events
Oct  7 10:00:20 np0005473739 podman[268968]: 2025-10-07 14:00:20.748189592 +0000 UTC m=+1.833118195 container attach ea8fbc9faa78c1826cff32c2949c51c7d6cf5e7c560889a48d754377ee385d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_allen, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:00:20 np0005473739 podman[268968]: 2025-10-07 14:00:20.74924265 +0000 UTC m=+1.834171233 container died ea8fbc9faa78c1826cff32c2949c51c7d6cf5e7c560889a48d754377ee385d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_allen, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:00:20 np0005473739 nova_compute[259550]: 2025-10-07 14:00:20.857 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:00:20 np0005473739 nova_compute[259550]: 2025-10-07 14:00:20.858 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:00:20 np0005473739 nova_compute[259550]: 2025-10-07 14:00:20.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:00:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-01d02228cd29e3f1615121773d297df259e0e02cc0fca46a3002048eed279fae-merged.mount: Deactivated successfully.
Oct  7 10:00:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v982: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 2.0 MiB/s wr, 10 op/s
Oct  7 10:00:21 np0005473739 podman[268968]: 2025-10-07 14:00:21.488543881 +0000 UTC m=+2.573472464 container remove ea8fbc9faa78c1826cff32c2949c51c7d6cf5e7c560889a48d754377ee385d2a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:00:21 np0005473739 systemd[1]: libpod-conmon-ea8fbc9faa78c1826cff32c2949c51c7d6cf5e7c560889a48d754377ee385d2a.scope: Deactivated successfully.
Oct  7 10:00:21 np0005473739 podman[269009]: 2025-10-07 14:00:21.608758111 +0000 UTC m=+1.101344559 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:00:21 np0005473739 podman[269052]: 2025-10-07 14:00:21.737918661 +0000 UTC m=+0.091723986 container create 116d4da618eb4e1d34cd7bca5494f6792faa899722c03f1396f5a950240410c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elgamal, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 10:00:21 np0005473739 podman[269052]: 2025-10-07 14:00:21.678250797 +0000 UTC m=+0.032056112 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:00:21 np0005473739 systemd[1]: Started libpod-conmon-116d4da618eb4e1d34cd7bca5494f6792faa899722c03f1396f5a950240410c2.scope.
Oct  7 10:00:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:00:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18ede6a6c5153252e48ce01ee481fde394924b3b8b8fad816481034d65eb7a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18ede6a6c5153252e48ce01ee481fde394924b3b8b8fad816481034d65eb7a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18ede6a6c5153252e48ce01ee481fde394924b3b8b8fad816481034d65eb7a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d18ede6a6c5153252e48ce01ee481fde394924b3b8b8fad816481034d65eb7a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:21 np0005473739 nova_compute[259550]: 2025-10-07 14:00:21.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:00:21 np0005473739 nova_compute[259550]: 2025-10-07 14:00:21.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  7 10:00:21 np0005473739 nova_compute[259550]: 2025-10-07 14:00:21.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  7 10:00:22 np0005473739 nova_compute[259550]: 2025-10-07 14:00:22.006 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  7 10:00:22 np0005473739 podman[269052]: 2025-10-07 14:00:22.024545791 +0000 UTC m=+0.378351106 container init 116d4da618eb4e1d34cd7bca5494f6792faa899722c03f1396f5a950240410c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elgamal, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:00:22 np0005473739 podman[269052]: 2025-10-07 14:00:22.038130945 +0000 UTC m=+0.391936250 container start 116d4da618eb4e1d34cd7bca5494f6792faa899722c03f1396f5a950240410c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elgamal, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:00:22 np0005473739 podman[269052]: 2025-10-07 14:00:22.094063567 +0000 UTC m=+0.447868882 container attach 116d4da618eb4e1d34cd7bca5494f6792faa899722c03f1396f5a950240410c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elgamal, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:00:22
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control', 'default.rgw.log', 'volumes', 'images', '.mgr', 'cephfs.cephfs.data', 'vms']
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:00:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]: {
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:    "0": [
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:        {
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "devices": [
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "/dev/loop3"
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            ],
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_name": "ceph_lv0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_size": "21470642176",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "name": "ceph_lv0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "tags": {
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.cluster_name": "ceph",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.crush_device_class": "",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.encrypted": "0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.osd_id": "0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.type": "block",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.vdo": "0"
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            },
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "type": "block",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "vg_name": "ceph_vg0"
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:        }
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:    ],
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:    "1": [
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:        {
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "devices": [
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "/dev/loop4"
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            ],
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_name": "ceph_lv1",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_size": "21470642176",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "name": "ceph_lv1",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "tags": {
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.cluster_name": "ceph",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.crush_device_class": "",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.encrypted": "0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.osd_id": "1",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.type": "block",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.vdo": "0"
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            },
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "type": "block",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "vg_name": "ceph_vg1"
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:        }
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:    ],
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:    "2": [
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:        {
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "devices": [
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "/dev/loop5"
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            ],
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_name": "ceph_lv2",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_size": "21470642176",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "name": "ceph_lv2",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "tags": {
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.cluster_name": "ceph",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.crush_device_class": "",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.encrypted": "0",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.osd_id": "2",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.type": "block",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:                "ceph.vdo": "0"
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            },
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "type": "block",
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:            "vg_name": "ceph_vg2"
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:        }
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]:    ]
Oct  7 10:00:22 np0005473739 affectionate_elgamal[269068]: }
Oct  7 10:00:22 np0005473739 systemd[1]: libpod-116d4da618eb4e1d34cd7bca5494f6792faa899722c03f1396f5a950240410c2.scope: Deactivated successfully.
Oct  7 10:00:22 np0005473739 podman[269052]: 2025-10-07 14:00:22.843392997 +0000 UTC m=+1.197198372 container died 116d4da618eb4e1d34cd7bca5494f6792faa899722c03f1396f5a950240410c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 10:00:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d18ede6a6c5153252e48ce01ee481fde394924b3b8b8fad816481034d65eb7a9-merged.mount: Deactivated successfully.
Oct  7 10:00:22 np0005473739 podman[269052]: 2025-10-07 14:00:22.953609198 +0000 UTC m=+1.307414503 container remove 116d4da618eb4e1d34cd7bca5494f6792faa899722c03f1396f5a950240410c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_elgamal, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 10:00:22 np0005473739 systemd[1]: libpod-conmon-116d4da618eb4e1d34cd7bca5494f6792faa899722c03f1396f5a950240410c2.scope: Deactivated successfully.
Oct  7 10:00:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v983: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 7.1 KiB/s rd, 2.0 MiB/s wr, 10 op/s
Oct  7 10:00:23 np0005473739 podman[269231]: 2025-10-07 14:00:23.562705951 +0000 UTC m=+0.044223399 container create ddc4387dc20e8b7a07bb4f1353bb9c5d8711ea4add178bf51e3d525905be1725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:00:23 np0005473739 systemd[1]: Started libpod-conmon-ddc4387dc20e8b7a07bb4f1353bb9c5d8711ea4add178bf51e3d525905be1725.scope.
Oct  7 10:00:23 np0005473739 podman[269231]: 2025-10-07 14:00:23.539069056 +0000 UTC m=+0.020586464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:00:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:00:23 np0005473739 podman[269231]: 2025-10-07 14:00:23.764047 +0000 UTC m=+0.245564418 container init ddc4387dc20e8b7a07bb4f1353bb9c5d8711ea4add178bf51e3d525905be1725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 10:00:23 np0005473739 podman[269231]: 2025-10-07 14:00:23.771590892 +0000 UTC m=+0.253108280 container start ddc4387dc20e8b7a07bb4f1353bb9c5d8711ea4add178bf51e3d525905be1725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_roentgen, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 10:00:23 np0005473739 recursing_roentgen[269249]: 167 167
Oct  7 10:00:23 np0005473739 systemd[1]: libpod-ddc4387dc20e8b7a07bb4f1353bb9c5d8711ea4add178bf51e3d525905be1725.scope: Deactivated successfully.
Oct  7 10:00:23 np0005473739 podman[269231]: 2025-10-07 14:00:23.817154766 +0000 UTC m=+0.298672244 container attach ddc4387dc20e8b7a07bb4f1353bb9c5d8711ea4add178bf51e3d525905be1725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_roentgen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 10:00:23 np0005473739 podman[269231]: 2025-10-07 14:00:23.81767587 +0000 UTC m=+0.299193278 container died ddc4387dc20e8b7a07bb4f1353bb9c5d8711ea4add178bf51e3d525905be1725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_roentgen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 10:00:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-dad38b4b900f4996a2f5c97314d852fe3a4d699abbcc96ac03b906dcd4752f50-merged.mount: Deactivated successfully.
Oct  7 10:00:24 np0005473739 podman[269231]: 2025-10-07 14:00:24.011121087 +0000 UTC m=+0.492638475 container remove ddc4387dc20e8b7a07bb4f1353bb9c5d8711ea4add178bf51e3d525905be1725 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_roentgen, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 10:00:24 np0005473739 systemd[1]: libpod-conmon-ddc4387dc20e8b7a07bb4f1353bb9c5d8711ea4add178bf51e3d525905be1725.scope: Deactivated successfully.
Oct  7 10:00:24 np0005473739 podman[269274]: 2025-10-07 14:00:24.23459124 +0000 UTC m=+0.093443982 container create 00ebe4268839061fe96f53b8d6d99e3a59e759ddff348bd4e16fcb769a8d9909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:00:24 np0005473739 podman[269274]: 2025-10-07 14:00:24.180538968 +0000 UTC m=+0.039391790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:00:24 np0005473739 systemd[1]: Started libpod-conmon-00ebe4268839061fe96f53b8d6d99e3a59e759ddff348bd4e16fcb769a8d9909.scope.
Oct  7 10:00:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:00:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bad90b688c1c20962d55696fee868924db0caf2143e4482cc0a018e9ce32606/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bad90b688c1c20962d55696fee868924db0caf2143e4482cc0a018e9ce32606/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bad90b688c1c20962d55696fee868924db0caf2143e4482cc0a018e9ce32606/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6bad90b688c1c20962d55696fee868924db0caf2143e4482cc0a018e9ce32606/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:00:24 np0005473739 podman[269274]: 2025-10-07 14:00:24.356569257 +0000 UTC m=+0.215422019 container init 00ebe4268839061fe96f53b8d6d99e3a59e759ddff348bd4e16fcb769a8d9909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:00:24 np0005473739 podman[269274]: 2025-10-07 14:00:24.364358976 +0000 UTC m=+0.223211718 container start 00ebe4268839061fe96f53b8d6d99e3a59e759ddff348bd4e16fcb769a8d9909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:00:24 np0005473739 podman[269274]: 2025-10-07 14:00:24.425608261 +0000 UTC m=+0.284461033 container attach 00ebe4268839061fe96f53b8d6d99e3a59e759ddff348bd4e16fcb769a8d9909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 10:00:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v984: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.2 MiB/s wr, 8 op/s
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]: {
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "osd_id": 2,
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "type": "bluestore"
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:    },
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "osd_id": 1,
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "type": "bluestore"
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:    },
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "osd_id": 0,
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:        "type": "bluestore"
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]:    }
Oct  7 10:00:25 np0005473739 dreamy_nash[269290]: }
Oct  7 10:00:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e122 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:00:25 np0005473739 systemd[1]: libpod-00ebe4268839061fe96f53b8d6d99e3a59e759ddff348bd4e16fcb769a8d9909.scope: Deactivated successfully.
Oct  7 10:00:25 np0005473739 podman[269274]: 2025-10-07 14:00:25.388879118 +0000 UTC m=+1.247731860 container died 00ebe4268839061fe96f53b8d6d99e3a59e759ddff348bd4e16fcb769a8d9909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:00:25 np0005473739 systemd[1]: libpod-00ebe4268839061fe96f53b8d6d99e3a59e759ddff348bd4e16fcb769a8d9909.scope: Consumed 1.033s CPU time.
Oct  7 10:00:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6bad90b688c1c20962d55696fee868924db0caf2143e4482cc0a018e9ce32606-merged.mount: Deactivated successfully.
Oct  7 10:00:25 np0005473739 podman[269274]: 2025-10-07 14:00:25.482483093 +0000 UTC m=+1.341335835 container remove 00ebe4268839061fe96f53b8d6d99e3a59e759ddff348bd4e16fcb769a8d9909 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_nash, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True)
Oct  7 10:00:25 np0005473739 systemd[1]: libpod-conmon-00ebe4268839061fe96f53b8d6d99e3a59e759ddff348bd4e16fcb769a8d9909.scope: Deactivated successfully.
Oct  7 10:00:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:00:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:00:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:00:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:00:25 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7d63b844-33bc-4158-86e6-67546d4f3310 does not exist
Oct  7 10:00:25 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7108640b-cd99-4eca-a4aa-5cea887a0f57 does not exist
Oct  7 10:00:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:00:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:00:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v985: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 6.2 KiB/s rd, 1.2 MiB/s wr, 8 op/s
Oct  7 10:00:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v986: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 3.3 KiB/s rd, 495 B/s wr, 4 op/s
Oct  7 10:00:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e122 do_prune osdmap full prune enabled
Oct  7 10:00:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e123 e123: 3 total, 3 up, 3 in
Oct  7 10:00:29 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e123: 3 total, 3 up, 3 in
Oct  7 10:00:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e123 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:00:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v988: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 614 B/s wr, 6 op/s
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006657947108810315 of space, bias 1.0, pg target 0.19973841326430944 quantized to 32 (current 32)
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:00:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:00:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2553232587' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:00:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:00:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2553232587' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:00:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v989: 305 pgs: 305 active+clean; 41 MiB data, 189 MiB used, 60 GiB / 60 GiB avail; 4.3 KiB/s rd, 614 B/s wr, 6 op/s
Oct  7 10:00:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e123 do_prune osdmap full prune enabled
Oct  7 10:00:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e124 e124: 3 total, 3 up, 3 in
Oct  7 10:00:34 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e124: 3 total, 3 up, 3 in
Oct  7 10:00:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:00:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 4617 writes, 20K keys, 4617 commit groups, 1.0 writes per commit group, ingest: 0.03 GB, 0.02 MB/s#012Cumulative WAL: 4617 writes, 4617 syncs, 1.00 writes per sync, written: 0.03 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1333 writes, 5793 keys, 1333 commit groups, 1.0 writes per commit group, ingest: 8.67 MB, 0.01 MB/s#012Interval WAL: 1333 writes, 1333 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     71.8      0.31              0.06        11    0.028       0      0       0.0       0.0#012  L6      1/0    6.87 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.1    124.8    102.2      0.69              0.21        10    0.069     43K   5261       0.0       0.0#012 Sum      1/0    6.87 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.1     85.8     92.7      1.00              0.28        21    0.048     43K   5261       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   5.4    106.8    106.7      0.33              0.11         8    0.042     18K   2061       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    124.8    102.2      0.69              0.21        10    0.069     43K   5261       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     72.4      0.31              0.06        10    0.031       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 1800.0 total, 600.0 interval#012Flush(GB): cumulative 0.022, interval 0.006#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.09 GB write, 0.05 MB/s write, 0.08 GB read, 0.05 MB/s read, 1.0 seconds#012Interval compaction: 0.03 GB write, 0.06 MB/s write, 0.03 GB read, 0.06 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 308.00 MB usage: 6.35 MB table_size: 0 occupancy: 18446744073709551615 collections: 4 last_copies: 0 last_secs: 0.000103 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(415,5.99 MB,1.94462%) FilterBlock(22,128.67 KB,0.0407974%) IndexBlock(22,242.08 KB,0.0767547%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  7 10:00:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v991: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 895 B/s wr, 26 op/s
Oct  7 10:00:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:00:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v992: 305 pgs: 305 active+clean; 21 MiB data, 169 MiB used, 60 GiB / 60 GiB avail; 23 KiB/s rd, 2.1 KiB/s wr, 32 op/s
Oct  7 10:00:38 np0005473739 podman[269385]: 2025-10-07 14:00:38.091487029 +0000 UTC m=+0.071117502 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:00:38 np0005473739 podman[269386]: 2025-10-07 14:00:38.091700784 +0000 UTC m=+0.068019708 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #48. Immutable memtables: 0.
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.378597) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 23] Flushing memtable with next log file: 48
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845638378699, "job": 23, "event": "flush_started", "num_memtables": 1, "num_entries": 1417, "num_deletes": 252, "total_data_size": 2190707, "memory_usage": 2233712, "flush_reason": "Manual Compaction"}
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 23] Level-0 flush table #49: started
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845638392679, "cf_name": "default", "job": 23, "event": "table_file_creation", "file_number": 49, "file_size": 2158289, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 19547, "largest_seqno": 20963, "table_properties": {"data_size": 2151533, "index_size": 3891, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 13995, "raw_average_key_size": 20, "raw_value_size": 2137969, "raw_average_value_size": 3058, "num_data_blocks": 177, "num_entries": 699, "num_filter_entries": 699, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759845500, "oldest_key_time": 1759845500, "file_creation_time": 1759845638, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 49, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 23] Flush lasted 14134 microseconds, and 6824 cpu microseconds.
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.392745) [db/flush_job.cc:967] [default] [JOB 23] Level-0 flush table #49: 2158289 bytes OK
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.392776) [db/memtable_list.cc:519] [default] Level-0 commit table #49 started
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.394349) [db/memtable_list.cc:722] [default] Level-0 commit table #49: memtable #1 done
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.394384) EVENT_LOG_v1 {"time_micros": 1759845638394374, "job": 23, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.394414) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 23] Try to delete WAL files size 2184412, prev total WAL file size 2184412, number of live WAL files 2.
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000045.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.395610) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031353036' seq:72057594037927935, type:22 .. '7061786F730031373538' seq:0, type:0; will stop at (end)
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 24] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 23 Base level 0, inputs: [49(2107KB)], [47(7032KB)]
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845638395686, "job": 24, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [49], "files_L6": [47], "score": -1, "input_data_size": 9359139, "oldest_snapshot_seqno": -1}
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 24] Generated table #50: 4332 keys, 7603174 bytes, temperature: kUnknown
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845638434482, "cf_name": "default", "job": 24, "event": "table_file_creation", "file_number": 50, "file_size": 7603174, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7573078, "index_size": 18146, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 10885, "raw_key_size": 107192, "raw_average_key_size": 24, "raw_value_size": 7493472, "raw_average_value_size": 1729, "num_data_blocks": 760, "num_entries": 4332, "num_filter_entries": 4332, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759845638, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 50, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.434871) [db/compaction/compaction_job.cc:1663] [default] [JOB 24] Compacted 1@0 + 1@6 files to L6 => 7603174 bytes
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.437703) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 240.3 rd, 195.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.1, 6.9 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(7.9) write-amplify(3.5) OK, records in: 4851, records dropped: 519 output_compression: NoCompression
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.437739) EVENT_LOG_v1 {"time_micros": 1759845638437723, "job": 24, "event": "compaction_finished", "compaction_time_micros": 38946, "compaction_time_cpu_micros": 18514, "output_level": 6, "num_output_files": 1, "total_output_size": 7603174, "num_input_records": 4851, "num_output_records": 4332, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000049.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845638438476, "job": 24, "event": "table_file_deletion", "file_number": 49}
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000047.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845638440008, "job": 24, "event": "table_file_deletion", "file_number": 47}
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.395490) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.440191) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.440200) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.440203) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.440205) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:00:38 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:00:38.440207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:00:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v993: 305 pgs: 305 active+clean; 456 KiB data, 156 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 2.9 KiB/s wr, 52 op/s
Oct  7 10:00:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e124 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:00:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e124 do_prune osdmap full prune enabled
Oct  7 10:00:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e125 e125: 3 total, 3 up, 3 in
Oct  7 10:00:40 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e125: 3 total, 3 up, 3 in
Oct  7 10:00:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v995: 305 pgs: 305 active+clean; 456 KiB data, 156 MiB used, 60 GiB / 60 GiB avail; 44 KiB/s rd, 3.4 KiB/s wr, 60 op/s
Oct  7 10:00:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v996: 305 pgs: 305 active+clean; 456 KiB data, 156 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 2.5 KiB/s wr, 34 op/s
Oct  7 10:00:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v997: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail; 20 KiB/s rd, 2.1 KiB/s wr, 29 op/s
Oct  7 10:00:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e125 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:00:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e125 do_prune osdmap full prune enabled
Oct  7 10:00:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 e126: 3 total, 3 up, 3 in
Oct  7 10:00:46 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e126: 3 total, 3 up, 3 in
Oct  7 10:00:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v999: 305 pgs: 305 active+clean; 456 KiB data, 148 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:00:49 np0005473739 podman[269423]: 2025-10-07 14:00:49.128028403 +0000 UTC m=+0.118503085 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  7 10:00:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1000: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 12 KiB/s rd, 1.7 KiB/s wr, 16 op/s
Oct  7 10:00:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:00:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1001: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  7 10:00:52 np0005473739 podman[269451]: 2025-10-07 14:00:52.05859124 +0000 UTC m=+0.047233559 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  7 10:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:00:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1002: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  7 10:00:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1003: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  7 10:00:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:00:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1004: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 1.5 KiB/s wr, 13 op/s
Oct  7 10:00:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1005: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 8.7 KiB/s rd, 1.3 KiB/s wr, 12 op/s
Oct  7 10:01:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:01:00.034 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:01:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:01:00.035 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:01:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:01:00.035 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:01:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1006: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail; 85 B/s rd, 85 B/s wr, 0 op/s
Oct  7 10:01:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1007: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1008: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1009: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:09 np0005473739 podman[269481]: 2025-10-07 14:01:09.088711745 +0000 UTC m=+0.071644266 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  7 10:01:09 np0005473739 podman[269480]: 2025-10-07 14:01:09.097616135 +0000 UTC m=+0.081129421 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  7 10:01:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1010: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:10 np0005473739 nova_compute[259550]: 2025-10-07 14:01:10.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1011: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1012: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1013: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:16 np0005473739 nova_compute[259550]: 2025-10-07 14:01:16.000 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:16 np0005473739 nova_compute[259550]: 2025-10-07 14:01:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:16 np0005473739 nova_compute[259550]: 2025-10-07 14:01:16.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:16 np0005473739 nova_compute[259550]: 2025-10-07 14:01:16.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:01:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1014: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:17 np0005473739 nova_compute[259550]: 2025-10-07 14:01:17.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:17 np0005473739 nova_compute[259550]: 2025-10-07 14:01:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:17 np0005473739 nova_compute[259550]: 2025-10-07 14:01:17.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 10:01:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1015: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:19 np0005473739 nova_compute[259550]: 2025-10-07 14:01:19.996 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.015 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.016 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.040 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.040 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.041 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.041 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.041 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:01:20 np0005473739 podman[269519]: 2025-10-07 14:01:20.106758403 +0000 UTC m=+0.094459629 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:01:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:01:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/495579678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.484 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.658 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.659 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5191MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.660 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.660 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.816 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.816 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.886 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.974 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.974 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 10:01:20 np0005473739 nova_compute[259550]: 2025-10-07 14:01:20.988 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 10:01:21 np0005473739 nova_compute[259550]: 2025-10-07 14:01:21.009 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 10:01:21 np0005473739 nova_compute[259550]: 2025-10-07 14:01:21.022 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:01:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1016: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:01:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/745683009' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:01:21 np0005473739 nova_compute[259550]: 2025-10-07 14:01:21.533 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:01:21 np0005473739 nova_compute[259550]: 2025-10-07 14:01:21.541 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:01:21 np0005473739 nova_compute[259550]: 2025-10-07 14:01:21.555 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:01:21 np0005473739 nova_compute[259550]: 2025-10-07 14:01:21.557 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:01:21 np0005473739 nova_compute[259550]: 2025-10-07 14:01:21.557 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.897s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:01:21 np0005473739 nova_compute[259550]: 2025-10-07 14:01:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:21 np0005473739 nova_compute[259550]: 2025-10-07 14:01:21.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:01:21 np0005473739 nova_compute[259550]: 2025-10-07 14:01:21.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:01:22 np0005473739 nova_compute[259550]: 2025-10-07 14:01:22.001 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:01:22 np0005473739 nova_compute[259550]: 2025-10-07 14:01:22.002 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:22 np0005473739 nova_compute[259550]: 2025-10-07 14:01:22.003 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:22 np0005473739 nova_compute[259550]: 2025-10-07 14:01:22.003 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 10:01:22 np0005473739 nova_compute[259550]: 2025-10-07 14:01:22.019 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:01:22
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.log', 'images', 'volumes', 'default.rgw.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'vms']
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:01:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:01:23 np0005473739 nova_compute[259550]: 2025-10-07 14:01:23.000 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:23 np0005473739 podman[269589]: 2025-10-07 14:01:23.065760192 +0000 UTC m=+0.051606548 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001)
Oct  7 10:01:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1017: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:23 np0005473739 ceph-mgr[74587]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3626055412
Oct  7 10:01:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1018: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:01:26 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev add09495-dc80-4de5-9433-d44e4310bfba does not exist
Oct  7 10:01:26 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4ba008ab-f478-4b47-b64c-c4029c6f9a6d does not exist
Oct  7 10:01:26 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e83508ee-9163-4a17-bb18-f84ea4a11146 does not exist
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:01:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:01:27 np0005473739 podman[269880]: 2025-10-07 14:01:27.04369729 +0000 UTC m=+0.031195905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:01:27 np0005473739 podman[269880]: 2025-10-07 14:01:27.142299927 +0000 UTC m=+0.129798442 container create ca695260b7a43cefe03005f60b4bc295703bc96893866f2c34062fa711452ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:01:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1019: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:27 np0005473739 systemd[1]: Started libpod-conmon-ca695260b7a43cefe03005f60b4bc295703bc96893866f2c34062fa711452ff5.scope.
Oct  7 10:01:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:01:27 np0005473739 podman[269880]: 2025-10-07 14:01:27.426270632 +0000 UTC m=+0.413769177 container init ca695260b7a43cefe03005f60b4bc295703bc96893866f2c34062fa711452ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:01:27 np0005473739 podman[269880]: 2025-10-07 14:01:27.435472498 +0000 UTC m=+0.422971013 container start ca695260b7a43cefe03005f60b4bc295703bc96893866f2c34062fa711452ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:01:27 np0005473739 podman[269880]: 2025-10-07 14:01:27.440901333 +0000 UTC m=+0.428399848 container attach ca695260b7a43cefe03005f60b4bc295703bc96893866f2c34062fa711452ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 10:01:27 np0005473739 vibrant_archimedes[269896]: 167 167
Oct  7 10:01:27 np0005473739 systemd[1]: libpod-ca695260b7a43cefe03005f60b4bc295703bc96893866f2c34062fa711452ff5.scope: Deactivated successfully.
Oct  7 10:01:27 np0005473739 conmon[269896]: conmon ca695260b7a43cefe030 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ca695260b7a43cefe03005f60b4bc295703bc96893866f2c34062fa711452ff5.scope/container/memory.events
Oct  7 10:01:27 np0005473739 podman[269880]: 2025-10-07 14:01:27.444792178 +0000 UTC m=+0.432290693 container died ca695260b7a43cefe03005f60b4bc295703bc96893866f2c34062fa711452ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:01:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-78a1d7bde8f74c36393ae01455231096b343cda3645b8969e3f6a843696b44fd-merged.mount: Deactivated successfully.
Oct  7 10:01:27 np0005473739 podman[269880]: 2025-10-07 14:01:27.491591029 +0000 UTC m=+0.479089544 container remove ca695260b7a43cefe03005f60b4bc295703bc96893866f2c34062fa711452ff5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_archimedes, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 10:01:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:01:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:01:27 np0005473739 systemd[1]: libpod-conmon-ca695260b7a43cefe03005f60b4bc295703bc96893866f2c34062fa711452ff5.scope: Deactivated successfully.
Oct  7 10:01:27 np0005473739 podman[269919]: 2025-10-07 14:01:27.664042641 +0000 UTC m=+0.045467786 container create 1ed180b326e03514f2972d494360e26cc01d60a50b2c48b205f7e2288ee7929f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:01:27 np0005473739 systemd[1]: Started libpod-conmon-1ed180b326e03514f2972d494360e26cc01d60a50b2c48b205f7e2288ee7929f.scope.
Oct  7 10:01:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:01:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b914c2c6859ecf9d5cec41e61fca0dd22f613f73eb19c583c2085cce78528e8a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b914c2c6859ecf9d5cec41e61fca0dd22f613f73eb19c583c2085cce78528e8a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b914c2c6859ecf9d5cec41e61fca0dd22f613f73eb19c583c2085cce78528e8a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:27 np0005473739 podman[269919]: 2025-10-07 14:01:27.644362425 +0000 UTC m=+0.025787370 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:01:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b914c2c6859ecf9d5cec41e61fca0dd22f613f73eb19c583c2085cce78528e8a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b914c2c6859ecf9d5cec41e61fca0dd22f613f73eb19c583c2085cce78528e8a/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:27 np0005473739 podman[269919]: 2025-10-07 14:01:27.753653428 +0000 UTC m=+0.135078353 container init 1ed180b326e03514f2972d494360e26cc01d60a50b2c48b205f7e2288ee7929f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 10:01:27 np0005473739 podman[269919]: 2025-10-07 14:01:27.76418584 +0000 UTC m=+0.145610775 container start 1ed180b326e03514f2972d494360e26cc01d60a50b2c48b205f7e2288ee7929f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:01:27 np0005473739 podman[269919]: 2025-10-07 14:01:27.768861445 +0000 UTC m=+0.150286380 container attach 1ed180b326e03514f2972d494360e26cc01d60a50b2c48b205f7e2288ee7929f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:01:28 np0005473739 bold_gagarin[269936]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:01:28 np0005473739 bold_gagarin[269936]: --> relative data size: 1.0
Oct  7 10:01:28 np0005473739 bold_gagarin[269936]: --> All data devices are unavailable
Oct  7 10:01:28 np0005473739 nova_compute[259550]: 2025-10-07 14:01:28.901 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:01:28 np0005473739 systemd[1]: libpod-1ed180b326e03514f2972d494360e26cc01d60a50b2c48b205f7e2288ee7929f.scope: Deactivated successfully.
Oct  7 10:01:28 np0005473739 systemd[1]: libpod-1ed180b326e03514f2972d494360e26cc01d60a50b2c48b205f7e2288ee7929f.scope: Consumed 1.079s CPU time.
Oct  7 10:01:28 np0005473739 podman[269919]: 2025-10-07 14:01:28.90722881 +0000 UTC m=+1.288653735 container died 1ed180b326e03514f2972d494360e26cc01d60a50b2c48b205f7e2288ee7929f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:01:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b914c2c6859ecf9d5cec41e61fca0dd22f613f73eb19c583c2085cce78528e8a-merged.mount: Deactivated successfully.
Oct  7 10:01:29 np0005473739 podman[269919]: 2025-10-07 14:01:29.04034896 +0000 UTC m=+1.421773885 container remove 1ed180b326e03514f2972d494360e26cc01d60a50b2c48b205f7e2288ee7929f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_gagarin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 10:01:29 np0005473739 systemd[1]: libpod-conmon-1ed180b326e03514f2972d494360e26cc01d60a50b2c48b205f7e2288ee7929f.scope: Deactivated successfully.
Oct  7 10:01:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1020: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:29 np0005473739 podman[270119]: 2025-10-07 14:01:29.760074639 +0000 UTC m=+0.079752234 container create 7839ba5e6f463efa5010e29b320f6a2aae926d4419e7443b9252b5f4df578a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:01:29 np0005473739 podman[270119]: 2025-10-07 14:01:29.703211869 +0000 UTC m=+0.022889484 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:01:29 np0005473739 systemd[1]: Started libpod-conmon-7839ba5e6f463efa5010e29b320f6a2aae926d4419e7443b9252b5f4df578a4a.scope.
Oct  7 10:01:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:01:29 np0005473739 podman[270119]: 2025-10-07 14:01:29.897763681 +0000 UTC m=+0.217441286 container init 7839ba5e6f463efa5010e29b320f6a2aae926d4419e7443b9252b5f4df578a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 10:01:29 np0005473739 podman[270119]: 2025-10-07 14:01:29.906711861 +0000 UTC m=+0.226389456 container start 7839ba5e6f463efa5010e29b320f6a2aae926d4419e7443b9252b5f4df578a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 10:01:29 np0005473739 hardcore_rhodes[270135]: 167 167
Oct  7 10:01:29 np0005473739 systemd[1]: libpod-7839ba5e6f463efa5010e29b320f6a2aae926d4419e7443b9252b5f4df578a4a.scope: Deactivated successfully.
Oct  7 10:01:29 np0005473739 podman[270119]: 2025-10-07 14:01:29.926044278 +0000 UTC m=+0.245721873 container attach 7839ba5e6f463efa5010e29b320f6a2aae926d4419e7443b9252b5f4df578a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:01:29 np0005473739 podman[270119]: 2025-10-07 14:01:29.927465466 +0000 UTC m=+0.247143061 container died 7839ba5e6f463efa5010e29b320f6a2aae926d4419e7443b9252b5f4df578a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:01:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4c255de988e1620957485b84bb626f4d2736d356713a51c84f69bb994f270b0d-merged.mount: Deactivated successfully.
Oct  7 10:01:30 np0005473739 podman[270119]: 2025-10-07 14:01:30.025877928 +0000 UTC m=+0.345555533 container remove 7839ba5e6f463efa5010e29b320f6a2aae926d4419e7443b9252b5f4df578a4a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_rhodes, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:01:30 np0005473739 systemd[1]: libpod-conmon-7839ba5e6f463efa5010e29b320f6a2aae926d4419e7443b9252b5f4df578a4a.scope: Deactivated successfully.
Oct  7 10:01:30 np0005473739 podman[270160]: 2025-10-07 14:01:30.210624679 +0000 UTC m=+0.053773669 container create f94e3ff10bf93205bea4e2b30c77f2a06be55ddce0ca88640664585adee2179c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:01:30 np0005473739 podman[270160]: 2025-10-07 14:01:30.185437275 +0000 UTC m=+0.028586345 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:01:30 np0005473739 systemd[1]: Started libpod-conmon-f94e3ff10bf93205bea4e2b30c77f2a06be55ddce0ca88640664585adee2179c.scope.
Oct  7 10:01:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:01:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6aee1419142877fa00e9912e984b48da494c62b8dc27521521ed751c2a1eaea/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6aee1419142877fa00e9912e984b48da494c62b8dc27521521ed751c2a1eaea/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6aee1419142877fa00e9912e984b48da494c62b8dc27521521ed751c2a1eaea/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c6aee1419142877fa00e9912e984b48da494c62b8dc27521521ed751c2a1eaea/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:30 np0005473739 podman[270160]: 2025-10-07 14:01:30.490611467 +0000 UTC m=+0.333760487 container init f94e3ff10bf93205bea4e2b30c77f2a06be55ddce0ca88640664585adee2179c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 10:01:30 np0005473739 podman[270160]: 2025-10-07 14:01:30.497620874 +0000 UTC m=+0.340769864 container start f94e3ff10bf93205bea4e2b30c77f2a06be55ddce0ca88640664585adee2179c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 10:01:30 np0005473739 podman[270160]: 2025-10-07 14:01:30.505146275 +0000 UTC m=+0.348295295 container attach f94e3ff10bf93205bea4e2b30c77f2a06be55ddce0ca88640664585adee2179c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 10:01:31 np0005473739 sad_golick[270176]: {
Oct  7 10:01:31 np0005473739 sad_golick[270176]:    "0": [
Oct  7 10:01:31 np0005473739 sad_golick[270176]:        {
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "devices": [
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "/dev/loop3"
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            ],
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_name": "ceph_lv0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_size": "21470642176",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "name": "ceph_lv0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "tags": {
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.cluster_name": "ceph",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.crush_device_class": "",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.encrypted": "0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.osd_id": "0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.type": "block",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.vdo": "0"
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            },
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "type": "block",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "vg_name": "ceph_vg0"
Oct  7 10:01:31 np0005473739 sad_golick[270176]:        }
Oct  7 10:01:31 np0005473739 sad_golick[270176]:    ],
Oct  7 10:01:31 np0005473739 sad_golick[270176]:    "1": [
Oct  7 10:01:31 np0005473739 sad_golick[270176]:        {
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "devices": [
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "/dev/loop4"
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            ],
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_name": "ceph_lv1",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_size": "21470642176",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "name": "ceph_lv1",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "tags": {
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.cluster_name": "ceph",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.crush_device_class": "",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.encrypted": "0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.osd_id": "1",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.type": "block",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.vdo": "0"
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            },
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "type": "block",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "vg_name": "ceph_vg1"
Oct  7 10:01:31 np0005473739 sad_golick[270176]:        }
Oct  7 10:01:31 np0005473739 sad_golick[270176]:    ],
Oct  7 10:01:31 np0005473739 sad_golick[270176]:    "2": [
Oct  7 10:01:31 np0005473739 sad_golick[270176]:        {
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "devices": [
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "/dev/loop5"
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            ],
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_name": "ceph_lv2",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_size": "21470642176",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "name": "ceph_lv2",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "tags": {
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.cluster_name": "ceph",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.crush_device_class": "",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.encrypted": "0",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.osd_id": "2",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.type": "block",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:                "ceph.vdo": "0"
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            },
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "type": "block",
Oct  7 10:01:31 np0005473739 sad_golick[270176]:            "vg_name": "ceph_vg2"
Oct  7 10:01:31 np0005473739 sad_golick[270176]:        }
Oct  7 10:01:31 np0005473739 sad_golick[270176]:    ]
Oct  7 10:01:31 np0005473739 sad_golick[270176]: }
Oct  7 10:01:31 np0005473739 systemd[1]: libpod-f94e3ff10bf93205bea4e2b30c77f2a06be55ddce0ca88640664585adee2179c.scope: Deactivated successfully.
Oct  7 10:01:31 np0005473739 podman[270185]: 2025-10-07 14:01:31.332352429 +0000 UTC m=+0.026026318 container died f94e3ff10bf93205bea4e2b30c77f2a06be55ddce0ca88640664585adee2179c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 10:01:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1021: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c6aee1419142877fa00e9912e984b48da494c62b8dc27521521ed751c2a1eaea-merged.mount: Deactivated successfully.
Oct  7 10:01:31 np0005473739 podman[270185]: 2025-10-07 14:01:31.439745361 +0000 UTC m=+0.133419220 container remove f94e3ff10bf93205bea4e2b30c77f2a06be55ddce0ca88640664585adee2179c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_golick, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 10:01:31 np0005473739 systemd[1]: libpod-conmon-f94e3ff10bf93205bea4e2b30c77f2a06be55ddce0ca88640664585adee2179c.scope: Deactivated successfully.
Oct  7 10:01:32 np0005473739 podman[270342]: 2025-10-07 14:01:32.111334242 +0000 UTC m=+0.062644286 container create 9156a69b0e443c2a745e44f1a85866cd4ce1e0feb7f4b08c56496f168bc4dadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 10:01:32 np0005473739 podman[270342]: 2025-10-07 14:01:32.083392215 +0000 UTC m=+0.034702289 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:01:32 np0005473739 systemd[1]: Started libpod-conmon-9156a69b0e443c2a745e44f1a85866cd4ce1e0feb7f4b08c56496f168bc4dadd.scope.
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:01:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:01:32 np0005473739 podman[270342]: 2025-10-07 14:01:32.253909675 +0000 UTC m=+0.205219739 container init 9156a69b0e443c2a745e44f1a85866cd4ce1e0feb7f4b08c56496f168bc4dadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True)
Oct  7 10:01:32 np0005473739 podman[270342]: 2025-10-07 14:01:32.264870288 +0000 UTC m=+0.216180332 container start 9156a69b0e443c2a745e44f1a85866cd4ce1e0feb7f4b08c56496f168bc4dadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:01:32 np0005473739 strange_zhukovsky[270359]: 167 167
Oct  7 10:01:32 np0005473739 systemd[1]: libpod-9156a69b0e443c2a745e44f1a85866cd4ce1e0feb7f4b08c56496f168bc4dadd.scope: Deactivated successfully.
Oct  7 10:01:32 np0005473739 podman[270342]: 2025-10-07 14:01:32.283075495 +0000 UTC m=+0.234385639 container attach 9156a69b0e443c2a745e44f1a85866cd4ce1e0feb7f4b08c56496f168bc4dadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:01:32 np0005473739 podman[270342]: 2025-10-07 14:01:32.284176965 +0000 UTC m=+0.235486999 container died 9156a69b0e443c2a745e44f1a85866cd4ce1e0feb7f4b08c56496f168bc4dadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 10:01:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b2aa78543f051b4ea65e44007802a7474784ae0c966c4d4289134779e76aa784-merged.mount: Deactivated successfully.
Oct  7 10:01:32 np0005473739 podman[270342]: 2025-10-07 14:01:32.37670872 +0000 UTC m=+0.328018774 container remove 9156a69b0e443c2a745e44f1a85866cd4ce1e0feb7f4b08c56496f168bc4dadd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_zhukovsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 10:01:32 np0005473739 systemd[1]: libpod-conmon-9156a69b0e443c2a745e44f1a85866cd4ce1e0feb7f4b08c56496f168bc4dadd.scope: Deactivated successfully.
Oct  7 10:01:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:01:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/612290156' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:01:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:01:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/612290156' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:01:32 np0005473739 podman[270385]: 2025-10-07 14:01:32.643208107 +0000 UTC m=+0.070154478 container create 855b0ffff2d559e39f5f8af76a5c0448f268ddcb3ce8f4af1ee31648194506b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True)
Oct  7 10:01:32 np0005473739 systemd[1]: Started libpod-conmon-855b0ffff2d559e39f5f8af76a5c0448f268ddcb3ce8f4af1ee31648194506b6.scope.
Oct  7 10:01:32 np0005473739 podman[270385]: 2025-10-07 14:01:32.617090329 +0000 UTC m=+0.044036730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:01:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:01:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307225cfc928ed7612b8edb2ade519be806f72c7f1e65ab35638abdd45a4776e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307225cfc928ed7612b8edb2ade519be806f72c7f1e65ab35638abdd45a4776e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307225cfc928ed7612b8edb2ade519be806f72c7f1e65ab35638abdd45a4776e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/307225cfc928ed7612b8edb2ade519be806f72c7f1e65ab35638abdd45a4776e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:01:32 np0005473739 podman[270385]: 2025-10-07 14:01:32.750325072 +0000 UTC m=+0.177271473 container init 855b0ffff2d559e39f5f8af76a5c0448f268ddcb3ce8f4af1ee31648194506b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:01:32 np0005473739 podman[270385]: 2025-10-07 14:01:32.75814153 +0000 UTC m=+0.185087901 container start 855b0ffff2d559e39f5f8af76a5c0448f268ddcb3ce8f4af1ee31648194506b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:01:32 np0005473739 podman[270385]: 2025-10-07 14:01:32.766043142 +0000 UTC m=+0.192989543 container attach 855b0ffff2d559e39f5f8af76a5c0448f268ddcb3ce8f4af1ee31648194506b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:01:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1022: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:33 np0005473739 clever_kare[270401]: {
Oct  7 10:01:33 np0005473739 clever_kare[270401]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "osd_id": 2,
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "type": "bluestore"
Oct  7 10:01:33 np0005473739 clever_kare[270401]:    },
Oct  7 10:01:33 np0005473739 clever_kare[270401]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "osd_id": 1,
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "type": "bluestore"
Oct  7 10:01:33 np0005473739 clever_kare[270401]:    },
Oct  7 10:01:33 np0005473739 clever_kare[270401]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "osd_id": 0,
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:01:33 np0005473739 clever_kare[270401]:        "type": "bluestore"
Oct  7 10:01:33 np0005473739 clever_kare[270401]:    }
Oct  7 10:01:33 np0005473739 clever_kare[270401]: }
Oct  7 10:01:33 np0005473739 systemd[1]: libpod-855b0ffff2d559e39f5f8af76a5c0448f268ddcb3ce8f4af1ee31648194506b6.scope: Deactivated successfully.
Oct  7 10:01:33 np0005473739 systemd[1]: libpod-855b0ffff2d559e39f5f8af76a5c0448f268ddcb3ce8f4af1ee31648194506b6.scope: Consumed 1.053s CPU time.
Oct  7 10:01:33 np0005473739 podman[270434]: 2025-10-07 14:01:33.860471492 +0000 UTC m=+0.034923925 container died 855b0ffff2d559e39f5f8af76a5c0448f268ddcb3ce8f4af1ee31648194506b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 10:01:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-307225cfc928ed7612b8edb2ade519be806f72c7f1e65ab35638abdd45a4776e-merged.mount: Deactivated successfully.
Oct  7 10:01:34 np0005473739 podman[270434]: 2025-10-07 14:01:34.038866863 +0000 UTC m=+0.213319246 container remove 855b0ffff2d559e39f5f8af76a5c0448f268ddcb3ce8f4af1ee31648194506b6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_kare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:01:34 np0005473739 systemd[1]: libpod-conmon-855b0ffff2d559e39f5f8af76a5c0448f268ddcb3ce8f4af1ee31648194506b6.scope: Deactivated successfully.
Oct  7 10:01:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:01:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:01:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:01:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:01:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 10e79a36-f7a5-4426-b04b-3b906ded57e6 does not exist
Oct  7 10:01:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4b11e127-019a-4dc9-8e12-6af2f5de8353 does not exist
Oct  7 10:01:35 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:01:35 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:01:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1023: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1024: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1025: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:40 np0005473739 podman[270499]: 2025-10-07 14:01:40.083489295 +0000 UTC m=+0.063307794 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:01:40 np0005473739 podman[270500]: 2025-10-07 14:01:40.083566067 +0000 UTC m=+0.063382896 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:01:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1026: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1027: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1028: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1029: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1030: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:51 np0005473739 podman[270539]: 2025-10-07 14:01:51.138408766 +0000 UTC m=+0.120120024 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:01:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1031: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:01:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1032: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:54 np0005473739 podman[270566]: 2025-10-07 14:01:54.078788244 +0000 UTC m=+0.064026053 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:01:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1033: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:01:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1034: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:01:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1035: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:02:00.035 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:02:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:02:00.035 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:02:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:02:00.036 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:02:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1036: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1037: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1038: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1039: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1040: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:11 np0005473739 podman[270588]: 2025-10-07 14:02:11.077619542 +0000 UTC m=+0.064307411 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 10:02:11 np0005473739 podman[270587]: 2025-10-07 14:02:11.082110942 +0000 UTC m=+0.068148604 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct  7 10:02:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1041: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1042: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:02:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.0 total, 600.0 interval#012Cumulative writes: 6043 writes, 24K keys, 6043 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 6043 writes, 1090 syncs, 5.54 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 353 writes, 730 keys, 353 commit groups, 1.0 writes per commit group, ingest: 0.45 MB, 0.00 MB/s#012Interval WAL: 353 writes, 160 syncs, 2.21 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:02:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1043: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:16 np0005473739 nova_compute[259550]: 2025-10-07 14:02:16.009 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:02:16 np0005473739 nova_compute[259550]: 2025-10-07 14:02:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:02:16 np0005473739 nova_compute[259550]: 2025-10-07 14:02:16.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:02:16 np0005473739 nova_compute[259550]: 2025-10-07 14:02:16.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:02:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1044: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:02:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 7242 writes, 29K keys, 7242 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 7242 writes, 1472 syncs, 4.92 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 429 writes, 1220 keys, 429 commit groups, 1.0 writes per commit group, ingest: 0.57 MB, 0.00 MB/s#012Interval WAL: 429 writes, 189 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:02:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1045: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:19 np0005473739 nova_compute[259550]: 2025-10-07 14:02:19.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:02:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:20 np0005473739 nova_compute[259550]: 2025-10-07 14:02:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.014 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:02:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1046: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:02:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/843243379' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.464 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.630 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.632 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5151MB free_disk=59.98828125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.633 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.633 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.697 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.698 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:02:21 np0005473739 nova_compute[259550]: 2025-10-07 14:02:21.715 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:02:22 np0005473739 podman[270668]: 2025-10-07 14:02:22.133684123 +0000 UTC m=+0.118266083 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 10:02:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:02:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/685939663' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:02:22 np0005473739 nova_compute[259550]: 2025-10-07 14:02:22.172 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:02:22 np0005473739 nova_compute[259550]: 2025-10-07 14:02:22.180 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:02:22 np0005473739 nova_compute[259550]: 2025-10-07 14:02:22.198 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:02:22 np0005473739 nova_compute[259550]: 2025-10-07 14:02:22.200 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:02:22 np0005473739 nova_compute[259550]: 2025-10-07 14:02:22.200 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:02:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:02:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 1800.1 total, 600.0 interval#012Cumulative writes: 5966 writes, 24K keys, 5966 commit groups, 1.0 writes per commit group, ingest: 0.02 GB, 0.01 MB/s#012Cumulative WAL: 5966 writes, 1032 syncs, 5.78 writes per sync, written: 0.02 GB, 0.01 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 399 writes, 997 keys, 399 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s#012Interval WAL: 399 writes, 175 syncs, 2.28 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:02:22
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'vms', 'volumes', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.control', 'default.rgw.log', 'backups', 'cephfs.cephfs.meta']
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:02:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:02:23 np0005473739 nova_compute[259550]: 2025-10-07 14:02:23.201 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:02:23 np0005473739 nova_compute[259550]: 2025-10-07 14:02:23.202 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:02:23 np0005473739 nova_compute[259550]: 2025-10-07 14:02:23.202 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:02:23 np0005473739 nova_compute[259550]: 2025-10-07 14:02:23.214 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:02:23 np0005473739 nova_compute[259550]: 2025-10-07 14:02:23.214 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:02:23 np0005473739 nova_compute[259550]: 2025-10-07 14:02:23.214 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:02:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1047: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:24 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 10:02:24 np0005473739 nova_compute[259550]: 2025-10-07 14:02:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:02:25 np0005473739 podman[270698]: 2025-10-07 14:02:25.063741397 +0000 UTC m=+0.050843572 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:02:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1048: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1049: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1050: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1051: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e126 do_prune osdmap full prune enabled
Oct  7 10:02:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e127 e127: 3 total, 3 up, 3 in
Oct  7 10:02:31 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e127: 3 total, 3 up, 3 in
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 3.1795353910268934e-07 of space, bias 1.0, pg target 9.53860617308068e-05 quantized to 32 (current 32)
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:02:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e127 do_prune osdmap full prune enabled
Oct  7 10:02:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e128 e128: 3 total, 3 up, 3 in
Oct  7 10:02:32 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e128: 3 total, 3 up, 3 in
Oct  7 10:02:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:02:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3441100972' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:02:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:02:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3441100972' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:02:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1054: 305 pgs: 305 active+clean; 457 KiB data, 149 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1055: 305 pgs: 305 active+clean; 32 MiB data, 181 MiB used, 60 GiB / 60 GiB avail; 30 KiB/s rd, 4.0 MiB/s wr, 40 op/s
Oct  7 10:02:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:02:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:02:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:36 np0005473739 podman[271109]: 2025-10-07 14:02:36.132450652 +0000 UTC m=+0.042301932 container create 40e4ba8c72953670357aa947cf9b25ca010bd8a1e6cf0e1a12d2b02c0c50ac6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:02:36 np0005473739 systemd[1]: Started libpod-conmon-40e4ba8c72953670357aa947cf9b25ca010bd8a1e6cf0e1a12d2b02c0c50ac6d.scope.
Oct  7 10:02:36 np0005473739 podman[271109]: 2025-10-07 14:02:36.114233505 +0000 UTC m=+0.024084805 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:02:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:02:36 np0005473739 podman[271109]: 2025-10-07 14:02:36.227352001 +0000 UTC m=+0.137203301 container init 40e4ba8c72953670357aa947cf9b25ca010bd8a1e6cf0e1a12d2b02c0c50ac6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:02:36 np0005473739 podman[271109]: 2025-10-07 14:02:36.237466361 +0000 UTC m=+0.147317641 container start 40e4ba8c72953670357aa947cf9b25ca010bd8a1e6cf0e1a12d2b02c0c50ac6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 10:02:36 np0005473739 podman[271109]: 2025-10-07 14:02:36.241489619 +0000 UTC m=+0.151340919 container attach 40e4ba8c72953670357aa947cf9b25ca010bd8a1e6cf0e1a12d2b02c0c50ac6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:02:36 np0005473739 eager_shamir[271126]: 167 167
Oct  7 10:02:36 np0005473739 systemd[1]: libpod-40e4ba8c72953670357aa947cf9b25ca010bd8a1e6cf0e1a12d2b02c0c50ac6d.scope: Deactivated successfully.
Oct  7 10:02:36 np0005473739 conmon[271126]: conmon 40e4ba8c72953670357a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-40e4ba8c72953670357aa947cf9b25ca010bd8a1e6cf0e1a12d2b02c0c50ac6d.scope/container/memory.events
Oct  7 10:02:36 np0005473739 podman[271109]: 2025-10-07 14:02:36.246975186 +0000 UTC m=+0.156826466 container died 40e4ba8c72953670357aa947cf9b25ca010bd8a1e6cf0e1a12d2b02c0c50ac6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:02:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9bca68b51a2db0d8a13c3fe1d3c310965e5d0390d7b58c62cf6c556ebde5712c-merged.mount: Deactivated successfully.
Oct  7 10:02:36 np0005473739 podman[271109]: 2025-10-07 14:02:36.2923866 +0000 UTC m=+0.202237880 container remove 40e4ba8c72953670357aa947cf9b25ca010bd8a1e6cf0e1a12d2b02c0c50ac6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_shamir, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:02:36 np0005473739 systemd[1]: libpod-conmon-40e4ba8c72953670357aa947cf9b25ca010bd8a1e6cf0e1a12d2b02c0c50ac6d.scope: Deactivated successfully.
Oct  7 10:02:36 np0005473739 podman[271150]: 2025-10-07 14:02:36.455020929 +0000 UTC m=+0.044717976 container create 5b1b4ec96c0b943ea440ab817c3f40fc7d94f72c5ad5f031a6b9d73997387dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 10:02:36 np0005473739 systemd[1]: Started libpod-conmon-5b1b4ec96c0b943ea440ab817c3f40fc7d94f72c5ad5f031a6b9d73997387dbe.scope.
Oct  7 10:02:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:36 np0005473739 podman[271150]: 2025-10-07 14:02:36.437188193 +0000 UTC m=+0.026885250 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:02:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:02:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4095d29b06590a3cf0fb8a37395ba993ffd232b667c2222dacff456605ddd413/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4095d29b06590a3cf0fb8a37395ba993ffd232b667c2222dacff456605ddd413/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4095d29b06590a3cf0fb8a37395ba993ffd232b667c2222dacff456605ddd413/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4095d29b06590a3cf0fb8a37395ba993ffd232b667c2222dacff456605ddd413/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:36 np0005473739 podman[271150]: 2025-10-07 14:02:36.56045546 +0000 UTC m=+0.150152527 container init 5b1b4ec96c0b943ea440ab817c3f40fc7d94f72c5ad5f031a6b9d73997387dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brahmagupta, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:02:36 np0005473739 podman[271150]: 2025-10-07 14:02:36.571595367 +0000 UTC m=+0.161292414 container start 5b1b4ec96c0b943ea440ab817c3f40fc7d94f72c5ad5f031a6b9d73997387dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 10:02:36 np0005473739 podman[271150]: 2025-10-07 14:02:36.575680847 +0000 UTC m=+0.165377924 container attach 5b1b4ec96c0b943ea440ab817c3f40fc7d94f72c5ad5f031a6b9d73997387dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brahmagupta, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:02:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1056: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]: [
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:    {
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:        "available": false,
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:        "ceph_device": false,
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:        "lsm_data": {},
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:        "lvs": [],
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:        "path": "/dev/sr0",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:        "rejected_reasons": [
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "Insufficient space (<5GB)",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "Has a FileSystem"
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:        ],
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:        "sys_api": {
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "actuators": null,
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "device_nodes": "sr0",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "devname": "sr0",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "human_readable_size": "482.00 KB",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "id_bus": "ata",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "model": "QEMU DVD-ROM",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "nr_requests": "2",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "parent": "/dev/sr0",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "partitions": {},
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "path": "/dev/sr0",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "removable": "1",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "rev": "2.5+",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "ro": "0",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "rotational": "0",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "sas_address": "",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "sas_device_handle": "",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "scheduler_mode": "mq-deadline",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "sectors": 0,
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "sectorsize": "2048",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "size": 493568.0,
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "support_discard": "2048",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "type": "disk",
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:            "vendor": "QEMU"
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:        }
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]:    }
Oct  7 10:02:38 np0005473739 hopeful_brahmagupta[271166]: ]
Oct  7 10:02:38 np0005473739 systemd[1]: libpod-5b1b4ec96c0b943ea440ab817c3f40fc7d94f72c5ad5f031a6b9d73997387dbe.scope: Deactivated successfully.
Oct  7 10:02:38 np0005473739 podman[271150]: 2025-10-07 14:02:38.08866226 +0000 UTC m=+1.678359307 container died 5b1b4ec96c0b943ea440ab817c3f40fc7d94f72c5ad5f031a6b9d73997387dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brahmagupta, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:02:38 np0005473739 systemd[1]: libpod-5b1b4ec96c0b943ea440ab817c3f40fc7d94f72c5ad5f031a6b9d73997387dbe.scope: Consumed 1.576s CPU time.
Oct  7 10:02:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4095d29b06590a3cf0fb8a37395ba993ffd232b667c2222dacff456605ddd413-merged.mount: Deactivated successfully.
Oct  7 10:02:38 np0005473739 podman[271150]: 2025-10-07 14:02:38.151569843 +0000 UTC m=+1.741266920 container remove 5b1b4ec96c0b943ea440ab817c3f40fc7d94f72c5ad5f031a6b9d73997387dbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_brahmagupta, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:02:38 np0005473739 systemd[1]: libpod-conmon-5b1b4ec96c0b943ea440ab817c3f40fc7d94f72c5ad5f031a6b9d73997387dbe.scope: Deactivated successfully.
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 5bae411c-fa42-4d19-ba89-e82d68bfc6b2 does not exist
Oct  7 10:02:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 2f103db9-d9d5-4180-bf18-497e394402aa does not exist
Oct  7 10:02:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d20d582d-aff1-4e49-8dfe-10188c6a8e1e does not exist
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:02:38 np0005473739 podman[273190]: 2025-10-07 14:02:38.885581713 +0000 UTC m=+0.057080767 container create d212ea46175b9d7451183414e408de16fbaf0055fc585641a52f0fe005d4c00b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcnulty, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 10:02:38 np0005473739 systemd[1]: Started libpod-conmon-d212ea46175b9d7451183414e408de16fbaf0055fc585641a52f0fe005d4c00b.scope.
Oct  7 10:02:38 np0005473739 podman[273190]: 2025-10-07 14:02:38.853552957 +0000 UTC m=+0.025052031 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:02:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:02:38 np0005473739 podman[273190]: 2025-10-07 14:02:38.983783091 +0000 UTC m=+0.155282175 container init d212ea46175b9d7451183414e408de16fbaf0055fc585641a52f0fe005d4c00b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcnulty, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:02:38 np0005473739 podman[273190]: 2025-10-07 14:02:38.992535364 +0000 UTC m=+0.164034428 container start d212ea46175b9d7451183414e408de16fbaf0055fc585641a52f0fe005d4c00b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcnulty, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:02:38 np0005473739 podman[273190]: 2025-10-07 14:02:38.997800445 +0000 UTC m=+0.169299589 container attach d212ea46175b9d7451183414e408de16fbaf0055fc585641a52f0fe005d4c00b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcnulty, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 10:02:38 np0005473739 dazzling_mcnulty[273206]: 167 167
Oct  7 10:02:38 np0005473739 systemd[1]: libpod-d212ea46175b9d7451183414e408de16fbaf0055fc585641a52f0fe005d4c00b.scope: Deactivated successfully.
Oct  7 10:02:39 np0005473739 podman[273190]: 2025-10-07 14:02:39.000439625 +0000 UTC m=+0.171938689 container died d212ea46175b9d7451183414e408de16fbaf0055fc585641a52f0fe005d4c00b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcnulty, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 10:02:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-57c96084e7bf26c5d628795916f24bb2d7670fb5a92474b51980873d66cfdb0d-merged.mount: Deactivated successfully.
Oct  7 10:02:39 np0005473739 podman[273190]: 2025-10-07 14:02:39.046692802 +0000 UTC m=+0.218191856 container remove d212ea46175b9d7451183414e408de16fbaf0055fc585641a52f0fe005d4c00b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:02:39 np0005473739 systemd[1]: libpod-conmon-d212ea46175b9d7451183414e408de16fbaf0055fc585641a52f0fe005d4c00b.scope: Deactivated successfully.
Oct  7 10:02:39 np0005473739 podman[273231]: 2025-10-07 14:02:39.226168503 +0000 UTC m=+0.047091021 container create 35853f63bf3187a6d4f591c33faeee9a13369f49e18aa2991688840949784c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lalande, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2)
Oct  7 10:02:39 np0005473739 systemd[1]: Started libpod-conmon-35853f63bf3187a6d4f591c33faeee9a13369f49e18aa2991688840949784c75.scope.
Oct  7 10:02:39 np0005473739 podman[273231]: 2025-10-07 14:02:39.207233666 +0000 UTC m=+0.028156194 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:02:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:02:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf006493cd7e9fe4f0c557da3b7aa517644f78f1bffb5898bd57b5cd46e5d020/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf006493cd7e9fe4f0c557da3b7aa517644f78f1bffb5898bd57b5cd46e5d020/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf006493cd7e9fe4f0c557da3b7aa517644f78f1bffb5898bd57b5cd46e5d020/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf006493cd7e9fe4f0c557da3b7aa517644f78f1bffb5898bd57b5cd46e5d020/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cf006493cd7e9fe4f0c557da3b7aa517644f78f1bffb5898bd57b5cd46e5d020/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:39 np0005473739 podman[273231]: 2025-10-07 14:02:39.322834578 +0000 UTC m=+0.143757116 container init 35853f63bf3187a6d4f591c33faeee9a13369f49e18aa2991688840949784c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lalande, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:02:39 np0005473739 podman[273231]: 2025-10-07 14:02:39.33260814 +0000 UTC m=+0.153530648 container start 35853f63bf3187a6d4f591c33faeee9a13369f49e18aa2991688840949784c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lalande, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 10:02:39 np0005473739 podman[273231]: 2025-10-07 14:02:39.336965466 +0000 UTC m=+0.157887984 container attach 35853f63bf3187a6d4f591c33faeee9a13369f49e18aa2991688840949784c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lalande, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:02:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1057: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 5.1 MiB/s wr, 47 op/s
Oct  7 10:02:40 np0005473739 great_lalande[273247]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:02:40 np0005473739 great_lalande[273247]: --> relative data size: 1.0
Oct  7 10:02:40 np0005473739 great_lalande[273247]: --> All data devices are unavailable
Oct  7 10:02:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:40 np0005473739 systemd[1]: libpod-35853f63bf3187a6d4f591c33faeee9a13369f49e18aa2991688840949784c75.scope: Deactivated successfully.
Oct  7 10:02:40 np0005473739 systemd[1]: libpod-35853f63bf3187a6d4f591c33faeee9a13369f49e18aa2991688840949784c75.scope: Consumed 1.092s CPU time.
Oct  7 10:02:40 np0005473739 conmon[273247]: conmon 35853f63bf3187a6d4f5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35853f63bf3187a6d4f591c33faeee9a13369f49e18aa2991688840949784c75.scope/container/memory.events
Oct  7 10:02:40 np0005473739 podman[273231]: 2025-10-07 14:02:40.475141336 +0000 UTC m=+1.296063844 container died 35853f63bf3187a6d4f591c33faeee9a13369f49e18aa2991688840949784c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 10:02:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cf006493cd7e9fe4f0c557da3b7aa517644f78f1bffb5898bd57b5cd46e5d020-merged.mount: Deactivated successfully.
Oct  7 10:02:40 np0005473739 podman[273231]: 2025-10-07 14:02:40.579698652 +0000 UTC m=+1.400621160 container remove 35853f63bf3187a6d4f591c33faeee9a13369f49e18aa2991688840949784c75 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_lalande, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:02:40 np0005473739 systemd[1]: libpod-conmon-35853f63bf3187a6d4f591c33faeee9a13369f49e18aa2991688840949784c75.scope: Deactivated successfully.
Oct  7 10:02:41 np0005473739 podman[273429]: 2025-10-07 14:02:41.328399486 +0000 UTC m=+0.053769429 container create 268b3114757edbaaa546cad12ba94f1dcbd78a58cc904868257281b7078588ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jennings, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:02:41 np0005473739 systemd[1]: Started libpod-conmon-268b3114757edbaaa546cad12ba94f1dcbd78a58cc904868257281b7078588ad.scope.
Oct  7 10:02:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1058: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 4.2 MiB/s wr, 39 op/s
Oct  7 10:02:41 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:02:41 np0005473739 podman[273429]: 2025-10-07 14:02:41.301719952 +0000 UTC m=+0.027089925 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:02:41 np0005473739 podman[273429]: 2025-10-07 14:02:41.425477322 +0000 UTC m=+0.150847285 container init 268b3114757edbaaa546cad12ba94f1dcbd78a58cc904868257281b7078588ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jennings, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:02:41 np0005473739 podman[273429]: 2025-10-07 14:02:41.435521681 +0000 UTC m=+0.160891624 container start 268b3114757edbaaa546cad12ba94f1dcbd78a58cc904868257281b7078588ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jennings, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:02:41 np0005473739 naughty_jennings[273448]: 167 167
Oct  7 10:02:41 np0005473739 podman[273429]: 2025-10-07 14:02:41.440824022 +0000 UTC m=+0.166193985 container attach 268b3114757edbaaa546cad12ba94f1dcbd78a58cc904868257281b7078588ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jennings, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:02:41 np0005473739 systemd[1]: libpod-268b3114757edbaaa546cad12ba94f1dcbd78a58cc904868257281b7078588ad.scope: Deactivated successfully.
Oct  7 10:02:41 np0005473739 conmon[273448]: conmon 268b3114757edbaaa546 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-268b3114757edbaaa546cad12ba94f1dcbd78a58cc904868257281b7078588ad.scope/container/memory.events
Oct  7 10:02:41 np0005473739 podman[273429]: 2025-10-07 14:02:41.443558375 +0000 UTC m=+0.168928318 container died 268b3114757edbaaa546cad12ba94f1dcbd78a58cc904868257281b7078588ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jennings, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct  7 10:02:41 np0005473739 podman[273443]: 2025-10-07 14:02:41.474437181 +0000 UTC m=+0.099924653 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  7 10:02:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-14e2356347cebf602e7a581abab381a51984bfb638526d855909b715c2e7da17-merged.mount: Deactivated successfully.
Oct  7 10:02:41 np0005473739 podman[273447]: 2025-10-07 14:02:41.508888493 +0000 UTC m=+0.132299279 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:02:41 np0005473739 podman[273429]: 2025-10-07 14:02:41.521852199 +0000 UTC m=+0.247222142 container remove 268b3114757edbaaa546cad12ba94f1dcbd78a58cc904868257281b7078588ad (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_jennings, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:02:41 np0005473739 systemd[1]: libpod-conmon-268b3114757edbaaa546cad12ba94f1dcbd78a58cc904868257281b7078588ad.scope: Deactivated successfully.
Oct  7 10:02:41 np0005473739 podman[273512]: 2025-10-07 14:02:41.714568434 +0000 UTC m=+0.060375836 container create 8f0086e0b1f38dc02ed40360208e7a4c8afe7caed891db814f5fcd7f2f0b58da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 10:02:41 np0005473739 systemd[1]: Started libpod-conmon-8f0086e0b1f38dc02ed40360208e7a4c8afe7caed891db814f5fcd7f2f0b58da.scope.
Oct  7 10:02:41 np0005473739 podman[273512]: 2025-10-07 14:02:41.68227121 +0000 UTC m=+0.028078642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:02:41 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:02:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6086c926856f26c91e249c8cefd2a72fa3435604e97bc79e31a8b4d281c558/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6086c926856f26c91e249c8cefd2a72fa3435604e97bc79e31a8b4d281c558/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6086c926856f26c91e249c8cefd2a72fa3435604e97bc79e31a8b4d281c558/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6f6086c926856f26c91e249c8cefd2a72fa3435604e97bc79e31a8b4d281c558/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:41 np0005473739 podman[273512]: 2025-10-07 14:02:41.819902361 +0000 UTC m=+0.165709783 container init 8f0086e0b1f38dc02ed40360208e7a4c8afe7caed891db814f5fcd7f2f0b58da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:02:41 np0005473739 podman[273512]: 2025-10-07 14:02:41.82996686 +0000 UTC m=+0.175774272 container start 8f0086e0b1f38dc02ed40360208e7a4c8afe7caed891db814f5fcd7f2f0b58da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 10:02:41 np0005473739 podman[273512]: 2025-10-07 14:02:41.836659439 +0000 UTC m=+0.182466861 container attach 8f0086e0b1f38dc02ed40360208e7a4c8afe7caed891db814f5fcd7f2f0b58da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]: {
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:    "0": [
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:        {
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "devices": [
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "/dev/loop3"
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            ],
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_name": "ceph_lv0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_size": "21470642176",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "name": "ceph_lv0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "tags": {
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.cluster_name": "ceph",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.crush_device_class": "",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.encrypted": "0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.osd_id": "0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.type": "block",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.vdo": "0"
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            },
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "type": "block",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "vg_name": "ceph_vg0"
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:        }
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:    ],
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:    "1": [
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:        {
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "devices": [
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "/dev/loop4"
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            ],
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_name": "ceph_lv1",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_size": "21470642176",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "name": "ceph_lv1",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "tags": {
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.cluster_name": "ceph",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.crush_device_class": "",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.encrypted": "0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.osd_id": "1",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.type": "block",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.vdo": "0"
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            },
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "type": "block",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "vg_name": "ceph_vg1"
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:        }
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:    ],
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:    "2": [
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:        {
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "devices": [
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "/dev/loop5"
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            ],
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_name": "ceph_lv2",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_size": "21470642176",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "name": "ceph_lv2",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "tags": {
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.cluster_name": "ceph",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.crush_device_class": "",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.encrypted": "0",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.osd_id": "2",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.type": "block",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:                "ceph.vdo": "0"
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            },
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "type": "block",
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:            "vg_name": "ceph_vg2"
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:        }
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]:    ]
Oct  7 10:02:42 np0005473739 clever_lehmann[273528]: }
Oct  7 10:02:42 np0005473739 systemd[1]: libpod-8f0086e0b1f38dc02ed40360208e7a4c8afe7caed891db814f5fcd7f2f0b58da.scope: Deactivated successfully.
Oct  7 10:02:42 np0005473739 podman[273537]: 2025-10-07 14:02:42.69034018 +0000 UTC m=+0.026239273 container died 8f0086e0b1f38dc02ed40360208e7a4c8afe7caed891db814f5fcd7f2f0b58da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:02:42 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6f6086c926856f26c91e249c8cefd2a72fa3435604e97bc79e31a8b4d281c558-merged.mount: Deactivated successfully.
Oct  7 10:02:43 np0005473739 podman[273537]: 2025-10-07 14:02:43.030835967 +0000 UTC m=+0.366735030 container remove 8f0086e0b1f38dc02ed40360208e7a4c8afe7caed891db814f5fcd7f2f0b58da (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_lehmann, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:02:43 np0005473739 systemd[1]: libpod-conmon-8f0086e0b1f38dc02ed40360208e7a4c8afe7caed891db814f5fcd7f2f0b58da.scope: Deactivated successfully.
Oct  7 10:02:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1059: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 3.8 MiB/s wr, 35 op/s
Oct  7 10:02:43 np0005473739 podman[273692]: 2025-10-07 14:02:43.781123923 +0000 UTC m=+0.113445945 container create b601baed6082db8dd6468ddd09e77f98ab77809f7d98eeba9a4e926ee5e42b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 10:02:43 np0005473739 podman[273692]: 2025-10-07 14:02:43.689158853 +0000 UTC m=+0.021480905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:02:43 np0005473739 systemd[1]: Started libpod-conmon-b601baed6082db8dd6468ddd09e77f98ab77809f7d98eeba9a4e926ee5e42b2f.scope.
Oct  7 10:02:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:02:43 np0005473739 podman[273692]: 2025-10-07 14:02:43.900025303 +0000 UTC m=+0.232347345 container init b601baed6082db8dd6468ddd09e77f98ab77809f7d98eeba9a4e926ee5e42b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:02:43 np0005473739 podman[273692]: 2025-10-07 14:02:43.911607582 +0000 UTC m=+0.243929594 container start b601baed6082db8dd6468ddd09e77f98ab77809f7d98eeba9a4e926ee5e42b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:02:43 np0005473739 adoring_hypatia[273709]: 167 167
Oct  7 10:02:43 np0005473739 systemd[1]: libpod-b601baed6082db8dd6468ddd09e77f98ab77809f7d98eeba9a4e926ee5e42b2f.scope: Deactivated successfully.
Oct  7 10:02:43 np0005473739 conmon[273709]: conmon b601baed6082db8dd646 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b601baed6082db8dd6468ddd09e77f98ab77809f7d98eeba9a4e926ee5e42b2f.scope/container/memory.events
Oct  7 10:02:43 np0005473739 podman[273692]: 2025-10-07 14:02:43.922185745 +0000 UTC m=+0.254507797 container attach b601baed6082db8dd6468ddd09e77f98ab77809f7d98eeba9a4e926ee5e42b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:02:43 np0005473739 podman[273692]: 2025-10-07 14:02:43.922705849 +0000 UTC m=+0.255027871 container died b601baed6082db8dd6468ddd09e77f98ab77809f7d98eeba9a4e926ee5e42b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:02:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay-096a0b8f7c45a5e1cc92f751bea329599bd8faa8bdb31775a40984447957afcc-merged.mount: Deactivated successfully.
Oct  7 10:02:44 np0005473739 podman[273692]: 2025-10-07 14:02:44.088811682 +0000 UTC m=+0.421133714 container remove b601baed6082db8dd6468ddd09e77f98ab77809f7d98eeba9a4e926ee5e42b2f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_hypatia, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:02:44 np0005473739 systemd[1]: libpod-conmon-b601baed6082db8dd6468ddd09e77f98ab77809f7d98eeba9a4e926ee5e42b2f.scope: Deactivated successfully.
Oct  7 10:02:44 np0005473739 podman[273734]: 2025-10-07 14:02:44.295408707 +0000 UTC m=+0.079316272 container create 78a0f24bb4421e0662fb4df4765e7aac7a2eef8c9626e15b5ed869310e175963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:02:44 np0005473739 podman[273734]: 2025-10-07 14:02:44.246016376 +0000 UTC m=+0.029923991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:02:44 np0005473739 systemd[1]: Started libpod-conmon-78a0f24bb4421e0662fb4df4765e7aac7a2eef8c9626e15b5ed869310e175963.scope.
Oct  7 10:02:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:02:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7143cbf4a044de234644c59494cdb3773c1b1fc440dbaf99a687be522ab0ab/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7143cbf4a044de234644c59494cdb3773c1b1fc440dbaf99a687be522ab0ab/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7143cbf4a044de234644c59494cdb3773c1b1fc440dbaf99a687be522ab0ab/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c7143cbf4a044de234644c59494cdb3773c1b1fc440dbaf99a687be522ab0ab/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:02:44 np0005473739 podman[273734]: 2025-10-07 14:02:44.420894583 +0000 UTC m=+0.204802168 container init 78a0f24bb4421e0662fb4df4765e7aac7a2eef8c9626e15b5ed869310e175963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:02:44 np0005473739 podman[273734]: 2025-10-07 14:02:44.428884806 +0000 UTC m=+0.212792371 container start 78a0f24bb4421e0662fb4df4765e7aac7a2eef8c9626e15b5ed869310e175963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:02:44 np0005473739 podman[273734]: 2025-10-07 14:02:44.438207365 +0000 UTC m=+0.222114950 container attach 78a0f24bb4421e0662fb4df4765e7aac7a2eef8c9626e15b5ed869310e175963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 10:02:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1060: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 3.4 MiB/s wr, 31 op/s
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]: {
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "osd_id": 2,
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "type": "bluestore"
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:    },
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "osd_id": 1,
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "type": "bluestore"
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:    },
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "osd_id": 0,
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:        "type": "bluestore"
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]:    }
Oct  7 10:02:45 np0005473739 modest_keldysh[273751]: }
Oct  7 10:02:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:45 np0005473739 systemd[1]: libpod-78a0f24bb4421e0662fb4df4765e7aac7a2eef8c9626e15b5ed869310e175963.scope: Deactivated successfully.
Oct  7 10:02:45 np0005473739 systemd[1]: libpod-78a0f24bb4421e0662fb4df4765e7aac7a2eef8c9626e15b5ed869310e175963.scope: Consumed 1.068s CPU time.
Oct  7 10:02:45 np0005473739 podman[273734]: 2025-10-07 14:02:45.492569845 +0000 UTC m=+1.276477440 container died 78a0f24bb4421e0662fb4df4765e7aac7a2eef8c9626e15b5ed869310e175963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 10:02:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6c7143cbf4a044de234644c59494cdb3773c1b1fc440dbaf99a687be522ab0ab-merged.mount: Deactivated successfully.
Oct  7 10:02:45 np0005473739 podman[273734]: 2025-10-07 14:02:45.701099052 +0000 UTC m=+1.485006607 container remove 78a0f24bb4421e0662fb4df4765e7aac7a2eef8c9626e15b5ed869310e175963 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:02:45 np0005473739 systemd[1]: libpod-conmon-78a0f24bb4421e0662fb4df4765e7aac7a2eef8c9626e15b5ed869310e175963.scope: Deactivated successfully.
Oct  7 10:02:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:02:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:02:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c3abbafe-ac7c-4358-9a6c-d501ebc70f73 does not exist
Oct  7 10:02:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 48b8123a-5363-4146-ae0a-d85ba3a01e49 does not exist
Oct  7 10:02:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:02:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1061: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.2 KiB/s rd, 758 KiB/s wr, 4 op/s
Oct  7 10:02:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1062: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:02:49.545 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:02:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:02:49.547 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:02:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1063: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:02:53 np0005473739 podman[273845]: 2025-10-07 14:02:53.134515167 +0000 UTC m=+0.124240614 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 10:02:53 np0005473739 nova_compute[259550]: 2025-10-07 14:02:53.191 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Acquiring lock "29e49704-9ecd-4306-a3c4-88b69878387f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:02:53 np0005473739 nova_compute[259550]: 2025-10-07 14:02:53.191 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "29e49704-9ecd-4306-a3c4-88b69878387f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:02:53 np0005473739 nova_compute[259550]: 2025-10-07 14:02:53.218 2 DEBUG nova.compute.manager [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:02:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1064: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:53 np0005473739 nova_compute[259550]: 2025-10-07 14:02:53.379 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:02:53 np0005473739 nova_compute[259550]: 2025-10-07 14:02:53.380 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:02:53 np0005473739 nova_compute[259550]: 2025-10-07 14:02:53.399 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:02:53 np0005473739 nova_compute[259550]: 2025-10-07 14:02:53.400 2 INFO nova.compute.claims [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:02:53 np0005473739 nova_compute[259550]: 2025-10-07 14:02:53.639 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:02:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:02:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3642173327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.087 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.096 2 DEBUG nova.compute.provider_tree [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.127 2 DEBUG nova.scheduler.client.report [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.152 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.153 2 DEBUG nova.compute.manager [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.221 2 DEBUG nova.compute.manager [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.242 2 INFO nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.266 2 DEBUG nova.compute.manager [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.357 2 DEBUG nova.compute.manager [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.358 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.359 2 INFO nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Creating image(s)
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.386 2 DEBUG nova.storage.rbd_utils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] rbd image 29e49704-9ecd-4306-a3c4-88b69878387f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.411 2 DEBUG nova.storage.rbd_utils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] rbd image 29e49704-9ecd-4306-a3c4-88b69878387f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.435 2 DEBUG nova.storage.rbd_utils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] rbd image 29e49704-9ecd-4306-a3c4-88b69878387f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.439 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.440 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:02:54 np0005473739 nova_compute[259550]: 2025-10-07 14:02:54.947 2 DEBUG nova.virt.libvirt.imagebackend [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/1c7e024e-3dd7-433b-91ff-f363a3d5a581/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/1c7e024e-3dd7-433b-91ff-f363a3d5a581/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  7 10:02:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1065: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e128 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:02:56 np0005473739 podman[273949]: 2025-10-07 14:02:56.076851011 +0000 UTC m=+0.058036964 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  7 10:02:56 np0005473739 nova_compute[259550]: 2025-10-07 14:02:56.456 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:02:56 np0005473739 nova_compute[259550]: 2025-10-07 14:02:56.527 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122.part --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:02:56 np0005473739 nova_compute[259550]: 2025-10-07 14:02:56.528 2 DEBUG nova.virt.images [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] 1c7e024e-3dd7-433b-91ff-f363a3d5a581 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct  7 10:02:56 np0005473739 nova_compute[259550]: 2025-10-07 14:02:56.538 2 DEBUG nova.privsep.utils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  7 10:02:56 np0005473739 nova_compute[259550]: 2025-10-07 14:02:56.539 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122.part /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:02:57 np0005473739 nova_compute[259550]: 2025-10-07 14:02:57.313 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122.part /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122.converted" returned: 0 in 0.774s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:02:57 np0005473739 nova_compute[259550]: 2025-10-07 14:02:57.318 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:02:57 np0005473739 nova_compute[259550]: 2025-10-07 14:02:57.377 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122.converted --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:02:57 np0005473739 nova_compute[259550]: 2025-10-07 14:02:57.379 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:02:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1066: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail
Oct  7 10:02:57 np0005473739 nova_compute[259550]: 2025-10-07 14:02:57.404 2 DEBUG nova.storage.rbd_utils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] rbd image 29e49704-9ecd-4306-a3c4-88b69878387f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:02:57 np0005473739 nova_compute[259550]: 2025-10-07 14:02:57.408 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 29e49704-9ecd-4306-a3c4-88b69878387f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:02:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e128 do_prune osdmap full prune enabled
Oct  7 10:02:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e129 e129: 3 total, 3 up, 3 in
Oct  7 10:02:57 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e129: 3 total, 3 up, 3 in
Oct  7 10:02:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:02:58.549 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:02:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e129 do_prune osdmap full prune enabled
Oct  7 10:02:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e130 e130: 3 total, 3 up, 3 in
Oct  7 10:02:58 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e130: 3 total, 3 up, 3 in
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.056 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 29e49704-9ecd-4306-a3c4-88b69878387f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.647s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.124 2 DEBUG nova.storage.rbd_utils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] resizing rbd image 29e49704-9ecd-4306-a3c4-88b69878387f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.222 2 DEBUG nova.objects.instance [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lazy-loading 'migration_context' on Instance uuid 29e49704-9ecd-4306-a3c4-88b69878387f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.251 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.252 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Ensure instance console log exists: /var/lib/nova/instances/29e49704-9ecd-4306-a3c4-88b69878387f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.252 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.253 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.253 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.255 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.258 2 WARNING nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.263 2 DEBUG nova.virt.libvirt.host [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.263 2 DEBUG nova.virt.libvirt.host [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.265 2 DEBUG nova.virt.libvirt.host [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.266 2 DEBUG nova.virt.libvirt.host [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.266 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.266 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.267 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.267 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.267 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.267 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.268 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.268 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.268 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.268 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.268 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.269 2 DEBUG nova.virt.hardware [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.274 2 DEBUG nova.privsep.utils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.274 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:02:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1069: 305 pgs: 305 active+clean; 41 MiB data, 190 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 127 B/s wr, 10 op/s
Oct  7 10:02:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:02:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155606743' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.734 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.757 2 DEBUG nova.storage.rbd_utils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] rbd image 29e49704-9ecd-4306-a3c4-88b69878387f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:02:59 np0005473739 nova_compute[259550]: 2025-10-07 14:02:59.761 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:00.037 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:00.037 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:00.037 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:03:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3498233883' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:03:00 np0005473739 nova_compute[259550]: 2025-10-07 14:03:00.196 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:00 np0005473739 nova_compute[259550]: 2025-10-07 14:03:00.199 2 DEBUG nova.objects.instance [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lazy-loading 'pci_devices' on Instance uuid 29e49704-9ecd-4306-a3c4-88b69878387f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:03:00 np0005473739 nova_compute[259550]: 2025-10-07 14:03:00.225 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  <uuid>29e49704-9ecd-4306-a3c4-88b69878387f</uuid>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  <name>instance-00000001</name>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <nova:name>tempest-AutoAllocateNetworkTest-server-2891475</nova:name>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:02:59</nova:creationTime>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:        <nova:user uuid="2764f8731615493db51630d2f86e3188">tempest-AutoAllocateNetworkTest-999964873-project-member</nova:user>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:        <nova:project uuid="03f09f47e414490480fddcbf6ce04aaf">tempest-AutoAllocateNetworkTest-999964873</nova:project>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <entry name="serial">29e49704-9ecd-4306-a3c4-88b69878387f</entry>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <entry name="uuid">29e49704-9ecd-4306-a3c4-88b69878387f</entry>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/29e49704-9ecd-4306-a3c4-88b69878387f_disk">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/29e49704-9ecd-4306-a3c4-88b69878387f_disk.config">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/29e49704-9ecd-4306-a3c4-88b69878387f/console.log" append="off"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:03:00 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:03:00 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:03:00 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:03:00 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  7 10:03:00 np0005473739 nova_compute[259550]: 2025-10-07 14:03:00.289 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:03:00 np0005473739 nova_compute[259550]: 2025-10-07 14:03:00.289 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:03:00 np0005473739 nova_compute[259550]: 2025-10-07 14:03:00.290 2 INFO nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Using config drive
Oct  7 10:03:00 np0005473739 nova_compute[259550]: 2025-10-07 14:03:00.314 2 DEBUG nova.storage.rbd_utils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] rbd image 29e49704-9ecd-4306-a3c4-88b69878387f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
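The domain XML that Nova logged above carries its own `<nova:instance>` metadata block (name, flavor, owner) under the namespace `http://openstack.org/xmlns/libvirt/nova/1.1`. A minimal stdlib sketch of pulling those fields back out of such XML — the namespace URI, element names, and values are taken verbatim from the log; the snippet is trimmed to the relevant subtree:

```python
# Hypothetical sketch: parse the Nova metadata subtree from a libvirt
# domain XML dump like the one logged above, using only the stdlib.
import xml.etree.ElementTree as ET

NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

xml_snippet = """<nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
  <nova:name>tempest-AutoAllocateNetworkTest-server-2891475</nova:name>
  <nova:flavor name="m1.nano">
    <nova:memory>128</nova:memory>
    <nova:vcpus>1</nova:vcpus>
  </nova:flavor>
</nova:instance>"""

root = ET.fromstring(xml_snippet)
name = root.findtext("nova:name", namespaces=NOVA_NS)
flavor = root.find("nova:flavor", NOVA_NS)
flavor_name = flavor.get("name")                                   # "m1.nano"
memory_mb = int(flavor.findtext("nova:memory", namespaces=NOVA_NS))
```

This kind of extraction can be handy when correlating `virsh dumpxml` output with Nova's view of an instance; it is not Nova's own code path.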
Oct  7 10:03:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:01 np0005473739 nova_compute[259550]: 2025-10-07 14:03:01.265 2 INFO nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Creating config drive at /var/lib/nova/instances/29e49704-9ecd-4306-a3c4-88b69878387f/disk.config
Oct  7 10:03:01 np0005473739 nova_compute[259550]: 2025-10-07 14:03:01.270 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/29e49704-9ecd-4306-a3c4-88b69878387f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzodia42 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1070: 305 pgs: 305 active+clean; 60 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 14 op/s
Oct  7 10:03:01 np0005473739 nova_compute[259550]: 2025-10-07 14:03:01.438 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/29e49704-9ecd-4306-a3c4-88b69878387f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppzodia42" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:01 np0005473739 nova_compute[259550]: 2025-10-07 14:03:01.466 2 DEBUG nova.storage.rbd_utils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] rbd image 29e49704-9ecd-4306-a3c4-88b69878387f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:03:01 np0005473739 nova_compute[259550]: 2025-10-07 14:03:01.471 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/29e49704-9ecd-4306-a3c4-88b69878387f/disk.config 29e49704-9ecd-4306-a3c4-88b69878387f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:01 np0005473739 nova_compute[259550]: 2025-10-07 14:03:01.774 2 DEBUG oslo_concurrency.processutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/29e49704-9ecd-4306-a3c4-88b69878387f/disk.config 29e49704-9ecd-4306-a3c4-88b69878387f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:01 np0005473739 nova_compute[259550]: 2025-10-07 14:03:01.775 2 INFO nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Deleting local config drive /var/lib/nova/instances/29e49704-9ecd-4306-a3c4-88b69878387f/disk.config because it was imported into RBD.
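The config-drive sequence above is: build an ISO9660 image locally with `mkisofs` (volume label `config-2`), `rbd import` it into the `vms` pool, then delete the local file. A sketch (not Nova's actual code) of assembling that `mkisofs` command line; the flags and the `config-2` label match what oslo.concurrency logged, while the paths here are illustrative:

```python
# Hypothetical helper mirroring the mkisofs invocation seen in the log.
# It only builds the argv list; actually running it is left to the caller.
def mkisofs_cmd(output_path: str, staging_dir: str, publisher: str) -> list:
    return [
        "/usr/bin/mkisofs", "-o", output_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,
        "-quiet", "-J", "-r",
        "-V", "config-2",   # volume label that cloud-init probes for
        staging_dir,        # directory holding the generated metadata files
    ]

cmd = mkisofs_cmd(
    "/var/lib/nova/instances/<uuid>/disk.config",  # illustrative path
    "/tmp/tmpdir",                                  # illustrative staging dir
    "OpenStack Compute",
)
```

As the subsequent log lines show, the resulting image is then pushed to Ceph with `rbd import --pool vms … --image-format=2` and the local copy is removed.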
Oct  7 10:03:01 np0005473739 systemd[1]: Starting libvirt secret daemon...
Oct  7 10:03:01 np0005473739 systemd[1]: Started libvirt secret daemon.
Oct  7 10:03:01 np0005473739 systemd-machined[214580]: New machine qemu-1-instance-00000001.
Oct  7 10:03:01 np0005473739 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.345 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845783.3449118, 29e49704-9ecd-4306-a3c4-88b69878387f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.347 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] VM Resumed (Lifecycle Event)
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.357 2 DEBUG nova.compute.manager [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.358 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.361 2 INFO nova.virt.libvirt.driver [-] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Instance spawned successfully.
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.362 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:03:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1071: 305 pgs: 305 active+clean; 60 MiB data, 199 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.2 MiB/s wr, 14 op/s
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.410 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.415 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.423 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.423 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.424 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.424 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.424 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.425 2 DEBUG nova.virt.libvirt.driver [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.450 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.451 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845783.3560507, 29e49704-9ecd-4306-a3c4-88b69878387f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.451 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] VM Started (Lifecycle Event)
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.511 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.516 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.559 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.712 2 INFO nova.compute.manager [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Took 9.35 seconds to spawn the instance on the hypervisor.
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.713 2 DEBUG nova.compute.manager [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.774 2 INFO nova.compute.manager [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Took 10.45 seconds to build instance.
Oct  7 10:03:03 np0005473739 nova_compute[259550]: 2025-10-07 14:03:03.811 2 DEBUG oslo_concurrency.lockutils [None req-3c83391b-30fb-4ff2-a92f-1793fd0d3338 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "29e49704-9ecd-4306-a3c4-88b69878387f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
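The "Took 9.35 seconds to spawn" and "Took 10.45 seconds to build" INFO lines above are the easiest place to measure boot latency from these logs. A small sketch of scraping them; the regex targets only the phrasings visible in this log and would need extending for other messages:

```python
# Hypothetical log scraper for the spawn/build timing lines above.
import re

TIMING = re.compile(
    r"Took (?P<secs>\d+\.\d+) seconds to "
    r"(?P<what>spawn the instance on the hypervisor|build instance)"
)

line = ("2025-10-07 14:03:03.712 2 INFO nova.compute.manager [...] "
        "Took 9.35 seconds to spawn the instance on the hypervisor.")
m = TIMING.search(line)
secs = float(m.group("secs"))   # elapsed seconds as a float
what = m.group("what")          # which phase the timing refers to
```

Fed the whole journal, the same pattern distinguishes hypervisor spawn time from total build time (10.45 s here, of which 9.35 s was the libvirt spawn).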
Oct  7 10:03:04 np0005473739 nova_compute[259550]: 2025-10-07 14:03:04.715 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Acquiring lock "29e49704-9ecd-4306-a3c4-88b69878387f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:04 np0005473739 nova_compute[259550]: 2025-10-07 14:03:04.715 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "29e49704-9ecd-4306-a3c4-88b69878387f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:04 np0005473739 nova_compute[259550]: 2025-10-07 14:03:04.716 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Acquiring lock "29e49704-9ecd-4306-a3c4-88b69878387f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:04 np0005473739 nova_compute[259550]: 2025-10-07 14:03:04.716 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "29e49704-9ecd-4306-a3c4-88b69878387f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:04 np0005473739 nova_compute[259550]: 2025-10-07 14:03:04.716 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "29e49704-9ecd-4306-a3c4-88b69878387f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:04 np0005473739 nova_compute[259550]: 2025-10-07 14:03:04.718 2 INFO nova.compute.manager [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Terminating instance
Oct  7 10:03:04 np0005473739 nova_compute[259550]: 2025-10-07 14:03:04.718 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Acquiring lock "refresh_cache-29e49704-9ecd-4306-a3c4-88b69878387f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:03:04 np0005473739 nova_compute[259550]: 2025-10-07 14:03:04.719 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Acquired lock "refresh_cache-29e49704-9ecd-4306-a3c4-88b69878387f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:03:04 np0005473739 nova_compute[259550]: 2025-10-07 14:03:04.719 2 DEBUG nova.network.neutron [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:03:04 np0005473739 nova_compute[259550]: 2025-10-07 14:03:04.972 2 DEBUG nova.network.neutron [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:03:05 np0005473739 nova_compute[259550]: 2025-10-07 14:03:05.314 2 DEBUG nova.network.neutron [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:03:05 np0005473739 nova_compute[259550]: 2025-10-07 14:03:05.375 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Releasing lock "refresh_cache-29e49704-9ecd-4306-a3c4-88b69878387f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:03:05 np0005473739 nova_compute[259550]: 2025-10-07 14:03:05.376 2 DEBUG nova.compute.manager [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:03:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1072: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.7 MiB/s wr, 64 op/s
Oct  7 10:03:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:05 np0005473739 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Oct  7 10:03:05 np0005473739 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3.394s CPU time.
Oct  7 10:03:05 np0005473739 systemd-machined[214580]: Machine qemu-1-instance-00000001 terminated.
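The systemd lines above show the machine scope name in its escaped on-disk form, `machine-qemu\x2d1\x2dinstance\x2d00000001.scope`, where systemd encodes each `-` of the machine name `qemu-1-instance-00000001` as `\x2d`. A minimal sketch of undoing just that `\xNN` escaping when correlating unit names with machined's view (the full escaping rules live in `systemd-escape(1)`; this handles only the hex form):

```python
# Hypothetical helper: decode systemd's \xNN unit-name escapes, e.g. \x2d -> "-".
import re

def systemd_unescape(name: str) -> str:
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)),
                  name)

scope = r"machine-qemu\x2d1\x2dinstance\x2d00000001.scope"
machine = systemd_unescape(scope)  # "machine-qemu-1-instance-00000001.scope"
```

This makes it easy to match the `machine-…scope` systemd units against the `qemu-1-instance-00000001` machine name reported by systemd-machined.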
Oct  7 10:03:05 np0005473739 nova_compute[259550]: 2025-10-07 14:03:05.797 2 INFO nova.virt.libvirt.driver [-] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Instance destroyed successfully.
Oct  7 10:03:05 np0005473739 nova_compute[259550]: 2025-10-07 14:03:05.798 2 DEBUG nova.objects.instance [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lazy-loading 'resources' on Instance uuid 29e49704-9ecd-4306-a3c4-88b69878387f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.304 2 INFO nova.virt.libvirt.driver [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Deleting instance files /var/lib/nova/instances/29e49704-9ecd-4306-a3c4-88b69878387f_del
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.305 2 INFO nova.virt.libvirt.driver [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Deletion of /var/lib/nova/instances/29e49704-9ecd-4306-a3c4-88b69878387f_del complete
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.375 2 DEBUG nova.virt.libvirt.host [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.376 2 INFO nova.virt.libvirt.host [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] UEFI support detected
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.378 2 INFO nova.compute.manager [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Took 1.00 seconds to destroy the instance on the hypervisor.
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.378 2 DEBUG oslo.service.loopingcall [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.378 2 DEBUG nova.compute.manager [-] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.378 2 DEBUG nova.network.neutron [-] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.644 2 DEBUG nova.network.neutron [-] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.661 2 DEBUG nova.network.neutron [-] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.678 2 INFO nova.compute.manager [-] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Took 0.30 seconds to deallocate network for instance.
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.720 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.720 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:06 np0005473739 nova_compute[259550]: 2025-10-07 14:03:06.793 2 DEBUG oslo_concurrency.processutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:03:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321314415' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:03:07 np0005473739 nova_compute[259550]: 2025-10-07 14:03:07.264 2 DEBUG oslo_concurrency.processutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:07 np0005473739 nova_compute[259550]: 2025-10-07 14:03:07.272 2 DEBUG nova.compute.provider_tree [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  7 10:03:07 np0005473739 nova_compute[259550]: 2025-10-07 14:03:07.371 2 ERROR nova.scheduler.client.report [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] [req-c369d6cf-a897-4ad4-b102-ad8a93683548] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID cc5ee907-7908-4ad9-99df-64935eda6bff.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-c369d6cf-a897-4ad4-b102-ad8a93683548"}]}
Oct  7 10:03:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1073: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.2 MiB/s wr, 91 op/s
Oct  7 10:03:07 np0005473739 nova_compute[259550]: 2025-10-07 14:03:07.405 2 DEBUG nova.scheduler.client.report [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  7 10:03:07 np0005473739 nova_compute[259550]: 2025-10-07 14:03:07.433 2 DEBUG nova.scheduler.client.report [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  7 10:03:07 np0005473739 nova_compute[259550]: 2025-10-07 14:03:07.434 2 DEBUG nova.compute.provider_tree [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 0, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  7 10:03:07 np0005473739 nova_compute[259550]: 2025-10-07 14:03:07.452 2 DEBUG nova.scheduler.client.report [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  7 10:03:07 np0005473739 nova_compute[259550]: 2025-10-07 14:03:07.560 2 DEBUG nova.scheduler.client.report [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  7 10:03:07 np0005473739 nova_compute[259550]: 2025-10-07 14:03:07.728 2 DEBUG oslo_concurrency.processutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:03:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/672028784' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:03:08 np0005473739 nova_compute[259550]: 2025-10-07 14:03:08.179 2 DEBUG oslo_concurrency.processutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:08 np0005473739 nova_compute[259550]: 2025-10-07 14:03:08.186 2 DEBUG nova.compute.provider_tree [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  7 10:03:08 np0005473739 nova_compute[259550]: 2025-10-07 14:03:08.241 2 DEBUG nova.scheduler.client.report [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Updated inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff with generation 7 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 59, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Oct  7 10:03:08 np0005473739 nova_compute[259550]: 2025-10-07 14:03:08.242 2 DEBUG nova.compute.provider_tree [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Updating resource provider cc5ee907-7908-4ad9-99df-64935eda6bff generation from 7 to 8 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Oct  7 10:03:08 np0005473739 nova_compute[259550]: 2025-10-07 14:03:08.242 2 DEBUG nova.compute.provider_tree [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  7 10:03:08 np0005473739 nova_compute[259550]: 2025-10-07 14:03:08.268 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:08 np0005473739 nova_compute[259550]: 2025-10-07 14:03:08.298 2 INFO nova.scheduler.client.report [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Deleted allocations for instance 29e49704-9ecd-4306-a3c4-88b69878387f
Oct  7 10:03:08 np0005473739 nova_compute[259550]: 2025-10-07 14:03:08.369 2 DEBUG oslo_concurrency.lockutils [None req-5db25415-d34d-4c6b-94b1-a8d0650c7a89 2764f8731615493db51630d2f86e3188 03f09f47e414490480fddcbf6ce04aaf - - default default] Lock "29e49704-9ecd-4306-a3c4-88b69878387f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1074: 305 pgs: 305 active+clean; 60 MiB data, 203 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 141 op/s
Oct  7 10:03:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e130 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e130 do_prune osdmap full prune enabled
Oct  7 10:03:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 e131: 3 total, 3 up, 3 in
Oct  7 10:03:10 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e131: 3 total, 3 up, 3 in
Oct  7 10:03:10 np0005473739 nova_compute[259550]: 2025-10-07 14:03:10.789 2 DEBUG oslo_concurrency.processutils [None req-43a081cf-a5de-472e-9982-25a37477cb00 0aed82dcc4dc40748b0dc13938987104 26aa6c9262084498bd6fbfe3378c4efa - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:10 np0005473739 nova_compute[259550]: 2025-10-07 14:03:10.819 2 DEBUG oslo_concurrency.processutils [None req-43a081cf-a5de-472e-9982-25a37477cb00 0aed82dcc4dc40748b0dc13938987104 26aa6c9262084498bd6fbfe3378c4efa - - default default] CMD "env LANG=C uptime" returned: 0 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1076: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 149 op/s
Oct  7 10:03:11 np0005473739 nova_compute[259550]: 2025-10-07 14:03:11.842 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "b3438e68-f883-4d76-86d9-7396298ab465" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:11 np0005473739 nova_compute[259550]: 2025-10-07 14:03:11.843 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "b3438e68-f883-4d76-86d9-7396298ab465" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:11 np0005473739 nova_compute[259550]: 2025-10-07 14:03:11.883 2 DEBUG nova.compute.manager [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:03:11 np0005473739 nova_compute[259550]: 2025-10-07 14:03:11.968 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:11 np0005473739 nova_compute[259550]: 2025-10-07 14:03:11.969 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:11 np0005473739 nova_compute[259550]: 2025-10-07 14:03:11.980 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:03:11 np0005473739 nova_compute[259550]: 2025-10-07 14:03:11.981 2 INFO nova.compute.claims [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:03:12 np0005473739 podman[274356]: 2025-10-07 14:03:12.096671172 +0000 UTC m=+0.068456692 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:03:12 np0005473739 podman[274355]: 2025-10-07 14:03:12.108026115 +0000 UTC m=+0.081171862 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.119 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:03:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4011177506' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.544 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.426s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.552 2 DEBUG nova.compute.provider_tree [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.643 2 DEBUG nova.scheduler.client.report [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.744 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.745 2 DEBUG nova.compute.manager [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.800 2 DEBUG nova.compute.manager [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.801 2 DEBUG nova.network.neutron [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.823 2 INFO nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.845 2 DEBUG nova.compute.manager [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.922 2 DEBUG nova.compute.manager [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.923 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.924 2 INFO nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Creating image(s)
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.946 2 DEBUG nova.storage.rbd_utils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image b3438e68-f883-4d76-86d9-7396298ab465_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:03:12 np0005473739 nova_compute[259550]: 2025-10-07 14:03:12.974 2 DEBUG nova.storage.rbd_utils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image b3438e68-f883-4d76-86d9-7396298ab465_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.005 2 DEBUG nova.storage.rbd_utils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image b3438e68-f883-4d76-86d9-7396298ab465_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.010 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.094 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.096 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.096 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.097 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.120 2 DEBUG nova.storage.rbd_utils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image b3438e68-f883-4d76-86d9-7396298ab465_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.125 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b3438e68-f883-4d76-86d9-7396298ab465_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.289 2 DEBUG nova.network.neutron [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.289 2 DEBUG nova.compute.manager [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:03:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1077: 305 pgs: 305 active+clean; 41 MiB data, 194 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 149 op/s
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.407 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b3438e68-f883-4d76-86d9-7396298ab465_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.282s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.463 2 DEBUG nova.storage.rbd_utils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] resizing rbd image b3438e68-f883-4d76-86d9-7396298ab465_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.570 2 DEBUG nova.objects.instance [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lazy-loading 'migration_context' on Instance uuid b3438e68-f883-4d76-86d9-7396298ab465 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.588 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.588 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Ensure instance console log exists: /var/lib/nova/instances/b3438e68-f883-4d76-86d9-7396298ab465/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.589 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.589 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.589 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.591 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.596 2 WARNING nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.601 2 DEBUG nova.virt.libvirt.host [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.601 2 DEBUG nova.virt.libvirt.host [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.605 2 DEBUG nova.virt.libvirt.host [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.606 2 DEBUG nova.virt.libvirt.host [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.606 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.607 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.607 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.608 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.608 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.608 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.608 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.609 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.609 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.609 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.609 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.610 2 DEBUG nova.virt.hardware [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:03:13 np0005473739 nova_compute[259550]: 2025-10-07 14:03:13.614 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:03:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3681294753' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:03:14 np0005473739 nova_compute[259550]: 2025-10-07 14:03:14.046 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:14 np0005473739 nova_compute[259550]: 2025-10-07 14:03:14.070 2 DEBUG nova.storage.rbd_utils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image b3438e68-f883-4d76-86d9-7396298ab465_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:14 np0005473739 nova_compute[259550]: 2025-10-07 14:03:14.074 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:03:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2407816275' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:03:14 np0005473739 nova_compute[259550]: 2025-10-07 14:03:14.521 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:14 np0005473739 nova_compute[259550]: 2025-10-07 14:03:14.524 2 DEBUG nova.objects.instance [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3438e68-f883-4d76-86d9-7396298ab465 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:03:14 np0005473739 nova_compute[259550]: 2025-10-07 14:03:14.796 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  <uuid>b3438e68-f883-4d76-86d9-7396298ab465</uuid>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  <name>instance-00000002</name>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-1766854173</nova:name>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:03:13</nova:creationTime>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:        <nova:user uuid="29b159ffa5754a4ea36ea97967fc907f">tempest-DeleteServersAdminTestJSON-2055501431-project-member</nova:user>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:        <nova:project uuid="99c1b7cefd964764a69e1e53219287d2">tempest-DeleteServersAdminTestJSON-2055501431</nova:project>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <entry name="serial">b3438e68-f883-4d76-86d9-7396298ab465</entry>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <entry name="uuid">b3438e68-f883-4d76-86d9-7396298ab465</entry>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b3438e68-f883-4d76-86d9-7396298ab465_disk">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b3438e68-f883-4d76-86d9-7396298ab465_disk.config">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/b3438e68-f883-4d76-86d9-7396298ab465/console.log" append="off"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:03:14 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:03:14 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:03:14 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:03:14 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:03:14 np0005473739 nova_compute[259550]: 2025-10-07 14:03:14.877 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:03:14 np0005473739 nova_compute[259550]: 2025-10-07 14:03:14.877 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:03:14 np0005473739 nova_compute[259550]: 2025-10-07 14:03:14.878 2 INFO nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Using config drive#033[00m
Oct  7 10:03:14 np0005473739 nova_compute[259550]: 2025-10-07 14:03:14.899 2 DEBUG nova.storage.rbd_utils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image b3438e68-f883-4d76-86d9-7396298ab465_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1078: 305 pgs: 305 active+clean; 71 MiB data, 202 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.5 MiB/s wr, 124 op/s
Oct  7 10:03:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.541 2 INFO nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Creating config drive at /var/lib/nova/instances/b3438e68-f883-4d76-86d9-7396298ab465/disk.config
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.547 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3438e68-f883-4d76-86d9-7396298ab465/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzd9nmaa8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.677 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3438e68-f883-4d76-86d9-7396298ab465/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzd9nmaa8" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.706 2 DEBUG nova.storage.rbd_utils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image b3438e68-f883-4d76-86d9-7396298ab465_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.711 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3438e68-f883-4d76-86d9-7396298ab465/disk.config b3438e68-f883-4d76-86d9-7396298ab465_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.897 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.898 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.918 2 DEBUG nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.985 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.985 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.992 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:03:15 np0005473739 nova_compute[259550]: 2025-10-07 14:03:15.992 2 INFO nova.compute.claims [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.035 2 DEBUG oslo_concurrency.processutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3438e68-f883-4d76-86d9-7396298ab465/disk.config b3438e68-f883-4d76-86d9-7396298ab465_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.036 2 INFO nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Deleting local config drive /var/lib/nova/instances/b3438e68-f883-4d76-86d9-7396298ab465/disk.config because it was imported into RBD.
Oct  7 10:03:16 np0005473739 systemd-machined[214580]: New machine qemu-2-instance-00000002.
Oct  7 10:03:16 np0005473739 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.123 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:03:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2111380603' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.552 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.561 2 DEBUG nova.compute.provider_tree [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.579 2 DEBUG nova.scheduler.client.report [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.604 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.605 2 DEBUG nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.667 2 DEBUG nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.667 2 DEBUG nova.network.neutron [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.842 2 INFO nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.863 2 DEBUG nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.993 2 DEBUG nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.995 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:03:16 np0005473739 nova_compute[259550]: 2025-10-07 14:03:16.995 2 INFO nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Creating image(s)
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.019 2 DEBUG nova.storage.rbd_utils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.042 2 DEBUG nova.storage.rbd_utils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.068 2 DEBUG nova.storage.rbd_utils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.072 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.130 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.131 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.132 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.132 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.159 2 DEBUG nova.storage.rbd_utils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.164 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:03:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1079: 305 pgs: 305 active+clean; 88 MiB data, 208 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 104 op/s
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.390 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845797.3896248, b3438e68-f883-4d76-86d9-7396298ab465 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.391 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3438e68-f883-4d76-86d9-7396298ab465] VM Resumed (Lifecycle Event)
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.395 2 DEBUG nova.compute.manager [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.395 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.399 2 INFO nova.virt.libvirt.driver [-] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Instance spawned successfully.
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.400 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.448 2 WARNING oslo_policy.policy [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.449 2 WARNING oslo_policy.policy [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.452 2 DEBUG nova.policy [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2d78803d42674b89b4ea28d9a2442357', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '380b73085cef431383bee110ceaefb15', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.457 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.460 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.461 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.461 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.462 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.462 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.463 2 DEBUG nova.virt.libvirt.driver [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.469 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.498 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3438e68-f883-4d76-86d9-7396298ab465] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.499 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845797.3944774, b3438e68-f883-4d76-86d9-7396298ab465 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.499 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3438e68-f883-4d76-86d9-7396298ab465] VM Started (Lifecycle Event)
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.529 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.533 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.584 2 INFO nova.compute.manager [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Took 4.66 seconds to spawn the instance on the hypervisor.
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.585 2 DEBUG nova.compute.manager [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.586 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3438e68-f883-4d76-86d9-7396298ab465] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.642 2 INFO nova.compute.manager [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Took 5.71 seconds to build instance.
Oct  7 10:03:17 np0005473739 nova_compute[259550]: 2025-10-07 14:03:17.661 2 DEBUG oslo_concurrency.lockutils [None req-d5214926-293c-49b0-9a20-541b678709bc 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "b3438e68-f883-4d76-86d9-7396298ab465" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.818s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.518 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.354s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.595 2 DEBUG nova.storage.rbd_utils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] resizing rbd image 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.803 2 DEBUG nova.objects.instance [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lazy-loading 'migration_context' on Instance uuid 5aa06cd5-91e7-4797-83c0-ddd3966533ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.841 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.842 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Ensure instance console log exists: /var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.843 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.843 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.843 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.867 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Acquiring lock "b3438e68-f883-4d76-86d9-7396298ab465" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.868 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Lock "b3438e68-f883-4d76-86d9-7396298ab465" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.868 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Acquiring lock "b3438e68-f883-4d76-86d9-7396298ab465-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.868 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Lock "b3438e68-f883-4d76-86d9-7396298ab465-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.868 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Lock "b3438e68-f883-4d76-86d9-7396298ab465-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.870 2 INFO nova.compute.manager [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Terminating instance#033[00m
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.870 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Acquiring lock "refresh_cache-b3438e68-f883-4d76-86d9-7396298ab465" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.871 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Acquired lock "refresh_cache-b3438e68-f883-4d76-86d9-7396298ab465" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:03:18 np0005473739 nova_compute[259550]: 2025-10-07 14:03:18.871 2 DEBUG nova.network.neutron [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:03:19 np0005473739 nova_compute[259550]: 2025-10-07 14:03:19.380 2 DEBUG nova.network.neutron [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:03:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1080: 305 pgs: 305 active+clean; 123 MiB data, 227 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 117 op/s
Oct  7 10:03:19 np0005473739 nova_compute[259550]: 2025-10-07 14:03:19.462 2 DEBUG nova.network.neutron [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Successfully created port: 8bbf9c96-17e6-49df-8a58-e3557085f576 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:03:19 np0005473739 nova_compute[259550]: 2025-10-07 14:03:19.654 2 DEBUG nova.network.neutron [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:03:19 np0005473739 nova_compute[259550]: 2025-10-07 14:03:19.670 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Releasing lock "refresh_cache-b3438e68-f883-4d76-86d9-7396298ab465" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:03:19 np0005473739 nova_compute[259550]: 2025-10-07 14:03:19.670 2 DEBUG nova.compute.manager [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:03:19 np0005473739 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Oct  7 10:03:19 np0005473739 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 3.577s CPU time.
Oct  7 10:03:19 np0005473739 systemd-machined[214580]: Machine qemu-2-instance-00000002 terminated.
Oct  7 10:03:19 np0005473739 nova_compute[259550]: 2025-10-07 14:03:19.893 2 INFO nova.virt.libvirt.driver [-] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Instance destroyed successfully.#033[00m
Oct  7 10:03:19 np0005473739 nova_compute[259550]: 2025-10-07 14:03:19.893 2 DEBUG nova.objects.instance [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Lazy-loading 'resources' on Instance uuid b3438e68-f883-4d76-86d9-7396298ab465 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:03:19 np0005473739 nova_compute[259550]: 2025-10-07 14:03:19.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.366 2 INFO nova.virt.libvirt.driver [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Deleting instance files /var/lib/nova/instances/b3438e68-f883-4d76-86d9-7396298ab465_del#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.367 2 INFO nova.virt.libvirt.driver [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Deletion of /var/lib/nova/instances/b3438e68-f883-4d76-86d9-7396298ab465_del complete#033[00m
Oct  7 10:03:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.472 2 INFO nova.compute.manager [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.473 2 DEBUG oslo.service.loopingcall [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.473 2 DEBUG nova.compute.manager [-] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.474 2 DEBUG nova.network.neutron [-] [instance: b3438e68-f883-4d76-86d9-7396298ab465] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.793 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845785.792529, 29e49704-9ecd-4306-a3c4-88b69878387f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.794 2 INFO nova.compute.manager [-] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.824 2 DEBUG nova.compute.manager [None req-51018d0f-5f4c-423f-9b47-ffac9f587c10 - - - - - -] [instance: 29e49704-9ecd-4306-a3c4-88b69878387f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.910 2 DEBUG nova.network.neutron [-] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.929 2 DEBUG nova.network.neutron [-] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.948 2 INFO nova.compute.manager [-] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Took 0.47 seconds to deallocate network for instance.#033[00m
Oct  7 10:03:20 np0005473739 nova_compute[259550]: 2025-10-07 14:03:20.976 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.013 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.014 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.078 2 DEBUG nova.network.neutron [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Successfully updated port: 8bbf9c96-17e6-49df-8a58-e3557085f576 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.104 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.105 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquired lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.105 2 DEBUG nova.network.neutron [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.122 2 DEBUG oslo_concurrency.processutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1081: 305 pgs: 305 active+clean; 117 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.9 MiB/s wr, 142 op/s
Oct  7 10:03:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:03:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2846713959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.578 2 DEBUG oslo_concurrency.processutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.585 2 DEBUG nova.compute.provider_tree [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.597 2 DEBUG nova.network.neutron [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.606 2 DEBUG nova.compute.manager [req-46e0012b-d98a-4041-8aed-df95b7866ea9 req-7ed6d223-5b8f-4895-b465-9a5bb6bf1b59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-changed-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.606 2 DEBUG nova.compute.manager [req-46e0012b-d98a-4041-8aed-df95b7866ea9 req-7ed6d223-5b8f-4895-b465-9a5bb6bf1b59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Refreshing instance network info cache due to event network-changed-8bbf9c96-17e6-49df-8a58-e3557085f576. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.607 2 DEBUG oslo_concurrency.lockutils [req-46e0012b-d98a-4041-8aed-df95b7866ea9 req-7ed6d223-5b8f-4895-b465-9a5bb6bf1b59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.608 2 DEBUG nova.scheduler.client.report [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.632 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.618s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.660 2 INFO nova.scheduler.client.report [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Deleted allocations for instance b3438e68-f883-4d76-86d9-7396298ab465#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.734 2 DEBUG oslo_concurrency.lockutils [None req-ae318705-4d3f-4294-802b-edd665a1aa56 dfd5c2a962f244bebf3d01b87e41963d 3cf4b25aff7d4106870864cb9ccb83ad - - default default] Lock "b3438e68-f883-4d76-86d9-7396298ab465" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:21 np0005473739 nova_compute[259550]: 2025-10-07 14:03:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.014 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.014 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:03:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/718577786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.478 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:03:22
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', 'vms']
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.758 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.760 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5107MB free_disk=59.95278549194336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.760 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.760 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:03:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.837 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 5aa06cd5-91e7-4797-83c0-ddd3966533ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.837 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.837 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:03:22 np0005473739 nova_compute[259550]: 2025-10-07 14:03:22.873 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:03:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712887103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:03:23 np0005473739 nova_compute[259550]: 2025-10-07 14:03:23.337 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:23 np0005473739 nova_compute[259550]: 2025-10-07 14:03:23.346 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:03:23 np0005473739 nova_compute[259550]: 2025-10-07 14:03:23.366 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:03:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1082: 305 pgs: 305 active+clean; 117 MiB data, 231 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.6 MiB/s wr, 129 op/s
Oct  7 10:03:23 np0005473739 nova_compute[259550]: 2025-10-07 14:03:23.412 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:03:23 np0005473739 nova_compute[259550]: 2025-10-07 14:03:23.412 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:23 np0005473739 nova_compute[259550]: 2025-10-07 14:03:23.789 2 DEBUG nova.network.neutron [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Updating instance_info_cache with network_info: [{"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.034 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Releasing lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.035 2 DEBUG nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Instance network_info: |[{"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.036 2 DEBUG oslo_concurrency.lockutils [req-46e0012b-d98a-4041-8aed-df95b7866ea9 req-7ed6d223-5b8f-4895-b465-9a5bb6bf1b59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.036 2 DEBUG nova.network.neutron [req-46e0012b-d98a-4041-8aed-df95b7866ea9 req-7ed6d223-5b8f-4895-b465-9a5bb6bf1b59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Refreshing network info cache for port 8bbf9c96-17e6-49df-8a58-e3557085f576 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.039 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Start _get_guest_xml network_info=[{"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.046 2 WARNING nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.055 2 DEBUG nova.virt.libvirt.host [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.056 2 DEBUG nova.virt.libvirt.host [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.064 2 DEBUG nova.virt.libvirt.host [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.065 2 DEBUG nova.virt.libvirt.host [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.065 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.065 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:03:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='1734034173',id=20,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_0-460437147',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.066 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.066 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.066 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.066 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.067 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.067 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.067 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.067 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.068 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.068 2 DEBUG nova.virt.hardware [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.071 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:24 np0005473739 podman[275041]: 2025-10-07 14:03:24.113879618 +0000 UTC m=+0.096607275 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.414 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.414 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.414 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:03:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:03:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1739257128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.553 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.582 2 DEBUG nova.storage.rbd_utils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.588 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.611 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.612 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.612 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.613 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:03:24 np0005473739 nova_compute[259550]: 2025-10-07 14:03:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:03:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:03:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2982264935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.061 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.062 2 DEBUG nova.virt.libvirt.vif [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:03:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-741383512',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-741383512',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-741383512',id=3,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYfAR9mj2imemzusfPwwtddn0lEjKMvdiXZQNbuwqTJFtx1U2M3BRUSevXd6qU4D5KOSepLHIPjFK2NZ957Ri2Kv5dYObirVp8T/b/ktoKbgTEgyyASZo/0n0wfyQLhEw==',key_name='tempest-keypair-1528796341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='380b73085cef431383bee110ceaefb15',ramdisk_id='',reservation_id='r-9ayb8exy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:03:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2d78803d42674b89b4ea28d9a2442357',uuid=5aa06cd5-91e7-4797-83c0-ddd3966533ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.062 2 DEBUG nova.network.os_vif_util [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converting VIF {"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.063 2 DEBUG nova.network.os_vif_util [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:39:1a,bridge_name='br-int',has_traffic_filtering=True,id=8bbf9c96-17e6-49df-8a58-e3557085f576,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bbf9c96-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.065 2 DEBUG nova.objects.instance [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5aa06cd5-91e7-4797-83c0-ddd3966533ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.103 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  <uuid>5aa06cd5-91e7-4797-83c0-ddd3966533ce</uuid>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  <name>instance-00000003</name>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-741383512</nova:name>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:03:24</nova:creationTime>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <nova:flavor name="tempest-flavor_with_ephemeral_0-460437147">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <nova:user uuid="2d78803d42674b89b4ea28d9a2442357">tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member</nova:user>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <nova:project uuid="380b73085cef431383bee110ceaefb15">tempest-ServersWithSpecificFlavorTestJSON-1902159686</nova:project>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <nova:port uuid="8bbf9c96-17e6-49df-8a58-e3557085f576">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <entry name="serial">5aa06cd5-91e7-4797-83c0-ddd3966533ce</entry>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <entry name="uuid">5aa06cd5-91e7-4797-83c0-ddd3966533ce</entry>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk.config">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:09:39:1a"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <target dev="tap8bbf9c96-17"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce/console.log" append="off"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:03:25 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:03:25 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:03:25 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:03:25 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.104 2 DEBUG nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Preparing to wait for external event network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.104 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.104 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.104 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.106 2 DEBUG nova.virt.libvirt.vif [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:03:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-741383512',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-741383512',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-741383512',id=3,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYfAR9mj2imemzusfPwwtddn0lEjKMvdiXZQNbuwqTJFtx1U2M3BRUSevXd6qU4D5KOSepLHIPjFK2NZ957Ri2Kv5dYObirVp8T/b/ktoKbgTEgyyASZo/0n0wfyQLhEw==',key_name='tempest-keypair-1528796341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='380b73085cef431383bee110ceaefb15',ramdisk_id='',reservation_id='r-9ayb8exy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:03:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2d78803d42674b89b4ea28d9a2442357',uuid=5aa06cd5-91e7-4797-83c0-ddd3966533ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.106 2 DEBUG nova.network.os_vif_util [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converting VIF {"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.106 2 DEBUG nova.network.os_vif_util [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:39:1a,bridge_name='br-int',has_traffic_filtering=True,id=8bbf9c96-17e6-49df-8a58-e3557085f576,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bbf9c96-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.107 2 DEBUG os_vif [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:39:1a,bridge_name='br-int',has_traffic_filtering=True,id=8bbf9c96-17e6-49df-8a58-e3557085f576,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bbf9c96-17') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.157 2 DEBUG ovsdbapp.backend.ovs_idl [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.158 2 DEBUG ovsdbapp.backend.ovs_idl [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.158 2 DEBUG ovsdbapp.backend.ovs_idl [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [POLLOUT] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.179 2 INFO oslo.privsep.daemon [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmp2tcx9_fp/privsep.sock']#033[00m
Oct  7 10:03:25 np0005473739 nova_compute[259550]: 2025-10-07 14:03:25.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1083: 305 pgs: 305 active+clean; 88 MiB data, 224 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 154 op/s
Oct  7 10:03:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:26 np0005473739 nova_compute[259550]: 2025-10-07 14:03:26.425 2 INFO oslo.privsep.daemon [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Oct  7 10:03:26 np0005473739 nova_compute[259550]: 2025-10-07 14:03:26.240 1596 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  7 10:03:26 np0005473739 nova_compute[259550]: 2025-10-07 14:03:26.249 1596 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  7 10:03:26 np0005473739 nova_compute[259550]: 2025-10-07 14:03:26.252 1596 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Oct  7 10:03:26 np0005473739 nova_compute[259550]: 2025-10-07 14:03:26.252 1596 INFO oslo.privsep.daemon [-] privsep daemon running as pid 1596#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.009 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8bbf9c96-17, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.009 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8bbf9c96-17, col_values=(('external_ids', {'iface-id': '8bbf9c96-17e6-49df-8a58-e3557085f576', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:39:1a', 'vm-uuid': '5aa06cd5-91e7-4797-83c0-ddd3966533ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:27 np0005473739 NetworkManager[44949]: <info>  [1759845807.0581] manager: (tap8bbf9c96-17): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/23)
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.069 2 INFO os_vif [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:39:1a,bridge_name='br-int',has_traffic_filtering=True,id=8bbf9c96-17e6-49df-8a58-e3557085f576,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bbf9c96-17')#033[00m
Oct  7 10:03:27 np0005473739 podman[275136]: 2025-10-07 14:03:27.129270743 +0000 UTC m=+0.058939967 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.144 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.144 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.144 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] No VIF found with MAC fa:16:3e:09:39:1a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.145 2 INFO nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Using config drive#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.165 2 DEBUG nova.storage.rbd_utils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.370 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.371 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1084: 305 pgs: 305 active+clean; 88 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 141 op/s
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.554 2 DEBUG nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.629 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.630 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.636 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.637 2 INFO nova.compute.claims [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.787 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.951 2 DEBUG nova.network.neutron [req-46e0012b-d98a-4041-8aed-df95b7866ea9 req-7ed6d223-5b8f-4895-b465-9a5bb6bf1b59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Updated VIF entry in instance network info cache for port 8bbf9c96-17e6-49df-8a58-e3557085f576. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.952 2 DEBUG nova.network.neutron [req-46e0012b-d98a-4041-8aed-df95b7866ea9 req-7ed6d223-5b8f-4895-b465-9a5bb6bf1b59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Updating instance_info_cache with network_info: [{"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:03:27 np0005473739 nova_compute[259550]: 2025-10-07 14:03:27.982 2 DEBUG oslo_concurrency.lockutils [req-46e0012b-d98a-4041-8aed-df95b7866ea9 req-7ed6d223-5b8f-4895-b465-9a5bb6bf1b59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.135 2 INFO nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Creating config drive at /var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce/disk.config#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.141 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptcrcpaf4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.271 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptcrcpaf4" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.314 2 DEBUG nova.storage.rbd_utils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.325 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce/disk.config 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:03:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/167638052' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.364 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.372 2 DEBUG nova.compute.provider_tree [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.389 2 DEBUG nova.scheduler.client.report [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.417 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.788s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.419 2 DEBUG nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.469 2 DEBUG nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.469 2 DEBUG nova.network.neutron [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.490 2 INFO nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.515 2 DEBUG nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.618 2 DEBUG nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.620 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.621 2 INFO nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Creating image(s)#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.646 2 DEBUG nova.storage.rbd_utils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.677 2 DEBUG nova.storage.rbd_utils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.705 2 DEBUG nova.storage.rbd_utils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.709 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.781 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.783 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.783 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.784 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.811 2 DEBUG nova.storage.rbd_utils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.817 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.936 2 DEBUG oslo_concurrency.processutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce/disk.config 5aa06cd5-91e7-4797-83c0-ddd3966533ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.937 2 INFO nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Deleting local config drive /var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce/disk.config because it was imported into RBD.#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.977 2 DEBUG nova.network.neutron [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  7 10:03:28 np0005473739 nova_compute[259550]: 2025-10-07 14:03:28.978 2 DEBUG nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:03:29 np0005473739 kernel: tun: Universal TUN/TAP device driver, 1.6
Oct  7 10:03:29 np0005473739 kernel: tap8bbf9c96-17: entered promiscuous mode
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:03:29Z|00027|binding|INFO|Claiming lport 8bbf9c96-17e6-49df-8a58-e3557085f576 for this chassis.
Oct  7 10:03:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:03:29Z|00028|binding|INFO|8bbf9c96-17e6-49df-8a58-e3557085f576: Claiming fa:16:3e:09:39:1a 10.100.0.5
Oct  7 10:03:29 np0005473739 NetworkManager[44949]: <info>  [1759845809.0231] manager: (tap8bbf9c96-17): new Tun device (/org/freedesktop/NetworkManager/Devices/24)
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:29.036 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:39:1a 10.100.0.5'], port_security=['fa:16:3e:09:39:1a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5aa06cd5-91e7-4797-83c0-ddd3966533ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '380b73085cef431383bee110ceaefb15', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd3b7853b-d94c-445c-810d-9b4dd15d78f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f05683d-d61c-46e7-a8b2-e5ebf47fffcc, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8bbf9c96-17e6-49df-8a58-e3557085f576) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:03:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:29.038 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8bbf9c96-17e6-49df-8a58-e3557085f576 in datapath 384938fa-4eb0-4ec5-a6a4-bc65721ba22a bound to our chassis#033[00m
Oct  7 10:03:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:29.040 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 384938fa-4eb0-4ec5-a6a4-bc65721ba22a#033[00m
Oct  7 10:03:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:29.042 161536 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpl807twbq/privsep.sock']#033[00m
Oct  7 10:03:29 np0005473739 systemd-udevd[275348]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:03:29 np0005473739 NetworkManager[44949]: <info>  [1759845809.0830] device (tap8bbf9c96-17): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:03:29 np0005473739 NetworkManager[44949]: <info>  [1759845809.0838] device (tap8bbf9c96-17): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:03:29 np0005473739 systemd-machined[214580]: New machine qemu-3-instance-00000003.
Oct  7 10:03:29 np0005473739 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:03:29Z|00029|binding|INFO|Setting lport 8bbf9c96-17e6-49df-8a58-e3557085f576 ovn-installed in OVS
Oct  7 10:03:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:03:29Z|00030|binding|INFO|Setting lport 8bbf9c96-17e6-49df-8a58-e3557085f576 up in Southbound
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1085: 305 pgs: 305 active+clean; 88 MiB data, 211 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.494 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.677s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.579 2 DEBUG nova.storage.rbd_utils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] resizing rbd image 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.764 2 DEBUG nova.objects.instance [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lazy-loading 'migration_context' on Instance uuid 39a26aa2-978f-4d33-bc4e-fd4bfc81d380 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.789 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.790 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Ensure instance console log exists: /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.791 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.792 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.792 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.794 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.799 2 WARNING nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.804 2 DEBUG nova.virt.libvirt.host [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.805 2 DEBUG nova.virt.libvirt.host [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.810 2 DEBUG nova.virt.libvirt.host [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.811 2 DEBUG nova.virt.libvirt.host [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.811 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.811 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.812 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.812 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.812 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.813 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.813 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.813 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.813 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.813 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.814 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.814 2 DEBUG nova.virt.hardware [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.817 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.848 2 DEBUG nova.compute.manager [req-6be1bc68-7674-4f70-a83e-2a5a73437d15 req-6c6ba9ec-de31-4c76-a555-f13d4a874536 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.848 2 DEBUG oslo_concurrency.lockutils [req-6be1bc68-7674-4f70-a83e-2a5a73437d15 req-6c6ba9ec-de31-4c76-a555-f13d4a874536 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.848 2 DEBUG oslo_concurrency.lockutils [req-6be1bc68-7674-4f70-a83e-2a5a73437d15 req-6c6ba9ec-de31-4c76-a555-f13d4a874536 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.849 2 DEBUG oslo_concurrency.lockutils [req-6be1bc68-7674-4f70-a83e-2a5a73437d15 req-6c6ba9ec-de31-4c76-a555-f13d4a874536 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:29 np0005473739 nova_compute[259550]: 2025-10-07 14:03:29.849 2 DEBUG nova.compute.manager [req-6be1bc68-7674-4f70-a83e-2a5a73437d15 req-6c6ba9ec-de31-4c76-a555-f13d4a874536 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Processing event network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.067 2 DEBUG nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.068 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845810.0682135, 5aa06cd5-91e7-4797-83c0-ddd3966533ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.069 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] VM Started (Lifecycle Event)#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.096 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.098 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.104 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.114 2 INFO nova.virt.libvirt.driver [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Instance spawned successfully.#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.115 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.124 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.124 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845810.0692296, 5aa06cd5-91e7-4797-83c0-ddd3966533ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.124 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.151 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.157 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.158 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.159 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.160 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:30 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.161 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.162 2 DEBUG nova.virt.libvirt.driver [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.170 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845810.0830505, 5aa06cd5-91e7-4797-83c0-ddd3966533ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.170 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.215 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.224 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.228 2 INFO nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Took 13.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.229 2 DEBUG nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.246 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:03:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.260 161536 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  7 10:03:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.261 161536 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpl807twbq/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  7 10:03:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.072 275502 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  7 10:03:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.077 275502 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  7 10:03:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.080 275502 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Oct  7 10:03:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.080 275502 INFO oslo.privsep.daemon [-] privsep daemon running as pid 275502#033[00m
Oct  7 10:03:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:30.264 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0bb2f14-9f7d-447e-a1cf-ae3b2fa31b64]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:03:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1955050984' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.305 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.327 2 DEBUG nova.storage.rbd_utils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.379 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.402 2 INFO nova.compute.manager [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Took 14.44 seconds to build instance.#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.428 2 DEBUG oslo_concurrency.lockutils [None req-99325ee2-9a1b-42ec-9094-7039d7634013 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:03:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1394659432' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.882 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.884 2 DEBUG nova.objects.instance [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 39a26aa2-978f-4d33-bc4e-fd4bfc81d380 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:03:30 np0005473739 nova_compute[259550]: 2025-10-07 14:03:30.970 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  <uuid>39a26aa2-978f-4d33-bc4e-fd4bfc81d380</uuid>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  <name>instance-00000004</name>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <nova:name>tempest-DeleteServersAdminTestJSON-server-782175462</nova:name>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:03:29</nova:creationTime>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:        <nova:user uuid="29b159ffa5754a4ea36ea97967fc907f">tempest-DeleteServersAdminTestJSON-2055501431-project-member</nova:user>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:        <nova:project uuid="99c1b7cefd964764a69e1e53219287d2">tempest-DeleteServersAdminTestJSON-2055501431</nova:project>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <entry name="serial">39a26aa2-978f-4d33-bc4e-fd4bfc81d380</entry>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <entry name="uuid">39a26aa2-978f-4d33-bc4e-fd4bfc81d380</entry>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/console.log" append="off"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:03:30 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:03:30 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:03:30 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:03:30 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:03:31 np0005473739 nova_compute[259550]: 2025-10-07 14:03:31.146 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:03:31 np0005473739 nova_compute[259550]: 2025-10-07 14:03:31.147 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:03:31 np0005473739 nova_compute[259550]: 2025-10-07 14:03:31.147 2 INFO nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Using config drive#033[00m
Oct  7 10:03:31 np0005473739 nova_compute[259550]: 2025-10-07 14:03:31.168 2 DEBUG nova.storage.rbd_utils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:31 np0005473739 nova_compute[259550]: 2025-10-07 14:03:31.359 2 INFO nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Creating config drive at /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config#033[00m
Oct  7 10:03:31 np0005473739 nova_compute[259550]: 2025-10-07 14:03:31.364 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv8qdrg66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1086: 305 pgs: 305 active+clean; 107 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 789 KiB/s rd, 943 KiB/s wr, 80 op/s
Oct  7 10:03:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:31.481 275502 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:31.481 275502 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:31.481 275502 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:31 np0005473739 nova_compute[259550]: 2025-10-07 14:03:31.493 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpv8qdrg66" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:31 np0005473739 nova_compute[259550]: 2025-10-07 14:03:31.524 2 DEBUG nova.storage.rbd_utils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] rbd image 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:31 np0005473739 nova_compute[259550]: 2025-10-07 14:03:31.529 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:31 np0005473739 nova_compute[259550]: 2025-10-07 14:03:31.743 2 DEBUG oslo_concurrency.processutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config 39a26aa2-978f-4d33-bc4e-fd4bfc81d380_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:31 np0005473739 nova_compute[259550]: 2025-10-07 14:03:31.744 2 INFO nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Deleting local config drive /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380/disk.config because it was imported into RBD.#033[00m
Oct  7 10:03:31 np0005473739 systemd-machined[214580]: New machine qemu-4-instance-00000004.
Oct  7 10:03:31 np0005473739 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.005 2 DEBUG nova.compute.manager [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.005 2 DEBUG oslo_concurrency.lockutils [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.006 2 DEBUG oslo_concurrency.lockutils [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.006 2 DEBUG oslo_concurrency.lockutils [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.006 2 DEBUG nova.compute.manager [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] No waiting events found dispatching network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.006 2 WARNING nova.compute.manager [req-c078c9c5-29f4-4294-be67-45ed847ffa96 req-c51e1f98-d8dc-4f47-bff5-41d37f69545f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received unexpected event network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00046802760955915867 of space, bias 1.0, pg target 0.1404082828677476 quantized to 32 (current 32)
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:03:32 np0005473739 NetworkManager[44949]: <info>  [1759845812.6615] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/25)
Oct  7 10:03:32 np0005473739 NetworkManager[44949]: <info>  [1759845812.6622] device (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 10:03:32 np0005473739 NetworkManager[44949]: <info>  [1759845812.6632] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/26)
Oct  7 10:03:32 np0005473739 NetworkManager[44949]: <info>  [1759845812.6634] device (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:32 np0005473739 NetworkManager[44949]: <info>  [1759845812.6644] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Oct  7 10:03:32 np0005473739 NetworkManager[44949]: <info>  [1759845812.6650] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/28)
Oct  7 10:03:32 np0005473739 NetworkManager[44949]: <info>  [1759845812.6654] device (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  7 10:03:32 np0005473739 NetworkManager[44949]: <info>  [1759845812.6657] device (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Oct  7 10:03:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:03:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1928728611' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:03:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:03:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1928728611' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:03:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.713 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4a5736-3c68-4957-91a4-69db6ce26f6e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.715 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap384938fa-41 in ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:03:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.718 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap384938fa-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:03:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.718 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c614dae5-dba9-474c-89be-210816963151]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.724 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5d9cee2c-ae28-4d95-934b-242ee1da200b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.759 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[24c73748-3093-42d9-8690-d0365242526d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.803 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845812.8012054, 39a26aa2-978f-4d33-bc4e-fd4bfc81d380 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.803 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:03:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.803 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f5c6f194-2567-472f-be3c-d2c5c94a7b4c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:32.805 161536 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpy8uxhs20/privsep.sock']#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.806 2 DEBUG nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.806 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.810 2 INFO nova.virt.libvirt.driver [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Instance spawned successfully.#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.810 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.861 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.866 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.891 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.892 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.893 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.893 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.893 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.894 2 DEBUG nova.virt.libvirt.driver [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.996 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.997 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845812.8025327, 39a26aa2-978f-4d33-bc4e-fd4bfc81d380 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:03:32 np0005473739 nova_compute[259550]: 2025-10-07 14:03:32.997 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] VM Started (Lifecycle Event)#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.026 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.031 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.037 2 INFO nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Took 4.42 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.038 2 DEBUG nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.054 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.106 2 INFO nova.compute.manager [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Took 5.50 seconds to build instance.#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.131 2 DEBUG oslo_concurrency.lockutils [None req-6534fde7-9cb9-4bc2-be7f-3274efc2be0b 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.148 2 DEBUG nova.compute.manager [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-changed-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.148 2 DEBUG nova.compute.manager [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Refreshing instance network info cache due to event network-changed-8bbf9c96-17e6-49df-8a58-e3557085f576. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.148 2 DEBUG oslo_concurrency.lockutils [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.149 2 DEBUG oslo_concurrency.lockutils [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:03:33 np0005473739 nova_compute[259550]: 2025-10-07 14:03:33.149 2 DEBUG nova.network.neutron [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Refreshing network info cache for port 8bbf9c96-17e6-49df-8a58-e3557085f576 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:03:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1087: 305 pgs: 305 active+clean; 107 MiB data, 218 MiB used, 60 GiB / 60 GiB avail; 180 KiB/s rd, 640 KiB/s wr, 46 op/s
Oct  7 10:03:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.616 161536 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Oct  7 10:03:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.617 161536 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpy8uxhs20/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Oct  7 10:03:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.410 275676 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Oct  7 10:03:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.415 275676 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Oct  7 10:03:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.418 275676 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Oct  7 10:03:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.418 275676 INFO oslo.privsep.daemon [-] privsep daemon running as pid 275676#033[00m
Oct  7 10:03:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:33.620 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eea3c6ba-4503-4e08-877e-f6a6eb985057]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:34.312 275676 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:34.312 275676 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:34.312 275676 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:34 np0005473739 nova_compute[259550]: 2025-10-07 14:03:34.621 2 DEBUG nova.network.neutron [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Updated VIF entry in instance network info cache for port 8bbf9c96-17e6-49df-8a58-e3557085f576. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:03:34 np0005473739 nova_compute[259550]: 2025-10-07 14:03:34.622 2 DEBUG nova.network.neutron [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Updating instance_info_cache with network_info: [{"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:03:34 np0005473739 nova_compute[259550]: 2025-10-07 14:03:34.640 2 DEBUG oslo_concurrency.lockutils [req-19297643-f452-4e84-93c2-95ccefd25712 req-6d1e7d49-d147-4455-8e63-399822cd6f05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5aa06cd5-91e7-4797-83c0-ddd3966533ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:03:34 np0005473739 nova_compute[259550]: 2025-10-07 14:03:34.891 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845799.8912003, b3438e68-f883-4d76-86d9-7396298ab465 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:03:34 np0005473739 nova_compute[259550]: 2025-10-07 14:03:34.892 2 INFO nova.compute.manager [-] [instance: b3438e68-f883-4d76-86d9-7396298ab465] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:03:34 np0005473739 nova_compute[259550]: 2025-10-07 14:03:34.923 2 DEBUG nova.compute.manager [None req-cef3d681-e454-485b-bdb7-0ef6404bf70f - - - - - -] [instance: b3438e68-f883-4d76-86d9-7396298ab465] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.096 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ce49a762-46c3-471b-9647-c030abcea08e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:35 np0005473739 systemd-udevd[275664]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:03:35 np0005473739 NetworkManager[44949]: <info>  [1759845815.1163] manager: (tap384938fa-40): new Veth device (/org/freedesktop/NetworkManager/Devices/29)
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.111 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b47e1338-ff96-4bb0-960f-1fe325c616ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.159 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f2a44dd0-7d62-4aea-9cb2-2183de396418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.163 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e82301fb-bddd-40f4-8dd3-677be626782a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.178 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.179 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.180 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.180 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.180 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.181 2 INFO nova.compute.manager [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Terminating instance#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.182 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "refresh_cache-39a26aa2-978f-4d33-bc4e-fd4bfc81d380" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.182 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquired lock "refresh_cache-39a26aa2-978f-4d33-bc4e-fd4bfc81d380" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.183 2 DEBUG nova.network.neutron [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:03:35 np0005473739 NetworkManager[44949]: <info>  [1759845815.1942] device (tap384938fa-40): carrier: link connected
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.197 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c6ec53f4-c395-4405-93bb-72cd7cb5f9e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.220 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[069f5ba4-81da-43e2-a34b-8b9ff733e104]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap384938fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:dc:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639875, 'reachable_time': 21429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 275704, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.248 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aac93308-8f70-4985-8f9f-5f292ab5b9ad]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:dc9f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 639875, 'tstamp': 639875}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 275705, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bb735e5b-ef02-47a6-8a53-21f9fc2d768a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap384938fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:dc:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 15], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639875, 'reachable_time': 21429, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 275706, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.323 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[099d417a-4abb-4277-baa2-d8edb4ec78cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1088: 305 pgs: 305 active+clean; 134 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[44318c19-5dff-48ad-a7ae-a9938f7c2a6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.418 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap384938fa-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.418 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.419 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap384938fa-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:35 np0005473739 NetworkManager[44949]: <info>  [1759845815.4226] manager: (tap384938fa-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/30)
Oct  7 10:03:35 np0005473739 kernel: tap384938fa-40: entered promiscuous mode
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.438 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap384938fa-40, col_values=(('external_ids', {'iface-id': '86c408a4-938c-4caa-9ec3-5622a47990e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:03:35Z|00031|binding|INFO|Releasing lport 86c408a4-938c-4caa-9ec3-5622a47990e3 from this chassis (sb_readonly=0)
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.458 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:03:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.461 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[00c6b464-9e84-419f-9a49-55409caa29b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.462 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-384938fa-4eb0-4ec5-a6a4-bc65721ba22a
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 384938fa-4eb0-4ec5-a6a4-bc65721ba22a
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:03:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:35.464 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'env', 'PROCESS_TAG=haproxy-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.531 2 DEBUG nova.network.neutron [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.776 2 DEBUG nova.network.neutron [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.812 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Releasing lock "refresh_cache-39a26aa2-978f-4d33-bc4e-fd4bfc81d380" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:03:35 np0005473739 nova_compute[259550]: 2025-10-07 14:03:35.813 2 DEBUG nova.compute.manager [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:03:35 np0005473739 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Oct  7 10:03:35 np0005473739 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 4.068s CPU time.
Oct  7 10:03:35 np0005473739 systemd-machined[214580]: Machine qemu-4-instance-00000004 terminated.
Oct  7 10:03:35 np0005473739 podman[275739]: 2025-10-07 14:03:35.859563441 +0000 UTC m=+0.028843003 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:03:36 np0005473739 podman[275739]: 2025-10-07 14:03:36.02557362 +0000 UTC m=+0.194853182 container create ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.047 2 INFO nova.virt.libvirt.driver [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Instance destroyed successfully.#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.048 2 DEBUG nova.objects.instance [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lazy-loading 'resources' on Instance uuid 39a26aa2-978f-4d33-bc4e-fd4bfc81d380 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:03:36 np0005473739 systemd[1]: Started libpod-conmon-ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b.scope.
Oct  7 10:03:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:03:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/84c7010ba2fc6f6ddfa22c1cc2034b39e4a61b0f4287273b69c1e4981fb9a1cb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:36 np0005473739 podman[275739]: 2025-10-07 14:03:36.129305086 +0000 UTC m=+0.298584668 container init ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  7 10:03:36 np0005473739 podman[275739]: 2025-10-07 14:03:36.135265075 +0000 UTC m=+0.304544637 container start ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:03:36 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [NOTICE]   (275779) : New worker (275781) forked
Oct  7 10:03:36 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [NOTICE]   (275779) : Loading success.
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.537 2 INFO nova.virt.libvirt.driver [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Deleting instance files /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380_del#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.538 2 INFO nova.virt.libvirt.driver [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Deletion of /var/lib/nova/instances/39a26aa2-978f-4d33-bc4e-fd4bfc81d380_del complete#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.589 2 INFO nova.compute.manager [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.590 2 DEBUG oslo.service.loopingcall [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.592 2 DEBUG nova.compute.manager [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.593 2 DEBUG nova.network.neutron [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.711 2 DEBUG nova.network.neutron [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.760 2 DEBUG nova.network.neutron [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.774 2 INFO nova.compute.manager [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Took 0.18 seconds to deallocate network for instance.#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.826 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.832 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:36 np0005473739 nova_compute[259550]: 2025-10-07 14:03:36.902 2 DEBUG oslo_concurrency.processutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:37 np0005473739 nova_compute[259550]: 2025-10-07 14:03:37.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:03:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/685437039' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:03:37 np0005473739 nova_compute[259550]: 2025-10-07 14:03:37.331 2 DEBUG oslo_concurrency.processutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:37 np0005473739 nova_compute[259550]: 2025-10-07 14:03:37.342 2 DEBUG nova.compute.provider_tree [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:03:37 np0005473739 nova_compute[259550]: 2025-10-07 14:03:37.364 2 DEBUG nova.scheduler.client.report [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:03:37 np0005473739 nova_compute[259550]: 2025-10-07 14:03:37.387 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1089: 305 pgs: 305 active+clean; 134 MiB data, 236 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Oct  7 10:03:37 np0005473739 nova_compute[259550]: 2025-10-07 14:03:37.442 2 INFO nova.scheduler.client.report [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Deleted allocations for instance 39a26aa2-978f-4d33-bc4e-fd4bfc81d380#033[00m
Oct  7 10:03:37 np0005473739 nova_compute[259550]: 2025-10-07 14:03:37.540 2 DEBUG oslo_concurrency.lockutils [None req-8bdcf19e-b4b1-4ed6-8a74-1ec89326f29e 29b159ffa5754a4ea36ea97967fc907f 99c1b7cefd964764a69e1e53219287d2 - - default default] Lock "39a26aa2-978f-4d33-bc4e-fd4bfc81d380" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1090: 305 pgs: 305 active+clean; 107 MiB data, 223 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 250 op/s
Oct  7 10:03:40 np0005473739 nova_compute[259550]: 2025-10-07 14:03:40.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1091: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 257 op/s
Oct  7 10:03:42 np0005473739 nova_compute[259550]: 2025-10-07 14:03:42.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:42 np0005473739 nova_compute[259550]: 2025-10-07 14:03:42.442 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "74c8f655-7045-4ad9-8246-3d2504315607" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:42 np0005473739 nova_compute[259550]: 2025-10-07 14:03:42.443 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:42 np0005473739 nova_compute[259550]: 2025-10-07 14:03:42.622 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:03:42 np0005473739 nova_compute[259550]: 2025-10-07 14:03:42.822 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:42 np0005473739 nova_compute[259550]: 2025-10-07 14:03:42.823 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:42 np0005473739 nova_compute[259550]: 2025-10-07 14:03:42.831 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:03:42 np0005473739 nova_compute[259550]: 2025-10-07 14:03:42.832 2 INFO nova.compute.claims [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:03:43 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  7 10:03:43 np0005473739 podman[275815]: 2025-10-07 14:03:43.093354255 +0000 UTC m=+0.071340179 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 10:03:43 np0005473739 podman[275814]: 2025-10-07 14:03:43.100014653 +0000 UTC m=+0.079918858 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:03:43 np0005473739 nova_compute[259550]: 2025-10-07 14:03:43.133 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1092: 305 pgs: 305 active+clean; 88 MiB data, 215 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.2 MiB/s wr, 237 op/s
Oct  7 10:03:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:03:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1703088313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:03:43 np0005473739 nova_compute[259550]: 2025-10-07 14:03:43.567 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:43 np0005473739 nova_compute[259550]: 2025-10-07 14:03:43.575 2 DEBUG nova.compute.provider_tree [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:03:43 np0005473739 nova_compute[259550]: 2025-10-07 14:03:43.600 2 DEBUG nova.scheduler.client.report [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:03:43 np0005473739 nova_compute[259550]: 2025-10-07 14:03:43.661 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:43 np0005473739 nova_compute[259550]: 2025-10-07 14:03:43.663 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:03:43 np0005473739 nova_compute[259550]: 2025-10-07 14:03:43.972 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:03:43 np0005473739 nova_compute[259550]: 2025-10-07 14:03:43.972 2 DEBUG nova.network.neutron [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.025 2 INFO nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.048 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.221 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.223 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.224 2 INFO nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Creating image(s)#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.254 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:44 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.289 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.319 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.324 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.395 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.396 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.397 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.397 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.426 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.431 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 74c8f655-7045-4ad9-8246-3d2504315607_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.728 2 DEBUG nova.network.neutron [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.730 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.766 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 74c8f655-7045-4ad9-8246-3d2504315607_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.842 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] resizing rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.938 2 DEBUG nova.objects.instance [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lazy-loading 'migration_context' on Instance uuid 74c8f655-7045-4ad9-8246-3d2504315607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.959 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.960 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Ensure instance console log exists: /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.961 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.961 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.962 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.963 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.969 2 WARNING nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.974 2 DEBUG nova.virt.libvirt.host [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.975 2 DEBUG nova.virt.libvirt.host [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.978 2 DEBUG nova.virt.libvirt.host [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.979 2 DEBUG nova.virt.libvirt.host [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.980 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.980 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.980 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.981 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.981 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.981 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.982 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.982 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.982 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.982 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.983 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.983 2 DEBUG nova.virt.hardware [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:03:44 np0005473739 nova_compute[259550]: 2025-10-07 14:03:44.986 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:45 np0005473739 nova_compute[259550]: 2025-10-07 14:03:45.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1093: 305 pgs: 305 active+clean; 91 MiB data, 238 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 249 op/s
Oct  7 10:03:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:03:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3136911333' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:03:45 np0005473739 nova_compute[259550]: 2025-10-07 14:03:45.503 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:45 np0005473739 nova_compute[259550]: 2025-10-07 14:03:45.530 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:45 np0005473739 nova_compute[259550]: 2025-10-07 14:03:45.534 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:45 np0005473739 ovn_controller[151684]: 2025-10-07T14:03:45Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:39:1a 10.100.0.5
Oct  7 10:03:45 np0005473739 ovn_controller[151684]: 2025-10-07T14:03:45Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:39:1a 10.100.0.5
Oct  7 10:03:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:03:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3321439574' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:03:45 np0005473739 nova_compute[259550]: 2025-10-07 14:03:45.965 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:45 np0005473739 nova_compute[259550]: 2025-10-07 14:03:45.968 2 DEBUG nova.objects.instance [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lazy-loading 'pci_devices' on Instance uuid 74c8f655-7045-4ad9-8246-3d2504315607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:03:45 np0005473739 nova_compute[259550]: 2025-10-07 14:03:45.984 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  <uuid>74c8f655-7045-4ad9-8246-3d2504315607</uuid>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  <name>instance-00000005</name>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerDiagnosticsNegativeTest-server-644875358</nova:name>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:03:44</nova:creationTime>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:        <nova:user uuid="74c6d01f508d42c7840dbfdf786af788">tempest-ServerDiagnosticsNegativeTest-1419378582-project-member</nova:user>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:        <nova:project uuid="4bb899690ee54889beb522359cc49e4f">tempest-ServerDiagnosticsNegativeTest-1419378582</nova:project>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <entry name="serial">74c8f655-7045-4ad9-8246-3d2504315607</entry>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <entry name="uuid">74c8f655-7045-4ad9-8246-3d2504315607</entry>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/74c8f655-7045-4ad9-8246-3d2504315607_disk">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/74c8f655-7045-4ad9-8246-3d2504315607_disk.config">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/console.log" append="off"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:03:45 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:03:45 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:03:45 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:03:45 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:03:46 np0005473739 nova_compute[259550]: 2025-10-07 14:03:46.044 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:03:46 np0005473739 nova_compute[259550]: 2025-10-07 14:03:46.044 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:03:46 np0005473739 nova_compute[259550]: 2025-10-07 14:03:46.045 2 INFO nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Using config drive#033[00m
Oct  7 10:03:46 np0005473739 nova_compute[259550]: 2025-10-07 14:03:46.132 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:46 np0005473739 nova_compute[259550]: 2025-10-07 14:03:46.350 2 INFO nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Creating config drive at /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config#033[00m
Oct  7 10:03:46 np0005473739 nova_compute[259550]: 2025-10-07 14:03:46.356 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rjdh_fy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:46 np0005473739 nova_compute[259550]: 2025-10-07 14:03:46.492 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1rjdh_fy" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:46 np0005473739 nova_compute[259550]: 2025-10-07 14:03:46.520 2 DEBUG nova.storage.rbd_utils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] rbd image 74c8f655-7045-4ad9-8246-3d2504315607_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:03:46 np0005473739 nova_compute[259550]: 2025-10-07 14:03:46.525 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config 74c8f655-7045-4ad9-8246-3d2504315607_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:03:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:03:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:03:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:03:46 np0005473739 nova_compute[259550]: 2025-10-07 14:03:46.712 2 DEBUG oslo_concurrency.processutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config 74c8f655-7045-4ad9-8246-3d2504315607_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.187s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:46 np0005473739 nova_compute[259550]: 2025-10-07 14:03:46.713 2 INFO nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Deleting local config drive /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607/disk.config because it was imported into RBD.#033[00m
Oct  7 10:03:46 np0005473739 systemd-machined[214580]: New machine qemu-5-instance-00000005.
Oct  7 10:03:46 np0005473739 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Oct  7 10:03:47 np0005473739 nova_compute[259550]: 2025-10-07 14:03:47.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:03:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1094: 305 pgs: 305 active+clean; 116 MiB data, 275 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct  7 10:03:47 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 42a327ff-dea2-407e-925c-ff8553fb368d does not exist
Oct  7 10:03:47 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 15ffd35c-e826-4d4d-b1ec-bfc9cfcae014 does not exist
Oct  7 10:03:47 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev eb93310d-6c93-4906-9e39-97f20d5561da does not exist
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:03:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.073 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845828.073248, 74c8f655-7045-4ad9-8246-3d2504315607 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.075 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] VM Resumed (Lifecycle Event)
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.079 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.079 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.084 2 INFO nova.virt.libvirt.driver [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance spawned successfully.
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.085 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:03:48 np0005473739 podman[276611]: 2025-10-07 14:03:48.087184722 +0000 UTC m=+0.071084932 container create 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.095 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.102 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.108 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.108 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.109 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.109 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.110 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.110 2 DEBUG nova.virt.libvirt.driver [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:03:48 np0005473739 systemd[1]: Started libpod-conmon-0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33.scope.
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.133 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.134 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845828.07479, 74c8f655-7045-4ad9-8246-3d2504315607 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.134 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] VM Started (Lifecycle Event)
Oct  7 10:03:48 np0005473739 podman[276611]: 2025-10-07 14:03:48.045750834 +0000 UTC m=+0.029651074 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.160 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.164 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.170 2 INFO nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Took 3.95 seconds to spawn the instance on the hypervisor.
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.170 2 DEBUG nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:03:48 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.179 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:03:48 np0005473739 podman[276611]: 2025-10-07 14:03:48.208126487 +0000 UTC m=+0.192026717 container init 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:03:48 np0005473739 podman[276611]: 2025-10-07 14:03:48.218248118 +0000 UTC m=+0.202148328 container start 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.222 2 INFO nova.compute.manager [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Took 5.43 seconds to build instance.
Oct  7 10:03:48 np0005473739 podman[276611]: 2025-10-07 14:03:48.226794216 +0000 UTC m=+0.210694436 container attach 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 10:03:48 np0005473739 tender_lehmann[276627]: 167 167
Oct  7 10:03:48 np0005473739 systemd[1]: libpod-0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33.scope: Deactivated successfully.
Oct  7 10:03:48 np0005473739 podman[276611]: 2025-10-07 14:03:48.230715592 +0000 UTC m=+0.214615812 container died 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:03:48 np0005473739 nova_compute[259550]: 2025-10-07 14:03:48.237 2 DEBUG oslo_concurrency.lockutils [None req-1e3acc53-73ac-43df-9e95-71526c481a8c 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.794s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ab34d759db6fc508577f821b629ce0833e4f53f6c8d9fb4df47bcedc551cbc7d-merged.mount: Deactivated successfully.
Oct  7 10:03:48 np0005473739 podman[276611]: 2025-10-07 14:03:48.326069152 +0000 UTC m=+0.309969392 container remove 0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lehmann, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:03:48 np0005473739 systemd[1]: libpod-conmon-0f082d2ec5831ef85c85248c592d93011e54389d299caf187b712c6b81032f33.scope: Deactivated successfully.
Oct  7 10:03:48 np0005473739 podman[276649]: 2025-10-07 14:03:48.566572863 +0000 UTC m=+0.066133089 container create b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:03:48 np0005473739 podman[276649]: 2025-10-07 14:03:48.529624975 +0000 UTC m=+0.029185221 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:03:48 np0005473739 systemd[1]: Started libpod-conmon-b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f.scope.
Oct  7 10:03:48 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:03:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:48 np0005473739 podman[276649]: 2025-10-07 14:03:48.731658918 +0000 UTC m=+0.231219154 container init b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 10:03:48 np0005473739 podman[276649]: 2025-10-07 14:03:48.740593988 +0000 UTC m=+0.240154224 container start b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:03:48 np0005473739 podman[276649]: 2025-10-07 14:03:48.766441569 +0000 UTC m=+0.266001805 container attach b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.152 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "74c8f655-7045-4ad9-8246-3d2504315607" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.153 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.154 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "74c8f655-7045-4ad9-8246-3d2504315607-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.154 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.154 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.156 2 INFO nova.compute.manager [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Terminating instance
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.158 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "refresh_cache-74c8f655-7045-4ad9-8246-3d2504315607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.159 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquired lock "refresh_cache-74c8f655-7045-4ad9-8246-3d2504315607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.159 2 DEBUG nova.network.neutron [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.327 2 DEBUG nova.network.neutron [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:03:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1095: 305 pgs: 305 active+clean; 164 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.9 MiB/s wr, 154 op/s
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.617 2 DEBUG nova.network.neutron [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.666 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Releasing lock "refresh_cache-74c8f655-7045-4ad9-8246-3d2504315607" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:03:49 np0005473739 nova_compute[259550]: 2025-10-07 14:03:49.668 2 DEBUG nova.compute.manager [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:03:49 np0005473739 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Oct  7 10:03:49 np0005473739 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 2.870s CPU time.
Oct  7 10:03:49 np0005473739 systemd-machined[214580]: Machine qemu-5-instance-00000005 terminated.
Oct  7 10:03:50 np0005473739 nifty_spence[276666]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:03:50 np0005473739 nifty_spence[276666]: --> relative data size: 1.0
Oct  7 10:03:50 np0005473739 nifty_spence[276666]: --> All data devices are unavailable
Oct  7 10:03:50 np0005473739 systemd[1]: libpod-b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f.scope: Deactivated successfully.
Oct  7 10:03:50 np0005473739 podman[276649]: 2025-10-07 14:03:50.073801904 +0000 UTC m=+1.573362120 container died b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:03:50 np0005473739 systemd[1]: libpod-b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f.scope: Consumed 1.119s CPU time.
Oct  7 10:03:50 np0005473739 nova_compute[259550]: 2025-10-07 14:03:50.099 2 INFO nova.virt.libvirt.driver [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance destroyed successfully.#033[00m
Oct  7 10:03:50 np0005473739 nova_compute[259550]: 2025-10-07 14:03:50.101 2 DEBUG nova.objects.instance [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lazy-loading 'resources' on Instance uuid 74c8f655-7045-4ad9-8246-3d2504315607 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:03:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-bab96af5b1f781901c9cfc2377a51b3b165d11e3e6e2705ff61271bacb372a72-merged.mount: Deactivated successfully.
Oct  7 10:03:50 np0005473739 podman[276649]: 2025-10-07 14:03:50.144973297 +0000 UTC m=+1.644533513 container remove b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:03:50 np0005473739 systemd[1]: libpod-conmon-b37a566b4c3ef04ad8ec2c24e8e2b9f1ddce263e7a93d2ad9d421768f6e6980f.scope: Deactivated successfully.
Oct  7 10:03:50 np0005473739 nova_compute[259550]: 2025-10-07 14:03:50.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:50 np0005473739 nova_compute[259550]: 2025-10-07 14:03:50.530 2 INFO nova.virt.libvirt.driver [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Deleting instance files /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607_del#033[00m
Oct  7 10:03:50 np0005473739 nova_compute[259550]: 2025-10-07 14:03:50.531 2 INFO nova.virt.libvirt.driver [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Deletion of /var/lib/nova/instances/74c8f655-7045-4ad9-8246-3d2504315607_del complete#033[00m
Oct  7 10:03:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:50.578 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:03:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:03:50.580 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:03:50 np0005473739 nova_compute[259550]: 2025-10-07 14:03:50.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:50 np0005473739 nova_compute[259550]: 2025-10-07 14:03:50.593 2 INFO nova.compute.manager [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:03:50 np0005473739 nova_compute[259550]: 2025-10-07 14:03:50.593 2 DEBUG oslo.service.loopingcall [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:03:50 np0005473739 nova_compute[259550]: 2025-10-07 14:03:50.594 2 DEBUG nova.compute.manager [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:03:50 np0005473739 nova_compute[259550]: 2025-10-07 14:03:50.594 2 DEBUG nova.network.neutron [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:03:50 np0005473739 podman[276867]: 2025-10-07 14:03:50.868453646 +0000 UTC m=+0.048965751 container create 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 10:03:50 np0005473739 systemd[1]: Started libpod-conmon-9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec.scope.
Oct  7 10:03:50 np0005473739 podman[276867]: 2025-10-07 14:03:50.849090929 +0000 UTC m=+0.029603054 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:03:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:03:50 np0005473739 podman[276867]: 2025-10-07 14:03:50.982182278 +0000 UTC m=+0.162694393 container init 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:03:50 np0005473739 podman[276867]: 2025-10-07 14:03:50.990636584 +0000 UTC m=+0.171148679 container start 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:03:50 np0005473739 podman[276867]: 2025-10-07 14:03:50.994291551 +0000 UTC m=+0.174803666 container attach 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 10:03:50 np0005473739 ecstatic_benz[276883]: 167 167
Oct  7 10:03:50 np0005473739 systemd[1]: libpod-9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec.scope: Deactivated successfully.
Oct  7 10:03:50 np0005473739 conmon[276883]: conmon 9840fdff892a035e69a0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec.scope/container/memory.events
Oct  7 10:03:50 np0005473739 podman[276867]: 2025-10-07 14:03:50.998389941 +0000 UTC m=+0.178902036 container died 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:03:51 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4d2b2f568b97a3c1b20fdf15e6f7ebcbdeb69663c55986884036f4fd56023041-merged.mount: Deactivated successfully.
Oct  7 10:03:51 np0005473739 podman[276867]: 2025-10-07 14:03:51.042558083 +0000 UTC m=+0.223070178 container remove 9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_benz, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.048 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845816.044832, 39a26aa2-978f-4d33-bc4e-fd4bfc81d380 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.049 2 INFO nova.compute.manager [-] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:03:51 np0005473739 systemd[1]: libpod-conmon-9840fdff892a035e69a02699b8665068209eda36912e1658e9f7028846448dec.scope: Deactivated successfully.
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.072 2 DEBUG nova.compute.manager [None req-4776ba38-85c4-4125-ab49-b819c6dfd9e5 - - - - - -] [instance: 39a26aa2-978f-4d33-bc4e-fd4bfc81d380] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.172 2 DEBUG nova.network.neutron [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.202 2 DEBUG nova.network.neutron [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:03:51 np0005473739 podman[276908]: 2025-10-07 14:03:51.245257454 +0000 UTC m=+0.047476821 container create 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:03:51 np0005473739 systemd[1]: Started libpod-conmon-499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84.scope.
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.295 2 INFO nova.compute.manager [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Took 0.70 seconds to deallocate network for instance.#033[00m
Oct  7 10:03:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:03:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f3b7ecf3da2841f086f5f1237b5a8f7f7b299207d8ab53c4949eeb7fe09e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f3b7ecf3da2841f086f5f1237b5a8f7f7b299207d8ab53c4949eeb7fe09e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f3b7ecf3da2841f086f5f1237b5a8f7f7b299207d8ab53c4949eeb7fe09e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df2f3b7ecf3da2841f086f5f1237b5a8f7f7b299207d8ab53c4949eeb7fe09e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:51 np0005473739 podman[276908]: 2025-10-07 14:03:51.225789303 +0000 UTC m=+0.028008690 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:03:51 np0005473739 podman[276908]: 2025-10-07 14:03:51.325990912 +0000 UTC m=+0.128210319 container init 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:03:51 np0005473739 podman[276908]: 2025-10-07 14:03:51.333347879 +0000 UTC m=+0.135567286 container start 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 10:03:51 np0005473739 podman[276908]: 2025-10-07 14:03:51.337728967 +0000 UTC m=+0.139948374 container attach 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.371 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.371 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:03:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1096: 305 pgs: 305 active+clean; 148 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 150 op/s
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.475 2 DEBUG oslo_concurrency.processutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:03:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:03:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221377279' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.933 2 DEBUG oslo_concurrency.processutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.940 2 DEBUG nova.compute.provider_tree [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.962 2 DEBUG nova.scheduler.client.report [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:03:51 np0005473739 nova_compute[259550]: 2025-10-07 14:03:51.991 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.620s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:52 np0005473739 nova_compute[259550]: 2025-10-07 14:03:52.018 2 INFO nova.scheduler.client.report [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Deleted allocations for instance 74c8f655-7045-4ad9-8246-3d2504315607#033[00m
Oct  7 10:03:52 np0005473739 nova_compute[259550]: 2025-10-07 14:03:52.073 2 DEBUG oslo_concurrency.lockutils [None req-2174ea96-4936-465f-a795-d4b8d6e52077 74c6d01f508d42c7840dbfdf786af788 4bb899690ee54889beb522359cc49e4f - - default default] Lock "74c8f655-7045-4ad9-8246-3d2504315607" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:03:52 np0005473739 nova_compute[259550]: 2025-10-07 14:03:52.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]: {
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:    "0": [
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:        {
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "devices": [
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "/dev/loop3"
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            ],
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_name": "ceph_lv0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_size": "21470642176",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "name": "ceph_lv0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "tags": {
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.cluster_name": "ceph",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.crush_device_class": "",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.encrypted": "0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.osd_id": "0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.type": "block",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.vdo": "0"
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            },
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "type": "block",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "vg_name": "ceph_vg0"
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:        }
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:    ],
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:    "1": [
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:        {
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "devices": [
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "/dev/loop4"
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            ],
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_name": "ceph_lv1",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_size": "21470642176",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "name": "ceph_lv1",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "tags": {
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.cluster_name": "ceph",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.crush_device_class": "",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.encrypted": "0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.osd_id": "1",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.type": "block",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.vdo": "0"
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            },
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "type": "block",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "vg_name": "ceph_vg1"
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:        }
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:    ],
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:    "2": [
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:        {
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "devices": [
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "/dev/loop5"
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            ],
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_name": "ceph_lv2",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_size": "21470642176",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "name": "ceph_lv2",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "tags": {
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.cluster_name": "ceph",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.crush_device_class": "",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.encrypted": "0",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.osd_id": "2",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.type": "block",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:                "ceph.vdo": "0"
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            },
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "type": "block",
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:            "vg_name": "ceph_vg2"
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:        }
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]:    ]
Oct  7 10:03:52 np0005473739 vigilant_saha[276923]: }
Oct  7 10:03:52 np0005473739 systemd[1]: libpod-499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84.scope: Deactivated successfully.
Oct  7 10:03:52 np0005473739 podman[276908]: 2025-10-07 14:03:52.245670159 +0000 UTC m=+1.047889526 container died 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:03:52 np0005473739 systemd[1]: var-lib-containers-storage-overlay-df2f3b7ecf3da2841f086f5f1237b5a8f7f7b299207d8ab53c4949eeb7fe09e2-merged.mount: Deactivated successfully.
Oct  7 10:03:52 np0005473739 podman[276908]: 2025-10-07 14:03:52.383336661 +0000 UTC m=+1.185556028 container remove 499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_saha, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 10:03:52 np0005473739 systemd[1]: libpod-conmon-499949e303d184f81d743fa153c9489231eef18fe801554590af23e51b870a84.scope: Deactivated successfully.
Oct  7 10:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:03:53 np0005473739 podman[277108]: 2025-10-07 14:03:53.059952757 +0000 UTC m=+0.040599457 container create 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:03:53 np0005473739 systemd[1]: Started libpod-conmon-6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31.scope.
Oct  7 10:03:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:03:53 np0005473739 podman[277108]: 2025-10-07 14:03:53.136575226 +0000 UTC m=+0.117221936 container init 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:03:53 np0005473739 podman[277108]: 2025-10-07 14:03:53.043961699 +0000 UTC m=+0.024608429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:03:53 np0005473739 podman[277108]: 2025-10-07 14:03:53.14496968 +0000 UTC m=+0.125616380 container start 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:03:53 np0005473739 trusting_mendel[277125]: 167 167
Oct  7 10:03:53 np0005473739 podman[277108]: 2025-10-07 14:03:53.148946917 +0000 UTC m=+0.129593647 container attach 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 10:03:53 np0005473739 systemd[1]: libpod-6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31.scope: Deactivated successfully.
Oct  7 10:03:53 np0005473739 conmon[277125]: conmon 6eafb6c9fbf0f4e789c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31.scope/container/memory.events
Oct  7 10:03:53 np0005473739 podman[277108]: 2025-10-07 14:03:53.150986252 +0000 UTC m=+0.131632952 container died 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:03:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f04616ca4b75a9b59f2ff06b1693d2aaa51e799a9a720e84aa856db98c902656-merged.mount: Deactivated successfully.
Oct  7 10:03:53 np0005473739 podman[277108]: 2025-10-07 14:03:53.188241747 +0000 UTC m=+0.168888447 container remove 6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 10:03:53 np0005473739 systemd[1]: libpod-conmon-6eafb6c9fbf0f4e789c0376e20f1b96123ff426d879b69bb61c4e702d81aaa31.scope: Deactivated successfully.
Oct  7 10:03:53 np0005473739 podman[277150]: 2025-10-07 14:03:53.367963805 +0000 UTC m=+0.043009562 container create 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:03:53 np0005473739 systemd[1]: Started libpod-conmon-26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0.scope.
Oct  7 10:03:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1097: 305 pgs: 305 active+clean; 148 MiB data, 297 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.9 MiB/s wr, 140 op/s
Oct  7 10:03:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:03:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6605e7645f8daf3a26609902c48ae606923ac4a58341c7ec6685568a32a0c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6605e7645f8daf3a26609902c48ae606923ac4a58341c7ec6685568a32a0c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6605e7645f8daf3a26609902c48ae606923ac4a58341c7ec6685568a32a0c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8f6605e7645f8daf3a26609902c48ae606923ac4a58341c7ec6685568a32a0c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:03:53 np0005473739 podman[277150]: 2025-10-07 14:03:53.350190989 +0000 UTC m=+0.025236766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:03:53 np0005473739 podman[277150]: 2025-10-07 14:03:53.453764819 +0000 UTC m=+0.128810586 container init 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:03:53 np0005473739 podman[277150]: 2025-10-07 14:03:53.459653526 +0000 UTC m=+0.134699283 container start 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:03:53 np0005473739 podman[277150]: 2025-10-07 14:03:53.465113503 +0000 UTC m=+0.140159260 container attach 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef)
Oct  7 10:03:54 np0005473739 nifty_spence[277166]: {
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "osd_id": 2,
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "type": "bluestore"
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:    },
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "osd_id": 1,
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "type": "bluestore"
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:    },
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "osd_id": 0,
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:        "type": "bluestore"
Oct  7 10:03:54 np0005473739 nifty_spence[277166]:    }
Oct  7 10:03:54 np0005473739 nifty_spence[277166]: }
Oct  7 10:03:54 np0005473739 systemd[1]: libpod-26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0.scope: Deactivated successfully.
Oct  7 10:03:54 np0005473739 systemd[1]: libpod-26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0.scope: Consumed 1.108s CPU time.
Oct  7 10:03:54 np0005473739 conmon[277166]: conmon 26270d07ea0ed924d106 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0.scope/container/memory.events
Oct  7 10:03:54 np0005473739 podman[277150]: 2025-10-07 14:03:54.570575798 +0000 UTC m=+1.245621585 container died 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 10:03:54 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8f6605e7645f8daf3a26609902c48ae606923ac4a58341c7ec6685568a32a0c6-merged.mount: Deactivated successfully.
Oct  7 10:03:54 np0005473739 podman[277150]: 2025-10-07 14:03:54.633157491 +0000 UTC m=+1.308203248 container remove 26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_spence, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:03:54 np0005473739 systemd[1]: libpod-conmon-26270d07ea0ed924d1066fb6cc6c7b07511270ed2c6c370fd084d572d1e543f0.scope: Deactivated successfully.
Oct  7 10:03:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:03:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:03:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:03:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:03:54 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev dc0622c1-e24e-41d4-b0e2-03c7b9e7b3a3 does not exist
Oct  7 10:03:54 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8547c35f-5820-4e13-bcd9-47ca0311d7b9 does not exist
Oct  7 10:03:54 np0005473739 podman[277199]: 2025-10-07 14:03:54.750033038 +0000 UTC m=+0.143264084 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:03:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:03:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:03:55 np0005473739 nova_compute[259550]: 2025-10-07 14:03:55.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1098: 305 pgs: 305 active+clean; 121 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 189 op/s
Oct  7 10:03:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:03:57 np0005473739 nova_compute[259550]: 2025-10-07 14:03:57.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:03:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1099: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.5 MiB/s wr, 177 op/s
Oct  7 10:03:58 np0005473739 podman[277289]: 2025-10-07 14:03:58.078752561 +0000 UTC m=+0.061320361 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 10:03:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1100: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 166 op/s
Oct  7 10:04:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:00.037 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:00.038 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:00.038 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:00 np0005473739 nova_compute[259550]: 2025-10-07 14:04:00.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:00.581 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:04:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1101: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 91 op/s
Oct  7 10:04:01 np0005473739 nova_compute[259550]: 2025-10-07 14:04:01.832 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "af17d051-72b6-45f1-b829-94d3a2939519" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:01 np0005473739 nova_compute[259550]: 2025-10-07 14:04:01.833 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:01 np0005473739 nova_compute[259550]: 2025-10-07 14:04:01.851 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:04:01 np0005473739 nova_compute[259550]: 2025-10-07 14:04:01.920 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:01 np0005473739 nova_compute[259550]: 2025-10-07 14:04:01.921 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:01 np0005473739 nova_compute[259550]: 2025-10-07 14:04:01.931 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:04:01 np0005473739 nova_compute[259550]: 2025-10-07 14:04:01.932 2 INFO nova.compute.claims [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.038 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:04:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3384435753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.519 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.527 2 DEBUG nova.compute.provider_tree [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.544 2 DEBUG nova.scheduler.client.report [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.565 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.566 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.604 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.617 2 INFO nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.633 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.709 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.711 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.711 2 INFO nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating image(s)#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.738 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.771 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.796 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.801 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.864 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.867 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.868 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.869 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.899 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:02 np0005473739 nova_compute[259550]: 2025-10-07 14:04:02.903 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.271 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.368s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.340 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] resizing rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:04:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1102: 305 pgs: 305 active+clean; 121 MiB data, 276 MiB used, 60 GiB / 60 GiB avail; 754 KiB/s rd, 14 KiB/s wr, 48 op/s
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.443 2 DEBUG nova.objects.instance [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'migration_context' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.460 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.461 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Ensure instance console log exists: /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.461 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.461 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.462 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.464 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.469 2 WARNING nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.473 2 DEBUG nova.virt.libvirt.host [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.474 2 DEBUG nova.virt.libvirt.host [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.477 2 DEBUG nova.virt.libvirt.host [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.477 2 DEBUG nova.virt.libvirt.host [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.478 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.478 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.479 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.479 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.479 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.479 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.479 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.480 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.480 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.480 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.480 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.480 2 DEBUG nova.virt.hardware [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.483 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:04:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1310574256' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.941 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.971 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:03 np0005473739 nova_compute[259550]: 2025-10-07 14:04:03.976 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:04:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1888465638' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:04:04 np0005473739 nova_compute[259550]: 2025-10-07 14:04:04.478 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:04 np0005473739 nova_compute[259550]: 2025-10-07 14:04:04.480 2 DEBUG nova.objects.instance [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'pci_devices' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:04 np0005473739 nova_compute[259550]: 2025-10-07 14:04:04.497 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  <uuid>af17d051-72b6-45f1-b829-94d3a2939519</uuid>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  <name>instance-00000006</name>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersAdmin275Test-server-2034007563</nova:name>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:04:03</nova:creationTime>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:        <nova:user uuid="0418d9872e6041138cc90a3aa74cce48">tempest-ServersAdmin275Test-1410041514-project-member</nova:user>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:        <nova:project uuid="60e63a33759f4241845fccdb5c104b64">tempest-ServersAdmin275Test-1410041514</nova:project>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <entry name="serial">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <entry name="uuid">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk.config">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log" append="off"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:04:04 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:04:04 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:04:04 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:04:04 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:04:04 np0005473739 nova_compute[259550]: 2025-10-07 14:04:04.621 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:04:04 np0005473739 nova_compute[259550]: 2025-10-07 14:04:04.621 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:04:04 np0005473739 nova_compute[259550]: 2025-10-07 14:04:04.622 2 INFO nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Using config drive#033[00m
Oct  7 10:04:04 np0005473739 nova_compute[259550]: 2025-10-07 14:04:04.648 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:04 np0005473739 nova_compute[259550]: 2025-10-07 14:04:04.938 2 INFO nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating config drive at /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config#033[00m
Oct  7 10:04:04 np0005473739 nova_compute[259550]: 2025-10-07 14:04:04.943 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpww6enjpi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:05 np0005473739 nova_compute[259550]: 2025-10-07 14:04:05.071 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpww6enjpi" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:05 np0005473739 nova_compute[259550]: 2025-10-07 14:04:05.096 2 DEBUG nova.storage.rbd_utils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:05 np0005473739 nova_compute[259550]: 2025-10-07 14:04:05.100 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:05 np0005473739 nova_compute[259550]: 2025-10-07 14:04:05.122 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845830.0956862, 74c8f655-7045-4ad9-8246-3d2504315607 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:05 np0005473739 nova_compute[259550]: 2025-10-07 14:04:05.123 2 INFO nova.compute.manager [-] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:04:05 np0005473739 nova_compute[259550]: 2025-10-07 14:04:05.200 2 DEBUG nova.compute.manager [None req-4c082ff7-5885-4222-8579-cf149a82d079 - - - - - -] [instance: 74c8f655-7045-4ad9-8246-3d2504315607] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:05 np0005473739 nova_compute[259550]: 2025-10-07 14:04:05.280 2 DEBUG oslo_concurrency.processutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:05 np0005473739 nova_compute[259550]: 2025-10-07 14:04:05.281 2 INFO nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting local config drive /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config because it was imported into RBD.#033[00m
Oct  7 10:04:05 np0005473739 systemd-machined[214580]: New machine qemu-6-instance-00000006.
Oct  7 10:04:05 np0005473739 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Oct  7 10:04:05 np0005473739 nova_compute[259550]: 2025-10-07 14:04:05.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1103: 305 pgs: 305 active+clean; 157 MiB data, 282 MiB used, 60 GiB / 60 GiB avail; 761 KiB/s rd, 1.3 MiB/s wr, 61 op/s
Oct  7 10:04:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:05Z|00032|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  7 10:04:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.055 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.058 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.058 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.059 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.059 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.061 2 INFO nova.compute.manager [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Terminating instance#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.063 2 DEBUG nova.compute.manager [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:04:06 np0005473739 kernel: tap8bbf9c96-17 (unregistering): left promiscuous mode
Oct  7 10:04:06 np0005473739 NetworkManager[44949]: <info>  [1759845846.1678] device (tap8bbf9c96-17): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:04:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:06Z|00033|binding|INFO|Releasing lport 8bbf9c96-17e6-49df-8a58-e3557085f576 from this chassis (sb_readonly=0)
Oct  7 10:04:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:06Z|00034|binding|INFO|Setting lport 8bbf9c96-17e6-49df-8a58-e3557085f576 down in Southbound
Oct  7 10:04:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:06Z|00035|binding|INFO|Removing iface tap8bbf9c96-17 ovn-installed in OVS
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.217 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:39:1a 10.100.0.5'], port_security=['fa:16:3e:09:39:1a 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '5aa06cd5-91e7-4797-83c0-ddd3966533ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '380b73085cef431383bee110ceaefb15', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3b7853b-d94c-445c-810d-9b4dd15d78f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f05683d-d61c-46e7-a8b2-e5ebf47fffcc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8bbf9c96-17e6-49df-8a58-e3557085f576) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.218 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8bbf9c96-17e6-49df-8a58-e3557085f576 in datapath 384938fa-4eb0-4ec5-a6a4-bc65721ba22a unbound from our chassis#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.220 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 384938fa-4eb0-4ec5-a6a4-bc65721ba22a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.222 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[51cc40b8-e93c-4cb4-ab34-96c9687d8c83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.222 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a namespace which is not needed anymore#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.260 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845846.2593713, af17d051-72b6-45f1-b829-94d3a2939519 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.260 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:04:06 np0005473739 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Oct  7 10:04:06 np0005473739 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 16.702s CPU time.
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.263 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.263 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:04:06 np0005473739 systemd-machined[214580]: Machine qemu-3-instance-00000003 terminated.
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.271 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance spawned successfully.#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.271 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:04:06 np0005473739 NetworkManager[44949]: <info>  [1759845846.2845] manager: (tap8bbf9c96-17): new Tun device (/org/freedesktop/NetworkManager/Devices/31)
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.303 2 INFO nova.virt.libvirt.driver [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Instance destroyed successfully.#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.303 2 DEBUG nova.objects.instance [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lazy-loading 'resources' on Instance uuid 5aa06cd5-91e7-4797-83c0-ddd3966533ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.316 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.325 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.335 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.335 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.336 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.336 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.337 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.337 2 DEBUG nova.virt.libvirt.driver [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.347 2 DEBUG nova.virt.libvirt.vif [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:03:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-741383512',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-741383512',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(20),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-741383512',id=3,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=20,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYfAR9mj2imemzusfPwwtddn0lEjKMvdiXZQNbuwqTJFtx1U2M3BRUSevXd6qU4D5KOSepLHIPjFK2NZ957Ri2Kv5dYObirVp8T/b/ktoKbgTEgyyASZo/0n0wfyQLhEw==',key_name='tempest-keypair-1528796341',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:03:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='380b73085cef431383bee110ceaefb15',ramdisk_id='',reservation_id='r-9ayb8exy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:03:30Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2d78803d42674b89b4ea28d9a2442357',uuid=5aa06cd5-91e7-4797-83c0-ddd3966533ce,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.347 2 DEBUG nova.network.os_vif_util [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converting VIF {"id": "8bbf9c96-17e6-49df-8a58-e3557085f576", "address": "fa:16:3e:09:39:1a", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8bbf9c96-17", "ovs_interfaceid": "8bbf9c96-17e6-49df-8a58-e3557085f576", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.348 2 DEBUG nova.network.os_vif_util [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:39:1a,bridge_name='br-int',has_traffic_filtering=True,id=8bbf9c96-17e6-49df-8a58-e3557085f576,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bbf9c96-17') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.349 2 DEBUG os_vif [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:39:1a,bridge_name='br-int',has_traffic_filtering=True,id=8bbf9c96-17e6-49df-8a58-e3557085f576,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bbf9c96-17') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.352 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8bbf9c96-17, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.372 2 INFO os_vif [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:39:1a,bridge_name='br-int',has_traffic_filtering=True,id=8bbf9c96-17e6-49df-8a58-e3557085f576,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8bbf9c96-17')#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.394 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.395 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845846.2622466, af17d051-72b6-45f1-b829-94d3a2939519 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.395 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Started (Lifecycle Event)#033[00m
Oct  7 10:04:06 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [NOTICE]   (275779) : haproxy version is 2.8.14-c23fe91
Oct  7 10:04:06 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [NOTICE]   (275779) : path to executable is /usr/sbin/haproxy
Oct  7 10:04:06 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [WARNING]  (275779) : Exiting Master process...
Oct  7 10:04:06 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [ALERT]    (275779) : Current worker (275781) exited with code 143 (Terminated)
Oct  7 10:04:06 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[275764]: [WARNING]  (275779) : All workers exited. Exiting... (0)
Oct  7 10:04:06 np0005473739 systemd[1]: libpod-ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b.scope: Deactivated successfully.
Oct  7 10:04:06 np0005473739 podman[277705]: 2025-10-07 14:04:06.441154048 +0000 UTC m=+0.062446052 container died ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.456 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.461 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:04:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-84c7010ba2fc6f6ddfa22c1cc2034b39e4a61b0f4287273b69c1e4981fb9a1cb-merged.mount: Deactivated successfully.
Oct  7 10:04:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b-userdata-shm.mount: Deactivated successfully.
Oct  7 10:04:06 np0005473739 podman[277705]: 2025-10-07 14:04:06.50291993 +0000 UTC m=+0.124211934 container cleanup ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.510 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:04:06 np0005473739 systemd[1]: libpod-conmon-ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b.scope: Deactivated successfully.
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.556 2 INFO nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Took 3.85 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.557 2 DEBUG nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:06 np0005473739 podman[277750]: 2025-10-07 14:04:06.590119551 +0000 UTC m=+0.060465878 container remove ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.599 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[223e4954-7273-4f0d-baf3-6486674a245b]: (4, ('Tue Oct  7 02:04:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a (ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b)\nee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b\nTue Oct  7 02:04:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a (ee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b)\nee4d2cd3da53b0287a2fc0c91d3f42d1093a074ff7b35630b7963d189164da6b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.602 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a4a9274e-ec83-405e-a276-c116eb44e76b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.602 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap384938fa-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:06 np0005473739 kernel: tap384938fa-40: left promiscuous mode
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c078deeb-3311-40a0-9ba7-23c7f9d7324f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.645 2 INFO nova.compute.manager [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Took 4.76 seconds to build instance.#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.663 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b0394262-6759-499e-ace4-7f99841d6cf3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.664 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1e1996e-a0a2-4894-89f9-efb5719c4b13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.683 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[52d96de8-6495-4fb3-83a4-3febf9b35eb0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 639865, 'reachable_time': 20041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 277763, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:06 np0005473739 systemd[1]: run-netns-ovnmeta\x2d384938fa\x2d4eb0\x2d4ec5\x2da6a4\x2dbc65721ba22a.mount: Deactivated successfully.
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.699 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:04:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:06.699 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f544657e-fdb8-4f10-8899-089a80c3ee95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.770 2 DEBUG oslo_concurrency.lockutils [None req-59d8f2d9-db5a-446c-b985-d265ebbbaba7 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.937s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.869 2 DEBUG nova.compute.manager [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-vif-unplugged-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.869 2 DEBUG oslo_concurrency.lockutils [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.870 2 DEBUG oslo_concurrency.lockutils [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.870 2 DEBUG oslo_concurrency.lockutils [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.870 2 DEBUG nova.compute.manager [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] No waiting events found dispatching network-vif-unplugged-8bbf9c96-17e6-49df-8a58-e3557085f576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.870 2 DEBUG nova.compute.manager [req-7056e284-6cfd-43ca-b103-6a60098160d7 req-40d4e78d-0eee-40e8-9c3f-4befae2f8f74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-vif-unplugged-8bbf9c96-17e6-49df-8a58-e3557085f576 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.988 2 INFO nova.virt.libvirt.driver [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Deleting instance files /var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce_del#033[00m
Oct  7 10:04:06 np0005473739 nova_compute[259550]: 2025-10-07 14:04:06.989 2 INFO nova.virt.libvirt.driver [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Deletion of /var/lib/nova/instances/5aa06cd5-91e7-4797-83c0-ddd3966533ce_del complete#033[00m
Oct  7 10:04:07 np0005473739 nova_compute[259550]: 2025-10-07 14:04:07.318 2 INFO nova.compute.manager [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Took 1.25 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:04:07 np0005473739 nova_compute[259550]: 2025-10-07 14:04:07.318 2 DEBUG oslo.service.loopingcall [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:04:07 np0005473739 nova_compute[259550]: 2025-10-07 14:04:07.318 2 DEBUG nova.compute.manager [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:04:07 np0005473739 nova_compute[259550]: 2025-10-07 14:04:07.319 2 DEBUG nova.network.neutron [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:04:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1104: 305 pgs: 305 active+clean; 167 MiB data, 288 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #51. Immutable memtables: 0.
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.534358) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 25] Flushing memtable with next log file: 51
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847534405, "job": 25, "event": "flush_started", "num_memtables": 1, "num_entries": 2091, "num_deletes": 252, "total_data_size": 3402313, "memory_usage": 3457216, "flush_reason": "Manual Compaction"}
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 25] Level-0 flush table #52: started
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847561618, "cf_name": "default", "job": 25, "event": "table_file_creation", "file_number": 52, "file_size": 3313483, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 20964, "largest_seqno": 23054, "table_properties": {"data_size": 3304072, "index_size": 5905, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19380, "raw_average_key_size": 20, "raw_value_size": 3285069, "raw_average_value_size": 3425, "num_data_blocks": 266, "num_entries": 959, "num_filter_entries": 959, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759845639, "oldest_key_time": 1759845639, "file_creation_time": 1759845847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 52, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 25] Flush lasted 27320 microseconds, and 9017 cpu microseconds.
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.561674) [db/flush_job.cc:967] [default] [JOB 25] Level-0 flush table #52: 3313483 bytes OK
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.561699) [db/memtable_list.cc:519] [default] Level-0 commit table #52 started
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.568415) [db/memtable_list.cc:722] [default] Level-0 commit table #52: memtable #1 done
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.568438) EVENT_LOG_v1 {"time_micros": 1759845847568430, "job": 25, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.568459) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 25] Try to delete WAL files size 3393532, prev total WAL file size 3393532, number of live WAL files 2.
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000048.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.569501) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730031373537' seq:72057594037927935, type:22 .. '7061786F730032303039' seq:0, type:0; will stop at (end)
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 26] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 25 Base level 0, inputs: [52(3235KB)], [50(7424KB)]
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847569537, "job": 26, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [52], "files_L6": [50], "score": -1, "input_data_size": 10916657, "oldest_snapshot_seqno": -1}
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 26] Generated table #53: 4770 keys, 9171902 bytes, temperature: kUnknown
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847646350, "cf_name": "default", "job": 26, "event": "table_file_creation", "file_number": 53, "file_size": 9171902, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9137441, "index_size": 21423, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11973, "raw_key_size": 116896, "raw_average_key_size": 24, "raw_value_size": 9048679, "raw_average_value_size": 1896, "num_data_blocks": 901, "num_entries": 4770, "num_filter_entries": 4770, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759845847, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 53, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.646749) [db/compaction/compaction_job.cc:1663] [default] [JOB 26] Compacted 1@0 + 1@6 files to L6 => 9171902 bytes
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.649543) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 141.9 rd, 119.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.3 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(6.1) write-amplify(2.8) OK, records in: 5291, records dropped: 521 output_compression: NoCompression
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.649566) EVENT_LOG_v1 {"time_micros": 1759845847649555, "job": 26, "event": "compaction_finished", "compaction_time_micros": 76928, "compaction_time_cpu_micros": 25415, "output_level": 6, "num_output_files": 1, "total_output_size": 9171902, "num_input_records": 5291, "num_output_records": 4770, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000052.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847650328, "job": 26, "event": "table_file_deletion", "file_number": 52}
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000050.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845847651962, "job": 26, "event": "table_file_deletion", "file_number": 50}
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.569401) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.652027) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.652034) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.652035) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.652037) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:04:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:04:07.652038) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:04:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1105: 305 pgs: 305 active+clean; 107 MiB data, 261 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  7 10:04:09 np0005473739 nova_compute[259550]: 2025-10-07 14:04:09.748 2 DEBUG nova.compute.manager [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:04:09 np0005473739 nova_compute[259550]: 2025-10-07 14:04:09.748 2 DEBUG oslo_concurrency.lockutils [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:09 np0005473739 nova_compute[259550]: 2025-10-07 14:04:09.748 2 DEBUG oslo_concurrency.lockutils [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:09 np0005473739 nova_compute[259550]: 2025-10-07 14:04:09.749 2 DEBUG oslo_concurrency.lockutils [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:09 np0005473739 nova_compute[259550]: 2025-10-07 14:04:09.749 2 DEBUG nova.compute.manager [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] No waiting events found dispatching network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:04:09 np0005473739 nova_compute[259550]: 2025-10-07 14:04:09.749 2 WARNING nova.compute.manager [req-1370c5a4-fead-47da-ba58-090dce7b9fe1 req-0fff700a-6555-4c01-94b1-8ec450537bb7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received unexpected event network-vif-plugged-8bbf9c96-17e6-49df-8a58-e3557085f576 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:04:09 np0005473739 nova_compute[259550]: 2025-10-07 14:04:09.895 2 DEBUG nova.network.neutron [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:04:09 np0005473739 nova_compute[259550]: 2025-10-07 14:04:09.920 2 INFO nova.compute.manager [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Took 2.60 seconds to deallocate network for instance.#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.099 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.099 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.172 2 DEBUG oslo_concurrency.processutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:04:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002777242' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.672 2 DEBUG oslo_concurrency.processutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.678 2 DEBUG nova.compute.provider_tree [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.702 2 DEBUG nova.scheduler.client.report [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.726 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.626s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.748 2 INFO nova.scheduler.client.report [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Deleted allocations for instance 5aa06cd5-91e7-4797-83c0-ddd3966533ce#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.821 2 DEBUG oslo_concurrency.lockutils [None req-4b5ac096-2091-4d93-931e-de82600e7e9f 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "5aa06cd5-91e7-4797-83c0-ddd3966533ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.763s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.865 2 INFO nova.compute.manager [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Rebuilding instance#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.878 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.879 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.897 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.997 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:10 np0005473739 nova_compute[259550]: 2025-10-07 14:04:10.998 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.004 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.005 2 INFO nova.compute.claims [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.140 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.188 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'trusted_certs' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.207 2 DEBUG nova.compute.manager [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.248 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'pci_requests' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.285 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'pci_devices' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.341 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'resources' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.353 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'migration_context' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.364 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.370 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:04:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1106: 305 pgs: 305 active+clean; 88 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct  7 10:04:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:04:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2066137532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.656 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.662 2 DEBUG nova.compute.provider_tree [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.678 2 DEBUG nova.scheduler.client.report [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.700 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.703s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.702 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.746 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.747 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.774 2 INFO nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.807 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.870 2 DEBUG nova.compute.manager [req-bdca8af6-7780-48d0-8b79-3915dcbdd0b9 req-ee6fb98b-507d-47e1-8fa6-19244796c538 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Received event network-vif-deleted-8bbf9c96-17e6-49df-8a58-e3557085f576 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.931 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.932 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.933 2 INFO nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Creating image(s)#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.957 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:11 np0005473739 nova_compute[259550]: 2025-10-07 14:04:11.981 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.007 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.014 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.086 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.087 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.087 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.088 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.114 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.121 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 01404604-70e9-49ec-9047-ec42ba41afee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.209 2 DEBUG nova.policy [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2d78803d42674b89b4ea28d9a2442357', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '380b73085cef431383bee110ceaefb15', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.426 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 01404604-70e9-49ec-9047-ec42ba41afee_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.304s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.488 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] resizing rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.598 2 DEBUG nova.objects.instance [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lazy-loading 'migration_context' on Instance uuid 01404604-70e9-49ec-9047-ec42ba41afee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.635 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.661 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.665 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.666 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.666 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.698 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.699 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.743 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.745 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.769 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0 does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:04:12 np0005473739 nova_compute[259550]: 2025-10-07 14:04:12.774 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.059 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Successfully created port: 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:04:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1107: 305 pgs: 305 active+clean; 88 MiB data, 250 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.832 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Successfully updated port: 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.836 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/ephemeral_1_0706d66 01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0 --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.867 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.868 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquired lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.869 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.944 2 DEBUG nova.compute.manager [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-changed-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.945 2 DEBUG nova.compute.manager [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Refreshing instance network info cache due to event network-changed-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.945 2 DEBUG oslo_concurrency.lockutils [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.952 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.952 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Ensure instance console log exists: /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.953 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.953 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:04:13 np0005473739 nova_compute[259550]: 2025-10-07 14:04:13.954 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:04:14 np0005473739 podman[278107]: 2025-10-07 14:04:14.079803069 +0000 UTC m=+0.063188230 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:04:14 np0005473739 podman[278106]: 2025-10-07 14:04:14.109083452 +0000 UTC m=+0.091972701 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:04:14 np0005473739 nova_compute[259550]: 2025-10-07 14:04:14.447 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:04:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1108: 305 pgs: 305 active+clean; 117 MiB data, 265 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 145 op/s
Oct  7 10:04:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.591 2 DEBUG nova.network.neutron [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updating instance_info_cache with network_info: [{"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.747 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Releasing lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.749 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Instance network_info: |[{"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.750 2 DEBUG oslo_concurrency.lockutils [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.751 2 DEBUG nova.network.neutron [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Refreshing network info cache for port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.754 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Start _get_guest_xml network_info=[{"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [{'guest_format': None, 'device_type': 'disk', 'size': 1, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.760 2 WARNING nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.770 2 DEBUG nova.virt.libvirt.host [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.771 2 DEBUG nova.virt.libvirt.host [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.775 2 DEBUG nova.virt.libvirt.host [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.776 2 DEBUG nova.virt.libvirt.host [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.776 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.776 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:03:06Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={hw_rng:allowed='True'},flavorid='21014677',id=19,is_public=True,memory_mb=128,name='tempest-flavor_with_ephemeral_1-576165011',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.777 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.777 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.778 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.778 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.778 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.779 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.779 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.780 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.780 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.780 2 DEBUG nova.virt.hardware [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 10:04:15 np0005473739 nova_compute[259550]: 2025-10-07 14:04:15.784 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:04:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/255063289' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:04:16 np0005473739 nova_compute[259550]: 2025-10-07 14:04:16.239 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:04:16 np0005473739 nova_compute[259550]: 2025-10-07 14:04:16.241 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:16 np0005473739 nova_compute[259550]: 2025-10-07 14:04:16.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:04:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:04:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/291129585' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:04:16 np0005473739 nova_compute[259550]: 2025-10-07 14:04:16.718 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:04:16 np0005473739 nova_compute[259550]: 2025-10-07 14:04:16.740 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:04:16 np0005473739 nova_compute[259550]: 2025-10-07 14:04:16.744 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:16 np0005473739 nova_compute[259550]: 2025-10-07 14:04:16.917 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:04:16 np0005473739 nova_compute[259550]: 2025-10-07 14:04:16.919 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:04:16 np0005473739 nova_compute[259550]: 2025-10-07 14:04:16.965 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:04:16 np0005473739 nova_compute[259550]: 2025-10-07 14:04:16.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.029 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.030 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.037 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.038 2 INFO nova.compute.claims [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.060 2 DEBUG nova.network.neutron [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updated VIF entry in instance network info cache for port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.061 2 DEBUG nova.network.neutron [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updating instance_info_cache with network_info: [{"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.142 2 DEBUG oslo_concurrency.lockutils [req-b4e07baa-859a-4542-83f2-bc4eac4a8b61 req-b99b0f27-440c-4fe0-98a0-0fcddb6c634a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:04:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:04:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2723830996' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.191 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.193 2 DEBUG nova.virt.libvirt.vif [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(19),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1945845741',id=7,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=19,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYfAR9mj2imemzusfPwwtddn0lEjKMvdiXZQNbuwqTJFtx1U2M3BRUSevXd6qU4D5KOSepLHIPjFK2NZ957Ri2Kv5dYObirVp8T/b/ktoKbgTEgyyASZo/0n0wfyQLhEw==',key_name='tempest-keypair-1528796341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='380b73085cef431383bee110ceaefb15',ramdisk_id='',reservation_id='r-gkxpo37o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2d78803d42674b89b4ea28d9a2442357',uuid=01404604-70e9-49ec-9047-ec42ba41afee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.193 2 DEBUG nova.network.os_vif_util [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converting VIF {"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.194 2 DEBUG nova.network.os_vif_util [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.195 2 DEBUG nova.objects.instance [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lazy-loading 'pci_devices' on Instance uuid 01404604-70e9-49ec-9047-ec42ba41afee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.210 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  <uuid>01404604-70e9-49ec-9047-ec42ba41afee</uuid>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  <name>instance-00000007</name>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersWithSpecificFlavorTestJSON-server-1945845741</nova:name>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:04:15</nova:creationTime>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <nova:flavor name="tempest-flavor_with_ephemeral_1-576165011">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <nova:ephemeral>1</nova:ephemeral>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <nova:user uuid="2d78803d42674b89b4ea28d9a2442357">tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member</nova:user>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <nova:project uuid="380b73085cef431383bee110ceaefb15">tempest-ServersWithSpecificFlavorTestJSON-1902159686</nova:project>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <nova:port uuid="60081c2f-b84d-467f-85f2-c2dd4b4fb4c5">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <entry name="serial">01404604-70e9-49ec-9047-ec42ba41afee</entry>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <entry name="uuid">01404604-70e9-49ec-9047-ec42ba41afee</entry>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/01404604-70e9-49ec-9047-ec42ba41afee_disk">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/01404604-70e9-49ec-9047-ec42ba41afee_disk.eph0">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <target dev="vdb" bus="virtio"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/01404604-70e9-49ec-9047-ec42ba41afee_disk.config">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:1e:ad:77"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <target dev="tap60081c2f-b8"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/console.log" append="off"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:04:17 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:04:17 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:04:17 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:04:17 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.214 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Preparing to wait for external event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.214 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.214 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.217 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.218 2 DEBUG nova.virt.libvirt.vif [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(19),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1945845741',id=7,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=19,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYfAR9mj2imemzusfPwwtddn0lEjKMvdiXZQNbuwqTJFtx1U2M3BRUSevXd6qU4D5KOSepLHIPjFK2NZ957Ri2Kv5dYObirVp8T/b/ktoKbgTEgyyASZo/0n0wfyQLhEw==',key_name='tempest-keypair-1528796341',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='380b73085cef431383bee110ceaefb15',ramdisk_id='',reservation_id='r-gkxpo37o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:04:11Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2d78803d42674b89b4ea28d9a2442357',uuid=01404604-70e9-49ec-9047-ec42ba41afee,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.218 2 DEBUG nova.network.os_vif_util [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converting VIF {"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.219 2 DEBUG nova.network.os_vif_util [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.220 2 DEBUG os_vif [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.224 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.225 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap60081c2f-b8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.229 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap60081c2f-b8, col_values=(('external_ids', {'iface-id': '60081c2f-b84d-467f-85f2-c2dd4b4fb4c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:ad:77', 'vm-uuid': '01404604-70e9-49ec-9047-ec42ba41afee'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:17 np0005473739 NetworkManager[44949]: <info>  [1759845857.2322] manager: (tap60081c2f-b8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/32)
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.239 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.266 2 INFO os_vif [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8')#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.357 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.358 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.358 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.358 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] No VIF found with MAC fa:16:3e:1e:ad:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.358 2 INFO nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Using config drive#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.384 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1109: 305 pgs: 305 active+clean; 136 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 148 op/s
Oct  7 10:04:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:04:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/359812699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.696 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.703 2 DEBUG nova.compute.provider_tree [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.708 2 INFO nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Creating config drive at /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.714 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjk4vtyjg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.754 2 DEBUG nova.scheduler.client.report [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.787 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.788 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.846 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjk4vtyjg" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.846 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.873 2 DEBUG nova.storage.rbd_utils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] rbd image 01404604-70e9-49ec-9047-ec42ba41afee_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.884 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config 01404604-70e9-49ec-9047-ec42ba41afee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:17 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #43. Immutable memtables: 0.
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.921 2 INFO nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:04:17 np0005473739 nova_compute[259550]: 2025-10-07 14:04:17.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.050 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.142 2 DEBUG oslo_concurrency.processutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config 01404604-70e9-49ec-9047-ec42ba41afee_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.258s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.143 2 INFO nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Deleting local config drive /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee/disk.config because it was imported into RBD.#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.171 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.173 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.173 2 INFO nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Creating image(s)#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.200 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:18 np0005473739 kernel: tap60081c2f-b8: entered promiscuous mode
Oct  7 10:04:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:18Z|00036|binding|INFO|Claiming lport 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 for this chassis.
Oct  7 10:04:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:18Z|00037|binding|INFO|60081c2f-b84d-467f-85f2-c2dd4b4fb4c5: Claiming fa:16:3e:1e:ad:77 10.100.0.9
Oct  7 10:04:18 np0005473739 NetworkManager[44949]: <info>  [1759845858.2107] manager: (tap60081c2f-b8): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.218 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ad:77 10.100.0.9'], port_security=['fa:16:3e:1e:ad:77 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '01404604-70e9-49ec-9047-ec42ba41afee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '380b73085cef431383bee110ceaefb15', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd3b7853b-d94c-445c-810d-9b4dd15d78f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f05683d-d61c-46e7-a8b2-e5ebf47fffcc, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.220 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 in datapath 384938fa-4eb0-4ec5-a6a4-bc65721ba22a bound to our chassis#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.221 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 384938fa-4eb0-4ec5-a6a4-bc65721ba22a#033[00m
Oct  7 10:04:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:18Z|00038|binding|INFO|Setting lport 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 ovn-installed in OVS
Oct  7 10:04:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:18Z|00039|binding|INFO|Setting lport 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 up in Southbound
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.236 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b7575b14-1d70-4b29-8cc1-bf6f76324415]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.237 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap384938fa-41 in ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.239 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap384938fa-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.239 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3effa3b-c9f5-4add-bfc6-f43cb7820e89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.239 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[305e15c7-f4c7-4932-b053-e11f210227c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.244 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.253 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b37842-42e4-4379-b356-231aa2dbcf25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 systemd-machined[214580]: New machine qemu-7-instance-00000007.
Oct  7 10:04:18 np0005473739 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Oct  7 10:04:18 np0005473739 systemd-udevd[278372]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.290 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78a38714-cb21-4d91-bc28-fed7051d5213]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 NetworkManager[44949]: <info>  [1759845858.2949] device (tap60081c2f-b8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:04:18 np0005473739 NetworkManager[44949]: <info>  [1759845858.2973] device (tap60081c2f-b8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.314 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.327 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[29fe25af-6d8f-41b3-9887-fc0a16fa3a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.332 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.333 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4ab52c19-bef5-42cc-b401-dba64ed5eecb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 NetworkManager[44949]: <info>  [1759845858.3345] manager: (tap384938fa-40): new Veth device (/org/freedesktop/NetworkManager/Devices/34)
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.377 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3cf347c4-3e0c-48e5-b67c-3bcfc154e876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.381 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[012a5885-6d3c-4ce5-94c3-da97befb4c8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 NetworkManager[44949]: <info>  [1759845858.4095] device (tap384938fa-40): carrier: link connected
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.410 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.411 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.412 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.412 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.417 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe14be9-85c8-405a-8cb7-fc540efa8a2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.443 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.445 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0fe64a74-bcf5-4d95-a592-693004d41cc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap384938fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:dc:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644197, 'reachable_time': 16446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 278430, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.451 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.468 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcecc35-1656-41cd-9e72-b2f3bfbd64ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe1c:dc9f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 644197, 'tstamp': 644197}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 278435, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.493 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7d051d6b-c39a-4fe5-af55-06a324c06604]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap384938fa-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:1c:dc:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 18], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644197, 'reachable_time': 16446, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 278437, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.536 2 DEBUG nova.compute.manager [req-17a65ac9-fdb0-46e4-9287-e71214345525 req-8207175a-f69a-44dd-896b-8ed545add165 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.536 2 DEBUG oslo_concurrency.lockutils [req-17a65ac9-fdb0-46e4-9287-e71214345525 req-8207175a-f69a-44dd-896b-8ed545add165 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.536 2 DEBUG oslo_concurrency.lockutils [req-17a65ac9-fdb0-46e4-9287-e71214345525 req-8207175a-f69a-44dd-896b-8ed545add165 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.537 2 DEBUG oslo_concurrency.lockutils [req-17a65ac9-fdb0-46e4-9287-e71214345525 req-8207175a-f69a-44dd-896b-8ed545add165 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.537 2 DEBUG nova.compute.manager [req-17a65ac9-fdb0-46e4-9287-e71214345525 req-8207175a-f69a-44dd-896b-8ed545add165 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Processing event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.537 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[79b7eca0-8874-45e9-8cc6-24de7a9cd177]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dba943e2-7241-4ede-8474-fd039ff3ff97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.623 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap384938fa-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.623 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.624 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap384938fa-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:04:18 np0005473739 NetworkManager[44949]: <info>  [1759845858.6270] manager: (tap384938fa-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/35)
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:18 np0005473739 kernel: tap384938fa-40: entered promiscuous mode
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.630 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap384938fa-40, col_values=(('external_ids', {'iface-id': '86c408a4-938c-4caa-9ec3-5622a47990e3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:18Z|00040|binding|INFO|Releasing lport 86c408a4-938c-4caa-9ec3-5622a47990e3 from this chassis (sb_readonly=0)
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.648 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.652 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f43eaab-57e5-453d-82f3-1185139fe0ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.653 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-384938fa-4eb0-4ec5-a6a4-bc65721ba22a
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.pid.haproxy
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 384938fa-4eb0-4ec5-a6a4-bc65721ba22a
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:04:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:18.654 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'env', 'PROCESS_TAG=haproxy-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/384938fa-4eb0-4ec5-a6a4-bc65721ba22a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.792 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.341s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:18 np0005473739 nova_compute[259550]: 2025-10-07 14:04:18.868 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] resizing rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.010 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.020 2 DEBUG nova.objects.instance [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.035 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.036 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Ensure instance console log exists: /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.036 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.037 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.037 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.039 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.043 2 WARNING nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.052 2 DEBUG nova.virt.libvirt.host [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.053 2 DEBUG nova.virt.libvirt.host [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.058 2 DEBUG nova.virt.libvirt.host [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.059 2 DEBUG nova.virt.libvirt.host [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.059 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.060 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.060 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.061 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.061 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.061 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.061 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.061 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.062 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.062 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.062 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.062 2 DEBUG nova.virt.hardware [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.066 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:19 np0005473739 podman[278618]: 2025-10-07 14:04:19.129947263 +0000 UTC m=+0.089981707 container create 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  7 10:04:19 np0005473739 podman[278618]: 2025-10-07 14:04:19.072147198 +0000 UTC m=+0.032181672 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:04:19 np0005473739 systemd[1]: Started libpod-conmon-5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2.scope.
Oct  7 10:04:19 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:04:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0ba8972aef5751b49df55a9acd3f9f51d77eb178dc3254adc380d65c0d9ec3cd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:04:19 np0005473739 podman[278618]: 2025-10-07 14:04:19.248140665 +0000 UTC m=+0.208175129 container init 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:04:19 np0005473739 podman[278618]: 2025-10-07 14:04:19.258016179 +0000 UTC m=+0.218050623 container start 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:04:19 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [NOTICE]   (278658) : New worker (278660) forked
Oct  7 10:04:19 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [NOTICE]   (278658) : Loading success.
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.422 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845859.4216967, 01404604-70e9-49ec-9047-ec42ba41afee => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.426 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] VM Started (Lifecycle Event)#033[00m
Oct  7 10:04:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1110: 305 pgs: 305 active+clean; 162 MiB data, 278 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 166 op/s
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.437 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.445 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.452 2 INFO nova.virt.libvirt.driver [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Instance spawned successfully.#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.453 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.480 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.488 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.498 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.499 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.501 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.502 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.504 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.505 2 DEBUG nova.virt.libvirt.driver [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:04:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/531227528' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.553 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.554 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845859.4259088, 01404604-70e9-49ec-9047-ec42ba41afee => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.554 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.572 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.618 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.627 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.661 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.678 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845859.44624, 01404604-70e9-49ec-9047-ec42ba41afee => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.679 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.740 2 INFO nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Took 7.81 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.740 2 DEBUG nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.762 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.766 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.848 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:04:19 np0005473739 nova_compute[259550]: 2025-10-07 14:04:19.943 2 INFO nova.compute.manager [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Took 8.99 seconds to build instance.#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.061 2 DEBUG oslo_concurrency.lockutils [None req-f43ba741-76b0-48a0-8bef-7c768d379b1a 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:04:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1886957082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.089 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.091 2 DEBUG nova.objects.instance [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.129 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  <uuid>9b81bc46-3d46-47e9-85da-b3f62c6db7b2</uuid>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  <name>instance-00000008</name>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerDiagnosticsV248Test-server-448006730</nova:name>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:04:19</nova:creationTime>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:        <nova:user uuid="affb512bf16041e687eef2ef7709dca5">tempest-ServerDiagnosticsV248Test-1776882737-project-member</nova:user>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:        <nova:project uuid="4ba32c1825e14b5c9c382bea6c3047c5">tempest-ServerDiagnosticsV248Test-1776882737</nova:project>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <entry name="serial">9b81bc46-3d46-47e9-85da-b3f62c6db7b2</entry>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <entry name="uuid">9b81bc46-3d46-47e9-85da-b3f62c6db7b2</entry>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/console.log" append="off"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:04:20 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:04:20 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:04:20 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:04:20 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.183 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.184 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.184 2 INFO nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Using config drive#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.215 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.558 2 INFO nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Creating config drive at /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.565 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj2quij2t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.667 2 DEBUG nova.compute.manager [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.668 2 DEBUG oslo_concurrency.lockutils [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.668 2 DEBUG oslo_concurrency.lockutils [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.669 2 DEBUG oslo_concurrency.lockutils [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.669 2 DEBUG nova.compute.manager [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] No waiting events found dispatching network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.669 2 WARNING nova.compute.manager [req-b276ad41-5f8a-4924-b925-c3c93ff9172a req-37b93cdc-fe8a-4e1d-8c06-9fd4bb0eb0f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received unexpected event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.696 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj2quij2t" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.720 2 DEBUG nova.storage.rbd_utils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] rbd image 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:20 np0005473739 nova_compute[259550]: 2025-10-07 14:04:20.724 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:21 np0005473739 nova_compute[259550]: 2025-10-07 14:04:21.310 2 DEBUG oslo_concurrency.processutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config 9b81bc46-3d46-47e9-85da-b3f62c6db7b2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.585s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:21 np0005473739 nova_compute[259550]: 2025-10-07 14:04:21.311 2 INFO nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Deleting local config drive /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2/disk.config because it was imported into RBD.#033[00m
Oct  7 10:04:21 np0005473739 systemd-machined[214580]: New machine qemu-8-instance-00000008.
Oct  7 10:04:21 np0005473739 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Oct  7 10:04:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1111: 305 pgs: 305 active+clean; 199 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 769 KiB/s rd, 5.5 MiB/s wr, 153 op/s
Oct  7 10:04:21 np0005473739 nova_compute[259550]: 2025-10-07 14:04:21.512 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845846.299679, 5aa06cd5-91e7-4797-83c0-ddd3966533ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:21 np0005473739 nova_compute[259550]: 2025-10-07 14:04:21.513 2 INFO nova.compute.manager [-] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:04:21 np0005473739 nova_compute[259550]: 2025-10-07 14:04:21.529 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:04:21 np0005473739 nova_compute[259550]: 2025-10-07 14:04:21.531 2 DEBUG nova.compute.manager [None req-4c3a0000-e7bf-4812-8ba6-3939a5330641 - - - - - -] [instance: 5aa06cd5-91e7-4797-83c0-ddd3966533ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:21 np0005473739 nova_compute[259550]: 2025-10-07 14:04:21.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.193 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845862.1926048, 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.193 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.196 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.196 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.201 2 INFO nova.virt.libvirt.driver [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Instance spawned successfully.#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.201 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.213 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.219 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.231 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.231 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.232 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.232 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.232 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.233 2 DEBUG nova.virt.libvirt.driver [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.240 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.240 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845862.1955829, 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.241 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] VM Started (Lifecycle Event)#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.270 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.274 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.297 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.307 2 INFO nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Took 4.14 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.307 2 DEBUG nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.360 2 INFO nova.compute.manager [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Took 5.35 seconds to build instance.#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.382 2 DEBUG nova.compute.manager [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-changed-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.382 2 DEBUG nova.compute.manager [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Refreshing instance network info cache due to event network-changed-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.383 2 DEBUG oslo_concurrency.lockutils [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.383 2 DEBUG oslo_concurrency.lockutils [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.383 2 DEBUG nova.network.neutron [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Refreshing network info cache for port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.385 2 DEBUG oslo_concurrency.lockutils [None req-40471a5b-693f-47c9-bead-a3e87ec6af15 affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:04:22
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', '.mgr', 'default.rgw.control', 'default.rgw.log', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', 'volumes', 'vms', 'cephfs.cephfs.meta', 'backups']
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:04:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:04:22 np0005473739 nova_compute[259550]: 2025-10-07 14:04:22.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.011 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.013 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1112: 305 pgs: 305 active+clean; 199 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 342 KiB/s rd, 5.5 MiB/s wr, 126 op/s
Oct  7 10:04:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:04:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2116738628' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.556 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.857 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.859 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.859 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000007 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.864 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.864 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000008 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:04:23 np0005473739 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct  7 10:04:23 np0005473739 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 13.563s CPU time.
Oct  7 10:04:23 np0005473739 systemd-machined[214580]: Machine qemu-6-instance-00000006 terminated.
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.881 2 DEBUG nova.network.neutron [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updated VIF entry in instance network info cache for port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:04:23 np0005473739 nova_compute[259550]: 2025-10-07 14:04:23.882 2 DEBUG nova.network.neutron [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updating instance_info_cache with network_info: [{"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.032 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.033 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000006 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.152 2 DEBUG nova.compute.manager [None req-3eb5139d-5ad7-4f85-a2e6-13df2140e66d 9ed39eb9f7ad458fa8a7f2195995dd29 83b0f5fde49d43c9a200759be3d68a34 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.153 2 DEBUG oslo_concurrency.lockutils [req-1dbb1bfa-7dc5-41cb-921b-3bb0cd75ec69 req-4358e4c9-6c72-4162-9a0f-686e9771c3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-01404604-70e9-49ec-9047-ec42ba41afee" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.158 2 INFO nova.compute.manager [None req-3eb5139d-5ad7-4f85-a2e6-13df2140e66d 9ed39eb9f7ad458fa8a7f2195995dd29 83b0f5fde49d43c9a200759be3d68a34 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Retrieving diagnostics#033[00m
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.255 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.257 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4404MB free_disk=59.90228271484375GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.257 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.257 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.336 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance af17d051-72b6-45f1-b829-94d3a2939519 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.337 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 01404604-70e9-49ec-9047-ec42ba41afee actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.337 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.338 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.338 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.411 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.552 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance shutdown successfully after 13 seconds.
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.566 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance destroyed successfully.
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.571 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance destroyed successfully.
Oct  7 10:04:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:04:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417708911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.858 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.865 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.884 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.906 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  7 10:04:24 np0005473739 nova_compute[259550]: 2025-10-07 14:04:24.907 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:04:25 np0005473739 podman[278893]: 2025-10-07 14:04:25.12544653 +0000 UTC m=+0.110809405 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:04:25 np0005473739 nova_compute[259550]: 2025-10-07 14:04:25.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:04:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1113: 305 pgs: 305 active+clean; 216 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 5.7 MiB/s wr, 206 op/s
Oct  7 10:04:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:25 np0005473739 nova_compute[259550]: 2025-10-07 14:04:25.507 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting instance files /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del
Oct  7 10:04:25 np0005473739 nova_compute[259550]: 2025-10-07 14:04:25.508 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deletion of /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del complete
Oct  7 10:04:25 np0005473739 nova_compute[259550]: 2025-10-07 14:04:25.634 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:04:25 np0005473739 nova_compute[259550]: 2025-10-07 14:04:25.635 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating image(s)
Oct  7 10:04:25 np0005473739 nova_compute[259550]: 2025-10-07 14:04:25.662 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:04:25 np0005473739 nova_compute[259550]: 2025-10-07 14:04:25.702 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:04:25 np0005473739 nova_compute[259550]: 2025-10-07 14:04:25.736 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:04:25 np0005473739 nova_compute[259550]: 2025-10-07 14:04:25.740 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:04:25 np0005473739 nova_compute[259550]: 2025-10-07 14:04:25.741 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:04:25 np0005473739 nova_compute[259550]: 2025-10-07 14:04:25.978 2 DEBUG nova.virt.libvirt.imagebackend [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/d37bdf89-ce37-478a-af4d-2b9cd0435b79/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/d37bdf89-ce37-478a-af4d-2b9cd0435b79/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.013 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.105 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.part --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.107 2 DEBUG nova.virt.images [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] d37bdf89-ce37-478a-af4d-2b9cd0435b79 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.108 2 DEBUG nova.privsep.utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.111 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.part /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:04:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1114: 305 pgs: 305 active+clean; 194 MiB data, 336 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.5 MiB/s wr, 279 op/s
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.620 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.part /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.converted" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.627 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.702 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2.converted --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.704 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.963s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.733 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.737 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.958 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.958 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  7 10:04:27 np0005473739 nova_compute[259550]: 2025-10-07 14:04:27.958 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.266 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.267 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.268 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.268 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.293 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.368 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] resizing rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.496 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.496 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Ensure instance console log exists: /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.497 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.497 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.498 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.499 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.502 2 WARNING nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.512 2 DEBUG nova.virt.libvirt.host [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.513 2 DEBUG nova.virt.libvirt.host [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.517 2 DEBUG nova.virt.libvirt.host [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.517 2 DEBUG nova.virt.libvirt.host [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.519 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.519 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.520 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.520 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.520 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.520 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.521 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.521 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.521 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.521 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.521 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.522 2 DEBUG nova.virt.hardware [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.522 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'vcpu_model' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.574 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:04:28 np0005473739 nova_compute[259550]: 2025-10-07 14:04:28.816 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:04:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:04:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3885171355' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.049 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:04:29 np0005473739 podman[279122]: 2025-10-07 14:04:29.086457515 +0000 UTC m=+0.065055810 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.098 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.104 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.132 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.174 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.175 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.175 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.176 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:04:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1115: 305 pgs: 305 active+clean; 158 MiB data, 303 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 4.5 MiB/s wr, 303 op/s
Oct  7 10:04:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:04:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1605597078' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.592 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.595 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  <uuid>af17d051-72b6-45f1-b829-94d3a2939519</uuid>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  <name>instance-00000006</name>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersAdmin275Test-server-2034007563</nova:name>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:04:28</nova:creationTime>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:        <nova:user uuid="0418d9872e6041138cc90a3aa74cce48">tempest-ServersAdmin275Test-1410041514-project-member</nova:user>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:        <nova:project uuid="60e63a33759f4241845fccdb5c104b64">tempest-ServersAdmin275Test-1410041514</nova:project>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <entry name="serial">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <entry name="uuid">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk.config">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log" append="off"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:04:29 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:04:29 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:04:29 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:04:29 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.735 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.736 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.738 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Using config drive#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.778 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.827 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'ec2_ids' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:29 np0005473739 nova_compute[259550]: 2025-10-07 14:04:29.993 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'keypairs' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:30 np0005473739 nova_compute[259550]: 2025-10-07 14:04:30.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:30 np0005473739 nova_compute[259550]: 2025-10-07 14:04:30.486 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating config drive at /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config#033[00m
Oct  7 10:04:30 np0005473739 nova_compute[259550]: 2025-10-07 14:04:30.494 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzg73a04q execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:30 np0005473739 nova_compute[259550]: 2025-10-07 14:04:30.641 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzg73a04q" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:30 np0005473739 nova_compute[259550]: 2025-10-07 14:04:30.703 2 DEBUG nova.storage.rbd_utils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:30 np0005473739 nova_compute[259550]: 2025-10-07 14:04:30.713 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:30 np0005473739 nova_compute[259550]: 2025-10-07 14:04:30.911 2 DEBUG oslo_concurrency.processutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:30 np0005473739 nova_compute[259550]: 2025-10-07 14:04:30.913 2 INFO nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting local config drive /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config because it was imported into RBD.#033[00m
Oct  7 10:04:30 np0005473739 systemd-machined[214580]: New machine qemu-9-instance-00000006.
Oct  7 10:04:30 np0005473739 systemd[1]: Started Virtual Machine qemu-9-instance-00000006.
Oct  7 10:04:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1116: 305 pgs: 305 active+clean; 183 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 3.9 MiB/s wr, 272 op/s
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.842 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for af17d051-72b6-45f1-b829-94d3a2939519 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.845 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845871.84188, af17d051-72b6-45f1-b829-94d3a2939519 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.846 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.849 2 DEBUG nova.compute.manager [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.850 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.856 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance spawned successfully.#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.858 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.941 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.947 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.959 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.960 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.960 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.961 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.961 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:31 np0005473739 nova_compute[259550]: 2025-10-07 14:04:31.962 2 DEBUG nova.virt.libvirt.driver [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.207 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.212 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845871.8443518, af17d051-72b6-45f1-b829-94d3a2939519 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.212 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Started (Lifecycle Event)#033[00m
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010434599246272058 of space, bias 1.0, pg target 0.31303797738816175 quantized to 32 (current 32)
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.279 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.286 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.307 2 DEBUG nova.compute.manager [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.446 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.501 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.502 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.502 2 DEBUG nova.objects.instance [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:04:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:04:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229173120' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:04:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:04:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2229173120' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:04:32 np0005473739 nova_compute[259550]: 2025-10-07 14:04:32.800 2 DEBUG oslo_concurrency.lockutils [None req-aac7d450-c191-438d-b0e7-a708be51c0ed 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.298s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1117: 305 pgs: 305 active+clean; 183 MiB data, 305 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 2.0 MiB/s wr, 214 op/s
Oct  7 10:04:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:34Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:ad:77 10.100.0.9
Oct  7 10:04:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:34Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:ad:77 10.100.0.9
Oct  7 10:04:35 np0005473739 nova_compute[259550]: 2025-10-07 14:04:35.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1118: 305 pgs: 305 active+clean; 190 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 2.9 MiB/s wr, 273 op/s
Oct  7 10:04:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:35 np0005473739 nova_compute[259550]: 2025-10-07 14:04:35.558 2 DEBUG nova.compute.manager [None req-d1145a78-3b7f-445d-9e05-dd4bb2a1ce19 9ed39eb9f7ad458fa8a7f2195995dd29 83b0f5fde49d43c9a200759be3d68a34 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:35 np0005473739 nova_compute[259550]: 2025-10-07 14:04:35.563 2 INFO nova.compute.manager [None req-d1145a78-3b7f-445d-9e05-dd4bb2a1ce19 9ed39eb9f7ad458fa8a7f2195995dd29 83b0f5fde49d43c9a200759be3d68a34 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Retrieving diagnostics#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.403 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.404 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.404 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.404 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.405 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.406 2 INFO nova.compute.manager [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Terminating instance#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.406 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "refresh_cache-9b81bc46-3d46-47e9-85da-b3f62c6db7b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.407 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquired lock "refresh_cache-9b81bc46-3d46-47e9-85da-b3f62c6db7b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.407 2 DEBUG nova.network.neutron [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.550 2 INFO nova.compute.manager [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Rebuilding instance#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.710 2 DEBUG nova.network.neutron [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.824 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'trusted_certs' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:36 np0005473739 nova_compute[259550]: 2025-10-07 14:04:36.929 2 DEBUG nova.compute.manager [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:37 np0005473739 nova_compute[259550]: 2025-10-07 14:04:37.023 2 DEBUG nova.network.neutron [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:04:37 np0005473739 nova_compute[259550]: 2025-10-07 14:04:37.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:37 np0005473739 nova_compute[259550]: 2025-10-07 14:04:37.417 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Releasing lock "refresh_cache-9b81bc46-3d46-47e9-85da-b3f62c6db7b2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:04:37 np0005473739 nova_compute[259550]: 2025-10-07 14:04:37.418 2 DEBUG nova.compute.manager [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:04:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1119: 305 pgs: 305 active+clean; 225 MiB data, 341 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 5.2 MiB/s wr, 292 op/s
Oct  7 10:04:37 np0005473739 nova_compute[259550]: 2025-10-07 14:04:37.493 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'pci_requests' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:37 np0005473739 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Oct  7 10:04:37 np0005473739 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 14.158s CPU time.
Oct  7 10:04:37 np0005473739 systemd-machined[214580]: Machine qemu-8-instance-00000008 terminated.
Oct  7 10:04:37 np0005473739 nova_compute[259550]: 2025-10-07 14:04:37.639 2 INFO nova.virt.libvirt.driver [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Instance destroyed successfully.#033[00m
Oct  7 10:04:37 np0005473739 nova_compute[259550]: 2025-10-07 14:04:37.641 2 DEBUG nova.objects.instance [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lazy-loading 'resources' on Instance uuid 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:37 np0005473739 nova_compute[259550]: 2025-10-07 14:04:37.805 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'pci_devices' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:38 np0005473739 nova_compute[259550]: 2025-10-07 14:04:38.049 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'resources' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:38 np0005473739 nova_compute[259550]: 2025-10-07 14:04:38.347 2 INFO nova.virt.libvirt.driver [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Deleting instance files /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2_del#033[00m
Oct  7 10:04:38 np0005473739 nova_compute[259550]: 2025-10-07 14:04:38.348 2 INFO nova.virt.libvirt.driver [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Deletion of /var/lib/nova/instances/9b81bc46-3d46-47e9-85da-b3f62c6db7b2_del complete#033[00m
Oct  7 10:04:38 np0005473739 nova_compute[259550]: 2025-10-07 14:04:38.362 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'migration_context' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:38 np0005473739 nova_compute[259550]: 2025-10-07 14:04:38.795 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:04:38 np0005473739 nova_compute[259550]: 2025-10-07 14:04:38.800 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:04:39 np0005473739 nova_compute[259550]: 2025-10-07 14:04:39.203 2 INFO nova.compute.manager [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Took 1.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:04:39 np0005473739 nova_compute[259550]: 2025-10-07 14:04:39.204 2 DEBUG oslo.service.loopingcall [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:04:39 np0005473739 nova_compute[259550]: 2025-10-07 14:04:39.205 2 DEBUG nova.compute.manager [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:04:39 np0005473739 nova_compute[259550]: 2025-10-07 14:04:39.205 2 DEBUG nova.network.neutron [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:04:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1120: 305 pgs: 305 active+clean; 210 MiB data, 340 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 6.0 MiB/s wr, 242 op/s
Oct  7 10:04:39 np0005473739 nova_compute[259550]: 2025-10-07 14:04:39.568 2 DEBUG nova.network.neutron [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:04:39 np0005473739 nova_compute[259550]: 2025-10-07 14:04:39.682 2 DEBUG nova.network.neutron [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:04:39 np0005473739 nova_compute[259550]: 2025-10-07 14:04:39.945 2 INFO nova.compute.manager [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Took 0.74 seconds to deallocate network for instance.#033[00m
Oct  7 10:04:40 np0005473739 nova_compute[259550]: 2025-10-07 14:04:40.306 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:40 np0005473739 nova_compute[259550]: 2025-10-07 14:04:40.307 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:40 np0005473739 nova_compute[259550]: 2025-10-07 14:04:40.382 2 DEBUG oslo_concurrency.processutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:40 np0005473739 nova_compute[259550]: 2025-10-07 14:04:40.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:04:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2393469742' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:04:40 np0005473739 nova_compute[259550]: 2025-10-07 14:04:40.898 2 DEBUG oslo_concurrency.processutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:40 np0005473739 nova_compute[259550]: 2025-10-07 14:04:40.908 2 DEBUG nova.compute.provider_tree [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:04:41 np0005473739 nova_compute[259550]: 2025-10-07 14:04:41.028 2 DEBUG nova.scheduler.client.report [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:04:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1121: 305 pgs: 305 active+clean; 169 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.5 MiB/s wr, 242 op/s
Oct  7 10:04:41 np0005473739 nova_compute[259550]: 2025-10-07 14:04:41.550 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:41 np0005473739 nova_compute[259550]: 2025-10-07 14:04:41.696 2 INFO nova.scheduler.client.report [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Deleted allocations for instance 9b81bc46-3d46-47e9-85da-b3f62c6db7b2#033[00m
Oct  7 10:04:42 np0005473739 nova_compute[259550]: 2025-10-07 14:04:42.160 2 DEBUG oslo_concurrency.lockutils [None req-99bbd976-1a04-432c-a75e-11df333e01ba affb512bf16041e687eef2ef7709dca5 4ba32c1825e14b5c9c382bea6c3047c5 - - default default] Lock "9b81bc46-3d46-47e9-85da-b3f62c6db7b2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:42 np0005473739 nova_compute[259550]: 2025-10-07 14:04:42.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1122: 305 pgs: 305 active+clean; 169 MiB data, 322 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 238 op/s
Oct  7 10:04:45 np0005473739 podman[279341]: 2025-10-07 14:04:45.100774572 +0000 UTC m=+0.074677878 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:04:45 np0005473739 podman[279340]: 2025-10-07 14:04:45.12310879 +0000 UTC m=+0.101057704 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:04:45 np0005473739 nova_compute[259550]: 2025-10-07 14:04:45.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1123: 305 pgs: 305 active+clean; 169 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 4.3 MiB/s wr, 239 op/s
Oct  7 10:04:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:47 np0005473739 nova_compute[259550]: 2025-10-07 14:04:47.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1124: 305 pgs: 305 active+clean; 185 MiB data, 330 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 4.3 MiB/s wr, 210 op/s
Oct  7 10:04:48 np0005473739 nova_compute[259550]: 2025-10-07 14:04:48.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:48 np0005473739 nova_compute[259550]: 2025-10-07 14:04:48.854 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:04:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1125: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 508 KiB/s rd, 3.0 MiB/s wr, 142 op/s
Oct  7 10:04:50 np0005473739 nova_compute[259550]: 2025-10-07 14:04:50.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:51 np0005473739 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct  7 10:04:51 np0005473739 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000006.scope: Consumed 13.938s CPU time.
Oct  7 10:04:51 np0005473739 systemd-machined[214580]: Machine qemu-9-instance-00000006 terminated.
Oct  7 10:04:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1126: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 380 KiB/s rd, 2.2 MiB/s wr, 103 op/s
Oct  7 10:04:51 np0005473739 nova_compute[259550]: 2025-10-07 14:04:51.870 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance shutdown successfully after 13 seconds.#033[00m
Oct  7 10:04:51 np0005473739 nova_compute[259550]: 2025-10-07 14:04:51.878 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance destroyed successfully.#033[00m
Oct  7 10:04:51 np0005473739 nova_compute[259550]: 2025-10-07 14:04:51.887 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance destroyed successfully.#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.634 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting instance files /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.635 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deletion of /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del complete#033[00m
Oct  7 10:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.640 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845877.6361618, 9b81bc46-3d46-47e9-85da-b3f62c6db7b2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.641 2 INFO nova.compute.manager [-] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.699 2 DEBUG nova.compute.manager [None req-87e6f636-9a89-40f5-a821-f856b42db070 - - - - - -] [instance: 9b81bc46-3d46-47e9-85da-b3f62c6db7b2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:52.705 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:04:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:52.708 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.864 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.865 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating image(s)#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.893 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.919 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.941 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:52 np0005473739 nova_compute[259550]: 2025-10-07 14:04:52.945 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.009 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.010 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.011 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.012 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.034 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.038 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.433 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 af17d051-72b6-45f1-b829-94d3a2939519_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1127: 305 pgs: 305 active+clean; 202 MiB data, 344 MiB used, 60 GiB / 60 GiB avail; 284 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.504 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] resizing rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.632 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.632 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Ensure instance console log exists: /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.633 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.633 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.633 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.635 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.640 2 WARNING nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.650 2 DEBUG nova.virt.libvirt.host [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.651 2 DEBUG nova.virt.libvirt.host [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.653 2 DEBUG nova.virt.libvirt.host [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.654 2 DEBUG nova.virt.libvirt.host [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.654 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.654 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.655 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.655 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.655 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.656 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.656 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.656 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.656 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.656 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.657 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.657 2 DEBUG nova.virt.hardware [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.657 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'vcpu_model' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:53 np0005473739 nova_compute[259550]: 2025-10-07 14:04:53.684 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:04:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2175104374' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.136 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.163 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.167 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:04:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3346577199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.625 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.629 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  <uuid>af17d051-72b6-45f1-b829-94d3a2939519</uuid>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  <name>instance-00000006</name>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersAdmin275Test-server-2034007563</nova:name>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:04:53</nova:creationTime>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:        <nova:user uuid="0418d9872e6041138cc90a3aa74cce48">tempest-ServersAdmin275Test-1410041514-project-member</nova:user>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:        <nova:project uuid="60e63a33759f4241845fccdb5c104b64">tempest-ServersAdmin275Test-1410041514</nova:project>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <entry name="serial">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <entry name="uuid">af17d051-72b6-45f1-b829-94d3a2939519</entry>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/af17d051-72b6-45f1-b829-94d3a2939519_disk.config">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/console.log" append="off"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:04:54 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:04:54 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:04:54 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:04:54 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.762 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.762 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.763 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Using config drive#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.789 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.815 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'ec2_ids' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.860 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lazy-loading 'keypairs' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.982 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.982 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.983 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.983 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.983 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.984 2 INFO nova.compute.manager [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Terminating instance#033[00m
Oct  7 10:04:54 np0005473739 nova_compute[259550]: 2025-10-07 14:04:54.985 2 DEBUG nova.compute.manager [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:04:55 np0005473739 kernel: tap60081c2f-b8 (unregistering): left promiscuous mode
Oct  7 10:04:55 np0005473739 NetworkManager[44949]: <info>  [1759845895.0630] device (tap60081c2f-b8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:55Z|00041|binding|INFO|Releasing lport 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 from this chassis (sb_readonly=0)
Oct  7 10:04:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:55Z|00042|binding|INFO|Setting lport 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 down in Southbound
Oct  7 10:04:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:04:55Z|00043|binding|INFO|Removing iface tap60081c2f-b8 ovn-installed in OVS
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.082 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Creating config drive at /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.088 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzud2uc2s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.124 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ad:77 10.100.0.9'], port_security=['fa:16:3e:1e:ad:77 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '01404604-70e9-49ec-9047-ec42ba41afee', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '380b73085cef431383bee110ceaefb15', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd3b7853b-d94c-445c-810d-9b4dd15d78f4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.205'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8f05683d-d61c-46e7-a8b2-e5ebf47fffcc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.125 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 in datapath 384938fa-4eb0-4ec5-a6a4-bc65721ba22a unbound from our chassis#033[00m
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.126 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 384938fa-4eb0-4ec5-a6a4-bc65721ba22a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:04:55 np0005473739 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Oct  7 10:04:55 np0005473739 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 16.049s CPU time.
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.129 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c96644-389c-40c8-826f-2893361cb9b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.129 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a namespace which is not needed anymore#033[00m
Oct  7 10:04:55 np0005473739 systemd-machined[214580]: Machine qemu-7-instance-00000007 terminated.
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.223 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzud2uc2s" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.253 2 DEBUG nova.storage.rbd_utils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] rbd image af17d051-72b6-45f1-b829-94d3a2939519_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.257 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.285 2 INFO nova.virt.libvirt.driver [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Instance destroyed successfully.#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.286 2 DEBUG nova.objects.instance [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lazy-loading 'resources' on Instance uuid 01404604-70e9-49ec-9047-ec42ba41afee obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:55 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [NOTICE]   (278658) : haproxy version is 2.8.14-c23fe91
Oct  7 10:04:55 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [NOTICE]   (278658) : path to executable is /usr/sbin/haproxy
Oct  7 10:04:55 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [ALERT]    (278658) : Current worker (278660) exited with code 143 (Terminated)
Oct  7 10:04:55 np0005473739 neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a[278635]: [WARNING]  (278658) : All workers exited. Exiting... (0)
Oct  7 10:04:55 np0005473739 podman[279762]: 2025-10-07 14:04:55.293613526 +0000 UTC m=+0.105763650 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  7 10:04:55 np0005473739 systemd[1]: libpod-5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2.scope: Deactivated successfully.
Oct  7 10:04:55 np0005473739 conmon[278635]: conmon 5798779e97104bc286fe <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2.scope/container/memory.events
Oct  7 10:04:55 np0005473739 podman[279798]: 2025-10-07 14:04:55.302500273 +0000 UTC m=+0.063264783 container died 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  7 10:04:55 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2-userdata-shm.mount: Deactivated successfully.
Oct  7 10:04:55 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0ba8972aef5751b49df55a9acd3f9f51d77eb178dc3254adc380d65c0d9ec3cd-merged.mount: Deactivated successfully.
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.335 2 DEBUG nova.virt.libvirt.vif [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:04:09Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',display_name='tempest-ServersWithSpecificFlavorTestJSON-server-1945845741',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(19),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverswithspecificflavortestjson-server-1945845741',id=7,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=19,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMYfAR9mj2imemzusfPwwtddn0lEjKMvdiXZQNbuwqTJFtx1U2M3BRUSevXd6qU4D5KOSepLHIPjFK2NZ957Ri2Kv5dYObirVp8T/b/ktoKbgTEgyyASZo/0n0wfyQLhEw==',key_name='tempest-keypair-1528796341',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:04:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='380b73085cef431383bee110ceaefb15',ramdisk_id='',reservation_id='r-gkxpo37o',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686',owner_user_name='tempest-ServersWithSpecificFlavorTestJSON-1902159686-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:04:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='2d78803d42674b89b4ea28d9a2442357',uuid=01404604-70e9-49ec-9047-ec42ba41afee,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.336 2 DEBUG nova.network.os_vif_util [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converting VIF {"id": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "address": "fa:16:3e:1e:ad:77", "network": {"id": "384938fa-4eb0-4ec5-a6a4-bc65721ba22a", "bridge": "br-int", "label": "tempest-ServersWithSpecificFlavorTestJSON-383623364-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.205", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "380b73085cef431383bee110ceaefb15", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap60081c2f-b8", "ovs_interfaceid": "60081c2f-b84d-467f-85f2-c2dd4b4fb4c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.337 2 DEBUG nova.network.os_vif_util [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.337 2 DEBUG os_vif [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap60081c2f-b8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:55 np0005473739 podman[279798]: 2025-10-07 14:04:55.383480199 +0000 UTC m=+0.144244709 container cleanup 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.387 2 INFO os_vif [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:1e:ad:77,bridge_name='br-int',has_traffic_filtering=True,id=60081c2f-b84d-467f-85f2-c2dd4b4fb4c5,network=Network(384938fa-4eb0-4ec5-a6a4-bc65721ba22a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap60081c2f-b8')#033[00m
Oct  7 10:04:55 np0005473739 systemd[1]: libpod-conmon-5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2.scope: Deactivated successfully.
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1128: 305 pgs: 305 active+clean; 191 MiB data, 325 MiB used, 60 GiB / 60 GiB avail; 298 KiB/s rd, 3.7 MiB/s wr, 84 op/s
Oct  7 10:04:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:04:55 np0005473739 podman[279888]: 2025-10-07 14:04:55.507485415 +0000 UTC m=+0.094079517 container remove 5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.513 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ccd2474b-493e-4567-be4a-cb947869545b]: (4, ('Tue Oct  7 02:04:55 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a (5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2)\n5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2\nTue Oct  7 02:04:55 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a (5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2)\n5798779e97104bc286fe29cc313f3bd22952131d6af83e33998d459feec1b2e2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.518 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f9bece-2972-4da2-a08a-020f68c2c8d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.519 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap384938fa-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.537 2 DEBUG oslo_concurrency.processutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config af17d051-72b6-45f1-b829-94d3a2939519_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.280s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.538 2 INFO nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting local config drive /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519/disk.config because it was imported into RBD.#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:55 np0005473739 kernel: tap384938fa-40: left promiscuous mode
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.547 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[81a672eb-75bd-4562-91c2-1819b92a8906]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.571 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c3b6537f-ce3a-42c1-91f9-45320c9fbd47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.574 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a714dde-512c-4219-a9f5-12dc3875cfe2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.591 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13957d82-c929-4b97-aec6-63872b8cc779]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 644188, 'reachable_time': 33820, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 279954, 'error': None, 'target': 'ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:55 np0005473739 systemd[1]: run-netns-ovnmeta\x2d384938fa\x2d4eb0\x2d4ec5\x2da6a4\x2dbc65721ba22a.mount: Deactivated successfully.
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.598 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-384938fa-4eb0-4ec5-a6a4-bc65721ba22a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:04:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:04:55.599 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[da421ed4-cc5f-4747-994b-2c62127fdc64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:04:55 np0005473739 systemd-machined[214580]: New machine qemu-10-instance-00000006.
Oct  7 10:04:55 np0005473739 systemd[1]: Started Virtual Machine qemu-10-instance-00000006.
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.665 2 DEBUG nova.compute.manager [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-unplugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.665 2 DEBUG oslo_concurrency.lockutils [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.665 2 DEBUG oslo_concurrency.lockutils [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.666 2 DEBUG oslo_concurrency.lockutils [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.666 2 DEBUG nova.compute.manager [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] No waiting events found dispatching network-vif-unplugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:04:55 np0005473739 nova_compute[259550]: 2025-10-07 14:04:55.666 2 DEBUG nova.compute.manager [req-f5d281d8-8875-4364-b6c3-fc48c4b778d1 req-c1c07d87-0750-405f-8c78-c1e33961c847 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-unplugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:04:55 np0005473739 podman[280000]: 2025-10-07 14:04:55.830885364 +0000 UTC m=+0.081258784 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:04:55 np0005473739 podman[280000]: 2025-10-07 14:04:55.934368662 +0000 UTC m=+0.184742072 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.793 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for af17d051-72b6-45f1-b829-94d3a2939519 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.794 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845896.792987, af17d051-72b6-45f1-b829-94d3a2939519 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.794 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.797 2 DEBUG nova.compute.manager [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.798 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.802 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance spawned successfully.#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.803 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.827 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.831 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.845 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.845 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.846 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.847 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.848 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.848 2 DEBUG nova.virt.libvirt.driver [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.852 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.853 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845896.7967756, af17d051-72b6-45f1-b829-94d3a2939519 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.853 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Started (Lifecycle Event)#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.890 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.894 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:04:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.915 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.923 2 DEBUG nova.compute.manager [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:04:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:04:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.971 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.972 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:56 np0005473739 nova_compute[259550]: 2025-10-07 14:04:56.972 2 DEBUG nova.objects.instance [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:04:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.080 2 DEBUG oslo_concurrency.lockutils [None req-fdbb37b1-14a9-40a4-9d4d-c0207e70c842 2f3d41f8511e4a3a822057de9b07a385 ada17062e3024bd589e2784d9303ec6a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.108s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1129: 305 pgs: 305 active+clean; 169 MiB data, 301 MiB used, 60 GiB / 60 GiB avail; 321 KiB/s rd, 3.9 MiB/s wr, 122 op/s
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.489 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "af17d051-72b6-45f1-b829-94d3a2939519" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.491 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.491 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "af17d051-72b6-45f1-b829-94d3a2939519-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.492 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.492 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.494 2 INFO nova.compute.manager [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Terminating instance#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.495 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.496 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquired lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.496 2 DEBUG nova.network.neutron [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.759 2 DEBUG nova.network.neutron [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.790 2 DEBUG nova.compute.manager [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.791 2 DEBUG oslo_concurrency.lockutils [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "01404604-70e9-49ec-9047-ec42ba41afee-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.791 2 DEBUG oslo_concurrency.lockutils [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.792 2 DEBUG oslo_concurrency.lockutils [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.792 2 DEBUG nova.compute.manager [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] No waiting events found dispatching network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:04:57 np0005473739 nova_compute[259550]: 2025-10-07 14:04:57.792 2 WARNING nova.compute.manager [req-4f4eb2bc-0fba-4ca1-b015-a1cf405f3607 req-753ad797-84b0-44a8-940a-28ac0886e52f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received unexpected event network-vif-plugged-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:04:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:04:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:04:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:04:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:04:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:04:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:04:57 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1f452177-40df-430d-aeb9-dca7dc4ac672 does not exist
Oct  7 10:04:57 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 66af92ee-5ab5-407e-a6ea-4df72b4f7541 does not exist
Oct  7 10:04:57 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev aed59fd1-9dc1-48cb-bd51-564d2c2a371f does not exist
Oct  7 10:04:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:04:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:04:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:04:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:04:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:04:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:04:58 np0005473739 nova_compute[259550]: 2025-10-07 14:04:58.055 2 DEBUG nova.network.neutron [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:04:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:04:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:04:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:04:58 np0005473739 nova_compute[259550]: 2025-10-07 14:04:58.102 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Releasing lock "refresh_cache-af17d051-72b6-45f1-b829-94d3a2939519" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:04:58 np0005473739 nova_compute[259550]: 2025-10-07 14:04:58.103 2 DEBUG nova.compute.manager [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:04:58 np0005473739 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000006.scope: Deactivated successfully.
Oct  7 10:04:58 np0005473739 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d00000006.scope: Consumed 2.265s CPU time.
Oct  7 10:04:58 np0005473739 systemd-machined[214580]: Machine qemu-10-instance-00000006 terminated.
Oct  7 10:04:58 np0005473739 nova_compute[259550]: 2025-10-07 14:04:58.533 2 INFO nova.virt.libvirt.driver [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance destroyed successfully.#033[00m
Oct  7 10:04:58 np0005473739 nova_compute[259550]: 2025-10-07 14:04:58.534 2 DEBUG nova.objects.instance [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lazy-loading 'resources' on Instance uuid af17d051-72b6-45f1-b829-94d3a2939519 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:04:58 np0005473739 podman[280490]: 2025-10-07 14:04:58.721878142 +0000 UTC m=+0.032750576 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:04:58 np0005473739 podman[280490]: 2025-10-07 14:04:58.817203012 +0000 UTC m=+0.128075406 container create 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:04:58 np0005473739 systemd[1]: Started libpod-conmon-4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35.scope.
Oct  7 10:04:58 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:04:59 np0005473739 podman[280490]: 2025-10-07 14:04:59.006260848 +0000 UTC m=+0.317133302 container init 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.008 2 INFO nova.virt.libvirt.driver [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Deleting instance files /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee_del#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.010 2 INFO nova.virt.libvirt.driver [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Deletion of /var/lib/nova/instances/01404604-70e9-49ec-9047-ec42ba41afee_del complete#033[00m
Oct  7 10:04:59 np0005473739 podman[280490]: 2025-10-07 14:04:59.016726638 +0000 UTC m=+0.327599022 container start 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:04:59 np0005473739 podman[280490]: 2025-10-07 14:04:59.021315381 +0000 UTC m=+0.332187795 container attach 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 10:04:59 np0005473739 objective_blackburn[280506]: 167 167
Oct  7 10:04:59 np0005473739 systemd[1]: libpod-4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35.scope: Deactivated successfully.
Oct  7 10:04:59 np0005473739 conmon[280506]: conmon 4eb6ad457fde0965906f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35.scope/container/memory.events
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.066 2 INFO nova.compute.manager [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Took 4.08 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.067 2 DEBUG oslo.service.loopingcall [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.067 2 DEBUG nova.compute.manager [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.067 2 DEBUG nova.network.neutron [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:04:59 np0005473739 podman[280511]: 2025-10-07 14:04:59.071320159 +0000 UTC m=+0.029067679 container died 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 10:04:59 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:04:59 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:04:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-02d283d6545ba6fdfb3c28451f3c7b2438d97d76814a4ef6fbf296f2c0e639aa-merged.mount: Deactivated successfully.
Oct  7 10:04:59 np0005473739 podman[280511]: 2025-10-07 14:04:59.140248082 +0000 UTC m=+0.097995582 container remove 4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_blackburn, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:04:59 np0005473739 systemd[1]: libpod-conmon-4eb6ad457fde0965906fb799a02082e877085c7ba99525e384fa57941ced3b35.scope: Deactivated successfully.
Oct  7 10:04:59 np0005473739 podman[280527]: 2025-10-07 14:04:59.21907189 +0000 UTC m=+0.070359263 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Oct  7 10:04:59 np0005473739 podman[280555]: 2025-10-07 14:04:59.353256118 +0000 UTC m=+0.051761455 container create 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:04:59 np0005473739 systemd[1]: Started libpod-conmon-5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d.scope.
Oct  7 10:04:59 np0005473739 podman[280555]: 2025-10-07 14:04:59.327846839 +0000 UTC m=+0.026352226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.427 2 INFO nova.virt.libvirt.driver [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deleting instance files /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.429 2 INFO nova.virt.libvirt.driver [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deletion of /var/lib/nova/instances/af17d051-72b6-45f1-b829-94d3a2939519_del complete#033[00m
Oct  7 10:04:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:04:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:04:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1130: 305 pgs: 305 active+clean; 126 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.0 MiB/s wr, 151 op/s
Oct  7 10:04:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:04:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:04:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:04:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:04:59 np0005473739 podman[280555]: 2025-10-07 14:04:59.464427062 +0000 UTC m=+0.162932419 container init 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:04:59 np0005473739 podman[280555]: 2025-10-07 14:04:59.478410066 +0000 UTC m=+0.176915413 container start 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:04:59 np0005473739 podman[280555]: 2025-10-07 14:04:59.483989434 +0000 UTC m=+0.182494791 container attach 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.517 2 INFO nova.compute.manager [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Took 1.41 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.517 2 DEBUG oslo.service.loopingcall [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.518 2 DEBUG nova.compute.manager [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.518 2 DEBUG nova.network.neutron [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.719 2 DEBUG nova.network.neutron [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.738 2 DEBUG nova.network.neutron [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.758 2 INFO nova.compute.manager [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Took 0.24 seconds to deallocate network for instance.#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.840 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.841 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:04:59 np0005473739 nova_compute[259550]: 2025-10-07 14:04:59.945 2 DEBUG oslo_concurrency.processutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:00.038 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:00.039 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:00.039 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1363682272' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.377 2 DEBUG oslo_concurrency.processutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.391 2 DEBUG nova.compute.provider_tree [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.408 2 DEBUG nova.scheduler.client.report [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.429 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.463 2 INFO nova.scheduler.client.report [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Deleted allocations for instance af17d051-72b6-45f1-b829-94d3a2939519
Oct  7 10:05:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.506 2 DEBUG nova.network.neutron [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.542 2 INFO nova.compute.manager [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Took 1.47 seconds to deallocate network for instance.
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.548 2 DEBUG oslo_concurrency.lockutils [None req-49574d41-3470-42a2-825b-30abac569309 0418d9872e6041138cc90a3aa74cce48 60e63a33759f4241845fccdb5c104b64 - - default default] Lock "af17d051-72b6-45f1-b829-94d3a2939519" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.586 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.587 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:05:00 np0005473739 interesting_bohr[280572]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:05:00 np0005473739 interesting_bohr[280572]: --> relative data size: 1.0
Oct  7 10:05:00 np0005473739 interesting_bohr[280572]: --> All data devices are unavailable
Oct  7 10:05:00 np0005473739 systemd[1]: libpod-5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d.scope: Deactivated successfully.
Oct  7 10:05:00 np0005473739 systemd[1]: libpod-5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d.scope: Consumed 1.099s CPU time.
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.652 2 DEBUG oslo_concurrency.processutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:05:00 np0005473739 podman[280623]: 2025-10-07 14:05:00.681898012 +0000 UTC m=+0.028926885 container died 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 10:05:00 np0005473739 nova_compute[259550]: 2025-10-07 14:05:00.693 2 DEBUG nova.compute.manager [req-82edb5a4-ce9f-4cf7-8025-04115d2cca85 req-241661ab-2adb-46e3-a3f5-436f0d2f7554 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Received event network-vif-deleted-60081c2f-b84d-467f-85f2-c2dd4b4fb4c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:05:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b4c9389afacffe0da2c5ed228d5529119b4e15d651d268007f8281caa0fb0758-merged.mount: Deactivated successfully.
Oct  7 10:05:00 np0005473739 podman[280623]: 2025-10-07 14:05:00.760362501 +0000 UTC m=+0.107391354 container remove 5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_bohr, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 10:05:00 np0005473739 systemd[1]: libpod-conmon-5bfc4432ea1a06b84a1e90fc3e26c7ca13e902173cb029eb3524a297f4ed541d.scope: Deactivated successfully.
Oct  7 10:05:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1198301669' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:01 np0005473739 nova_compute[259550]: 2025-10-07 14:05:01.122 2 DEBUG oslo_concurrency.processutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:05:01 np0005473739 nova_compute[259550]: 2025-10-07 14:05:01.133 2 DEBUG nova.compute.provider_tree [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:05:01 np0005473739 nova_compute[259550]: 2025-10-07 14:05:01.152 2 DEBUG nova.scheduler.client.report [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:05:01 np0005473739 nova_compute[259550]: 2025-10-07 14:05:01.175 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:05:01 np0005473739 nova_compute[259550]: 2025-10-07 14:05:01.220 2 INFO nova.scheduler.client.report [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Deleted allocations for instance 01404604-70e9-49ec-9047-ec42ba41afee
Oct  7 10:05:01 np0005473739 nova_compute[259550]: 2025-10-07 14:05:01.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:05:01 np0005473739 nova_compute[259550]: 2025-10-07 14:05:01.284 2 DEBUG oslo_concurrency.lockutils [None req-e84f2ab4-82c1-48c0-98b3-aa168e4f16bb 2d78803d42674b89b4ea28d9a2442357 380b73085cef431383bee110ceaefb15 - - default default] Lock "01404604-70e9-49ec-9047-ec42ba41afee" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:05:01 np0005473739 podman[280800]: 2025-10-07 14:05:01.430618197 +0000 UTC m=+0.042416525 container create 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 10:05:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1131: 305 pgs: 305 active+clean; 69 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 162 op/s
Oct  7 10:05:01 np0005473739 systemd[1]: Started libpod-conmon-4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08.scope.
Oct  7 10:05:01 np0005473739 podman[280800]: 2025-10-07 14:05:01.412588525 +0000 UTC m=+0.024386883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:05:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:05:01 np0005473739 podman[280800]: 2025-10-07 14:05:01.528280169 +0000 UTC m=+0.140078527 container init 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 10:05:01 np0005473739 podman[280800]: 2025-10-07 14:05:01.540796413 +0000 UTC m=+0.152594741 container start 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:05:01 np0005473739 podman[280800]: 2025-10-07 14:05:01.545462058 +0000 UTC m=+0.157260386 container attach 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:05:01 np0005473739 quirky_nash[280816]: 167 167
Oct  7 10:05:01 np0005473739 systemd[1]: libpod-4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08.scope: Deactivated successfully.
Oct  7 10:05:01 np0005473739 podman[280800]: 2025-10-07 14:05:01.550322498 +0000 UTC m=+0.162120826 container died 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:05:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a855eff98aa7e2269330241313a3a31d47e15235d198e857438ea25589630ae1-merged.mount: Deactivated successfully.
Oct  7 10:05:01 np0005473739 podman[280800]: 2025-10-07 14:05:01.598331533 +0000 UTC m=+0.210129861 container remove 4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_nash, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 10:05:01 np0005473739 systemd[1]: libpod-conmon-4acddc1c7dfe8751c6dbd97f71d7b25edc90c4e019d470933a4de9180850cc08.scope: Deactivated successfully.
Oct  7 10:05:01 np0005473739 podman[280840]: 2025-10-07 14:05:01.785227001 +0000 UTC m=+0.042954231 container create 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:05:01 np0005473739 systemd[1]: Started libpod-conmon-8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629.scope.
Oct  7 10:05:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:05:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d9772ff57fe389986589b837a7e5cfccd001f41a74f4fc17d39fe3eb467245/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:05:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d9772ff57fe389986589b837a7e5cfccd001f41a74f4fc17d39fe3eb467245/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:05:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d9772ff57fe389986589b837a7e5cfccd001f41a74f4fc17d39fe3eb467245/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:05:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/58d9772ff57fe389986589b837a7e5cfccd001f41a74f4fc17d39fe3eb467245/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:05:01 np0005473739 podman[280840]: 2025-10-07 14:05:01.767108676 +0000 UTC m=+0.024835926 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:05:01 np0005473739 podman[280840]: 2025-10-07 14:05:01.87234639 +0000 UTC m=+0.130073670 container init 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 10:05:01 np0005473739 podman[280840]: 2025-10-07 14:05:01.882080821 +0000 UTC m=+0.139808051 container start 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:05:01 np0005473739 podman[280840]: 2025-10-07 14:05:01.886237942 +0000 UTC m=+0.143965202 container attach 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 10:05:02 np0005473739 funny_raman[280856]: {
Oct  7 10:05:02 np0005473739 funny_raman[280856]:    "0": [
Oct  7 10:05:02 np0005473739 funny_raman[280856]:        {
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "devices": [
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "/dev/loop3"
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            ],
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_name": "ceph_lv0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_size": "21470642176",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "name": "ceph_lv0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "tags": {
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.cluster_name": "ceph",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.crush_device_class": "",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.encrypted": "0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.osd_id": "0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.type": "block",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.vdo": "0"
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            },
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "type": "block",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "vg_name": "ceph_vg0"
Oct  7 10:05:02 np0005473739 funny_raman[280856]:        }
Oct  7 10:05:02 np0005473739 funny_raman[280856]:    ],
Oct  7 10:05:02 np0005473739 funny_raman[280856]:    "1": [
Oct  7 10:05:02 np0005473739 funny_raman[280856]:        {
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "devices": [
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "/dev/loop4"
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            ],
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_name": "ceph_lv1",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_size": "21470642176",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "name": "ceph_lv1",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "tags": {
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.cluster_name": "ceph",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.crush_device_class": "",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.encrypted": "0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.osd_id": "1",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.type": "block",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.vdo": "0"
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            },
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "type": "block",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "vg_name": "ceph_vg1"
Oct  7 10:05:02 np0005473739 funny_raman[280856]:        }
Oct  7 10:05:02 np0005473739 funny_raman[280856]:    ],
Oct  7 10:05:02 np0005473739 funny_raman[280856]:    "2": [
Oct  7 10:05:02 np0005473739 funny_raman[280856]:        {
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "devices": [
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "/dev/loop5"
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            ],
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_name": "ceph_lv2",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_size": "21470642176",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "name": "ceph_lv2",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "tags": {
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.cluster_name": "ceph",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.crush_device_class": "",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.encrypted": "0",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.osd_id": "2",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.type": "block",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:                "ceph.vdo": "0"
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            },
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "type": "block",
Oct  7 10:05:02 np0005473739 funny_raman[280856]:            "vg_name": "ceph_vg2"
Oct  7 10:05:02 np0005473739 funny_raman[280856]:        }
Oct  7 10:05:02 np0005473739 funny_raman[280856]:    ]
Oct  7 10:05:02 np0005473739 funny_raman[280856]: }
Oct  7 10:05:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:02.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:05:02 np0005473739 systemd[1]: libpod-8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629.scope: Deactivated successfully.
Oct  7 10:05:02 np0005473739 podman[280840]: 2025-10-07 14:05:02.749759107 +0000 UTC m=+1.007486337 container died 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 10:05:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-58d9772ff57fe389986589b837a7e5cfccd001f41a74f4fc17d39fe3eb467245-merged.mount: Deactivated successfully.
Oct  7 10:05:02 np0005473739 podman[280840]: 2025-10-07 14:05:02.807384278 +0000 UTC m=+1.065111508 container remove 8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_raman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 10:05:02 np0005473739 systemd[1]: libpod-conmon-8cc3c007eeccf1f9cade6b4492023b8a1de653abf7381fa7b90d3ce8b59e8629.scope: Deactivated successfully.
Oct  7 10:05:03 np0005473739 podman[281017]: 2025-10-07 14:05:03.448369371 +0000 UTC m=+0.042415746 container create 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 10:05:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1132: 305 pgs: 305 active+clean; 69 MiB data, 270 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 160 op/s
Oct  7 10:05:03 np0005473739 systemd[1]: Started libpod-conmon-4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9.scope.
Oct  7 10:05:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:05:03 np0005473739 podman[281017]: 2025-10-07 14:05:03.4326187 +0000 UTC m=+0.026665095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:05:03 np0005473739 podman[281017]: 2025-10-07 14:05:03.545119798 +0000 UTC m=+0.139166193 container init 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:05:03 np0005473739 podman[281017]: 2025-10-07 14:05:03.553825041 +0000 UTC m=+0.147871416 container start 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 10:05:03 np0005473739 epic_easley[281033]: 167 167
Oct  7 10:05:03 np0005473739 systemd[1]: libpod-4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9.scope: Deactivated successfully.
Oct  7 10:05:03 np0005473739 podman[281017]: 2025-10-07 14:05:03.561560438 +0000 UTC m=+0.155606843 container attach 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:05:03 np0005473739 conmon[281033]: conmon 4d9d25869691aa4dea62 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9.scope/container/memory.events
Oct  7 10:05:03 np0005473739 podman[281017]: 2025-10-07 14:05:03.563296164 +0000 UTC m=+0.157342539 container died 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:05:03 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7d0bb181dbc0d38f7e781fec98b37f27361f7059d806d1facab445c37d3682e0-merged.mount: Deactivated successfully.
Oct  7 10:05:03 np0005473739 podman[281017]: 2025-10-07 14:05:03.600095068 +0000 UTC m=+0.194141443 container remove 4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:05:03 np0005473739 systemd[1]: libpod-conmon-4d9d25869691aa4dea62e1ac9fde0251b0be7bde4b288c18a278def6444b9be9.scope: Deactivated successfully.
Oct  7 10:05:03 np0005473739 podman[281058]: 2025-10-07 14:05:03.753887151 +0000 UTC m=+0.038494350 container create cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:05:03 np0005473739 systemd[1]: Started libpod-conmon-cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68.scope.
Oct  7 10:05:03 np0005473739 podman[281058]: 2025-10-07 14:05:03.739206529 +0000 UTC m=+0.023813758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:05:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:05:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc5e180235d69a1b1d79a1336a94c312ba5991c2aa38a32247d7b86011e337/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:05:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc5e180235d69a1b1d79a1336a94c312ba5991c2aa38a32247d7b86011e337/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:05:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc5e180235d69a1b1d79a1336a94c312ba5991c2aa38a32247d7b86011e337/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:05:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5cc5e180235d69a1b1d79a1336a94c312ba5991c2aa38a32247d7b86011e337/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:05:03 np0005473739 podman[281058]: 2025-10-07 14:05:03.871275651 +0000 UTC m=+0.155882940 container init cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 10:05:03 np0005473739 podman[281058]: 2025-10-07 14:05:03.883660172 +0000 UTC m=+0.168267401 container start cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:05:03 np0005473739 podman[281058]: 2025-10-07 14:05:03.88882411 +0000 UTC m=+0.173431309 container attach cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:05:04 np0005473739 gallant_keller[281075]: {
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "osd_id": 2,
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "type": "bluestore"
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:    },
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "osd_id": 1,
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "type": "bluestore"
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:    },
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "osd_id": 0,
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:        "type": "bluestore"
Oct  7 10:05:04 np0005473739 gallant_keller[281075]:    }
Oct  7 10:05:04 np0005473739 gallant_keller[281075]: }
Oct  7 10:05:05 np0005473739 systemd[1]: libpod-cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68.scope: Deactivated successfully.
Oct  7 10:05:05 np0005473739 podman[281058]: 2025-10-07 14:05:05.020043244 +0000 UTC m=+1.304650443 container died cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:05:05 np0005473739 systemd[1]: libpod-cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68.scope: Consumed 1.141s CPU time.
Oct  7 10:05:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e5cc5e180235d69a1b1d79a1336a94c312ba5991c2aa38a32247d7b86011e337-merged.mount: Deactivated successfully.
Oct  7 10:05:05 np0005473739 podman[281058]: 2025-10-07 14:05:05.086840681 +0000 UTC m=+1.371447890 container remove cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_keller, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 10:05:05 np0005473739 systemd[1]: libpod-conmon-cc6be814c544b78caa94fb441f559112b411ba10670573809cf4b7b77bcc7c68.scope: Deactivated successfully.
Oct  7 10:05:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:05:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:05:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:05:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:05:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 5cc89144-b655-4080-88f5-8c5adac06ce2 does not exist
Oct  7 10:05:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 250f5054-d262-4047-9059-5905510bd2c5 does not exist
Oct  7 10:05:05 np0005473739 nova_compute[259550]: 2025-10-07 14:05:05.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:05 np0005473739 nova_compute[259550]: 2025-10-07 14:05:05.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1133: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 174 op/s
Oct  7 10:05:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:05 np0005473739 nova_compute[259550]: 2025-10-07 14:05:05.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:05 np0005473739 nova_compute[259550]: 2025-10-07 14:05:05.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:05:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:05:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1134: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 230 KiB/s wr, 153 op/s
Oct  7 10:05:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1135: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 15 KiB/s wr, 114 op/s
Oct  7 10:05:09 np0005473739 nova_compute[259550]: 2025-10-07 14:05:09.873 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:09 np0005473739 nova_compute[259550]: 2025-10-07 14:05:09.874 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:09 np0005473739 nova_compute[259550]: 2025-10-07 14:05:09.892 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:05:09 np0005473739 nova_compute[259550]: 2025-10-07 14:05:09.956 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:09 np0005473739 nova_compute[259550]: 2025-10-07 14:05:09.957 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:09 np0005473739 nova_compute[259550]: 2025-10-07 14:05:09.970 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:05:09 np0005473739 nova_compute[259550]: 2025-10-07 14:05:09.970 2 INFO nova.compute.claims [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.052 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.220 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845895.219699, 01404604-70e9-49ec-9047-ec42ba41afee => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.222 2 INFO nova.compute.manager [-] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.244 2 DEBUG nova.compute.manager [None req-f75758d6-e16a-4610-89c6-534235d3cddd - - - - - -] [instance: 01404604-70e9-49ec-9047-ec42ba41afee] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/617181251' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.515 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.521 2 DEBUG nova.compute.provider_tree [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.540 2 DEBUG nova.scheduler.client.report [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.560 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.603s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.561 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.610 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.611 2 DEBUG nova.network.neutron [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.635 2 INFO nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.655 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.745 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.746 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.747 2 INFO nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Creating image(s)#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.770 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.794 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.817 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.821 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.885 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.887 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.888 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.888 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.915 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:10 np0005473739 nova_compute[259550]: 2025-10-07 14:05:10.920 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.064 2 DEBUG nova.network.neutron [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.065 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.289 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.345 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] resizing rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:05:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1136: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 269 KiB/s rd, 1.5 KiB/s wr, 54 op/s
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.474 2 DEBUG nova.objects.instance [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lazy-loading 'migration_context' on Instance uuid dbb074a6-4040-4bb9-8c58-ee07f164e2ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.588 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.589 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Ensure instance console log exists: /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.590 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.590 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.590 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.592 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.597 2 WARNING nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.606 2 DEBUG nova.virt.libvirt.host [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.607 2 DEBUG nova.virt.libvirt.host [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.610 2 DEBUG nova.virt.libvirt.host [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.611 2 DEBUG nova.virt.libvirt.host [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.611 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.611 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.612 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.612 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.612 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.613 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.613 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.613 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.613 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.613 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.614 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.614 2 DEBUG nova.virt.hardware [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:05:11 np0005473739 nova_compute[259550]: 2025-10-07 14:05:11.617 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3439828980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:12 np0005473739 nova_compute[259550]: 2025-10-07 14:05:12.081 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:12 np0005473739 nova_compute[259550]: 2025-10-07 14:05:12.110 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:12 np0005473739 nova_compute[259550]: 2025-10-07 14:05:12.115 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1319367602' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:12 np0005473739 nova_compute[259550]: 2025-10-07 14:05:12.566 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:12 np0005473739 nova_compute[259550]: 2025-10-07 14:05:12.569 2 DEBUG nova.objects.instance [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lazy-loading 'pci_devices' on Instance uuid dbb074a6-4040-4bb9-8c58-ee07f164e2ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:12 np0005473739 nova_compute[259550]: 2025-10-07 14:05:12.620 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  <uuid>dbb074a6-4040-4bb9-8c58-ee07f164e2ec</uuid>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  <name>instance-00000009</name>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <nova:name>tempest-TenantUsagesTestJSON-server-1931796297</nova:name>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:05:11</nova:creationTime>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:        <nova:user uuid="554cedcefe7a4a1b8d0977fcf096d3fb">tempest-TenantUsagesTestJSON-632087855-project-member</nova:user>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:        <nova:project uuid="17c7dbcd6fa548a9b31ecf0706a8b974">tempest-TenantUsagesTestJSON-632087855</nova:project>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <entry name="serial">dbb074a6-4040-4bb9-8c58-ee07f164e2ec</entry>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <entry name="uuid">dbb074a6-4040-4bb9-8c58-ee07f164e2ec</entry>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/console.log" append="off"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:05:12 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:05:12 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:05:12 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:05:12 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:05:12 np0005473739 nova_compute[259550]: 2025-10-07 14:05:12.729 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:12 np0005473739 nova_compute[259550]: 2025-10-07 14:05:12.729 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:12 np0005473739 nova_compute[259550]: 2025-10-07 14:05:12.730 2 INFO nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Using config drive#033[00m
Oct  7 10:05:12 np0005473739 nova_compute[259550]: 2025-10-07 14:05:12.751 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:13 np0005473739 nova_compute[259550]: 2025-10-07 14:05:13.035 2 INFO nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Creating config drive at /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config#033[00m
Oct  7 10:05:13 np0005473739 nova_compute[259550]: 2025-10-07 14:05:13.041 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprh4ms4xi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:13 np0005473739 nova_compute[259550]: 2025-10-07 14:05:13.176 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprh4ms4xi" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:13 np0005473739 nova_compute[259550]: 2025-10-07 14:05:13.208 2 DEBUG nova.storage.rbd_utils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] rbd image dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:13 np0005473739 nova_compute[259550]: 2025-10-07 14:05:13.213 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:13 np0005473739 nova_compute[259550]: 2025-10-07 14:05:13.379 2 DEBUG oslo_concurrency.processutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config dbb074a6-4040-4bb9-8c58-ee07f164e2ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:13 np0005473739 nova_compute[259550]: 2025-10-07 14:05:13.380 2 INFO nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Deleting local config drive /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec/disk.config because it was imported into RBD.#033[00m
Oct  7 10:05:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1137: 305 pgs: 305 active+clean; 41 MiB data, 251 MiB used, 60 GiB / 60 GiB avail; 9.6 KiB/s rd, 852 B/s wr, 13 op/s
Oct  7 10:05:13 np0005473739 systemd-machined[214580]: New machine qemu-11-instance-00000009.
Oct  7 10:05:13 np0005473739 systemd[1]: Started Virtual Machine qemu-11-instance-00000009.
Oct  7 10:05:13 np0005473739 nova_compute[259550]: 2025-10-07 14:05:13.531 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845898.5292194, af17d051-72b6-45f1-b829-94d3a2939519 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:13 np0005473739 nova_compute[259550]: 2025-10-07 14:05:13.531 2 INFO nova.compute.manager [-] [instance: af17d051-72b6-45f1-b829-94d3a2939519] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:05:13 np0005473739 nova_compute[259550]: 2025-10-07 14:05:13.581 2 DEBUG nova.compute.manager [None req-951ac14b-abb0-4861-92dd-8cda14b2f446 - - - - - -] [instance: af17d051-72b6-45f1-b829-94d3a2939519] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.792 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845914.7923188, dbb074a6-4040-4bb9-8c58-ee07f164e2ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.793 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.795 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.796 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.799 2 INFO nova.virt.libvirt.driver [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance spawned successfully.#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.799 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.837 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.842 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.899 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.901 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.902 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.902 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.903 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.904 2 DEBUG nova.virt.libvirt.driver [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.970 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.970 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845914.7933679, dbb074a6-4040-4bb9-8c58-ee07f164e2ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:14 np0005473739 nova_compute[259550]: 2025-10-07 14:05:14.971 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] VM Started (Lifecycle Event)#033[00m
Oct  7 10:05:15 np0005473739 nova_compute[259550]: 2025-10-07 14:05:15.197 2 INFO nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Took 4.45 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:05:15 np0005473739 nova_compute[259550]: 2025-10-07 14:05:15.198 2 DEBUG nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:15 np0005473739 nova_compute[259550]: 2025-10-07 14:05:15.209 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:15 np0005473739 nova_compute[259550]: 2025-10-07 14:05:15.215 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:05:15 np0005473739 nova_compute[259550]: 2025-10-07 14:05:15.284 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:05:15 np0005473739 nova_compute[259550]: 2025-10-07 14:05:15.304 2 INFO nova.compute.manager [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Took 5.37 seconds to build instance.#033[00m
Oct  7 10:05:15 np0005473739 nova_compute[259550]: 2025-10-07 14:05:15.337 2 DEBUG oslo_concurrency.lockutils [None req-dc468fea-35db-44d8-823d-f8e85a6748d7 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:15 np0005473739 nova_compute[259550]: 2025-10-07 14:05:15.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:15 np0005473739 nova_compute[259550]: 2025-10-07 14:05:15.426 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1138: 305 pgs: 305 active+clean; 69 MiB data, 266 MiB used, 60 GiB / 60 GiB avail; 22 KiB/s rd, 1.2 MiB/s wr, 32 op/s
Oct  7 10:05:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:16 np0005473739 podman[281536]: 2025-10-07 14:05:16.085839894 +0000 UTC m=+0.063004326 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:05:16 np0005473739 podman[281535]: 2025-10-07 14:05:16.087663992 +0000 UTC m=+0.065302228 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 10:05:16 np0005473739 nova_compute[259550]: 2025-10-07 14:05:16.716 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:16 np0005473739 nova_compute[259550]: 2025-10-07 14:05:16.717 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:16 np0005473739 nova_compute[259550]: 2025-10-07 14:05:16.717 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:16 np0005473739 nova_compute[259550]: 2025-10-07 14:05:16.717 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:16 np0005473739 nova_compute[259550]: 2025-10-07 14:05:16.717 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:16 np0005473739 nova_compute[259550]: 2025-10-07 14:05:16.718 2 INFO nova.compute.manager [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Terminating instance#033[00m
Oct  7 10:05:16 np0005473739 nova_compute[259550]: 2025-10-07 14:05:16.719 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "refresh_cache-dbb074a6-4040-4bb9-8c58-ee07f164e2ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:16 np0005473739 nova_compute[259550]: 2025-10-07 14:05:16.719 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquired lock "refresh_cache-dbb074a6-4040-4bb9-8c58-ee07f164e2ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:16 np0005473739 nova_compute[259550]: 2025-10-07 14:05:16.719 2 DEBUG nova.network.neutron [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:05:16 np0005473739 nova_compute[259550]: 2025-10-07 14:05:16.920 2 DEBUG nova.network.neutron [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.296 2 DEBUG nova.network.neutron [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.339 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Releasing lock "refresh_cache-dbb074a6-4040-4bb9-8c58-ee07f164e2ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.340 2 DEBUG nova.compute.manager [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:05:17 np0005473739 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000009.scope: Deactivated successfully.
Oct  7 10:05:17 np0005473739 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000009.scope: Consumed 3.901s CPU time.
Oct  7 10:05:17 np0005473739 systemd-machined[214580]: Machine qemu-11-instance-00000009 terminated.
Oct  7 10:05:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1139: 305 pgs: 305 active+clean; 88 MiB data, 272 MiB used, 60 GiB / 60 GiB avail; 226 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.564 2 INFO nova.virt.libvirt.driver [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance destroyed successfully.#033[00m
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.564 2 DEBUG nova.objects.instance [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lazy-loading 'resources' on Instance uuid dbb074a6-4040-4bb9-8c58-ee07f164e2ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.588 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.589 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.727 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.885 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.886 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.895 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.896 2 INFO nova.compute.claims [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.988 2 INFO nova.virt.libvirt.driver [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Deleting instance files /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec_del
Oct  7 10:05:17 np0005473739 nova_compute[259550]: 2025-10-07 14:05:17.989 2 INFO nova.virt.libvirt.driver [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Deletion of /var/lib/nova/instances/dbb074a6-4040-4bb9-8c58-ee07f164e2ec_del complete
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.123 2 INFO nova.compute.manager [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.124 2 DEBUG oslo.service.loopingcall [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.124 2 DEBUG nova.compute.manager [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.124 2 DEBUG nova.network.neutron [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.186 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.270 2 DEBUG nova.network.neutron [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.327 2 DEBUG nova.network.neutron [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.379 2 INFO nova.compute.manager [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Took 0.25 seconds to deallocate network for instance.
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.414 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:05:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2941124990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.630 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.637 2 DEBUG nova.compute.provider_tree [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.659 2 DEBUG nova.scheduler.client.report [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.692 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.693 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.696 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.733 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.734 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.753 2 INFO nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.773 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.782 2 DEBUG oslo_concurrency.processutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.861 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.865 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.866 2 INFO nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating image(s)
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.893 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.918 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.947 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.952 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:05:18 np0005473739 nova_compute[259550]: 2025-10-07 14:05:18.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.024 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.026 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.026 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.027 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.051 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.056 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.080 2 DEBUG nova.policy [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f06dda9346a24fb094ad9fe51664cc48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:05:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3265124334' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.326 2 DEBUG oslo_concurrency.processutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.334 2 DEBUG nova.compute.provider_tree [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.364 2 DEBUG nova.scheduler.client.report [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.373 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.318s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.401 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.444 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:05:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1140: 305 pgs: 305 active+clean; 60 MiB data, 271 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.475 2 INFO nova.scheduler.client.report [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Deleted allocations for instance dbb074a6-4040-4bb9-8c58-ee07f164e2ec
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.551 2 DEBUG nova.objects.instance [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.557 2 DEBUG oslo_concurrency.lockutils [None req-47a2f33c-1891-4137-87f7-a66ab2a06e5a 554cedcefe7a4a1b8d0977fcf096d3fb 17c7dbcd6fa548a9b31ecf0706a8b974 - - default default] Lock "dbb074a6-4040-4bb9-8c58-ee07f164e2ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.568 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.569 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Ensure instance console log exists: /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.569 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.569 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.570 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.627 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.627 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.658 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.842 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.844 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.852 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.853 2 INFO nova.compute.claims [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:05:19 np0005473739 nova_compute[259550]: 2025-10-07 14:05:19.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.001 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.331 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Successfully created port: 9b25db0b-246e-456c-82d7-cf361c57f9c5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:05:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/459009998' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.472 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.479 2 DEBUG nova.compute.provider_tree [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:05:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.495 2 DEBUG nova.scheduler.client.report [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.526 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.682s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.527 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.567 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.567 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.582 2 INFO nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.599 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.752 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.755 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.756 2 INFO nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Creating image(s)
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.779 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.805 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.829 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.833 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.903 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.904 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.905 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.905 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.930 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:20 np0005473739 nova_compute[259550]: 2025-10-07 14:05:20.934 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.015 2 DEBUG nova.policy [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f06dda9346a24fb094ad9fe51664cc48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.272 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.345 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:05:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1141: 305 pgs: 305 active+clean; 56 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 128 op/s
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.496 2 DEBUG nova.objects.instance [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.556 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.557 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Ensure instance console log exists: /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.558 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.559 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.560 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.571 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Successfully created port: 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.686 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Successfully updated port: 9b25db0b-246e-456c-82d7-cf361c57f9c5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.833 2 DEBUG nova.compute.manager [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-changed-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.834 2 DEBUG nova.compute.manager [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Refreshing instance network info cache due to event network-changed-9b25db0b-246e-456c-82d7-cf361c57f9c5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.834 2 DEBUG oslo_concurrency.lockutils [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.834 2 DEBUG oslo_concurrency.lockutils [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.834 2 DEBUG nova.network.neutron [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Refreshing network info cache for port 9b25db0b-246e-456c-82d7-cf361c57f9c5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.846 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:21 np0005473739 nova_compute[259550]: 2025-10-07 14:05:21.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:05:22 np0005473739 nova_compute[259550]: 2025-10-07 14:05:22.131 2 DEBUG nova.network.neutron [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:05:22 np0005473739 nova_compute[259550]: 2025-10-07 14:05:22.563 2 DEBUG nova.network.neutron [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:22 np0005473739 nova_compute[259550]: 2025-10-07 14:05:22.607 2 DEBUG oslo_concurrency.lockutils [req-a295e9c9-701a-4eb4-8e4a-32e0539c1dcf req-47fbe28e-5489-4a6d-bbcd-b0f9bef3aeab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:05:22
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', '.rgw.root', 'images', 'vms', 'default.rgw.meta', 'volumes', 'backups', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct  7 10:05:22 np0005473739 nova_compute[259550]: 2025-10-07 14:05:22.609 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquired lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:05:22 np0005473739 nova_compute[259550]: 2025-10-07 14:05:22.609 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:05:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:05:22 np0005473739 nova_compute[259550]: 2025-10-07 14:05:22.882 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:05:22 np0005473739 nova_compute[259550]: 2025-10-07 14:05:22.929 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Successfully updated port: 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:05:22 np0005473739 nova_compute[259550]: 2025-10-07 14:05:22.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.017 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.017 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquired lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.018 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.023 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.024 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.056 2 DEBUG nova.compute.manager [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Received event network-changed-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.056 2 DEBUG nova.compute.manager [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Refreshing instance network info cache due to event network-changed-3486260f-fd35-48fb-a925-cbe6f4a1a9f5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.057 2 DEBUG oslo_concurrency.lockutils [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.341 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:05:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1142: 305 pgs: 305 active+clean; 56 MiB data, 273 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.5 MiB/s wr, 128 op/s
Oct  7 10:05:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1357035517' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.500 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.681 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.683 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4688MB free_disk=59.97964096069336GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.683 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.683 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.794 2 DEBUG nova.network.neutron [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Updating instance_info_cache with network_info: [{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.819 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance ddf09c33-d956-404b-a5d8-44a3727f9a3b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.819 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.820 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.820 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.873 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.897 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Releasing lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.897 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance network_info: |[{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.900 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start _get_guest_xml network_info=[{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.904 2 WARNING nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.909 2 DEBUG nova.virt.libvirt.host [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.910 2 DEBUG nova.virt.libvirt.host [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.913 2 DEBUG nova.virt.libvirt.host [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.913 2 DEBUG nova.virt.libvirt.host [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.913 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.914 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.914 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.914 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.914 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.915 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.916 2 DEBUG nova.virt.hardware [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:05:23 np0005473739 nova_compute[259550]: 2025-10-07 14:05:23.919 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3735479179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.326 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.333 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:05:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2478153480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.363 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.385 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.390 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.413 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.463 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.463 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.803 2 DEBUG nova.network.neutron [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Updating instance_info_cache with network_info: [{"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3202289319' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.827 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.828 2 DEBUG nova.virt.libvirt.vif [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908
900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:18Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.829 2 DEBUG nova.network.os_vif_util [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.829 2 DEBUG nova.network.os_vif_util [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.830 2 DEBUG nova.objects.instance [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.832 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Releasing lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.833 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Instance network_info: |[{"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.833 2 DEBUG oslo_concurrency.lockutils [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.833 2 DEBUG nova.network.neutron [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Refreshing network info cache for port 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.836 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Start _get_guest_xml network_info=[{"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.840 2 WARNING nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.844 2 DEBUG nova.virt.libvirt.host [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.845 2 DEBUG nova.virt.libvirt.host [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.848 2 DEBUG nova.virt.libvirt.host [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.848 2 DEBUG nova.virt.libvirt.host [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.848 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.848 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.849 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.849 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.849 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.849 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.850 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.850 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.850 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.850 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.850 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.851 2 DEBUG nova.virt.hardware [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.854 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.884 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  <uuid>ddf09c33-d956-404b-a5d8-44a3727f9a3b</uuid>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  <name>instance-0000000a</name>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersAdminTestJSON-server-1321212972</nova:name>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:05:23</nova:creationTime>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <nova:port uuid="9b25db0b-246e-456c-82d7-cf361c57f9c5">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <entry name="serial">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <entry name="uuid">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:6c:03:d4"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <target dev="tap9b25db0b-24"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log" append="off"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:05:24 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:05:24 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:05:24 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:05:24 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.886 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Preparing to wait for external event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.887 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.887 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.888 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.889 2 DEBUG nova.virt.libvirt.vif [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:18Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.889 2 DEBUG nova.network.os_vif_util [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.890 2 DEBUG nova.network.os_vif_util [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.890 2 DEBUG os_vif [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.892 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.897 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b25db0b-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.898 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b25db0b-24, col_values=(('external_ids', {'iface-id': '9b25db0b-246e-456c-82d7-cf361c57f9c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:03:d4', 'vm-uuid': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:24 np0005473739 NetworkManager[44949]: <info>  [1759845924.9017] manager: (tap9b25db0b-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.903 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:24 np0005473739 nova_compute[259550]: 2025-10-07 14:05:24.911 2 INFO os_vif [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.094 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.095 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.095 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:6c:03:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.096 2 INFO nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Using config drive#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.127 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1485311075' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.305 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.326 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.331 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1143: 305 pgs: 305 active+clean; 112 MiB data, 286 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.7 MiB/s wr, 179 op/s
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.465 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:05:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.749 2 INFO nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating config drive at /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.755 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7o0wlq1t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2613226568' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.785 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.787 2 DEBUG nova.virt.libvirt.vif [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-33708594',display_name='tempest-ServersAdminTestJSON-server-33708594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-33708594',id=11,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-346ygca7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:20Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=7322f2d1-885e-4e41-8a96-e90d4ddc6c38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.788 2 DEBUG nova.network.os_vif_util [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.789 2 DEBUG nova.network.os_vif_util [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.790 2 DEBUG nova.objects.instance [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.807 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.808 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.812 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  <uuid>7322f2d1-885e-4e41-8a96-e90d4ddc6c38</uuid>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  <name>instance-0000000b</name>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersAdminTestJSON-server-33708594</nova:name>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:05:24</nova:creationTime>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <nova:port uuid="3486260f-fd35-48fb-a925-cbe6f4a1a9f5">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <entry name="serial">7322f2d1-885e-4e41-8a96-e90d4ddc6c38</entry>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <entry name="uuid">7322f2d1-885e-4e41-8a96-e90d4ddc6c38</entry>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:68:5d:93"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <target dev="tap3486260f-fd"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/console.log" append="off"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:05:25 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:05:25 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:05:25 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:05:25 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.813 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Preparing to wait for external event network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.813 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.814 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.814 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.815 2 DEBUG nova.virt.libvirt.vif [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-33708594',display_name='tempest-ServersAdminTestJSON-server-33708594',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-33708594',id=11,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-346ygca7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:20Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=7322f2d1-885e-4e41-8a96-e90d4ddc6c38,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.815 2 DEBUG nova.network.os_vif_util [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.815 2 DEBUG nova.network.os_vif_util [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.816 2 DEBUG os_vif [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.816 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.817 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.820 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3486260f-fd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.821 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3486260f-fd, col_values=(('external_ids', {'iface-id': '3486260f-fd35-48fb-a925-cbe6f4a1a9f5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:68:5d:93', 'vm-uuid': '7322f2d1-885e-4e41-8a96-e90d4ddc6c38'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:25 np0005473739 NetworkManager[44949]: <info>  [1759845925.8237] manager: (tap3486260f-fd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.823 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.830 2 INFO os_vif [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd')#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.890 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.890 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.896 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.897 2 INFO nova.compute.claims [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.899 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7o0wlq1t" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.923 2 DEBUG nova.storage.rbd_utils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.928 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.963 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.964 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.964 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:68:5d:93, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.965 2 INFO nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Using config drive#033[00m
Oct  7 10:05:25 np0005473739 nova_compute[259550]: 2025-10-07 14:05:25.993 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.007 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.048 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.048 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.098 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:26 np0005473739 podman[282229]: 2025-10-07 14:05:26.127163256 +0000 UTC m=+0.116730852 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.127 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.128 2 DEBUG oslo_concurrency.processutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.200s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.129 2 INFO nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting local config drive /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config because it was imported into RBD.#033[00m
Oct  7 10:05:26 np0005473739 kernel: tap9b25db0b-24: entered promiscuous mode
Oct  7 10:05:26 np0005473739 NetworkManager[44949]: <info>  [1759845926.2062] manager: (tap9b25db0b-24): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Oct  7 10:05:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:26Z|00044|binding|INFO|Claiming lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 for this chassis.
Oct  7 10:05:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:26Z|00045|binding|INFO|9b25db0b-246e-456c-82d7-cf361c57f9c5: Claiming fa:16:3e:6c:03:d4 10.100.0.7
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.226 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.227 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.228 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:05:26 np0005473739 systemd-machined[214580]: New machine qemu-12-instance-0000000a.
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.245 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a37e379-be09-4d3e-9793-6945fb9186e7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.246 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1eabd9ee-61 in ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.248 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1eabd9ee-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.248 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0442ef50-24f9-4fc8-ba4f-15dd74d314fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.250 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c36a5e05-6460-486a-a033-cf7b79f7c401]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 systemd[1]: Started Virtual Machine qemu-12-instance-0000000a.
Oct  7 10:05:26 np0005473739 systemd-udevd[282301]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.265 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[0deb0828-9097-49db-96e1-f76dfeac5bdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 NetworkManager[44949]: <info>  [1759845926.2761] device (tap9b25db0b-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:05:26 np0005473739 NetworkManager[44949]: <info>  [1759845926.2786] device (tap9b25db0b-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.297 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30ab7615-a867-4d69-a597-c01972bed74c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:26Z|00046|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 ovn-installed in OVS
Oct  7 10:05:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:26Z|00047|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 up in Southbound
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.339 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[62533f46-c84e-42df-a8ac-4cc5d237cf91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.345 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9962620f-d531-4e52-ae87-92fdcdbfedda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 NetworkManager[44949]: <info>  [1759845926.3473] manager: (tap1eabd9ee-60): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.385 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6cad1063-b938-4037-a54a-7121c3c6bb81]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.388 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8dc1c40c-18d1-4eeb-bf77-f9edc80a68f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 NetworkManager[44949]: <info>  [1759845926.4173] device (tap1eabd9ee-60): carrier: link connected
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.424 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2c582eb4-7738-4e3e-adf3-8480ca458bbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.447 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[00e4b3b2-e5c9-4d18-bd6d-3abe08997af9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282340, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.474 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d6d98c75-627e-4c91-992a-c421000cba34]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2e:d4d3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 650998, 'tstamp': 650998}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282342, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.499 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df5fab42-4bfe-488c-ba81-438aaedf2f87]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282343, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.536 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b0aa8a68-64ea-4266-822b-5ab001b14083]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.588 2 INFO nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Creating config drive at /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.594 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptzrif0ux execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.607 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f91e182-ead2-4006-9319-635ad55a29ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.609 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.609 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.610 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:26 np0005473739 NetworkManager[44949]: <info>  [1759845926.6133] manager: (tap1eabd9ee-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Oct  7 10:05:26 np0005473739 kernel: tap1eabd9ee-60: entered promiscuous mode
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:26Z|00048|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct  7 10:05:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1914118239' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.643 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1eabd9ee-6333-432b-b50d-9679677d38f6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1eabd9ee-6333-432b-b50d-9679677d38f6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[049d078c-ecab-401e-b011-4268441623f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.645 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-1eabd9ee-6333-432b-b50d-9679677d38f6
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/1eabd9ee-6333-432b-b50d-9679677d38f6.pid.haproxy
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:05:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:26.646 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'env', 'PROCESS_TAG=haproxy-1eabd9ee-6333-432b-b50d-9679677d38f6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1eabd9ee-6333-432b-b50d-9679677d38f6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.663 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.671 2 DEBUG nova.compute.provider_tree [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.697 2 DEBUG nova.scheduler.client.report [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.726 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptzrif0ux" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.752 2 DEBUG nova.storage.rbd_utils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.757 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.791 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.793 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.807 2 DEBUG nova.network.neutron [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Updated VIF entry in instance network info cache for port 3486260f-fd35-48fb-a925-cbe6f4a1a9f5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.808 2 DEBUG nova.network.neutron [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Updating instance_info_cache with network_info: [{"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.928 2 DEBUG oslo_concurrency.lockutils [req-2a9e1a0b-17a5-498a-84a1-a5d9d2c5879f req-1b24c9f9-8808-4156-a561-410eb245da3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-7322f2d1-885e-4e41-8a96-e90d4ddc6c38" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.950 2 DEBUG oslo_concurrency.processutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config 7322f2d1-885e-4e41-8a96-e90d4ddc6c38_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.194s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:26 np0005473739 nova_compute[259550]: 2025-10-07 14:05:26.951 2 INFO nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Deleting local config drive /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38/disk.config because it was imported into RBD.#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.003 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.004 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:05:27 np0005473739 NetworkManager[44949]: <info>  [1759845927.0121] manager: (tap3486260f-fd): new Tun device (/org/freedesktop/NetworkManager/Devices/41)
Oct  7 10:05:27 np0005473739 kernel: tap3486260f-fd: entered promiscuous mode
Oct  7 10:05:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:27Z|00049|binding|INFO|Claiming lport 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 for this chassis.
Oct  7 10:05:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:27Z|00050|binding|INFO|3486260f-fd35-48fb-a925-cbe6f4a1a9f5: Claiming fa:16:3e:68:5d:93 10.100.0.9
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:27 np0005473739 NetworkManager[44949]: <info>  [1759845927.0265] device (tap3486260f-fd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:05:27 np0005473739 NetworkManager[44949]: <info>  [1759845927.0278] device (tap3486260f-fd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.029 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:5d:93 10.100.0.9'], port_security=['fa:16:3e:68:5d:93 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7322f2d1-885e-4e41-8a96-e90d4ddc6c38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3486260f-fd35-48fb-a925-cbe6f4a1a9f5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:05:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:27Z|00051|binding|INFO|Setting lport 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 ovn-installed in OVS
Oct  7 10:05:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:27Z|00052|binding|INFO|Setting lport 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 up in Southbound
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:27 np0005473739 systemd-machined[214580]: New machine qemu-13-instance-0000000b.
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.063 2 INFO nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:05:27 np0005473739 systemd[1]: Started Virtual Machine qemu-13-instance-0000000b.
Oct  7 10:05:27 np0005473739 podman[282421]: 2025-10-07 14:05:27.092092274 +0000 UTC m=+0.090990295 container create 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.095 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:05:27 np0005473739 podman[282421]: 2025-10-07 14:05:27.049612107 +0000 UTC m=+0.048510148 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:05:27 np0005473739 systemd[1]: Started libpod-conmon-534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091.scope.
Oct  7 10:05:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:05:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ee58677923a11d6eaf88f22eb26cdfd6a6c1bfcaf21246d843b12d3493d77fd3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:05:27 np0005473739 podman[282421]: 2025-10-07 14:05:27.237766199 +0000 UTC m=+0.236664260 container init 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:05:27 np0005473739 podman[282421]: 2025-10-07 14:05:27.243750609 +0000 UTC m=+0.242648630 container start 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:05:27 np0005473739 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [NOTICE]   (282456) : New worker (282458) forked
Oct  7 10:05:27 np0005473739 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [NOTICE]   (282456) : Loading success.
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.345 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis#033[00m
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.348 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.361 2 DEBUG nova.policy [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '124a23e91e614186848847e685d191d9', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '54e39443887a407284ed98974d4e0771', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.364 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae044ac8-96ee-4211-baa7-6904b1ffad62]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.374 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.379 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.380 2 INFO nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Creating image(s)#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.407 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.408 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[08d115dc-12bf-43ad-b516-a88cc5f6a9fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.412 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aa6740b5-a635-44b2-b895-b38dae2251f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.444 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.450 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8cc37f8-375d-40af-878d-5318e98d4b09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1144: 305 pgs: 305 active+clean; 134 MiB data, 293 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 162 op/s
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[347d88e3-0232-43e9-8c25-045dea49308c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 306, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 6, 'rx_bytes': 306, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 264, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 264, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282508, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.487 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a694bf-2dca-41e9-ac59-802fca4aa280]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282516, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282516, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.489 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.490 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.492 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.493 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.493 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:27.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.497 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.525 2 DEBUG nova.compute.manager [req-d2c0c196-efb7-408c-b797-beacb0a44d84 req-a08bba62-020a-47ef-bc87-8161964a6466 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.526 2 DEBUG oslo_concurrency.lockutils [req-d2c0c196-efb7-408c-b797-beacb0a44d84 req-a08bba62-020a-47ef-bc87-8161964a6466 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.526 2 DEBUG oslo_concurrency.lockutils [req-d2c0c196-efb7-408c-b797-beacb0a44d84 req-a08bba62-020a-47ef-bc87-8161964a6466 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.526 2 DEBUG oslo_concurrency.lockutils [req-d2c0c196-efb7-408c-b797-beacb0a44d84 req-a08bba62-020a-47ef-bc87-8161964a6466 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.526 2 DEBUG nova.compute.manager [req-d2c0c196-efb7-408c-b797-beacb0a44d84 req-a08bba62-020a-47ef-bc87-8161964a6466 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Processing event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.558 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.559 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.560 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.560 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.580 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.583 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a2f7901e-6572-4162-b995-0c44fb69eab5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:27 np0005473739 nova_compute[259550]: 2025-10-07 14:05:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.224 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.225 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845928.2230887, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.225 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Started (Lifecycle Event)#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.231 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.237 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance spawned successfully.#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.238 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.309 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.313 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.362 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.362 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.363 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.363 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.363 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.364 2 DEBUG nova.virt.libvirt.driver [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.418 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.419 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845928.2249036, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.419 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.443 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a2f7901e-6572-4162-b995-0c44fb69eab5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.861s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.513 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] resizing rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.645 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.648 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Successfully created port: d4571f56-54c6-4986-845c-cd57c4faadac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.654 2 INFO nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Took 9.79 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.654 2 DEBUG nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.658 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845928.2298362, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.659 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.715 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.718 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.801 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845928.3106873, 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.802 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] VM Started (Lifecycle Event)#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.805 2 INFO nova.compute.manager [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Took 10.95 seconds to build instance.#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.848 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.853 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845928.311534, 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.853 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.858 2 DEBUG oslo_concurrency.lockutils [None req-41c5df1d-dd2d-4500-8ec1-a643cf471911 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.269s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.874 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.878 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.905 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:05:28 np0005473739 nova_compute[259550]: 2025-10-07 14:05:28.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.121 2 DEBUG nova.objects.instance [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lazy-loading 'migration_context' on Instance uuid a2f7901e-6572-4162-b995-0c44fb69eab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.157 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.158 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Ensure instance console log exists: /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.159 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.159 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.159 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1145: 305 pgs: 305 active+clean; 168 MiB data, 298 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.9 MiB/s wr, 176 op/s
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.706 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.706 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.707 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.707 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.707 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.707 2 WARNING nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.707 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Received event network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.707 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.708 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.708 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.708 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Processing event network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.708 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Received event network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.708 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.709 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.709 2 DEBUG oslo_concurrency.lockutils [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.709 2 DEBUG nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] No waiting events found dispatching network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.709 2 WARNING nova.compute.manager [req-71e26c28-750f-4473-9f29-b38034281217 req-5590540a-d394-462d-a8d7-74593a75f2d9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Received unexpected event network-vif-plugged-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.710 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.714 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845929.7134633, 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.715 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.717 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.721 2 INFO nova.virt.libvirt.driver [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Instance spawned successfully.#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.722 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.729 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Successfully updated port: d4571f56-54c6-4986-845c-cd57c4faadac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.969 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.977 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.978 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.978 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.979 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.979 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.980 2 DEBUG nova.virt.libvirt.driver [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.985 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.996 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.997 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquired lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:29 np0005473739 nova_compute[259550]: 2025-10-07 14:05:29.997 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:05:30 np0005473739 nova_compute[259550]: 2025-10-07 14:05:30.021 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:05:30 np0005473739 nova_compute[259550]: 2025-10-07 14:05:30.045 2 INFO nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Took 9.29 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:05:30 np0005473739 nova_compute[259550]: 2025-10-07 14:05:30.045 2 DEBUG nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:30 np0005473739 podman[282724]: 2025-10-07 14:05:30.10006544 +0000 UTC m=+0.078857900 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:05:30 np0005473739 nova_compute[259550]: 2025-10-07 14:05:30.109 2 INFO nova.compute.manager [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Took 10.40 seconds to build instance.#033[00m
Oct  7 10:05:30 np0005473739 nova_compute[259550]: 2025-10-07 14:05:30.128 2 DEBUG oslo_concurrency.lockutils [None req-6d834f2b-be3e-486f-a12d-6a3428920775 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.501s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:30 np0005473739 nova_compute[259550]: 2025-10-07 14:05:30.225 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:05:30 np0005473739 nova_compute[259550]: 2025-10-07 14:05:30.411 2 DEBUG nova.compute.manager [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:30 np0005473739 nova_compute[259550]: 2025-10-07 14:05:30.412 2 DEBUG nova.compute.manager [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing instance network info cache due to event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:05:30 np0005473739 nova_compute[259550]: 2025-10-07 14:05:30.412 2 DEBUG oslo_concurrency.lockutils [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:30 np0005473739 nova_compute[259550]: 2025-10-07 14:05:30.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #54. Immutable memtables: 0.
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.499865) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 27] Flushing memtable with next log file: 54
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930499939, "job": 27, "event": "flush_started", "num_memtables": 1, "num_entries": 1242, "num_deletes": 507, "total_data_size": 1313482, "memory_usage": 1336920, "flush_reason": "Manual Compaction"}
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 27] Level-0 flush table #55: started
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930514730, "cf_name": "default", "job": 27, "event": "table_file_creation", "file_number": 55, "file_size": 893255, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 23055, "largest_seqno": 24296, "table_properties": {"data_size": 888584, "index_size": 1619, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 14668, "raw_average_key_size": 18, "raw_value_size": 876646, "raw_average_value_size": 1132, "num_data_blocks": 73, "num_entries": 774, "num_filter_entries": 774, "num_deletions": 507, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759845848, "oldest_key_time": 1759845848, "file_creation_time": 1759845930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 55, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 27] Flush lasted 14908 microseconds, and 3631 cpu microseconds.
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.514773) [db/flush_job.cc:967] [default] [JOB 27] Level-0 flush table #55: 893255 bytes OK
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.514796) [db/memtable_list.cc:519] [default] Level-0 commit table #55 started
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.516302) [db/memtable_list.cc:722] [default] Level-0 commit table #55: memtable #1 done
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.516318) EVENT_LOG_v1 {"time_micros": 1759845930516312, "job": 27, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.516342) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 27] Try to delete WAL files size 1306705, prev total WAL file size 1306705, number of live WAL files 2.
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000051.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.517251) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D00353033' seq:72057594037927935, type:22 .. '6C6F676D00373535' seq:0, type:0; will stop at (end)
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 28] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 27 Base level 0, inputs: [55(872KB)], [53(8956KB)]
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930517323, "job": 28, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [55], "files_L6": [53], "score": -1, "input_data_size": 10065157, "oldest_snapshot_seqno": -1}
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 28] Generated table #56: 4547 keys, 7056062 bytes, temperature: kUnknown
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930558081, "cf_name": "default", "job": 28, "event": "table_file_creation", "file_number": 56, "file_size": 7056062, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7025983, "index_size": 17617, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11397, "raw_key_size": 113930, "raw_average_key_size": 25, "raw_value_size": 6944036, "raw_average_value_size": 1527, "num_data_blocks": 732, "num_entries": 4547, "num_filter_entries": 4547, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759845930, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 56, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.558398) [db/compaction/compaction_job.cc:1663] [default] [JOB 28] Compacted 1@0 + 1@6 files to L6 => 7056062 bytes
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.561210) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 246.4 rd, 172.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(6.7 +0.0 blob), read-write-amplify(19.2) write-amplify(7.9) OK, records in: 5544, records dropped: 997 output_compression: NoCompression
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.561237) EVENT_LOG_v1 {"time_micros": 1759845930561223, "job": 28, "event": "compaction_finished", "compaction_time_micros": 40846, "compaction_time_cpu_micros": 18237, "output_level": 6, "num_output_files": 1, "total_output_size": 7056062, "num_input_records": 5544, "num_output_records": 4547, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000055.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930561708, "job": 28, "event": "table_file_deletion", "file_number": 55}
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000053.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759845930563263, "job": 28, "event": "table_file_deletion", "file_number": 53}
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.517138) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.563374) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.563382) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.563384) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.563387) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:05:30 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:05:30.563389) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:05:30 np0005473739 nova_compute[259550]: 2025-10-07 14:05:30.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.386 2 DEBUG nova.network.neutron [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updating instance_info_cache with network_info: [{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.408 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Releasing lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.409 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Instance network_info: |[{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.410 2 DEBUG oslo_concurrency.lockutils [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.410 2 DEBUG nova.network.neutron [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.413 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Start _get_guest_xml network_info=[{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.419 2 WARNING nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.428 2 DEBUG nova.virt.libvirt.host [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.429 2 DEBUG nova.virt.libvirt.host [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.434 2 DEBUG nova.virt.libvirt.host [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.435 2 DEBUG nova.virt.libvirt.host [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.436 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.436 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.436 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.437 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.437 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.437 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.438 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.438 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.438 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.438 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.439 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.439 2 DEBUG nova.virt.hardware [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.442 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1146: 305 pgs: 305 active+clean; 181 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 5.3 MiB/s wr, 158 op/s
Oct  7 10:05:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3913640470' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.897 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.922 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:31 np0005473739 nova_compute[259550]: 2025-10-07 14:05:31.927 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0010426968361333595 of space, bias 1.0, pg target 0.3128090508400078 quantized to 32 (current 32)
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:05:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1549963102' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.376 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.378 2 DEBUG nova.virt.libvirt.vif [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-230026105',id=12,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54e39443887a407284ed98974d4e0771',ramdisk_id='',reservation_id='r-0i2aqjqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-909075908',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-909075908-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:27Z,user_data=None,user_id='124a23e91e614186848847e685d191d9',uuid=a2f7901e-6572-4162-b995-0c44fb69eab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.379 2 DEBUG nova.network.os_vif_util [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converting VIF {"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.380 2 DEBUG nova.network.os_vif_util [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.381 2 DEBUG nova.objects.instance [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lazy-loading 'pci_devices' on Instance uuid a2f7901e-6572-4162-b995-0c44fb69eab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.441 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  <uuid>a2f7901e-6572-4162-b995-0c44fb69eab5</uuid>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  <name>instance-0000000c</name>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <nova:name>tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105</nova:name>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:05:31</nova:creationTime>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <nova:user uuid="124a23e91e614186848847e685d191d9">tempest-FloatingIPsAssociationNegativeTestJSON-909075908-project-member</nova:user>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <nova:project uuid="54e39443887a407284ed98974d4e0771">tempest-FloatingIPsAssociationNegativeTestJSON-909075908</nova:project>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <nova:port uuid="d4571f56-54c6-4986-845c-cd57c4faadac">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <entry name="serial">a2f7901e-6572-4162-b995-0c44fb69eab5</entry>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <entry name="uuid">a2f7901e-6572-4162-b995-0c44fb69eab5</entry>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a2f7901e-6572-4162-b995-0c44fb69eab5_disk">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:47:1a:81"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <target dev="tapd4571f56-54"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/console.log" append="off"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:05:32 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:05:32 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:05:32 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:05:32 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.447 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Preparing to wait for external event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.448 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.448 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.448 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.449 2 DEBUG nova.virt.libvirt.vif [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-230026105',id=12,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='54e39443887a407284ed98974d4e0771',ramdisk_id='',reservation_id='r-0i2aqjqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationNeg
ativeTestJSON-909075908',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-909075908-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:27Z,user_data=None,user_id='124a23e91e614186848847e685d191d9',uuid=a2f7901e-6572-4162-b995-0c44fb69eab5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.450 2 DEBUG nova.network.os_vif_util [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converting VIF {"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.450 2 DEBUG nova.network.os_vif_util [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.451 2 DEBUG os_vif [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.457 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd4571f56-54, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.458 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd4571f56-54, col_values=(('external_ids', {'iface-id': 'd4571f56-54c6-4986-845c-cd57c4faadac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:1a:81', 'vm-uuid': 'a2f7901e-6572-4162-b995-0c44fb69eab5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:32 np0005473739 NetworkManager[44949]: <info>  [1759845932.4615] manager: (tapd4571f56-54): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.469 2 INFO os_vif [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54')#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.540 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.543 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.546 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] No VIF found with MAC fa:16:3e:47:1a:81, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.547 2 INFO nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Using config drive#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.593 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.599 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845917.5605228, dbb074a6-4040-4bb9-8c58-ee07f164e2ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.600 2 INFO nova.compute.manager [-] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.639 2 DEBUG nova.compute.manager [None req-9432ad32-7ba7-4281-9023-48cb5966d012 - - - - - -] [instance: dbb074a6-4040-4bb9-8c58-ee07f164e2ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:05:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2975917875' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:05:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:05:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2975917875' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.808 2 DEBUG nova.network.neutron [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updated VIF entry in instance network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.809 2 DEBUG nova.network.neutron [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updating instance_info_cache with network_info: [{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:32 np0005473739 nova_compute[259550]: 2025-10-07 14:05:32.829 2 DEBUG oslo_concurrency.lockutils [req-6b89023d-27a7-4598-9aef-e21c971531e9 req-80c80b44-6386-461e-aaf0-b38de50a086d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.176 2 INFO nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Creating config drive at /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config#033[00m
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.182 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcn8c9_lg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.315 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcn8c9_lg" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.348 2 DEBUG nova.storage.rbd_utils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] rbd image a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.352 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1147: 305 pgs: 305 active+clean; 181 MiB data, 304 MiB used, 60 GiB / 60 GiB avail; 1.1 MiB/s rd, 4.6 MiB/s wr, 134 op/s
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.509 2 DEBUG oslo_concurrency.processutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config a2f7901e-6572-4162-b995-0c44fb69eab5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.510 2 INFO nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Deleting local config drive /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5/disk.config because it was imported into RBD.#033[00m
Oct  7 10:05:33 np0005473739 virtqemud[259430]: End of file while reading data: Input/output error
Oct  7 10:05:33 np0005473739 virtqemud[259430]: End of file while reading data: Input/output error
Oct  7 10:05:33 np0005473739 NetworkManager[44949]: <info>  [1759845933.5731] manager: (tapd4571f56-54): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Oct  7 10:05:33 np0005473739 kernel: tapd4571f56-54: entered promiscuous mode
Oct  7 10:05:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:33Z|00053|binding|INFO|Claiming lport d4571f56-54c6-4986-845c-cd57c4faadac for this chassis.
Oct  7 10:05:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:33Z|00054|binding|INFO|d4571f56-54c6-4986-845c-cd57c4faadac: Claiming fa:16:3e:47:1a:81 10.100.0.9
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.605 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:1a:81 10.100.0.9'], port_security=['fa:16:3e:47:1a:81 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a2f7901e-6572-4162-b995-0c44fb69eab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54e39443887a407284ed98974d4e0771', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b31c4f6-cd30-4c48-acc0-2bceb0f06bb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11ba191e-5394-4c4a-8b33-ca086e98b88d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d4571f56-54c6-4986-845c-cd57c4faadac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.611 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d4571f56-54c6-4986-845c-cd57c4faadac in datapath c11e448c-21ee-492b-8c01-cf2af1e6c4f4 bound to our chassis#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.613 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c11e448c-21ee-492b-8c01-cf2af1e6c4f4#033[00m
Oct  7 10:05:33 np0005473739 systemd-machined[214580]: New machine qemu-14-instance-0000000c.
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.628 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[322d2c35-f92a-4783-944b-08151802387a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.629 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc11e448c-21 in ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.631 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc11e448c-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.631 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[757a8b46-0eaa-4118-8f22-87d386959654]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.632 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e2c9d9f1-379f-45d8-bb47-2102cead6c24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 systemd[1]: Started Virtual Machine qemu-14-instance-0000000c.
Oct  7 10:05:33 np0005473739 systemd-udevd[282880]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.655 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5fb33639-0275-4689-ba44-50fb7475f820]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 NetworkManager[44949]: <info>  [1759845933.6730] device (tapd4571f56-54): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:33Z|00055|binding|INFO|Setting lport d4571f56-54c6-4986-845c-cd57c4faadac ovn-installed in OVS
Oct  7 10:05:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:33Z|00056|binding|INFO|Setting lport d4571f56-54c6-4986-845c-cd57c4faadac up in Southbound
Oct  7 10:05:33 np0005473739 NetworkManager[44949]: <info>  [1759845933.6758] device (tapd4571f56-54): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.677 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2038cc05-fb79-4c16-9d48-0b1a0d9806de]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.716 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2a563063-4eaf-42e6-b1ca-4d2279344b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.725 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[728d1f6c-8924-4e74-b561-b2eed31ab3a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 NetworkManager[44949]: <info>  [1759845933.7272] manager: (tapc11e448c-20): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Oct  7 10:05:33 np0005473739 systemd-udevd[282883]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.763 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c75d0143-3e4f-4513-8c58-6e0a7d8d7ee9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.767 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ce0b25f1-c168-4779-ace2-f3f36c1e2ae2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 NetworkManager[44949]: <info>  [1759845933.8061] device (tapc11e448c-20): carrier: link connected
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.814 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f5a6ef-d8fa-405e-88a6-448cce6b554f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.836 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc95221-8f99-4eb7-8496-72269f730c0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc11e448c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:ab:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651737, 'reachable_time': 40650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 282912, 'error': None, 'target': 'ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.859 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8043d0a0-f9cb-4e5d-99f7-28014ecf4515]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe65:abcb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651737, 'tstamp': 651737}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 282913, 'error': None, 'target': 'ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.874 2 DEBUG nova.compute.manager [req-b30ae0aa-7057-4ad8-b0e4-372410ca199e req-2c77c112-4979-4218-bcc7-de0e8e0757f0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.875 2 DEBUG oslo_concurrency.lockutils [req-b30ae0aa-7057-4ad8-b0e4-372410ca199e req-2c77c112-4979-4218-bcc7-de0e8e0757f0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.875 2 DEBUG oslo_concurrency.lockutils [req-b30ae0aa-7057-4ad8-b0e4-372410ca199e req-2c77c112-4979-4218-bcc7-de0e8e0757f0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.876 2 DEBUG oslo_concurrency.lockutils [req-b30ae0aa-7057-4ad8-b0e4-372410ca199e req-2c77c112-4979-4218-bcc7-de0e8e0757f0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:33 np0005473739 nova_compute[259550]: 2025-10-07 14:05:33.876 2 DEBUG nova.compute.manager [req-b30ae0aa-7057-4ad8-b0e4-372410ca199e req-2c77c112-4979-4218-bcc7-de0e8e0757f0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Processing event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.882 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[46bf391e-3ec3-4d1f-a38d-886965cac203]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc11e448c-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:65:ab:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 24], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651737, 'reachable_time': 40650, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 282914, 'error': None, 'target': 'ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:33.944 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[703069e2-0e66-40ca-bf9a-619116d93734]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.018 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e8938ba2-79b4-455f-a7b4-2fdf17bbf3bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.021 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc11e448c-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.022 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.022 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc11e448c-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:34 np0005473739 NetworkManager[44949]: <info>  [1759845934.0260] manager: (tapc11e448c-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Oct  7 10:05:34 np0005473739 kernel: tapc11e448c-20: entered promiscuous mode
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.031 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc11e448c-20, col_values=(('external_ids', {'iface-id': '7dd78e44-59a7-4f45-983c-498cc6aa3cd4'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:34Z|00057|binding|INFO|Releasing lport 7dd78e44-59a7-4f45-983c-498cc6aa3cd4 from this chassis (sb_readonly=0)
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.056 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c11e448c-21ee-492b-8c01-cf2af1e6c4f4.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c11e448c-21ee-492b-8c01-cf2af1e6c4f4.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.057 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9878fddf-c3c2-43a3-a94d-bc6453e1b14f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.058 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-c11e448c-21ee-492b-8c01-cf2af1e6c4f4
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/c11e448c-21ee-492b-8c01-cf2af1e6c4f4.pid.haproxy
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID c11e448c-21ee-492b-8c01-cf2af1e6c4f4
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:05:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:34.060 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'env', 'PROCESS_TAG=haproxy-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c11e448c-21ee-492b-8c01-cf2af1e6c4f4.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:05:34 np0005473739 podman[282986]: 2025-10-07 14:05:34.484847129 +0000 UTC m=+0.054244332 container create eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  7 10:05:34 np0005473739 systemd[1]: Started libpod-conmon-eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482.scope.
Oct  7 10:05:34 np0005473739 podman[282986]: 2025-10-07 14:05:34.456633054 +0000 UTC m=+0.026030267 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.560 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.561 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845934.5612476, a2f7901e-6572-4162-b995-0c44fb69eab5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.562 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] VM Started (Lifecycle Event)#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.569 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.573 2 INFO nova.virt.libvirt.driver [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Instance spawned successfully.#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.574 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:05:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:05:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08cfefc9e73b43b0851f8beb7cc640de90c39f70e6d14855bf71f6c4d9005459/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.603 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:34 np0005473739 podman[282986]: 2025-10-07 14:05:34.606203014 +0000 UTC m=+0.175600247 container init eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.613 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:05:34 np0005473739 podman[282986]: 2025-10-07 14:05:34.618959505 +0000 UTC m=+0.188356728 container start eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.621 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.622 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.622 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.623 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.624 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.624 2 DEBUG nova.virt.libvirt.driver [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.635 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.636 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845934.56133, a2f7901e-6572-4162-b995-0c44fb69eab5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.636 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:05:34 np0005473739 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [NOTICE]   (283004) : New worker (283006) forked
Oct  7 10:05:34 np0005473739 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [NOTICE]   (283004) : Loading success.
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.691 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.695 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845934.5687776, a2f7901e-6572-4162-b995-0c44fb69eab5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.696 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.729 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.736 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.744 2 INFO nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Took 7.37 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.744 2 DEBUG nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.773 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.820 2 INFO nova.compute.manager [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Took 8.95 seconds to build instance.#033[00m
Oct  7 10:05:34 np0005473739 nova_compute[259550]: 2025-10-07 14:05:34.850 2 DEBUG oslo_concurrency.lockutils [None req-86a34ca0-426c-4dc9-97e4-445975c72c4c 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.042s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.074 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.075 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.102 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.172 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.173 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.182 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.183 2 INFO nova.compute.claims [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.328 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1148: 305 pgs: 305 active+clean; 181 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 4.6 MiB/s wr, 214 op/s
Oct  7 10:05:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e131 do_prune osdmap full prune enabled
Oct  7 10:05:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e132 e132: 3 total, 3 up, 3 in
Oct  7 10:05:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e132: 3 total, 3 up, 3 in
Oct  7 10:05:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/769448829' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.855 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.527s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.861 2 DEBUG nova.compute.provider_tree [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.889 2 DEBUG nova.scheduler.client.report [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.910 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:35 np0005473739 nova_compute[259550]: 2025-10-07 14:05:35.911 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.235 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.236 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.299 2 DEBUG nova.compute.manager [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.299 2 DEBUG oslo_concurrency.lockutils [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.300 2 DEBUG oslo_concurrency.lockutils [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.300 2 DEBUG oslo_concurrency.lockutils [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.300 2 DEBUG nova.compute.manager [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] No waiting events found dispatching network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.300 2 WARNING nova.compute.manager [req-4db57a34-968f-4870-a2fc-e6fdf8bcad7d req-39e46e12-533d-4263-9e1e-f9d56f364ed0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received unexpected event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac for instance with vm_state active and task_state None.#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.401 2 DEBUG nova.policy [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f06dda9346a24fb094ad9fe51664cc48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.417 2 INFO nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.481 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.805 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.807 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.808 2 INFO nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Creating image(s)#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.833 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.866 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.898 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.905 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.973 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.975 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.975 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:36 np0005473739 nova_compute[259550]: 2025-10-07 14:05:36.976 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.002 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.006 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 74094438-0995-4031-9943-cc85a5ef4f57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.162 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Successfully created port: 132e9e5d-eaab-437e-a82e-d49f6c4a09df _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1150: 305 pgs: 305 active+clean; 181 MiB data, 315 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 2.2 MiB/s wr, 224 op/s
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.650 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 74094438-0995-4031-9943-cc85a5ef4f57_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.643s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.722 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.938 2 DEBUG nova.objects.instance [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid 74094438-0995-4031-9943-cc85a5ef4f57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.976 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.977 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Ensure instance console log exists: /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.977 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.978 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:37 np0005473739 nova_compute[259550]: 2025-10-07 14:05:37.978 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e132 do_prune osdmap full prune enabled
Oct  7 10:05:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e133 e133: 3 total, 3 up, 3 in
Oct  7 10:05:38 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e133: 3 total, 3 up, 3 in
Oct  7 10:05:39 np0005473739 nova_compute[259550]: 2025-10-07 14:05:39.426 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Successfully updated port: 132e9e5d-eaab-437e-a82e-d49f6c4a09df _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:05:39 np0005473739 nova_compute[259550]: 2025-10-07 14:05:39.447 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:39 np0005473739 nova_compute[259550]: 2025-10-07 14:05:39.448 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquired lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:39 np0005473739 nova_compute[259550]: 2025-10-07 14:05:39.448 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:05:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1152: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 213 MiB data, 326 MiB used, 60 GiB / 60 GiB avail; 6.5 MiB/s rd, 1.4 MiB/s wr, 280 op/s
Oct  7 10:05:39 np0005473739 nova_compute[259550]: 2025-10-07 14:05:39.596 2 DEBUG nova.compute.manager [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-changed-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:39 np0005473739 nova_compute[259550]: 2025-10-07 14:05:39.597 2 DEBUG nova.compute.manager [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Refreshing instance network info cache due to event network-changed-132e9e5d-eaab-437e-a82e-d49f6c4a09df. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:05:39 np0005473739 nova_compute[259550]: 2025-10-07 14:05:39.597 2 DEBUG oslo_concurrency.lockutils [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:40 np0005473739 nova_compute[259550]: 2025-10-07 14:05:40.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:40 np0005473739 nova_compute[259550]: 2025-10-07 14:05:40.608 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:05:40 np0005473739 nova_compute[259550]: 2025-10-07 14:05:40.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:40 np0005473739 NetworkManager[44949]: <info>  [1759845940.8513] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Oct  7 10:05:40 np0005473739 NetworkManager[44949]: <info>  [1759845940.8524] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Oct  7 10:05:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:40Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:03:d4 10.100.0.7
Oct  7 10:05:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:40Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:03:d4 10.100.0.7
Oct  7 10:05:40 np0005473739 nova_compute[259550]: 2025-10-07 14:05:40.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:40Z|00058|binding|INFO|Releasing lport 7dd78e44-59a7-4f45-983c-498cc6aa3cd4 from this chassis (sb_readonly=0)
Oct  7 10:05:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:40Z|00059|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct  7 10:05:40 np0005473739 nova_compute[259550]: 2025-10-07 14:05:40.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1153: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 234 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 7.1 MiB/s rd, 3.5 MiB/s wr, 339 op/s
Oct  7 10:05:41 np0005473739 nova_compute[259550]: 2025-10-07 14:05:41.707 2 DEBUG nova.compute.manager [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:41 np0005473739 nova_compute[259550]: 2025-10-07 14:05:41.708 2 DEBUG nova.compute.manager [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing instance network info cache due to event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:05:41 np0005473739 nova_compute[259550]: 2025-10-07 14:05:41.708 2 DEBUG oslo_concurrency.lockutils [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:41 np0005473739 nova_compute[259550]: 2025-10-07 14:05:41.708 2 DEBUG oslo_concurrency.lockutils [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:41 np0005473739 nova_compute[259550]: 2025-10-07 14:05:41.709 2 DEBUG nova.network.neutron [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:05:41 np0005473739 nova_compute[259550]: 2025-10-07 14:05:41.973 2 DEBUG nova.network.neutron [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Updating instance_info_cache with network_info: [{"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:41 np0005473739 nova_compute[259550]: 2025-10-07 14:05:41.992 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Releasing lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:41 np0005473739 nova_compute[259550]: 2025-10-07 14:05:41.998 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Instance network_info: |[{"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.001 2 DEBUG oslo_concurrency.lockutils [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.001 2 DEBUG nova.network.neutron [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Refreshing network info cache for port 132e9e5d-eaab-437e-a82e-d49f6c4a09df _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.006 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Start _get_guest_xml network_info=[{"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.012 2 WARNING nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.018 2 DEBUG nova.virt.libvirt.host [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.019 2 DEBUG nova.virt.libvirt.host [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.028 2 DEBUG nova.virt.libvirt.host [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.029 2 DEBUG nova.virt.libvirt.host [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.030 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.030 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.032 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.032 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.032 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.033 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.033 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.033 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.034 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.034 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.034 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.035 2 DEBUG nova.virt.hardware [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.040 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2271294480' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.491 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.516 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:42 np0005473739 nova_compute[259550]: 2025-10-07 14:05:42.522 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/964272690' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.000 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.004 2 DEBUG nova.virt.libvirt.vif [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-222035485',display_name='tempest-ServersAdminTestJSON-server-222035485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-222035485',id=13,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-lr6293xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900
-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:36Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=74094438-0995-4031-9943-cc85a5ef4f57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.004 2 DEBUG nova.network.os_vif_util [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.006 2 DEBUG nova.network.os_vif_util [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:05:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:43Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:68:5d:93 10.100.0.9
Oct  7 10:05:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:43Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:68:5d:93 10.100.0.9
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.008 2 DEBUG nova.objects.instance [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid 74094438-0995-4031-9943-cc85a5ef4f57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.023 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  <uuid>74094438-0995-4031-9943-cc85a5ef4f57</uuid>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  <name>instance-0000000d</name>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersAdminTestJSON-server-222035485</nova:name>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:05:42</nova:creationTime>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <nova:port uuid="132e9e5d-eaab-437e-a82e-d49f6c4a09df">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <entry name="serial">74094438-0995-4031-9943-cc85a5ef4f57</entry>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <entry name="uuid">74094438-0995-4031-9943-cc85a5ef4f57</entry>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/74094438-0995-4031-9943-cc85a5ef4f57_disk">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/74094438-0995-4031-9943-cc85a5ef4f57_disk.config">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:66:2c:66"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <target dev="tap132e9e5d-ea"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/console.log" append="off"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:05:43 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:05:43 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:05:43 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:05:43 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.030 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Preparing to wait for external event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.031 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.031 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.031 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.032 2 DEBUG nova.virt.libvirt.vif [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-222035485',display_name='tempest-ServersAdminTestJSON-server-222035485',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-222035485',id=13,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-lr6293xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-
1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:36Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=74094438-0995-4031-9943-cc85a5ef4f57,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.033 2 DEBUG nova.network.os_vif_util [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.034 2 DEBUG nova.network.os_vif_util [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.034 2 DEBUG os_vif [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.036 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.042 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap132e9e5d-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.042 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap132e9e5d-ea, col_values=(('external_ids', {'iface-id': '132e9e5d-eaab-437e-a82e-d49f6c4a09df', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:2c:66', 'vm-uuid': '74094438-0995-4031-9943-cc85a5ef4f57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:43 np0005473739 NetworkManager[44949]: <info>  [1759845943.0454] manager: (tap132e9e5d-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.054 2 INFO os_vif [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea')#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.109 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.110 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.111 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:66:2c:66, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.112 2 INFO nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Using config drive#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.135 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1154: 305 pgs: 7 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 296 active+clean; 234 MiB data, 343 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 3.5 MiB/s wr, 218 op/s
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.593 2 INFO nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Creating config drive at /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.599 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpajbk0hx7 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.682 2 DEBUG nova.network.neutron [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updated VIF entry in instance network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.683 2 DEBUG nova.network.neutron [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updating instance_info_cache with network_info: [{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.202", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.723 2 DEBUG oslo_concurrency.lockutils [req-c985606a-2a5f-44c4-9f06-d04cdb1c9f89 req-df6c903d-da79-4239-a336-1dbbf7669f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.735 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpajbk0hx7" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.762 2 DEBUG nova.storage.rbd_utils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 74094438-0995-4031-9943-cc85a5ef4f57_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:43 np0005473739 nova_compute[259550]: 2025-10-07 14:05:43.767 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config 74094438-0995-4031-9943-cc85a5ef4f57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.099 2 DEBUG oslo_concurrency.processutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config 74094438-0995-4031-9943-cc85a5ef4f57_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.333s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.100 2 INFO nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Deleting local config drive /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57/disk.config because it was imported into RBD.#033[00m
Oct  7 10:05:44 np0005473739 kernel: tap132e9e5d-ea: entered promiscuous mode
Oct  7 10:05:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:44Z|00060|binding|INFO|Claiming lport 132e9e5d-eaab-437e-a82e-d49f6c4a09df for this chassis.
Oct  7 10:05:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:44Z|00061|binding|INFO|132e9e5d-eaab-437e-a82e-d49f6c4a09df: Claiming fa:16:3e:66:2c:66 10.100.0.10
Oct  7 10:05:44 np0005473739 NetworkManager[44949]: <info>  [1759845944.1717] manager: (tap132e9e5d-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:44Z|00062|binding|INFO|Setting lport 132e9e5d-eaab-437e-a82e-d49f6c4a09df ovn-installed in OVS
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:44 np0005473739 systemd-udevd[283340]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:05:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:44Z|00063|binding|INFO|Setting lport 132e9e5d-eaab-437e-a82e-d49f6c4a09df up in Southbound
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.215 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:2c:66 10.100.0.10'], port_security=['fa:16:3e:66:2c:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '74094438-0995-4031-9943-cc85a5ef4f57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=132e9e5d-eaab-437e-a82e-d49f6c4a09df) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.216 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 132e9e5d-eaab-437e-a82e-d49f6c4a09df in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.218 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:05:44 np0005473739 NetworkManager[44949]: <info>  [1759845944.2247] device (tap132e9e5d-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:05:44 np0005473739 systemd-machined[214580]: New machine qemu-15-instance-0000000d.
Oct  7 10:05:44 np0005473739 NetworkManager[44949]: <info>  [1759845944.2266] device (tap132e9e5d-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:05:44 np0005473739 systemd[1]: Started Virtual Machine qemu-15-instance-0000000d.
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.245 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb1e52d-ccfd-4f32-8915-ecf2815ebae9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.288 2 DEBUG nova.network.neutron [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Updated VIF entry in instance network info cache for port 132e9e5d-eaab-437e-a82e-d49f6c4a09df. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.289 2 DEBUG nova.network.neutron [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Updating instance_info_cache with network_info: [{"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.292 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[29030d98-fc01-428b-9385-52120d28fa3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.296 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2327cb71-095a-47cc-a7f3-b3b8a69afe89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.329 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1eddd0df-32f0-487c-99b4-582ea41e53b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.352 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8490addb-db4c-49ca-b5f5-3f8da5f0c8e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 8, 'rx_bytes': 832, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283355, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.377 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e71fe57e-94aa-4c41-8912-3aea10e6a5ac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283356, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 283356, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.380 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.384 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.384 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.384 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:44.385 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.415 2 DEBUG oslo_concurrency.lockutils [req-ff20ea32-22d5-4fbe-b23f-a096099b48a2 req-d4bbb38b-67bf-4718-9caf-89b9551e3ff9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-74094438-0995-4031-9943-cc85a5ef4f57" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.716 2 DEBUG nova.compute.manager [req-24c7b9a2-22cc-46ee-aa11-c1d5f67f0860 req-72debb31-0c2e-48c3-9375-e1c178a14504 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.716 2 DEBUG oslo_concurrency.lockutils [req-24c7b9a2-22cc-46ee-aa11-c1d5f67f0860 req-72debb31-0c2e-48c3-9375-e1c178a14504 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.717 2 DEBUG oslo_concurrency.lockutils [req-24c7b9a2-22cc-46ee-aa11-c1d5f67f0860 req-72debb31-0c2e-48c3-9375-e1c178a14504 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.717 2 DEBUG oslo_concurrency.lockutils [req-24c7b9a2-22cc-46ee-aa11-c1d5f67f0860 req-72debb31-0c2e-48c3-9375-e1c178a14504 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:44 np0005473739 nova_compute[259550]: 2025-10-07 14:05:44.717 2 DEBUG nova.compute.manager [req-24c7b9a2-22cc-46ee-aa11-c1d5f67f0860 req-72debb31-0c2e-48c3-9375-e1c178a14504 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Processing event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1155: 305 pgs: 305 active+clean; 269 MiB data, 377 MiB used, 60 GiB / 60 GiB avail; 2.6 MiB/s rd, 6.0 MiB/s wr, 254 op/s
Oct  7 10:05:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.680 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845945.6797159, 74094438-0995-4031-9943-cc85a5ef4f57 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.680 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] VM Started (Lifecycle Event)#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.683 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.691 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.696 2 INFO nova.virt.libvirt.driver [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Instance spawned successfully.#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.697 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.702 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.706 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.728 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.728 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845945.6798453, 74094438-0995-4031-9943-cc85a5ef4f57 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.729 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.732 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.733 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.734 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.734 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.734 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.735 2 DEBUG nova.virt.libvirt.driver [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.756 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.761 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845945.6904776, 74094438-0995-4031-9943-cc85a5ef4f57 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.762 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.791 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.798 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.807 2 INFO nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Took 9.00 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.807 2 DEBUG nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.817 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:05:45 np0005473739 nova_compute[259550]: 2025-10-07 14:05:45.872 2 INFO nova.compute.manager [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Took 10.72 seconds to build instance.#033[00m
Oct  7 10:05:46 np0005473739 nova_compute[259550]: 2025-10-07 14:05:46.053 2 DEBUG oslo_concurrency.lockutils [None req-bd265495-da08-4e78-a3c8-dff8df1c3ab2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.978s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:46 np0005473739 nova_compute[259550]: 2025-10-07 14:05:46.835 2 DEBUG nova.compute.manager [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:46 np0005473739 nova_compute[259550]: 2025-10-07 14:05:46.836 2 DEBUG oslo_concurrency.lockutils [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:46 np0005473739 nova_compute[259550]: 2025-10-07 14:05:46.836 2 DEBUG oslo_concurrency.lockutils [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:46 np0005473739 nova_compute[259550]: 2025-10-07 14:05:46.836 2 DEBUG oslo_concurrency.lockutils [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:46 np0005473739 nova_compute[259550]: 2025-10-07 14:05:46.837 2 DEBUG nova.compute.manager [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] No waiting events found dispatching network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:05:46 np0005473739 nova_compute[259550]: 2025-10-07 14:05:46.837 2 WARNING nova.compute.manager [req-aa7401a7-7a51-4c08-84c0-71c6b6847289 req-985b2ca8-e6bb-43d0-9df2-1f06bc18248d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received unexpected event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df for instance with vm_state active and task_state None.#033[00m
Oct  7 10:05:47 np0005473739 podman[283400]: 2025-10-07 14:05:47.100584411 +0000 UTC m=+0.079435444 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:05:47 np0005473739 podman[283399]: 2025-10-07 14:05:47.106701154 +0000 UTC m=+0.086033070 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:05:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1156: 305 pgs: 305 active+clean; 293 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 7.2 MiB/s wr, 305 op/s
Oct  7 10:05:48 np0005473739 nova_compute[259550]: 2025-10-07 14:05:48.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:48Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:1a:81 10.100.0.9
Oct  7 10:05:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:48Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:1a:81 10.100.0.9
Oct  7 10:05:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1157: 305 pgs: 305 active+clean; 314 MiB data, 407 MiB used, 60 GiB / 60 GiB avail; 3.5 MiB/s rd, 8.2 MiB/s wr, 317 op/s
Oct  7 10:05:50 np0005473739 nova_compute[259550]: 2025-10-07 14:05:50.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e133 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e133 do_prune osdmap full prune enabled
Oct  7 10:05:50 np0005473739 nova_compute[259550]: 2025-10-07 14:05:50.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 e134: 3 total, 3 up, 3 in
Oct  7 10:05:50 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e134: 3 total, 3 up, 3 in
Oct  7 10:05:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1159: 305 pgs: 305 active+clean; 320 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.0 MiB/s wr, 304 op/s
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.062 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.062 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.150 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.279 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.280 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.289 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.290 2 INFO nova.compute.claims [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.368 2 DEBUG nova.compute.manager [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.369 2 DEBUG nova.compute.manager [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing instance network info cache due to event network-changed-d4571f56-54c6-4986-845c-cd57c4faadac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.369 2 DEBUG oslo_concurrency.lockutils [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.369 2 DEBUG oslo_concurrency.lockutils [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.369 2 DEBUG nova.network.neutron [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Refreshing network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.531 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:05:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646590604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.957 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.964 2 DEBUG nova.compute.provider_tree [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:05:52 np0005473739 nova_compute[259550]: 2025-10-07 14:05:52.990 2 DEBUG nova.scheduler.client.report [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.032 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.033 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.050 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.094 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.095 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:05:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:53.134 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:53.135 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.150 2 INFO nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.184 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:05:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1160: 305 pgs: 305 active+clean; 320 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 7.0 MiB/s wr, 304 op/s
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.533 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.534 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.535 2 INFO nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Creating image(s)#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.561 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.587 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.611 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.615 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.696 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.697 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.698 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.699 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.725 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.731 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:53 np0005473739 nova_compute[259550]: 2025-10-07 14:05:53.799 2 DEBUG nova.policy [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f06dda9346a24fb094ad9fe51664cc48', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.123 2 DEBUG nova.network.neutron [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updated VIF entry in instance network info cache for port d4571f56-54c6-4986-845c-cd57c4faadac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.124 2 DEBUG nova.network.neutron [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updating instance_info_cache with network_info: [{"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.143 2 DEBUG oslo_concurrency.lockutils [req-d6c87d66-e6eb-4e53-b9e6-aa20366a50cf req-d3655765-3bc5-4849-aca6-14749d4121a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a2f7901e-6572-4162-b995-0c44fb69eab5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.250 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.316 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.681 2 DEBUG nova.objects.instance [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid 348c80b7-7f65-4300-9dab-6a333f1b2c74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.697 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.698 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Ensure instance console log exists: /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.699 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.700 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.701 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:54 np0005473739 nova_compute[259550]: 2025-10-07 14:05:54.804 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Successfully created port: 184f2379-8442-414e-bccb-6f5e5a314e72 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:05:55 np0005473739 nova_compute[259550]: 2025-10-07 14:05:55.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:55 np0005473739 nova_compute[259550]: 2025-10-07 14:05:55.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1161: 305 pgs: 305 active+clean; 326 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 210 op/s
Oct  7 10:05:55 np0005473739 nova_compute[259550]: 2025-10-07 14:05:55.472 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Successfully updated port: 184f2379-8442-414e-bccb-6f5e5a314e72 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:05:55 np0005473739 nova_compute[259550]: 2025-10-07 14:05:55.488 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:55 np0005473739 nova_compute[259550]: 2025-10-07 14:05:55.488 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquired lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:55 np0005473739 nova_compute[259550]: 2025-10-07 14:05:55.488 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:05:55 np0005473739 nova_compute[259550]: 2025-10-07 14:05:55.540 2 DEBUG nova.compute.manager [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-changed-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:05:55 np0005473739 nova_compute[259550]: 2025-10-07 14:05:55.540 2 DEBUG nova.compute.manager [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Refreshing instance network info cache due to event network-changed-184f2379-8442-414e-bccb-6f5e5a314e72. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:05:55 np0005473739 nova_compute[259550]: 2025-10-07 14:05:55.541 2 DEBUG oslo_concurrency.lockutils [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:05:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:05:55 np0005473739 nova_compute[259550]: 2025-10-07 14:05:55.714 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.110 2 DEBUG nova.network.neutron [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Updating instance_info_cache with network_info: [{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:57 np0005473739 podman[283623]: 2025-10-07 14:05:57.119280917 +0000 UTC m=+0.106996040 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller)
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.123 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Releasing lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.124 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Instance network_info: |[{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.124 2 DEBUG oslo_concurrency.lockutils [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.124 2 DEBUG nova.network.neutron [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Refreshing network info cache for port 184f2379-8442-414e-bccb-6f5e5a314e72 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.127 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Start _get_guest_xml network_info=[{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.130 2 WARNING nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.134 2 DEBUG nova.virt.libvirt.host [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.135 2 DEBUG nova.virt.libvirt.host [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.137 2 DEBUG nova.virt.libvirt.host [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.138 2 DEBUG nova.virt.libvirt.host [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.138 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.138 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.139 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.140 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.140 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.140 2 DEBUG nova.virt.hardware [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.142 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1162: 305 pgs: 305 active+clean; 344 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 3.1 MiB/s wr, 158 op/s
Oct  7 10:05:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3760172047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.625 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.647 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.652 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.944 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.945 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.945 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.945 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.946 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.947 2 INFO nova.compute.manager [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Terminating instance#033[00m
Oct  7 10:05:57 np0005473739 nova_compute[259550]: 2025-10-07 14:05:57.948 2 DEBUG nova.compute.manager [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:05:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/305683047' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.144 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.146 2 DEBUG nova.virt.libvirt.vif [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-353618528',display_name='tempest-ServersAdminTestJSON-server-353618528',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-353618528',id=14,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-tg713xdg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:53Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=348c80b7-7f65-4300-9dab-6a333f1b2c74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.146 2 DEBUG nova.network.os_vif_util [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.147 2 DEBUG nova.network.os_vif_util [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.148 2 DEBUG nova.objects.instance [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid 348c80b7-7f65-4300-9dab-6a333f1b2c74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.166 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  <uuid>348c80b7-7f65-4300-9dab-6a333f1b2c74</uuid>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  <name>instance-0000000e</name>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersAdminTestJSON-server-353618528</nova:name>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:05:57</nova:creationTime>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <nova:port uuid="184f2379-8442-414e-bccb-6f5e5a314e72">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <entry name="serial">348c80b7-7f65-4300-9dab-6a333f1b2c74</entry>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <entry name="uuid">348c80b7-7f65-4300-9dab-6a333f1b2c74</entry>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/348c80b7-7f65-4300-9dab-6a333f1b2c74_disk">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:c8:34:23"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <target dev="tap184f2379-84"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/console.log" append="off"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:05:58 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:05:58 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:05:58 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:05:58 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.166 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Preparing to wait for external event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.167 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.167 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.167 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.168 2 DEBUG nova.virt.libvirt.vif [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-353618528',display_name='tempest-ServersAdminTestJSON-server-353618528',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-353618528',id=14,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-tg713xdg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:53Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=348c80b7-7f65-4300-9dab-6a333f1b2c74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.169 2 DEBUG nova.network.os_vif_util [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.169 2 DEBUG nova.network.os_vif_util [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.170 2 DEBUG os_vif [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.176 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap184f2379-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.177 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap184f2379-84, col_values=(('external_ids', {'iface-id': '184f2379-8442-414e-bccb-6f5e5a314e72', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c8:34:23', 'vm-uuid': '348c80b7-7f65-4300-9dab-6a333f1b2c74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:58 np0005473739 NetworkManager[44949]: <info>  [1759845958.1800] manager: (tap184f2379-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.189 2 INFO os_vif [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84')#033[00m
Oct  7 10:05:58 np0005473739 kernel: tapd4571f56-54 (unregistering): left promiscuous mode
Oct  7 10:05:58 np0005473739 NetworkManager[44949]: <info>  [1759845958.3703] device (tapd4571f56-54): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:58Z|00064|binding|INFO|Releasing lport d4571f56-54c6-4986-845c-cd57c4faadac from this chassis (sb_readonly=0)
Oct  7 10:05:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:58Z|00065|binding|INFO|Setting lport d4571f56-54c6-4986-845c-cd57c4faadac down in Southbound
Oct  7 10:05:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:58Z|00066|binding|INFO|Removing iface tapd4571f56-54 ovn-installed in OVS
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:58.411 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:1a:81 10.100.0.9'], port_security=['fa:16:3e:47:1a:81 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a2f7901e-6572-4162-b995-0c44fb69eab5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '54e39443887a407284ed98974d4e0771', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3b31c4f6-cd30-4c48-acc0-2bceb0f06bb5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=11ba191e-5394-4c4a-8b33-ca086e98b88d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d4571f56-54c6-4986-845c-cd57c4faadac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:05:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:58.412 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d4571f56-54c6-4986-845c-cd57c4faadac in datapath c11e448c-21ee-492b-8c01-cf2af1e6c4f4 unbound from our chassis#033[00m
Oct  7 10:05:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:58.413 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c11e448c-21ee-492b-8c01-cf2af1e6c4f4, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:05:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:58.418 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[920eab32-22d6-41db-9cbe-540bde7682df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:58.419 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4 namespace which is not needed anymore#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.426 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.427 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.427 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:c8:34:23, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.427 2 INFO nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Using config drive#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.449 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:58 np0005473739 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Oct  7 10:05:58 np0005473739 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000c.scope: Consumed 14.831s CPU time.
Oct  7 10:05:58 np0005473739 systemd-machined[214580]: Machine qemu-14-instance-0000000c terminated.
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.592 2 INFO nova.virt.libvirt.driver [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Instance destroyed successfully.#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.592 2 DEBUG nova.objects.instance [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lazy-loading 'resources' on Instance uuid a2f7901e-6572-4162-b995-0c44fb69eab5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:05:58 np0005473739 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [NOTICE]   (283004) : haproxy version is 2.8.14-c23fe91
Oct  7 10:05:58 np0005473739 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [NOTICE]   (283004) : path to executable is /usr/sbin/haproxy
Oct  7 10:05:58 np0005473739 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [WARNING]  (283004) : Exiting Master process...
Oct  7 10:05:58 np0005473739 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [ALERT]    (283004) : Current worker (283006) exited with code 143 (Terminated)
Oct  7 10:05:58 np0005473739 neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4[283000]: [WARNING]  (283004) : All workers exited. Exiting... (0)
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.622 2 DEBUG nova.virt.libvirt.vif [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:25Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',display_name='tempest-FloatingIPsAssociationNegativeTestJSON-server-230026105',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationnegativetestjson-server-230026105',id=12,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:34Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='54e39443887a407284ed98974d4e0771',ramdisk_id='',reservation_id='r-0i2aqjqd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model=
'virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationNegativeTestJSON-909075908',owner_user_name='tempest-FloatingIPsAssociationNegativeTestJSON-909075908-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:05:34Z,user_data=None,user_id='124a23e91e614186848847e685d191d9',uuid=a2f7901e-6572-4162-b995-0c44fb69eab5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.622 2 DEBUG nova.network.os_vif_util [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converting VIF {"id": "d4571f56-54c6-4986-845c-cd57c4faadac", "address": "fa:16:3e:47:1a:81", "network": {"id": "c11e448c-21ee-492b-8c01-cf2af1e6c4f4", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationNegativeTestJSON-1976607003-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "54e39443887a407284ed98974d4e0771", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd4571f56-54", "ovs_interfaceid": "d4571f56-54c6-4986-845c-cd57c4faadac", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.623 2 DEBUG nova.network.os_vif_util [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.624 2 DEBUG os_vif [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:05:58 np0005473739 systemd[1]: libpod-eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482.scope: Deactivated successfully.
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.625 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd4571f56-54, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:05:58 np0005473739 podman[283754]: 2025-10-07 14:05:58.631276018 +0000 UTC m=+0.117762828 container died eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.636 2 INFO os_vif [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:47:1a:81,bridge_name='br-int',has_traffic_filtering=True,id=d4571f56-54c6-4986-845c-cd57c4faadac,network=Network(c11e448c-21ee-492b-8c01-cf2af1e6c4f4),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd4571f56-54')#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.762 2 INFO nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Creating config drive at /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.771 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpox63kxhu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482-userdata-shm.mount: Deactivated successfully.
Oct  7 10:05:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay-08cfefc9e73b43b0851f8beb7cc640de90c39f70e6d14855bf71f6c4d9005459-merged.mount: Deactivated successfully.
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.905 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpox63kxhu" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.976 2 DEBUG nova.storage.rbd_utils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:05:58 np0005473739 nova_compute[259550]: 2025-10-07 14:05:58.980 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.018 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.019 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.023 2 DEBUG nova.network.neutron [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Updated VIF entry in instance network info cache for port 184f2379-8442-414e-bccb-6f5e5a314e72. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.024 2 DEBUG nova.network.neutron [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Updating instance_info_cache with network_info: [{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.040 2 DEBUG oslo_concurrency.lockutils [req-33e0ac3a-92f9-4cb2-9873-a6f6695fc40e req-f7826e93-229f-41ad-8731-ec8d18760400 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.041 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.111 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.112 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.119 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.120 2 INFO nova.compute.claims [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:05:59 np0005473739 podman[283754]: 2025-10-07 14:05:59.185512101 +0000 UTC m=+0.671998891 container cleanup eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:05:59 np0005473739 systemd[1]: libpod-conmon-eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482.scope: Deactivated successfully.
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.274 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:05:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:59Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:66:2c:66 10.100.0.10
Oct  7 10:05:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:05:59Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:66:2c:66 10.100.0.10
Oct  7 10:05:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1163: 305 pgs: 305 active+clean; 390 MiB data, 457 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.1 MiB/s wr, 132 op/s
Oct  7 10:05:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:05:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2372964451' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:05:59 np0005473739 podman[283853]: 2025-10-07 14:05:59.751735174 +0000 UTC m=+0.543068035 container remove eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:05:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.760 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3bbc6438-2a76-499c-9575-408ae1959a48]: (4, ('Tue Oct  7 02:05:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4 (eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482)\neef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482\nTue Oct  7 02:05:59 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4 (eef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482)\neef7ac40b1d44a6b82bbcd2b9865f1e12bf5036bc9ccf9ffb94218b038eb8482\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.762 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[83b4c3c5-6737-42fc-9db3-02b2cd98308f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.763 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc11e448c-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:59 np0005473739 kernel: tapc11e448c-20: left promiscuous mode
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.772 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.778 2 DEBUG nova.compute.provider_tree [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:05:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.791 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1df188d8-b363-4a86-963f-be3ed2f2f4eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.804 2 DEBUG nova.scheduler.client.report [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:05:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.821 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cdda217c-fa7c-46a6-945f-93ba5839cccf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.824 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ca912246-2cd6-4ec3-bf64-8b6fc40e6aba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.831 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.832 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:05:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.842 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[195d97d5-b8a8-40ad-9e2a-ad434dfd8783]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 651727, 'reachable_time': 29525, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 283898, 'error': None, 'target': 'ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.845 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c11e448c-21ee-492b-8c01-cf2af1e6c4f4 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:05:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:05:59.845 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a5a0a98e-8bb3-4cc0-9d9a-fcb08a65b109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:05:59 np0005473739 systemd[1]: run-netns-ovnmeta\x2dc11e448c\x2d21ee\x2d492b\x2d8c01\x2dcf2af1e6c4f4.mount: Deactivated successfully.
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.878 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.878 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.901 2 INFO nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:05:59 np0005473739 nova_compute[259550]: 2025-10-07 14:05:59.922 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.004 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.008 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.009 2 INFO nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Creating image(s)#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.030 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.039 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.039 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.040 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.067 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.097 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.101 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.137 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.162 2 DEBUG oslo_concurrency.processutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config 348c80b7-7f65-4300-9dab-6a333f1b2c74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.163 2 INFO nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Deleting local config drive /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74/disk.config because it was imported into RBD.#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.172 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.173 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.175 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.176 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.204 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.209 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4b95692e-088d-452c-83b7-4c50df73b8fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:00 np0005473739 kernel: tap184f2379-84: entered promiscuous mode
Oct  7 10:06:00 np0005473739 NetworkManager[44949]: <info>  [1759845960.2396] manager: (tap184f2379-84): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Oct  7 10:06:00 np0005473739 systemd-udevd[283717]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:06:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:00Z|00067|binding|INFO|Claiming lport 184f2379-8442-414e-bccb-6f5e5a314e72 for this chassis.
Oct  7 10:06:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:00Z|00068|binding|INFO|184f2379-8442-414e-bccb-6f5e5a314e72: Claiming fa:16:3e:c8:34:23 10.100.0.11
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.250 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:34:23 10.100.0.11'], port_security=['fa:16:3e:c8:34:23 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '348c80b7-7f65-4300-9dab-6a333f1b2c74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=184f2379-8442-414e-bccb-6f5e5a314e72) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.252 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 184f2379-8442-414e-bccb-6f5e5a314e72 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.253 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:06:00 np0005473739 NetworkManager[44949]: <info>  [1759845960.2624] device (tap184f2379-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:06:00 np0005473739 NetworkManager[44949]: <info>  [1759845960.2635] device (tap184f2379-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:00Z|00069|binding|INFO|Setting lport 184f2379-8442-414e-bccb-6f5e5a314e72 ovn-installed in OVS
Oct  7 10:06:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:00Z|00070|binding|INFO|Setting lport 184f2379-8442-414e-bccb-6f5e5a314e72 up in Southbound
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.279 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad76f724-79c7-4fad-9a1f-45a0c7db687e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:00 np0005473739 systemd-machined[214580]: New machine qemu-16-instance-0000000e.
Oct  7 10:06:00 np0005473739 podman[283957]: 2025-10-07 14:06:00.309151522 +0000 UTC m=+0.113387481 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  7 10:06:00 np0005473739 systemd[1]: Started Virtual Machine qemu-16-instance-0000000e.
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.316 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[669007ba-db5b-4c8d-8f3d-884ae6ecac03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.319 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[19f6522d-d5f0-457f-968b-3bcdd85baa9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.352 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7bdd4a8f-78e8-43f6-9258-294eef27a92b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.373 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1f79e34e-0e36-4f6b-b854-f350b271f440]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 10, 'rx_bytes': 1000, 'tx_bytes': 608, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284035, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.388 2 DEBUG nova.policy [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ff78293ed4e40f9954a0b0e6fca0caa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '711e531670b1460a923f2f91ce0f63db', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.393 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b02da377-9a2b-4392-8567-eac392f4dfe8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284037, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284037, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.396 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.400 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.400 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:00.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.586 2 DEBUG nova.compute.manager [req-5d7fd2e3-a056-4e68-a796-b496b10b4be0 req-fd106426-838e-4440-84b7-cd5f548048b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.586 2 DEBUG oslo_concurrency.lockutils [req-5d7fd2e3-a056-4e68-a796-b496b10b4be0 req-fd106426-838e-4440-84b7-cd5f548048b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.587 2 DEBUG oslo_concurrency.lockutils [req-5d7fd2e3-a056-4e68-a796-b496b10b4be0 req-fd106426-838e-4440-84b7-cd5f548048b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.587 2 DEBUG oslo_concurrency.lockutils [req-5d7fd2e3-a056-4e68-a796-b496b10b4be0 req-fd106426-838e-4440-84b7-cd5f548048b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.587 2 DEBUG nova.compute.manager [req-5d7fd2e3-a056-4e68-a796-b496b10b4be0 req-fd106426-838e-4440-84b7-cd5f548048b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Processing event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.597 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4b95692e-088d-452c-83b7-4c50df73b8fe_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.388s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.663 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] resizing rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.741 2 INFO nova.virt.libvirt.driver [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Deleting instance files /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5_del#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.743 2 INFO nova.virt.libvirt.driver [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Deletion of /var/lib/nova/instances/a2f7901e-6572-4162-b995-0c44fb69eab5_del complete#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.804 2 DEBUG nova.compute.manager [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-unplugged-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.804 2 DEBUG oslo_concurrency.lockutils [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.805 2 DEBUG oslo_concurrency.lockutils [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.805 2 DEBUG oslo_concurrency.lockutils [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.805 2 DEBUG nova.compute.manager [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] No waiting events found dispatching network-vif-unplugged-d4571f56-54c6-4986-845c-cd57c4faadac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.806 2 DEBUG nova.compute.manager [req-ff4a3dd9-69a1-4a8f-8599-78eae53ad771 req-6138f193-df93-4436-b1d9-86a590067d6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-unplugged-d4571f56-54c6-4986-845c-cd57c4faadac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.812 2 DEBUG nova.objects.instance [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'migration_context' on Instance uuid 4b95692e-088d-452c-83b7-4c50df73b8fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.865 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.865 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Ensure instance console log exists: /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.866 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.866 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.866 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.867 2 INFO nova.compute.manager [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Took 2.92 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.867 2 DEBUG oslo.service.loopingcall [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.868 2 DEBUG nova.compute.manager [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:06:00 np0005473739 nova_compute[259550]: 2025-10-07 14:06:00.868 2 DEBUG nova.network.neutron [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.100 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Successfully created port: d06bd5a4-d9b7-4791-a387-d190eb1457f6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.161 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845961.1604762, 348c80b7-7f65-4300-9dab-6a333f1b2c74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.161 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] VM Started (Lifecycle Event)#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.163 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.167 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.170 2 INFO nova.virt.libvirt.driver [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Instance spawned successfully.#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.170 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.179 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.181 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.189 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.190 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.190 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.190 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.191 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.191 2 DEBUG nova.virt.libvirt.driver [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.214 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.214 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845961.1606133, 348c80b7-7f65-4300-9dab-6a333f1b2c74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.214 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.241 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.246 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845961.166253, 348c80b7-7f65-4300-9dab-6a333f1b2c74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.247 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.250 2 INFO nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Took 7.72 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.251 2 DEBUG nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.263 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.266 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.289 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.314 2 INFO nova.compute.manager [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Took 9.07 seconds to build instance.#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.332 2 DEBUG oslo_concurrency.lockutils [None req-4a9cb3a6-bf3e-48ef-a53e-8f62ac1fc371 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.270s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1164: 305 pgs: 305 active+clean; 385 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 339 KiB/s rd, 4.4 MiB/s wr, 107 op/s
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.579 2 DEBUG nova.network.neutron [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.596 2 INFO nova.compute.manager [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Took 0.73 seconds to deallocate network for instance.#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.635 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.636 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.762 2 DEBUG oslo_concurrency.processutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.827 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Successfully updated port: d06bd5a4-d9b7-4791-a387-d190eb1457f6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.844 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.844 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:06:01 np0005473739 nova_compute[259550]: 2025-10-07 14:06:01.845 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.157 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:06:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:06:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2187989551' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.278 2 DEBUG oslo_concurrency.processutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.283 2 DEBUG nova.compute.provider_tree [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.308 2 DEBUG nova.scheduler.client.report [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.329 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.363 2 INFO nova.scheduler.client.report [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Deleted allocations for instance a2f7901e-6572-4162-b995-0c44fb69eab5#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.442 2 DEBUG oslo_concurrency.lockutils [None req-4cd5a354-357a-463f-94c8-68cf25ea6e4d 124a23e91e614186848847e685d191d9 54e39443887a407284ed98974d4e0771 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.497s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.520 2 DEBUG oslo_concurrency.lockutils [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] Acquiring lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.520 2 DEBUG oslo_concurrency.lockutils [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] Acquired lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.520 2 DEBUG nova.network.neutron [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.739 2 DEBUG nova.compute.manager [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.740 2 DEBUG oslo_concurrency.lockutils [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.740 2 DEBUG oslo_concurrency.lockutils [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.740 2 DEBUG oslo_concurrency.lockutils [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.741 2 DEBUG nova.compute.manager [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] No waiting events found dispatching network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.741 2 WARNING nova.compute.manager [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received unexpected event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.741 2 DEBUG nova.compute.manager [req-4f19a190-61e6-4dcf-a825-4ccc75d4f7a7 req-20833546-e31e-4cbe-b9dd-846fb88737f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-deleted-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:02 np0005473739 nova_compute[259550]: 2025-10-07 14:06:02.953 2 DEBUG nova.network.neutron [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.010 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.010 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Instance network_info: |[{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.013 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Start _get_guest_xml network_info=[{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.018 2 WARNING nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.023 2 DEBUG nova.virt.libvirt.host [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.024 2 DEBUG nova.virt.libvirt.host [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.027 2 DEBUG nova.virt.libvirt.host [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.027 2 DEBUG nova.virt.libvirt.host [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.028 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.028 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.028 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.028 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.028 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.029 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.030 2 DEBUG nova.virt.hardware [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.033 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.339 2 DEBUG nova.compute.manager [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.340 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.340 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.340 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a2f7901e-6572-4162-b995-0c44fb69eab5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.341 2 DEBUG nova.compute.manager [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] No waiting events found dispatching network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.341 2 WARNING nova.compute.manager [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Received unexpected event network-vif-plugged-d4571f56-54c6-4986-845c-cd57c4faadac for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.342 2 DEBUG nova.compute.manager [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.342 2 DEBUG nova.compute.manager [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing instance network info cache due to event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.343 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.343 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.344 2 DEBUG nova.network.neutron [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:06:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1165: 305 pgs: 305 active+clean; 385 MiB data, 447 MiB used, 60 GiB / 60 GiB avail; 308 KiB/s rd, 4.0 MiB/s wr, 97 op/s
Oct  7 10:06:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:06:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3933433294' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.512 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.540 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.544 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:06:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2082770503' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:06:03 np0005473739 nova_compute[259550]: 2025-10-07 14:06:03.998 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.000 2 DEBUG nova.virt.libvirt.vif [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-99112055',display_name='tempest-FloatingIPsAssociationTestJSON-server-99112055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-99112055',id=15,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-ib7l0lly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:59Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=4b95692e-088d-452c-83b7-4c50df73b8fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.000 2 DEBUG nova.network.os_vif_util [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.001 2 DEBUG nova.network.os_vif_util [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.003 2 DEBUG nova.objects.instance [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'pci_devices' on Instance uuid 4b95692e-088d-452c-83b7-4c50df73b8fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.006 2 DEBUG nova.network.neutron [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Updating instance_info_cache with network_info: [{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.050 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  <uuid>4b95692e-088d-452c-83b7-4c50df73b8fe</uuid>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  <name>instance-0000000f</name>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-99112055</nova:name>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:06:03</nova:creationTime>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <nova:user uuid="7ff78293ed4e40f9954a0b0e6fca0caa">tempest-FloatingIPsAssociationTestJSON-1021362371-project-member</nova:user>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <nova:project uuid="711e531670b1460a923f2f91ce0f63db">tempest-FloatingIPsAssociationTestJSON-1021362371</nova:project>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <nova:port uuid="d06bd5a4-d9b7-4791-a387-d190eb1457f6">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <entry name="serial">4b95692e-088d-452c-83b7-4c50df73b8fe</entry>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <entry name="uuid">4b95692e-088d-452c-83b7-4c50df73b8fe</entry>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4b95692e-088d-452c-83b7-4c50df73b8fe_disk">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:65:f8:94"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <target dev="tapd06bd5a4-d9"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/console.log" append="off"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:06:04 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:06:04 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:06:04 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:06:04 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.050 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Preparing to wait for external event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.051 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.051 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.051 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.052 2 DEBUG nova.virt.libvirt.vif [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:05:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-99112055',display_name='tempest-FloatingIPsAssociationTestJSON-server-99112055',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-99112055',id=15,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-ib7l0lly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:05:59Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=4b95692e-088d-452c-83b7-4c50df73b8fe,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.053 2 DEBUG nova.network.os_vif_util [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.053 2 DEBUG nova.network.os_vif_util [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.054 2 DEBUG os_vif [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd06bd5a4-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.059 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd06bd5a4-d9, col_values=(('external_ids', {'iface-id': 'd06bd5a4-d9b7-4791-a387-d190eb1457f6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:f8:94', 'vm-uuid': '4b95692e-088d-452c-83b7-4c50df73b8fe'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:04 np0005473739 NetworkManager[44949]: <info>  [1759845964.0622] manager: (tapd06bd5a4-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.068 2 DEBUG oslo_concurrency.lockutils [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] Releasing lock "refresh_cache-348c80b7-7f65-4300-9dab-6a333f1b2c74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.068 2 DEBUG nova.compute.manager [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.068 2 DEBUG nova.compute.manager [None req-4a8aa99f-fd68-42df-a76b-0cb3d8f24c63 d0be8e9cfd464bddaa8ac5bddebb0025 4b1d8140b79544acaa6c75dbb71cd46e - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] network_info to inject: |[{"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.070 2 INFO os_vif [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9')#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.134 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.135 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.135 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No VIF found with MAC fa:16:3e:65:f8:94, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.136 2 INFO nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Using config drive#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.161 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.963 2 INFO nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Creating config drive at /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config#033[00m
Oct  7 10:06:04 np0005473739 nova_compute[259550]: 2025-10-07 14:06:04.967 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0sw8hwmj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.100 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0sw8hwmj" returned: 0 in 0.133s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.128 2 DEBUG nova.storage.rbd_utils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.132 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config 4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.201 2 DEBUG nova.network.neutron [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updated VIF entry in instance network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.202 2 DEBUG nova.network.neutron [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.236 2 DEBUG oslo_concurrency.lockutils [req-c655d998-d75c-4d8a-8eb2-8fff5ea28cc4 req-e36c8883-ac2c-470b-a65a-90f249c2f3f5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.294 2 DEBUG oslo_concurrency.processutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config 4b95692e-088d-452c-83b7-4c50df73b8fe_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.295 2 INFO nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Deleting local config drive /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe/disk.config because it was imported into RBD.#033[00m
Oct  7 10:06:05 np0005473739 kernel: tapd06bd5a4-d9: entered promiscuous mode
Oct  7 10:06:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:05Z|00071|binding|INFO|Claiming lport d06bd5a4-d9b7-4791-a387-d190eb1457f6 for this chassis.
Oct  7 10:06:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:05Z|00072|binding|INFO|d06bd5a4-d9b7-4791-a387-d190eb1457f6: Claiming fa:16:3e:65:f8:94 10.100.0.7
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:05 np0005473739 NetworkManager[44949]: <info>  [1759845965.3674] manager: (tapd06bd5a4-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.372 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:f8:94 10.100.0.7'], port_security=['fa:16:3e:65:f8:94 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4b95692e-088d-452c-83b7-4c50df73b8fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80f412c9-c511-49ba-a8ca-ff830fcff803', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '711e531670b1460a923f2f91ce0f63db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '087bdc78-7c9e-48d3-b83f-b37a1a3f8ec7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e75ebab-8b50-4bfb-bcab-f5fcada82242, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d06bd5a4-d9b7-4791-a387-d190eb1457f6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.373 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d06bd5a4-d9b7-4791-a387-d190eb1457f6 in datapath 80f412c9-c511-49ba-a8ca-ff830fcff803 bound to our chassis#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.375 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80f412c9-c511-49ba-a8ca-ff830fcff803#033[00m
Oct  7 10:06:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:05Z|00073|binding|INFO|Setting lport d06bd5a4-d9b7-4791-a387-d190eb1457f6 ovn-installed in OVS
Oct  7 10:06:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:05Z|00074|binding|INFO|Setting lport d06bd5a4-d9b7-4791-a387-d190eb1457f6 up in Southbound
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.394 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be0c2cbc-e1e4-4e86-aeb1-3c845ae1f21e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.395 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap80f412c9-c1 in ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.398 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap80f412c9-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.399 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a925b706-2451-4683-aff4-241926d5c99e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.401 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[83c15ff3-3838-40e2-836b-a7346a5c2341]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 systemd-udevd[284353]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.420 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[8d95b6eb-9b53-4a22-8dc3-3250ff78b9f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 systemd-machined[214580]: New machine qemu-17-instance-0000000f.
Oct  7 10:06:05 np0005473739 NetworkManager[44949]: <info>  [1759845965.4379] device (tapd06bd5a4-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:06:05 np0005473739 NetworkManager[44949]: <info>  [1759845965.4392] device (tapd06bd5a4-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:06:05 np0005473739 systemd[1]: Started Virtual Machine qemu-17-instance-0000000f.
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.446 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc51c2ff-be27-45b7-81f4-005687a11e24]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1166: 305 pgs: 305 active+clean; 359 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 5.0 MiB/s wr, 201 op/s
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.499 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a56f62b0-fdf4-491c-8179-675647a40184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 NetworkManager[44949]: <info>  [1759845965.5090] manager: (tap80f412c9-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.508 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3eb7bd77-3875-4135-a9e4-7bd8aa33f4ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.561 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a6241e6c-69c1-4911-8c50-80f2d855e537]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.567 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6ab2c3dd-802b-4f32-96e6-055c75e1d860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 NetworkManager[44949]: <info>  [1759845965.6004] device (tap80f412c9-c0): carrier: link connected
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.607 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dc579c54-ecee-486d-b04a-44de5d783638]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.633 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a175ed11-2d25-41cf-8ba1-e917ed7cc663]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80f412c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:6b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654916, 'reachable_time': 32115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 284442, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.663 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9ac76663-7a29-41f7-a9f7-09bfb0f7aae5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febc:6b79'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654916, 'tstamp': 654916}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 284445, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.687 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a23ee53f-c26d-4410-9dd3-09cfe7a73a87]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80f412c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:6b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654916, 'reachable_time': 32115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 284446, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.730 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1531067-7dd6-432c-bda4-ef6a6deeafee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.804 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de2c184a-39f8-44e4-a069-4fe0db7e0a19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80f412c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80f412c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:05 np0005473739 NetworkManager[44949]: <info>  [1759845965.8102] manager: (tap80f412c9-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:05 np0005473739 kernel: tap80f412c9-c0: entered promiscuous mode
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.812 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80f412c9-c0, col_values=(('external_ids', {'iface-id': '60dd3b69-c15d-4f1f-8348-1807afc1578d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:05Z|00075|binding|INFO|Releasing lport 60dd3b69-c15d-4f1f-8348-1807afc1578d from this chassis (sb_readonly=0)
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.839 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/80f412c9-c511-49ba-a8ca-ff830fcff803.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/80f412c9-c511-49ba-a8ca-ff830fcff803.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.840 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de4f5b9c-e5e8-42ed-af69-f8841e8aebf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.840 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-80f412c9-c511-49ba-a8ca-ff830fcff803
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/80f412c9-c511-49ba-a8ca-ff830fcff803.pid.haproxy
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 80f412c9-c511-49ba-a8ca-ff830fcff803
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  7 10:06:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:05.841 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'env', 'PROCESS_TAG=haproxy-80f412c9-c511-49ba-a8ca-ff830fcff803', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/80f412c9-c511-49ba-a8ca-ff830fcff803.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.983 2 DEBUG nova.compute.manager [req-249efe03-b420-4b5f-8fea-8536a5c6fe83 req-53c865b8-71a1-4056-a28f-07daf60b7086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.985 2 DEBUG oslo_concurrency.lockutils [req-249efe03-b420-4b5f-8fea-8536a5c6fe83 req-53c865b8-71a1-4056-a28f-07daf60b7086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.985 2 DEBUG oslo_concurrency.lockutils [req-249efe03-b420-4b5f-8fea-8536a5c6fe83 req-53c865b8-71a1-4056-a28f-07daf60b7086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.985 2 DEBUG oslo_concurrency.lockutils [req-249efe03-b420-4b5f-8fea-8536a5c6fe83 req-53c865b8-71a1-4056-a28f-07daf60b7086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:06:05 np0005473739 nova_compute[259550]: 2025-10-07 14:06:05.986 2 DEBUG nova.compute.manager [req-249efe03-b420-4b5f-8fea-8536a5c6fe83 req-53c865b8-71a1-4056-a28f-07daf60b7086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Processing event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  7 10:06:06 np0005473739 podman[284505]: 2025-10-07 14:06:06.245450058 +0000 UTC m=+0.055774010 container create 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:06:06 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3b14746a-cfb3-4025-afd3-9a834488c2cc does not exist
Oct  7 10:06:06 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0aa7d394-35cb-4107-bede-6b6793855552 does not exist
Oct  7 10:06:06 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 38ce0fde-2cef-45ee-8170-2c221306e192 does not exist
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:06:06 np0005473739 systemd[1]: Started libpod-conmon-44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084.scope.
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:06:06 np0005473739 podman[284505]: 2025-10-07 14:06:06.21557587 +0000 UTC m=+0.025899842 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:06:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:06:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ed1157e3fd8b123979ce2884e8cf7ae250f676278229a741e499d63f106c27c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:06 np0005473739 podman[284505]: 2025-10-07 14:06:06.343571151 +0000 UTC m=+0.153895133 container init 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:06:06 np0005473739 podman[284505]: 2025-10-07 14:06:06.351241476 +0000 UTC m=+0.161565428 container start 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  7 10:06:06 np0005473739 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [NOTICE]   (284582) : New worker (284589) forked
Oct  7 10:06:06 np0005473739 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [NOTICE]   (284582) : Loading success.
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:06:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.924 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.925 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845966.9238791, 4b95692e-088d-452c-83b7-4c50df73b8fe => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.925 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] VM Started (Lifecycle Event)
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.932 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.936 2 INFO nova.virt.libvirt.driver [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Instance spawned successfully.
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.936 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.944 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.952 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.972 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.973 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.973 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:06:06 np0005473739 podman[284720]: 2025-10-07 14:06:06.972809449 +0000 UTC m=+0.060350274 container create a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.973 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.974 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.974 2 DEBUG nova.virt.libvirt.driver [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.977 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.977 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845966.9248953, 4b95692e-088d-452c-83b7-4c50df73b8fe => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:06:06 np0005473739 nova_compute[259550]: 2025-10-07 14:06:06.978 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] VM Paused (Lifecycle Event)
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.007 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.012 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845966.9290807, 4b95692e-088d-452c-83b7-4c50df73b8fe => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.012 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] VM Resumed (Lifecycle Event)
Oct  7 10:06:07 np0005473739 systemd[1]: Started libpod-conmon-a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064.scope.
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.041 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:06:07 np0005473739 podman[284720]: 2025-10-07 14:06:06.951654583 +0000 UTC m=+0.039195438 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.044 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.052 2 INFO nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Took 7.05 seconds to spawn the instance on the hypervisor.
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.052 2 DEBUG nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:06:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.064 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:06:07 np0005473739 podman[284720]: 2025-10-07 14:06:07.078361599 +0000 UTC m=+0.165902444 container init a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:06:07 np0005473739 podman[284720]: 2025-10-07 14:06:07.086211079 +0000 UTC m=+0.173751904 container start a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:06:07 np0005473739 podman[284720]: 2025-10-07 14:06:07.089637111 +0000 UTC m=+0.177177966 container attach a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:06:07 np0005473739 kind_leavitt[284736]: 167 167
Oct  7 10:06:07 np0005473739 systemd[1]: libpod-a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064.scope: Deactivated successfully.
Oct  7 10:06:07 np0005473739 podman[284720]: 2025-10-07 14:06:07.09370877 +0000 UTC m=+0.181249595 container died a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.106 2 INFO nova.compute.manager [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Took 8.02 seconds to build instance.
Oct  7 10:06:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay-822abaf9521394065b8c6437ea368001bb6118d023ef5217b6da980ab7b18128-merged.mount: Deactivated successfully.
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.125 2 DEBUG oslo_concurrency.lockutils [None req-e878bc26-9922-437c-8177-d815031c37f9 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:06:07 np0005473739 podman[284720]: 2025-10-07 14:06:07.138088356 +0000 UTC m=+0.225629181 container remove a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_leavitt, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 10:06:07 np0005473739 systemd[1]: libpod-conmon-a17abc1bb23296baf43d5d14cb69c8f86fc2e3e720c85ec60b0e0496f0163064.scope: Deactivated successfully.
Oct  7 10:06:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:07Z|00076|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct  7 10:06:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:07Z|00077|binding|INFO|Releasing lport 60dd3b69-c15d-4f1f-8348-1807afc1578d from this chassis (sb_readonly=0)
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:06:07 np0005473739 podman[284760]: 2025-10-07 14:06:07.360179311 +0000 UTC m=+0.047668885 container create 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 10:06:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:07Z|00078|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct  7 10:06:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:07Z|00079|binding|INFO|Releasing lport 60dd3b69-c15d-4f1f-8348-1807afc1578d from this chassis (sb_readonly=0)
Oct  7 10:06:07 np0005473739 nova_compute[259550]: 2025-10-07 14:06:07.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:06:07 np0005473739 systemd[1]: Started libpod-conmon-610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c.scope.
Oct  7 10:06:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:06:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:07 np0005473739 podman[284760]: 2025-10-07 14:06:07.341169654 +0000 UTC m=+0.028659248 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:06:07 np0005473739 podman[284760]: 2025-10-07 14:06:07.447446294 +0000 UTC m=+0.134935898 container init 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 10:06:07 np0005473739 podman[284760]: 2025-10-07 14:06:07.454721498 +0000 UTC m=+0.142211062 container start 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:06:07 np0005473739 podman[284760]: 2025-10-07 14:06:07.458704315 +0000 UTC m=+0.146193919 container attach 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:06:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1167: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 5.7 MiB/s wr, 219 op/s
Oct  7 10:06:08 np0005473739 nova_compute[259550]: 2025-10-07 14:06:08.386 2 DEBUG nova.compute.manager [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:06:08 np0005473739 nova_compute[259550]: 2025-10-07 14:06:08.387 2 DEBUG oslo_concurrency.lockutils [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:06:08 np0005473739 nova_compute[259550]: 2025-10-07 14:06:08.388 2 DEBUG oslo_concurrency.lockutils [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:06:08 np0005473739 nova_compute[259550]: 2025-10-07 14:06:08.388 2 DEBUG oslo_concurrency.lockutils [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:06:08 np0005473739 nova_compute[259550]: 2025-10-07 14:06:08.389 2 DEBUG nova.compute.manager [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] No waiting events found dispatching network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:06:08 np0005473739 nova_compute[259550]: 2025-10-07 14:06:08.389 2 WARNING nova.compute.manager [req-c14bc630-078f-4f2d-9363-684ff59864d5 req-0027c28e-67fe-4cc3-ab8b-bc990a1f573a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received unexpected event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 for instance with vm_state active and task_state None.
Oct  7 10:06:08 np0005473739 confident_northcutt[284777]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:06:08 np0005473739 confident_northcutt[284777]: --> relative data size: 1.0
Oct  7 10:06:08 np0005473739 confident_northcutt[284777]: --> All data devices are unavailable
Oct  7 10:06:08 np0005473739 systemd[1]: libpod-610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c.scope: Deactivated successfully.
Oct  7 10:06:08 np0005473739 systemd[1]: libpod-610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c.scope: Consumed 1.112s CPU time.
Oct  7 10:06:08 np0005473739 podman[284760]: 2025-10-07 14:06:08.650825666 +0000 UTC m=+1.338315250 container died 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:06:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ceb9329f245a9aeb018ea41ce2fff940de7d75c6b3613bac067f6a0d1b50e156-merged.mount: Deactivated successfully.
Oct  7 10:06:08 np0005473739 podman[284760]: 2025-10-07 14:06:08.889808413 +0000 UTC m=+1.577298017 container remove 610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_northcutt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:06:08 np0005473739 systemd[1]: libpod-conmon-610f32ebcda8ba8ba8f5fde6a7adf5ccdf5821781e51a44d575e26e3b101cc0c.scope: Deactivated successfully.
Oct  7 10:06:09 np0005473739 nova_compute[259550]: 2025-10-07 14:06:09.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:06:09 np0005473739 nova_compute[259550]: 2025-10-07 14:06:09.180 2 INFO nova.compute.manager [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Rebuilding instance
Oct  7 10:06:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1168: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.3 MiB/s rd, 5.3 MiB/s wr, 257 op/s
Oct  7 10:06:09 np0005473739 podman[284959]: 2025-10-07 14:06:09.552134365 +0000 UTC m=+0.039883667 container create 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 10:06:09 np0005473739 nova_compute[259550]: 2025-10-07 14:06:09.589 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:06:09 np0005473739 systemd[1]: Started libpod-conmon-1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a.scope.
Oct  7 10:06:09 np0005473739 nova_compute[259550]: 2025-10-07 14:06:09.606 2 DEBUG nova.compute.manager [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:06:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:06:09 np0005473739 podman[284959]: 2025-10-07 14:06:09.531999557 +0000 UTC m=+0.019748869 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:06:09 np0005473739 podman[284959]: 2025-10-07 14:06:09.633340645 +0000 UTC m=+0.121089967 container init 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:06:09 np0005473739 podman[284959]: 2025-10-07 14:06:09.641154114 +0000 UTC m=+0.128903416 container start 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 10:06:09 np0005473739 podman[284959]: 2025-10-07 14:06:09.644870193 +0000 UTC m=+0.132619525 container attach 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:06:09 np0005473739 optimistic_payne[284975]: 167 167
Oct  7 10:06:09 np0005473739 systemd[1]: libpod-1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a.scope: Deactivated successfully.
Oct  7 10:06:09 np0005473739 podman[284959]: 2025-10-07 14:06:09.647952675 +0000 UTC m=+0.135701977 container died 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:06:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6878ab2c1a541dd565a06ec8d1d827074ab2d60339641aa4da868692dd79fca2-merged.mount: Deactivated successfully.
Oct  7 10:06:09 np0005473739 podman[284959]: 2025-10-07 14:06:09.685117859 +0000 UTC m=+0.172867151 container remove 1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 10:06:09 np0005473739 systemd[1]: libpod-conmon-1a0f664082ff4907aa2bc14f226315c0b84f711eda357dcfc0fd40d1d171822a.scope: Deactivated successfully.
Oct  7 10:06:09 np0005473739 nova_compute[259550]: 2025-10-07 14:06:09.825 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_requests' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:06:09 np0005473739 nova_compute[259550]: 2025-10-07 14:06:09.843 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:06:09 np0005473739 nova_compute[259550]: 2025-10-07 14:06:09.855 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:06:09 np0005473739 nova_compute[259550]: 2025-10-07 14:06:09.869 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:06:09 np0005473739 nova_compute[259550]: 2025-10-07 14:06:09.880 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  7 10:06:09 np0005473739 nova_compute[259550]: 2025-10-07 14:06:09.885 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  7 10:06:09 np0005473739 podman[284998]: 2025-10-07 14:06:09.884153928 +0000 UTC m=+0.033530827 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:06:10 np0005473739 podman[284998]: 2025-10-07 14:06:10.011099842 +0000 UTC m=+0.160476711 container create 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:06:10 np0005473739 systemd[1]: Started libpod-conmon-3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24.scope.
Oct  7 10:06:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:06:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef614f8a702e01de142d646aa8d814f67012a165401631ef8025b70c19f81a31/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef614f8a702e01de142d646aa8d814f67012a165401631ef8025b70c19f81a31/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef614f8a702e01de142d646aa8d814f67012a165401631ef8025b70c19f81a31/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ef614f8a702e01de142d646aa8d814f67012a165401631ef8025b70c19f81a31/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:10 np0005473739 podman[284998]: 2025-10-07 14:06:10.265689836 +0000 UTC m=+0.415066725 container init 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:06:10 np0005473739 podman[284998]: 2025-10-07 14:06:10.272284612 +0000 UTC m=+0.421661491 container start 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:06:10 np0005473739 podman[284998]: 2025-10-07 14:06:10.364075455 +0000 UTC m=+0.513452624 container attach 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:06:10 np0005473739 nova_compute[259550]: 2025-10-07 14:06:10.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:06:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]: {
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:    "0": [
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:        {
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "devices": [
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "/dev/loop3"
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            ],
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_name": "ceph_lv0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_size": "21470642176",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "name": "ceph_lv0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "tags": {
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.cluster_name": "ceph",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.crush_device_class": "",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.encrypted": "0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.osd_id": "0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.type": "block",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.vdo": "0"
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            },
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "type": "block",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "vg_name": "ceph_vg0"
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:        }
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:    ],
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:    "1": [
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:        {
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "devices": [
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "/dev/loop4"
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            ],
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_name": "ceph_lv1",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_size": "21470642176",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "name": "ceph_lv1",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "tags": {
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.cluster_name": "ceph",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.crush_device_class": "",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.encrypted": "0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.osd_id": "1",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.type": "block",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.vdo": "0"
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            },
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "type": "block",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "vg_name": "ceph_vg1"
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:        }
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:    ],
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:    "2": [
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:        {
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "devices": [
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "/dev/loop5"
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            ],
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_name": "ceph_lv2",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_size": "21470642176",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "name": "ceph_lv2",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "tags": {
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.cluster_name": "ceph",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.crush_device_class": "",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.encrypted": "0",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.osd_id": "2",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.type": "block",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:                "ceph.vdo": "0"
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            },
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "type": "block",
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:            "vg_name": "ceph_vg2"
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:        }
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]:    ]
Oct  7 10:06:11 np0005473739 laughing_haslett[285015]: }
Oct  7 10:06:11 np0005473739 systemd[1]: libpod-3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24.scope: Deactivated successfully.
Oct  7 10:06:11 np0005473739 conmon[285015]: conmon 3c1bd9f8eee22a58aa6c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24.scope/container/memory.events
Oct  7 10:06:11 np0005473739 podman[284998]: 2025-10-07 14:06:11.160041498 +0000 UTC m=+1.309418377 container died 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:06:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ef614f8a702e01de142d646aa8d814f67012a165401631ef8025b70c19f81a31-merged.mount: Deactivated successfully.
Oct  7 10:06:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1169: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.3 MiB/s wr, 244 op/s
Oct  7 10:06:11 np0005473739 podman[284998]: 2025-10-07 14:06:11.818316433 +0000 UTC m=+1.967693302 container remove 3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_haslett, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 10:06:11 np0005473739 systemd[1]: libpod-conmon-3c1bd9f8eee22a58aa6c62dfe4e54f18325721510f6f1359284e884cc204fe24.scope: Deactivated successfully.
Oct  7 10:06:12 np0005473739 podman[285177]: 2025-10-07 14:06:12.531056532 +0000 UTC m=+0.087018198 container create 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:06:12 np0005473739 podman[285177]: 2025-10-07 14:06:12.469451465 +0000 UTC m=+0.025413151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:06:12 np0005473739 systemd[1]: Started libpod-conmon-756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0.scope.
Oct  7 10:06:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:06:12 np0005473739 podman[285177]: 2025-10-07 14:06:12.635966696 +0000 UTC m=+0.191928382 container init 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:06:12 np0005473739 podman[285177]: 2025-10-07 14:06:12.645738727 +0000 UTC m=+0.201700383 container start 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:06:12 np0005473739 awesome_dewdney[285191]: 167 167
Oct  7 10:06:12 np0005473739 podman[285177]: 2025-10-07 14:06:12.650196716 +0000 UTC m=+0.206158382 container attach 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:06:12 np0005473739 systemd[1]: libpod-756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0.scope: Deactivated successfully.
Oct  7 10:06:12 np0005473739 podman[285177]: 2025-10-07 14:06:12.652724172 +0000 UTC m=+0.208685838 container died 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 10:06:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c16af6d8f7ffe50bf46a12b054599db0fb831469c6169c042f6597df044996bf-merged.mount: Deactivated successfully.
Oct  7 10:06:12 np0005473739 podman[285177]: 2025-10-07 14:06:12.71210538 +0000 UTC m=+0.268067046 container remove 756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_dewdney, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:06:12 np0005473739 systemd[1]: libpod-conmon-756e633c19224be72c9f694d67b08fa9bff9e0fc988088d8c9a1c6198b2d0bc0.scope: Deactivated successfully.
Oct  7 10:06:12 np0005473739 podman[285215]: 2025-10-07 14:06:12.932888701 +0000 UTC m=+0.044192382 container create 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:06:13 np0005473739 systemd[1]: Started libpod-conmon-80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06.scope.
Oct  7 10:06:13 np0005473739 podman[285215]: 2025-10-07 14:06:12.916166414 +0000 UTC m=+0.027470125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:06:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:06:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103876b3dfb5b50be99c5bcad73ba04aee0fc79e3cdfdeac4c6403aed9acffb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103876b3dfb5b50be99c5bcad73ba04aee0fc79e3cdfdeac4c6403aed9acffb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103876b3dfb5b50be99c5bcad73ba04aee0fc79e3cdfdeac4c6403aed9acffb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f103876b3dfb5b50be99c5bcad73ba04aee0fc79e3cdfdeac4c6403aed9acffb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:06:13 np0005473739 podman[285215]: 2025-10-07 14:06:13.077902996 +0000 UTC m=+0.189206707 container init 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:06:13 np0005473739 podman[285215]: 2025-10-07 14:06:13.087750219 +0000 UTC m=+0.199053900 container start 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:06:13 np0005473739 podman[285215]: 2025-10-07 14:06:13.111210867 +0000 UTC m=+0.222514578 container attach 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 10:06:13 np0005473739 kernel: tap9b25db0b-24 (unregistering): left promiscuous mode
Oct  7 10:06:13 np0005473739 NetworkManager[44949]: <info>  [1759845973.1570] device (tap9b25db0b-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:06:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:13Z|00080|binding|INFO|Releasing lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 from this chassis (sb_readonly=0)
Oct  7 10:06:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:13Z|00081|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 down in Southbound
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:13Z|00082|binding|INFO|Removing iface tap9b25db0b-24 ovn-installed in OVS
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.181 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.183 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.186 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.214 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d59a16-ba78-4a77-b16d-6a2035899ceb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:13 np0005473739 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct  7 10:06:13 np0005473739 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000a.scope: Consumed 16.174s CPU time.
Oct  7 10:06:13 np0005473739 systemd-machined[214580]: Machine qemu-12-instance-0000000a terminated.
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.262 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6be01c2f-7309-40b1-8944-6841494e7bd1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.265 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[af9496da-8cef-43a1-8627-92e2662857d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.295 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[67b394a1-bc97-4144-9f06-3d2a7d56c5a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.317 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2a9765-e65c-4c05-8f7b-37c8ae7884c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 12, 'rx_bytes': 1084, 'tx_bytes': 692, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285249, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.339 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be70bf88-78f9-4f59-8277-fda9a28248f3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285250, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285250, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.341 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.348 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.349 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.349 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:13.349 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.470 2 DEBUG nova.compute.manager [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.471 2 DEBUG oslo_concurrency.lockutils [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.472 2 DEBUG oslo_concurrency.lockutils [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.472 2 DEBUG oslo_concurrency.lockutils [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.472 2 DEBUG nova.compute.manager [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.472 2 WARNING nova.compute.manager [req-94977ac3-88af-458e-b8d9-70c8bfd6d2be req-6e9ada91-11b4-43b0-ac57-98d590767ffa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state error and task_state rebuilding.#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.479 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.479 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1170: 305 pgs: 305 active+clean; 372 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.502 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.560 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.560 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.567 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.568 2 INFO nova.compute.claims [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.589 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759845958.5887623, a2f7901e-6572-4162-b995-0c44fb69eab5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.590 2 INFO nova.compute.manager [-] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.631 2 DEBUG nova.compute.manager [None req-6c7f47d3-8e2f-4608-b9dc-c4980b99bdef - - - - - -] [instance: a2f7901e-6572-4162-b995-0c44fb69eab5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.914 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.942 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance shutdown successfully after 4 seconds.#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.949 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance destroyed successfully.#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.963 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance destroyed successfully.#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.964 2 DEBUG nova.virt.libvirt.vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:08Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.965 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.966 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.967 2 DEBUG os_vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:13 np0005473739 nova_compute[259550]: 2025-10-07 14:06:13.970 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b25db0b-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.023 2 INFO os_vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')#033[00m
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]: {
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "osd_id": 2,
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "type": "bluestore"
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:    },
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "osd_id": 1,
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "type": "bluestore"
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:    },
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "osd_id": 0,
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:        "type": "bluestore"
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]:    }
Oct  7 10:06:14 np0005473739 inspiring_kalam[285233]: }
Oct  7 10:06:14 np0005473739 systemd[1]: libpod-80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06.scope: Deactivated successfully.
Oct  7 10:06:14 np0005473739 systemd[1]: libpod-80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06.scope: Consumed 1.065s CPU time.
Oct  7 10:06:14 np0005473739 conmon[285233]: conmon 80a161c7d4fc95a6c8aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06.scope/container/memory.events
Oct  7 10:06:14 np0005473739 podman[285215]: 2025-10-07 14:06:14.20500125 +0000 UTC m=+1.316304931 container died 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:06:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f103876b3dfb5b50be99c5bcad73ba04aee0fc79e3cdfdeac4c6403aed9acffb-merged.mount: Deactivated successfully.
Oct  7 10:06:14 np0005473739 podman[285215]: 2025-10-07 14:06:14.272430842 +0000 UTC m=+1.383734523 container remove 80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:06:14 np0005473739 systemd[1]: libpod-conmon-80a161c7d4fc95a6c8aae2dc6dfd046b0c2f9b5e63c490d2c9a558ea1f0b4b06.scope: Deactivated successfully.
Oct  7 10:06:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:06:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:06:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:06:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:06:14 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f0fde30d-d18a-44f1-8326-c852476af9e3 does not exist
Oct  7 10:06:14 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 936ebc31-55eb-4a34-ad0c-d95c4a228ac3 does not exist
Oct  7 10:06:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:06:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2216741884' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.415 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.422 2 DEBUG nova.compute.provider_tree [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.439 2 DEBUG nova.scheduler.client.report [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.463 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.464 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.518 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.519 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.542 2 INFO nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.559 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.642 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.643 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.644 2 INFO nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Creating image(s)#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.668 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.691 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:14 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:06:14 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.715 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.721 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:14Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c8:34:23 10.100.0.11
Oct  7 10:06:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:14Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c8:34:23 10.100.0.11
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.755 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting instance files /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.756 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deletion of /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del complete#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.786 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.786 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.787 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.787 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.811 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.815 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 115674ad-2273-4c42-b9ae-d380c2c005d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.966 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:06:14 np0005473739 nova_compute[259550]: 2025-10-07 14:06:14.967 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating image(s)
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.001 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.037 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.062 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.066 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.103 2 DEBUG nova.policy [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7ff78293ed4e40f9954a0b0e6fca0caa', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '711e531670b1460a923f2f91ce0f63db', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.141 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.143 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.144 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.144 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.168 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.172 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.440 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 115674ad-2273-4c42-b9ae-d380c2c005d6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:06:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1171: 305 pgs: 305 active+clean; 392 MiB data, 459 MiB used, 60 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.2 MiB/s wr, 242 op/s
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.520 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] resizing rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:06:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.886 2 DEBUG nova.compute.manager [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.887 2 DEBUG oslo_concurrency.lockutils [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.887 2 DEBUG oslo_concurrency.lockutils [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.887 2 DEBUG oslo_concurrency.lockutils [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.888 2 DEBUG nova.compute.manager [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:06:15 np0005473739 nova_compute[259550]: 2025-10-07 14:06:15.888 2 WARNING nova.compute.manager [req-5bf5e547-1def-4183-a451-ab653d9541ad req-422223c9-ddd0-4ad9-b6f1-b1a079fcfe1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state error and task_state rebuild_spawning.
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.174 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Successfully created port: e6624198-96a3-41f5-b3e6-217be5426796 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.235 2 DEBUG nova.objects.instance [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'migration_context' on Instance uuid 115674ad-2273-4c42-b9ae-d380c2c005d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.251 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.252 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Ensure instance console log exists: /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.252 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.252 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.253 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.291 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.349 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.595 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.596 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Ensure instance console log exists: /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.596 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.596 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.596 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.598 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start _get_guest_xml network_info=[{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.603 2 WARNING nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.607 2 DEBUG nova.virt.libvirt.host [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.608 2 DEBUG nova.virt.libvirt.host [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.610 2 DEBUG nova.virt.libvirt.host [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.611 2 DEBUG nova.virt.libvirt.host [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.611 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.611 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.611 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.612 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.612 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.612 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.612 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.612 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.613 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.613 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.613 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.613 2 DEBUG nova.virt.hardware [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.613 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:06:16 np0005473739 nova_compute[259550]: 2025-10-07 14:06:16.627 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:06:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:06:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3556732845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.093 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.142 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.149 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:06:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1172: 305 pgs: 305 active+clean; 394 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 4.1 MiB/s wr, 175 op/s
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.591 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Successfully updated port: e6624198-96a3-41f5-b3e6-217be5426796 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:06:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:06:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2265620750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.617 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.618 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquired lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.618 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.625 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.627 2 DEBUG nova.virt.libvirt.vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:14Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.627 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.628 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.631 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  <uuid>ddf09c33-d956-404b-a5d8-44a3727f9a3b</uuid>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  <name>instance-0000000a</name>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersAdminTestJSON-server-1321212972</nova:name>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:06:16</nova:creationTime>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <nova:port uuid="9b25db0b-246e-456c-82d7-cf361c57f9c5">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <entry name="serial">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <entry name="uuid">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:6c:03:d4"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <target dev="tap9b25db0b-24"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log" append="off"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:06:17 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:06:17 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:06:17 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:06:17 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.632 2 DEBUG nova.compute.manager [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Preparing to wait for external event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.632 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.632 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.633 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.633 2 DEBUG nova.virt.libvirt.vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:14Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='error') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.633 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.634 2 DEBUG nova.network.os_vif_util [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.634 2 DEBUG os_vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.637 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b25db0b-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b25db0b-24, col_values=(('external_ids', {'iface-id': '9b25db0b-246e-456c-82d7-cf361c57f9c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:03:d4', 'vm-uuid': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:17 np0005473739 NetworkManager[44949]: <info>  [1759845977.6475] manager: (tap9b25db0b-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.657 2 INFO os_vif [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.753 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.754 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.754 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:6c:03:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.755 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Using config drive#033[00m
Oct  7 10:06:17 np0005473739 podman[285790]: 2025-10-07 14:06:17.769789874 +0000 UTC m=+0.068322787 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:06:17 np0005473739 podman[285791]: 2025-10-07 14:06:17.787746554 +0000 UTC m=+0.086010639 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.793 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.801 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.825 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:17 np0005473739 nova_compute[259550]: 2025-10-07 14:06:17.847 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'keypairs' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:18 np0005473739 nova_compute[259550]: 2025-10-07 14:06:18.121 2 DEBUG nova.compute.manager [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-changed-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:18 np0005473739 nova_compute[259550]: 2025-10-07 14:06:18.121 2 DEBUG nova.compute.manager [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing instance network info cache due to event network-changed-e6624198-96a3-41f5-b3e6-217be5426796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:06:18 np0005473739 nova_compute[259550]: 2025-10-07 14:06:18.122 2 DEBUG oslo_concurrency.lockutils [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:06:18 np0005473739 nova_compute[259550]: 2025-10-07 14:06:18.732 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating config drive at /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config#033[00m
Oct  7 10:06:18 np0005473739 nova_compute[259550]: 2025-10-07 14:06:18.737 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppaohkwte execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:18 np0005473739 nova_compute[259550]: 2025-10-07 14:06:18.872 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppaohkwte" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:18 np0005473739 nova_compute[259550]: 2025-10-07 14:06:18.903 2 DEBUG nova.storage.rbd_utils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:18 np0005473739 nova_compute[259550]: 2025-10-07 14:06:18.907 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1173: 305 pgs: 305 active+clean; 408 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.7 MiB/s wr, 226 op/s
Oct  7 10:06:19 np0005473739 nova_compute[259550]: 2025-10-07 14:06:19.761 2 DEBUG oslo_concurrency.processutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.854s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:19 np0005473739 nova_compute[259550]: 2025-10-07 14:06:19.762 2 INFO nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting local config drive /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config because it was imported into RBD.#033[00m
Oct  7 10:06:19 np0005473739 NetworkManager[44949]: <info>  [1759845979.8191] manager: (tap9b25db0b-24): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Oct  7 10:06:19 np0005473739 kernel: tap9b25db0b-24: entered promiscuous mode
Oct  7 10:06:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:19Z|00083|binding|INFO|Claiming lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 for this chassis.
Oct  7 10:06:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:19Z|00084|binding|INFO|9b25db0b-246e-456c-82d7-cf361c57f9c5: Claiming fa:16:3e:6c:03:d4 10.100.0.7
Oct  7 10:06:19 np0005473739 nova_compute[259550]: 2025-10-07 14:06:19.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.841 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:06:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.843 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis#033[00m
Oct  7 10:06:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.845 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:06:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:19Z|00085|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 ovn-installed in OVS
Oct  7 10:06:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:19Z|00086|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 up in Southbound
Oct  7 10:06:19 np0005473739 nova_compute[259550]: 2025-10-07 14:06:19.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:19 np0005473739 nova_compute[259550]: 2025-10-07 14:06:19.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.872 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[217299ff-39af-4c8b-a0f6-0e6c81809947]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:19 np0005473739 systemd-udevd[285904]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:06:19 np0005473739 systemd-machined[214580]: New machine qemu-18-instance-0000000a.
Oct  7 10:06:19 np0005473739 systemd[1]: Started Virtual Machine qemu-18-instance-0000000a.
Oct  7 10:06:19 np0005473739 NetworkManager[44949]: <info>  [1759845979.8947] device (tap9b25db0b-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:06:19 np0005473739 NetworkManager[44949]: <info>  [1759845979.8955] device (tap9b25db0b-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:06:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.914 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d02f829f-893e-405c-89f7-18cf0893fe31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.918 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e10fab5b-447b-4336-bea4-ca5be13db5d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.957 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[569b96d5-3c9e-45f8-9188-7fe5a1083206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:19 np0005473739 nova_compute[259550]: 2025-10-07 14:06:19.980 2 DEBUG nova.network.neutron [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updating instance_info_cache with network_info: [{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:19.980 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aff6e865-4eff-4a53-98f5-aca0b8d4fd8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 14, 'rx_bytes': 1126, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 14, 'rx_bytes': 1126, 'tx_bytes': 776, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 285916, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:19 np0005473739 nova_compute[259550]: 2025-10-07 14:06:19.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.003 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9be7a4e4-3ad8-46a2-ae00-9e15b73c6c02]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285918, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 285918, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.006 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.013 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.013 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.014 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.015 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Releasing lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.016 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Instance network_info: |[{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.016 2 DEBUG oslo_concurrency.lockutils [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.017 2 DEBUG nova.network.neutron [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing network info cache for port e6624198-96a3-41f5-b3e6-217be5426796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:06:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:20.017 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.020 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Start _get_guest_xml network_info=[{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.027 2 WARNING nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.033 2 DEBUG nova.virt.libvirt.host [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.034 2 DEBUG nova.virt.libvirt.host [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.040 2 DEBUG nova.virt.libvirt.host [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.041 2 DEBUG nova.virt.libvirt.host [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.041 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.041 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.042 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.042 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.043 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.043 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.043 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.043 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.043 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.044 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.044 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.044 2 DEBUG nova.virt.hardware [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.048 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:20Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:f8:94 10.100.0.7
Oct  7 10:06:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:20Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:f8:94 10.100.0.7
Oct  7 10:06:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:06:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3319170676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.525 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.562 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.568 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:20 np0005473739 nova_compute[259550]: 2025-10-07 14:06:20.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 10:06:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:06:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2732484885' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.060 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.062 2 DEBUG nova.virt.libvirt.vif [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-279731597',display_name='tempest-FloatingIPsAssociationTestJSON-server-279731597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-279731597',id=16,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-juuxxw0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_user_name=
'tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:14Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=115674ad-2273-4c42-b9ae-d380c2c005d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.063 2 DEBUG nova.network.os_vif_util [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.064 2 DEBUG nova.network.os_vif_util [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.065 2 DEBUG nova.objects.instance [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'pci_devices' on Instance uuid 115674ad-2273-4c42-b9ae-d380c2c005d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.085 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  <uuid>115674ad-2273-4c42-b9ae-d380c2c005d6</uuid>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  <name>instance-00000010</name>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <nova:name>tempest-FloatingIPsAssociationTestJSON-server-279731597</nova:name>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:06:20</nova:creationTime>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <nova:user uuid="7ff78293ed4e40f9954a0b0e6fca0caa">tempest-FloatingIPsAssociationTestJSON-1021362371-project-member</nova:user>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <nova:project uuid="711e531670b1460a923f2f91ce0f63db">tempest-FloatingIPsAssociationTestJSON-1021362371</nova:project>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <nova:port uuid="e6624198-96a3-41f5-b3e6-217be5426796">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <entry name="serial">115674ad-2273-4c42-b9ae-d380c2c005d6</entry>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <entry name="uuid">115674ad-2273-4c42-b9ae-d380c2c005d6</entry>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/115674ad-2273-4c42-b9ae-d380c2c005d6_disk">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:89:da:ba"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <target dev="tape6624198-96"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/console.log" append="off"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:06:21 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:06:21 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:06:21 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:06:21 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.087 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Preparing to wait for external event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.087 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.087 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.088 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.089 2 DEBUG nova.virt.libvirt.vif [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-279731597',display_name='tempest-FloatingIPsAssociationTestJSON-server-279731597',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-279731597',id=16,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-juuxxw0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_
user_name='tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:14Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=115674ad-2273-4c42-b9ae-d380c2c005d6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.089 2 DEBUG nova.network.os_vif_util [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.090 2 DEBUG nova.network.os_vif_util [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.091 2 DEBUG os_vif [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.101 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6624198-96, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.102 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape6624198-96, col_values=(('external_ids', {'iface-id': 'e6624198-96a3-41f5-b3e6-217be5426796', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:89:da:ba', 'vm-uuid': '115674ad-2273-4c42-b9ae-d380c2c005d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:21 np0005473739 NetworkManager[44949]: <info>  [1759845981.1047] manager: (tape6624198-96): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.112 2 INFO os_vif [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96')#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.251 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.252 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.253 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] No VIF found with MAC fa:16:3e:89:da:ba, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.254 2 INFO nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Using config drive#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.285 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1174: 305 pgs: 305 active+clean; 434 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 7.2 MiB/s wr, 220 op/s
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.538 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for ddf09c33-d956-404b-a5d8-44a3727f9a3b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.539 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845981.5370498, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.539 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Started (Lifecycle Event)#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.557 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.563 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845981.5375614, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.563 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.628 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.632 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.663 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.795 2 INFO nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Creating config drive at /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.820 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeipfy_sh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.951 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpeipfy_sh" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.978 2 DEBUG nova.storage.rbd_utils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] rbd image 115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:21 np0005473739 nova_compute[259550]: 2025-10-07 14:06:21.983 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config 115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.010 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.285 2 DEBUG oslo_concurrency.processutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config 115674ad-2273-4c42-b9ae-d380c2c005d6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.302s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.286 2 INFO nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Deleting local config drive /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6/disk.config because it was imported into RBD.#033[00m
Oct  7 10:06:22 np0005473739 kernel: tape6624198-96: entered promiscuous mode
Oct  7 10:06:22 np0005473739 systemd-udevd[285908]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:06:22 np0005473739 NetworkManager[44949]: <info>  [1759845982.3410] manager: (tape6624198-96): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Oct  7 10:06:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:22Z|00087|binding|INFO|Claiming lport e6624198-96a3-41f5-b3e6-217be5426796 for this chassis.
Oct  7 10:06:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:22Z|00088|binding|INFO|e6624198-96a3-41f5-b3e6-217be5426796: Claiming fa:16:3e:89:da:ba 10.100.0.9
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.350 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:da:ba 10.100.0.9'], port_security=['fa:16:3e:89:da:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '115674ad-2273-4c42-b9ae-d380c2c005d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80f412c9-c511-49ba-a8ca-ff830fcff803', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '711e531670b1460a923f2f91ce0f63db', 'neutron:revision_number': '2', 'neutron:security_group_ids': '087bdc78-7c9e-48d3-b83f-b37a1a3f8ec7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e75ebab-8b50-4bfb-bcab-f5fcada82242, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e6624198-96a3-41f5-b3e6-217be5426796) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.351 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e6624198-96a3-41f5-b3e6-217be5426796 in datapath 80f412c9-c511-49ba-a8ca-ff830fcff803 bound to our chassis#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.352 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80f412c9-c511-49ba-a8ca-ff830fcff803#033[00m
Oct  7 10:06:22 np0005473739 NetworkManager[44949]: <info>  [1759845982.3573] device (tape6624198-96): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:06:22 np0005473739 NetworkManager[44949]: <info>  [1759845982.3584] device (tape6624198-96): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:06:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:22Z|00089|binding|INFO|Setting lport e6624198-96a3-41f5-b3e6-217be5426796 ovn-installed in OVS
Oct  7 10:06:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:22Z|00090|binding|INFO|Setting lport e6624198-96a3-41f5-b3e6-217be5426796 up in Southbound
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.373 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b34e63fe-6540-4cd1-9bec-54ee8ca3a2d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:22 np0005473739 systemd-machined[214580]: New machine qemu-19-instance-00000010.
Oct  7 10:06:22 np0005473739 systemd[1]: Started Virtual Machine qemu-19-instance-00000010.
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.414 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f29a12ba-0d9a-4979-b56f-9a10d57bbad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.418 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[827c26f3-66b0-42d0-8ffd-81c0b48640cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.456 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b983360d-2a98-4e1e-91ce-93e16dee275d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.476 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd45bf31-370a-4dd9-ada8-30d84d77b5a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80f412c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:6b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654916, 'reachable_time': 32115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286108, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.496 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d53241d8-5227-45cc-87be-c964d236c371]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap80f412c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654932, 'tstamp': 654932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286110, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap80f412c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654936, 'tstamp': 654936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286110, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.498 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80f412c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.500 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.501 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.502 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80f412c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.502 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.503 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80f412c9-c0, col_values=(('external_ids', {'iface-id': '60dd3b69-c15d-4f1f-8348-1807afc1578d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:22.503 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:06:22
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', '.mgr', 'default.rgw.meta', 'vms', 'images', 'backups', 'volumes', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.703 2 DEBUG nova.network.neutron [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updated VIF entry in instance network info cache for port e6624198-96a3-41f5-b3e6-217be5426796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.704 2 DEBUG nova.network.neutron [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updating instance_info_cache with network_info: [{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.718 2 DEBUG oslo_concurrency.lockutils [req-26c59cb7-9b4a-471e-8793-3d767d44e7b3 req-78b8e034-f51e-4a38-b0d6-1dcf7d58d961 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:06:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.921 2 DEBUG nova.compute.manager [req-47cab03d-a371-4233-aa8a-099d4bd1b9c3 req-f7a0e6a3-8ca1-4494-84d9-472aec6148a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.922 2 DEBUG oslo_concurrency.lockutils [req-47cab03d-a371-4233-aa8a-099d4bd1b9c3 req-f7a0e6a3-8ca1-4494-84d9-472aec6148a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.922 2 DEBUG oslo_concurrency.lockutils [req-47cab03d-a371-4233-aa8a-099d4bd1b9c3 req-f7a0e6a3-8ca1-4494-84d9-472aec6148a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.923 2 DEBUG oslo_concurrency.lockutils [req-47cab03d-a371-4233-aa8a-099d4bd1b9c3 req-f7a0e6a3-8ca1-4494-84d9-472aec6148a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.923 2 DEBUG nova.compute.manager [req-47cab03d-a371-4233-aa8a-099d4bd1b9c3 req-f7a0e6a3-8ca1-4494-84d9-472aec6148a0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Processing event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:22 np0005473739 nova_compute[259550]: 2025-10-07 14:06:22.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.150 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.150 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1175: 305 pgs: 305 active+clean; 434 MiB data, 495 MiB used, 60 GiB / 60 GiB avail; 587 KiB/s rd, 7.2 MiB/s wr, 185 op/s
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.516 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845983.5160422, 115674ad-2273-4c42-b9ae-d380c2c005d6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.517 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] VM Started (Lifecycle Event)#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.519 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.523 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.526 2 INFO nova.virt.libvirt.driver [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Instance spawned successfully.#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.527 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.594 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.599 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.693 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.694 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845983.5169919, 115674ad-2273-4c42-b9ae-d380c2c005d6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.694 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.701 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.702 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.702 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.703 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.703 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.703 2 DEBUG nova.virt.libvirt.driver [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.733 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.737 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845983.5223353, 115674ad-2273-4c42-b9ae-d380c2c005d6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.737 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.803 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.806 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.820 2 INFO nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Took 9.18 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.821 2 DEBUG nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.860 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:06:23 np0005473739 nova_compute[259550]: 2025-10-07 14:06:23.939 2 INFO nova.compute.manager [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Took 10.39 seconds to build instance.#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.124 2 DEBUG oslo_concurrency.lockutils [None req-b69888db-84ee-406f-97c4-88e84cade132 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.207 2 DEBUG nova.compute.manager [req-783735ee-ab4c-44ec-83fc-8f779588f357 req-3b9b6133-e783-4705-9d31-a64e0320260b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.208 2 DEBUG oslo_concurrency.lockutils [req-783735ee-ab4c-44ec-83fc-8f779588f357 req-3b9b6133-e783-4705-9d31-a64e0320260b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.208 2 DEBUG oslo_concurrency.lockutils [req-783735ee-ab4c-44ec-83fc-8f779588f357 req-3b9b6133-e783-4705-9d31-a64e0320260b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.208 2 DEBUG oslo_concurrency.lockutils [req-783735ee-ab4c-44ec-83fc-8f779588f357 req-3b9b6133-e783-4705-9d31-a64e0320260b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.209 2 DEBUG nova.compute.manager [req-783735ee-ab4c-44ec-83fc-8f779588f357 req-3b9b6133-e783-4705-9d31-a64e0320260b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Processing event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.209 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.210 2 DEBUG nova.compute.manager [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.215 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759845984.2147563, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.215 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.218 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.222 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance spawned successfully.#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.222 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.318 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.322 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: error, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.325 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.326 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.326 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.327 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.327 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.328 2 DEBUG nova.virt.libvirt.driver [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.380 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.500 2 DEBUG nova.compute.manager [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.563 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.563 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.563 2 DEBUG nova.objects.instance [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.692 2 DEBUG oslo_concurrency.lockutils [None req-5050a3ee-f534-4ca6-9b34-b8fe952947d5 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:24 np0005473739 nova_compute[259550]: 2025-10-07 14:06:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.017 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.019 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.019 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.339 2 DEBUG nova.compute.manager [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.340 2 DEBUG oslo_concurrency.lockutils [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.341 2 DEBUG oslo_concurrency.lockutils [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.341 2 DEBUG oslo_concurrency.lockutils [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.342 2 DEBUG nova.compute.manager [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] No waiting events found dispatching network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.342 2 WARNING nova.compute.manager [req-520fe25e-9523-4bcf-b172-a5d028610380 req-548ec93c-c9b8-496b-9a34-999d9ba5df1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received unexpected event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1176: 305 pgs: 305 active+clean; 449 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 714 KiB/s rd, 7.8 MiB/s wr, 213 op/s
Oct  7 10:06:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:06:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/833656625' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:06:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.554 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.787 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.788 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.795 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.795 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.800 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.801 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000010 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.807 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.808 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.813 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.813 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.818 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:25 np0005473739 nova_compute[259550]: 2025-10-07 14:06:25.818 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000000f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:06:26 np0005473739 nova_compute[259550]: 2025-10-07 14:06:26.096 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:06:26 np0005473739 nova_compute[259550]: 2025-10-07 14:06:26.100 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3710MB free_disk=59.7696533203125GB free_vcpus=2 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:06:26 np0005473739 nova_compute[259550]: 2025-10-07 14:06:26.100 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:26 np0005473739 nova_compute[259550]: 2025-10-07 14:06:26.101 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:26 np0005473739 nova_compute[259550]: 2025-10-07 14:06:26.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:27 np0005473739 nova_compute[259550]: 2025-10-07 14:06:27.371 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance ddf09c33-d956-404b-a5d8-44a3727f9a3b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:06:27 np0005473739 nova_compute[259550]: 2025-10-07 14:06:27.372 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:06:27 np0005473739 nova_compute[259550]: 2025-10-07 14:06:27.372 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 74094438-0995-4031-9943-cc85a5ef4f57 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:06:27 np0005473739 nova_compute[259550]: 2025-10-07 14:06:27.373 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 348c80b7-7f65-4300-9dab-6a333f1b2c74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:06:27 np0005473739 nova_compute[259550]: 2025-10-07 14:06:27.373 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4b95692e-088d-452c-83b7-4c50df73b8fe actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:06:27 np0005473739 nova_compute[259550]: 2025-10-07 14:06:27.373 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 115674ad-2273-4c42-b9ae-d380c2c005d6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:06:27 np0005473739 nova_compute[259550]: 2025-10-07 14:06:27.373 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:06:27 np0005473739 nova_compute[259550]: 2025-10-07 14:06:27.374 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1280MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:06:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1177: 305 pgs: 305 active+clean; 451 MiB data, 505 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 6.3 MiB/s wr, 245 op/s
Oct  7 10:06:27 np0005473739 nova_compute[259550]: 2025-10-07 14:06:27.554 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:06:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/664003288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:06:28 np0005473739 nova_compute[259550]: 2025-10-07 14:06:28.071 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:28 np0005473739 nova_compute[259550]: 2025-10-07 14:06:28.080 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:06:28 np0005473739 podman[286197]: 2025-10-07 14:06:28.129124104 +0000 UTC m=+0.111008658 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:06:28 np0005473739 nova_compute[259550]: 2025-10-07 14:06:28.339 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:06:28 np0005473739 nova_compute[259550]: 2025-10-07 14:06:28.846 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:06:28 np0005473739 nova_compute[259550]: 2025-10-07 14:06:28.846 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:29 np0005473739 NetworkManager[44949]: <info>  [1759845989.0780] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Oct  7 10:06:29 np0005473739 NetworkManager[44949]: <info>  [1759845989.0788] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:29Z|00091|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct  7 10:06:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:29Z|00092|binding|INFO|Releasing lport 60dd3b69-c15d-4f1f-8348-1807afc1578d from this chassis (sb_readonly=0)
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1178: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 4.5 MiB/s wr, 281 op/s
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.848 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.848 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.848 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.895 2 DEBUG nova.compute.manager [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.895 2 DEBUG oslo_concurrency.lockutils [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.896 2 DEBUG oslo_concurrency.lockutils [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.896 2 DEBUG oslo_concurrency.lockutils [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.896 2 DEBUG nova.compute.manager [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:29 np0005473739 nova_compute[259550]: 2025-10-07 14:06:29.896 2 WARNING nova.compute.manager [req-b501e603-f3d8-481c-a591-6308ee1bfa2e req-d583e908-bb6a-4b6e-8c8e-5724058f18c4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:06:30 np0005473739 nova_compute[259550]: 2025-10-07 14:06:30.084 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:06:30 np0005473739 nova_compute[259550]: 2025-10-07 14:06:30.084 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:06:30 np0005473739 nova_compute[259550]: 2025-10-07 14:06:30.085 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:06:30 np0005473739 nova_compute[259550]: 2025-10-07 14:06:30.085 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:30 np0005473739 nova_compute[259550]: 2025-10-07 14:06:30.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:31 np0005473739 podman[286225]: 2025-10-07 14:06:31.072374746 +0000 UTC m=+0.057640061 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:06:31 np0005473739 nova_compute[259550]: 2025-10-07 14:06:31.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1179: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 2.2 MiB/s wr, 208 op/s
Oct  7 10:06:31 np0005473739 nova_compute[259550]: 2025-10-07 14:06:31.975 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Updating instance_info_cache with network_info: [{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003729976557921469 of space, bias 1.0, pg target 1.1189929673764407 quantized to 32 (current 32)
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.1991676866616201 quantized to 32 (current 32)
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct  7 10:06:32 np0005473739 nova_compute[259550]: 2025-10-07 14:06:32.296 2 DEBUG nova.compute.manager [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:32 np0005473739 nova_compute[259550]: 2025-10-07 14:06:32.296 2 DEBUG nova.compute.manager [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing instance network info cache due to event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:06:32 np0005473739 nova_compute[259550]: 2025-10-07 14:06:32.296 2 DEBUG oslo_concurrency.lockutils [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:06:32 np0005473739 nova_compute[259550]: 2025-10-07 14:06:32.296 2 DEBUG oslo_concurrency.lockutils [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:06:32 np0005473739 nova_compute[259550]: 2025-10-07 14:06:32.297 2 DEBUG nova.network.neutron [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:06:32 np0005473739 nova_compute[259550]: 2025-10-07 14:06:32.423 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-ddf09c33-d956-404b-a5d8-44a3727f9a3b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:06:32 np0005473739 nova_compute[259550]: 2025-10-07 14:06:32.423 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:06:32 np0005473739 nova_compute[259550]: 2025-10-07 14:06:32.423 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:32 np0005473739 nova_compute[259550]: 2025-10-07 14:06:32.424 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:06:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:06:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/973118103' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:06:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:06:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/973118103' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:06:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1180: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 705 KiB/s wr, 173 op/s
Oct  7 10:06:35 np0005473739 nova_compute[259550]: 2025-10-07 14:06:35.275 2 INFO nova.compute.manager [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Rebuilding instance#033[00m
Oct  7 10:06:35 np0005473739 nova_compute[259550]: 2025-10-07 14:06:35.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:35 np0005473739 nova_compute[259550]: 2025-10-07 14:06:35.471 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1181: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 714 KiB/s wr, 175 op/s
Oct  7 10:06:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:35 np0005473739 nova_compute[259550]: 2025-10-07 14:06:35.715 2 DEBUG nova.compute.manager [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:35 np0005473739 nova_compute[259550]: 2025-10-07 14:06:35.934 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_requests' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:36 np0005473739 nova_compute[259550]: 2025-10-07 14:06:36.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:36 np0005473739 nova_compute[259550]: 2025-10-07 14:06:36.132 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'pci_devices' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:36 np0005473739 nova_compute[259550]: 2025-10-07 14:06:36.294 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:36 np0005473739 nova_compute[259550]: 2025-10-07 14:06:36.493 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'migration_context' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:36 np0005473739 nova_compute[259550]: 2025-10-07 14:06:36.677 2 DEBUG nova.network.neutron [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updated VIF entry in instance network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:06:36 np0005473739 nova_compute[259550]: 2025-10-07 14:06:36.678 2 DEBUG nova.network.neutron [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:36 np0005473739 nova_compute[259550]: 2025-10-07 14:06:36.701 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:06:36 np0005473739 nova_compute[259550]: 2025-10-07 14:06:36.707 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:06:36 np0005473739 nova_compute[259550]: 2025-10-07 14:06:36.838 2 DEBUG oslo_concurrency.lockutils [req-3c80eb06-e1d6-4378-b6fb-1e74936f6500 req-303b84a7-bd52-4c3d-ad10-00f75855939b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:06:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1182: 305 pgs: 305 active+clean; 451 MiB data, 506 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 92 KiB/s wr, 147 op/s
Oct  7 10:06:38 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  7 10:06:39 np0005473739 nova_compute[259550]: 2025-10-07 14:06:39.229 2 DEBUG nova.compute.manager [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:39 np0005473739 nova_compute[259550]: 2025-10-07 14:06:39.229 2 DEBUG nova.compute.manager [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing instance network info cache due to event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:06:39 np0005473739 nova_compute[259550]: 2025-10-07 14:06:39.230 2 DEBUG oslo_concurrency.lockutils [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:06:39 np0005473739 nova_compute[259550]: 2025-10-07 14:06:39.230 2 DEBUG oslo_concurrency.lockutils [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:06:39 np0005473739 nova_compute[259550]: 2025-10-07 14:06:39.230 2 DEBUG nova.network.neutron [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:06:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1183: 305 pgs: 305 active+clean; 458 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 758 KiB/s wr, 101 op/s
Oct  7 10:06:40 np0005473739 nova_compute[259550]: 2025-10-07 14:06:40.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:41 np0005473739 nova_compute[259550]: 2025-10-07 14:06:41.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1184: 305 pgs: 305 active+clean; 472 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 2.1 MiB/s wr, 42 op/s
Oct  7 10:06:41 np0005473739 nova_compute[259550]: 2025-10-07 14:06:41.684 2 DEBUG nova.network.neutron [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updated VIF entry in instance network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:06:41 np0005473739 nova_compute[259550]: 2025-10-07 14:06:41.685 2 DEBUG nova.network.neutron [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:41 np0005473739 nova_compute[259550]: 2025-10-07 14:06:41.890 2 DEBUG oslo_concurrency.lockutils [req-0ce290b7-4f73-46ca-813a-1e6196f5c2ce req-89fb3a84-02db-4b1e-859b-6cc0631d1b23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:06:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:42Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:89:da:ba 10.100.0.9
Oct  7 10:06:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:42Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:89:da:ba 10.100.0.9
Oct  7 10:06:42 np0005473739 nova_compute[259550]: 2025-10-07 14:06:42.374 2 DEBUG nova.compute.manager [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-changed-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:42 np0005473739 nova_compute[259550]: 2025-10-07 14:06:42.375 2 DEBUG nova.compute.manager [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing instance network info cache due to event network-changed-e6624198-96a3-41f5-b3e6-217be5426796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:06:42 np0005473739 nova_compute[259550]: 2025-10-07 14:06:42.375 2 DEBUG oslo_concurrency.lockutils [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:06:42 np0005473739 nova_compute[259550]: 2025-10-07 14:06:42.375 2 DEBUG oslo_concurrency.lockutils [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:06:42 np0005473739 nova_compute[259550]: 2025-10-07 14:06:42.375 2 DEBUG nova.network.neutron [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing network info cache for port e6624198-96a3-41f5-b3e6-217be5426796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:06:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:43Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:03:d4 10.100.0.7
Oct  7 10:06:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:43Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:03:d4 10.100.0.7
Oct  7 10:06:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1185: 305 pgs: 305 active+clean; 472 MiB data, 539 MiB used, 59 GiB / 60 GiB avail; 60 KiB/s rd, 2.1 MiB/s wr, 36 op/s
Oct  7 10:06:43 np0005473739 nova_compute[259550]: 2025-10-07 14:06:43.837 2 DEBUG nova.network.neutron [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updated VIF entry in instance network info cache for port e6624198-96a3-41f5-b3e6-217be5426796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:06:43 np0005473739 nova_compute[259550]: 2025-10-07 14:06:43.838 2 DEBUG nova.network.neutron [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updating instance_info_cache with network_info: [{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:43 np0005473739 nova_compute[259550]: 2025-10-07 14:06:43.856 2 DEBUG oslo_concurrency.lockutils [req-b6b9ffb5-5419-4fd8-abdd-af78bdbeda8c req-c623241d-7249-40fe-85fe-51d1ac4d813c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:06:45 np0005473739 nova_compute[259550]: 2025-10-07 14:06:45.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1186: 305 pgs: 305 active+clean; 493 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 550 KiB/s rd, 3.7 MiB/s wr, 96 op/s
Oct  7 10:06:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:46 np0005473739 nova_compute[259550]: 2025-10-07 14:06:46.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:46 np0005473739 nova_compute[259550]: 2025-10-07 14:06:46.763 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:06:46 np0005473739 nova_compute[259550]: 2025-10-07 14:06:46.967 2 DEBUG nova.compute.manager [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-changed-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:46 np0005473739 nova_compute[259550]: 2025-10-07 14:06:46.967 2 DEBUG nova.compute.manager [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing instance network info cache due to event network-changed-e6624198-96a3-41f5-b3e6-217be5426796. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:06:46 np0005473739 nova_compute[259550]: 2025-10-07 14:06:46.968 2 DEBUG oslo_concurrency.lockutils [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:06:46 np0005473739 nova_compute[259550]: 2025-10-07 14:06:46.968 2 DEBUG oslo_concurrency.lockutils [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:06:46 np0005473739 nova_compute[259550]: 2025-10-07 14:06:46.968 2 DEBUG nova.network.neutron [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Refreshing network info cache for port e6624198-96a3-41f5-b3e6-217be5426796 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:06:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1187: 305 pgs: 305 active+clean; 507 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 692 KiB/s rd, 4.2 MiB/s wr, 119 op/s
Oct  7 10:06:47 np0005473739 nova_compute[259550]: 2025-10-07 14:06:47.771 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:47 np0005473739 nova_compute[259550]: 2025-10-07 14:06:47.772 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:47 np0005473739 nova_compute[259550]: 2025-10-07 14:06:47.773 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:47 np0005473739 nova_compute[259550]: 2025-10-07 14:06:47.773 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:47 np0005473739 nova_compute[259550]: 2025-10-07 14:06:47.773 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:47 np0005473739 nova_compute[259550]: 2025-10-07 14:06:47.775 2 INFO nova.compute.manager [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Terminating instance#033[00m
Oct  7 10:06:47 np0005473739 nova_compute[259550]: 2025-10-07 14:06:47.776 2 DEBUG nova.compute.manager [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:06:48 np0005473739 podman[286247]: 2025-10-07 14:06:48.086735216 +0000 UTC m=+0.069323265 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:06:48 np0005473739 podman[286248]: 2025-10-07 14:06:48.12019614 +0000 UTC m=+0.101512265 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible)
Oct  7 10:06:48 np0005473739 kernel: tape6624198-96 (unregistering): left promiscuous mode
Oct  7 10:06:48 np0005473739 NetworkManager[44949]: <info>  [1759846008.2929] device (tape6624198-96): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:48Z|00093|binding|INFO|Releasing lport e6624198-96a3-41f5-b3e6-217be5426796 from this chassis (sb_readonly=0)
Oct  7 10:06:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:48Z|00094|binding|INFO|Setting lport e6624198-96a3-41f5-b3e6-217be5426796 down in Southbound
Oct  7 10:06:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:48Z|00095|binding|INFO|Removing iface tape6624198-96 ovn-installed in OVS
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.321 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:89:da:ba 10.100.0.9'], port_security=['fa:16:3e:89:da:ba 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '115674ad-2273-4c42-b9ae-d380c2c005d6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80f412c9-c511-49ba-a8ca-ff830fcff803', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '711e531670b1460a923f2f91ce0f63db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '087bdc78-7c9e-48d3-b83f-b37a1a3f8ec7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e75ebab-8b50-4bfb-bcab-f5fcada82242, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e6624198-96a3-41f5-b3e6-217be5426796) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.322 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e6624198-96a3-41f5-b3e6-217be5426796 in datapath 80f412c9-c511-49ba-a8ca-ff830fcff803 unbound from our chassis#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.324 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80f412c9-c511-49ba-a8ca-ff830fcff803#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.348 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[59e728fe-bc78-4ac1-978e-eaca945f01a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:48 np0005473739 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000010.scope: Deactivated successfully.
Oct  7 10:06:48 np0005473739 systemd[1]: machine-qemu\x2d19\x2dinstance\x2d00000010.scope: Consumed 15.308s CPU time.
Oct  7 10:06:48 np0005473739 systemd-machined[214580]: Machine qemu-19-instance-00000010 terminated.
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.381 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b558386e-b659-4eee-adf0-6daf1de8fb0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.384 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b22a04ab-6aca-4eab-a8e4-50e650db7320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.418 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0e881b89-3dca-4e92-aa2f-667cd7389182]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.417 2 INFO nova.virt.libvirt.driver [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Instance destroyed successfully.#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.418 2 DEBUG nova.objects.instance [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'resources' on Instance uuid 115674ad-2273-4c42-b9ae-d380c2c005d6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.435 2 DEBUG nova.virt.libvirt.vif [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:06:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-279731597',display_name='tempest-FloatingIPsAssociationTestJSON-server-279731597',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-279731597',id=16,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-juuxxw0j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mo
del='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:06:23Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=115674ad-2273-4c42-b9ae-d380c2c005d6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.436 2 DEBUG nova.network.os_vif_util [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.437 2 DEBUG nova.network.os_vif_util [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.437 2 DEBUG os_vif [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.439 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6624198-96, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.438 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[56340906-1c72-44a8-b05a-8ca880bff3d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80f412c9-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:6b:79'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654916, 'reachable_time': 32115, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286309, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.444 2 INFO os_vif [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:89:da:ba,bridge_name='br-int',has_traffic_filtering=True,id=e6624198-96a3-41f5-b3e6-217be5426796,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape6624198-96')#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.456 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa75935-c5cc-4643-8f0d-a79f0562bb84]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap80f412c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654932, 'tstamp': 654932}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286310, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap80f412c9-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 654936, 'tstamp': 654936}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286310, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.458 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80f412c9-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.461 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80f412c9-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.461 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.462 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80f412c9-c0, col_values=(('external_ids', {'iface-id': '60dd3b69-c15d-4f1f-8348-1807afc1578d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:48 np0005473739 nova_compute[259550]: 2025-10-07 14:06:48.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:48.462 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1188: 305 pgs: 305 active+clean; 517 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 750 KiB/s rd, 4.3 MiB/s wr, 141 op/s
Oct  7 10:06:49 np0005473739 nova_compute[259550]: 2025-10-07 14:06:49.780 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance shutdown successfully after 13 seconds.#033[00m
Oct  7 10:06:49 np0005473739 kernel: tap9b25db0b-24 (unregistering): left promiscuous mode
Oct  7 10:06:49 np0005473739 NetworkManager[44949]: <info>  [1759846009.8082] device (tap9b25db0b-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:06:49 np0005473739 nova_compute[259550]: 2025-10-07 14:06:49.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:49Z|00096|binding|INFO|Releasing lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 from this chassis (sb_readonly=0)
Oct  7 10:06:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:49Z|00097|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 down in Southbound
Oct  7 10:06:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:49Z|00098|binding|INFO|Removing iface tap9b25db0b-24 ovn-installed in OVS
Oct  7 10:06:49 np0005473739 nova_compute[259550]: 2025-10-07 14:06:49.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:49 np0005473739 nova_compute[259550]: 2025-10-07 14:06:49.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.855 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.856 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.857 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.876 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfbc461-e3b2-4b53-8a09-8ca31ed9b3c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:49 np0005473739 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct  7 10:06:49 np0005473739 systemd[1]: machine-qemu\x2d18\x2dinstance\x2d0000000a.scope: Consumed 16.416s CPU time.
Oct  7 10:06:49 np0005473739 systemd-machined[214580]: Machine qemu-18-instance-0000000a terminated.
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.908 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[19d161c5-fbfd-4983-81b2-ddd1210464dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.911 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1be040a8-0dea-4f2a-beb1-56b3a4f80b28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:49 np0005473739 nova_compute[259550]: 2025-10-07 14:06:49.922 2 INFO nova.virt.libvirt.driver [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Deleting instance files /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6_del#033[00m
Oct  7 10:06:49 np0005473739 nova_compute[259550]: 2025-10-07 14:06:49.923 2 INFO nova.virt.libvirt.driver [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Deletion of /var/lib/nova/instances/115674ad-2273-4c42-b9ae-d380c2c005d6_del complete#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.947 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[329729a6-5dde-41e6-b40c-8b6c4c4c2eaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.966 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[03892cb3-aaba-4f60-8fbe-647756392c6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 16, 'rx_bytes': 1168, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 16, 'rx_bytes': 1168, 'tx_bytes': 860, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286339, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.989 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[90998376-2ca7-4ff2-b9c0-49df383cd20e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286340, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286340, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.991 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:49 np0005473739 nova_compute[259550]: 2025-10-07 14:06:49.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:49 np0005473739 nova_compute[259550]: 2025-10-07 14:06:49.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.998 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.998 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.998 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:49 np0005473739 kernel: tap9b25db0b-24: entered promiscuous mode
Oct  7 10:06:50 np0005473739 systemd-udevd[286290]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:49.999 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:50 np0005473739 NetworkManager[44949]: <info>  [1759846010.0030] manager: (tap9b25db0b-24): new Tun device (/org/freedesktop/NetworkManager/Devices/62)
Oct  7 10:06:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:50Z|00099|binding|INFO|Claiming lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 for this chassis.
Oct  7 10:06:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:50Z|00100|binding|INFO|9b25db0b-246e-456c-82d7-cf361c57f9c5: Claiming fa:16:3e:6c:03:d4 10.100.0.7
Oct  7 10:06:50 np0005473739 kernel: tap9b25db0b-24 (unregistering): left promiscuous mode
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.024 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance destroyed successfully.#033[00m
Oct  7 10:06:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:50Z|00101|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 ovn-installed in OVS
Oct  7 10:06:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:50Z|00102|if_status|INFO|Not setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 down as sb is readonly
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.032 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance destroyed successfully.#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.035 2 DEBUG nova.virt.libvirt.vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-membe
r'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:33Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.035 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.036 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.036 2 DEBUG os_vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b25db0b-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:50Z|00103|binding|INFO|Releasing lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 from this chassis (sb_readonly=0)
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.038 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.039 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.040 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.058 2 INFO os_vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.060 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd902c8-6648-426f-8af6-45195b4ceb6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.101 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d149ca3f-4471-4a12-a9ec-551fbe368f94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.106 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a96992-4a80-4219-9c01-63942ffa17c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.140 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d22346b6-8596-427a-8f93-3167602d533b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.168 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e9facc50-360c-4fbd-8d6f-6dbca58af7ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 18, 'rx_bytes': 1168, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 18, 'rx_bytes': 1168, 'tx_bytes': 944, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286369, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.189 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf776aa7-bd50-4a0a-ae5e-5f86565ff1c5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286370, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286370, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.191 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.194 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.195 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.195 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.195 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.235 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.236 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.238 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.259 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a13b511-192b-4475-9849-3ce4a96c3b73]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.299 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e69b01ee-24a6-4a0a-bba3-c1663310f158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.306 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9af5bd0e-9ad7-4599-8efe-fc9be64648e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.336 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[39d1733f-f0e5-46f1-9949-ab2bf608cc77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.356 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30a66143-e3a8-41fe-b643-0660197454ca]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 20, 'rx_bytes': 1168, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 20, 'rx_bytes': 1168, 'tx_bytes': 1028, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286377, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.378 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[74daa060-0131-4ae6-b9c6-393b160d1eb2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286378, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286378, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.379 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.383 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.383 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.383 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:50.384 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.399 2 INFO nova.compute.manager [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Took 2.62 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.399 2 DEBUG oslo.service.loopingcall [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.400 2 DEBUG nova.compute.manager [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.400 2 DEBUG nova.network.neutron [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.428 2 DEBUG nova.network.neutron [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updated VIF entry in instance network info cache for port e6624198-96a3-41f5-b3e6-217be5426796. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.429 2 DEBUG nova.network.neutron [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updating instance_info_cache with network_info: [{"id": "e6624198-96a3-41f5-b3e6-217be5426796", "address": "fa:16:3e:89:da:ba", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape6624198-96", "ovs_interfaceid": "e6624198-96a3-41f5-b3e6-217be5426796", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.483 2 DEBUG oslo_concurrency.lockutils [req-a1f17fda-a2d0-43a5-86f9-c441331c1b82 req-04d56703-109f-4cb4-8d5e-e0e47dc2f3fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-115674ad-2273-4c42-b9ae-d380c2c005d6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.512 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting instance files /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.513 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deletion of /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del complete#033[00m
Oct  7 10:06:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.822 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.823 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating image(s)#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.872 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.900 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.925 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.931 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.965 2 DEBUG nova.compute.manager [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-unplugged-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.965 2 DEBUG oslo_concurrency.lockutils [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.966 2 DEBUG oslo_concurrency.lockutils [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.967 2 DEBUG oslo_concurrency.lockutils [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.967 2 DEBUG nova.compute.manager [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] No waiting events found dispatching network-vif-unplugged-e6624198-96a3-41f5-b3e6-217be5426796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.967 2 DEBUG nova.compute.manager [req-f00e5bf4-cfd4-43c5-8e0b-dcb736992834 req-2f44708a-a60f-42a2-b0d4-4264470cbe22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-unplugged-e6624198-96a3-41f5-b3e6-217be5426796 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.993 2 DEBUG nova.compute.manager [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.993 2 DEBUG oslo_concurrency.lockutils [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.993 2 DEBUG oslo_concurrency.lockutils [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.993 2 DEBUG oslo_concurrency.lockutils [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.994 2 DEBUG nova.compute.manager [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:50 np0005473739 nova_compute[259550]: 2025-10-07 14:06:50.994 2 WARNING nova.compute.manager [req-ee78968a-9678-4f25-8873-15db737f5d27 req-65741cd4-4e8f-4dfe-802f-b10daf78fbcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.008 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.009 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.010 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.010 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.031 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.035 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.338 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.303s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.415 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] resizing rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:06:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1189: 305 pgs: 305 active+clean; 449 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 713 KiB/s rd, 3.6 MiB/s wr, 134 op/s
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.574 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.574 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Ensure instance console log exists: /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.575 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.575 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.575 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.577 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start _get_guest_xml network_info=[{"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.581 2 WARNING nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.588 2 DEBUG nova.virt.libvirt.host [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.589 2 DEBUG nova.virt.libvirt.host [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.591 2 DEBUG nova.network.neutron [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.593 2 DEBUG nova.virt.libvirt.host [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.594 2 DEBUG nova.virt.libvirt.host [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.595 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.595 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.595 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.595 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.596 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.597 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.597 2 DEBUG nova.virt.hardware [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.597 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.638 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.662 2 INFO nova.compute.manager [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Took 1.26 seconds to deallocate network for instance.#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.733 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.733 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:51 np0005473739 nova_compute[259550]: 2025-10-07 14:06:51.892 2 DEBUG oslo_concurrency.processutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:06:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/88216778' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.085 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.110 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.114 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:06:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1335684738' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.346 2 DEBUG oslo_concurrency.processutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.353 2 DEBUG nova.compute.provider_tree [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.395 2 DEBUG nova.scheduler.client.report [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.445 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.487 2 INFO nova.scheduler.client.report [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Deleted allocations for instance 115674ad-2273-4c42-b9ae-d380c2c005d6#033[00m
Oct  7 10:06:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:06:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2864037943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.578 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.580 2 DEBUG nova.virt.libvirt.vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:50Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.580 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.581 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.584 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  <uuid>ddf09c33-d956-404b-a5d8-44a3727f9a3b</uuid>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  <name>instance-0000000a</name>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersAdminTestJSON-server-1321212972</nova:name>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:06:51</nova:creationTime>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <nova:user uuid="f06dda9346a24fb094ad9fe51664cc48">tempest-ServersAdminTestJSON-1442908900-project-member</nova:user>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <nova:project uuid="48bbd5aa8b9d4a0ea0150bd57145fc68">tempest-ServersAdminTestJSON-1442908900</nova:project>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <nova:port uuid="9b25db0b-246e-456c-82d7-cf361c57f9c5">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <entry name="serial">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <entry name="uuid">ddf09c33-d956-404b-a5d8-44a3727f9a3b</entry>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:6c:03:d4"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <target dev="tap9b25db0b-24"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/console.log" append="off"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:06:52 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:06:52 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:06:52 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:06:52 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.585 2 DEBUG nova.virt.libvirt.vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:24Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:06:50Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.585 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.586 2 DEBUG nova.network.os_vif_util [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.586 2 DEBUG os_vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.587 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.590 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b25db0b-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.591 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9b25db0b-24, col_values=(('external_ids', {'iface-id': '9b25db0b-246e-456c-82d7-cf361c57f9c5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6c:03:d4', 'vm-uuid': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:52 np0005473739 NetworkManager[44949]: <info>  [1759846012.5936] manager: (tap9b25db0b-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.600 2 INFO os_vif [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')#033[00m
Oct  7 10:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:06:52 np0005473739 nova_compute[259550]: 2025-10-07 14:06:52.957 2 DEBUG oslo_concurrency.lockutils [None req-9fa72097-7062-4fb0-978d-39df86e0346c 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.048 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.049 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.049 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] No VIF found with MAC fa:16:3e:6c:03:d4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.050 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Using config drive#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.078 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.088 2 DEBUG nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.088 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.088 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.089 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "115674ad-2273-4c42-b9ae-d380c2c005d6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.089 2 DEBUG nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] No waiting events found dispatching network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.089 2 WARNING nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received unexpected event network-vif-plugged-e6624198-96a3-41f5-b3e6-217be5426796 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.089 2 DEBUG nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Received event network-vif-deleted-e6624198-96a3-41f5-b3e6-217be5426796 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.089 2 DEBUG nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.090 2 DEBUG nova.compute.manager [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing instance network info cache due to event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.090 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.090 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.090 2 DEBUG nova.network.neutron [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.388 2 DEBUG nova.compute.manager [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.388 2 DEBUG oslo_concurrency.lockutils [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.389 2 DEBUG oslo_concurrency.lockutils [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.389 2 DEBUG oslo_concurrency.lockutils [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.389 2 DEBUG nova.compute.manager [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.389 2 WARNING nova.compute.manager [req-c64a424c-4c13-4428-a63f-5de0fdee97c3 req-f9fbc40a-daed-4b5c-8aee-0da72624c16f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.396 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'ec2_ids' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1190: 305 pgs: 305 active+clean; 449 MiB data, 552 MiB used, 59 GiB / 60 GiB avail; 697 KiB/s rd, 2.2 MiB/s wr, 119 op/s
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.569 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'keypairs' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:06:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:53Z|00104|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct  7 10:06:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:53Z|00105|binding|INFO|Releasing lport 60dd3b69-c15d-4f1f-8348-1807afc1578d from this chassis (sb_readonly=0)
Oct  7 10:06:53 np0005473739 nova_compute[259550]: 2025-10-07 14:06:53.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:54 np0005473739 nova_compute[259550]: 2025-10-07 14:06:54.235 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Creating config drive at /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config#033[00m
Oct  7 10:06:54 np0005473739 nova_compute[259550]: 2025-10-07 14:06:54.240 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf1xjdx7z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:54 np0005473739 nova_compute[259550]: 2025-10-07 14:06:54.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.278 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:06:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.279 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:06:54 np0005473739 nova_compute[259550]: 2025-10-07 14:06:54.374 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf1xjdx7z" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:54 np0005473739 nova_compute[259550]: 2025-10-07 14:06:54.402 2 DEBUG nova.storage.rbd_utils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] rbd image ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:06:54 np0005473739 nova_compute[259550]: 2025-10-07 14:06:54.406 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:06:54 np0005473739 nova_compute[259550]: 2025-10-07 14:06:54.870 2 DEBUG oslo_concurrency.processutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config ddf09c33-d956-404b-a5d8-44a3727f9a3b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:06:54 np0005473739 nova_compute[259550]: 2025-10-07 14:06:54.871 2 INFO nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting local config drive /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b/disk.config because it was imported into RBD.#033[00m
Oct  7 10:06:54 np0005473739 kernel: tap9b25db0b-24: entered promiscuous mode
Oct  7 10:06:54 np0005473739 NetworkManager[44949]: <info>  [1759846014.9235] manager: (tap9b25db0b-24): new Tun device (/org/freedesktop/NetworkManager/Devices/64)
Oct  7 10:06:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:54Z|00106|binding|INFO|Claiming lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 for this chassis.
Oct  7 10:06:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:54Z|00107|binding|INFO|9b25db0b-246e-456c-82d7-cf361c57f9c5: Claiming fa:16:3e:6c:03:d4 10.100.0.7
Oct  7 10:06:54 np0005473739 nova_compute[259550]: 2025-10-07 14:06:54.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:54Z|00108|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 ovn-installed in OVS
Oct  7 10:06:54 np0005473739 nova_compute[259550]: 2025-10-07 14:06:54.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:54 np0005473739 nova_compute[259550]: 2025-10-07 14:06:54.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:54 np0005473739 systemd-udevd[286700]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:06:54 np0005473739 NetworkManager[44949]: <info>  [1759846014.9631] device (tap9b25db0b-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:06:54 np0005473739 NetworkManager[44949]: <info>  [1759846014.9651] device (tap9b25db0b-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:06:54 np0005473739 systemd-machined[214580]: New machine qemu-20-instance-0000000a.
Oct  7 10:06:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:06:54Z|00109|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 up in Southbound
Oct  7 10:06:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.973 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:06:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.974 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 bound to our chassis#033[00m
Oct  7 10:06:54 np0005473739 systemd[1]: Started Virtual Machine qemu-20-instance-0000000a.
Oct  7 10:06:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.976 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:06:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:54.993 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f92d953c-91c6-483f-95cd-1e9750a5889d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.024 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aadcd0e3-c2e9-4a5d-879a-4b1557570b78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.027 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0f8686fb-011a-44fb-9716-cae42ce202a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.060 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0b32027b-321f-4de9-819b-10e6d0f17e1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.064 2 DEBUG nova.network.neutron [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updated VIF entry in instance network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.065 2 DEBUG nova.network.neutron [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:06:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.082 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[01a14f9e-04ca-48ef-a2bc-7ca5ea8b990c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 22, 'rx_bytes': 1168, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 22, 'rx_bytes': 1168, 'tx_bytes': 1112, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286717, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.102 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fb16ab-f999-473f-90a0-5cb4b843beeb]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286718, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286718, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:06:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.104 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.108 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.109 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.109 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:06:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:06:55.109 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.204 2 DEBUG oslo_concurrency.lockutils [req-caa124e1-7804-4f01-8378-49b05338681f req-ca25ab00-2b92-4b41-925a-01b43f822cc4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1191: 305 pgs: 305 active+clean; 386 MiB data, 512 MiB used, 59 GiB / 60 GiB avail; 742 KiB/s rd, 3.3 MiB/s wr, 185 op/s
Oct  7 10:06:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.854 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for ddf09c33-d956-404b-a5d8-44a3727f9a3b due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.855 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846015.8535411, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.855 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.858 2 DEBUG nova.compute.manager [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.858 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.862 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance spawned successfully.#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.862 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.976 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:55 np0005473739 nova_compute[259550]: 2025-10-07 14:06:55.981 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.179 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.180 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846015.855435, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.180 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Started (Lifecycle Event)#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.186 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.186 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.187 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.187 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.188 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.188 2 DEBUG nova.virt.libvirt.driver [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.205 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.209 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.256 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.362 2 DEBUG nova.compute.manager [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.492 2 DEBUG nova.compute.manager [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.492 2 DEBUG oslo_concurrency.lockutils [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.493 2 DEBUG oslo_concurrency.lockutils [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.493 2 DEBUG oslo_concurrency.lockutils [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.493 2 DEBUG nova.compute.manager [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.493 2 WARNING nova.compute.manager [req-f98fb5c3-d62c-4df5-a4c0-00ff19b8387e req-87d8feb1-4564-4e62-9c7c-98dbbc02d1d5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.518 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.519 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.519 2 DEBUG nova.objects.instance [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:06:56 np0005473739 nova_compute[259550]: 2025-10-07 14:06:56.653 2 DEBUG oslo_concurrency.lockutils [None req-ac8918d2-436c-4e66-9cdd-11e1d77f1d86 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1192: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 257 KiB/s rd, 2.3 MiB/s wr, 133 op/s
Oct  7 10:06:57 np0005473739 nova_compute[259550]: 2025-10-07 14:06:57.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:06:59 np0005473739 podman[286761]: 2025-10-07 14:06:59.111979103 +0000 UTC m=+0.089716269 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  7 10:06:59 np0005473739 nova_compute[259550]: 2025-10-07 14:06:59.303 2 DEBUG nova.compute.manager [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:59 np0005473739 nova_compute[259550]: 2025-10-07 14:06:59.303 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:06:59 np0005473739 nova_compute[259550]: 2025-10-07 14:06:59.304 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:06:59 np0005473739 nova_compute[259550]: 2025-10-07 14:06:59.304 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:06:59 np0005473739 nova_compute[259550]: 2025-10-07 14:06:59.304 2 DEBUG nova.compute.manager [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:06:59 np0005473739 nova_compute[259550]: 2025-10-07 14:06:59.304 2 WARNING nova.compute.manager [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state error and task_state None.#033[00m
Oct  7 10:06:59 np0005473739 nova_compute[259550]: 2025-10-07 14:06:59.304 2 DEBUG nova.compute.manager [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:06:59 np0005473739 nova_compute[259550]: 2025-10-07 14:06:59.305 2 DEBUG nova.compute.manager [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing instance network info cache due to event network-changed-d06bd5a4-d9b7-4791-a387-d190eb1457f6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:06:59 np0005473739 nova_compute[259550]: 2025-10-07 14:06:59.305 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:06:59 np0005473739 nova_compute[259550]: 2025-10-07 14:06:59.305 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:06:59 np0005473739 nova_compute[259550]: 2025-10-07 14:06:59.305 2 DEBUG nova.network.neutron [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Refreshing network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:06:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1193: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.9 MiB/s wr, 152 op/s
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.039 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.040 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.040 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.336 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.337 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.337 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.337 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.338 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.339 2 INFO nova.compute.manager [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Terminating instance#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.340 2 DEBUG nova.compute.manager [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:07:00 np0005473739 kernel: tap184f2379-84 (unregistering): left promiscuous mode
Oct  7 10:07:00 np0005473739 NetworkManager[44949]: <info>  [1759846020.4717] device (tap184f2379-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:00Z|00110|binding|INFO|Releasing lport 184f2379-8442-414e-bccb-6f5e5a314e72 from this chassis (sb_readonly=0)
Oct  7 10:07:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:00Z|00111|binding|INFO|Setting lport 184f2379-8442-414e-bccb-6f5e5a314e72 down in Southbound
Oct  7 10:07:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:00Z|00112|binding|INFO|Removing iface tap184f2379-84 ovn-installed in OVS
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.539 2 DEBUG nova.network.neutron [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updated VIF entry in instance network info cache for port d06bd5a4-d9b7-4791-a387-d190eb1457f6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.540 2 DEBUG nova.network.neutron [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [{"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:07:00 np0005473739 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Oct  7 10:07:00 np0005473739 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000e.scope: Consumed 15.247s CPU time.
Oct  7 10:07:00 np0005473739 systemd-machined[214580]: Machine qemu-16-instance-0000000e terminated.
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.559 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c8:34:23 10.100.0.11'], port_security=['fa:16:3e:c8:34:23 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '348c80b7-7f65-4300-9dab-6a333f1b2c74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=184f2379-8442-414e-bccb-6f5e5a314e72) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.560 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 184f2379-8442-414e-bccb-6f5e5a314e72 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.561 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.574 2 DEBUG oslo_concurrency.lockutils [req-babcefd0-0ada-41a9-bce8-aa3046f85b85 req-f012bab7-6bbc-4541-8f7e-df75da87ae41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4b95692e-088d-452c-83b7-4c50df73b8fe" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41940f52-74f9-441e-b74c-f6e9b0aa13d2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.595 2 INFO nova.virt.libvirt.driver [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Instance destroyed successfully.#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.596 2 DEBUG nova.objects.instance [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid 348c80b7-7f65-4300-9dab-6a333f1b2c74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.617 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2c77311e-15e4-4690-8b41-659c03dd6d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.623 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[febb76cb-9535-402b-b6c6-46899857881a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.658 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b31b9394-cde3-46d5-81f4-495af53aa997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.670 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.670 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.671 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.671 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.671 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.673 2 INFO nova.compute.manager [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Terminating instance#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.674 2 DEBUG nova.compute.manager [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.681 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3f9f0835-1c75-40b3-986d-19db50224073]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 24, 'rx_bytes': 1168, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 24, 'rx_bytes': 1168, 'tx_bytes': 1196, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286810, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.703 2 DEBUG nova.virt.libvirt.vif [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-353618528',display_name='tempest-ServersAdminTestJSON-server-353618528',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-353618528',id=14,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-tg713xdg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:06:01Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=348c80b7-7f65-4300-9dab-6a333f1b2c74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.703 2 DEBUG nova.network.os_vif_util [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "184f2379-8442-414e-bccb-6f5e5a314e72", "address": "fa:16:3e:c8:34:23", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap184f2379-84", "ovs_interfaceid": "184f2379-8442-414e-bccb-6f5e5a314e72", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.704 2 DEBUG nova.network.os_vif_util [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.705 2 DEBUG os_vif [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap184f2379-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.707 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c9710863-0b8b-4271-a144-f836f43cd3c2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286811, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 286811, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.712 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.712 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.712 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.713 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.714 2 INFO os_vif [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c8:34:23,bridge_name='br-int',has_traffic_filtering=True,id=184f2379-8442-414e-bccb-6f5e5a314e72,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap184f2379-84')#033[00m
Oct  7 10:07:00 np0005473739 kernel: tapd06bd5a4-d9 (unregistering): left promiscuous mode
Oct  7 10:07:00 np0005473739 NetworkManager[44949]: <info>  [1759846020.8197] device (tapd06bd5a4-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:07:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:00Z|00113|binding|INFO|Releasing lport d06bd5a4-d9b7-4791-a387-d190eb1457f6 from this chassis (sb_readonly=0)
Oct  7 10:07:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:00Z|00114|binding|INFO|Setting lport d06bd5a4-d9b7-4791-a387-d190eb1457f6 down in Southbound
Oct  7 10:07:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:00Z|00115|binding|INFO|Removing iface tapd06bd5a4-d9 ovn-installed in OVS
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.857 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:f8:94 10.100.0.7'], port_security=['fa:16:3e:65:f8:94 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4b95692e-088d-452c-83b7-4c50df73b8fe', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80f412c9-c511-49ba-a8ca-ff830fcff803', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '711e531670b1460a923f2f91ce0f63db', 'neutron:revision_number': '4', 'neutron:security_group_ids': '087bdc78-7c9e-48d3-b83f-b37a1a3f8ec7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2e75ebab-8b50-4bfb-bcab-f5fcada82242, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d06bd5a4-d9b7-4791-a387-d190eb1457f6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.858 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d06bd5a4-d9b7-4791-a387-d190eb1457f6 in datapath 80f412c9-c511-49ba-a8ca-ff830fcff803 unbound from our chassis#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.860 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80f412c9-c511-49ba-a8ca-ff830fcff803, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.861 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[74223833-3950-4235-93ca-1de7aead968f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:00.861 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803 namespace which is not needed anymore#033[00m
Oct  7 10:07:00 np0005473739 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Oct  7 10:07:00 np0005473739 systemd[1]: machine-qemu\x2d17\x2dinstance\x2d0000000f.scope: Consumed 15.427s CPU time.
Oct  7 10:07:00 np0005473739 systemd-machined[214580]: Machine qemu-17-instance-0000000f terminated.
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.917 2 INFO nova.virt.libvirt.driver [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Instance destroyed successfully.#033[00m
Oct  7 10:07:00 np0005473739 nova_compute[259550]: 2025-10-07 14:07:00.917 2 DEBUG nova.objects.instance [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lazy-loading 'resources' on Instance uuid 4b95692e-088d-452c-83b7-4c50df73b8fe obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:01 np0005473739 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [NOTICE]   (284582) : haproxy version is 2.8.14-c23fe91
Oct  7 10:07:01 np0005473739 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [NOTICE]   (284582) : path to executable is /usr/sbin/haproxy
Oct  7 10:07:01 np0005473739 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [ALERT]    (284582) : Current worker (284589) exited with code 143 (Terminated)
Oct  7 10:07:01 np0005473739 neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803[284528]: [WARNING]  (284582) : All workers exited. Exiting... (0)
Oct  7 10:07:01 np0005473739 systemd[1]: libpod-44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084.scope: Deactivated successfully.
Oct  7 10:07:01 np0005473739 podman[286861]: 2025-10-07 14:07:01.053631636 +0000 UTC m=+0.068623615 container died 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.068 2 DEBUG nova.virt.libvirt.vif [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-FloatingIPsAssociationTestJSON-server-99112055',display_name='tempest-FloatingIPsAssociationTestJSON-server-99112055',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-floatingipsassociationtestjson-server-99112055',id=15,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='711e531670b1460a923f2f91ce0f63db',ramdisk_id='',reservation_id='r-ib7l0lly',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model
='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-FloatingIPsAssociationTestJSON-1021362371',owner_user_name='tempest-FloatingIPsAssociationTestJSON-1021362371-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:06:07Z,user_data=None,user_id='7ff78293ed4e40f9954a0b0e6fca0caa',uuid=4b95692e-088d-452c-83b7-4c50df73b8fe,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.069 2 DEBUG nova.network.os_vif_util [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converting VIF {"id": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "address": "fa:16:3e:65:f8:94", "network": {"id": "80f412c9-c511-49ba-a8ca-ff830fcff803", "bridge": "br-int", "label": "tempest-FloatingIPsAssociationTestJSON-317578742-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "711e531670b1460a923f2f91ce0f63db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd06bd5a4-d9", "ovs_interfaceid": "d06bd5a4-d9b7-4791-a387-d190eb1457f6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.070 2 DEBUG nova.network.os_vif_util [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.070 2 DEBUG os_vif [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.072 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd06bd5a4-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.078 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.084 2 INFO os_vif [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:f8:94,bridge_name='br-int',has_traffic_filtering=True,id=d06bd5a4-d9b7-4791-a387-d190eb1457f6,network=Network(80f412c9-c511-49ba-a8ca-ff830fcff803),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd06bd5a4-d9')#033[00m
Oct  7 10:07:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2ed1157e3fd8b123979ce2884e8cf7ae250f676278229a741e499d63f106c27c-merged.mount: Deactivated successfully.
Oct  7 10:07:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084-userdata-shm.mount: Deactivated successfully.
Oct  7 10:07:01 np0005473739 podman[286861]: 2025-10-07 14:07:01.181289298 +0000 UTC m=+0.196281257 container cleanup 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:07:01 np0005473739 systemd[1]: libpod-conmon-44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084.scope: Deactivated successfully.
Oct  7 10:07:01 np0005473739 podman[286901]: 2025-10-07 14:07:01.226762914 +0000 UTC m=+0.095032801 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  7 10:07:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.280 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:01 np0005473739 podman[286921]: 2025-10-07 14:07:01.311056907 +0000 UTC m=+0.099249504 container remove 44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:07:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.318 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee257e4-4b70-4316-ab7d-145e7cd8397d]: (4, ('Tue Oct  7 02:07:00 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803 (44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084)\n44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084\nTue Oct  7 02:07:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803 (44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084)\n44df3b732d01d289c8f89d188b21989f40929f8431ccc04f9ff1162d22c5e084\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.319 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8e2bf33b-4630-4fd6-990d-0050ebbcc0be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.320 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80f412c9-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:01 np0005473739 kernel: tap80f412c9-c0: left promiscuous mode
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.342 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9abf47b-4d33-4751-9704-9c1d41fc1e23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[82196678-78f8-4d2f-bc30-9d8239523ae8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.384 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[87ef612b-80c2-4c77-9884-539e0688f805]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.409 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd3cd8c8-81a3-403d-8977-1e2f14e79b49]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 654905, 'reachable_time': 15332, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 286943, 'error': None, 'target': 'ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:01 np0005473739 systemd[1]: run-netns-ovnmeta\x2d80f412c9\x2dc511\x2d49ba\x2da8ca\x2dff830fcff803.mount: Deactivated successfully.
Oct  7 10:07:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.417 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-80f412c9-c511-49ba-a8ca-ff830fcff803 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:07:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:01.417 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[be53bc8e-a122-4fd3-8122-89580faf8896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1194: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 153 op/s
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.893 2 DEBUG nova.compute.manager [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-unplugged-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.894 2 DEBUG oslo_concurrency.lockutils [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.894 2 DEBUG oslo_concurrency.lockutils [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.894 2 DEBUG oslo_concurrency.lockutils [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.894 2 DEBUG nova.compute.manager [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] No waiting events found dispatching network-vif-unplugged-184f2379-8442-414e-bccb-6f5e5a314e72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.894 2 DEBUG nova.compute.manager [req-b596d2ff-5781-4ca1-9ae7-de6385ebaac7 req-42b491d1-2c73-4a32-8df4-7e800a4d7548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-unplugged-184f2379-8442-414e-bccb-6f5e5a314e72 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.971 2 DEBUG nova.compute.manager [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-unplugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.971 2 DEBUG oslo_concurrency.lockutils [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.972 2 DEBUG oslo_concurrency.lockutils [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.972 2 DEBUG oslo_concurrency.lockutils [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.972 2 DEBUG nova.compute.manager [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] No waiting events found dispatching network-vif-unplugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:07:01 np0005473739 nova_compute[259550]: 2025-10-07 14:07:01.972 2 DEBUG nova.compute.manager [req-b4b2c494-1a43-421b-bf8f-b429d9e82690 req-af667705-5bd0-48a2-86d8-bbcb2048027a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-unplugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:07:02 np0005473739 nova_compute[259550]: 2025-10-07 14:07:02.783 2 INFO nova.virt.libvirt.driver [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Deleting instance files /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74_del#033[00m
Oct  7 10:07:02 np0005473739 nova_compute[259550]: 2025-10-07 14:07:02.784 2 INFO nova.virt.libvirt.driver [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Deletion of /var/lib/nova/instances/348c80b7-7f65-4300-9dab-6a333f1b2c74_del complete#033[00m
Oct  7 10:07:02 np0005473739 nova_compute[259550]: 2025-10-07 14:07:02.898 2 INFO nova.compute.manager [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Took 2.56 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:07:02 np0005473739 nova_compute[259550]: 2025-10-07 14:07:02.899 2 DEBUG oslo.service.loopingcall [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:07:02 np0005473739 nova_compute[259550]: 2025-10-07 14:07:02.899 2 DEBUG nova.compute.manager [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:07:02 np0005473739 nova_compute[259550]: 2025-10-07 14:07:02.899 2 DEBUG nova.network.neutron [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:07:02 np0005473739 nova_compute[259550]: 2025-10-07 14:07:02.950 2 INFO nova.virt.libvirt.driver [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Deleting instance files /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe_del#033[00m
Oct  7 10:07:02 np0005473739 nova_compute[259550]: 2025-10-07 14:07:02.954 2 INFO nova.virt.libvirt.driver [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Deletion of /var/lib/nova/instances/4b95692e-088d-452c-83b7-4c50df73b8fe_del complete
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.056 2 INFO nova.compute.manager [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Took 2.38 seconds to destroy the instance on the hypervisor.
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.057 2 DEBUG oslo.service.loopingcall [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.057 2 DEBUG nova.compute.manager [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.057 2 DEBUG nova.network.neutron [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.416 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846008.4146168, 115674ad-2273-4c42-b9ae-d380c2c005d6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.416 2 INFO nova.compute.manager [-] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] VM Stopped (Lifecycle Event)
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.451 2 DEBUG nova.compute.manager [None req-e872ea26-ca95-4feb-8cb2-cf4b20658361 - - - - - -] [instance: 115674ad-2273-4c42-b9ae-d380c2c005d6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1195: 305 pgs: 305 active+clean; 405 MiB data, 502 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 141 op/s
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.562 2 DEBUG nova.network.neutron [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.607 2 INFO nova.compute.manager [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Took 0.55 seconds to deallocate network for instance.
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.684 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.685 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.700 2 DEBUG nova.network.neutron [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.733 2 INFO nova.compute.manager [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Took 0.83 seconds to deallocate network for instance.
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.781 2 DEBUG oslo_concurrency.processutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.810 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.994 2 DEBUG nova.compute.manager [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.994 2 DEBUG oslo_concurrency.lockutils [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.995 2 DEBUG oslo_concurrency.lockutils [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.995 2 DEBUG oslo_concurrency.lockutils [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.995 2 DEBUG nova.compute.manager [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] No waiting events found dispatching network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.995 2 WARNING nova.compute.manager [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received unexpected event network-vif-plugged-184f2379-8442-414e-bccb-6f5e5a314e72 for instance with vm_state deleted and task_state None.
Oct  7 10:07:03 np0005473739 nova_compute[259550]: 2025-10-07 14:07:03.995 2 DEBUG nova.compute.manager [req-e01d3919-6648-41ce-be7c-3de44722d312 req-4be98727-ec5f-4c47-b7ce-9f10057bd060 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-deleted-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.165 2 DEBUG nova.compute.manager [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.165 2 DEBUG oslo_concurrency.lockutils [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.165 2 DEBUG oslo_concurrency.lockutils [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.165 2 DEBUG oslo_concurrency.lockutils [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.166 2 DEBUG nova.compute.manager [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] No waiting events found dispatching network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.166 2 WARNING nova.compute.manager [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Received unexpected event network-vif-plugged-d06bd5a4-d9b7-4791-a387-d190eb1457f6 for instance with vm_state deleted and task_state None.
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.166 2 DEBUG nova.compute.manager [req-81705dd3-4aee-48d3-af37-a3a5a92a1407 req-e41e9903-eda3-4fbd-a6ae-616b5cdcb1de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Received event network-vif-deleted-184f2379-8442-414e-bccb-6f5e5a314e72 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:07:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3671485503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.238 2 DEBUG oslo_concurrency.processutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.245 2 DEBUG nova.compute.provider_tree [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.327 2 DEBUG nova.scheduler.client.report [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.437 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.439 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.493 2 INFO nova.scheduler.client.report [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Deleted allocations for instance 4b95692e-088d-452c-83b7-4c50df73b8fe
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.531 2 DEBUG oslo_concurrency.processutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:07:04 np0005473739 nova_compute[259550]: 2025-10-07 14:07:04.783 2 DEBUG oslo_concurrency.lockutils [None req-01ff00f8-c45c-4140-ba47-168835e71636 7ff78293ed4e40f9954a0b0e6fca0caa 711e531670b1460a923f2f91ce0f63db - - default default] Lock "4b95692e-088d-452c-83b7-4c50df73b8fe" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657177471' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:05 np0005473739 nova_compute[259550]: 2025-10-07 14:07:05.036 2 DEBUG oslo_concurrency.processutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:07:05 np0005473739 nova_compute[259550]: 2025-10-07 14:07:05.043 2 DEBUG nova.compute.provider_tree [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:07:05 np0005473739 nova_compute[259550]: 2025-10-07 14:07:05.411 2 DEBUG nova.scheduler.client.report [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:07:05 np0005473739 nova_compute[259550]: 2025-10-07 14:07:05.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1196: 305 pgs: 305 active+clean; 307 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 188 op/s
Oct  7 10:07:05 np0005473739 nova_compute[259550]: 2025-10-07 14:07:05.578 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:05 np0005473739 nova_compute[259550]: 2025-10-07 14:07:05.612 2 INFO nova.scheduler.client.report [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Deleted allocations for instance 348c80b7-7f65-4300-9dab-6a333f1b2c74
Oct  7 10:07:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:07:05 np0005473739 nova_compute[259550]: 2025-10-07 14:07:05.831 2 DEBUG oslo_concurrency.lockutils [None req-4c6883fd-808d-4582-8eea-bee1cb69e3a2 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "348c80b7-7f65-4300-9dab-6a333f1b2c74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.494s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.179 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.180 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.180 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.180 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.180 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.181 2 INFO nova.compute.manager [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Terminating instance
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.182 2 DEBUG nova.compute.manager [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:07:06 np0005473739 kernel: tap132e9e5d-ea (unregistering): left promiscuous mode
Oct  7 10:07:06 np0005473739 NetworkManager[44949]: <info>  [1759846026.3764] device (tap132e9e5d-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:06Z|00116|binding|INFO|Releasing lport 132e9e5d-eaab-437e-a82e-d49f6c4a09df from this chassis (sb_readonly=0)
Oct  7 10:07:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:06Z|00117|binding|INFO|Setting lport 132e9e5d-eaab-437e-a82e-d49f6c4a09df down in Southbound
Oct  7 10:07:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:06Z|00118|binding|INFO|Removing iface tap132e9e5d-ea ovn-installed in OVS
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.391 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:2c:66 10.100.0.10'], port_security=['fa:16:3e:66:2c:66 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '74094438-0995-4031-9943-cc85a5ef4f57', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=132e9e5d-eaab-437e-a82e-d49f6c4a09df) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.392 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 132e9e5d-eaab-437e-a82e-d49f6c4a09df in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.394 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.420 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[abbf938c-17e3-4e41-9f58-68a62e095c9e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:07:06 np0005473739 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Oct  7 10:07:06 np0005473739 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000d.scope: Consumed 16.561s CPU time.
Oct  7 10:07:06 np0005473739 systemd-machined[214580]: Machine qemu-15-instance-0000000d terminated.
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.457 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e5fd5c8e-2480-46e4-a7de-76b3e3111c19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.462 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5492a0a5-999d-404f-9105-12aa2b3c43c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.498 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3915fb63-8332-4984-b54f-62a63ce13636]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.522 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[104eefc0-56ef-4ddd-9a7f-f20e0ecf3fc9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 26, 'rx_bytes': 1168, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 26, 'rx_bytes': 1168, 'tx_bytes': 1280, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287001, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.537 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4fee9561-4f12-42e1-8057-7912a73afed9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287002, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287002, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.539 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.545 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.546 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.546 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:06.547 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.634 2 INFO nova.virt.libvirt.driver [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Instance destroyed successfully.#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.636 2 DEBUG nova.objects.instance [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid 74094438-0995-4031-9943-cc85a5ef4f57 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.693 2 DEBUG nova.virt.libvirt.vif [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-222035485',display_name='tempest-ServersAdminTestJSON-server-222035485',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-222035485',id=13,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-lr6293xm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:05:45Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=74094438-0995-4031-9943-cc85a5ef4f57,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.695 2 DEBUG nova.network.os_vif_util [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "address": "fa:16:3e:66:2c:66", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132e9e5d-ea", "ovs_interfaceid": "132e9e5d-eaab-437e-a82e-d49f6c4a09df", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.696 2 DEBUG nova.network.os_vif_util [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.696 2 DEBUG os_vif [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.698 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap132e9e5d-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.704 2 INFO os_vif [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:2c:66,bridge_name='br-int',has_traffic_filtering=True,id=132e9e5d-eaab-437e-a82e-d49f6c4a09df,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132e9e5d-ea')#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.723 2 DEBUG nova.compute.manager [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-unplugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.724 2 DEBUG oslo_concurrency.lockutils [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.724 2 DEBUG oslo_concurrency.lockutils [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.724 2 DEBUG oslo_concurrency.lockutils [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.724 2 DEBUG nova.compute.manager [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] No waiting events found dispatching network-vif-unplugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:07:06 np0005473739 nova_compute[259550]: 2025-10-07 14:07:06.725 2 DEBUG nova.compute.manager [req-3fa3fd81-187b-4c53-b8ab-e17769a06f06 req-71977daf-99f9-498d-a56e-a203d251ee1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-unplugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:07:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1197: 305 pgs: 305 active+clean; 246 MiB data, 409 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 719 KiB/s wr, 130 op/s
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.070 2 DEBUG nova.compute.manager [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.071 2 DEBUG oslo_concurrency.lockutils [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "74094438-0995-4031-9943-cc85a5ef4f57-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.071 2 DEBUG oslo_concurrency.lockutils [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.071 2 DEBUG oslo_concurrency.lockutils [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.071 2 DEBUG nova.compute.manager [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] No waiting events found dispatching network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.071 2 WARNING nova.compute.manager [req-d4b4c03f-2484-4711-a17c-b9fd65a1c110 req-58af5a3e-33b2-4010-a768-20ce0321f2ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received unexpected event network-vif-plugged-132e9e5d-eaab-437e-a82e-d49f6c4a09df for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:07:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:09Z|00119|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:09Z|00120|binding|INFO|Releasing lport 47854cb1-b863-4b06-b664-27d734ff5751 from this chassis (sb_readonly=0)
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1198: 305 pgs: 305 active+clean; 191 MiB data, 382 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 514 KiB/s wr, 139 op/s
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.594 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.594 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.613 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.675 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.675 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.682 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.683 2 INFO nova.compute.claims [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:07:09 np0005473739 nova_compute[259550]: 2025-10-07 14:07:09.870 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.029 2 INFO nova.virt.libvirt.driver [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Deleting instance files /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57_del#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.031 2 INFO nova.virt.libvirt.driver [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Deletion of /var/lib/nova/instances/74094438-0995-4031-9943-cc85a5ef4f57_del complete#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.090 2 INFO nova.compute.manager [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Took 3.91 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.091 2 DEBUG oslo.service.loopingcall [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.091 2 DEBUG nova.compute.manager [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.092 2 DEBUG nova.network.neutron [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:07:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1766614299' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.337 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.345 2 DEBUG nova.compute.provider_tree [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.357 2 DEBUG nova.scheduler.client.report [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.376 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.377 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.417 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.417 2 DEBUG nova.network.neutron [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.432 2 INFO nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.448 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.539 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.541 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.541 2 INFO nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Creating image(s)#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.561 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.583 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.604 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.608 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:10Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6c:03:d4 10.100.0.7
Oct  7 10:07:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:10Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6c:03:d4 10.100.0.7
Oct  7 10:07:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.673 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.674 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.675 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.675 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.699 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.703 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.811 2 DEBUG nova.network.neutron [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  7 10:07:10 np0005473739 nova_compute[259550]: 2025-10-07 14:07:10.812 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:07:11 np0005473739 nova_compute[259550]: 2025-10-07 14:07:11.324 2 DEBUG nova.network.neutron [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:07:11 np0005473739 nova_compute[259550]: 2025-10-07 14:07:11.403 2 INFO nova.compute.manager [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Took 1.31 seconds to deallocate network for instance.#033[00m
Oct  7 10:07:11 np0005473739 nova_compute[259550]: 2025-10-07 14:07:11.470 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:11 np0005473739 nova_compute[259550]: 2025-10-07 14:07:11.471 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:11 np0005473739 nova_compute[259550]: 2025-10-07 14:07:11.486 2 DEBUG nova.compute.manager [req-9a16ffbe-5df6-4696-9cc2-6c627716c9ec req-9cfaf2cd-84be-4067-a798-83b0a5fadd7e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Received event network-vif-deleted-132e9e5d-eaab-437e-a82e-d49f6c4a09df external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1199: 305 pgs: 305 active+clean; 176 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 861 KiB/s rd, 1019 KiB/s wr, 126 op/s
Oct  7 10:07:11 np0005473739 nova_compute[259550]: 2025-10-07 14:07:11.573 2 DEBUG oslo_concurrency.processutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:11 np0005473739 nova_compute[259550]: 2025-10-07 14:07:11.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/11611704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:12 np0005473739 nova_compute[259550]: 2025-10-07 14:07:12.048 2 DEBUG oslo_concurrency.processutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:12 np0005473739 nova_compute[259550]: 2025-10-07 14:07:12.056 2 DEBUG nova.compute.provider_tree [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:07:12 np0005473739 nova_compute[259550]: 2025-10-07 14:07:12.106 2 DEBUG nova.scheduler.client.report [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:07:12 np0005473739 nova_compute[259550]: 2025-10-07 14:07:12.218 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.748s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:12 np0005473739 nova_compute[259550]: 2025-10-07 14:07:12.295 2 INFO nova.scheduler.client.report [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Deleted allocations for instance 74094438-0995-4031-9943-cc85a5ef4f57#033[00m
Oct  7 10:07:12 np0005473739 nova_compute[259550]: 2025-10-07 14:07:12.379 2 DEBUG oslo_concurrency.lockutils [None req-a0f569a5-8e23-4347-9eb5-c14b3d732fa9 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "74094438-0995-4031-9943-cc85a5ef4f57" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.199s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:12 np0005473739 nova_compute[259550]: 2025-10-07 14:07:12.789 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:12 np0005473739 nova_compute[259550]: 2025-10-07 14:07:12.862 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] resizing rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:07:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1200: 305 pgs: 305 active+clean; 176 MiB data, 374 MiB used, 60 GiB / 60 GiB avail; 114 KiB/s rd, 1019 KiB/s wr, 102 op/s
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.548 2 DEBUG nova.objects.instance [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'migration_context' on Instance uuid 809b049f-447c-4cdd-b8d2-8325f6d3b576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.561 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.561 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Ensure instance console log exists: /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.562 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.562 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.563 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.565 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.569 2 WARNING nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.573 2 DEBUG nova.virt.libvirt.host [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.573 2 DEBUG nova.virt.libvirt.host [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.576 2 DEBUG nova.virt.libvirt.host [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.577 2 DEBUG nova.virt.libvirt.host [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.578 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.578 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.579 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.579 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.579 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.580 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.580 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.580 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.581 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.581 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.581 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.581 2 DEBUG nova.virt.hardware [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.585 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.758 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.759 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.759 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.759 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.762 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.763 2 INFO nova.compute.manager [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Terminating instance#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.764 2 DEBUG nova.compute.manager [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:07:13 np0005473739 kernel: tap3486260f-fd (unregistering): left promiscuous mode
Oct  7 10:07:13 np0005473739 NetworkManager[44949]: <info>  [1759846033.9083] device (tap3486260f-fd): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:07:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:13Z|00121|binding|INFO|Releasing lport 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 from this chassis (sb_readonly=0)
Oct  7 10:07:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:13Z|00122|binding|INFO|Setting lport 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 down in Southbound
Oct  7 10:07:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:13Z|00123|binding|INFO|Removing iface tap3486260f-fd ovn-installed in OVS
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.933 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:68:5d:93 10.100.0.9'], port_security=['fa:16:3e:68:5d:93 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '7322f2d1-885e-4e41-8a96-e90d4ddc6c38', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3486260f-fd35-48fb-a925-cbe6f4a1a9f5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:07:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.934 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3486260f-fd35-48fb-a925-cbe6f4a1a9f5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis#033[00m
Oct  7 10:07:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.941 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1eabd9ee-6333-432b-b50d-9679677d38f6#033[00m
Oct  7 10:07:13 np0005473739 nova_compute[259550]: 2025-10-07 14:07:13.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.966 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e12e297e-3c47-424c-886f-14ccf7f1cdb0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:13 np0005473739 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Oct  7 10:07:13 np0005473739 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000b.scope: Consumed 17.821s CPU time.
Oct  7 10:07:13 np0005473739 systemd-machined[214580]: Machine qemu-13-instance-0000000b terminated.
Oct  7 10:07:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.992 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[72730b9d-42b8-4545-835a-38de165d8273]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:13.995 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[341a7778-8ef1-410c-8b3a-5088fbe55c67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.022 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[980c64c4-92fd-4ff0-b07a-c1e545040c47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.042 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1b060a57-7ac5-4219-b55d-5619165c8d57]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1eabd9ee-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2e:d4:d3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 16, 'tx_packets': 28, 'rx_bytes': 1168, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 16, 'tx_packets': 28, 'rx_bytes': 1168, 'tx_bytes': 1364, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650998, 'reachable_time': 37221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 287278, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:07:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/160664761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:07:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.060 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d43e55d-65b0-44a2-b7a9-9e97e6ab0a8c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651013, 'tstamp': 651013}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287280, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1eabd9ee-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 651016, 'tstamp': 651016}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 287280, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.062 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.069 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.101 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1eabd9ee-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.101 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:07:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.102 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1eabd9ee-60, col_values=(('external_ids', {'iface-id': '47854cb1-b863-4b06-b664-27d734ff5751'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:14.102 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.111 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.117 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.208 2 INFO nova.virt.libvirt.driver [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Instance destroyed successfully.#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.209 2 DEBUG nova.objects.instance [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.224 2 DEBUG nova.virt.libvirt.vif [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:05:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-33708594',display_name='tempest-ServersAdminTestJSON-server-33708594',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-33708594',id=11,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:05:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-346ygca7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',i
mage_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:05:30Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=7322f2d1-885e-4e41-8a96-e90d4ddc6c38,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.225 2 DEBUG nova.network.os_vif_util [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "address": "fa:16:3e:68:5d:93", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3486260f-fd", "ovs_interfaceid": "3486260f-fd35-48fb-a925-cbe6f4a1a9f5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.226 2 DEBUG nova.network.os_vif_util [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.226 2 DEBUG os_vif [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.228 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3486260f-fd, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.235 2 INFO os_vif [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:68:5d:93,bridge_name='br-int',has_traffic_filtering=True,id=3486260f-fd35-48fb-a925-cbe6f4a1a9f5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3486260f-fd')#033[00m
Oct  7 10:07:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:07:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/36237976' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.577 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.578 2 DEBUG nova.objects.instance [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'pci_devices' on Instance uuid 809b049f-447c-4cdd-b8d2-8325f6d3b576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.591 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  <uuid>809b049f-447c-4cdd-b8d2-8325f6d3b576</uuid>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  <name>instance-00000011</name>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <nova:name>tempest-LiveMigrationNegativeTest-server-842363532</nova:name>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:07:13</nova:creationTime>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:        <nova:user uuid="027a68b305ed4ef799375643ce9e5831">tempest-LiveMigrationNegativeTest-182284186-project-member</nova:user>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:        <nova:project uuid="64dfda49c33d49eb9d343ba7a8f90d6d">tempest-LiveMigrationNegativeTest-182284186</nova:project>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <entry name="serial">809b049f-447c-4cdd-b8d2-8325f6d3b576</entry>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <entry name="uuid">809b049f-447c-4cdd-b8d2-8325f6d3b576</entry>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/809b049f-447c-4cdd-b8d2-8325f6d3b576_disk">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/console.log" append="off"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:07:14 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:07:14 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:07:14 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:07:14 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.667 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.667 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.668 2 INFO nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Using config drive#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.691 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.913 2 INFO nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Creating config drive at /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config#033[00m
Oct  7 10:07:14 np0005473739 nova_compute[259550]: 2025-10-07 14:07:14.921 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd0b1ikph execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.056 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd0b1ikph" returned: 0 in 0.135s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.078 2 DEBUG nova.storage.rbd_utils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.081 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:07:15 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d5bfaa21-e0b8-4777-9891-a22f95873d8e does not exist
Oct  7 10:07:15 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 645aa166-edc4-427a-9dc9-e3073da22029 does not exist
Oct  7 10:07:15 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0415d47e-4e9b-4e32-b07b-5b30f72c60d1 does not exist
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.452 2 DEBUG oslo_concurrency.processutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config 809b049f-447c-4cdd-b8d2-8325f6d3b576_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.371s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.453 2 INFO nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Deleting local config drive /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576/disk.config because it was imported into RBD.
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1201: 305 pgs: 305 active+clean; 228 MiB data, 401 MiB used, 60 GiB / 60 GiB avail; 335 KiB/s rd, 3.3 MiB/s wr, 168 op/s
Oct  7 10:07:15 np0005473739 systemd-machined[214580]: New machine qemu-21-instance-00000011.
Oct  7 10:07:15 np0005473739 systemd[1]: Started Virtual Machine qemu-21-instance-00000011.
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.594 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846020.5930336, 348c80b7-7f65-4300-9dab-6a333f1b2c74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.596 2 INFO nova.compute.manager [-] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] VM Stopped (Lifecycle Event)
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.673 2 DEBUG nova.compute.manager [None req-bb845f3a-0c20-4f9d-b7fd-8fd062d0fcfa - - - - - -] [instance: 348c80b7-7f65-4300-9dab-6a333f1b2c74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:07:15 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.757 2 INFO nova.virt.libvirt.driver [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Deleting instance files /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38_del
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.758 2 INFO nova.virt.libvirt.driver [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Deletion of /var/lib/nova/instances/7322f2d1-885e-4e41-8a96-e90d4ddc6c38_del complete
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.911 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846020.9110177, 4b95692e-088d-452c-83b7-4c50df73b8fe => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.913 2 INFO nova.compute.manager [-] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] VM Stopped (Lifecycle Event)
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.937 2 INFO nova.compute.manager [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Took 2.17 seconds to destroy the instance on the hypervisor.
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.938 2 DEBUG oslo.service.loopingcall [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.938 2 DEBUG nova.compute.manager [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.938 2 DEBUG nova.network.neutron [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:07:15 np0005473739 nova_compute[259550]: 2025-10-07 14:07:15.942 2 DEBUG nova.compute.manager [None req-e08215e8-c508-4a75-90ad-29630b3abb9e - - - - - -] [instance: 4b95692e-088d-452c-83b7-4c50df73b8fe] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:16 np0005473739 podman[287698]: 2025-10-07 14:07:16.050503253 +0000 UTC m=+0.060704913 container create d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:07:16 np0005473739 systemd[1]: Started libpod-conmon-d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b.scope.
Oct  7 10:07:16 np0005473739 podman[287698]: 2025-10-07 14:07:16.018681832 +0000 UTC m=+0.028883522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:07:16 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:07:16 np0005473739 podman[287698]: 2025-10-07 14:07:16.174953369 +0000 UTC m=+0.185155029 container init d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 10:07:16 np0005473739 podman[287698]: 2025-10-07 14:07:16.184485334 +0000 UTC m=+0.194686984 container start d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:07:16 np0005473739 naughty_dubinsky[287730]: 167 167
Oct  7 10:07:16 np0005473739 systemd[1]: libpod-d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b.scope: Deactivated successfully.
Oct  7 10:07:16 np0005473739 podman[287698]: 2025-10-07 14:07:16.199804883 +0000 UTC m=+0.210006553 container attach d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 10:07:16 np0005473739 podman[287698]: 2025-10-07 14:07:16.200953383 +0000 UTC m=+0.211155043 container died d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default)
Oct  7 10:07:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5f53987f538ab19c836d25239c8db0f5c59bbe98cdcac0231d96fd81b28f627f-merged.mount: Deactivated successfully.
Oct  7 10:07:16 np0005473739 podman[287698]: 2025-10-07 14:07:16.531192729 +0000 UTC m=+0.541394369 container remove d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_dubinsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:07:16 np0005473739 systemd[1]: libpod-conmon-d9f70f19adb3dd1d01ed4f7e1401411b9499f9fadb99984136c6e388ead1249b.scope: Deactivated successfully.
Oct  7 10:07:16 np0005473739 nova_compute[259550]: 2025-10-07 14:07:16.705 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846036.70553, 809b049f-447c-4cdd-b8d2-8325f6d3b576 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:16 np0005473739 nova_compute[259550]: 2025-10-07 14:07:16.706 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] VM Resumed (Lifecycle Event)
Oct  7 10:07:16 np0005473739 nova_compute[259550]: 2025-10-07 14:07:16.709 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:07:16 np0005473739 nova_compute[259550]: 2025-10-07 14:07:16.709 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:07:16 np0005473739 nova_compute[259550]: 2025-10-07 14:07:16.713 2 INFO nova.virt.libvirt.driver [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance spawned successfully.
Oct  7 10:07:16 np0005473739 nova_compute[259550]: 2025-10-07 14:07:16.713 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:07:16 np0005473739 nova_compute[259550]: 2025-10-07 14:07:16.789 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:16 np0005473739 nova_compute[259550]: 2025-10-07 14:07:16.792 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:07:16 np0005473739 podman[287779]: 2025-10-07 14:07:16.708915689 +0000 UTC m=+0.031066671 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:07:16 np0005473739 podman[287779]: 2025-10-07 14:07:16.870908989 +0000 UTC m=+0.193059951 container create 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:07:16 np0005473739 systemd[1]: Started libpod-conmon-49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8.scope.
Oct  7 10:07:16 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:07:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:17 np0005473739 podman[287779]: 2025-10-07 14:07:17.018333539 +0000 UTC m=+0.340484521 container init 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 10:07:17 np0005473739 podman[287779]: 2025-10-07 14:07:17.031326166 +0000 UTC m=+0.353477128 container start 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:07:17 np0005473739 podman[287779]: 2025-10-07 14:07:17.037118242 +0000 UTC m=+0.359269204 container attach 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.062 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.065 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846036.7085156, 809b049f-447c-4cdd-b8d2-8325f6d3b576 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.065 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] VM Started (Lifecycle Event)
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.071 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.072 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.072 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.073 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.073 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.073 2 DEBUG nova.virt.libvirt.driver [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.108 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.108 2 DEBUG nova.network.neutron [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.114 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.150 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.153 2 INFO nova.compute.manager [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Took 1.21 seconds to deallocate network for instance.
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.159 2 INFO nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Took 6.62 seconds to spawn the instance on the hypervisor.
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.160 2 DEBUG nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.195 2 DEBUG nova.compute.manager [req-87785c9c-63c2-4c98-925e-300d213ed4d6 req-c5fac24c-9bc6-4fb5-95a6-ead4a291bea4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Received event network-vif-deleted-3486260f-fd35-48fb-a925-cbe6f4a1a9f5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.221 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.222 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.246 2 INFO nova.compute.manager [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Took 7.59 seconds to build instance.
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.274 2 DEBUG oslo_concurrency.lockutils [None req-7319d24c-f8d7-4db7-98d0-3fb65c1ad8b5 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.680s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.333 2 DEBUG oslo_concurrency.processutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:07:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1202: 305 pgs: 305 active+clean; 209 MiB data, 408 MiB used, 60 GiB / 60 GiB avail; 320 KiB/s rd, 3.9 MiB/s wr, 129 op/s
Oct  7 10:07:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860727468' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.854 2 DEBUG oslo_concurrency.processutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.862 2 DEBUG nova.compute.provider_tree [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.898 2 DEBUG nova.scheduler.client.report [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.931 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.933 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.950 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:17 np0005473739 nova_compute[259550]: 2025-10-07 14:07:17.966 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.000 2 INFO nova.scheduler.client.report [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Deleted allocations for instance 7322f2d1-885e-4e41-8a96-e90d4ddc6c38
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.078 2 DEBUG oslo_concurrency.lockutils [None req-341af071-2606-4928-9cdb-c13f38c1f672 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "7322f2d1-885e-4e41-8a96-e90d4ddc6c38" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.081 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.081 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.088 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.089 2 INFO nova.compute.claims [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.243 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:18 np0005473739 quizzical_aryabhata[287796]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:07:18 np0005473739 quizzical_aryabhata[287796]: --> relative data size: 1.0
Oct  7 10:07:18 np0005473739 quizzical_aryabhata[287796]: --> All data devices are unavailable
Oct  7 10:07:18 np0005473739 systemd[1]: libpod-49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8.scope: Deactivated successfully.
Oct  7 10:07:18 np0005473739 systemd[1]: libpod-49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8.scope: Consumed 1.232s CPU time.
Oct  7 10:07:18 np0005473739 podman[287779]: 2025-10-07 14:07:18.430449091 +0000 UTC m=+1.752600053 container died 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.485 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.487 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.488 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.488 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.488 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.489 2 INFO nova.compute.manager [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Terminating instance#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.490 2 DEBUG nova.compute.manager [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:07:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5b0e61456fee75454fd88b45e72bf2eb0413ab983a29fda6e66c8b10a5bf850e-merged.mount: Deactivated successfully.
Oct  7 10:07:18 np0005473739 podman[287779]: 2025-10-07 14:07:18.52995197 +0000 UTC m=+1.852102922 container remove 49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct  7 10:07:18 np0005473739 systemd[1]: libpod-conmon-49aca03f6f086cc5c8b247bb2b94999b70fc584435c35346e68293442865f4a8.scope: Deactivated successfully.
Oct  7 10:07:18 np0005473739 podman[287868]: 2025-10-07 14:07:18.594187257 +0000 UTC m=+0.117977845 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:07:18 np0005473739 podman[287875]: 2025-10-07 14:07:18.606592998 +0000 UTC m=+0.123980555 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 10:07:18 np0005473739 kernel: tap9b25db0b-24 (unregistering): left promiscuous mode
Oct  7 10:07:18 np0005473739 NetworkManager[44949]: <info>  [1759846038.6101] device (tap9b25db0b-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:07:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:18Z|00124|binding|INFO|Releasing lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 from this chassis (sb_readonly=0)
Oct  7 10:07:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:18Z|00125|binding|INFO|Setting lport 9b25db0b-246e-456c-82d7-cf361c57f9c5 down in Southbound
Oct  7 10:07:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:18Z|00126|binding|INFO|Removing iface tap9b25db0b-24 ovn-installed in OVS
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:18.635 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6c:03:d4 10.100.0.7'], port_security=['fa:16:3e:6c:03:d4 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ddf09c33-d956-404b-a5d8-44a3727f9a3b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1eabd9ee-6333-432b-b50d-9679677d38f6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '48bbd5aa8b9d4a0ea0150bd57145fc68', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'be138a33-b858-4ac6-ac6d-fec3cc069fee', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8257067-c40c-4b54-afa9-833af0a72190, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9b25db0b-246e-456c-82d7-cf361c57f9c5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:07:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:18.637 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9b25db0b-246e-456c-82d7-cf361c57f9c5 in datapath 1eabd9ee-6333-432b-b50d-9679677d38f6 unbound from our chassis#033[00m
Oct  7 10:07:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:18.638 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1eabd9ee-6333-432b-b50d-9679677d38f6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:07:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:18.639 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[207a5d39-f746-455c-b02c-12c896424f56]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:18.640 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6 namespace which is not needed anymore#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:18 np0005473739 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Oct  7 10:07:18 np0005473739 systemd[1]: machine-qemu\x2d20\x2dinstance\x2d0000000a.scope: Consumed 14.085s CPU time.
Oct  7 10:07:18 np0005473739 systemd-machined[214580]: Machine qemu-20-instance-0000000a terminated.
Oct  7 10:07:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/849641466' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.736 2 INFO nova.virt.libvirt.driver [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Instance destroyed successfully.#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.737 2 DEBUG nova.objects.instance [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lazy-loading 'resources' on Instance uuid ddf09c33-d956-404b-a5d8-44a3727f9a3b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.757 2 DEBUG nova.virt.libvirt.vif [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:05:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersAdminTestJSON-server-1321212972',display_name='tempest-ServersAdminTestJSON-server-1321212972',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversadmintestjson-server-1321212972',id=10,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:06:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='48bbd5aa8b9d4a0ea0150bd57145fc68',ramdisk_id='',reservation_id='r-ixj4uqzd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='2',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersAdminTestJSON-1442908900',owner_user_name='tempest-ServersAdminTestJSON-1442908900-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:06:59Z,user_data=None,user_id='f06dda9346a24fb094ad9fe51664cc48',uuid=ddf09c33-d956-404b-a5d8-44a3727f9a3b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.758 2 DEBUG nova.network.os_vif_util [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converting VIF {"id": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "address": "fa:16:3e:6c:03:d4", "network": {"id": "1eabd9ee-6333-432b-b50d-9679677d38f6", "bridge": "br-int", "label": "tempest-ServersAdminTestJSON-294290795-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "48bbd5aa8b9d4a0ea0150bd57145fc68", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9b25db0b-24", "ovs_interfaceid": "9b25db0b-246e-456c-82d7-cf361c57f9c5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.759 2 DEBUG nova.network.os_vif_util [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.759 2 DEBUG os_vif [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.761 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b25db0b-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.777 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.780 2 INFO os_vif [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6c:03:d4,bridge_name='br-int',has_traffic_filtering=True,id=9b25db0b-246e-456c-82d7-cf361c57f9c5,network=Network(1eabd9ee-6333-432b-b50d-9679677d38f6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9b25db0b-24')#033[00m
Oct  7 10:07:18 np0005473739 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [NOTICE]   (282456) : haproxy version is 2.8.14-c23fe91
Oct  7 10:07:18 np0005473739 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [NOTICE]   (282456) : path to executable is /usr/sbin/haproxy
Oct  7 10:07:18 np0005473739 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [WARNING]  (282456) : Exiting Master process...
Oct  7 10:07:18 np0005473739 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [ALERT]    (282456) : Current worker (282458) exited with code 143 (Terminated)
Oct  7 10:07:18 np0005473739 neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6[282452]: [WARNING]  (282456) : All workers exited. Exiting... (0)
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.844 2 DEBUG nova.compute.provider_tree [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:07:18 np0005473739 systemd[1]: libpod-534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091.scope: Deactivated successfully.
Oct  7 10:07:18 np0005473739 podman[287999]: 2025-10-07 14:07:18.852555201 +0000 UTC m=+0.068113780 container died 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.862 2 DEBUG nova.scheduler.client.report [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.890 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.893 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:07:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091-userdata-shm.mount: Deactivated successfully.
Oct  7 10:07:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ee58677923a11d6eaf88f22eb26cdfd6a6c1bfcaf21246d843b12d3493d77fd3-merged.mount: Deactivated successfully.
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.938 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.938 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.955 2 INFO nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:07:18 np0005473739 nova_compute[259550]: 2025-10-07 14:07:18.974 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:07:19 np0005473739 podman[287999]: 2025-10-07 14:07:19.00514387 +0000 UTC m=+0.220702409 container cleanup 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:07:19 np0005473739 systemd[1]: libpod-conmon-534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091.scope: Deactivated successfully.
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.061 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.063 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.064 2 INFO nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Creating image(s)#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.099 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.134 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:19 np0005473739 podman[288098]: 2025-10-07 14:07:19.171538077 +0000 UTC m=+0.132972865 container remove 534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.172 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.181 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8760a037-d1d7-40a4-b4cf-ec2ffcc16f6a]: (4, ('Tue Oct  7 02:07:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6 (534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091)\n534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091\nTue Oct  7 02:07:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6 (534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091)\n534ee5390690e4739c48e485bd00bc50bf6b84bdff80cb4b8f47e28a17044091\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.183 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[864a1b9f-251c-4204-9c3b-fc1363ae10ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.184 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1eabd9ee-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.184 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:19 np0005473739 kernel: tap1eabd9ee-60: left promiscuous mode
Oct  7 10:07:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.209 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[408e545e-ac63-4e7d-8e2e-1ab0d2434934]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.228 2 DEBUG nova.policy [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8977e4fa4adb42bfae4fe2be5d339769', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '15d729b9c0bc4738b4f887d6b764fb5a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:07:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.227 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1621709-935b-4a58-b6e1-f4553d685d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.232 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[935ef788-bfd2-499f-92cd-def0af83c5b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.255 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37a17185-314a-401e-9049-ea058019d184]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 650989, 'reachable_time': 25990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 288191, 'error': None, 'target': 'ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:19 np0005473739 systemd[1]: run-netns-ovnmeta\x2d1eabd9ee\x2d6333\x2d432b\x2db50d\x2d9679677d38f6.mount: Deactivated successfully.
Oct  7 10:07:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.265 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1eabd9ee-6333-432b-b50d-9679677d38f6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:07:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:19.266 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[2a740434-d43b-43a0-9954-b040f2c04f7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.269 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.270 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.271 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.271 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.299 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.305 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:19 np0005473739 podman[288225]: 2025-10-07 14:07:19.421034775 +0000 UTC m=+0.061950566 container create 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 10:07:19 np0005473739 systemd[1]: Started libpod-conmon-70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f.scope.
Oct  7 10:07:19 np0005473739 podman[288225]: 2025-10-07 14:07:19.387480618 +0000 UTC m=+0.028396429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:07:19 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:07:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1203: 305 pgs: 305 active+clean; 167 MiB data, 385 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 197 op/s
Oct  7 10:07:19 np0005473739 podman[288225]: 2025-10-07 14:07:19.556161887 +0000 UTC m=+0.197077698 container init 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:07:19 np0005473739 podman[288225]: 2025-10-07 14:07:19.566190795 +0000 UTC m=+0.207106586 container start 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 10:07:19 np0005473739 podman[288225]: 2025-10-07 14:07:19.576924032 +0000 UTC m=+0.217839823 container attach 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:07:19 np0005473739 systemd[1]: libpod-70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f.scope: Deactivated successfully.
Oct  7 10:07:19 np0005473739 friendly_hermann[288260]: 167 167
Oct  7 10:07:19 np0005473739 conmon[288260]: conmon 70bf7384c4706cfdadb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f.scope/container/memory.events
Oct  7 10:07:19 np0005473739 podman[288225]: 2025-10-07 14:07:19.584310999 +0000 UTC m=+0.225226800 container died 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 10:07:19 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a9120e654d0be78ccedec1f9237381d7035a2c7c0bc8a79682a1a4a9931add64-merged.mount: Deactivated successfully.
Oct  7 10:07:19 np0005473739 podman[288225]: 2025-10-07 14:07:19.724723302 +0000 UTC m=+0.365639093 container remove 70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_hermann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:07:19 np0005473739 systemd[1]: libpod-conmon-70bf7384c4706cfdadb460e741a07568f1e215f98ec422f8f10aa6135bc7e51f.scope: Deactivated successfully.
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.750 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.883 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] resizing rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:07:19 np0005473739 podman[288317]: 2025-10-07 14:07:19.93529382 +0000 UTC m=+0.054296262 container create 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:07:19 np0005473739 systemd[1]: Started libpod-conmon-569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c.scope.
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.994 2 INFO nova.virt.libvirt.driver [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deleting instance files /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del#033[00m
Oct  7 10:07:19 np0005473739 nova_compute[259550]: 2025-10-07 14:07:19.996 2 INFO nova.virt.libvirt.driver [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deletion of /var/lib/nova/instances/ddf09c33-d956-404b-a5d8-44a3727f9a3b_del complete#033[00m
Oct  7 10:07:20 np0005473739 podman[288317]: 2025-10-07 14:07:19.914991267 +0000 UTC m=+0.033993729 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:07:20 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:07:20 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2622a655679518c44ffa4b92f8058b2155cd01bfb99200db9ee27c7da64c5d76/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:20 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2622a655679518c44ffa4b92f8058b2155cd01bfb99200db9ee27c7da64c5d76/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:20 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2622a655679518c44ffa4b92f8058b2155cd01bfb99200db9ee27c7da64c5d76/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:20 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2622a655679518c44ffa4b92f8058b2155cd01bfb99200db9ee27c7da64c5d76/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:20 np0005473739 podman[288317]: 2025-10-07 14:07:20.049345318 +0000 UTC m=+0.168347760 container init 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 10:07:20 np0005473739 podman[288317]: 2025-10-07 14:07:20.061180174 +0000 UTC m=+0.180182616 container start 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:07:20 np0005473739 podman[288317]: 2025-10-07 14:07:20.066815974 +0000 UTC m=+0.185818436 container attach 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.097 2 DEBUG nova.objects.instance [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lazy-loading 'migration_context' on Instance uuid dfe85d14-0395-4f7f-8bc1-f536aefe2ffc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.101 2 INFO nova.compute.manager [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Took 1.61 seconds to destroy the instance on the hypervisor.
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.101 2 DEBUG oslo.service.loopingcall [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.102 2 DEBUG nova.compute.manager [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.102 2 DEBUG nova.network.neutron [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.112 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.113 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Ensure instance console log exists: /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.113 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.113 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.114 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.323 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Successfully created port: 78dbc059-1828-41b6-84fa-bf4ac0b31103 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.650 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "10c5acec-9e20-431b-a467-d54b7acbabfd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.651 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.885 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]: {
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:    "0": [
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:        {
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "devices": [
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "/dev/loop3"
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            ],
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_name": "ceph_lv0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_size": "21470642176",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "name": "ceph_lv0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "tags": {
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.cluster_name": "ceph",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.crush_device_class": "",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.encrypted": "0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.osd_id": "0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.type": "block",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.vdo": "0"
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            },
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "type": "block",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "vg_name": "ceph_vg0"
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:        }
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:    ],
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:    "1": [
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:        {
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "devices": [
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "/dev/loop4"
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            ],
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_name": "ceph_lv1",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_size": "21470642176",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "name": "ceph_lv1",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "tags": {
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.cluster_name": "ceph",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.crush_device_class": "",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.encrypted": "0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.osd_id": "1",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.type": "block",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.vdo": "0"
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            },
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "type": "block",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "vg_name": "ceph_vg1"
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:        }
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:    ],
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:    "2": [
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:        {
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "devices": [
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "/dev/loop5"
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            ],
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_name": "ceph_lv2",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_size": "21470642176",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "name": "ceph_lv2",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "tags": {
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.cluster_name": "ceph",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.crush_device_class": "",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.encrypted": "0",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.osd_id": "2",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.type": "block",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:                "ceph.vdo": "0"
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            },
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "type": "block",
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:            "vg_name": "ceph_vg2"
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:        }
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]:    ]
Oct  7 10:07:20 np0005473739 agitated_khayyam[288354]: }
Oct  7 10:07:20 np0005473739 systemd[1]: libpod-569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c.scope: Deactivated successfully.
Oct  7 10:07:20 np0005473739 conmon[288354]: conmon 569ffc8244a13310c5d5 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c.scope/container/memory.events
Oct  7 10:07:20 np0005473739 nova_compute[259550]: 2025-10-07 14:07:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:07:21 np0005473739 podman[288381]: 2025-10-07 14:07:21.002293187 +0000 UTC m=+0.031864413 container died 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:07:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2622a655679518c44ffa4b92f8058b2155cd01bfb99200db9ee27c7da64c5d76-merged.mount: Deactivated successfully.
Oct  7 10:07:21 np0005473739 podman[288381]: 2025-10-07 14:07:21.156906319 +0000 UTC m=+0.186477525 container remove 569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_khayyam, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 10:07:21 np0005473739 systemd[1]: libpod-conmon-569ffc8244a13310c5d5613e262c66bbe20881e142d5b2c133f60891782ca40c.scope: Deactivated successfully.
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.237 2 DEBUG nova.compute.manager [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.238 2 DEBUG oslo_concurrency.lockutils [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.238 2 DEBUG oslo_concurrency.lockutils [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.239 2 DEBUG oslo_concurrency.lockutils [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.239 2 DEBUG nova.compute.manager [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.239 2 DEBUG nova.compute.manager [req-ddd1c097-cf60-4282-a478-72b156b1188c req-beacdfde-69a2-4eb3-a471-cab777782e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-unplugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.370 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.371 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.382 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.382 2 INFO nova.compute.claims [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:07:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1204: 305 pgs: 305 active+clean; 150 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.7 MiB/s wr, 227 op/s
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.541 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.629 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846026.6275175, 74094438-0995-4031-9943-cc85a5ef4f57 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.630 2 INFO nova.compute.manager [-] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] VM Stopped (Lifecycle Event)
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.666 2 DEBUG nova.compute.manager [None req-5ed7bf7d-344d-4c82-b742-2473a2894331 - - - - - -] [instance: 74094438-0995-4031-9943-cc85a5ef4f57] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.809 2 DEBUG nova.network.neutron [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.848 2 INFO nova.compute.manager [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Took 1.75 seconds to deallocate network for instance.
Oct  7 10:07:21 np0005473739 podman[288555]: 2025-10-07 14:07:21.88467833 +0000 UTC m=+0.048867097 container create 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.913 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:21 np0005473739 systemd[1]: Started libpod-conmon-1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac.scope.
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.956 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Successfully updated port: 78dbc059-1828-41b6-84fa-bf4ac0b31103 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:07:21 np0005473739 podman[288555]: 2025-10-07 14:07:21.863606577 +0000 UTC m=+0.027795364 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:07:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:07:21 np0005473739 podman[288555]: 2025-10-07 14:07:21.978473317 +0000 UTC m=+0.142662094 container init 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  7 10:07:21 np0005473739 podman[288555]: 2025-10-07 14:07:21.986544673 +0000 UTC m=+0.150733430 container start 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True)
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.989 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.990 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquired lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:07:21 np0005473739 nova_compute[259550]: 2025-10-07 14:07:21.990 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:07:21 np0005473739 podman[288555]: 2025-10-07 14:07:21.990308723 +0000 UTC m=+0.154497510 container attach 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:07:21 np0005473739 interesting_wilson[288571]: 167 167
Oct  7 10:07:21 np0005473739 systemd[1]: libpod-1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac.scope: Deactivated successfully.
Oct  7 10:07:21 np0005473739 podman[288555]: 2025-10-07 14:07:21.993963531 +0000 UTC m=+0.158152318 container died 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:07:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1b6efeaaf303e509469dd4a97a146952643ab3f4c2b939264f679aefa221cdf9-merged.mount: Deactivated successfully.
Oct  7 10:07:22 np0005473739 podman[288555]: 2025-10-07 14:07:22.038371338 +0000 UTC m=+0.202560095 container remove 1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_wilson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 10:07:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/635956020' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:22 np0005473739 systemd[1]: libpod-conmon-1cfb3ff64d8f72a3c6929e5b927ad7c2ba800c13ebbf4196fc9f4ba85dbf84ac.scope: Deactivated successfully.
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.079 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.090 2 DEBUG nova.compute.provider_tree [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.107 2 DEBUG nova.scheduler.client.report [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.111 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.145 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.774s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.146 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.149 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.237 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.237 2 DEBUG nova.network.neutron [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.244 2 DEBUG nova.compute.manager [req-9ea3882d-560f-460a-bb08-5056628e5f74 req-816f3324-2937-4969-b2a1-2cbed01f4d7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-deleted-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.280 2 INFO nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:07:22 np0005473739 podman[288597]: 2025-10-07 14:07:22.212866071 +0000 UTC m=+0.031710698 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.323 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:07:22 np0005473739 podman[288597]: 2025-10-07 14:07:22.40589063 +0000 UTC m=+0.224735227 container create d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.463 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.464 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.465 2 INFO nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Creating image(s)#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.491 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.518 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.541 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.547 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.584 2 DEBUG oslo_concurrency.processutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:22 np0005473739 systemd[1]: Started libpod-conmon-d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d.scope.
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:07:22
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'cephfs.cephfs.data', 'volumes', '.mgr', '.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'default.rgw.control']
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:07:22 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:07:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8baf8ba7f9516ce10b55f09929e28055f0b91003d0b5bbeadb3a833c1e823/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8baf8ba7f9516ce10b55f09929e28055f0b91003d0b5bbeadb3a833c1e823/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8baf8ba7f9516ce10b55f09929e28055f0b91003d0b5bbeadb3a833c1e823/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/afd8baf8ba7f9516ce10b55f09929e28055f0b91003d0b5bbeadb3a833c1e823/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.645 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.646 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.647 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.647 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.674 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.679 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 10c5acec-9e20-431b-a467-d54b7acbabfd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:22 np0005473739 podman[288597]: 2025-10-07 14:07:22.688634257 +0000 UTC m=+0.507478864 container init d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:07:22 np0005473739 podman[288597]: 2025-10-07 14:07:22.698575143 +0000 UTC m=+0.517419760 container start d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:07:22 np0005473739 podman[288597]: 2025-10-07 14:07:22.70671758 +0000 UTC m=+0.525562187 container attach d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.794 2 DEBUG nova.network.neutron [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  7 10:07:22 np0005473739 nova_compute[259550]: 2025-10-07 14:07:22.795 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:07:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.121 2 DEBUG nova.network.neutron [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Updating instance_info_cache with network_info: [{"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:07:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3313080681' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.153 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Releasing lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.153 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Instance network_info: |[{"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.156 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Start _get_guest_xml network_info=[{"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.161 2 DEBUG oslo_concurrency.processutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.577s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.165 2 WARNING nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.171 2 DEBUG nova.compute.provider_tree [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.175 2 DEBUG nova.virt.libvirt.host [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.175 2 DEBUG nova.virt.libvirt.host [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.179 2 DEBUG nova.virt.libvirt.host [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.180 2 DEBUG nova.virt.libvirt.host [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.180 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.180 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.181 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.181 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.181 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.181 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.182 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.182 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.182 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.182 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.183 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.183 2 DEBUG nova.virt.hardware [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.187 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.217 2 DEBUG nova.scheduler.client.report [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.237 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 10c5acec-9e20-431b-a467-d54b7acbabfd_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.262 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.297 2 INFO nova.scheduler.client.report [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Deleted allocations for instance ddf09c33-d956-404b-a5d8-44a3727f9a3b#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.304 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] resizing rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.389 2 DEBUG oslo_concurrency.lockutils [None req-87d33350-43cc-4dd5-a810-45614f6a5443 f06dda9346a24fb094ad9fe51664cc48 48bbd5aa8b9d4a0ea0150bd57145fc68 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.902s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.438 2 DEBUG nova.objects.instance [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'migration_context' on Instance uuid 10c5acec-9e20-431b-a467-d54b7acbabfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.451 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.451 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Ensure instance console log exists: /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.452 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.452 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.452 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.453 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.458 2 WARNING nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.462 2 DEBUG nova.virt.libvirt.host [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.463 2 DEBUG nova.virt.libvirt.host [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.468 2 DEBUG nova.compute.manager [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.469 2 DEBUG oslo_concurrency.lockutils [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.469 2 DEBUG oslo_concurrency.lockutils [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.470 2 DEBUG oslo_concurrency.lockutils [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ddf09c33-d956-404b-a5d8-44a3727f9a3b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.470 2 DEBUG nova.compute.manager [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] No waiting events found dispatching network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.470 2 WARNING nova.compute.manager [req-9ccadb1b-3c02-4bdd-8ece-7a34a8d3e9f2 req-7c8e21c0-8fba-4c0f-b411-4f69d0fb13a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Received unexpected event network-vif-plugged-9b25db0b-246e-456c-82d7-cf361c57f9c5 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.474 2 DEBUG nova.virt.libvirt.host [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.475 2 DEBUG nova.virt.libvirt.host [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.475 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.476 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.476 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.477 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.477 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.477 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.477 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.478 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.478 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.478 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.479 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.479 2 DEBUG nova.virt.hardware [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.483 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1205: 305 pgs: 305 active+clean; 150 MiB data, 354 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.1 MiB/s wr, 198 op/s
Oct  7 10:07:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:07:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1550266879' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.773 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.586s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.793 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.798 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]: {
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "osd_id": 2,
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "type": "bluestore"
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:    },
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "osd_id": 1,
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "type": "bluestore"
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:    },
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "osd_id": 0,
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:        "type": "bluestore"
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]:    }
Oct  7 10:07:23 np0005473739 gallant_bartik[288670]: }
Oct  7 10:07:23 np0005473739 systemd[1]: libpod-d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d.scope: Deactivated successfully.
Oct  7 10:07:23 np0005473739 systemd[1]: libpod-d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d.scope: Consumed 1.144s CPU time.
Oct  7 10:07:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:07:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3443311987' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:07:23 np0005473739 podman[288896]: 2025-10-07 14:07:23.953728729 +0000 UTC m=+0.032405088 container died d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.980 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:23 np0005473739 nova_compute[259550]: 2025-10-07 14:07:23.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:07:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-afd8baf8ba7f9516ce10b55f09929e28055f0b91003d0b5bbeadb3a833c1e823-merged.mount: Deactivated successfully.
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.010 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.016 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:24 np0005473739 podman[288896]: 2025-10-07 14:07:24.090616597 +0000 UTC m=+0.169292956 container remove d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_bartik, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:07:24 np0005473739 systemd[1]: libpod-conmon-d1fa59b011a634d48790473ec7ecd62b5ed5cf28e109cea32509d86dd1dc746d.scope: Deactivated successfully.
Oct  7 10:07:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:07:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:07:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:07:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:07:24 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b82b8d9a-65c1-410d-a85d-f7c968a6d49c does not exist
Oct  7 10:07:24 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f4444ea3-cd3e-457f-8696-a0de6ce47c94 does not exist
Oct  7 10:07:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:07:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1508793537' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.320 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.323 2 DEBUG nova.compute.manager [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-changed-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.323 2 DEBUG nova.compute.manager [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Refreshing instance network info cache due to event network-changed-78dbc059-1828-41b6-84fa-bf4ac0b31103. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.323 2 DEBUG oslo_concurrency.lockutils [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.324 2 DEBUG oslo_concurrency.lockutils [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.324 2 DEBUG nova.network.neutron [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Refreshing network info cache for port 78dbc059-1828-41b6-84fa-bf4ac0b31103 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.326 2 DEBUG nova.virt.libvirt.vif [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-926892453',display_name='tempest-ImagesNegativeTestJSON-server-926892453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-926892453',id=18,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15d729b9c0bc4738b4f887d6b764fb5a',ramdisk_id='',reservation_id='r-z35xzswz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-151850229',owner_user_name='tempest-ImagesNegativeTestJSON-151850229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:07:19Z,user_data=None,user_id='8977e4fa4adb42bfae4fe2be5d339769',uuid=dfe85d14-0395-4f7f-8bc1-f536aefe2ffc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.327 2 DEBUG nova.network.os_vif_util [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converting VIF {"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.327 2 DEBUG nova.network.os_vif_util [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.329 2 DEBUG nova.objects.instance [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lazy-loading 'pci_devices' on Instance uuid dfe85d14-0395-4f7f-8bc1-f536aefe2ffc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.352 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <uuid>dfe85d14-0395-4f7f-8bc1-f536aefe2ffc</uuid>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <name>instance-00000012</name>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:name>tempest-ImagesNegativeTestJSON-server-926892453</nova:name>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:07:23</nova:creationTime>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:user uuid="8977e4fa4adb42bfae4fe2be5d339769">tempest-ImagesNegativeTestJSON-151850229-project-member</nova:user>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:project uuid="15d729b9c0bc4738b4f887d6b764fb5a">tempest-ImagesNegativeTestJSON-151850229</nova:project>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:port uuid="78dbc059-1828-41b6-84fa-bf4ac0b31103">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="serial">dfe85d14-0395-4f7f-8bc1-f536aefe2ffc</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="uuid">dfe85d14-0395-4f7f-8bc1-f536aefe2ffc</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:8e:a8:72"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <target dev="tap78dbc059-18"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/console.log" append="off"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:07:24 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:07:24 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.357 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Preparing to wait for external event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.358 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.358 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.359 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.360 2 DEBUG nova.virt.libvirt.vif [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-926892453',display_name='tempest-ImagesNegativeTestJSON-server-926892453',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-926892453',id=18,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='15d729b9c0bc4738b4f887d6b764fb5a',ramdisk_id='',reservation_id='r-z35xzswz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesNegativeTestJSON-151850229',owner_user_name='tempest-ImagesNegativeTestJSON-151850229-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:07:19Z,user_data=None,user_id='8977e4fa4adb42bfae4fe2be5d339769',uuid=dfe85d14-0395-4f7f-8bc1-f536aefe2ffc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.360 2 DEBUG nova.network.os_vif_util [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converting VIF {"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.361 2 DEBUG nova.network.os_vif_util [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.361 2 DEBUG os_vif [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.362 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.363 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.366 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78dbc059-18, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78dbc059-18, col_values=(('external_ids', {'iface-id': '78dbc059-1828-41b6-84fa-bf4ac0b31103', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8e:a8:72', 'vm-uuid': 'dfe85d14-0395-4f7f-8bc1-f536aefe2ffc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:24 np0005473739 NetworkManager[44949]: <info>  [1759846044.3699] manager: (tap78dbc059-18): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.377 2 INFO os_vif [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18')#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.437 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.438 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.438 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] No VIF found with MAC fa:16:3e:8e:a8:72, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.439 2 INFO nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Using config drive#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.459 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:07:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1180699488' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.519 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.521 2 DEBUG nova.objects.instance [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'pci_devices' on Instance uuid 10c5acec-9e20-431b-a467-d54b7acbabfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.533 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <uuid>10c5acec-9e20-431b-a467-d54b7acbabfd</uuid>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <name>instance-00000013</name>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:name>tempest-LiveMigrationNegativeTest-server-160156078</nova:name>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:07:23</nova:creationTime>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:user uuid="027a68b305ed4ef799375643ce9e5831">tempest-LiveMigrationNegativeTest-182284186-project-member</nova:user>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <nova:project uuid="64dfda49c33d49eb9d343ba7a8f90d6d">tempest-LiveMigrationNegativeTest-182284186</nova:project>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="serial">10c5acec-9e20-431b-a467-d54b7acbabfd</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="uuid">10c5acec-9e20-431b-a467-d54b7acbabfd</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/10c5acec-9e20-431b-a467-d54b7acbabfd_disk">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/console.log" append="off"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:07:24 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:07:24 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:07:24 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:07:24 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.582 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.583 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.584 2 INFO nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Using config drive#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.608 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:24 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:07:24 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.813 2 INFO nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Creating config drive at /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.817 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ibdkb3x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.954 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4ibdkb3x" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.977 2 DEBUG nova.storage.rbd_utils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] rbd image 10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:24 np0005473739 nova_compute[259550]: 2025-10-07 14:07:24.980 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config 10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.007 2 INFO nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Creating config drive at /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.012 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_uee70k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.039 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.076 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.076 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.077 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.077 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.078 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.149 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg_uee70k" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.179 2 DEBUG nova.storage.rbd_utils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] rbd image dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.185 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1206: 305 pgs: 305 active+clean; 163 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.2 MiB/s wr, 238 op/s
Oct  7 10:07:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1849065527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.556 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.574 2 DEBUG oslo_concurrency.processutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config 10c5acec-9e20-431b-a467-d54b7acbabfd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.593s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.575 2 INFO nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Deleting local config drive /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd/disk.config because it was imported into RBD.#033[00m
Oct  7 10:07:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:07:25 np0005473739 systemd-machined[214580]: New machine qemu-22-instance-00000013.
Oct  7 10:07:25 np0005473739 systemd[1]: Started Virtual Machine qemu-22-instance-00000013.
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.847 2 DEBUG nova.network.neutron [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Updated VIF entry in instance network info cache for port 78dbc059-1828-41b6-84fa-bf4ac0b31103. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.849 2 DEBUG nova.network.neutron [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Updating instance_info_cache with network_info: [{"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.882 2 DEBUG oslo_concurrency.lockutils [req-5d51b61b-70d0-4f4e-98cf-03da3d5b2344 req-08f629c8-a5c8-459a-9c25-145a65722867 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.896 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.897 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000012 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.900 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.900 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000011 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.902 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:07:25 np0005473739 nova_compute[259550]: 2025-10-07 14:07:25.903 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000013 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.058 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.060 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4300MB free_disk=59.93330383300781GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.060 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.061 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.213 2 DEBUG oslo_concurrency.processutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.028s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.215 2 INFO nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Deleting local config drive /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc/disk.config because it was imported into RBD.#033[00m
Oct  7 10:07:26 np0005473739 kernel: tap78dbc059-18: entered promiscuous mode
Oct  7 10:07:26 np0005473739 NetworkManager[44949]: <info>  [1759846046.2729] manager: (tap78dbc059-18): new Tun device (/org/freedesktop/NetworkManager/Devices/66)
Oct  7 10:07:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:26Z|00127|binding|INFO|Claiming lport 78dbc059-1828-41b6-84fa-bf4ac0b31103 for this chassis.
Oct  7 10:07:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:26Z|00128|binding|INFO|78dbc059-1828-41b6-84fa-bf4ac0b31103: Claiming fa:16:3e:8e:a8:72 10.100.0.6
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.282 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 809b049f-447c-4cdd-b8d2-8325f6d3b576 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.283 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance dfe85d14-0395-4f7f-8bc1-f536aefe2ffc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.283 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 10c5acec-9e20-431b-a467-d54b7acbabfd actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.283 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.283 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:07:26 np0005473739 systemd-udevd[289233]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:07:26 np0005473739 systemd-machined[214580]: New machine qemu-23-instance-00000012.
Oct  7 10:07:26 np0005473739 NetworkManager[44949]: <info>  [1759846046.3168] device (tap78dbc059-18): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:07:26 np0005473739 NetworkManager[44949]: <info>  [1759846046.3178] device (tap78dbc059-18): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:07:26 np0005473739 systemd[1]: Started Virtual Machine qemu-23-instance-00000012.
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.355 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:26Z|00129|binding|INFO|Setting lport 78dbc059-1828-41b6-84fa-bf4ac0b31103 ovn-installed in OVS
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.399 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:a8:72 10.100.0.6'], port_security=['fa:16:3e:8e:a8:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dfe85d14-0395-4f7f-8bc1-f536aefe2ffc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15d729b9c0bc4738b4f887d6b764fb5a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad09d09b-a958-4ea8-8f7f-2304bf5c2ff2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5e58e1-0685-45b5-97d6-a48d03b5304d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=78dbc059-1828-41b6-84fa-bf4ac0b31103) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.400 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 78dbc059-1828-41b6-84fa-bf4ac0b31103 in datapath 2f4de79f-442a-4f3f-b7b5-11fe265c4f7c bound to our chassis#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.401 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 2f4de79f-442a-4f3f-b7b5-11fe265c4f7c#033[00m
Oct  7 10:07:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:26Z|00130|binding|INFO|Setting lport 78dbc059-1828-41b6-84fa-bf4ac0b31103 up in Southbound
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.418 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbe2632-593c-473a-9fe7-8578c2a3e745]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.419 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap2f4de79f-41 in ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.421 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap2f4de79f-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.421 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a9cb575-528a-4bc9-9595-bbf31f0c0002]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.423 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b606f1c-e78b-4058-ad15-889035244307]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.436 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[581555b7-6774-474d-9ce1-67e5302e5ad0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.455 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92cc0b9b-17f0-435d-b66b-a4ec1ddaf633]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.497 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b112f4b1-a7ba-4c2e-af5a-b14f385349e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 NetworkManager[44949]: <info>  [1759846046.5064] manager: (tap2f4de79f-40): new Veth device (/org/freedesktop/NetworkManager/Devices/67)
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.507 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[75cbfeb1-bad6-4065-b56b-30c69e8e4ef2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.560 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[39b4f7ec-b0b9-49b5-9bd9-b4fb75ae9800]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.565 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[53789d76-bfde-48e3-904e-f44cffe4a25d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 NetworkManager[44949]: <info>  [1759846046.5985] device (tap2f4de79f-40): carrier: link connected
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.608 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[89130bee-f947-4882-b964-d6fe6b14f5e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.632 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2abef8a1-449c-425a-ba50-177093b50b89]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4de79f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:21:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663016, 'reachable_time': 39645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289323, 'error': None, 'target': 'ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b233f16f-7ba3-4607-b8dc-8b449adde5f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb1:2157'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 663016, 'tstamp': 663016}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 289325, 'error': None, 'target': 'ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.665 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7878af95-9c02-4f7b-a280-81f171de5a2d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap2f4de79f-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b1:21:57'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 42], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663016, 'reachable_time': 39645, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 289326, 'error': None, 'target': 'ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.701 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5e7a2cba-11c6-498f-829c-b45121fc9543]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.819 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3f171d96-1a04-4ed9-83b3-a5c912d09c3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/524300082' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.822 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4de79f-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.822 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.823 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2f4de79f-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:26 np0005473739 kernel: tap2f4de79f-40: entered promiscuous mode
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:26 np0005473739 NetworkManager[44949]: <info>  [1759846046.8275] manager: (tap2f4de79f-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.829 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap2f4de79f-40, col_values=(('external_ids', {'iface-id': 'e2713bfd-9c9e-4d08-ab85-48bb88f7c6b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:26Z|00131|binding|INFO|Releasing lport e2713bfd-9c9e-4d08-ab85-48bb88f7c6b2 from this chassis (sb_readonly=0)
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.848 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/2f4de79f-442a-4f3f-b7b5-11fe265c4f7c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/2f4de79f-442a-4f3f-b7b5-11fe265c4f7c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a4252919-7ff9-4be7-9f65-52cff94157d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.850 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/2f4de79f-442a-4f3f-b7b5-11fe265c4f7c.pid.haproxy
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 2f4de79f-442a-4f3f-b7b5-11fe265c4f7c
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  7 10:07:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:26.851 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'env', 'PROCESS_TAG=haproxy-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/2f4de79f-442a-4f3f-b7b5-11fe265c4f7c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.852 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.865 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.972 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846046.9668386, 10c5acec-9e20-431b-a467-d54b7acbabfd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.975 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] VM Resumed (Lifecycle Event)
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.982 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.985 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.991 2 INFO nova.virt.libvirt.driver [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance spawned successfully.
Oct  7 10:07:26 np0005473739 nova_compute[259550]: 2025-10-07 14:07:26.993 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.000 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.034 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.039 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:07:27 np0005473739 podman[289365]: 2025-10-07 14:07:27.351362365 +0000 UTC m=+0.082754152 container create 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:07:27 np0005473739 podman[289365]: 2025-10-07 14:07:27.298161283 +0000 UTC m=+0.029553090 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:07:27 np0005473739 systemd[1]: Started libpod-conmon-76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32.scope.
Oct  7 10:07:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:07:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aed952b4b685f9b30f7b598687d00937742144bbb550bb6c55e216cf5c4aede9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:07:27 np0005473739 podman[289365]: 2025-10-07 14:07:27.461198982 +0000 UTC m=+0.192590769 container init 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:07:27 np0005473739 podman[289365]: 2025-10-07 14:07:27.467249523 +0000 UTC m=+0.198641310 container start 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  7 10:07:27 np0005473739 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [NOTICE]   (289385) : New worker (289387) forked
Oct  7 10:07:27 np0005473739 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [NOTICE]   (289385) : Loading success.
Oct  7 10:07:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1207: 305 pgs: 305 active+clean; 181 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.1 MiB/s wr, 187 op/s
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.581 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.581 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846046.9694026, 10c5acec-9e20-431b-a467-d54b7acbabfd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.582 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] VM Started (Lifecycle Event)
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.593 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.594 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.596 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.597 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.598 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.598 2 DEBUG nova.virt.libvirt.driver [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.623 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.624 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.651 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.656 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.686 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.686 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846047.2509806, dfe85d14-0395-4f7f-8bc1-f536aefe2ffc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.687 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] VM Started (Lifecycle Event)
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.691 2 INFO nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Took 5.23 seconds to spawn the instance on the hypervisor.
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.692 2 DEBUG nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.704 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.710 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846047.2510815, dfe85d14-0395-4f7f-8bc1-f536aefe2ffc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.711 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] VM Paused (Lifecycle Event)
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.740 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.744 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.758 2 INFO nova.compute.manager [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Took 6.41 seconds to build instance.
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.764 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:07:27 np0005473739 nova_compute[259550]: 2025-10-07 14:07:27.779 2 DEBUG oslo_concurrency.lockutils [None req-1e09060a-e2e8-4964-ac29-d274abdbc8a4 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.128s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:28Z|00132|binding|INFO|Releasing lport e2713bfd-9c9e-4d08-ab85-48bb88f7c6b2 from this chassis (sb_readonly=0)
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.567 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.568 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.747 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.749 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.749 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.985 2 DEBUG nova.compute.manager [req-ca101f4d-bfb2-4670-8dcd-514d84ca48d1 req-5c1ca2bb-7acf-4fe9-8885-f23b4f70cd51 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.985 2 DEBUG oslo_concurrency.lockutils [req-ca101f4d-bfb2-4670-8dcd-514d84ca48d1 req-5c1ca2bb-7acf-4fe9-8885-f23b4f70cd51 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.986 2 DEBUG oslo_concurrency.lockutils [req-ca101f4d-bfb2-4670-8dcd-514d84ca48d1 req-5c1ca2bb-7acf-4fe9-8885-f23b4f70cd51 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.987 2 DEBUG oslo_concurrency.lockutils [req-ca101f4d-bfb2-4670-8dcd-514d84ca48d1 req-5c1ca2bb-7acf-4fe9-8885-f23b4f70cd51 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.987 2 DEBUG nova.compute.manager [req-ca101f4d-bfb2-4670-8dcd-514d84ca48d1 req-5c1ca2bb-7acf-4fe9-8885-f23b4f70cd51 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Processing event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.988 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.992 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846048.992123, dfe85d14-0395-4f7f-8bc1-f536aefe2ffc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.992 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] VM Resumed (Lifecycle Event)
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.994 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:07:28 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.998 2 INFO nova.virt.libvirt.driver [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Instance spawned successfully.
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:28.999 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.015 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.031 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.040 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.041 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.042 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.042 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.043 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.044 2 DEBUG nova.virt.libvirt.driver [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.057 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.112 2 INFO nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Took 10.05 seconds to spawn the instance on the hypervisor.
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.114 2 DEBUG nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.159 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.206 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846034.2016592, 7322f2d1-885e-4e41-8a96-e90d4ddc6c38 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.207 2 INFO nova.compute.manager [-] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] VM Stopped (Lifecycle Event)
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.211 2 INFO nova.compute.manager [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Took 11.15 seconds to build instance.
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.232 2 DEBUG oslo_concurrency.lockutils [None req-521ca62b-c362-415f-a027-27722b4f9bc6 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.234 2 DEBUG nova.compute.manager [None req-1582ccdb-3bc0-4a68-804e-dc0d36ffe8dc - - - - - -] [instance: 7322f2d1-885e-4e41-8a96-e90d4ddc6c38] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1208: 305 pgs: 305 active+clean; 190 MiB data, 380 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 4.6 MiB/s wr, 268 op/s
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.800 2 DEBUG nova.objects.instance [None req-2c5560ad-cf6e-4610-9405-b8fb71491818 cd2830ea97e14a6a875d93683bcf1ddb 546157e74cec441b84dcf0074737c31f - - default default] Lazy-loading 'pci_devices' on Instance uuid 10c5acec-9e20-431b-a467-d54b7acbabfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.827 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846049.8270304, 10c5acec-9e20-431b-a467-d54b7acbabfd => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.827 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] VM Paused (Lifecycle Event)
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.850 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.855 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:07:29 np0005473739 nova_compute[259550]: 2025-10-07 14:07:29.880 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] During sync_power_state the instance has a pending task (suspending). Skip.
Oct  7 10:07:30 np0005473739 podman[289399]: 2025-10-07 14:07:30.154862763 +0000 UTC m=+0.139047327 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  7 10:07:30 np0005473739 nova_compute[259550]: 2025-10-07 14:07:30.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 322961408
Oct  7 10:07:30 np0005473739 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Deactivated successfully.
Oct  7 10:07:30 np0005473739 systemd[1]: machine-qemu\x2d22\x2dinstance\x2d00000013.scope: Consumed 3.739s CPU time.
Oct  7 10:07:30 np0005473739 systemd-machined[214580]: Machine qemu-22-instance-00000013 terminated.
Oct  7 10:07:30 np0005473739 nova_compute[259550]: 2025-10-07 14:07:30.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.012 2 DEBUG nova.compute.manager [None req-2c5560ad-cf6e-4610-9405-b8fb71491818 cd2830ea97e14a6a875d93683bcf1ddb 546157e74cec441b84dcf0074737c31f - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.072 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.073 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.073 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.073 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.073 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.075 2 INFO nova.compute.manager [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Terminating instance#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.076 2 DEBUG nova.compute.manager [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:07:31 np0005473739 kernel: tap78dbc059-18 (unregistering): left promiscuous mode
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.188 2 DEBUG nova.compute.manager [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.189 2 DEBUG oslo_concurrency.lockutils [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.189 2 DEBUG oslo_concurrency.lockutils [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.189 2 DEBUG oslo_concurrency.lockutils [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.190 2 DEBUG nova.compute.manager [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] No waiting events found dispatching network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.190 2 WARNING nova.compute.manager [req-6517e8a6-6661-4886-8260-1158da206801 req-83c331ab-2498-4b95-a539-ed5a14a1ea6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received unexpected event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:07:31 np0005473739 NetworkManager[44949]: <info>  [1759846051.1914] device (tap78dbc059-18): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:07:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:31Z|00133|binding|INFO|Releasing lport 78dbc059-1828-41b6-84fa-bf4ac0b31103 from this chassis (sb_readonly=0)
Oct  7 10:07:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:31Z|00134|binding|INFO|Setting lport 78dbc059-1828-41b6-84fa-bf4ac0b31103 down in Southbound
Oct  7 10:07:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:07:31Z|00135|binding|INFO|Removing iface tap78dbc059-18 ovn-installed in OVS
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:31 np0005473739 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000012.scope: Deactivated successfully.
Oct  7 10:07:31 np0005473739 systemd[1]: machine-qemu\x2d23\x2dinstance\x2d00000012.scope: Consumed 2.847s CPU time.
Oct  7 10:07:31 np0005473739 systemd-machined[214580]: Machine qemu-23-instance-00000012 terminated.
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.329 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8e:a8:72 10.100.0.6'], port_security=['fa:16:3e:8e:a8:72 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'dfe85d14-0395-4f7f-8bc1-f536aefe2ffc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '15d729b9c0bc4738b4f887d6b764fb5a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad09d09b-a958-4ea8-8f7f-2304bf5c2ff2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0e5e58e1-0685-45b5-97d6-a48d03b5304d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=78dbc059-1828-41b6-84fa-bf4ac0b31103) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.331 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 78dbc059-1828-41b6-84fa-bf4ac0b31103 in datapath 2f4de79f-442a-4f3f-b7b5-11fe265c4f7c unbound from our chassis#033[00m
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.332 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.333 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e25fae2b-6e81-438a-adc0-2625e6944035]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.334 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c namespace which is not needed anymore#033[00m
Oct  7 10:07:31 np0005473739 podman[289435]: 2025-10-07 14:07:31.405017445 +0000 UTC m=+0.078581351 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:07:31 np0005473739 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [NOTICE]   (289385) : haproxy version is 2.8.14-c23fe91
Oct  7 10:07:31 np0005473739 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [NOTICE]   (289385) : path to executable is /usr/sbin/haproxy
Oct  7 10:07:31 np0005473739 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [WARNING]  (289385) : Exiting Master process...
Oct  7 10:07:31 np0005473739 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [ALERT]    (289385) : Current worker (289387) exited with code 143 (Terminated)
Oct  7 10:07:31 np0005473739 neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c[289379]: [WARNING]  (289385) : All workers exited. Exiting... (0)
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.512 2 INFO nova.virt.libvirt.driver [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Instance destroyed successfully.#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.512 2 DEBUG nova.objects.instance [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lazy-loading 'resources' on Instance uuid dfe85d14-0395-4f7f-8bc1-f536aefe2ffc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:31 np0005473739 systemd[1]: libpod-76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32.scope: Deactivated successfully.
Oct  7 10:07:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1209: 305 pgs: 305 active+clean; 206 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 5.6 MiB/s wr, 230 op/s
Oct  7 10:07:31 np0005473739 podman[289472]: 2025-10-07 14:07:31.522461964 +0000 UTC m=+0.078386186 container died 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.547 2 DEBUG nova.virt.libvirt.vif [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:07:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesNegativeTestJSON-server-926892453',display_name='tempest-ImagesNegativeTestJSON-server-926892453',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesnegativetestjson-server-926892453',id=18,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:07:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='15d729b9c0bc4738b4f887d6b764fb5a',ramdisk_id='',reservation_id='r-z35xzswz',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesNegativeTestJSON-151850229',owner_user_name='tempest-ImagesNegativeTestJSON-151850229-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:07:29Z,user_data=None,user_id='8977e4fa4adb42bfae4fe2be5d339769',uuid=dfe85d14-0395-4f7f-8bc1-f536aefe2ffc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.548 2 DEBUG nova.network.os_vif_util [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converting VIF {"id": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "address": "fa:16:3e:8e:a8:72", "network": {"id": "2f4de79f-442a-4f3f-b7b5-11fe265c4f7c", "bridge": "br-int", "label": "tempest-ImagesNegativeTestJSON-921946818-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "15d729b9c0bc4738b4f887d6b764fb5a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78dbc059-18", "ovs_interfaceid": "78dbc059-1828-41b6-84fa-bf4ac0b31103", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.549 2 DEBUG nova.network.os_vif_util [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.549 2 DEBUG os_vif [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.553 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78dbc059-18, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.561 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.566 2 INFO os_vif [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8e:a8:72,bridge_name='br-int',has_traffic_filtering=True,id=78dbc059-1828-41b6-84fa-bf4ac0b31103,network=Network(2f4de79f-442a-4f3f-b7b5-11fe265c4f7c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78dbc059-18')#033[00m
Oct  7 10:07:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32-userdata-shm.mount: Deactivated successfully.
Oct  7 10:07:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-aed952b4b685f9b30f7b598687d00937742144bbb550bb6c55e216cf5c4aede9-merged.mount: Deactivated successfully.
Oct  7 10:07:31 np0005473739 podman[289472]: 2025-10-07 14:07:31.637207661 +0000 UTC m=+0.193131883 container cleanup 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:07:31 np0005473739 systemd[1]: libpod-conmon-76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32.scope: Deactivated successfully.
Oct  7 10:07:31 np0005473739 podman[289532]: 2025-10-07 14:07:31.726146489 +0000 UTC m=+0.057042536 container remove 76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.736 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[243a65fd-22d1-432e-b1ae-39f740ff2779]: (4, ('Tue Oct  7 02:07:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c (76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32)\n76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32\nTue Oct  7 02:07:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c (76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32)\n76548fb6b29f75b599e32d133d950fe5d7d3ac73456cbe32c56b47a2a1305b32\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.738 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[01448963-c800-42b4-a3e8-d65349e4d211]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.740 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2f4de79f-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:07:31 np0005473739 kernel: tap2f4de79f-40: left promiscuous mode
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.747 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd595a9-b04e-4a1f-a7ea-f10cb99dac19]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:31 np0005473739 nova_compute[259550]: 2025-10-07 14:07:31.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.780 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3d503b9a-d71f-4c96-8faa-c6914774105b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.782 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a90c18c0-1e20-4903-908a-910b16a9f29b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.806 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e8a00fdd-4f7c-4b2d-ac6f-b49e5c76b262]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 663005, 'reachable_time': 31956, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 289548, 'error': None, 'target': 'ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:31 np0005473739 systemd[1]: run-netns-ovnmeta\x2d2f4de79f\x2d442a\x2d4f3f\x2db7b5\x2d11fe265c4f7c.mount: Deactivated successfully.
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.809 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-2f4de79f-442a-4f3f-b7b5-11fe265c4f7c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:07:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:31.809 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[1a571427-564b-4bef-a79f-a54681e218f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e134 do_prune osdmap full prune enabled
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 e135: 3 total, 3 up, 3 in
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e135: 3 total, 3 up, 3 in
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #57. Immutable memtables: 0.
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.842801) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 29] Flushing memtable with next log file: 57
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051842864, "job": 29, "event": "flush_started", "num_memtables": 1, "num_entries": 1362, "num_deletes": 251, "total_data_size": 1942667, "memory_usage": 1979456, "flush_reason": "Manual Compaction"}
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 29] Level-0 flush table #58: started
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051860949, "cf_name": "default", "job": 29, "event": "table_file_creation", "file_number": 58, "file_size": 1911230, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 24297, "largest_seqno": 25658, "table_properties": {"data_size": 1904831, "index_size": 3538, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1797, "raw_key_size": 14062, "raw_average_key_size": 20, "raw_value_size": 1891781, "raw_average_value_size": 2718, "num_data_blocks": 158, "num_entries": 696, "num_filter_entries": 696, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759845930, "oldest_key_time": 1759845930, "file_creation_time": 1759846051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 58, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 29] Flush lasted 18257 microseconds, and 6911 cpu microseconds.
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.861064) [db/flush_job.cc:967] [default] [JOB 29] Level-0 flush table #58: 1911230 bytes OK
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.861086) [db/memtable_list.cc:519] [default] Level-0 commit table #58 started
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.862853) [db/memtable_list.cc:722] [default] Level-0 commit table #58: memtable #1 done
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.862897) EVENT_LOG_v1 {"time_micros": 1759846051862887, "job": 29, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.862921) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 29] Try to delete WAL files size 1936530, prev total WAL file size 1936530, number of live WAL files 2.
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000054.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.863981) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032303038' seq:72057594037927935, type:22 .. '7061786F730032323630' seq:0, type:0; will stop at (end)
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 30] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 29 Base level 0, inputs: [58(1866KB)], [56(6890KB)]
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051864013, "job": 30, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [58], "files_L6": [56], "score": -1, "input_data_size": 8967292, "oldest_snapshot_seqno": -1}
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 30] Generated table #59: 4725 keys, 7222282 bytes, temperature: kUnknown
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051917657, "cf_name": "default", "job": 30, "event": "table_file_creation", "file_number": 59, "file_size": 7222282, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7190952, "index_size": 18420, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 11845, "raw_key_size": 118457, "raw_average_key_size": 25, "raw_value_size": 7105868, "raw_average_value_size": 1503, "num_data_blocks": 760, "num_entries": 4725, "num_filter_entries": 4725, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846051, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 59, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.918019) [db/compaction/compaction_job.cc:1663] [default] [JOB 30] Compacted 1@0 + 1@6 files to L6 => 7222282 bytes
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.920101) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 166.9 rd, 134.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 6.7 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(8.5) write-amplify(3.8) OK, records in: 5243, records dropped: 518 output_compression: NoCompression
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.920144) EVENT_LOG_v1 {"time_micros": 1759846051920122, "job": 30, "event": "compaction_finished", "compaction_time_micros": 53742, "compaction_time_cpu_micros": 17967, "output_level": 6, "num_output_files": 1, "total_output_size": 7222282, "num_input_records": 5243, "num_output_records": 4725, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000058.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051920802, "job": 30, "event": "table_file_deletion", "file_number": 58}
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000056.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846051922230, "job": 30, "event": "table_file_deletion", "file_number": 56}
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.863879) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.922354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.922363) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.922366) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.922370) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:07:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:07:31.922373) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.121 2 INFO nova.virt.libvirt.driver [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Deleting instance files /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_del#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.122 2 INFO nova.virt.libvirt.driver [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Deletion of /var/lib/nova/instances/dfe85d14-0395-4f7f-8bc1-f536aefe2ffc_del complete#033[00m
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001440965439213388 of space, bias 1.0, pg target 0.4322896317640164 quantized to 32 (current 32)
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.283 2 INFO nova.compute.manager [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Took 1.21 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.283 2 DEBUG oslo.service.loopingcall [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.283 2 DEBUG nova.compute.manager [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.283 2 DEBUG nova.network.neutron [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.495 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "10c5acec-9e20-431b-a467-d54b7acbabfd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.495 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.495 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "10c5acec-9e20-431b-a467-d54b7acbabfd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.496 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.496 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.497 2 INFO nova.compute.manager [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Terminating instance#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.497 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "refresh_cache-10c5acec-9e20-431b-a467-d54b7acbabfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.498 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquired lock "refresh_cache-10c5acec-9e20-431b-a467-d54b7acbabfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.498 2 DEBUG nova.network.neutron [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:07:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:07:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4039940584' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:07:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:07:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4039940584' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:07:32 np0005473739 nova_compute[259550]: 2025-10-07 14:07:32.719 2 DEBUG nova.network.neutron [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.004 2 DEBUG nova.network.neutron [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.108 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Releasing lock "refresh_cache-10c5acec-9e20-431b-a467-d54b7acbabfd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.109 2 DEBUG nova.compute.manager [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.120 2 INFO nova.virt.libvirt.driver [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance destroyed successfully.#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.121 2 DEBUG nova.objects.instance [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'resources' on Instance uuid 10c5acec-9e20-431b-a467-d54b7acbabfd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.277 2 DEBUG nova.network.neutron [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.404 2 INFO nova.compute.manager [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Took 1.12 seconds to deallocate network for instance.#033[00m
Oct  7 10:07:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1211: 305 pgs: 305 active+clean; 206 MiB data, 392 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.5 MiB/s wr, 218 op/s
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.658 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.658 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.694 2 INFO nova.virt.libvirt.driver [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Deleting instance files /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd_del#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.695 2 INFO nova.virt.libvirt.driver [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Deletion of /var/lib/nova/instances/10c5acec-9e20-431b-a467-d54b7acbabfd_del complete#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.729 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846038.7282531, ddf09c33-d956-404b-a5d8-44a3727f9a3b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.730 2 INFO nova.compute.manager [-] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.837 2 DEBUG nova.compute.manager [None req-fc288ad5-eb1b-41ae-9ee1-2b4b946a7b6d - - - - - -] [instance: ddf09c33-d956-404b-a5d8-44a3727f9a3b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.880 2 INFO nova.compute.manager [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Took 0.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.880 2 DEBUG oslo.service.loopingcall [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.881 2 DEBUG nova.compute.manager [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.881 2 DEBUG nova.network.neutron [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.944 2 DEBUG nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-vif-unplugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.945 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.945 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.945 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.945 2 DEBUG nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] No waiting events found dispatching network-vif-unplugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.946 2 WARNING nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received unexpected event network-vif-unplugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.946 2 DEBUG nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.946 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.946 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.947 2 DEBUG oslo_concurrency.lockutils [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.947 2 DEBUG nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] No waiting events found dispatching network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:07:33 np0005473739 nova_compute[259550]: 2025-10-07 14:07:33.947 2 WARNING nova.compute.manager [req-d3056693-8e4f-44fb-9d27-c267f2b81008 req-639689f5-761d-4ba2-bed8-a0206499c4a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received unexpected event network-vif-plugged-78dbc059-1828-41b6-84fa-bf4ac0b31103 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:07:34 np0005473739 nova_compute[259550]: 2025-10-07 14:07:34.663 2 DEBUG oslo_concurrency.processutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:34 np0005473739 nova_compute[259550]: 2025-10-07 14:07:34.713 2 DEBUG nova.network.neutron [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:07:34 np0005473739 nova_compute[259550]: 2025-10-07 14:07:34.732 2 DEBUG nova.network.neutron [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:07:34 np0005473739 nova_compute[259550]: 2025-10-07 14:07:34.749 2 INFO nova.compute.manager [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Took 0.87 seconds to deallocate network for instance.#033[00m
Oct  7 10:07:34 np0005473739 nova_compute[259550]: 2025-10-07 14:07:34.805 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3849188411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.121 2 DEBUG oslo_concurrency.processutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.128 2 DEBUG nova.compute.provider_tree [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.145 2 DEBUG nova.scheduler.client.report [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.168 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.172 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.367s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.217 2 INFO nova.scheduler.client.report [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Deleted allocations for instance dfe85d14-0395-4f7f-8bc1-f536aefe2ffc#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.274 2 DEBUG oslo_concurrency.processutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.313 2 DEBUG oslo_concurrency.lockutils [None req-c728488a-08a2-4269-b50c-33235ab6460e 8977e4fa4adb42bfae4fe2be5d339769 15d729b9c0bc4738b4f887d6b764fb5a - - default default] Lock "dfe85d14-0395-4f7f-8bc1-f536aefe2ffc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.240s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1212: 305 pgs: 305 active+clean; 145 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.9 MiB/s wr, 325 op/s
Oct  7 10:07:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:07:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051418325' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.739 2 DEBUG oslo_concurrency.processutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.746 2 DEBUG nova.compute.provider_tree [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.759 2 DEBUG nova.scheduler.client.report [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.782 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.807 2 INFO nova.scheduler.client.report [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Deleted allocations for instance 10c5acec-9e20-431b-a467-d54b7acbabfd#033[00m
Oct  7 10:07:35 np0005473739 nova_compute[259550]: 2025-10-07 14:07:35.884 2 DEBUG oslo_concurrency.lockutils [None req-a7b597b0-c8f6-453b-9500-937a29f3528b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "10c5acec-9e20-431b-a467-d54b7acbabfd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.389s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.019 2 DEBUG nova.compute.manager [req-3a64622f-0f2b-4f6a-966a-094d0bc91994 req-4561319d-52f7-46d6-85d9-4012f4495b2b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Received event network-vif-deleted-78dbc059-1828-41b6-84fa-bf4ac0b31103 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.146 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.147 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.147 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "809b049f-447c-4cdd-b8d2-8325f6d3b576-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.147 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.147 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.149 2 INFO nova.compute.manager [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Terminating instance#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.150 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "refresh_cache-809b049f-447c-4cdd-b8d2-8325f6d3b576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.150 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquired lock "refresh_cache-809b049f-447c-4cdd-b8d2-8325f6d3b576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.150 2 DEBUG nova.network.neutron [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.359 2 DEBUG nova.network.neutron [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.589 2 DEBUG nova.network.neutron [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.784 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Releasing lock "refresh_cache-809b049f-447c-4cdd-b8d2-8325f6d3b576" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:07:36 np0005473739 nova_compute[259550]: 2025-10-07 14:07:36.785 2 DEBUG nova.compute.manager [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:07:37 np0005473739 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000011.scope: Deactivated successfully.
Oct  7 10:07:37 np0005473739 systemd[1]: machine-qemu\x2d21\x2dinstance\x2d00000011.scope: Consumed 13.721s CPU time.
Oct  7 10:07:37 np0005473739 systemd-machined[214580]: Machine qemu-21-instance-00000011 terminated.
Oct  7 10:07:37 np0005473739 nova_compute[259550]: 2025-10-07 14:07:37.235 2 INFO nova.virt.libvirt.driver [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance destroyed successfully.#033[00m
Oct  7 10:07:37 np0005473739 nova_compute[259550]: 2025-10-07 14:07:37.236 2 DEBUG nova.objects.instance [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lazy-loading 'resources' on Instance uuid 809b049f-447c-4cdd-b8d2-8325f6d3b576 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:07:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1213: 305 pgs: 305 active+clean; 121 MiB data, 386 MiB used, 60 GiB / 60 GiB avail; 5.0 MiB/s rd, 2.6 MiB/s wr, 329 op/s
Oct  7 10:07:38 np0005473739 nova_compute[259550]: 2025-10-07 14:07:38.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1214: 305 pgs: 305 active+clean; 99 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.3 MiB/s wr, 234 op/s
Oct  7 10:07:40 np0005473739 nova_compute[259550]: 2025-10-07 14:07:40.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:07:40 np0005473739 nova_compute[259550]: 2025-10-07 14:07:40.670 2 INFO nova.virt.libvirt.driver [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Deleting instance files /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576_del#033[00m
Oct  7 10:07:40 np0005473739 nova_compute[259550]: 2025-10-07 14:07:40.671 2 INFO nova.virt.libvirt.driver [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Deletion of /var/lib/nova/instances/809b049f-447c-4cdd-b8d2-8325f6d3b576_del complete#033[00m
Oct  7 10:07:40 np0005473739 nova_compute[259550]: 2025-10-07 14:07:40.790 2 INFO nova.compute.manager [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Took 4.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:07:40 np0005473739 nova_compute[259550]: 2025-10-07 14:07:40.791 2 DEBUG oslo.service.loopingcall [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:07:40 np0005473739 nova_compute[259550]: 2025-10-07 14:07:40.791 2 DEBUG nova.compute.manager [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:07:40 np0005473739 nova_compute[259550]: 2025-10-07 14:07:40.791 2 DEBUG nova.network.neutron [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:07:40 np0005473739 nova_compute[259550]: 2025-10-07 14:07:40.992 2 DEBUG nova.network.neutron [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.011 2 DEBUG nova.network.neutron [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.025 2 INFO nova.compute.manager [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Took 0.23 seconds to deallocate network for instance.#033[00m
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.095 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.095 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.168 2 DEBUG oslo_concurrency.processutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:07:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1215: 305 pgs: 305 active+clean; 65 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 113 KiB/s wr, 202 op/s
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087273953' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.681 2 DEBUG oslo_concurrency.processutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.690 2 DEBUG nova.compute.provider_tree [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.716 2 DEBUG nova.scheduler.client.report [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.734 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.758 2 INFO nova.scheduler.client.report [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Deleted allocations for instance 809b049f-447c-4cdd-b8d2-8325f6d3b576#033[00m
Oct  7 10:07:41 np0005473739 nova_compute[259550]: 2025-10-07 14:07:41.818 2 DEBUG oslo_concurrency.lockutils [None req-a664f11c-5ca6-484e-8c2e-6bdcb16f114b 027a68b305ed4ef799375643ce9e5831 64dfda49c33d49eb9d343ba7a8f90d6d - - default default] Lock "809b049f-447c-4cdd-b8d2-8325f6d3b576" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.672s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:07:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1216: 305 pgs: 305 active+clean; 65 MiB data, 347 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 96 KiB/s wr, 173 op/s
Oct  7 10:07:45 np0005473739 nova_compute[259550]: 2025-10-07 14:07:45.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1217: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 95 KiB/s wr, 175 op/s
Oct  7 10:07:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:07:46 np0005473739 nova_compute[259550]: 2025-10-07 14:07:46.015 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846051.0126994, 10c5acec-9e20-431b-a467-d54b7acbabfd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:07:46 np0005473739 nova_compute[259550]: 2025-10-07 14:07:46.016 2 INFO nova.compute.manager [-] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:07:46 np0005473739 nova_compute[259550]: 2025-10-07 14:07:46.057 2 DEBUG nova.compute.manager [None req-03a2cf04-50d5-4acf-b6d7-c9d360356f63 - - - - - -] [instance: 10c5acec-9e20-431b-a467-d54b7acbabfd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:07:46 np0005473739 nova_compute[259550]: 2025-10-07 14:07:46.510 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846051.508801, dfe85d14-0395-4f7f-8bc1-f536aefe2ffc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:07:46 np0005473739 nova_compute[259550]: 2025-10-07 14:07:46.510 2 INFO nova.compute.manager [-] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:07:46 np0005473739 nova_compute[259550]: 2025-10-07 14:07:46.536 2 DEBUG nova.compute.manager [None req-b0912971-47bf-4c3f-b365-d27e13dc5efb - - - - - -] [instance: dfe85d14-0395-4f7f-8bc1-f536aefe2ffc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:07:46 np0005473739 nova_compute[259550]: 2025-10-07 14:07:46.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:07:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1218: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 50 KiB/s rd, 48 KiB/s wr, 45 op/s
Oct  7 10:07:49 np0005473739 podman[289659]: 2025-10-07 14:07:49.091072645 +0000 UTC m=+0.072173291 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  7 10:07:49 np0005473739 podman[289658]: 2025-10-07 14:07:49.091173707 +0000 UTC m=+0.072521799 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:07:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1219: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:07:50 np0005473739 nova_compute[259550]: 2025-10-07 14:07:50.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:07:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1220: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 11 KiB/s rd, 938 B/s wr, 16 op/s
Oct  7 10:07:51 np0005473739 nova_compute[259550]: 2025-10-07 14:07:51.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e135 do_prune osdmap full prune enabled
Oct  7 10:07:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e136 e136: 3 total, 3 up, 3 in
Oct  7 10:07:51 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e136: 3 total, 3 up, 3 in
Oct  7 10:07:52 np0005473739 nova_compute[259550]: 2025-10-07 14:07:52.234 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846057.2324383, 809b049f-447c-4cdd-b8d2-8325f6d3b576 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:07:52 np0005473739 nova_compute[259550]: 2025-10-07 14:07:52.235 2 INFO nova.compute.manager [-] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] VM Stopped (Lifecycle Event)
Oct  7 10:07:52 np0005473739 nova_compute[259550]: 2025-10-07 14:07:52.267 2 DEBUG nova.compute.manager [None req-cd2281ac-64f8-4554-99e5-c585c87c9d99 - - - - - -] [instance: 809b049f-447c-4cdd-b8d2-8325f6d3b576] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:07:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1222: 305 pgs: 305 active+clean; 41 MiB data, 319 MiB used, 60 GiB / 60 GiB avail; 4.2 KiB/s rd, 1.1 KiB/s wr, 8 op/s
Oct  7 10:07:55 np0005473739 nova_compute[259550]: 2025-10-07 14:07:55.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1223: 305 pgs: 305 active+clean; 41 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  7 10:07:55 np0005473739 nova_compute[259550]: 2025-10-07 14:07:55.600 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:55 np0005473739 nova_compute[259550]: 2025-10-07 14:07:55.601 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:55 np0005473739 nova_compute[259550]: 2025-10-07 14:07:55.621 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:07:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:07:55 np0005473739 nova_compute[259550]: 2025-10-07 14:07:55.706 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:55 np0005473739 nova_compute[259550]: 2025-10-07 14:07:55.707 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:55 np0005473739 nova_compute[259550]: 2025-10-07 14:07:55.715 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:07:55 np0005473739 nova_compute[259550]: 2025-10-07 14:07:55.715 2 INFO nova.compute.claims [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:07:55 np0005473739 nova_compute[259550]: 2025-10-07 14:07:55.821 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:07:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:56.236 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:07:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:07:56.237 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:07:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1487533641' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.352 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.531s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.360 2 DEBUG nova.compute.provider_tree [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.380 2 DEBUG nova.scheduler.client.report [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.402 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.403 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.449 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.449 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.475 2 INFO nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.499 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.631 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.632 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.633 2 INFO nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Creating image(s)
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.659 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.684 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.709 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.715 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.754 2 DEBUG nova.policy [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b1ccfdaee9324154bed6828c0fa32e6d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '63a8d182eca84056a1214aff59d1a164', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.793 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.794 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.795 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.795 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.821 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:07:56 np0005473739 nova_compute[259550]: 2025-10-07 14:07:56.826 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:07:57 np0005473739 nova_compute[259550]: 2025-10-07 14:07:57.279 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:07:57 np0005473739 nova_compute[259550]: 2025-10-07 14:07:57.340 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] resizing rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:07:57 np0005473739 nova_compute[259550]: 2025-10-07 14:07:57.459 2 DEBUG nova.objects.instance [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:07:57 np0005473739 nova_compute[259550]: 2025-10-07 14:07:57.483 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:07:57 np0005473739 nova_compute[259550]: 2025-10-07 14:07:57.484 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Ensure instance console log exists: /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:07:57 np0005473739 nova_compute[259550]: 2025-10-07 14:07:57.485 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:07:57 np0005473739 nova_compute[259550]: 2025-10-07 14:07:57.486 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:07:57 np0005473739 nova_compute[259550]: 2025-10-07 14:07:57.487 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:07:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1224: 305 pgs: 305 active+clean; 41 MiB data, 316 MiB used, 60 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  7 10:07:58 np0005473739 nova_compute[259550]: 2025-10-07 14:07:58.167 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Successfully created port: d23018fc-ec2d-4a03-8e09-88c7ecb34f8b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:07:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1225: 305 pgs: 305 active+clean; 68 MiB data, 332 MiB used, 60 GiB / 60 GiB avail; 38 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Oct  7 10:08:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:00.041 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:08:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:00.041 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:08:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:00.042 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:08:00 np0005473739 nova_compute[259550]: 2025-10-07 14:08:00.222 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Successfully updated port: d23018fc-ec2d-4a03-8e09-88c7ecb34f8b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:08:00 np0005473739 nova_compute[259550]: 2025-10-07 14:08:00.254 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:08:00 np0005473739 nova_compute[259550]: 2025-10-07 14:08:00.255 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquired lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:08:00 np0005473739 nova_compute[259550]: 2025-10-07 14:08:00.255 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:08:00 np0005473739 nova_compute[259550]: 2025-10-07 14:08:00.483 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:08:00 np0005473739 nova_compute[259550]: 2025-10-07 14:08:00.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:08:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e136 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e136 do_prune osdmap full prune enabled
Oct  7 10:08:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e137 e137: 3 total, 3 up, 3 in
Oct  7 10:08:00 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e137: 3 total, 3 up, 3 in
Oct  7 10:08:01 np0005473739 podman[289887]: 2025-10-07 14:08:01.096556921 +0000 UTC m=+0.089682278 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  7 10:08:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1227: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.2 MiB/s wr, 58 op/s
Oct  7 10:08:01 np0005473739 nova_compute[259550]: 2025-10-07 14:08:01.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:01 np0005473739 nova_compute[259550]: 2025-10-07 14:08:01.650 2 DEBUG nova.compute.manager [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-changed-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:01 np0005473739 nova_compute[259550]: 2025-10-07 14:08:01.651 2 DEBUG nova.compute.manager [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Refreshing instance network info cache due to event network-changed-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:08:01 np0005473739 nova_compute[259550]: 2025-10-07 14:08:01.651 2 DEBUG oslo_concurrency.lockutils [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.066 2 DEBUG nova.network.neutron [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updating instance_info_cache with network_info: [{"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:02 np0005473739 podman[289915]: 2025-10-07 14:08:02.081043783 +0000 UTC m=+0.061380232 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.239 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Releasing lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.239 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Instance network_info: |[{"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.240 2 DEBUG oslo_concurrency.lockutils [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.240 2 DEBUG nova.network.neutron [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Refreshing network info cache for port d23018fc-ec2d-4a03-8e09-88c7ecb34f8b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.242 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Start _get_guest_xml network_info=[{"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.246 2 WARNING nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.251 2 DEBUG nova.virt.libvirt.host [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.252 2 DEBUG nova.virt.libvirt.host [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.256 2 DEBUG nova.virt.libvirt.host [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.257 2 DEBUG nova.virt.libvirt.host [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.257 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.257 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.258 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.259 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.259 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.259 2 DEBUG nova.virt.hardware [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.261 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.676 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.677 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:08:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1523630697' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.750 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.771 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.775 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:02 np0005473739 nova_compute[259550]: 2025-10-07 14:08:02.878 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.103 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.104 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.113 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.113 2 INFO nova.compute.claims [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:08:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:08:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3694626128' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:08:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:03.239 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.251 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.253 2 DEBUG nova.virt.libvirt.vif [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:07:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-370744096',display_name='tempest-ImagesOneServerTestJSON-server-370744096',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-370744096',id=20,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63a8d182eca84056a1214aff59d1a164',ramdisk_id='',reservation_id='r-4hhekc2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-961060000',owner_user_name='tempest-ImagesOneServerTestJ
SON-961060000-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:07:56Z,user_data=None,user_id='b1ccfdaee9324154bed6828c0fa32e6d',uuid=4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.253 2 DEBUG nova.network.os_vif_util [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converting VIF {"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.254 2 DEBUG nova.network.os_vif_util [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.255 2 DEBUG nova.objects.instance [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.409 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  <uuid>4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1</uuid>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  <name>instance-00000014</name>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <nova:name>tempest-ImagesOneServerTestJSON-server-370744096</nova:name>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:08:02</nova:creationTime>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <nova:user uuid="b1ccfdaee9324154bed6828c0fa32e6d">tempest-ImagesOneServerTestJSON-961060000-project-member</nova:user>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <nova:project uuid="63a8d182eca84056a1214aff59d1a164">tempest-ImagesOneServerTestJSON-961060000</nova:project>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <nova:port uuid="d23018fc-ec2d-4a03-8e09-88c7ecb34f8b">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <entry name="serial">4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1</entry>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <entry name="uuid">4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1</entry>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:84:73:3d"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <target dev="tapd23018fc-ec"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/console.log" append="off"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:08:03 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:08:03 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:08:03 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:08:03 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.411 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Preparing to wait for external event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.412 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.412 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.412 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.413 2 DEBUG nova.virt.libvirt.vif [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:07:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-370744096',display_name='tempest-ImagesOneServerTestJSON-server-370744096',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-370744096',id=20,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='63a8d182eca84056a1214aff59d1a164',ramdisk_id='',reservation_id='r-4hhekc2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerTestJSON-961060000',owner_user_name='tempest-ImagesOneS
erverTestJSON-961060000-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:07:56Z,user_data=None,user_id='b1ccfdaee9324154bed6828c0fa32e6d',uuid=4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.413 2 DEBUG nova.network.os_vif_util [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converting VIF {"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.414 2 DEBUG nova.network.os_vif_util [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.415 2 DEBUG os_vif [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.417 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.422 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd23018fc-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.423 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd23018fc-ec, col_values=(('external_ids', {'iface-id': 'd23018fc-ec2d-4a03-8e09-88c7ecb34f8b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:73:3d', 'vm-uuid': '4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:03 np0005473739 NetworkManager[44949]: <info>  [1759846083.4263] manager: (tapd23018fc-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/69)
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.433 2 INFO os_vif [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec')#033[00m
Oct  7 10:08:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1228: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 39 KiB/s rd, 2.1 MiB/s wr, 57 op/s
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.692 2 DEBUG nova.network.neutron [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updated VIF entry in instance network info cache for port d23018fc-ec2d-4a03-8e09-88c7ecb34f8b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.693 2 DEBUG nova.network.neutron [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updating instance_info_cache with network_info: [{"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.698 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.699 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.699 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] No VIF found with MAC fa:16:3e:84:73:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.700 2 INFO nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Using config drive#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.723 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.733 2 DEBUG oslo_concurrency.lockutils [req-77afa52d-fbe1-4f21-9d48-27aa2aac2766 req-314d6820-498c-44c6-abd9-569729635bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:08:03 np0005473739 nova_compute[259550]: 2025-10-07 14:08:03.762 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:08:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3253482312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.292 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.299 2 DEBUG nova.compute.provider_tree [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.333 2 INFO nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Creating config drive at /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.337 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2j_sxebh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.396 2 DEBUG nova.scheduler.client.report [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.471 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2j_sxebh" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.501 2 DEBUG nova.storage.rbd_utils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] rbd image 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.506 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.590 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.486s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.592 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.676 2 DEBUG oslo_concurrency.processutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.677 2 INFO nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Deleting local config drive /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1/disk.config because it was imported into RBD.#033[00m
Oct  7 10:08:04 np0005473739 kernel: tapd23018fc-ec: entered promiscuous mode
Oct  7 10:08:04 np0005473739 NetworkManager[44949]: <info>  [1759846084.7464] manager: (tapd23018fc-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/70)
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:04Z|00136|binding|INFO|Claiming lport d23018fc-ec2d-4a03-8e09-88c7ecb34f8b for this chassis.
Oct  7 10:08:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:04Z|00137|binding|INFO|d23018fc-ec2d-4a03-8e09-88c7ecb34f8b: Claiming fa:16:3e:84:73:3d 10.100.0.3
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:04 np0005473739 systemd-udevd[290092]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:08:04 np0005473739 systemd-machined[214580]: New machine qemu-24-instance-00000014.
Oct  7 10:08:04 np0005473739 NetworkManager[44949]: <info>  [1759846084.7963] device (tapd23018fc-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:08:04 np0005473739 NetworkManager[44949]: <info>  [1759846084.7972] device (tapd23018fc-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:08:04 np0005473739 systemd[1]: Started Virtual Machine qemu-24-instance-00000014.
Oct  7 10:08:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:04Z|00138|binding|INFO|Setting lport d23018fc-ec2d-4a03-8e09-88c7ecb34f8b ovn-installed in OVS
Oct  7 10:08:04 np0005473739 nova_compute[259550]: 2025-10-07 14:08:04.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:05Z|00139|binding|INFO|Setting lport d23018fc-ec2d-4a03-8e09-88c7ecb34f8b up in Southbound
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.002 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:73:3d 10.100.0.3'], port_security=['fa:16:3e:84:73:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63a8d182eca84056a1214aff59d1a164', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ee4342df-374f-4e89-b1cb-d9f5b15e7a74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70fa1751-9080-486f-a255-eba81ea8e3da, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.004 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d23018fc-ec2d-4a03-8e09-88c7ecb34f8b in datapath a90af50e-9409-4ab3-b31a-0927ca38c12d bound to our chassis#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.005 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network a90af50e-9409-4ab3-b31a-0927ca38c12d#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.021 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[62455790-6035-4078-a832-dd24d2fd7d1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.023 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapa90af50e-91 in ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.026 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapa90af50e-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.026 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8f405dc8-e0fd-4672-81f4-a6530edd90a4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.026 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc76ec8-5938-4415-9c04-f89ee6eed11d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.042 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cde663f4-6655-4788-ac47-fccf062647c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.063 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68d85992-2a58-41ac-9327-97f4adfa11f0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.103 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[83e07109-d8e8-4ab4-a75e-9fbecf74ddcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 NetworkManager[44949]: <info>  [1759846085.1118] manager: (tapa90af50e-90): new Veth device (/org/freedesktop/NetworkManager/Devices/71)
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df9fc7f0-bbcc-4dc0-956b-dc1d9d7c6e93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.117 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.117 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.152 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1338341f-971d-403b-98fb-cfe04f4f4f62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.157 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[47e39742-5fe4-4c11-86b8-dffbbf0945e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 NetworkManager[44949]: <info>  [1759846085.1838] device (tapa90af50e-90): carrier: link connected
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.193 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8b90283-687c-412a-bfc5-0afa411dce62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.215 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a581a84-4f46-4909-9109-049c00836e02]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa90af50e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:b8:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666874, 'reachable_time': 15680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290125, 'error': None, 'target': 'ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.235 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[808db8a1-9f8d-4937-a879-d54bbf3969f7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe42:b8ad'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 666874, 'tstamp': 666874}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290126, 'error': None, 'target': 'ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.256 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0446172e-4bb7-44e2-aef4-d1dfa9930348]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapa90af50e-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:42:b8:ad'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 110, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 3, 'rx_bytes': 110, 'tx_bytes': 266, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 45], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666874, 'reachable_time': 15680, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 224, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 224, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290128, 'error': None, 'target': 'ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.299 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37dcc860-14a8-4ad8-acf7-0cb1a9678f0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.313 2 INFO nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.383 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61d58030-9c80-49c2-87d0-8129cf4bf583]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.385 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa90af50e-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.385 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.385 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa90af50e-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:05 np0005473739 kernel: tapa90af50e-90: entered promiscuous mode
Oct  7 10:08:05 np0005473739 NetworkManager[44949]: <info>  [1759846085.3885] manager: (tapa90af50e-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.391 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapa90af50e-90, col_values=(('external_ids', {'iface-id': '9191c21b-68d0-487b-8ce1-9baafe080d13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:05Z|00140|binding|INFO|Releasing lport 9191c21b-68d0-487b-8ce1-9baafe080d13 from this chassis (sb_readonly=0)
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.407 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.409 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/a90af50e-9409-4ab3-b31a-0927ca38c12d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/a90af50e-9409-4ab3-b31a-0927ca38c12d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.410 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e13f88-42ce-4532-8bae-1fc1e3725a35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.411 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-a90af50e-9409-4ab3-b31a-0927ca38c12d
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/a90af50e-9409-4ab3-b31a-0927ca38c12d.pid.haproxy
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID a90af50e-9409-4ab3-b31a-0927ca38c12d
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:08:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:05.412 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'env', 'PROCESS_TAG=haproxy-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/a90af50e-9409-4ab3-b31a-0927ca38c12d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.447 2 DEBUG nova.policy [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1229: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.634 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:08:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.858 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846085.8576436, 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.860 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] VM Started (Lifecycle Event)#033[00m
Oct  7 10:08:05 np0005473739 podman[290202]: 2025-10-07 14:08:05.901070278 +0000 UTC m=+0.106918269 container create 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:08:05 np0005473739 podman[290202]: 2025-10-07 14:08:05.821658095 +0000 UTC m=+0.027506106 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.959 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.968 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846085.8589928, 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.968 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:08:05 np0005473739 systemd[1]: Started libpod-conmon-9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e.scope.
Oct  7 10:08:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.994 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:08:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/62e91291afc6a80e340c9e03107adfd8c1875ed746c7741b6e4d627197ed67bb/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:05 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.997 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:05.999 2 INFO nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Creating image(s)#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.026 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:06 np0005473739 podman[290202]: 2025-10-07 14:08:06.045834407 +0000 UTC m=+0.251682418 container init 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 10:08:06 np0005473739 podman[290202]: 2025-10-07 14:08:06.052854165 +0000 UTC m=+0.258702156 container start 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.058 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:06 np0005473739 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [NOTICE]   (290256) : New worker (290274) forked
Oct  7 10:08:06 np0005473739 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [NOTICE]   (290256) : Loading success.
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.090 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.096 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.129 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.135 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.169 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.181 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.182 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.183 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.183 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.210 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.216 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.648 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.717 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] resizing rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.826 2 DEBUG nova.objects.instance [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid cd77c7c3-e287-4a6a-b2b6-61655f604ec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.907 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.907 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Ensure instance console log exists: /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.908 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.908 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:06 np0005473739 nova_compute[259550]: 2025-10-07 14:08:06.909 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1230: 305 pgs: 305 active+clean; 88 MiB data, 337 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 2.1 MiB/s wr, 33 op/s
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.682 2 DEBUG nova.compute.manager [req-c78d2b86-2efd-4e09-ada5-4de906bd2f2d req-d758656a-71d3-4870-89b5-27519404f9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.683 2 DEBUG oslo_concurrency.lockutils [req-c78d2b86-2efd-4e09-ada5-4de906bd2f2d req-d758656a-71d3-4870-89b5-27519404f9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.683 2 DEBUG oslo_concurrency.lockutils [req-c78d2b86-2efd-4e09-ada5-4de906bd2f2d req-d758656a-71d3-4870-89b5-27519404f9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.683 2 DEBUG oslo_concurrency.lockutils [req-c78d2b86-2efd-4e09-ada5-4de906bd2f2d req-d758656a-71d3-4870-89b5-27519404f9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.683 2 DEBUG nova.compute.manager [req-c78d2b86-2efd-4e09-ada5-4de906bd2f2d req-d758656a-71d3-4870-89b5-27519404f9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Processing event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.684 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.690 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846087.6902676, 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.691 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.696 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.701 2 INFO nova.virt.libvirt.driver [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Instance spawned successfully.#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.702 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.724 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Successfully created port: 95414c38-8a03-41fc-a460-bb80d2febc10 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.915 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.921 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.926 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.927 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.927 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.927 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.928 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:07 np0005473739 nova_compute[259550]: 2025-10-07 14:08:07.928 2 DEBUG nova.virt.libvirt.driver [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:08 np0005473739 nova_compute[259550]: 2025-10-07 14:08:08.057 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:08:08 np0005473739 nova_compute[259550]: 2025-10-07 14:08:08.266 2 INFO nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 11.63 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:08:08 np0005473739 nova_compute[259550]: 2025-10-07 14:08:08.267 2 DEBUG nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:08 np0005473739 nova_compute[259550]: 2025-10-07 14:08:08.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:08 np0005473739 nova_compute[259550]: 2025-10-07 14:08:08.434 2 INFO nova.compute.manager [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 12.76 seconds to build instance.#033[00m
Oct  7 10:08:08 np0005473739 nova_compute[259550]: 2025-10-07 14:08:08.501 2 DEBUG oslo_concurrency.lockutils [None req-1dc10126-ad0d-46ba-beca-edd6ea7293a1 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.900s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1231: 305 pgs: 305 active+clean; 124 MiB data, 353 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.1 MiB/s wr, 84 op/s
Oct  7 10:08:09 np0005473739 nova_compute[259550]: 2025-10-07 14:08:09.806 2 DEBUG nova.compute.manager [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:09 np0005473739 nova_compute[259550]: 2025-10-07 14:08:09.806 2 DEBUG oslo_concurrency.lockutils [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:09 np0005473739 nova_compute[259550]: 2025-10-07 14:08:09.806 2 DEBUG oslo_concurrency.lockutils [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:09 np0005473739 nova_compute[259550]: 2025-10-07 14:08:09.806 2 DEBUG oslo_concurrency.lockutils [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:09 np0005473739 nova_compute[259550]: 2025-10-07 14:08:09.806 2 DEBUG nova.compute.manager [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] No waiting events found dispatching network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:08:09 np0005473739 nova_compute[259550]: 2025-10-07 14:08:09.807 2 WARNING nova.compute.manager [req-811e856b-9620-4558-9ab6-5a9462ae5b00 req-88bf75aa-fdfb-4f9e-9dfc-40283c192a0b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received unexpected event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b for instance with vm_state active and task_state None.#033[00m
Oct  7 10:08:10 np0005473739 nova_compute[259550]: 2025-10-07 14:08:10.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:10 np0005473739 nova_compute[259550]: 2025-10-07 14:08:10.841 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Successfully updated port: 95414c38-8a03-41fc-a460-bb80d2febc10 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:08:10 np0005473739 nova_compute[259550]: 2025-10-07 14:08:10.951 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:08:10 np0005473739 nova_compute[259550]: 2025-10-07 14:08:10.951 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:08:10 np0005473739 nova_compute[259550]: 2025-10-07 14:08:10.951 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:08:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1232: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 93 op/s
Oct  7 10:08:11 np0005473739 nova_compute[259550]: 2025-10-07 14:08:11.704 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:08:11 np0005473739 nova_compute[259550]: 2025-10-07 14:08:11.913 2 DEBUG nova.compute.manager [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-changed-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:11 np0005473739 nova_compute[259550]: 2025-10-07 14:08:11.913 2 DEBUG nova.compute.manager [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Refreshing instance network info cache due to event network-changed-95414c38-8a03-41fc-a460-bb80d2febc10. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:08:11 np0005473739 nova_compute[259550]: 2025-10-07 14:08:11.913 2 DEBUG oslo_concurrency.lockutils [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:08:12 np0005473739 nova_compute[259550]: 2025-10-07 14:08:12.891 2 DEBUG nova.network.neutron [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Updating instance_info_cache with network_info: [{"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:13 np0005473739 nova_compute[259550]: 2025-10-07 14:08:13.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1233: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct  7 10:08:15 np0005473739 nova_compute[259550]: 2025-10-07 14:08:15.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1234: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  7 10:08:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e137 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.290 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.291 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Instance network_info: |[{"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.291 2 DEBUG oslo_concurrency.lockutils [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.291 2 DEBUG nova.network.neutron [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Refreshing network info cache for port 95414c38-8a03-41fc-a460-bb80d2febc10 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.294 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Start _get_guest_xml network_info=[{"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.299 2 WARNING nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.310 2 DEBUG nova.virt.libvirt.host [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.311 2 DEBUG nova.virt.libvirt.host [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.328 2 DEBUG nova.virt.libvirt.host [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.329 2 DEBUG nova.virt.libvirt.host [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.329 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.330 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.331 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.331 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.331 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.332 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.332 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.332 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.333 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.333 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.334 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.334 2 DEBUG nova.virt.hardware [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.342 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.378 2 DEBUG nova.compute.manager [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.467 2 INFO nova.compute.manager [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] instance snapshotting#033[00m
Oct  7 10:08:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:08:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3466957009' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.818 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.843 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:16 np0005473739 nova_compute[259550]: 2025-10-07 14:08:16.849 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.061 2 INFO nova.virt.libvirt.driver [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Beginning live snapshot process#033[00m
Oct  7 10:08:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:08:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/628064836' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.306 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.309 2 DEBUG nova.virt.libvirt.vif [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1004114679',display_name='tempest-ImagesTestJSON-server-1004114679',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1004114679',id=21,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-rgihze8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagLis
t,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:05Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=cd77c7c3-e287-4a6a-b2b6-61655f604ec2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.310 2 DEBUG nova.network.os_vif_util [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.311 2 DEBUG nova.network.os_vif_util [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.312 2 DEBUG nova.objects.instance [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid cd77c7c3-e287-4a6a-b2b6-61655f604ec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.405 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  <uuid>cd77c7c3-e287-4a6a-b2b6-61655f604ec2</uuid>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  <name>instance-00000015</name>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <nova:name>tempest-ImagesTestJSON-server-1004114679</nova:name>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:08:16</nova:creationTime>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <nova:port uuid="95414c38-8a03-41fc-a460-bb80d2febc10">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <entry name="serial">cd77c7c3-e287-4a6a-b2b6-61655f604ec2</entry>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <entry name="uuid">cd77c7c3-e287-4a6a-b2b6-61655f604ec2</entry>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:34:1c:1f"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <target dev="tap95414c38-8a"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/console.log" append="off"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:08:17 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:08:17 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:08:17 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:08:17 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.412 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Preparing to wait for external event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.413 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.414 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.414 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.415 2 DEBUG nova.virt.libvirt.vif [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1004114679',display_name='tempest-ImagesTestJSON-server-1004114679',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1004114679',id=21,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-rgihze8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},t
ags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:05Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=cd77c7c3-e287-4a6a-b2b6-61655f604ec2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.415 2 DEBUG nova.network.os_vif_util [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.416 2 DEBUG nova.network.os_vif_util [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.417 2 DEBUG os_vif [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.419 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap95414c38-8a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap95414c38-8a, col_values=(('external_ids', {'iface-id': '95414c38-8a03-41fc-a460-bb80d2febc10', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:1c:1f', 'vm-uuid': 'cd77c7c3-e287-4a6a-b2b6-61655f604ec2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:17 np0005473739 NetworkManager[44949]: <info>  [1759846097.4284] manager: (tap95414c38-8a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/73)
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.437 2 INFO os_vif [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a')#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.512 2 DEBUG nova.virt.libvirt.imagebackend [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:08:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1235: 305 pgs: 305 active+clean; 134 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.658 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.659 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.659 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:34:1c:1f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.660 2 INFO nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Using config drive#033[00m
Oct  7 10:08:17 np0005473739 nova_compute[259550]: 2025-10-07 14:08:17.698 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:18 np0005473739 nova_compute[259550]: 2025-10-07 14:08:18.627 2 DEBUG nova.storage.rbd_utils [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] creating snapshot(17284dfc40ab47889b7e2dd00785c4c6) on rbd image(4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:08:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e137 do_prune osdmap full prune enabled
Oct  7 10:08:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e138 e138: 3 total, 3 up, 3 in
Oct  7 10:08:18 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e138: 3 total, 3 up, 3 in
Oct  7 10:08:18 np0005473739 nova_compute[259550]: 2025-10-07 14:08:18.764 2 DEBUG nova.storage.rbd_utils [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] cloning vms/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk@17284dfc40ab47889b7e2dd00785c4c6 to images/939a30ca-ae15-4718-a461-e26825722fa1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:08:18 np0005473739 nova_compute[259550]: 2025-10-07 14:08:18.905 2 DEBUG nova.storage.rbd_utils [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] flattening images/939a30ca-ae15-4718-a461-e26825722fa1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:08:18 np0005473739 nova_compute[259550]: 2025-10-07 14:08:18.949 2 INFO nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Creating config drive at /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config#033[00m
Oct  7 10:08:18 np0005473739 nova_compute[259550]: 2025-10-07 14:08:18.956 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6qpj939t execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.093 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6qpj939t" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.127 2 DEBUG nova.storage.rbd_utils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.131 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.263 2 DEBUG nova.storage.rbd_utils [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] removing snapshot(17284dfc40ab47889b7e2dd00785c4c6) on rbd image(4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.334 2 DEBUG oslo_concurrency.processutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.334 2 INFO nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Deleting local config drive /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2/disk.config because it was imported into RBD.#033[00m
Oct  7 10:08:19 np0005473739 kernel: tap95414c38-8a: entered promiscuous mode
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:19Z|00141|binding|INFO|Claiming lport 95414c38-8a03-41fc-a460-bb80d2febc10 for this chassis.
Oct  7 10:08:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:19Z|00142|binding|INFO|95414c38-8a03-41fc-a460-bb80d2febc10: Claiming fa:16:3e:34:1c:1f 10.100.0.11
Oct  7 10:08:19 np0005473739 NetworkManager[44949]: <info>  [1759846099.3939] manager: (tap95414c38-8a): new Tun device (/org/freedesktop/NetworkManager/Devices/74)
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:19 np0005473739 systemd-machined[214580]: New machine qemu-25-instance-00000015.
Oct  7 10:08:19 np0005473739 systemd-udevd[290679]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:08:19 np0005473739 NetworkManager[44949]: <info>  [1759846099.4777] device (tap95414c38-8a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:08:19 np0005473739 NetworkManager[44949]: <info>  [1759846099.4792] device (tap95414c38-8a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:08:19 np0005473739 systemd[1]: Started Virtual Machine qemu-25-instance-00000015.
Oct  7 10:08:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:19Z|00143|binding|INFO|Setting lport 95414c38-8a03-41fc-a460-bb80d2febc10 ovn-installed in OVS
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:19 np0005473739 podman[290658]: 2025-10-07 14:08:19.506049833 +0000 UTC m=+0.080336469 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  7 10:08:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:19Z|00144|binding|INFO|Setting lport 95414c38-8a03-41fc-a460-bb80d2febc10 up in Southbound
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.520 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:1c:1f 10.100.0.11'], port_security=['fa:16:3e:34:1c:1f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cd77c7c3-e287-4a6a-b2b6-61655f604ec2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=95414c38-8a03-41fc-a460-bb80d2febc10) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.521 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 95414c38-8a03-41fc-a460-bb80d2febc10 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb bound to our chassis#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.523 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb#033[00m
Oct  7 10:08:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1237: 305 pgs: 305 active+clean; 145 MiB data, 359 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 922 KiB/s wr, 60 op/s
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.538 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c6f4efd6-39fe-4cd2-88e9-a8a857be7942]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.540 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f80456d-d1 in ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:08:19 np0005473739 podman[290657]: 2025-10-07 14:08:19.542660361 +0000 UTC m=+0.119766412 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.542 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f80456d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.543 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6d265f8d-850b-48f7-9627-8b76b32adb85]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.544 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d901a9f2-f090-44c8-947c-54bb76a4c67c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.559 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[7cb3f793-6775-4eff-8821-b2e64d3276df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.586 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1722bbc5-65ee-44b9-ac89-b69059d8b89f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.618 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3172a22b-5193-403a-ae5b-866b36e0cc99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.626 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d727be-3daf-4777-8d4b-0f44c066206c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 NetworkManager[44949]: <info>  [1759846099.6279] manager: (tap9f80456d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/75)
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.676 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ecac6b58-4ed0-4494-83ae-e5d6cbceb49d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.679 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fd097ff1-6e7c-4111-b047-468ea41c4e33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e138 do_prune osdmap full prune enabled
Oct  7 10:08:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e139 e139: 3 total, 3 up, 3 in
Oct  7 10:08:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e139: 3 total, 3 up, 3 in
Oct  7 10:08:19 np0005473739 NetworkManager[44949]: <info>  [1759846099.7160] device (tap9f80456d-d0): carrier: link connected
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.725 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[da0279d8-291b-4798-b5f5-60f6b246efa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.751 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4d6b6c79-0dd1-4a35-8ccf-5e3aa4ab6936]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668328, 'reachable_time': 34850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 290737, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.753 2 DEBUG nova.storage.rbd_utils [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] creating snapshot(snap) on rbd image(939a30ca-ae15-4718-a461-e26825722fa1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.783 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cb5d8489-c5e0-4f92-8e52-710772998988]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:18ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 668328, 'tstamp': 668328}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 290747, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.818 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2f6739ea-c19b-463d-bfc6-cedea18c40a5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 47], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668328, 'reachable_time': 34850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 290765, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.868 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61c67761-80ae-4938-92ac-54ff46e4429a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.949 2 DEBUG nova.compute.manager [req-818b34e4-44d3-4218-9a8f-7b8ad5a32a2c req-da731cd6-70e2-40b7-8c2c-ce689284c9ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.950 2 DEBUG oslo_concurrency.lockutils [req-818b34e4-44d3-4218-9a8f-7b8ad5a32a2c req-da731cd6-70e2-40b7-8c2c-ce689284c9ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.952 2 DEBUG oslo_concurrency.lockutils [req-818b34e4-44d3-4218-9a8f-7b8ad5a32a2c req-da731cd6-70e2-40b7-8c2c-ce689284c9ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.952 2 DEBUG oslo_concurrency.lockutils [req-818b34e4-44d3-4218-9a8f-7b8ad5a32a2c req-da731cd6-70e2-40b7-8c2c-ce689284c9ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.953 2 DEBUG nova.compute.manager [req-818b34e4-44d3-4218-9a8f-7b8ad5a32a2c req-da731cd6-70e2-40b7-8c2c-ce689284c9ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Processing event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.952 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70005f73-7c0a-4c00-b674-e1b5ab36566d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.955 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.955 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.956 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:19 np0005473739 NetworkManager[44949]: <info>  [1759846099.9592] manager: (tap9f80456d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Oct  7 10:08:19 np0005473739 kernel: tap9f80456d-d0: entered promiscuous mode
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.962 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:19Z|00145|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.983 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:08:19 np0005473739 nova_compute[259550]: 2025-10-07 14:08:19.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.986 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[98b45210-5c0f-45b4-9c10-ba450b97437d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.987 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:08:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:19.988 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'env', 'PROCESS_TAG=haproxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.015 2 DEBUG nova.network.neutron [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Updated VIF entry in instance network info cache for port 95414c38-8a03-41fc-a460-bb80d2febc10. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.015 2 DEBUG nova.network.neutron [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Updating instance_info_cache with network_info: [{"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.064 2 DEBUG oslo_concurrency.lockutils [req-0befd928-79f9-408e-a01c-101b70ebed88 req-8830ffce-0080-47b7-aba0-33a51eccca2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-cd77c7c3-e287-4a6a-b2b6-61655f604ec2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:08:20 np0005473739 podman[290823]: 2025-10-07 14:08:20.443259791 +0000 UTC m=+0.061614078 container create 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 10:08:20 np0005473739 systemd[1]: Started libpod-conmon-3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28.scope.
Oct  7 10:08:20 np0005473739 podman[290823]: 2025-10-07 14:08:20.410337451 +0000 UTC m=+0.028691688 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.518 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:20 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:08:20 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0e6de4f7c968bc67535eb773c8b2db1d245a72f57212d42fe527c2c5ef20c1fd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.558 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846100.5575027, cd77c7c3-e287-4a6a-b2b6-61655f604ec2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.558 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] VM Started (Lifecycle Event)#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.561 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:08:20 np0005473739 podman[290823]: 2025-10-07 14:08:20.564860671 +0000 UTC m=+0.183214908 container init 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.569 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.574 2 INFO nova.virt.libvirt.driver [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Instance spawned successfully.#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.574 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:08:20 np0005473739 podman[290823]: 2025-10-07 14:08:20.576112821 +0000 UTC m=+0.194467028 container start 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:08:20 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [NOTICE]   (290842) : New worker (290844) forked
Oct  7 10:08:20 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [NOTICE]   (290842) : Loading success.
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.631 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.636 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:08:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e139 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.648 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.648 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.649 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.649 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.649 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.650 2 DEBUG nova.virt.libvirt.driver [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.710 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.711 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846100.5576746, cd77c7c3-e287-4a6a-b2b6-61655f604ec2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.711 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:08:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e139 do_prune osdmap full prune enabled
Oct  7 10:08:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e140 e140: 3 total, 3 up, 3 in
Oct  7 10:08:20 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e140: 3 total, 3 up, 3 in
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.761 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.764 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846100.56942, cd77c7c3-e287-4a6a-b2b6-61655f604ec2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.764 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.791 2 INFO nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Took 14.80 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.792 2 DEBUG nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.817 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.822 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:08:20 np0005473739 nova_compute[259550]: 2025-10-07 14:08:20.908 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:08:21 np0005473739 nova_compute[259550]: 2025-10-07 14:08:21.015 2 INFO nova.compute.manager [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Took 17.94 seconds to build instance.#033[00m
Oct  7 10:08:21 np0005473739 nova_compute[259550]: 2025-10-07 14:08:21.093 2 DEBUG oslo_concurrency.lockutils [None req-fb8a1376-a8a6-4e00-9791-a7cf0394dca3 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.416s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:21Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:73:3d 10.100.0.3
Oct  7 10:08:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:21Z|00027|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:73:3d 10.100.0.3
Oct  7 10:08:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1240: 305 pgs: 305 active+clean; 158 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 89 op/s
Oct  7 10:08:21 np0005473739 nova_compute[259550]: 2025-10-07 14:08:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:08:22 np0005473739 nova_compute[259550]: 2025-10-07 14:08:22.125 2 DEBUG nova.compute.manager [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:22 np0005473739 nova_compute[259550]: 2025-10-07 14:08:22.125 2 DEBUG oslo_concurrency.lockutils [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:22 np0005473739 nova_compute[259550]: 2025-10-07 14:08:22.125 2 DEBUG oslo_concurrency.lockutils [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:22 np0005473739 nova_compute[259550]: 2025-10-07 14:08:22.125 2 DEBUG oslo_concurrency.lockutils [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:22 np0005473739 nova_compute[259550]: 2025-10-07 14:08:22.126 2 DEBUG nova.compute.manager [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] No waiting events found dispatching network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:08:22 np0005473739 nova_compute[259550]: 2025-10-07 14:08:22.126 2 WARNING nova.compute.manager [req-89be5386-a2ec-4e75-ac21-9de0873b74a1 req-fbd6264c-b3e2-4a31-a797-515c869e8086 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received unexpected event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:08:22 np0005473739 nova_compute[259550]: 2025-10-07 14:08:22.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:08:22
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'volumes', 'images', '.rgw.root', 'cephfs.cephfs.meta', 'backups', 'default.rgw.control']
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:08:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:08:22 np0005473739 nova_compute[259550]: 2025-10-07 14:08:22.909 2 INFO nova.virt.libvirt.driver [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Snapshot image upload complete#033[00m
Oct  7 10:08:22 np0005473739 nova_compute[259550]: 2025-10-07 14:08:22.910 2 INFO nova.compute.manager [None req-b63619ec-4d39-41a7-8be9-153ccf14705f b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 6.44 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  7 10:08:22 np0005473739 nova_compute[259550]: 2025-10-07 14:08:22.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:08:22 np0005473739 nova_compute[259550]: 2025-10-07 14:08:22.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:08:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1241: 305 pgs: 305 active+clean; 158 MiB data, 364 MiB used, 60 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.9 MiB/s wr, 89 op/s
Oct  7 10:08:23 np0005473739 nova_compute[259550]: 2025-10-07 14:08:23.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:08:23 np0005473739 nova_compute[259550]: 2025-10-07 14:08:23.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:08:23 np0005473739 nova_compute[259550]: 2025-10-07 14:08:23.996 2 INFO nova.compute.manager [None req-1aef7977-c7f3-4cfe-ae5d-fa7a7a6d0b2a a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Pausing#033[00m
Oct  7 10:08:23 np0005473739 nova_compute[259550]: 2025-10-07 14:08:23.997 2 DEBUG nova.objects.instance [None req-1aef7977-c7f3-4cfe-ae5d-fa7a7a6d0b2a a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'flavor' on Instance uuid cd77c7c3-e287-4a6a-b2b6-61655f604ec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:24 np0005473739 nova_compute[259550]: 2025-10-07 14:08:24.097 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846104.096981, cd77c7c3-e287-4a6a-b2b6-61655f604ec2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:24 np0005473739 nova_compute[259550]: 2025-10-07 14:08:24.097 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:08:24 np0005473739 nova_compute[259550]: 2025-10-07 14:08:24.099 2 DEBUG nova.compute.manager [None req-1aef7977-c7f3-4cfe-ae5d-fa7a7a6d0b2a a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:24 np0005473739 nova_compute[259550]: 2025-10-07 14:08:24.160 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:24 np0005473739 nova_compute[259550]: 2025-10-07 14:08:24.164 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:08:24 np0005473739 nova_compute[259550]: 2025-10-07 14:08:24.296 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:08:25 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d8c1e408-c0e4-45c7-a799-d15171f1ce7c does not exist
Oct  7 10:08:25 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 38f95525-453d-462d-9dd5-8431b6bb10af does not exist
Oct  7 10:08:25 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cd22e9d2-d90e-4e5c-b15e-326d1b8132e0 does not exist
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:08:25 np0005473739 nova_compute[259550]: 2025-10-07 14:08:25.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1242: 305 pgs: 305 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 7.2 MiB/s rd, 6.9 MiB/s wr, 365 op/s
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:08:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e140 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:25 np0005473739 nova_compute[259550]: 2025-10-07 14:08:25.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.024 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.024 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:26 np0005473739 podman[291125]: 2025-10-07 14:08:26.108029111 +0000 UTC m=+0.024365152 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:08:26 np0005473739 podman[291125]: 2025-10-07 14:08:26.263129306 +0000 UTC m=+0.179465367 container create d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:08:26 np0005473739 systemd[1]: Started libpod-conmon-d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59.scope.
Oct  7 10:08:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:08:26 np0005473739 podman[291125]: 2025-10-07 14:08:26.413054663 +0000 UTC m=+0.329390704 container init d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 10:08:26 np0005473739 podman[291125]: 2025-10-07 14:08:26.424393146 +0000 UTC m=+0.340729167 container start d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:08:26 np0005473739 sharp_nobel[291160]: 167 167
Oct  7 10:08:26 np0005473739 systemd[1]: libpod-d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59.scope: Deactivated successfully.
Oct  7 10:08:26 np0005473739 podman[291125]: 2025-10-07 14:08:26.442772817 +0000 UTC m=+0.359108858 container attach d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:08:26 np0005473739 podman[291125]: 2025-10-07 14:08:26.444301658 +0000 UTC m=+0.360637679 container died d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 10:08:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:08:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/641932959' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.481 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e140 do_prune osdmap full prune enabled
Oct  7 10:08:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:08:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:08:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9288babebde297eabeb9931ae59c183df85c8a19ad217975be28220c1dd501c6-merged.mount: Deactivated successfully.
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.621 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.622 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000014 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.626 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.627 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000015 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:08:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e141 e141: 3 total, 3 up, 3 in
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.649 2 DEBUG nova.compute.manager [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:26 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e141: 3 total, 3 up, 3 in
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.699 2 INFO nova.compute.manager [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] instance snapshotting#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.699 2 WARNING nova.compute.manager [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] trying to snapshot a non-running instance: (state: 3 expected: 1)#033[00m
Oct  7 10:08:26 np0005473739 podman[291125]: 2025-10-07 14:08:26.74146528 +0000 UTC m=+0.657801291 container remove d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_nobel, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 10:08:26 np0005473739 systemd[1]: libpod-conmon-d2aace6dde688c9822dad661c78f41d45e5a74c832f1793b8c967675b9f0df59.scope: Deactivated successfully.
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.835 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.837 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4141MB free_disk=59.93803405761719GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.838 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.840 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:26 np0005473739 podman[291190]: 2025-10-07 14:08:26.952573202 +0000 UTC m=+0.061887755 container create 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.970 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.971 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance cd77c7c3-e287-4a6a-b2b6-61655f604ec2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.971 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.971 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:08:26 np0005473739 nova_compute[259550]: 2025-10-07 14:08:26.992 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.000 2 INFO nova.virt.libvirt.driver [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Beginning live snapshot process#033[00m
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.014 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.014 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 10:08:27 np0005473739 podman[291190]: 2025-10-07 14:08:26.920540236 +0000 UTC m=+0.029854809 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:08:27 np0005473739 systemd[1]: Started libpod-conmon-261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9.scope.
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.031 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 10:08:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:08:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.059 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 10:08:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.107 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:27 np0005473739 podman[291190]: 2025-10-07 14:08:27.126434889 +0000 UTC m=+0.235749452 container init 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 10:08:27 np0005473739 podman[291190]: 2025-10-07 14:08:27.137214617 +0000 UTC m=+0.246529170 container start 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:08:27 np0005473739 podman[291190]: 2025-10-07 14:08:27.15639138 +0000 UTC m=+0.265705943 container attach 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.268 2 DEBUG nova.virt.libvirt.imagebackend [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.516 2 DEBUG nova.storage.rbd_utils [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(5153d815f021464faf1a5737fa1826eb) on rbd image(cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:08:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1244: 305 pgs: 305 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 5.5 MiB/s wr, 293 op/s
Oct  7 10:08:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:08:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2651055100' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.590 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.595 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:08:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e141 do_prune osdmap full prune enabled
Oct  7 10:08:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e142 e142: 3 total, 3 up, 3 in
Oct  7 10:08:27 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e142: 3 total, 3 up, 3 in
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.667 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.727 2 DEBUG nova.storage.rbd_utils [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning vms/cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk@5153d815f021464faf1a5737fa1826eb to images/fd1fe7a6-77d9-4b4e-ad7a-8c1960b3d117 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.758 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.759 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:27 np0005473739 nova_compute[259550]: 2025-10-07 14:08:27.837 2 DEBUG nova.storage.rbd_utils [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] flattening images/fd1fe7a6-77d9-4b4e-ad7a-8c1960b3d117 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:08:28 np0005473739 nova_compute[259550]: 2025-10-07 14:08:28.109 2 DEBUG nova.storage.rbd_utils [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] removing snapshot(5153d815f021464faf1a5737fa1826eb) on rbd image(cd77c7c3-e287-4a6a-b2b6-61655f604ec2_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:08:28 np0005473739 fervent_ptolemy[291206]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:08:28 np0005473739 fervent_ptolemy[291206]: --> relative data size: 1.0
Oct  7 10:08:28 np0005473739 fervent_ptolemy[291206]: --> All data devices are unavailable
Oct  7 10:08:28 np0005473739 systemd[1]: libpod-261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9.scope: Deactivated successfully.
Oct  7 10:08:28 np0005473739 systemd[1]: libpod-261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9.scope: Consumed 1.212s CPU time.
Oct  7 10:08:28 np0005473739 podman[291190]: 2025-10-07 14:08:28.435788063 +0000 UTC m=+1.545102606 container died 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:08:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9ee7ae862be7955cc77b98fc15bc50809ba261da9094fa8a618b66fc2a176a6d-merged.mount: Deactivated successfully.
Oct  7 10:08:28 np0005473739 podman[291190]: 2025-10-07 14:08:28.498329605 +0000 UTC m=+1.607644158 container remove 261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_ptolemy, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:08:28 np0005473739 systemd[1]: libpod-conmon-261543a23e4f60568af436d47498a773ef4617e5a96db112faac32f4d1dcf2a9.scope: Deactivated successfully.
Oct  7 10:08:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e142 do_prune osdmap full prune enabled
Oct  7 10:08:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e143 e143: 3 total, 3 up, 3 in
Oct  7 10:08:28 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e143: 3 total, 3 up, 3 in
Oct  7 10:08:28 np0005473739 nova_compute[259550]: 2025-10-07 14:08:28.760 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:08:28 np0005473739 nova_compute[259550]: 2025-10-07 14:08:28.761 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:08:28 np0005473739 nova_compute[259550]: 2025-10-07 14:08:28.761 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:08:28 np0005473739 nova_compute[259550]: 2025-10-07 14:08:28.793 2 DEBUG nova.storage.rbd_utils [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(snap) on rbd image(fd1fe7a6-77d9-4b4e-ad7a-8c1960b3d117) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:08:29 np0005473739 podman[291552]: 2025-10-07 14:08:29.209981865 +0000 UTC m=+0.047573273 container create bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 10:08:29 np0005473739 systemd[1]: Started libpod-conmon-bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f.scope.
Oct  7 10:08:29 np0005473739 podman[291552]: 2025-10-07 14:08:29.188227934 +0000 UTC m=+0.025819362 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:08:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:08:29 np0005473739 podman[291552]: 2025-10-07 14:08:29.324587848 +0000 UTC m=+0.162179266 container init bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:08:29 np0005473739 podman[291552]: 2025-10-07 14:08:29.334329008 +0000 UTC m=+0.171920406 container start bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 10:08:29 np0005473739 suspicious_thompson[291569]: 167 167
Oct  7 10:08:29 np0005473739 systemd[1]: libpod-bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f.scope: Deactivated successfully.
Oct  7 10:08:29 np0005473739 conmon[291569]: conmon bf5b33c7b7ac278c1ff3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f.scope/container/memory.events
Oct  7 10:08:29 np0005473739 podman[291552]: 2025-10-07 14:08:29.348683141 +0000 UTC m=+0.186274539 container attach bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:08:29 np0005473739 podman[291552]: 2025-10-07 14:08:29.350341497 +0000 UTC m=+0.187932895 container died bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:08:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d59a84d4d9d70ad77c684f39e4a1821b190827d8d355afb53ba053feb92265bf-merged.mount: Deactivated successfully.
Oct  7 10:08:29 np0005473739 podman[291552]: 2025-10-07 14:08:29.446740463 +0000 UTC m=+0.284331861 container remove bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 10:08:29 np0005473739 systemd[1]: libpod-conmon-bf5b33c7b7ac278c1ff3a245a95466ffb642bb6706bdc39e3ad555eefcd1048f.scope: Deactivated successfully.
Oct  7 10:08:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1247: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 213 MiB data, 425 MiB used, 60 GiB / 60 GiB avail; 9.3 MiB/s rd, 8.0 MiB/s wr, 445 op/s
Oct  7 10:08:29 np0005473739 podman[291595]: 2025-10-07 14:08:29.655290637 +0000 UTC m=+0.046613357 container create d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 10:08:29 np0005473739 systemd[1]: Started libpod-conmon-d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899.scope.
Oct  7 10:08:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e143 do_prune osdmap full prune enabled
Oct  7 10:08:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e144 e144: 3 total, 3 up, 3 in
Oct  7 10:08:29 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e144: 3 total, 3 up, 3 in
Oct  7 10:08:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:08:29 np0005473739 podman[291595]: 2025-10-07 14:08:29.633609538 +0000 UTC m=+0.024932308 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:08:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f895cb635e48c7e1ccf69f77c1672cafddd601e502125ffe5fcec3a17c89dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f895cb635e48c7e1ccf69f77c1672cafddd601e502125ffe5fcec3a17c89dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f895cb635e48c7e1ccf69f77c1672cafddd601e502125ffe5fcec3a17c89dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26f895cb635e48c7e1ccf69f77c1672cafddd601e502125ffe5fcec3a17c89dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:29 np0005473739 podman[291595]: 2025-10-07 14:08:29.741210204 +0000 UTC m=+0.132532944 container init d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:08:29 np0005473739 podman[291595]: 2025-10-07 14:08:29.750836621 +0000 UTC m=+0.142159341 container start d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:08:29 np0005473739 podman[291595]: 2025-10-07 14:08:29.757884319 +0000 UTC m=+0.149207229 container attach d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:08:29 np0005473739 nova_compute[259550]: 2025-10-07 14:08:29.848 2 DEBUG nova.compute.manager [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:29 np0005473739 nova_compute[259550]: 2025-10-07 14:08:29.954 2 INFO nova.compute.manager [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] instance snapshotting#033[00m
Oct  7 10:08:30 np0005473739 nova_compute[259550]: 2025-10-07 14:08:30.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]: {
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:    "0": [
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:        {
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "devices": [
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "/dev/loop3"
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            ],
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_name": "ceph_lv0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_size": "21470642176",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "name": "ceph_lv0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "tags": {
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.cluster_name": "ceph",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.crush_device_class": "",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.encrypted": "0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.osd_id": "0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.type": "block",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.vdo": "0"
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            },
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "type": "block",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "vg_name": "ceph_vg0"
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:        }
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:    ],
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:    "1": [
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:        {
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "devices": [
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "/dev/loop4"
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            ],
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_name": "ceph_lv1",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_size": "21470642176",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "name": "ceph_lv1",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "tags": {
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.cluster_name": "ceph",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.crush_device_class": "",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.encrypted": "0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.osd_id": "1",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.type": "block",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.vdo": "0"
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            },
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "type": "block",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "vg_name": "ceph_vg1"
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:        }
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:    ],
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:    "2": [
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:        {
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "devices": [
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "/dev/loop5"
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            ],
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_name": "ceph_lv2",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_size": "21470642176",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "name": "ceph_lv2",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "tags": {
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.cluster_name": "ceph",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.crush_device_class": "",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.encrypted": "0",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.osd_id": "2",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.type": "block",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:                "ceph.vdo": "0"
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            },
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "type": "block",
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:            "vg_name": "ceph_vg2"
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:        }
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]:    ]
Oct  7 10:08:30 np0005473739 practical_rosalind[291613]: }
Oct  7 10:08:30 np0005473739 systemd[1]: libpod-d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899.scope: Deactivated successfully.
Oct  7 10:08:30 np0005473739 conmon[291613]: conmon d33438c0f458b9673dad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899.scope/container/memory.events
Oct  7 10:08:30 np0005473739 podman[291595]: 2025-10-07 14:08:30.608220995 +0000 UTC m=+0.999543705 container died d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:08:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-26f895cb635e48c7e1ccf69f77c1672cafddd601e502125ffe5fcec3a17c89dd-merged.mount: Deactivated successfully.
Oct  7 10:08:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e144 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e144 do_prune osdmap full prune enabled
Oct  7 10:08:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e145 e145: 3 total, 3 up, 3 in
Oct  7 10:08:30 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e145: 3 total, 3 up, 3 in
Oct  7 10:08:30 np0005473739 podman[291595]: 2025-10-07 14:08:30.676702716 +0000 UTC m=+1.068025436 container remove d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_rosalind, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:08:30 np0005473739 systemd[1]: libpod-conmon-d33438c0f458b9673dad2ef136839c1f434e33462275fc31d9ba1f71ff6e5899.scope: Deactivated successfully.
Oct  7 10:08:31 np0005473739 nova_compute[259550]: 2025-10-07 14:08:31.161 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:08:31 np0005473739 nova_compute[259550]: 2025-10-07 14:08:31.162 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:08:31 np0005473739 nova_compute[259550]: 2025-10-07 14:08:31.163 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:08:31 np0005473739 nova_compute[259550]: 2025-10-07 14:08:31.163 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:31 np0005473739 podman[291772]: 2025-10-07 14:08:31.335784111 +0000 UTC m=+0.041283325 container create f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 10:08:31 np0005473739 systemd[1]: Started libpod-conmon-f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf.scope.
Oct  7 10:08:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:08:31 np0005473739 podman[291772]: 2025-10-07 14:08:31.401262421 +0000 UTC m=+0.106761665 container init f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 10:08:31 np0005473739 podman[291772]: 2025-10-07 14:08:31.41167947 +0000 UTC m=+0.117178684 container start f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:08:31 np0005473739 podman[291772]: 2025-10-07 14:08:31.318601771 +0000 UTC m=+0.024101005 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:08:31 np0005473739 podman[291772]: 2025-10-07 14:08:31.416113318 +0000 UTC m=+0.121612622 container attach f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:08:31 np0005473739 condescending_bouman[291789]: 167 167
Oct  7 10:08:31 np0005473739 systemd[1]: libpod-f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf.scope: Deactivated successfully.
Oct  7 10:08:31 np0005473739 podman[291772]: 2025-10-07 14:08:31.421916353 +0000 UTC m=+0.127415567 container died f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:08:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5e2ac87a5301fa650f3e662ee86d66f4030fe749abe0dacb6a8e0137993663a3-merged.mount: Deactivated successfully.
Oct  7 10:08:31 np0005473739 podman[291772]: 2025-10-07 14:08:31.474736435 +0000 UTC m=+0.180235639 container remove f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:08:31 np0005473739 systemd[1]: libpod-conmon-f748032d6fc5556a10aac672d9a9efb3052bf4fcfbadcfd44a4515a07b32ccbf.scope: Deactivated successfully.
Oct  7 10:08:31 np0005473739 podman[291786]: 2025-10-07 14:08:31.501294245 +0000 UTC m=+0.124937200 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Oct  7 10:08:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1250: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 5.4 MiB/s wr, 230 op/s
Oct  7 10:08:31 np0005473739 nova_compute[259550]: 2025-10-07 14:08:31.580 2 INFO nova.virt.libvirt.driver [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Beginning live snapshot process#033[00m
Oct  7 10:08:31 np0005473739 podman[291838]: 2025-10-07 14:08:31.677778641 +0000 UTC m=+0.041442079 container create 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct  7 10:08:31 np0005473739 systemd[1]: Started libpod-conmon-4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466.scope.
Oct  7 10:08:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:08:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d48a75a27967033ad9a9dd386f35d844c1c721449e5f0434fa11df9a234af9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:31 np0005473739 podman[291838]: 2025-10-07 14:08:31.658535447 +0000 UTC m=+0.022198905 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:08:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d48a75a27967033ad9a9dd386f35d844c1c721449e5f0434fa11df9a234af9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d48a75a27967033ad9a9dd386f35d844c1c721449e5f0434fa11df9a234af9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b8d48a75a27967033ad9a9dd386f35d844c1c721449e5f0434fa11df9a234af9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:31 np0005473739 podman[291838]: 2025-10-07 14:08:31.771520367 +0000 UTC m=+0.135183825 container init 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3)
Oct  7 10:08:31 np0005473739 podman[291838]: 2025-10-07 14:08:31.779061598 +0000 UTC m=+0.142725026 container start 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:08:31 np0005473739 podman[291838]: 2025-10-07 14:08:31.783338852 +0000 UTC m=+0.147002290 container attach 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 10:08:31 np0005473739 nova_compute[259550]: 2025-10-07 14:08:31.812 2 DEBUG nova.virt.libvirt.imagebackend [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011066690882008206 of space, bias 1.0, pg target 0.3320007264602462 quantized to 32 (current 32)
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:08:32 np0005473739 nova_compute[259550]: 2025-10-07 14:08:32.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:32 np0005473739 nova_compute[259550]: 2025-10-07 14:08:32.483 2 DEBUG nova.storage.rbd_utils [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] creating snapshot(1788116cc84544c885e6c284acc207f7) on rbd image(4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:08:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:08:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624292419' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:08:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:08:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/624292419' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:08:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e145 do_prune osdmap full prune enabled
Oct  7 10:08:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e146 e146: 3 total, 3 up, 3 in
Oct  7 10:08:32 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e146: 3 total, 3 up, 3 in
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]: {
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "osd_id": 2,
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "type": "bluestore"
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:    },
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "osd_id": 1,
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "type": "bluestore"
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:    },
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "osd_id": 0,
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:        "type": "bluestore"
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]:    }
Oct  7 10:08:32 np0005473739 elastic_dijkstra[291855]: }
Oct  7 10:08:32 np0005473739 systemd[1]: libpod-4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466.scope: Deactivated successfully.
Oct  7 10:08:32 np0005473739 conmon[291855]: conmon 4e12269f81215b56eb81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466.scope/container/memory.events
Oct  7 10:08:32 np0005473739 systemd[1]: libpod-4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466.scope: Consumed 1.081s CPU time.
Oct  7 10:08:32 np0005473739 nova_compute[259550]: 2025-10-07 14:08:32.906 2 DEBUG nova.storage.rbd_utils [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] cloning vms/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk@1788116cc84544c885e6c284acc207f7 to images/9856ae22-305c-4ebf-9385-686fe57711a8 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:08:32 np0005473739 podman[291940]: 2025-10-07 14:08:32.923152186 +0000 UTC m=+0.028583395 container died 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 10:08:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b8d48a75a27967033ad9a9dd386f35d844c1c721449e5f0434fa11df9a234af9-merged.mount: Deactivated successfully.
Oct  7 10:08:33 np0005473739 podman[291940]: 2025-10-07 14:08:33.031859991 +0000 UTC m=+0.137291170 container remove 4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_dijkstra, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:08:33 np0005473739 systemd[1]: libpod-conmon-4e12269f81215b56eb816a7bc2304d9ce09f8c0897798d5fc98e2e7e7c67e466.scope: Deactivated successfully.
Oct  7 10:08:33 np0005473739 podman[291939]: 2025-10-07 14:08:33.069749964 +0000 UTC m=+0.159375191 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  7 10:08:33 np0005473739 nova_compute[259550]: 2025-10-07 14:08:33.074 2 DEBUG nova.storage.rbd_utils [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] flattening images/9856ae22-305c-4ebf-9385-686fe57711a8 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:08:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:08:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:08:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:08:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:08:33 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7fe3a083-c568-4b8a-b055-72a47c03b0a2 does not exist
Oct  7 10:08:33 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8c9c984a-f166-4703-84c6-ffb08641d9b8 does not exist
Oct  7 10:08:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1252: 305 pgs: 2 active+clean+snaptrim, 6 active+clean+snaptrim_wait, 297 active+clean; 213 MiB data, 423 MiB used, 60 GiB / 60 GiB avail; 4.5 MiB/s rd, 4.4 MiB/s wr, 190 op/s
Oct  7 10:08:33 np0005473739 nova_compute[259550]: 2025-10-07 14:08:33.695 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updating instance_info_cache with network_info: [{"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:33 np0005473739 nova_compute[259550]: 2025-10-07 14:08:33.750 2 INFO nova.virt.libvirt.driver [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Snapshot image upload complete#033[00m
Oct  7 10:08:33 np0005473739 nova_compute[259550]: 2025-10-07 14:08:33.751 2 INFO nova.compute.manager [None req-cc8a8909-541f-4f99-948d-245414291ef8 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Took 7.05 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  7 10:08:33 np0005473739 nova_compute[259550]: 2025-10-07 14:08:33.755 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:08:33 np0005473739 nova_compute[259550]: 2025-10-07 14:08:33.755 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:08:33 np0005473739 nova_compute[259550]: 2025-10-07 14:08:33.756 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:08:33 np0005473739 nova_compute[259550]: 2025-10-07 14:08:33.756 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:08:33 np0005473739 nova_compute[259550]: 2025-10-07 14:08:33.756 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:08:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:08:33 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:08:33 np0005473739 nova_compute[259550]: 2025-10-07 14:08:33.974 2 DEBUG nova.storage.rbd_utils [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] removing snapshot(1788116cc84544c885e6c284acc207f7) on rbd image(4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:08:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e146 do_prune osdmap full prune enabled
Oct  7 10:08:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e147 e147: 3 total, 3 up, 3 in
Oct  7 10:08:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e147: 3 total, 3 up, 3 in
Oct  7 10:08:35 np0005473739 nova_compute[259550]: 2025-10-07 14:08:35.436 2 DEBUG nova.storage.rbd_utils [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] creating snapshot(snap) on rbd image(9856ae22-305c-4ebf-9385-686fe57711a8) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:08:35 np0005473739 nova_compute[259550]: 2025-10-07 14:08:35.491 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "31263139-61ff-4691-a1a9-a8d53fd7b388" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:35 np0005473739 nova_compute[259550]: 2025-10-07 14:08:35.492 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:35 np0005473739 nova_compute[259550]: 2025-10-07 14:08:35.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1254: 305 pgs: 305 active+clean; 292 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 9.0 MiB/s rd, 8.4 MiB/s wr, 202 op/s
Oct  7 10:08:35 np0005473739 nova_compute[259550]: 2025-10-07 14:08:35.577 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:08:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e147 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e147 do_prune osdmap full prune enabled
Oct  7 10:08:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e148 e148: 3 total, 3 up, 3 in
Oct  7 10:08:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e148: 3 total, 3 up, 3 in
Oct  7 10:08:35 np0005473739 nova_compute[259550]: 2025-10-07 14:08:35.714 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:35 np0005473739 nova_compute[259550]: 2025-10-07 14:08:35.715 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:35 np0005473739 nova_compute[259550]: 2025-10-07 14:08:35.725 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:08:35 np0005473739 nova_compute[259550]: 2025-10-07 14:08:35.726 2 INFO nova.compute.claims [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.028 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.112 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.113 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.223 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.328 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:08:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840478' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.506 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.515 2 DEBUG nova.compute.provider_tree [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.610 2 DEBUG nova.scheduler.client.report [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
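[editor's note] The inventory dict logged above is what Placement uses to decide how much of this host can be scheduled: the effective capacity per resource class is (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic, using the values copied from the log line (the `capacity` helper is illustrative, not a Nova API):

```python
# Inventory values copied from the nova.scheduler.client.report log line above
# (min_unit/max_unit/step_size omitted for brevity).
inventory = {
    'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB': {'total': 59, 'reserved': 1, 'allocation_ratio': 0.9},
}

def capacity(inv):
    # Effective schedulable capacity: (total - reserved) * allocation_ratio.
    # capacity() is an illustrative helper, not a Nova/Placement function.
    return {rc: int((v['total'] - v['reserved']) * v['allocation_ratio'])
            for rc, v in inv.items()}

print(capacity(inventory))
# {'VCPU': 32, 'MEMORY_MB': 7167, 'DISK_GB': 52}
```

So with a 4.0 VCPU overcommit this 8-core host can back 32 vCPUs, while the 0.9 DISK_GB ratio undercommits the 59 GiB disk to roughly 52 GiB of schedulable space.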
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.756 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.758 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.761 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.767 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.767 2 INFO nova.compute.claims [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.965 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:08:36 np0005473739 nova_compute[259550]: 2025-10-07 14:08:36.966 2 DEBUG nova.network.neutron [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.048 2 INFO nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.154 2 INFO nova.virt.libvirt.driver [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Snapshot image upload complete
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.155 2 INFO nova.compute.manager [None req-f0af122e-289d-425e-8901-4efa04f63d06 b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 7.20 seconds to snapshot the instance on the hypervisor.
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.177 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.210 2 DEBUG nova.network.neutron [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.211 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.213 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:08:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e148 do_prune osdmap full prune enabled
Oct  7 10:08:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e149 e149: 3 total, 3 up, 3 in
Oct  7 10:08:37 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e149: 3 total, 3 up, 3 in
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:08:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1257: 305 pgs: 305 active+clean; 292 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 9.9 MiB/s rd, 9.8 MiB/s wr, 201 op/s
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.635 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:08:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:08:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1791531714' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.638 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.639 2 INFO nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Creating image(s)
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.665 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.695 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.719 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.723 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.749 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.572s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.756 2 DEBUG nova.compute.provider_tree [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.788 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
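[editor's note] The prlimit-wrapped `qemu-img info ... --output=json` call above is how Nova probes the cached base image: it runs the tool under address-space and CPU limits and parses the JSON it prints. A minimal parsing sketch; the payload below is an illustrative example of the output shape, not the actual stdout captured in this trace:

```python
import json

# Illustrative stand-in for `qemu-img info --force-share --output=json` stdout;
# the field names match qemu-img's JSON output, the values are examples only.
sample_stdout = '''{
  "virtual-size": 1073741824,
  "filename": "/var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122",
  "format": "raw",
  "actual-size": 21430272
}'''

info = json.loads(sample_stdout)
print(info["format"], info["virtual-size"])
```

The same JSON-over-subprocess pattern recurs throughout this trace (`ceph df --format=json`, `ceph mon dump --format=json`): shell out, check the return code, parse stdout.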
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.788 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.789 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.789 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.810 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.814 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 31263139-61ff-4691-a1a9-a8d53fd7b388_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:08:37 np0005473739 nova_compute[259550]: 2025-10-07 14:08:37.843 2 DEBUG nova.scheduler.client.report [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.033 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.034 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.112 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.112 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.157 2 INFO nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.163 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 31263139-61ff-4691-a1a9-a8d53fd7b388_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.349s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
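[editor's note] The `rbd import` above (followed by the resize to 1073741824 bytes, the flavor's 1 GiB root disk) is how Nova seeds the instance disk in the Ceph `vms` pool from the locally cached base image. A sketch that assembles the same command line as the log shows; it only builds the argv list and does not execute anything, and `rbd_import_cmd` is an illustrative helper, not a Nova function:

```python
# Build (do not run) the `rbd import` argv that appears in the log above.
# rbd_import_cmd() is an illustrative helper, not part of nova.storage.rbd_utils.
def rbd_import_cmd(pool, base_path, image_name,
                   ceph_id='openstack', conf='/etc/ceph/ceph.conf'):
    return ['rbd', 'import',
            '--pool', pool,           # destination Ceph pool
            base_path,                # local cached base image
            image_name,               # destination RBD image name
            '--image-format=2',       # format 2 supports cloning/snapshots
            '--id', ceph_id, '--conf', conf]

cmd = rbd_import_cmd(
    'vms',
    '/var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122',
    '31263139-61ff-4691-a1a9-a8d53fd7b388_disk')
print(' '.join(cmd))
```

In the real code path this argv is handed to oslo_concurrency.processutils, which logs the "Running cmd" / "returned: 0" pair seen above.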
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.192 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.225 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] resizing rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.321 2 DEBUG nova.objects.instance [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lazy-loading 'migration_context' on Instance uuid 31263139-61ff-4691-a1a9-a8d53fd7b388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.342 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.344 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.344 2 INFO nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Creating image(s)
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.371 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.396 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.416 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.419 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.450 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.451 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Ensure instance console log exists: /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.452 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.452 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.452 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.454 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.458 2 WARNING nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.463 2 DEBUG nova.virt.libvirt.host [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.463 2 DEBUG nova.virt.libvirt.host [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.466 2 DEBUG nova.virt.libvirt.host [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.466 2 DEBUG nova.virt.libvirt.host [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.466 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.466 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.467 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.467 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.467 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.467 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.467 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.468 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.468 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.468 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.468 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.468 2 DEBUG nova.virt.hardware [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.471 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.511 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.512 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.512 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.513 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.534 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.538 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.566 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.567 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.567 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.568 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.568 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.570 2 INFO nova.compute.manager [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Terminating instance#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.571 2 DEBUG nova.compute.manager [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.576 2 DEBUG nova.policy [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '157a909ec18a483a901ec32a0a867038', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'b03b40fd15b945118fde82b6454dbced', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:08:38 np0005473739 kernel: tap95414c38-8a (unregistering): left promiscuous mode
Oct  7 10:08:38 np0005473739 NetworkManager[44949]: <info>  [1759846118.6191] device (tap95414c38-8a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:08:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:38Z|00146|binding|INFO|Releasing lport 95414c38-8a03-41fc-a460-bb80d2febc10 from this chassis (sb_readonly=0)
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:38Z|00147|binding|INFO|Setting lport 95414c38-8a03-41fc-a460-bb80d2febc10 down in Southbound
Oct  7 10:08:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:38Z|00148|binding|INFO|Removing iface tap95414c38-8a ovn-installed in OVS
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:38.638 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:1c:1f 10.100.0.11'], port_security=['fa:16:3e:34:1c:1f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cd77c7c3-e287-4a6a-b2b6-61655f604ec2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=95414c38-8a03-41fc-a460-bb80d2febc10) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:08:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:38.641 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 95414c38-8a03-41fc-a460-bb80d2febc10 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis#033[00m
Oct  7 10:08:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:38.642 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:08:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:38.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[40922bba-1b0c-4b08-bf80-f9c777c7abcb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:38.645 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace which is not needed anymore#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:38 np0005473739 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000015.scope: Deactivated successfully.
Oct  7 10:08:38 np0005473739 systemd[1]: machine-qemu\x2d25\x2dinstance\x2d00000015.scope: Consumed 4.398s CPU time.
Oct  7 10:08:38 np0005473739 systemd-machined[214580]: Machine qemu-25-instance-00000015 terminated.
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.823 2 INFO nova.virt.libvirt.driver [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Instance destroyed successfully.#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.824 2 DEBUG nova.objects.instance [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'resources' on Instance uuid cd77c7c3-e287-4a6a-b2b6-61655f604ec2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:38 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [NOTICE]   (290842) : haproxy version is 2.8.14-c23fe91
Oct  7 10:08:38 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [NOTICE]   (290842) : path to executable is /usr/sbin/haproxy
Oct  7 10:08:38 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [ALERT]    (290842) : Current worker (290844) exited with code 143 (Terminated)
Oct  7 10:08:38 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[290838]: [WARNING]  (290842) : All workers exited. Exiting... (0)
Oct  7 10:08:38 np0005473739 systemd[1]: libpod-3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28.scope: Deactivated successfully.
Oct  7 10:08:38 np0005473739 podman[292461]: 2025-10-07 14:08:38.841289958 +0000 UTC m=+0.071709068 container died 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.864 2 DEBUG nova.virt.libvirt.vif [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:08:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1004114679',display_name='tempest-ImagesTestJSON-server-1004114679',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1004114679',id=21,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:08:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=3,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-rgihze8b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:08:33Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=cd77c7c3-e287-4a6a-b2b6-61655f604ec2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='paused') vif={"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.865 2 DEBUG nova.network.os_vif_util [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "95414c38-8a03-41fc-a460-bb80d2febc10", "address": "fa:16:3e:34:1c:1f", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap95414c38-8a", "ovs_interfaceid": "95414c38-8a03-41fc-a460-bb80d2febc10", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.866 2 DEBUG nova.network.os_vif_util [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.867 2 DEBUG os_vif [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.869 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap95414c38-8a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.875 2 INFO os_vif [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:1c:1f,bridge_name='br-int',has_traffic_filtering=True,id=95414c38-8a03-41fc-a460-bb80d2febc10,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap95414c38-8a')#033[00m
Oct  7 10:08:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28-userdata-shm.mount: Deactivated successfully.
Oct  7 10:08:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0e6de4f7c968bc67535eb773c8b2db1d245a72f57212d42fe527c2c5ef20c1fd-merged.mount: Deactivated successfully.
Oct  7 10:08:38 np0005473739 podman[292461]: 2025-10-07 14:08:38.912765608 +0000 UTC m=+0.143184708 container cleanup 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:08:38 np0005473739 systemd[1]: libpod-conmon-3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28.scope: Deactivated successfully.
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.936 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.398s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.977 2 DEBUG nova.compute.manager [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-unplugged-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.978 2 DEBUG oslo_concurrency.lockutils [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.978 2 DEBUG oslo_concurrency.lockutils [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.978 2 DEBUG oslo_concurrency.lockutils [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.978 2 DEBUG nova.compute.manager [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] No waiting events found dispatching network-vif-unplugged-95414c38-8a03-41fc-a460-bb80d2febc10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:08:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:08:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2562838237' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:08:38 np0005473739 nova_compute[259550]: 2025-10-07 14:08:38.979 2 DEBUG nova.compute.manager [req-99e363eb-cd68-498b-a748-677c87376ed6 req-17e2de66-2a24-4690-9ed9-0c9ff6322aee 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-unplugged-95414c38-8a03-41fc-a460-bb80d2febc10 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:08:39 np0005473739 podman[292520]: 2025-10-07 14:08:39.00298185 +0000 UTC m=+0.064199677 container remove 3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:08:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.012 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[acf37255-5854-40c7-8c70-74a716835dc5]: (4, ('Tue Oct  7 02:08:38 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28)\n3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28\nTue Oct  7 02:08:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28)\n3cb9484fe231a86deb6314bc826fc7b2f1755c5b4256b4804e5d6c96709fce28\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.014 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[51a4e97a-37db-43ca-a79b-1b63260e5eb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.016 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:39 np0005473739 kernel: tap9f80456d-d0: left promiscuous mode
Oct  7 10:08:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.023 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3f915b27-002f-45ad-b78a-2029747a2439]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.028 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.045 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6d43a481-291e-420f-bcfb-bf5416cc513c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.047 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba12cf3e-ebd0-42d1-8d11-1e1b0b1f3ac8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.065 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.070 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.069 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf8fbd7f-f3e4-45d6-9ff9-855976d8d556]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 668317, 'reachable_time': 24712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292586, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:39 np0005473739 systemd[1]: run-netns-ovnmeta\x2d9f80456d\x2dd8a6\x2d4e61\x2db6cb\x2db509cd650dbb.mount: Deactivated successfully.
Oct  7 10:08:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.075 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:08:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:39.075 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[7098b7d0-9a66-48c1-8895-6415ef79080c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.113 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] resizing rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.239 2 DEBUG nova.objects.instance [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lazy-loading 'migration_context' on Instance uuid 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.265 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.266 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Ensure instance console log exists: /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.267 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.267 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.268 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.358 2 INFO nova.virt.libvirt.driver [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Deleting instance files /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2_del#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.359 2 INFO nova.virt.libvirt.driver [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Deletion of /var/lib/nova/instances/cd77c7c3-e287-4a6a-b2b6-61655f604ec2_del complete#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.414 2 INFO nova.compute.manager [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Took 0.84 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.414 2 DEBUG oslo.service.loopingcall [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.415 2 DEBUG nova.compute.manager [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.415 2 DEBUG nova.network.neutron [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:08:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:08:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3876715671' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:08:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1258: 305 pgs: 305 active+clean; 299 MiB data, 458 MiB used, 60 GiB / 60 GiB avail; 7.9 MiB/s rd, 11 MiB/s wr, 282 op/s
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.550 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.551 2 DEBUG nova.objects.instance [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lazy-loading 'pci_devices' on Instance uuid 31263139-61ff-4691-a1a9-a8d53fd7b388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.566 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  <uuid>31263139-61ff-4691-a1a9-a8d53fd7b388</uuid>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  <name>instance-00000016</name>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerDiagnosticsTest-server-1281679422</nova:name>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:08:38</nova:creationTime>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:        <nova:user uuid="2014e38add1e412a8c2e1a1b678f687f">tempest-ServerDiagnosticsTest-6855210-project-member</nova:user>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:        <nova:project uuid="664831261352424ba813b5a956c752cd">tempest-ServerDiagnosticsTest-6855210</nova:project>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <entry name="serial">31263139-61ff-4691-a1a9-a8d53fd7b388</entry>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <entry name="uuid">31263139-61ff-4691-a1a9-a8d53fd7b388</entry>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/31263139-61ff-4691-a1a9-a8d53fd7b388_disk">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/console.log" append="off"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:08:39 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:08:39 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:08:39 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:08:39 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.623 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.623 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.624 2 INFO nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Using config drive#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.645 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.820 2 INFO nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Creating config drive at /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.824 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphfxn1pc0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.857 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Successfully created port: c10a9a85-702e-4bd4-92d9-474eb88ec422 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.961 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphfxn1pc0" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.989 2 DEBUG nova.storage.rbd_utils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] rbd image 31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:39 np0005473739 nova_compute[259550]: 2025-10-07 14:08:39.994 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config 31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:40 np0005473739 nova_compute[259550]: 2025-10-07 14:08:40.154 2 DEBUG oslo_concurrency.processutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config 31263139-61ff-4691-a1a9-a8d53fd7b388_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:40 np0005473739 nova_compute[259550]: 2025-10-07 14:08:40.154 2 INFO nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Deleting local config drive /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388/disk.config because it was imported into RBD.#033[00m
Oct  7 10:08:40 np0005473739 systemd-machined[214580]: New machine qemu-26-instance-00000016.
Oct  7 10:08:40 np0005473739 systemd[1]: Started Virtual Machine qemu-26-instance-00000016.
Oct  7 10:08:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e149 do_prune osdmap full prune enabled
Oct  7 10:08:40 np0005473739 nova_compute[259550]: 2025-10-07 14:08:40.394 2 DEBUG nova.network.neutron [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e150 e150: 3 total, 3 up, 3 in
Oct  7 10:08:40 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e150: 3 total, 3 up, 3 in
Oct  7 10:08:40 np0005473739 nova_compute[259550]: 2025-10-07 14:08:40.425 2 INFO nova.compute.manager [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Took 1.01 seconds to deallocate network for instance.#033[00m
Oct  7 10:08:40 np0005473739 nova_compute[259550]: 2025-10-07 14:08:40.497 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:40 np0005473739 nova_compute[259550]: 2025-10-07 14:08:40.498 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:40 np0005473739 nova_compute[259550]: 2025-10-07 14:08:40.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:40 np0005473739 nova_compute[259550]: 2025-10-07 14:08:40.605 2 DEBUG oslo_concurrency.processutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e150 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e150 do_prune osdmap full prune enabled
Oct  7 10:08:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e151 e151: 3 total, 3 up, 3 in
Oct  7 10:08:40 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e151: 3 total, 3 up, 3 in
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.003 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Successfully updated port: c10a9a85-702e-4bd4-92d9-474eb88ec422 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.022 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.022 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.023 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.124 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846121.12446, 31263139-61ff-4691-a1a9-a8d53fd7b388 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.125 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] VM Resumed (Lifecycle Event)
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.127 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.128 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.132 2 INFO nova.virt.libvirt.driver [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance spawned successfully.
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.132 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:08:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:08:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2283864756' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.162 2 DEBUG oslo_concurrency.processutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.557s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.169 2 DEBUG nova.compute.provider_tree [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.177 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.183 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.183 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.184 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.184 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.185 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.185 2 DEBUG nova.virt.libvirt.driver [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.190 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.199 2 DEBUG nova.scheduler.client.report [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.222 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.225 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.225 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846121.1253805, 31263139-61ff-4691-a1a9-a8d53fd7b388 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.226 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] VM Started (Lifecycle Event)
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.256 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.758s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.263 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.266 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.302 2 INFO nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Took 3.67 seconds to spawn the instance on the hypervisor.
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.302 2 DEBUG nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.305 2 DEBUG nova.compute.manager [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.306 2 DEBUG oslo_concurrency.lockutils [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.307 2 DEBUG oslo_concurrency.lockutils [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.307 2 DEBUG oslo_concurrency.lockutils [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.308 2 DEBUG nova.compute.manager [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] No waiting events found dispatching network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.308 2 WARNING nova.compute.manager [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received unexpected event network-vif-plugged-95414c38-8a03-41fc-a460-bb80d2febc10 for instance with vm_state deleted and task_state None.
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.308 2 DEBUG nova.compute.manager [req-0f8eb1f2-0ff1-4142-84fb-4b3acb2c7819 req-af36fae8-2e64-4f55-a595-28c92ee5731c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Received event network-vif-deleted-95414c38-8a03-41fc-a460-bb80d2febc10 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.311 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.316 2 INFO nova.scheduler.client.report [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Deleted allocations for instance cd77c7c3-e287-4a6a-b2b6-61655f604ec2
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.362 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.362 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.394 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.397 2 INFO nova.compute.manager [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Took 5.71 seconds to build instance.
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.408 2 DEBUG oslo_concurrency.lockutils [None req-86ac0094-3c29-477e-9915-87a8a11af320 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "cd77c7c3-e287-4a6a-b2b6-61655f604ec2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.439 2 DEBUG oslo_concurrency.lockutils [None req-d5b42484-ca38-4ba5-a328-37111422d81c 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.947s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.482 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.483 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.489 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.489 2 INFO nova.compute.claims [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:08:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1261: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 325 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 153 KiB/s rd, 7.2 MiB/s wr, 223 op/s
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.562 2 DEBUG nova.compute.manager [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.563 2 DEBUG nova.compute.manager [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing instance network info cache due to event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.563 2 DEBUG oslo_concurrency.lockutils [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.634 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.691 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.692 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.692 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.693 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.693 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.699 2 INFO nova.compute.manager [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Terminating instance
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.701 2 DEBUG nova.compute.manager [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:08:41 np0005473739 kernel: tapd23018fc-ec (unregistering): left promiscuous mode
Oct  7 10:08:41 np0005473739 NetworkManager[44949]: <info>  [1759846121.7490] device (tapd23018fc-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:08:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:41Z|00149|binding|INFO|Releasing lport d23018fc-ec2d-4a03-8e09-88c7ecb34f8b from this chassis (sb_readonly=0)
Oct  7 10:08:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:41Z|00150|binding|INFO|Setting lport d23018fc-ec2d-4a03-8e09-88c7ecb34f8b down in Southbound
Oct  7 10:08:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:41Z|00151|binding|INFO|Removing iface tapd23018fc-ec ovn-installed in OVS
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:08:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:41.766 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:73:3d 10.100.0.3'], port_security=['fa:16:3e:84:73:3d 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '63a8d182eca84056a1214aff59d1a164', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ee4342df-374f-4e89-b1cb-d9f5b15e7a74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=70fa1751-9080-486f-a255-eba81ea8e3da, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:08:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:41.768 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d23018fc-ec2d-4a03-8e09-88c7ecb34f8b in datapath a90af50e-9409-4ab3-b31a-0927ca38c12d unbound from our chassis
Oct  7 10:08:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:41.769 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network a90af50e-9409-4ab3-b31a-0927ca38c12d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  7 10:08:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:41.770 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[acad5f09-6021-40a2-93e4-2f3c407fb101]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:08:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:41.770 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d namespace which is not needed anymore
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:08:41 np0005473739 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000014.scope: Deactivated successfully.
Oct  7 10:08:41 np0005473739 systemd[1]: machine-qemu\x2d24\x2dinstance\x2d00000014.scope: Consumed 14.699s CPU time.
Oct  7 10:08:41 np0005473739 systemd-machined[214580]: Machine qemu-24-instance-00000014 terminated.
Oct  7 10:08:41 np0005473739 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [NOTICE]   (290256) : haproxy version is 2.8.14-c23fe91
Oct  7 10:08:41 np0005473739 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [NOTICE]   (290256) : path to executable is /usr/sbin/haproxy
Oct  7 10:08:41 np0005473739 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [WARNING]  (290256) : Exiting Master process...
Oct  7 10:08:41 np0005473739 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [ALERT]    (290256) : Current worker (290274) exited with code 143 (Terminated)
Oct  7 10:08:41 np0005473739 neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d[290218]: [WARNING]  (290256) : All workers exited. Exiting... (0)
Oct  7 10:08:41 np0005473739 systemd[1]: libpod-9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e.scope: Deactivated successfully.
Oct  7 10:08:41 np0005473739 conmon[290218]: conmon 9e947329e49cc86ec51f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e.scope/container/memory.events
Oct  7 10:08:41 np0005473739 podman[292829]: 2025-10-07 14:08:41.924222444 +0000 UTC m=+0.053411258 container died 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.937 2 INFO nova.virt.libvirt.driver [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Instance destroyed successfully.#033[00m
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.938 2 DEBUG nova.objects.instance [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lazy-loading 'resources' on Instance uuid 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.952 2 DEBUG nova.virt.libvirt.vif [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:07:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerTestJSON-server-370744096',display_name='tempest-ImagesOneServerTestJSON-server-370744096',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservertestjson-server-370744096',id=20,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:08:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='63a8d182eca84056a1214aff59d1a164',ramdisk_id='',reservation_id='r-4hhekc2t',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerTestJSON-961060000',owner_user_name='tempest-ImagesOneServerTestJSON-961060000-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:08:37Z,user_data=None,user_id='b1ccfdaee9324154bed6828c0fa32e6d',uuid=4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.953 2 DEBUG nova.network.os_vif_util [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converting VIF {"id": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "address": "fa:16:3e:84:73:3d", "network": {"id": "a90af50e-9409-4ab3-b31a-0927ca38c12d", "bridge": "br-int", "label": "tempest-ImagesOneServerTestJSON-1517624461-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "63a8d182eca84056a1214aff59d1a164", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd23018fc-ec", "ovs_interfaceid": "d23018fc-ec2d-4a03-8e09-88c7ecb34f8b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.953 2 DEBUG nova.network.os_vif_util [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.954 2 DEBUG os_vif [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:08:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e-userdata-shm.mount: Deactivated successfully.
Oct  7 10:08:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-62e91291afc6a80e340c9e03107adfd8c1875ed746c7741b6e4d627197ed67bb-merged.mount: Deactivated successfully.
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.962 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd23018fc-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:08:41 np0005473739 podman[292829]: 2025-10-07 14:08:41.970413119 +0000 UTC m=+0.099601933 container cleanup 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.971 2 INFO os_vif [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:73:3d,bridge_name='br-int',has_traffic_filtering=True,id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b,network=Network(a90af50e-9409-4ab3-b31a-0927ca38c12d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd23018fc-ec')#033[00m
Oct  7 10:08:41 np0005473739 systemd[1]: libpod-conmon-9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e.scope: Deactivated successfully.
Oct  7 10:08:41 np0005473739 nova_compute[259550]: 2025-10-07 14:08:41.994 2 DEBUG nova.network.neutron [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.013 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.013 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Instance network_info: |[{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.014 2 DEBUG oslo_concurrency.lockutils [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.014 2 DEBUG nova.network.neutron [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.017 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Start _get_guest_xml network_info=[{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.026 2 WARNING nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:08:42 np0005473739 podman[292876]: 2025-10-07 14:08:42.040130252 +0000 UTC m=+0.045913018 container remove 9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.043 2 DEBUG nova.virt.libvirt.host [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.045 2 DEBUG nova.virt.libvirt.host [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.050 2 DEBUG nova.virt.libvirt.host [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:08:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.049 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[310652b1-ac24-4d38-8429-50ceef7602d3]: (4, ('Tue Oct  7 02:08:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d (9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e)\n9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e\nTue Oct  7 02:08:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d (9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e)\n9e947329e49cc86ec51ff210d6a262b0ddc94c65c9183faada4ce68426876d0e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.051 2 DEBUG nova.virt.libvirt.host [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.051 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:08:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.051 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[500f7f8f-0487-40d3-957c-75d57cff7b38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.051 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.052 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:08:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.052 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa90af50e-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.052 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.052 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.052 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.053 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.053 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.054 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.054 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:08:42 np0005473739 kernel: tapa90af50e-90: left promiscuous mode
Oct  7 10:08:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.058 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c10c196d-7e4d-4ffe-a611-21ce3605f450]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.066 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.066 2 DEBUG nova.virt.hardware [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.077 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.093 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e5668d-3a03-4798-a67e-0da21175190d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.094 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9be5ba74-f01e-4b92-9c79-ab42b72e5cac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.116 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a5745ff6-c189-4c33-be89-df0456423acf]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 666866, 'reachable_time': 28830, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 292904, 'error': None, 'target': 'ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:42 np0005473739 systemd[1]: run-netns-ovnmeta\x2da90af50e\x2d9409\x2d4ab3\x2db31a\x2d0927ca38c12d.mount: Deactivated successfully.
Oct  7 10:08:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.121 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-a90af50e-9409-4ab3-b31a-0927ca38c12d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:08:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:42.121 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[144687fa-c6ef-402b-858e-2e95bf2685fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:08:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3738107768' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.168 2 DEBUG nova.compute.manager [None req-3c58f946-0c2e-417c-8c22-962adfc5d572 9a04c82aa6a043f4aa7ab4f2b5ac9df5 b46bb2637a044462b276ef40de0ce5c6 - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.172 2 INFO nova.compute.manager [None req-3c58f946-0c2e-417c-8c22-962adfc5d572 9a04c82aa6a043f4aa7ab4f2b5ac9df5 b46bb2637a044462b276ef40de0ce5c6 - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Retrieving diagnostics#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.190 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.196 2 DEBUG nova.compute.provider_tree [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.210 2 DEBUG nova.scheduler.client.report [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.236 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.237 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.362 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.363 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.422 2 INFO nova.virt.libvirt.driver [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Deleting instance files /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_del#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.423 2 INFO nova.virt.libvirt.driver [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Deletion of /var/lib/nova/instances/4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1_del complete#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.445 2 INFO nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.565 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:08:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:08:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/784667935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.590 2 INFO nova.compute.manager [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.592 2 DEBUG oslo.service.loopingcall [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.592 2 DEBUG nova.compute.manager [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.592 2 DEBUG nova.network.neutron [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.598 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.624 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.631 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.659 2 DEBUG nova.policy [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.665 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "31263139-61ff-4691-a1a9-a8d53fd7b388" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.666 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.666 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "31263139-61ff-4691-a1a9-a8d53fd7b388-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.667 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.667 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.668 2 INFO nova.compute.manager [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Terminating instance#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.669 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "refresh_cache-31263139-61ff-4691-a1a9-a8d53fd7b388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.669 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquired lock "refresh_cache-31263139-61ff-4691-a1a9-a8d53fd7b388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.669 2 DEBUG nova.network.neutron [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.758 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.759 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.759 2 INFO nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Creating image(s)#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.776 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.793 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.811 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.815 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.877 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.878 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.878 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.879 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.898 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.902 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:42 np0005473739 nova_compute[259550]: 2025-10-07 14:08:42.998 2 DEBUG nova.network.neutron [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:08:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:08:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/790973850' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.095 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.097 2 DEBUG nova.virt.libvirt.vif [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-822129567',display_name='tempest-AttachInterfacesUnderV243Test-server-822129567',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-822129567',id=23,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqYH9kZe9rX7hHEHsSyYrS6rRZ+9wOpWI4SrDObDRhVhO+VRnrHxHs1eVVbdpvsvOSO+WGarbwizJejZEygylEjS/5KcFmLUFjCsp72PetsN1n04qwsYDZRBjJQ0LVJNw==',key_name='tempest-keypair-1173887905',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b03b40fd15b945118fde82b6454dbced',ramdisk_id='',reservation_id='r-z4cql8qa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-169595365',owner_user_name='tempest-AttachInterfacesUnderV243Test-169595365-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='157a909ec18a483a901ec32a0a867038',uuid=5fc2c826-a57b-4c9a-910a-48b72ec2ab75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.097 2 DEBUG nova.network.os_vif_util [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converting VIF {"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.098 2 DEBUG nova.network.os_vif_util [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.099 2 DEBUG nova.objects.instance [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lazy-loading 'pci_devices' on Instance uuid 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.171 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  <uuid>5fc2c826-a57b-4c9a-910a-48b72ec2ab75</uuid>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  <name>instance-00000017</name>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-822129567</nova:name>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:08:42</nova:creationTime>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <nova:user uuid="157a909ec18a483a901ec32a0a867038">tempest-AttachInterfacesUnderV243Test-169595365-project-member</nova:user>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <nova:project uuid="b03b40fd15b945118fde82b6454dbced">tempest-AttachInterfacesUnderV243Test-169595365</nova:project>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <nova:port uuid="c10a9a85-702e-4bd4-92d9-474eb88ec422">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <entry name="serial">5fc2c826-a57b-4c9a-910a-48b72ec2ab75</entry>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <entry name="uuid">5fc2c826-a57b-4c9a-910a-48b72ec2ab75</entry>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:45:8d:83"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <target dev="tapc10a9a85-70"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/console.log" append="off"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:08:43 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:08:43 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:08:43 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:08:43 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.180 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Preparing to wait for external event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.180 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.180 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.180 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.181 2 DEBUG nova.virt.libvirt.vif [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-822129567',display_name='tempest-AttachInterfacesUnderV243Test-server-822129567',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-822129567',id=23,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqYH9kZe9rX7hHEHsSyYrS6rRZ+9wOpWI4SrDObDRhVhO+VRnrHxHs1eVVbdpvsvOSO+WGarbwizJejZEygylEjS/5KcFmLUFjCsp72PetsN1n04qwsYDZRBjJQ0LVJNw==',key_name='tempest-keypair-1173887905',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='b03b40fd15b945118fde82b6454dbced',ramdisk_id='',reservation_id='r-z4cql8qa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-169595365',owner_user_name='tempest-AttachInterfacesUnderV243Test-169595365-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='157a909ec18a483a901ec32a0a867038',uuid=5fc2c826-a57b-4c9a-910a-48b72ec2ab75,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.182 2 DEBUG nova.network.os_vif_util [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converting VIF {"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.182 2 DEBUG nova.network.os_vif_util [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.183 2 DEBUG os_vif [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc10a9a85-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.192 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc10a9a85-70, col_values=(('external_ids', {'iface-id': 'c10a9a85-702e-4bd4-92d9-474eb88ec422', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:45:8d:83', 'vm-uuid': '5fc2c826-a57b-4c9a-910a-48b72ec2ab75'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.193 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.291s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:43 np0005473739 NetworkManager[44949]: <info>  [1759846123.1954] manager: (tapc10a9a85-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.225 2 INFO os_vif [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70')#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.256 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] resizing rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.346 2 DEBUG nova.objects.instance [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.534 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.535 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Ensure instance console log exists: /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.535 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.536 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.536 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.540 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.541 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.541 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] No VIF found with MAC fa:16:3e:45:8d:83, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.542 2 INFO nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Using config drive#033[00m
Oct  7 10:08:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1262: 305 pgs: 2 active+clean+snaptrim, 5 active+clean+snaptrim_wait, 298 active+clean; 325 MiB data, 474 MiB used, 60 GiB / 60 GiB avail; 146 KiB/s rd, 6.9 MiB/s wr, 212 op/s
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.563 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.616 2 DEBUG nova.network.neutron [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.636 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Releasing lock "refresh_cache-31263139-61ff-4691-a1a9-a8d53fd7b388" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.637 2 DEBUG nova.compute.manager [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.684 2 DEBUG nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-unplugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.685 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.685 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.686 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.686 2 DEBUG nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] No waiting events found dispatching network-vif-unplugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.686 2 DEBUG nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-unplugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.686 2 DEBUG nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.687 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.687 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.687 2 DEBUG oslo_concurrency.lockutils [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.688 2 DEBUG nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] No waiting events found dispatching network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.688 2 WARNING nova.compute.manager [req-5c0c6879-7c75-4c2f-ab42-d0117ec1d36c req-48705803-2cf7-47cf-91ee-1cf876caf895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received unexpected event network-vif-plugged-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:08:43 np0005473739 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000016.scope: Deactivated successfully.
Oct  7 10:08:43 np0005473739 systemd[1]: machine-qemu\x2d26\x2dinstance\x2d00000016.scope: Consumed 3.411s CPU time.
Oct  7 10:08:43 np0005473739 systemd-machined[214580]: Machine qemu-26-instance-00000016 terminated.
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.863 2 INFO nova.virt.libvirt.driver [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance destroyed successfully.#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.864 2 DEBUG nova.objects.instance [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lazy-loading 'resources' on Instance uuid 31263139-61ff-4691-a1a9-a8d53fd7b388 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.909 2 DEBUG nova.network.neutron [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updated VIF entry in instance network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.910 2 DEBUG nova.network.neutron [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.958 2 DEBUG oslo_concurrency.lockutils [req-d26ca8db-edcc-479f-b3d9-5ab42ab27406 req-c96c5de1-3f24-4751-ace8-f677c792507c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.992 2 INFO nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Creating config drive at /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config#033[00m
Oct  7 10:08:43 np0005473739 nova_compute[259550]: 2025-10-07 14:08:43.998 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw7y9vi0d execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.104 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Successfully created port: 29476070-b1b0-4d1c-a313-d0fbd1793130 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.134 2 DEBUG nova.network.neutron [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.150 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw7y9vi0d" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.176 2 DEBUG nova.storage.rbd_utils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] rbd image 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.181 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.217 2 DEBUG nova.compute.manager [req-a676fee7-52bb-4a77-b542-640decef2b55 req-3d9aa5ae-2516-4a28-98ee-ce24760a84b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Received event network-vif-deleted-d23018fc-ec2d-4a03-8e09-88c7ecb34f8b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.218 2 INFO nova.compute.manager [req-a676fee7-52bb-4a77-b542-640decef2b55 req-3d9aa5ae-2516-4a28-98ee-ce24760a84b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Neutron deleted interface d23018fc-ec2d-4a03-8e09-88c7ecb34f8b; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.218 2 DEBUG nova.network.neutron [req-a676fee7-52bb-4a77-b542-640decef2b55 req-3d9aa5ae-2516-4a28-98ee-ce24760a84b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.247 2 INFO nova.compute.manager [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Took 1.65 seconds to deallocate network for instance.#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.295 2 DEBUG nova.compute.manager [req-a676fee7-52bb-4a77-b542-640decef2b55 req-3d9aa5ae-2516-4a28-98ee-ce24760a84b7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Detach interface failed, port_id=d23018fc-ec2d-4a03-8e09-88c7ecb34f8b, reason: Instance 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.359 2 DEBUG oslo_concurrency.processutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config 5fc2c826-a57b-4c9a-910a-48b72ec2ab75_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.178s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.360 2 INFO nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Deleting local config drive /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75/disk.config because it was imported into RBD.#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.397 2 INFO nova.virt.libvirt.driver [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Deleting instance files /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388_del#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.398 2 INFO nova.virt.libvirt.driver [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Deletion of /var/lib/nova/instances/31263139-61ff-4691-a1a9-a8d53fd7b388_del complete#033[00m
Oct  7 10:08:44 np0005473739 kernel: tapc10a9a85-70: entered promiscuous mode
Oct  7 10:08:44 np0005473739 NetworkManager[44949]: <info>  [1759846124.4165] manager: (tapc10a9a85-70): new Tun device (/org/freedesktop/NetworkManager/Devices/78)
Oct  7 10:08:44 np0005473739 systemd-udevd[292792]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:44Z|00152|binding|INFO|Claiming lport c10a9a85-702e-4bd4-92d9-474eb88ec422 for this chassis.
Oct  7 10:08:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:44Z|00153|binding|INFO|c10a9a85-702e-4bd4-92d9-474eb88ec422: Claiming fa:16:3e:45:8d:83 10.100.0.13
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:44 np0005473739 NetworkManager[44949]: <info>  [1759846124.4280] device (tapc10a9a85-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:08:44 np0005473739 NetworkManager[44949]: <info>  [1759846124.4300] device (tapc10a9a85-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:08:44 np0005473739 systemd-machined[214580]: New machine qemu-27-instance-00000017.
Oct  7 10:08:44 np0005473739 systemd[1]: Started Virtual Machine qemu-27-instance-00000017.
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:44Z|00154|binding|INFO|Setting lport c10a9a85-702e-4bd4-92d9-474eb88ec422 ovn-installed in OVS
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.557 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.557 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:44Z|00155|binding|INFO|Setting lport c10a9a85-702e-4bd4-92d9-474eb88ec422 up in Southbound
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.563 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:8d:83 10.100.0.13'], port_security=['fa:16:3e:45:8d:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5fc2c826-a57b-4c9a-910a-48b72ec2ab75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b03b40fd15b945118fde82b6454dbced', 'neutron:revision_number': '2', 'neutron:security_group_ids': '33a99878-f5f8-4125-a239-2333c49ae6de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82ef4b2f-7b03-44f3-86f3-9cfeec6eece1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c10a9a85-702e-4bd4-92d9-474eb88ec422) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.565 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c10a9a85-702e-4bd4-92d9-474eb88ec422 in datapath 9ef175ad-b29a-40a5-8bff-1ae744434f58 bound to our chassis#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.566 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9ef175ad-b29a-40a5-8bff-1ae744434f58#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.578 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fbf9af8-5f92-4136-b350-8054ab26e3f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.580 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9ef175ad-b1 in ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.582 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9ef175ad-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.582 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bed8eeca-a39a-4cc4-8fa4-989a658d648e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.583 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[98b41d01-92a5-4680-899e-c52335d05ebb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.600 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[511e52fc-3a65-4f7b-94ed-9b42e8a7172e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.622 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e15e46d8-77c3-4bbb-bcf9-7db9ba5d4d97]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.653 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9cdebcd3-78fc-4eef-9186-ab3e0359eee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.653 2 DEBUG oslo_concurrency.processutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:44 np0005473739 NetworkManager[44949]: <info>  [1759846124.6706] manager: (tap9ef175ad-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/79)
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.669 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[31cc0707-829a-4efe-9d24-3d5a8f06afc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.706 2 INFO nova.compute.manager [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Took 1.07 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.707 2 DEBUG oslo.service.loopingcall [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.707 2 DEBUG nova.compute.manager [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.707 2 DEBUG nova.network.neutron [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.726 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fb3d29a0-b08d-44ee-9974-ea75b628f64a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.730 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6653ae5b-334d-4029-a575-a6b51d40bff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 NetworkManager[44949]: <info>  [1759846124.7558] device (tap9ef175ad-b0): carrier: link connected
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.763 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0527de4c-8692-4235-8b22-95c034193767]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.784 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5076f0c7-5a34-460b-8699-9819e40c47bd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ef175ad-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:ed:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670832, 'reachable_time': 34892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293264, 'error': None, 'target': 'ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.800 2 DEBUG nova.network.neutron [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.804 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a705b256-2775-4e74-820d-f16de7b4850c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:ed5b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 670832, 'tstamp': 670832}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293283, 'error': None, 'target': 'ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.826 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd872aca-7970-4c29-8d17-22552cf22fcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9ef175ad-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:ed:5b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 51], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670832, 'reachable_time': 34892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293284, 'error': None, 'target': 'ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.861 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43a1f25c-1d92-4961-a0e2-2a859d3ca4b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.899 2 DEBUG nova.network.neutron [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.939 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6e8f909c-bd10-4556-aa5c-916c39cdb4b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.941 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ef175ad-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.941 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.942 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9ef175ad-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:44 np0005473739 NetworkManager[44949]: <info>  [1759846124.9445] manager: (tap9ef175ad-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/80)
Oct  7 10:08:44 np0005473739 kernel: tap9ef175ad-b0: entered promiscuous mode
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9ef175ad-b0, col_values=(('external_ids', {'iface-id': 'ee8146fb-210f-43d1-b971-6942dccdfe6d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:44Z|00156|binding|INFO|Releasing lport ee8146fb-210f-43d1-b971-6942dccdfe6d from this chassis (sb_readonly=0)
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.953 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9ef175ad-b29a-40a5-8bff-1ae744434f58.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9ef175ad-b29a-40a5-8bff-1ae744434f58.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.955 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12dd2fab-616b-4060-a737-b333bd00ce8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.956 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-9ef175ad-b29a-40a5-8bff-1ae744434f58
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/9ef175ad-b29a-40a5-8bff-1ae744434f58.pid.haproxy
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 9ef175ad-b29a-40a5-8bff-1ae744434f58
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:08:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:44.957 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'env', 'PROCESS_TAG=haproxy-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9ef175ad-b29a-40a5-8bff-1ae744434f58.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:08:44 np0005473739 nova_compute[259550]: 2025-10-07 14:08:44.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:08:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3474795087' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:08:45 np0005473739 nova_compute[259550]: 2025-10-07 14:08:45.102 2 DEBUG oslo_concurrency.processutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:45 np0005473739 nova_compute[259550]: 2025-10-07 14:08:45.110 2 DEBUG nova.compute.provider_tree [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:08:45 np0005473739 nova_compute[259550]: 2025-10-07 14:08:45.120 2 INFO nova.compute.manager [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Took 0.41 seconds to deallocate network for instance.#033[00m
Oct  7 10:08:45 np0005473739 podman[293318]: 2025-10-07 14:08:45.35776778 +0000 UTC m=+0.054708273 container create ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:08:45 np0005473739 systemd[1]: Started libpod-conmon-ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77.scope.
Oct  7 10:08:45 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:08:45 np0005473739 podman[293318]: 2025-10-07 14:08:45.328689764 +0000 UTC m=+0.025630277 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:08:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11d889d5dab38b3fea3ecfaefc7e9381c5ac5e4cbaa2ea3617b44dc00bb8f71d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:45 np0005473739 podman[293318]: 2025-10-07 14:08:45.440034629 +0000 UTC m=+0.136975152 container init ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  7 10:08:45 np0005473739 podman[293318]: 2025-10-07 14:08:45.451959699 +0000 UTC m=+0.148900192 container start ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 10:08:45 np0005473739 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [NOTICE]   (293337) : New worker (293339) forked
Oct  7 10:08:45 np0005473739 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [NOTICE]   (293337) : Loading success.
Oct  7 10:08:45 np0005473739 nova_compute[259550]: 2025-10-07 14:08:45.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1263: 305 pgs: 305 active+clean; 160 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 8.0 MiB/s wr, 436 op/s
Oct  7 10:08:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e151 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e151 do_prune osdmap full prune enabled
Oct  7 10:08:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e152 e152: 3 total, 3 up, 3 in
Oct  7 10:08:45 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e152: 3 total, 3 up, 3 in
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.083 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846126.0824468, 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.084 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] VM Started (Lifecycle Event)#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.498 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Successfully updated port: 29476070-b1b0-4d1c-a313-d0fbd1793130 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.521 2 DEBUG nova.scheduler.client.report [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.736 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.737 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.737 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.752 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.758 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846126.0833654, 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.758 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.916 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 2.359s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.944 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.945 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.950 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:46 np0005473739 nova_compute[259550]: 2025-10-07 14:08:46.961 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.012 2 INFO nova.scheduler.client.report [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Deleted allocations for instance 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.027 2 DEBUG oslo_concurrency.processutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.056 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.309 2 DEBUG oslo_concurrency.lockutils [None req-f1c97c52-2bfe-48c8-bd27-3a6dc6c51cdb b1ccfdaee9324154bed6828c0fa32e6d 63a8d182eca84056a1214aff59d1a164 - - default default] Lock "4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:08:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2173230607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.511 2 DEBUG oslo_concurrency.processutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.517 2 DEBUG nova.compute.provider_tree [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:08:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1265: 305 pgs: 305 active+clean; 160 MiB data, 383 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 6.4 MiB/s wr, 385 op/s
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.703 2 DEBUG nova.scheduler.client.report [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.731 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.751 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.806s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.804 2 INFO nova.scheduler.client.report [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Deleted allocations for instance 31263139-61ff-4691-a1a9-a8d53fd7b388#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.880 2 DEBUG nova.compute.manager [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.880 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.881 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.881 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.881 2 DEBUG nova.compute.manager [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Processing event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.881 2 DEBUG nova.compute.manager [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.882 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.882 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.882 2 DEBUG oslo_concurrency.lockutils [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.882 2 DEBUG nova.compute.manager [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] No waiting events found dispatching network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.883 2 WARNING nova.compute.manager [req-9a2e83ae-529a-4c21-a6bb-375ebd28c5d0 req-1c420fd6-4d21-42a7-ab4b-0cd1ceb165ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received unexpected event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.883 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.889 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846127.8895493, 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.890 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.893 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.898 2 INFO nova.virt.libvirt.driver [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Instance spawned successfully.#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.899 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.919 2 DEBUG oslo_concurrency.lockutils [None req-d05b289e-c832-41f3-a5c4-874fdd1ca4dc 2014e38add1e412a8c2e1a1b678f687f 664831261352424ba813b5a956c752cd - - default default] Lock "31263139-61ff-4691-a1a9-a8d53fd7b388" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.932 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.937 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.941 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.941 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.942 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.942 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.942 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:47 np0005473739 nova_compute[259550]: 2025-10-07 14:08:47.943 2 DEBUG nova.virt.libvirt.driver [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:48 np0005473739 nova_compute[259550]: 2025-10-07 14:08:48.000 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:08:48 np0005473739 nova_compute[259550]: 2025-10-07 14:08:48.070 2 INFO nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Took 9.73 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:08:48 np0005473739 nova_compute[259550]: 2025-10-07 14:08:48.071 2 DEBUG nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:48 np0005473739 nova_compute[259550]: 2025-10-07 14:08:48.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:48 np0005473739 nova_compute[259550]: 2025-10-07 14:08:48.238 2 INFO nova.compute.manager [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Took 11.93 seconds to build instance.#033[00m
Oct  7 10:08:48 np0005473739 nova_compute[259550]: 2025-10-07 14:08:48.356 2 DEBUG oslo_concurrency.lockutils [None req-ea81899f-ef71-4d25-8e11-55de737ea57b 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1266: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 3.7 MiB/s rd, 2.4 MiB/s wr, 304 op/s
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.841 2 DEBUG nova.network.neutron [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Updating instance_info_cache with network_info: [{"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.900 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.901 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance network_info: |[{"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.903 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Start _get_guest_xml network_info=[{"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.910 2 WARNING nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.916 2 DEBUG nova.virt.libvirt.host [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.917 2 DEBUG nova.virt.libvirt.host [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.924 2 DEBUG nova.virt.libvirt.host [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.925 2 DEBUG nova.virt.libvirt.host [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.925 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.925 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.926 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.926 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.926 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.927 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.927 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.927 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.927 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.928 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.928 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.928 2 DEBUG nova.virt.hardware [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.931 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.991 2 DEBUG nova.compute.manager [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-changed-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.992 2 DEBUG nova.compute.manager [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Refreshing instance network info cache due to event network-changed-29476070-b1b0-4d1c-a313-d0fbd1793130. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.992 2 DEBUG oslo_concurrency.lockutils [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.993 2 DEBUG oslo_concurrency.lockutils [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:08:49 np0005473739 nova_compute[259550]: 2025-10-07 14:08:49.993 2 DEBUG nova.network.neutron [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Refreshing network info cache for port 29476070-b1b0-4d1c-a313-d0fbd1793130 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:08:50 np0005473739 podman[293415]: 2025-10-07 14:08:50.077125673 +0000 UTC m=+0.061462574 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_managed=true, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:08:50 np0005473739 podman[293414]: 2025-10-07 14:08:50.089142565 +0000 UTC m=+0.072950292 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:08:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:08:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1216984279' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:08:50 np0005473739 nova_compute[259550]: 2025-10-07 14:08:50.404 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:50 np0005473739 nova_compute[259550]: 2025-10-07 14:08:50.433 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:50 np0005473739 nova_compute[259550]: 2025-10-07 14:08:50.438 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:50 np0005473739 nova_compute[259550]: 2025-10-07 14:08:50.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e152 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e152 do_prune osdmap full prune enabled
Oct  7 10:08:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 e153: 3 total, 3 up, 3 in
Oct  7 10:08:50 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e153: 3 total, 3 up, 3 in
Oct  7 10:08:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:08:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2171010429' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.000 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.562s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.002 2 DEBUG nova.virt.libvirt.vif [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-771633438',display_name='tempest-ImagesTestJSON-server-771633438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-771633438',id=24,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-6zal3iv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:42Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=46d99eec-2ec6-4ea6-acb1-c0a694dd2df1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.003 2 DEBUG nova.network.os_vif_util [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.004 2 DEBUG nova.network.os_vif_util [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.005 2 DEBUG nova.objects.instance [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.024 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  <uuid>46d99eec-2ec6-4ea6-acb1-c0a694dd2df1</uuid>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  <name>instance-00000018</name>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <nova:name>tempest-ImagesTestJSON-server-771633438</nova:name>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:08:49</nova:creationTime>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <nova:port uuid="29476070-b1b0-4d1c-a313-d0fbd1793130">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <entry name="serial">46d99eec-2ec6-4ea6-acb1-c0a694dd2df1</entry>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <entry name="uuid">46d99eec-2ec6-4ea6-acb1-c0a694dd2df1</entry>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:2d:c7:a1"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <target dev="tap29476070-b1"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/console.log" append="off"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:08:51 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:08:51 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:08:51 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:08:51 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.026 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Preparing to wait for external event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.027 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.027 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.027 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.028 2 DEBUG nova.virt.libvirt.vif [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:08:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-771633438',display_name='tempest-ImagesTestJSON-server-771633438',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-771633438',id=24,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-6zal3iv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:08:42Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=46d99eec-2ec6-4ea6-acb1-c0a694dd2df1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.029 2 DEBUG nova.network.os_vif_util [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.029 2 DEBUG nova.network.os_vif_util [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.030 2 DEBUG os_vif [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29476070-b1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.037 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap29476070-b1, col_values=(('external_ids', {'iface-id': '29476070-b1b0-4d1c-a313-d0fbd1793130', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2d:c7:a1', 'vm-uuid': '46d99eec-2ec6-4ea6-acb1-c0a694dd2df1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:51 np0005473739 NetworkManager[44949]: <info>  [1759846131.0403] manager: (tap29476070-b1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/81)
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.050 2 INFO os_vif [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1')#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.310 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.311 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.312 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:2d:c7:a1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.313 2 INFO nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Using config drive#033[00m
Oct  7 10:08:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:51Z|00157|binding|INFO|Releasing lport ee8146fb-210f-43d1-b971-6942dccdfe6d from this chassis (sb_readonly=0)
Oct  7 10:08:51 np0005473739 NetworkManager[44949]: <info>  [1759846131.4206] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/82)
Oct  7 10:08:51 np0005473739 NetworkManager[44949]: <info>  [1759846131.4217] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/83)
Oct  7 10:08:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:51Z|00158|binding|INFO|Releasing lport ee8146fb-210f-43d1-b971-6942dccdfe6d from this chassis (sb_readonly=0)
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.508 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1268: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 5.9 MiB/s rd, 2.7 MiB/s wr, 395 op/s
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.919 2 DEBUG nova.compute.manager [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.920 2 DEBUG nova.compute.manager [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing instance network info cache due to event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.920 2 DEBUG oslo_concurrency.lockutils [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.920 2 DEBUG oslo_concurrency.lockutils [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:08:51 np0005473739 nova_compute[259550]: 2025-10-07 14:08:51.921 2 DEBUG nova.network.neutron [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:08:52 np0005473739 nova_compute[259550]: 2025-10-07 14:08:52.013 2 INFO nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Creating config drive at /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config#033[00m
Oct  7 10:08:52 np0005473739 nova_compute[259550]: 2025-10-07 14:08:52.020 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvks_smpy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:52 np0005473739 nova_compute[259550]: 2025-10-07 14:08:52.158 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvks_smpy" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:52 np0005473739 nova_compute[259550]: 2025-10-07 14:08:52.187 2 DEBUG nova.storage.rbd_utils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:08:52 np0005473739 nova_compute[259550]: 2025-10-07 14:08:52.193 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:08:52 np0005473739 nova_compute[259550]: 2025-10-07 14:08:52.926 2 DEBUG nova.network.neutron [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Updated VIF entry in instance network info cache for port 29476070-b1b0-4d1c-a313-d0fbd1793130. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:08:52 np0005473739 nova_compute[259550]: 2025-10-07 14:08:52.927 2 DEBUG nova.network.neutron [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Updating instance_info_cache with network_info: [{"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:52 np0005473739 nova_compute[259550]: 2025-10-07 14:08:52.961 2 DEBUG oslo_concurrency.lockutils [req-9789ae0f-1582-47b6-ac8c-2e08d0b08a50 req-77768945-b840-4228-968e-78e403065de5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:08:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1269: 305 pgs: 305 active+clean; 134 MiB data, 387 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 23 KiB/s wr, 123 op/s
Oct  7 10:08:53 np0005473739 nova_compute[259550]: 2025-10-07 14:08:53.741 2 DEBUG oslo_concurrency.processutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:08:53 np0005473739 nova_compute[259550]: 2025-10-07 14:08:53.743 2 INFO nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Deleting local config drive /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1/disk.config because it was imported into RBD.#033[00m
Oct  7 10:08:53 np0005473739 kernel: tap29476070-b1: entered promiscuous mode
Oct  7 10:08:53 np0005473739 NetworkManager[44949]: <info>  [1759846133.8141] manager: (tap29476070-b1): new Tun device (/org/freedesktop/NetworkManager/Devices/84)
Oct  7 10:08:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:53Z|00159|binding|INFO|Claiming lport 29476070-b1b0-4d1c-a313-d0fbd1793130 for this chassis.
Oct  7 10:08:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:53Z|00160|binding|INFO|29476070-b1b0-4d1c-a313-d0fbd1793130: Claiming fa:16:3e:2d:c7:a1 10.100.0.14
Oct  7 10:08:53 np0005473739 nova_compute[259550]: 2025-10-07 14:08:53.821 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846118.817205, cd77c7c3-e287-4a6a-b2b6-61655f604ec2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:53 np0005473739 nova_compute[259550]: 2025-10-07 14:08:53.822 2 INFO nova.compute.manager [-] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:08:53 np0005473739 nova_compute[259550]: 2025-10-07 14:08:53.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:53Z|00161|binding|INFO|Setting lport 29476070-b1b0-4d1c-a313-d0fbd1793130 ovn-installed in OVS
Oct  7 10:08:53 np0005473739 nova_compute[259550]: 2025-10-07 14:08:53.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:53 np0005473739 nova_compute[259550]: 2025-10-07 14:08:53.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:53 np0005473739 systemd-udevd[293588]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:08:53 np0005473739 systemd-machined[214580]: New machine qemu-28-instance-00000018.
Oct  7 10:08:53 np0005473739 systemd[1]: Started Virtual Machine qemu-28-instance-00000018.
Oct  7 10:08:53 np0005473739 nova_compute[259550]: 2025-10-07 14:08:53.875 2 DEBUG nova.compute.manager [None req-67c33f92-1b14-4047-bf61-fca087f7c3e7 - - - - - -] [instance: cd77c7c3-e287-4a6a-b2b6-61655f604ec2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:53 np0005473739 NetworkManager[44949]: <info>  [1759846133.8825] device (tap29476070-b1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:08:53 np0005473739 NetworkManager[44949]: <info>  [1759846133.8840] device (tap29476070-b1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:08:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:53Z|00162|binding|INFO|Setting lport 29476070-b1b0-4d1c-a313-d0fbd1793130 up in Southbound
Oct  7 10:08:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.901 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:c7:a1 10.100.0.14'], port_security=['fa:16:3e:2d:c7:a1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '46d99eec-2ec6-4ea6-acb1-c0a694dd2df1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=29476070-b1b0-4d1c-a313-d0fbd1793130) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:08:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.902 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 29476070-b1b0-4d1c-a313-d0fbd1793130 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb bound to our chassis#033[00m
Oct  7 10:08:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.904 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb#033[00m
Oct  7 10:08:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.922 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa043a59-b7cc-482c-b7ae-28516cc0a24e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.923 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f80456d-d1 in ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:08:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.926 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f80456d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:08:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.926 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a94e69b0-4207-4622-b6c9-22d747ec6ee9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.928 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[84d19632-441e-459c-b4d8-73c476b41461]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.945 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fbce4366-1c07-4a81-9cb7-1f71843edff2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:53.974 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9188444e-57d0-4c66-bc3b-bd90b63dee7f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.014 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a87927a3-bce5-4129-a5ae-8f4bb4c504b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.021 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a06268ca-5c93-44b4-af4c-8fbd3fa2e3b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 NetworkManager[44949]: <info>  [1759846134.0227] manager: (tap9f80456d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/85)
Oct  7 10:08:54 np0005473739 systemd-udevd[293590]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.069 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[86eb53bc-ac8d-4d6f-bdaa-68e9f9d78ed0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.072 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[49c0753c-41f0-432f-859b-59d3e18998f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 NetworkManager[44949]: <info>  [1759846134.0980] device (tap9f80456d-d0): carrier: link connected
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.107 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2e3c8540-77e1-494e-8b3b-0d8ad8e41ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.125 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa40c67-c4c7-423c-bd2d-3a7e0e3140fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671766, 'reachable_time': 29193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 293621, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:54Z|00163|binding|INFO|Releasing lport ee8146fb-210f-43d1-b971-6942dccdfe6d from this chassis (sb_readonly=0)
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.151 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[390e1fde-3b73-4368-9d98-35718c881ef6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:18ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 671766, 'tstamp': 671766}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 293622, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.172 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[44e580d6-7476-43d6-8c88-efc52bf5f608]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 53], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671766, 'reachable_time': 29193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 293623, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.208 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[49934b64-711a-494c-bc03-e442bc3726c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.282 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a66fea7e-b83f-4b24-8735-cbd526f67be6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.284 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.284 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.284 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:54 np0005473739 kernel: tap9f80456d-d0: entered promiscuous mode
Oct  7 10:08:54 np0005473739 NetworkManager[44949]: <info>  [1759846134.2882] manager: (tap9f80456d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/86)
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.289 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:08:54Z|00164|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.307 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.308 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae6b3882-236f-4efc-a56f-59da3e634129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.309 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:08:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:54.310 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'env', 'PROCESS_TAG=haproxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.333 2 DEBUG nova.compute.manager [req-24d415d5-3f3d-4104-aadf-ecef165d858d req-8c037d13-ff5b-46e1-b2e9-d1ac3a3d4166 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.335 2 DEBUG oslo_concurrency.lockutils [req-24d415d5-3f3d-4104-aadf-ecef165d858d req-8c037d13-ff5b-46e1-b2e9-d1ac3a3d4166 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.335 2 DEBUG oslo_concurrency.lockutils [req-24d415d5-3f3d-4104-aadf-ecef165d858d req-8c037d13-ff5b-46e1-b2e9-d1ac3a3d4166 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.335 2 DEBUG oslo_concurrency.lockutils [req-24d415d5-3f3d-4104-aadf-ecef165d858d req-8c037d13-ff5b-46e1-b2e9-d1ac3a3d4166 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.336 2 DEBUG nova.compute.manager [req-24d415d5-3f3d-4104-aadf-ecef165d858d req-8c037d13-ff5b-46e1-b2e9-d1ac3a3d4166 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Processing event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.345 2 DEBUG nova.network.neutron [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updated VIF entry in instance network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.346 2 DEBUG nova.network.neutron [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.408 2 DEBUG oslo_concurrency.lockutils [req-d8507d22-d05b-47c3-8834-1609ae0572e1 req-ad83e513-f061-439a-b8f9-1607e2f0ef7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:08:54 np0005473739 podman[293698]: 2025-10-07 14:08:54.699343869 +0000 UTC m=+0.062349207 container create 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:08:54 np0005473739 systemd[1]: Started libpod-conmon-8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188.scope.
Oct  7 10:08:54 np0005473739 podman[293698]: 2025-10-07 14:08:54.661544248 +0000 UTC m=+0.024549616 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:08:54 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:08:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2be2587149f0b517fdb58df03030dad6ce5b5e92bc9471fce4b0fce2cbca3a69/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:08:54 np0005473739 podman[293698]: 2025-10-07 14:08:54.784087844 +0000 UTC m=+0.147093202 container init 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:08:54 np0005473739 podman[293698]: 2025-10-07 14:08:54.790775562 +0000 UTC m=+0.153780900 container start 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:08:54 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [NOTICE]   (293718) : New worker (293720) forked
Oct  7 10:08:54 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [NOTICE]   (293718) : Loading success.
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.858 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846134.8572397, 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.859 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] VM Started (Lifecycle Event)#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.860 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.864 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.868 2 INFO nova.virt.libvirt.driver [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance spawned successfully.#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.869 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.907 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.908 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.909 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.910 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.910 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.911 2 DEBUG nova.virt.libvirt.driver [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.918 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.922 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.983 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.984 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846134.8579829, 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:54 np0005473739 nova_compute[259550]: 2025-10-07 14:08:54.984 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:08:55 np0005473739 nova_compute[259550]: 2025-10-07 14:08:55.019 2 INFO nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Took 12.26 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:08:55 np0005473739 nova_compute[259550]: 2025-10-07 14:08:55.020 2 DEBUG nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:55 np0005473739 nova_compute[259550]: 2025-10-07 14:08:55.023 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:55 np0005473739 nova_compute[259550]: 2025-10-07 14:08:55.029 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846134.8633146, 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:55 np0005473739 nova_compute[259550]: 2025-10-07 14:08:55.030 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:08:55 np0005473739 nova_compute[259550]: 2025-10-07 14:08:55.069 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:55 np0005473739 nova_compute[259550]: 2025-10-07 14:08:55.073 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:08:55 np0005473739 nova_compute[259550]: 2025-10-07 14:08:55.103 2 INFO nova.compute.manager [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Took 13.65 seconds to build instance.#033[00m
Oct  7 10:08:55 np0005473739 nova_compute[259550]: 2025-10-07 14:08:55.123 2 DEBUG oslo_concurrency.lockutils [None req-c682af9f-c828-49e3-9f2a-4f0aa6df9f91 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1270: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 34 KiB/s wr, 111 op/s
Oct  7 10:08:55 np0005473739 nova_compute[259550]: 2025-10-07 14:08:55.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:08:56 np0005473739 nova_compute[259550]: 2025-10-07 14:08:56.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:56 np0005473739 nova_compute[259550]: 2025-10-07 14:08:56.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:08:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:56.271 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:08:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:56.272 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:08:56 np0005473739 nova_compute[259550]: 2025-10-07 14:08:56.581 2 DEBUG nova.compute.manager [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:08:56 np0005473739 nova_compute[259550]: 2025-10-07 14:08:56.582 2 DEBUG oslo_concurrency.lockutils [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:56 np0005473739 nova_compute[259550]: 2025-10-07 14:08:56.582 2 DEBUG oslo_concurrency.lockutils [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:56 np0005473739 nova_compute[259550]: 2025-10-07 14:08:56.582 2 DEBUG oslo_concurrency.lockutils [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:08:56 np0005473739 nova_compute[259550]: 2025-10-07 14:08:56.583 2 DEBUG nova.compute.manager [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] No waiting events found dispatching network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:08:56 np0005473739 nova_compute[259550]: 2025-10-07 14:08:56.583 2 WARNING nova.compute.manager [req-bc0e0c73-2f82-4e6f-8152-ff5c7dd37d0d req-3ef7c5fd-87e8-44cd-b8b6-7225a7352259 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received unexpected event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:08:56 np0005473739 nova_compute[259550]: 2025-10-07 14:08:56.936 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846121.9351785, 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:56 np0005473739 nova_compute[259550]: 2025-10-07 14:08:56.937 2 INFO nova.compute.manager [-] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:08:56 np0005473739 nova_compute[259550]: 2025-10-07 14:08:56.953 2 DEBUG nova.compute.manager [None req-8ed40e0e-590a-4635-bf91-7c43be7e3fba - - - - - -] [instance: 4e2a0b1a-5345-4e44-9d70-3d5e353a5dd1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:57 np0005473739 nova_compute[259550]: 2025-10-07 14:08:57.128 2 DEBUG oslo_concurrency.lockutils [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:08:57 np0005473739 nova_compute[259550]: 2025-10-07 14:08:57.130 2 DEBUG oslo_concurrency.lockutils [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:08:57 np0005473739 nova_compute[259550]: 2025-10-07 14:08:57.130 2 DEBUG nova.compute.manager [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:57 np0005473739 nova_compute[259550]: 2025-10-07 14:08:57.136 2 DEBUG nova.compute.manager [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  7 10:08:57 np0005473739 nova_compute[259550]: 2025-10-07 14:08:57.137 2 DEBUG nova.objects.instance [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'flavor' on Instance uuid 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:08:57 np0005473739 nova_compute[259550]: 2025-10-07 14:08:57.163 2 DEBUG nova.virt.libvirt.driver [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:08:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:08:57.274 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:08:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1271: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 2.4 MiB/s rd, 34 KiB/s wr, 110 op/s
Oct  7 10:08:58 np0005473739 nova_compute[259550]: 2025-10-07 14:08:58.861 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846123.8593931, 31263139-61ff-4691-a1a9-a8d53fd7b388 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:08:58 np0005473739 nova_compute[259550]: 2025-10-07 14:08:58.861 2 INFO nova.compute.manager [-] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:08:58 np0005473739 nova_compute[259550]: 2025-10-07 14:08:58.898 2 DEBUG nova.compute.manager [None req-42d52f3f-6a37-44e0-8332-1391f52b419e - - - - - -] [instance: 31263139-61ff-4691-a1a9-a8d53fd7b388] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:08:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1272: 305 pgs: 305 active+clean; 134 MiB data, 381 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 15 KiB/s wr, 133 op/s
Oct  7 10:09:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:00.042 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:00.042 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:00.043 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:00 np0005473739 nova_compute[259550]: 2025-10-07 14:09:00.385 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "2f611962-32b5-4b21-b23b-303bbf54564d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:00 np0005473739 nova_compute[259550]: 2025-10-07 14:09:00.386 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:00 np0005473739 nova_compute[259550]: 2025-10-07 14:09:00.413 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:09:00 np0005473739 nova_compute[259550]: 2025-10-07 14:09:00.558 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:00 np0005473739 nova_compute[259550]: 2025-10-07 14:09:00.559 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:00 np0005473739 nova_compute[259550]: 2025-10-07 14:09:00.565 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:09:00 np0005473739 nova_compute[259550]: 2025-10-07 14:09:00.565 2 INFO nova.compute.claims [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:09:00 np0005473739 nova_compute[259550]: 2025-10-07 14:09:00.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:00 np0005473739 nova_compute[259550]: 2025-10-07 14:09:00.698 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.089 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:09:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/158927846' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.180 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.188 2 DEBUG nova.compute.provider_tree [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.202 2 DEBUG nova.scheduler.client.report [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.224 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.224 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.267 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.268 2 DEBUG nova.network.neutron [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.287 2 INFO nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.302 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.524 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.525 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.526 2 INFO nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Creating image(s)
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.543 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1273: 305 pgs: 305 active+clean; 146 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 910 KiB/s wr, 105 op/s
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.562 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.581 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.585 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.614 2 DEBUG nova.network.neutron [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.614 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.652 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.653 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.654 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.654 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.676 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:01 np0005473739 nova_compute[259550]: 2025-10-07 14:09:01.680 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 2f611962-32b5-4b21-b23b-303bbf54564d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:09:02 np0005473739 podman[293846]: 2025-10-07 14:09:02.148477508 +0000 UTC m=+0.133582961 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct  7 10:09:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:02Z|00028|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:45:8d:83 10.100.0.13
Oct  7 10:09:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:02Z|00029|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:45:8d:83 10.100.0.13
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.201 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 2f611962-32b5-4b21-b23b-303bbf54564d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.268 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] resizing rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.467 2 DEBUG nova.objects.instance [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lazy-loading 'migration_context' on Instance uuid 2f611962-32b5-4b21-b23b-303bbf54564d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.519 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.520 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Ensure instance console log exists: /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.520 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.520 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.521 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.522 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.528 2 WARNING nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.535 2 DEBUG nova.virt.libvirt.host [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.536 2 DEBUG nova.virt.libvirt.host [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.544 2 DEBUG nova.virt.libvirt.host [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.544 2 DEBUG nova.virt.libvirt.host [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.545 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.545 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.545 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.545 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.546 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.546 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.546 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.546 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.547 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.547 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.547 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.548 2 DEBUG nova.virt.hardware [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 10:09:02 np0005473739 nova_compute[259550]: 2025-10-07 14:09:02.550 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:09:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294341034' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.040 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.066 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.072 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:09:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1274: 305 pgs: 305 active+clean; 146 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 811 KiB/s wr, 94 op/s
Oct  7 10:09:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/770756175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.680 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.607s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.682 2 DEBUG nova.objects.instance [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2f611962-32b5-4b21-b23b-303bbf54564d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.699 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  <uuid>2f611962-32b5-4b21-b23b-303bbf54564d</uuid>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  <name>instance-00000019</name>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerExternalEventsTest-server-1051337348</nova:name>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:09:02</nova:creationTime>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:        <nova:user uuid="3dcaa1bf2963492faed5df9583344ef6">tempest-ServerExternalEventsTest-2003492389-project-member</nova:user>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:        <nova:project uuid="937a05b30aa24e7ea8e94bd7364ff355">tempest-ServerExternalEventsTest-2003492389</nova:project>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <entry name="serial">2f611962-32b5-4b21-b23b-303bbf54564d</entry>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <entry name="uuid">2f611962-32b5-4b21-b23b-303bbf54564d</entry>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/2f611962-32b5-4b21-b23b-303bbf54564d_disk">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/2f611962-32b5-4b21-b23b-303bbf54564d_disk.config">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/console.log" append="off"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:09:03 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:09:03 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:09:03 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:09:03 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.762 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.763 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.763 2 INFO nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Using config drive#033[00m
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.786 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:03 np0005473739 podman[294006]: 2025-10-07 14:09:03.792950049 +0000 UTC m=+0.052136074 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.931 2 INFO nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Creating config drive at /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config#033[00m
Oct  7 10:09:03 np0005473739 nova_compute[259550]: 2025-10-07 14:09:03.957 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tkmmqlc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:04 np0005473739 nova_compute[259550]: 2025-10-07 14:09:04.116 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8tkmmqlc" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:04 np0005473739 nova_compute[259550]: 2025-10-07 14:09:04.144 2 DEBUG nova.storage.rbd_utils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] rbd image 2f611962-32b5-4b21-b23b-303bbf54564d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:04 np0005473739 nova_compute[259550]: 2025-10-07 14:09:04.148 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config 2f611962-32b5-4b21-b23b-303bbf54564d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:04 np0005473739 nova_compute[259550]: 2025-10-07 14:09:04.336 2 DEBUG oslo_concurrency.processutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config 2f611962-32b5-4b21-b23b-303bbf54564d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:04 np0005473739 nova_compute[259550]: 2025-10-07 14:09:04.338 2 INFO nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Deleting local config drive /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d/disk.config because it was imported into RBD.#033[00m
Oct  7 10:09:04 np0005473739 systemd-machined[214580]: New machine qemu-29-instance-00000019.
Oct  7 10:09:04 np0005473739 systemd[1]: Started Virtual Machine qemu-29-instance-00000019.
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.340 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846145.3395026, 2f611962-32b5-4b21-b23b-303bbf54564d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.341 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.345 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.346 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.351 2 INFO nova.virt.libvirt.driver [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance spawned successfully.#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.352 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.448 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.452 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.515 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.516 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.517 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.517 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.518 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.518 2 DEBUG nova.virt.libvirt.driver [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1275: 305 pgs: 305 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 168 op/s
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.653 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.654 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846145.3436983, 2f611962-32b5-4b21-b23b-303bbf54564d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.654 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] VM Started (Lifecycle Event)#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.697 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.701 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:09:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.817 2 INFO nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Took 4.29 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.818 2 DEBUG nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.851 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:09:05 np0005473739 nova_compute[259550]: 2025-10-07 14:09:05.996 2 INFO nova.compute.manager [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Took 5.46 seconds to build instance.#033[00m
Oct  7 10:09:06 np0005473739 nova_compute[259550]: 2025-10-07 14:09:06.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:06 np0005473739 nova_compute[259550]: 2025-10-07 14:09:06.164 2 DEBUG oslo_concurrency.lockutils [None req-33af04e7-cf0b-49d0-8055-04c5de5ea414 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.778s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:07 np0005473739 nova_compute[259550]: 2025-10-07 14:09:07.208 2 DEBUG nova.virt.libvirt.driver [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:09:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1276: 305 pgs: 305 active+clean; 214 MiB data, 415 MiB used, 60 GiB / 60 GiB avail; 2.1 MiB/s rd, 3.9 MiB/s wr, 159 op/s
Oct  7 10:09:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:07Z|00030|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2d:c7:a1 10.100.0.14
Oct  7 10:09:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:07Z|00031|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2d:c7:a1 10.100.0.14
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.154 2 DEBUG nova.compute.manager [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Received event network-changed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.155 2 DEBUG nova.compute.manager [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Refreshing instance network info cache due to event network-changed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.155 2 DEBUG oslo_concurrency.lockutils [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] Acquiring lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.156 2 DEBUG oslo_concurrency.lockutils [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] Acquired lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.156 2 DEBUG nova.network.neutron [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.408 2 DEBUG nova.network.neutron [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.424 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "2f611962-32b5-4b21-b23b-303bbf54564d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.425 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.425 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "2f611962-32b5-4b21-b23b-303bbf54564d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.425 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.426 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.427 2 INFO nova.compute.manager [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Terminating instance
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.428 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.665 2 DEBUG nova.network.neutron [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.769 2 DEBUG oslo_concurrency.lockutils [None req-a868c127-fbf5-409f-b31f-f69c47c9b879 710f3481ff6440b7b50ab1f3efc7d084 3d771eced5314fe9ace3db8e61070833 - - default default] Releasing lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.771 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquired lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.771 2 DEBUG nova.network.neutron [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:09:08 np0005473739 nova_compute[259550]: 2025-10-07 14:09:08.946 2 DEBUG nova.network.neutron [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.175 2 DEBUG nova.network.neutron [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.189 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Releasing lock "refresh_cache-2f611962-32b5-4b21-b23b-303bbf54564d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.190 2 DEBUG nova.compute.manager [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.260 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.261 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:09 np0005473739 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Deactivated successfully.
Oct  7 10:09:09 np0005473739 systemd[1]: machine-qemu\x2d29\x2dinstance\x2d00000019.scope: Consumed 4.731s CPU time.
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.294 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:09:09 np0005473739 systemd-machined[214580]: Machine qemu-29-instance-00000019 terminated.
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.371 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.373 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.379 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.380 2 INFO nova.compute.claims [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.395 2 DEBUG nova.objects.instance [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lazy-loading 'flavor' on Instance uuid 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.413 2 INFO nova.virt.libvirt.driver [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance destroyed successfully.
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.415 2 DEBUG nova.objects.instance [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lazy-loading 'resources' on Instance uuid 2f611962-32b5-4b21-b23b-303bbf54564d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.422 2 DEBUG oslo_concurrency.lockutils [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.422 2 DEBUG oslo_concurrency.lockutils [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.526 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:09:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1277: 305 pgs: 305 active+clean; 234 MiB data, 433 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.4 MiB/s wr, 258 op/s
Oct  7 10:09:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/279651159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.981 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:09:09 np0005473739 nova_compute[259550]: 2025-10-07 14:09:09.991 2 DEBUG nova.compute.provider_tree [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.013 2 DEBUG nova.scheduler.client.report [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.044 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.045 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.092 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.093 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.118 2 INFO nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.139 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.238 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.239 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.240 2 INFO nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Creating image(s)
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.262 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.286 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.315 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.321 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.353 2 DEBUG nova.policy [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '21b4c507f5c443f4b43306c884b1d67f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.393 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.393 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.394 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.394 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.416 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.420 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.541 2 INFO nova.virt.libvirt.driver [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Deleting instance files /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d_del
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.542 2 INFO nova.virt.libvirt.driver [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Deletion of /var/lib/nova/instances/2f611962-32b5-4b21-b23b-303bbf54564d_del complete
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.599 2 INFO nova.compute.manager [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Took 1.41 seconds to destroy the instance on the hypervisor.
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.600 2 DEBUG oslo.service.loopingcall [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.600 2 DEBUG nova.compute.manager [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.600 2 DEBUG nova.network.neutron [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.708 2 DEBUG nova.network.neutron [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:09:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.831 2 DEBUG nova.network.neutron [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.865 2 DEBUG nova.compute.manager [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.865 2 DEBUG nova.compute.manager [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing instance network info cache due to event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.866 2 DEBUG oslo_concurrency.lockutils [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.889 2 DEBUG nova.network.neutron [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:09:10 np0005473739 nova_compute[259550]: 2025-10-07 14:09:10.943 2 INFO nova.compute.manager [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Took 0.34 seconds to deallocate network for instance.
Oct  7 10:09:11 np0005473739 kernel: tap29476070-b1 (unregistering): left promiscuous mode
Oct  7 10:09:11 np0005473739 NetworkManager[44949]: <info>  [1759846151.0645] device (tap29476070-b1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:09:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:11Z|00165|binding|INFO|Releasing lport 29476070-b1b0-4d1c-a313-d0fbd1793130 from this chassis (sb_readonly=0)
Oct  7 10:09:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:11Z|00166|binding|INFO|Setting lport 29476070-b1b0-4d1c-a313-d0fbd1793130 down in Southbound
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.079 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.659s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:09:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:11Z|00167|binding|INFO|Removing iface tap29476070-b1 ovn-installed in OVS
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.104 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2d:c7:a1 10.100.0.14'], port_security=['fa:16:3e:2d:c7:a1 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '46d99eec-2ec6-4ea6-acb1-c0a694dd2df1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=29476070-b1b0-4d1c-a313-d0fbd1793130) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.106 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 29476070-b1b0-4d1c-a313-d0fbd1793130 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.108 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7a60c3d3-2f08-4f10-83e3-82baccea7818]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.111 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace which is not needed anymore
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.117 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.118 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  7 10:09:11 np0005473739 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000018.scope: Deactivated successfully.
Oct  7 10:09:11 np0005473739 systemd[1]: machine-qemu\x2d28\x2dinstance\x2d00000018.scope: Consumed 13.169s CPU time.
Oct  7 10:09:11 np0005473739 systemd-machined[214580]: Machine qemu-28-instance-00000018 terminated.
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.167 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] resizing rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:09:11 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [NOTICE]   (293718) : haproxy version is 2.8.14-c23fe91
Oct  7 10:09:11 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [NOTICE]   (293718) : path to executable is /usr/sbin/haproxy
Oct  7 10:09:11 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [WARNING]  (293718) : Exiting Master process...
Oct  7 10:09:11 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [ALERT]    (293718) : Current worker (293720) exited with code 143 (Terminated)
Oct  7 10:09:11 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[293714]: [WARNING]  (293718) : All workers exited. Exiting... (0)
Oct  7 10:09:11 np0005473739 systemd[1]: libpod-8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188.scope: Deactivated successfully.
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.281 2 DEBUG oslo_concurrency.processutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:11 np0005473739 podman[294354]: 2025-10-07 14:09:11.2840006 +0000 UTC m=+0.060224710 container died 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.324 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Successfully created port: 1a36fc68-b90f-4e28-a866-7dfb6f218d37 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.332 2 INFO nova.virt.libvirt.driver [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance shutdown successfully after 14 seconds.#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.347 2 INFO nova.virt.libvirt.driver [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance destroyed successfully.#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.347 2 DEBUG nova.objects.instance [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'numa_topology' on Instance uuid 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188-userdata-shm.mount: Deactivated successfully.
Oct  7 10:09:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2be2587149f0b517fdb58df03030dad6ce5b5e92bc9471fce4b0fce2cbca3a69-merged.mount: Deactivated successfully.
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.394 2 DEBUG nova.objects.instance [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'migration_context' on Instance uuid 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:11 np0005473739 podman[294354]: 2025-10-07 14:09:11.405644232 +0000 UTC m=+0.181868332 container cleanup 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:09:11 np0005473739 systemd[1]: libpod-conmon-8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188.scope: Deactivated successfully.
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.432 2 DEBUG nova.compute.manager [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.463 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.464 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Ensure instance console log exists: /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.464 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.464 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.464 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.505 2 DEBUG oslo_concurrency.lockutils [None req-4726a7f7-bfb5-42f4-9efb-fa04432ef23c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 14.375s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:11 np0005473739 podman[294414]: 2025-10-07 14:09:11.539032816 +0000 UTC m=+0.104954586 container remove 8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.546 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7ad46ff0-9ee7-432e-af9d-15e7d79c70f1]: (4, ('Tue Oct  7 02:09:11 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188)\n8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188\nTue Oct  7 02:09:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188)\n8ef892cd7be98340ac3008728aca201da88dd336c7fa10c18d782c693c601188\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.548 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc983e4d-0783-4b0b-a5be-161cc21159f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.549 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:11 np0005473739 kernel: tap9f80456d-d0: left promiscuous mode
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1278: 305 pgs: 305 active+clean; 230 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.1 MiB/s wr, 245 op/s
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[98f64a5a-3372-425f-9820-9683e5575b84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.625 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[82033271-2276-4970-8c72-85e01185106d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.627 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e76810b8-814d-4b8f-9bfc-7de62bb977ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.646 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[86dd1058-5035-435d-9749-e625db0adba3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 671756, 'reachable_time': 34662, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294451, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:11 np0005473739 systemd[1]: run-netns-ovnmeta\x2d9f80456d\x2dd8a6\x2d4e61\x2db6cb\x2db509cd650dbb.mount: Deactivated successfully.
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.652 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:09:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:11.652 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a79218d5-d75e-4a2f-8732-9552faabe7a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1215526621' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.758 2 DEBUG oslo_concurrency.processutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.764 2 DEBUG nova.compute.provider_tree [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.817 2 DEBUG nova.scheduler.client.report [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.852 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.734s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:11 np0005473739 nova_compute[259550]: 2025-10-07 14:09:11.908 2 INFO nova.scheduler.client.report [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Deleted allocations for instance 2f611962-32b5-4b21-b23b-303bbf54564d#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.019 2 DEBUG oslo_concurrency.lockutils [None req-0fd40889-7722-4859-b279-8547de152d42 3dcaa1bf2963492faed5df9583344ef6 937a05b30aa24e7ea8e94bd7364ff355 - - default default] Lock "2f611962-32b5-4b21-b23b-303bbf54564d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.531 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Successfully updated port: 1a36fc68-b90f-4e28-a866-7dfb6f218d37 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.536 2 DEBUG nova.network.neutron [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.567 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.568 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquired lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.568 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.625 2 DEBUG oslo_concurrency.lockutils [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.625 2 DEBUG nova.compute.manager [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.625 2 DEBUG nova.compute.manager [None req-2df2bcf6-906e-4ce2-8973-740618e56219 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] network_info to inject: |[{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.627 2 DEBUG oslo_concurrency.lockutils [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.627 2 DEBUG nova.network.neutron [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.739 2 DEBUG nova.compute.manager [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-changed-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.739 2 DEBUG nova.compute.manager [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Refreshing instance network info cache due to event network-changed-1a36fc68-b90f-4e28-a866-7dfb6f218d37. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.740 2 DEBUG oslo_concurrency.lockutils [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:09:12 np0005473739 nova_compute[259550]: 2025-10-07 14:09:12.865 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:09:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1279: 305 pgs: 305 active+clean; 230 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.3 MiB/s wr, 224 op/s
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.971 2 DEBUG nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-vif-unplugged-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.972 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.972 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.972 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.973 2 DEBUG nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] No waiting events found dispatching network-vif-unplugged-29476070-b1b0-4d1c-a313-d0fbd1793130 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.973 2 WARNING nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received unexpected event network-vif-unplugged-29476070-b1b0-4d1c-a313-d0fbd1793130 for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.973 2 DEBUG nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.973 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.974 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.974 2 DEBUG oslo_concurrency.lockutils [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.974 2 DEBUG nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] No waiting events found dispatching network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:09:13 np0005473739 nova_compute[259550]: 2025-10-07 14:09:13.974 2 WARNING nova.compute.manager [req-8cdc8716-b834-408f-aaef-fd2612fa32fb req-be8a6754-c885-4313-b6e7-d3f8f5bb7c3e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received unexpected event network-vif-plugged-29476070-b1b0-4d1c-a313-d0fbd1793130 for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.379 2 DEBUG nova.network.neutron [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Updating instance_info_cache with network_info: [{"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.493 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Releasing lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.493 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Instance network_info: |[{"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.494 2 DEBUG oslo_concurrency.lockutils [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.494 2 DEBUG nova.network.neutron [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Refreshing network info cache for port 1a36fc68-b90f-4e28-a866-7dfb6f218d37 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.496 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Start _get_guest_xml network_info=[{"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.500 2 WARNING nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.504 2 DEBUG nova.virt.libvirt.host [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.505 2 DEBUG nova.virt.libvirt.host [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.509 2 DEBUG nova.virt.libvirt.host [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.510 2 DEBUG nova.virt.libvirt.host [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.510 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.511 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.511 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.511 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.512 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.512 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.512 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.513 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.513 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.513 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.514 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.514 2 DEBUG nova.virt.hardware [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.518 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.742 2 DEBUG nova.network.neutron [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updated VIF entry in instance network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.743 2 DEBUG nova.network.neutron [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}, {"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.804 2 DEBUG nova.objects.instance [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lazy-loading 'flavor' on Instance uuid 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.828 2 DEBUG oslo_concurrency.lockutils [req-378113ef-5499-4a1a-9e15-c0bffec1ac9f req-01917955-1cc1-46a6-b382-f61c754564cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1831562224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.988 2 DEBUG oslo_concurrency.lockutils [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.988 2 DEBUG oslo_concurrency.lockutils [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:14 np0005473739 nova_compute[259550]: 2025-10-07 14:09:14.994 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.014 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.019 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.051 2 DEBUG nova.compute.manager [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.252 2 INFO nova.compute.manager [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] instance snapshotting#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.253 2 WARNING nova.compute.manager [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Oct  7 10:09:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3576184467' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.464 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.466 2 DEBUG nova.virt.libvirt.vif [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-527928282',display_name='tempest-ImagesOneServerNegativeTestJSON-server-527928282',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-527928282',id=26,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7pnh0023',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:10Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=7ee2904f-492a-4ffe-bdc2-6f4ec3285851,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.466 2 DEBUG nova.network.os_vif_util [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.467 2 DEBUG nova.network.os_vif_util [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.468 2 DEBUG nova.objects.instance [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.478 2 INFO nova.virt.libvirt.driver [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Beginning cold snapshot process#033[00m
Oct  7 10:09:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1280: 305 pgs: 305 active+clean; 246 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.5 MiB/s rd, 7.1 MiB/s wr, 266 op/s
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.641 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  <uuid>7ee2904f-492a-4ffe-bdc2-6f4ec3285851</uuid>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  <name>instance-0000001a</name>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-527928282</nova:name>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:09:14</nova:creationTime>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <nova:user uuid="21b4c507f5c443f4b43306c884b1d67f">tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member</nova:user>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <nova:project uuid="c166ae9e4e0f43d38afaa35966f84b05">tempest-ImagesOneServerNegativeTestJSON-2130756304</nova:project>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <nova:port uuid="1a36fc68-b90f-4e28-a866-7dfb6f218d37">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <entry name="serial">7ee2904f-492a-4ffe-bdc2-6f4ec3285851</entry>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <entry name="uuid">7ee2904f-492a-4ffe-bdc2-6f4ec3285851</entry>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:30:ce:2c"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <target dev="tap1a36fc68-b9"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/console.log" append="off"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:09:15 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:09:15 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:09:15 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:09:15 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.642 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Preparing to wait for external event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.643 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.643 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.643 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.644 2 DEBUG nova.virt.libvirt.vif [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-527928282',display_name='tempest-ImagesOneServerNegativeTestJSON-server-527928282',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-527928282',id=26,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7pnh0023',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',ow
ner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:10Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=7ee2904f-492a-4ffe-bdc2-6f4ec3285851,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.644 2 DEBUG nova.network.os_vif_util [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.645 2 DEBUG nova.network.os_vif_util [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.646 2 DEBUG os_vif [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.647 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.652 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a36fc68-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.652 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a36fc68-b9, col_values=(('external_ids', {'iface-id': '1a36fc68-b90f-4e28-a866-7dfb6f218d37', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:30:ce:2c', 'vm-uuid': '7ee2904f-492a-4ffe-bdc2-6f4ec3285851'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:15 np0005473739 NetworkManager[44949]: <info>  [1759846155.6554] manager: (tap1a36fc68-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/87)
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.666 2 INFO os_vif [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9')#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.750 2 DEBUG nova.network.neutron [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Updated VIF entry in instance network info cache for port 1a36fc68-b90f-4e28-a866-7dfb6f218d37. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:09:15 np0005473739 nova_compute[259550]: 2025-10-07 14:09:15.750 2 DEBUG nova.network.neutron [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Updating instance_info_cache with network_info: [{"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.050 2 DEBUG oslo_concurrency.lockutils [req-87cbcd83-d9b5-417d-8505-7a3dbc5ce9e6 req-bf0310e5-d69c-47e9-a3dc-13e9ecb4cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-7ee2904f-492a-4ffe-bdc2-6f4ec3285851" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.326 2 DEBUG nova.network.neutron [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.336 2 DEBUG nova.virt.libvirt.imagebackend [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.340 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.340 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.340 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No VIF found with MAC fa:16:3e:30:ce:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.341 2 INFO nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Using config drive#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.368 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.554 2 DEBUG nova.storage.rbd_utils [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(9cf4cd9b69b64d4f87be7f48d49ed96d) on rbd image(46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:09:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e153 do_prune osdmap full prune enabled
Oct  7 10:09:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e154 e154: 3 total, 3 up, 3 in
Oct  7 10:09:16 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e154: 3 total, 3 up, 3 in
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.692 2 DEBUG nova.compute.manager [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.693 2 DEBUG nova.compute.manager [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing instance network info cache due to event network-changed-c10a9a85-702e-4bd4-92d9-474eb88ec422. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.693 2 DEBUG oslo_concurrency.lockutils [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.815 2 DEBUG nova.storage.rbd_utils [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning vms/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk@9cf4cd9b69b64d4f87be7f48d49ed96d to images/e0e0ef2a-e43a-42e6-af8b-792832119b67 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:09:16 np0005473739 nova_compute[259550]: 2025-10-07 14:09:16.966 2 DEBUG nova.storage.rbd_utils [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] flattening images/e0e0ef2a-e43a-42e6-af8b-792832119b67 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:09:17 np0005473739 nova_compute[259550]: 2025-10-07 14:09:17.020 2 INFO nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Creating config drive at /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config#033[00m
Oct  7 10:09:17 np0005473739 nova_compute[259550]: 2025-10-07 14:09:17.029 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpxvzsg0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:17 np0005473739 nova_compute[259550]: 2025-10-07 14:09:17.182 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprpxvzsg0" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:17 np0005473739 nova_compute[259550]: 2025-10-07 14:09:17.214 2 DEBUG nova.storage.rbd_utils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:17 np0005473739 nova_compute[259550]: 2025-10-07 14:09:17.219 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1282: 305 pgs: 305 active+clean; 246 MiB data, 441 MiB used, 60 GiB / 60 GiB avail; 2.8 MiB/s rd, 4.7 MiB/s wr, 229 op/s
Oct  7 10:09:17 np0005473739 nova_compute[259550]: 2025-10-07 14:09:17.736 2 DEBUG nova.network.neutron [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:17 np0005473739 nova_compute[259550]: 2025-10-07 14:09:17.825 2 DEBUG oslo_concurrency.lockutils [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:17 np0005473739 nova_compute[259550]: 2025-10-07 14:09:17.826 2 DEBUG nova.compute.manager [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Oct  7 10:09:17 np0005473739 nova_compute[259550]: 2025-10-07 14:09:17.826 2 DEBUG nova.compute.manager [None req-8941c0b8-4717-4979-b2dc-0357f2cb397c 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] network_info to inject: |[{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Oct  7 10:09:17 np0005473739 nova_compute[259550]: 2025-10-07 14:09:17.831 2 DEBUG oslo_concurrency.lockutils [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:17 np0005473739 nova_compute[259550]: 2025-10-07 14:09:17.831 2 DEBUG nova.network.neutron [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Refreshing network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.010 2 DEBUG oslo_concurrency.processutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config 7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.791s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.010 2 INFO nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Deleting local config drive /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851/disk.config because it was imported into RBD.#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.016 2 DEBUG nova.storage.rbd_utils [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] removing snapshot(9cf4cd9b69b64d4f87be7f48d49ed96d) on rbd image(46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:09:18 np0005473739 kernel: tap1a36fc68-b9: entered promiscuous mode
Oct  7 10:09:18 np0005473739 NetworkManager[44949]: <info>  [1759846158.0701] manager: (tap1a36fc68-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/88)
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:18Z|00168|binding|INFO|Claiming lport 1a36fc68-b90f-4e28-a866-7dfb6f218d37 for this chassis.
Oct  7 10:09:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:18Z|00169|binding|INFO|1a36fc68-b90f-4e28-a866-7dfb6f218d37: Claiming fa:16:3e:30:ce:2c 10.100.0.14
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.094 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ce:2c 10.100.0.14'], port_security=['fa:16:3e:30:ce:2c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7ee2904f-492a-4ffe-bdc2-6f4ec3285851', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '2', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1a36fc68-b90f-4e28-a866-7dfb6f218d37) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.096 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1a36fc68-b90f-4e28-a866-7dfb6f218d37 in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d bound to our chassis#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.097 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d#033[00m
Oct  7 10:09:18 np0005473739 systemd-udevd[294713]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:09:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:18Z|00170|binding|INFO|Setting lport 1a36fc68-b90f-4e28-a866-7dfb6f218d37 ovn-installed in OVS
Oct  7 10:09:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:18Z|00171|binding|INFO|Setting lport 1a36fc68-b90f-4e28-a866-7dfb6f218d37 up in Southbound
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1dd006-c447-44a2-a337-5950105e2e9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.110 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a5c95d4-11 in ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.112 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a5c95d4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.112 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61554bf8-7da3-44e4-ac0b-05d2f4d8ebf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.113 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b70849db-0cc0-4d8d-92bb-2f7d9393f062]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:18 np0005473739 NetworkManager[44949]: <info>  [1759846158.1208] device (tap1a36fc68-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:09:18 np0005473739 systemd-machined[214580]: New machine qemu-30-instance-0000001a.
Oct  7 10:09:18 np0005473739 NetworkManager[44949]: <info>  [1759846158.1215] device (tap1a36fc68-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.126 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[dc54b3d8-8549-47a6-b6ba-def5498ada35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 systemd[1]: Started Virtual Machine qemu-30-instance-0000001a.
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.156 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[249f9d67-1bf8-4488-901f-0b87e9129aa5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.186 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[66de09c9-db01-4d38-8356-0dcdf6c24d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.192 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13b06e25-ffb8-4201-babb-6f58f5da6761]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 NetworkManager[44949]: <info>  [1759846158.1931] manager: (tap0a5c95d4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/89)
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.223 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[569e0d33-0d69-4711-b0be-57d3a933cc48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.226 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[477c1bf0-3708-4a6b-885e-fe962d285fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 NetworkManager[44949]: <info>  [1759846158.2550] device (tap0a5c95d4-10): carrier: link connected
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.259 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d10e0403-1bd9-4c94-9fe8-97405729e04f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.279 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dc890a6e-68d8-4295-95d9-70751865352c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674181, 'reachable_time': 39165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294753, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.300 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[daf6e8e4-26f7-42b9-867e-9478680ef080]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:6312'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 674181, 'tstamp': 674181}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 294754, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.329 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4757ef9c-2774-4410-bfe2-c8fa5f950bce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 56], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674181, 'reachable_time': 39165, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 294755, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.362 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6052cfd3-06a4-46c4-bf8f-7d77eeb7ecca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.427 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c10b59ab-3999-48df-b715-32c5af23b7c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.429 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.429 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.429 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a5c95d4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:18 np0005473739 kernel: tap0a5c95d4-10: entered promiscuous mode
Oct  7 10:09:18 np0005473739 NetworkManager[44949]: <info>  [1759846158.4326] manager: (tap0a5c95d4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/90)
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.438 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a5c95d4-10, col_values=(('external_ids', {'iface-id': 'a8291172-baf1-4252-9a0d-af7ef7ffa931'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:18Z|00172|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.464 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.465 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13d0301b-f29c-4d11-8bbf-116a55cb6530]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.466 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.467 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'env', 'PROCESS_TAG=haproxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a5c95d4-1a77-48f5-83c0-afa976b7583d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.716 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.718 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.718 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.719 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.719 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.720 2 INFO nova.compute.manager [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Terminating instance#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.721 2 DEBUG nova.compute.manager [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:09:18 np0005473739 kernel: tapc10a9a85-70 (unregistering): left promiscuous mode
Oct  7 10:09:18 np0005473739 NetworkManager[44949]: <info>  [1759846158.7823] device (tapc10a9a85-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:18Z|00173|binding|INFO|Releasing lport c10a9a85-702e-4bd4-92d9-474eb88ec422 from this chassis (sb_readonly=0)
Oct  7 10:09:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:18Z|00174|binding|INFO|Setting lport c10a9a85-702e-4bd4-92d9-474eb88ec422 down in Southbound
Oct  7 10:09:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:18Z|00175|binding|INFO|Removing iface tapc10a9a85-70 ovn-installed in OVS
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:18 np0005473739 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000017.scope: Deactivated successfully.
Oct  7 10:09:18 np0005473739 systemd[1]: machine-qemu\x2d27\x2dinstance\x2d00000017.scope: Consumed 14.388s CPU time.
Oct  7 10:09:18 np0005473739 systemd-machined[214580]: Machine qemu-27-instance-00000017 terminated.
Oct  7 10:09:18 np0005473739 podman[294834]: 2025-10-07 14:09:18.862496158 +0000 UTC m=+0.069273603 container create 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:09:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e154 do_prune osdmap full prune enabled
Oct  7 10:09:18 np0005473739 systemd[1]: Started libpod-conmon-757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0.scope.
Oct  7 10:09:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e155 e155: 3 total, 3 up, 3 in
Oct  7 10:09:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:18.911 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:45:8d:83 10.100.0.13'], port_security=['fa:16:3e:45:8d:83 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5fc2c826-a57b-4c9a-910a-48b72ec2ab75', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'b03b40fd15b945118fde82b6454dbced', 'neutron:revision_number': '6', 'neutron:security_group_ids': '33a99878-f5f8-4125-a239-2333c49ae6de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=82ef4b2f-7b03-44f3-86f3-9cfeec6eece1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c10a9a85-702e-4bd4-92d9-474eb88ec422) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:09:18 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e155: 3 total, 3 up, 3 in
Oct  7 10:09:18 np0005473739 podman[294834]: 2025-10-07 14:09:18.8210503 +0000 UTC m=+0.027827795 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:09:18 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:09:18 np0005473739 NetworkManager[44949]: <info>  [1759846158.9447] manager: (tapc10a9a85-70): new Tun device (/org/freedesktop/NetworkManager/Devices/91)
Oct  7 10:09:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc2a49701af0e481b1ad6d0cc40f5f1da2f9ada3657154d7d31efc1fe61be3f5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:18 np0005473739 nova_compute[259550]: 2025-10-07 14:09:18.961 2 DEBUG nova.storage.rbd_utils [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(snap) on rbd image(e0e0ef2a-e43a-42e6-af8b-792832119b67) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:09:18 np0005473739 podman[294834]: 2025-10-07 14:09:18.969184239 +0000 UTC m=+0.175961694 container init 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:09:18 np0005473739 podman[294834]: 2025-10-07 14:09:18.97631718 +0000 UTC m=+0.183094605 container start 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.002 2 INFO nova.virt.libvirt.driver [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Instance destroyed successfully.#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.003 2 DEBUG nova.objects.instance [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lazy-loading 'resources' on Instance uuid 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:19 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [NOTICE]   (294879) : New worker (294884) forked
Oct  7 10:09:19 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [NOTICE]   (294879) : Loading success.
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.047 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c10a9a85-702e-4bd4-92d9-474eb88ec422 in datapath 9ef175ad-b29a-40a5-8bff-1ae744434f58 unbound from our chassis#033[00m
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.048 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9ef175ad-b29a-40a5-8bff-1ae744434f58, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.049 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ff32c721-2f79-4c67-b5b7-ac41438827f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.050 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58 namespace which is not needed anymore#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.059 2 DEBUG nova.virt.libvirt.vif [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:08:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-822129567',display_name='tempest-AttachInterfacesUnderV243Test-server-822129567',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-822129567',id=23,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqYH9kZe9rX7hHEHsSyYrS6rRZ+9wOpWI4SrDObDRhVhO+VRnrHxHs1eVVbdpvsvOSO+WGarbwizJejZEygylEjS/5KcFmLUFjCsp72PetsN1n04qwsYDZRBjJQ0LVJNw==',key_name='tempest-keypair-1173887905',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:08:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='b03b40fd15b945118fde82b6454dbced',ramdisk_id='',reservation_id='r-z4cql8qa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-169595365',owner_user_name='tempest-AttachInterfacesUnderV243Test-169595365-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:09:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='157a909ec18a483a901ec32a0a867038',uuid=5fc2c826-a57b-4c9a-910a-48b72ec2ab75,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.060 2 DEBUG nova.network.os_vif_util [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converting VIF {"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.061 2 DEBUG nova.network.os_vif_util [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.061 2 DEBUG os_vif [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.065 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc10a9a85-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.076 2 INFO os_vif [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:45:8d:83,bridge_name='br-int',has_traffic_filtering=True,id=c10a9a85-702e-4bd4-92d9-474eb88ec422,network=Network(9ef175ad-b29a-40a5-8bff-1ae744434f58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc10a9a85-70')#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.104 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846159.104299, 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.105 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] VM Started (Lifecycle Event)#033[00m
Oct  7 10:09:19 np0005473739 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [NOTICE]   (293337) : haproxy version is 2.8.14-c23fe91
Oct  7 10:09:19 np0005473739 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [NOTICE]   (293337) : path to executable is /usr/sbin/haproxy
Oct  7 10:09:19 np0005473739 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [WARNING]  (293337) : Exiting Master process...
Oct  7 10:09:19 np0005473739 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [WARNING]  (293337) : Exiting Master process...
Oct  7 10:09:19 np0005473739 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [ALERT]    (293337) : Current worker (293339) exited with code 143 (Terminated)
Oct  7 10:09:19 np0005473739 neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58[293333]: [WARNING]  (293337) : All workers exited. Exiting... (0)
Oct  7 10:09:19 np0005473739 systemd[1]: libpod-ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77.scope: Deactivated successfully.
Oct  7 10:09:19 np0005473739 podman[294929]: 2025-10-07 14:09:19.205840384 +0000 UTC m=+0.054890958 container died ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:09:19 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77-userdata-shm.mount: Deactivated successfully.
Oct  7 10:09:19 np0005473739 systemd[1]: var-lib-containers-storage-overlay-11d889d5dab38b3fea3ecfaefc7e9381c5ac5e4cbaa2ea3617b44dc00bb8f71d-merged.mount: Deactivated successfully.
Oct  7 10:09:19 np0005473739 podman[294929]: 2025-10-07 14:09:19.303290539 +0000 UTC m=+0.152341103 container cleanup ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.311 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:19 np0005473739 systemd[1]: libpod-conmon-ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77.scope: Deactivated successfully.
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.317 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846159.1043916, 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.318 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.375 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.380 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:09:19 np0005473739 podman[294961]: 2025-10-07 14:09:19.394503687 +0000 UTC m=+0.067267079 container remove ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.401 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0567717d-5e26-41c6-8c0c-49a120b1c88d]: (4, ('Tue Oct  7 02:09:19 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58 (ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77)\necad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77\nTue Oct  7 02:09:19 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58 (ecad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77)\necad2978db1146902872acba3034e6def1f875da7e0f05b84e6600ef40538f77\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.404 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dfca3945-1a5b-4a75-8377-df8f633cb48f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.405 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9ef175ad-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:19 np0005473739 kernel: tap9ef175ad-b0: left promiscuous mode
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.431 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c99be2ad-784c-4eed-b881-fe762da7888b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.447 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.465 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e63d2a4b-4770-4ffc-8e16-bd6f6686a1c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.467 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab214ab1-eb9c-4a85-aa87-d123fb915c4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.482 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[176719dc-9ce0-4d90-b89f-ba3d47c7c9a2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 670821, 'reachable_time': 16180, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 294980, 'error': None, 'target': 'ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.486 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9ef175ad-b29a-40a5-8bff-1ae744434f58 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:09:19 np0005473739 systemd[1]: run-netns-ovnmeta\x2d9ef175ad\x2db29a\x2d40a5\x2d8bff\x2d1ae744434f58.mount: Deactivated successfully.
Oct  7 10:09:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:19.486 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[71d39aa2-9d30-4cef-88a2-38dce4669faf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1284: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 305 MiB data, 477 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.3 MiB/s wr, 163 op/s
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.689 2 INFO nova.virt.libvirt.driver [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Deleting instance files /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75_del#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.690 2 INFO nova.virt.libvirt.driver [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Deletion of /var/lib/nova/instances/5fc2c826-a57b-4c9a-910a-48b72ec2ab75_del complete#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.859 2 INFO nova.compute.manager [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Took 1.14 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.859 2 DEBUG oslo.service.loopingcall [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.860 2 DEBUG nova.compute.manager [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.860 2 DEBUG nova.network.neutron [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:09:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e155 do_prune osdmap full prune enabled
Oct  7 10:09:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e156 e156: 3 total, 3 up, 3 in
Oct  7 10:09:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e156: 3 total, 3 up, 3 in
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.982 2 DEBUG nova.network.neutron [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updated VIF entry in instance network info cache for port c10a9a85-702e-4bd4-92d9-474eb88ec422. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:09:19 np0005473739 nova_compute[259550]: 2025-10-07 14:09:19.982 2 DEBUG nova.network.neutron [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [{"id": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "address": "fa:16:3e:45:8d:83", "network": {"id": "9ef175ad-b29a-40a5-8bff-1ae744434f58", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1491891106-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "b03b40fd15b945118fde82b6454dbced", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc10a9a85-70", "ovs_interfaceid": "c10a9a85-702e-4bd4-92d9-474eb88ec422", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.068 2 DEBUG oslo_concurrency.lockutils [req-74969ab7-9579-4110-b9e3-d6710f5f0c29 req-e9a0d42d-a9e0-4c2d-8b6c-1454aec11a13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5fc2c826-a57b-4c9a-910a-48b72ec2ab75" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.117 2 DEBUG nova.compute.manager [req-bb5ff332-7a83-4a17-8004-7f19bbc90c2f req-ac7960cb-ddc2-4460-a186-f1ee25772f08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.117 2 DEBUG oslo_concurrency.lockutils [req-bb5ff332-7a83-4a17-8004-7f19bbc90c2f req-ac7960cb-ddc2-4460-a186-f1ee25772f08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.118 2 DEBUG oslo_concurrency.lockutils [req-bb5ff332-7a83-4a17-8004-7f19bbc90c2f req-ac7960cb-ddc2-4460-a186-f1ee25772f08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.118 2 DEBUG oslo_concurrency.lockutils [req-bb5ff332-7a83-4a17-8004-7f19bbc90c2f req-ac7960cb-ddc2-4460-a186-f1ee25772f08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.119 2 DEBUG nova.compute.manager [req-bb5ff332-7a83-4a17-8004-7f19bbc90c2f req-ac7960cb-ddc2-4460-a186-f1ee25772f08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Processing event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.120 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.125 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846160.1238267, 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.125 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.128 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.132 2 INFO nova.virt.libvirt.driver [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Instance spawned successfully.#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.132 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.194 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.199 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.200 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.200 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.201 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.201 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.202 2 DEBUG nova.virt.libvirt.driver [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.206 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.332 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.447 2 INFO nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Took 10.21 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.447 2 DEBUG nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.675 2 INFO nova.compute.manager [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Took 11.33 seconds to build instance.#033[00m
Oct  7 10:09:20 np0005473739 nova_compute[259550]: 2025-10-07 14:09:20.763 2 DEBUG oslo_concurrency.lockutils [None req-049d1fe8-69c7-492d-a7c4-32296591e9d2 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.502s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e156 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:21 np0005473739 podman[294983]: 2025-10-07 14:09:21.087045353 +0000 UTC m=+0.070867295 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 10:09:21 np0005473739 podman[294982]: 2025-10-07 14:09:21.114211109 +0000 UTC m=+0.100575669 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:09:21 np0005473739 nova_compute[259550]: 2025-10-07 14:09:21.480 2 DEBUG nova.network.neutron [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:21 np0005473739 nova_compute[259550]: 2025-10-07 14:09:21.502 2 INFO nova.virt.libvirt.driver [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Snapshot image upload complete#033[00m
Oct  7 10:09:21 np0005473739 nova_compute[259550]: 2025-10-07 14:09:21.503 2 INFO nova.compute.manager [None req-be32ed10-b8b4-4b92-a5c7-71136f91a8d4 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Took 6.25 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  7 10:09:21 np0005473739 nova_compute[259550]: 2025-10-07 14:09:21.564 2 INFO nova.compute.manager [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Took 1.70 seconds to deallocate network for instance.#033[00m
Oct  7 10:09:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1286: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 289 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 183 op/s
Oct  7 10:09:21 np0005473739 nova_compute[259550]: 2025-10-07 14:09:21.853 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:21 np0005473739 nova_compute[259550]: 2025-10-07 14:09:21.853 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:21 np0005473739 nova_compute[259550]: 2025-10-07 14:09:21.963 2 DEBUG oslo_concurrency.processutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.382 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.383 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.383 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.383 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.384 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] No waiting events found dispatching network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.384 2 WARNING nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received unexpected event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.384 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-vif-unplugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.384 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.384 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.385 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.385 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] No waiting events found dispatching network-vif-unplugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.385 2 WARNING nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received unexpected event network-vif-unplugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.385 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.385 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.386 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.386 2 DEBUG oslo_concurrency.lockutils [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.386 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] No waiting events found dispatching network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.386 2 WARNING nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received unexpected event network-vif-plugged-c10a9a85-702e-4bd4-92d9-474eb88ec422 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.386 2 DEBUG nova.compute.manager [req-9dbf0a2a-ba7f-4ee1-88ef-d6ee33e50399 req-12595c69-7604-4b68-be38-3e9670a9a91b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Received event network-vif-deleted-c10a9a85-702e-4bd4-92d9-474eb88ec422 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2260568890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.530 2 DEBUG oslo_concurrency.processutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.567s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.535 2 DEBUG nova.compute.provider_tree [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:09:22
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'backups', 'volumes', 'default.rgw.log', 'default.rgw.control', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', '.mgr', 'images', 'cephfs.cephfs.data']
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.787 2 DEBUG nova.scheduler.client.report [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:09:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:09:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e156 do_prune osdmap full prune enabled
Oct  7 10:09:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e157 e157: 3 total, 3 up, 3 in
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:09:22 np0005473739 nova_compute[259550]: 2025-10-07 14:09:22.988 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:09:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e157: 3 total, 3 up, 3 in
Oct  7 10:09:23 np0005473739 nova_compute[259550]: 2025-10-07 14:09:23.256 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:23 np0005473739 nova_compute[259550]: 2025-10-07 14:09:23.337 2 INFO nova.scheduler.client.report [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Deleted allocations for instance 5fc2c826-a57b-4c9a-910a-48b72ec2ab75#033[00m
Oct  7 10:09:23 np0005473739 nova_compute[259550]: 2025-10-07 14:09:23.534 2 DEBUG oslo_concurrency.lockutils [None req-8b8e9505-c2b2-4b91-8559-d12dad117349 157a909ec18a483a901ec32a0a867038 b03b40fd15b945118fde82b6454dbced - - default default] Lock "5fc2c826-a57b-4c9a-910a-48b72ec2ab75" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.816s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1288: 305 pgs: 3 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 300 active+clean; 289 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 183 op/s
Oct  7 10:09:23 np0005473739 nova_compute[259550]: 2025-10-07 14:09:23.987 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.175 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.176 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.176 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.177 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.177 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.179 2 INFO nova.compute.manager [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Terminating instance#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.180 2 DEBUG nova.compute.manager [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.189 2 INFO nova.virt.libvirt.driver [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Instance destroyed successfully.#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.189 2 DEBUG nova.objects.instance [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'resources' on Instance uuid 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.274 2 DEBUG nova.virt.libvirt.vif [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:08:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-771633438',display_name='tempest-ImagesTestJSON-server-771633438',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-771633438',id=24,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:08:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-6zal3iv7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0
',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:09:21Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=46d99eec-2ec6-4ea6-acb1-c0a694dd2df1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.275 2 DEBUG nova.network.os_vif_util [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "29476070-b1b0-4d1c-a313-d0fbd1793130", "address": "fa:16:3e:2d:c7:a1", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap29476070-b1", "ovs_interfaceid": "29476070-b1b0-4d1c-a313-d0fbd1793130", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.275 2 DEBUG nova.network.os_vif_util [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.276 2 DEBUG os_vif [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.278 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29476070-b1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.282 2 INFO os_vif [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2d:c7:a1,bridge_name='br-int',has_traffic_filtering=True,id=29476070-b1b0-4d1c-a313-d0fbd1793130,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap29476070-b1')#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.411 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846149.4102364, 2f611962-32b5-4b21-b23b-303bbf54564d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.412 2 INFO nova.compute.manager [-] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.501 2 DEBUG nova.compute.manager [None req-750bf0e5-3011-4dfc-9764-da75552238f9 - - - - - -] [instance: 2f611962-32b5-4b21-b23b-303bbf54564d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.846 2 INFO nova.virt.libvirt.driver [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Deleting instance files /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_del#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.847 2 INFO nova.virt.libvirt.driver [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Deletion of /var/lib/nova/instances/46d99eec-2ec6-4ea6-acb1-c0a694dd2df1_del complete#033[00m
Oct  7 10:09:24 np0005473739 nova_compute[259550]: 2025-10-07 14:09:24.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:09:25 np0005473739 nova_compute[259550]: 2025-10-07 14:09:25.070 2 INFO nova.compute.manager [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:09:25 np0005473739 nova_compute[259550]: 2025-10-07 14:09:25.071 2 DEBUG oslo.service.loopingcall [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:09:25 np0005473739 nova_compute[259550]: 2025-10-07 14:09:25.071 2 DEBUG nova.compute.manager [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:09:25 np0005473739 nova_compute[259550]: 2025-10-07 14:09:25.072 2 DEBUG nova.network.neutron [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:09:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1289: 305 pgs: 305 active+clean; 144 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 8.2 MiB/s rd, 4.9 MiB/s wr, 300 op/s
Oct  7 10:09:25 np0005473739 nova_compute[259550]: 2025-10-07 14:09:25.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e157 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e157 do_prune osdmap full prune enabled
Oct  7 10:09:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e158 e158: 3 total, 3 up, 3 in
Oct  7 10:09:25 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e158: 3 total, 3 up, 3 in
Oct  7 10:09:25 np0005473739 nova_compute[259550]: 2025-10-07 14:09:25.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:09:25 np0005473739 nova_compute[259550]: 2025-10-07 14:09:25.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:09:26 np0005473739 nova_compute[259550]: 2025-10-07 14:09:26.091 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:26 np0005473739 nova_compute[259550]: 2025-10-07 14:09:26.092 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:26 np0005473739 nova_compute[259550]: 2025-10-07 14:09:26.092 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:26 np0005473739 nova_compute[259550]: 2025-10-07 14:09:26.092 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:09:26 np0005473739 nova_compute[259550]: 2025-10-07 14:09:26.093 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:26 np0005473739 nova_compute[259550]: 2025-10-07 14:09:26.330 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846151.318896, 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:26 np0005473739 nova_compute[259550]: 2025-10-07 14:09:26.331 2 INFO nova.compute.manager [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:09:26 np0005473739 nova_compute[259550]: 2025-10-07 14:09:26.387 2 DEBUG nova.compute.manager [None req-6aeab903-6f3f-435a-aa38-e010bd5a8358 - - - - - -] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/917930297' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:26 np0005473739 nova_compute[259550]: 2025-10-07 14:09:26.619 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.023 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.024 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000001a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.196 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.197 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4306MB free_disk=59.930442810058594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.198 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.198 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.244 2 DEBUG nova.network.neutron [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.353 2 DEBUG nova.compute.manager [req-4febf6f9-3fe1-4562-b78d-eee4f843a5bb req-fe904663-4c33-47e9-b969-cb4a986e568d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Received event network-vif-deleted-29476070-b1b0-4d1c-a313-d0fbd1793130 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.354 2 INFO nova.compute.manager [req-4febf6f9-3fe1-4562-b78d-eee4f843a5bb req-fe904663-4c33-47e9-b969-cb4a986e568d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Neutron deleted interface 29476070-b1b0-4d1c-a313-d0fbd1793130; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.354 2 DEBUG nova.network.neutron [req-4febf6f9-3fe1-4562-b78d-eee4f843a5bb req-fe904663-4c33-47e9-b969-cb4a986e568d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.459 2 INFO nova.compute.manager [-] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Took 2.39 seconds to deallocate network for instance.#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.464 2 DEBUG nova.compute.manager [req-4febf6f9-3fe1-4562-b78d-eee4f843a5bb req-fe904663-4c33-47e9-b969-cb4a986e568d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1] Detach interface failed, port_id=29476070-b1b0-4d1c-a313-d0fbd1793130, reason: Instance 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.509 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.510 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:09:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1291: 305 pgs: 305 active+clean; 144 MiB data, 400 MiB used, 60 GiB / 60 GiB avail; 4.2 MiB/s rd, 1.3 MiB/s wr, 231 op/s
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.609 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.611 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.612 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.612 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.650 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.651 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.689 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.713 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:27 np0005473739 nova_compute[259550]: 2025-10-07 14:09:27.772 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.020 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.021 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.050 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:09:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822598896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.135 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.151 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.157 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.179 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.210 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.210 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.211 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.306 2 DEBUG oslo_concurrency.processutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1863809496' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.771 2 DEBUG oslo_concurrency.processutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.778 2 DEBUG nova.compute.provider_tree [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:09:28 np0005473739 nova_compute[259550]: 2025-10-07 14:09:28.911 2 DEBUG nova.scheduler.client.report [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:09:29 np0005473739 nova_compute[259550]: 2025-10-07 14:09:29.070 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:29 np0005473739 nova_compute[259550]: 2025-10-07 14:09:29.073 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 1.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:29 np0005473739 nova_compute[259550]: 2025-10-07 14:09:29.090 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:09:29 np0005473739 nova_compute[259550]: 2025-10-07 14:09:29.091 2 INFO nova.compute.claims [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:09:29 np0005473739 nova_compute[259550]: 2025-10-07 14:09:29.211 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:09:29 np0005473739 nova_compute[259550]: 2025-10-07 14:09:29.248 2 INFO nova.scheduler.client.report [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Deleted allocations for instance 46d99eec-2ec6-4ea6-acb1-c0a694dd2df1#033[00m
Oct  7 10:09:29 np0005473739 nova_compute[259550]: 2025-10-07 14:09:29.281 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:09:29 np0005473739 nova_compute[259550]: 2025-10-07 14:09:29.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1292: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.9 MiB/s rd, 6.0 KiB/s wr, 219 op/s
Oct  7 10:09:29 np0005473739 nova_compute[259550]: 2025-10-07 14:09:29.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:09:29 np0005473739 nova_compute[259550]: 2025-10-07 14:09:29.985 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:09:30 np0005473739 nova_compute[259550]: 2025-10-07 14:09:30.090 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:09:30 np0005473739 nova_compute[259550]: 2025-10-07 14:09:30.090 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:09:30 np0005473739 nova_compute[259550]: 2025-10-07 14:09:30.194 2 DEBUG oslo_concurrency.lockutils [None req-580ce9f2-ce81-45c2-a3ca-38dd2386657c a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "46d99eec-2ec6-4ea6-acb1-c0a694dd2df1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.018s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:30 np0005473739 nova_compute[259550]: 2025-10-07 14:09:30.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e158 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.017 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.441 2 DEBUG nova.compute.manager [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/627574911' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.507 2 INFO nova.compute.manager [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] instance snapshotting#033[00m
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.519 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.529 2 DEBUG nova.compute.provider_tree [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:09:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1293: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 2.7 MiB/s rd, 5.6 KiB/s wr, 204 op/s
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.769 2 INFO nova.virt.libvirt.driver [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Beginning live snapshot process#033[00m
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.836 2 DEBUG nova.scheduler.client.report [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.953 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.954 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.956 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 3.822s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.965 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:09:31 np0005473739 nova_compute[259550]: 2025-10-07 14:09:31.966 2 INFO nova.compute.claims [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.094 2 DEBUG nova.virt.libvirt.imagebackend [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.214 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.215 2 DEBUG nova.network.neutron [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.274 2 INFO nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.308 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.337 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] creating snapshot(7c558376131f4286b2374ce28a43ede9) on rbd image(7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.391 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.431 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.433 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.434 2 INFO nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Creating image(s)#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.461 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.491 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.525 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.529 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.599 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.601 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.602 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.602 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.626 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.631 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:09:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3170871631' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:09:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:09:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3170871631' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.786 2 DEBUG nova.network.neutron [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.788 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:09:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e158 do_prune osdmap full prune enabled
Oct  7 10:09:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e159 e159: 3 total, 3 up, 3 in
Oct  7 10:09:32 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e159: 3 total, 3 up, 3 in
Oct  7 10:09:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1354197765' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:32 np0005473739 podman[295317]: 2025-10-07 14:09:32.941203903 +0000 UTC m=+0.114891271 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.955 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.565s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.965 2 DEBUG nova.compute.provider_tree [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:09:32 np0005473739 nova_compute[259550]: 2025-10-07 14:09:32.973 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.343s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.010 2 DEBUG nova.scheduler.client.report [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.054 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.055 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.062 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] cloning vms/7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk@7c558376131f4286b2374ce28a43ede9 to images/b8bca106-0814-4401-a772-c94d18dd013b clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:09:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:33Z|00032|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:30:ce:2c 10.100.0.14
Oct  7 10:09:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:33Z|00033|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:30:ce:2c 10.100.0.14
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.102 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] resizing rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.162 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.162 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.185 2 INFO nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.232 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.242 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] flattening images/b8bca106-0814-4401-a772-c94d18dd013b flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.297 2 DEBUG nova.objects.instance [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'migration_context' on Instance uuid b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.314 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.314 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Ensure instance console log exists: /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.315 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.315 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.315 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.317 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.323 2 WARNING nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.342 2 DEBUG nova.virt.libvirt.host [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.344 2 DEBUG nova.virt.libvirt.host [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.351 2 DEBUG nova.virt.libvirt.host [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.352 2 DEBUG nova.virt.libvirt.host [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.352 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.352 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.353 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.353 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.353 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.354 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.354 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.354 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.354 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.354 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.355 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.355 2 DEBUG nova.virt.hardware [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.358 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:33Z|00176|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.409 2 DEBUG nova.policy [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.415 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.417 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.417 2 INFO nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Creating image(s)#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.454 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.507 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.537 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.546 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:33Z|00177|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct  7 10:09:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1295: 305 pgs: 305 active+clean; 88 MiB data, 348 MiB used, 60 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 KiB/s wr, 33 op/s
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.627 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.627 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.628 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.629 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.653 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.656 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f707575c-3219-44e4-9655-ccc194e7385d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4253893063' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:33 np0005473739 nova_compute[259550]: 2025-10-07 14:09:33.953 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:34 np0005473739 podman[295716]: 2025-10-07 14:09:34.102134071 +0000 UTC m=+0.089962905 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.157 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.162 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:09:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8d8f6b55-9ccc-40a7-a7cd-343cfb324b8b does not exist
Oct  7 10:09:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1992dcd0-1526-47e6-92ad-a05567ec8796 does not exist
Oct  7 10:09:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 237c41b0-e65f-486d-9b22-187ce9c355d0 does not exist
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.194 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.195 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846158.959316, 5fc2c826-a57b-4c9a-910a-48b72ec2ab75 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.195 2 INFO nova.compute.manager [-] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.225 2 DEBUG nova.compute.manager [None req-4d914caf-5b44-4783-82a4-d16fc69e0019 - - - - - -] [instance: 5fc2c826-a57b-4c9a-910a-48b72ec2ab75] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.479 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] removing snapshot(7c558376131f4286b2374ce28a43ede9) on rbd image(7ee2904f-492a-4ffe-bdc2-6f4ec3285851_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.536 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f707575c-3219-44e4-9655-ccc194e7385d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.880s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/235483505' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.654 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] resizing rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.700 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.702 2 DEBUG nova.objects.instance [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.735 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  <uuid>b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca</uuid>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  <name>instance-0000001b</name>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-1302379366</nova:name>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:09:33</nova:creationTime>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:        <nova:user uuid="61181e933b9841959e5a4a707bc48adf">tempest-ServersAdminNegativeTestJSON-484006704-project-member</nova:user>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:        <nova:project uuid="5fb6da28aeda44c5b898c384c6853b38">tempest-ServersAdminNegativeTestJSON-484006704</nova:project>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <entry name="serial">b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca</entry>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <entry name="uuid">b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca</entry>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/console.log" append="off"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:09:34 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:09:34 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:09:34 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:09:34 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.843 2 DEBUG nova.objects.instance [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid f707575c-3219-44e4-9655-ccc194e7385d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.887 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Successfully created port: 63d67e4f-8bad-4402-b835-b12010348a29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.891 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.891 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.892 2 INFO nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Using config drive#033[00m
Oct  7 10:09:34 np0005473739 podman[296006]: 2025-10-07 14:09:34.899108461 +0000 UTC m=+0.054339382 container create 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.919 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.928 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.928 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Ensure instance console log exists: /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.928 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.929 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:34 np0005473739 nova_compute[259550]: 2025-10-07 14:09:34.929 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:34 np0005473739 systemd[1]: Started libpod-conmon-7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9.scope.
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:09:34 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:09:34 np0005473739 podman[296006]: 2025-10-07 14:09:34.869860149 +0000 UTC m=+0.025091100 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:09:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:09:35 np0005473739 podman[296006]: 2025-10-07 14:09:35.018603216 +0000 UTC m=+0.173834167 container init 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  7 10:09:35 np0005473739 podman[296006]: 2025-10-07 14:09:35.028015886 +0000 UTC m=+0.183246807 container start 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:09:35 np0005473739 podman[296006]: 2025-10-07 14:09:35.032961999 +0000 UTC m=+0.188192930 container attach 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 10:09:35 np0005473739 systemd[1]: libpod-7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9.scope: Deactivated successfully.
Oct  7 10:09:35 np0005473739 great_diffie[296040]: 167 167
Oct  7 10:09:35 np0005473739 conmon[296040]: conmon 7edb8644f1a03e83ae51 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9.scope/container/memory.events
Oct  7 10:09:35 np0005473739 podman[296006]: 2025-10-07 14:09:35.038579269 +0000 UTC m=+0.193810190 container died 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:09:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1ab84501ef79dcb94318cfde252df2251fd49742ffaf2cab7a512129d2d7dc86-merged.mount: Deactivated successfully.
Oct  7 10:09:35 np0005473739 podman[296006]: 2025-10-07 14:09:35.112608418 +0000 UTC m=+0.267839329 container remove 7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:09:35 np0005473739 nova_compute[259550]: 2025-10-07 14:09:35.132 2 INFO nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Creating config drive at /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config#033[00m
Oct  7 10:09:35 np0005473739 nova_compute[259550]: 2025-10-07 14:09:35.138 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapqavg5y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:35 np0005473739 systemd[1]: libpod-conmon-7edb8644f1a03e83ae51c31ed688e90b3976fe8ead67ff4fadd29c32fe84c4d9.scope: Deactivated successfully.
Oct  7 10:09:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e159 do_prune osdmap full prune enabled
Oct  7 10:09:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e160 e160: 3 total, 3 up, 3 in
Oct  7 10:09:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e160: 3 total, 3 up, 3 in
Oct  7 10:09:35 np0005473739 nova_compute[259550]: 2025-10-07 14:09:35.223 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] creating snapshot(snap) on rbd image(b8bca106-0814-4401-a772-c94d18dd013b) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:09:35 np0005473739 nova_compute[259550]: 2025-10-07 14:09:35.291 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapqavg5y" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:35 np0005473739 nova_compute[259550]: 2025-10-07 14:09:35.318 2 DEBUG nova.storage.rbd_utils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:35 np0005473739 nova_compute[259550]: 2025-10-07 14:09:35.322 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:35 np0005473739 podman[296083]: 2025-10-07 14:09:35.326083923 +0000 UTC m=+0.050428639 container create 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:09:35 np0005473739 systemd[1]: Started libpod-conmon-6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949.scope.
Oct  7 10:09:35 np0005473739 podman[296083]: 2025-10-07 14:09:35.305177944 +0000 UTC m=+0.029522680 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:09:35 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 10:09:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:09:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:35 np0005473739 podman[296083]: 2025-10-07 14:09:35.444361544 +0000 UTC m=+0.168706260 container init 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:09:35 np0005473739 podman[296083]: 2025-10-07 14:09:35.454499765 +0000 UTC m=+0.178844481 container start 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:09:35 np0005473739 podman[296083]: 2025-10-07 14:09:35.459799507 +0000 UTC m=+0.184144233 container attach 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 10:09:35 np0005473739 nova_compute[259550]: 2025-10-07 14:09:35.519 2 DEBUG oslo_concurrency.processutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:35 np0005473739 nova_compute[259550]: 2025-10-07 14:09:35.520 2 INFO nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Deleting local config drive /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca/disk.config because it was imported into RBD.#033[00m
Oct  7 10:09:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1297: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 240 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 280 op/s
Oct  7 10:09:35 np0005473739 nova_compute[259550]: 2025-10-07 14:09:35.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:35 np0005473739 systemd-machined[214580]: New machine qemu-31-instance-0000001b.
Oct  7 10:09:35 np0005473739 systemd[1]: Started Virtual Machine qemu-31-instance-0000001b.
Oct  7 10:09:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e160 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.023 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Successfully updated port: 63d67e4f-8bad-4402-b835-b12010348a29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.058 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.059 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.059 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.132 2 DEBUG nova.compute.manager [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-changed-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.133 2 DEBUG nova.compute.manager [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Refreshing instance network info cache due to event network-changed-63d67e4f-8bad-4402-b835-b12010348a29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.133 2 DEBUG oslo_concurrency.lockutils [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:09:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e160 do_prune osdmap full prune enabled
Oct  7 10:09:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e161 e161: 3 total, 3 up, 3 in
Oct  7 10:09:36 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e161: 3 total, 3 up, 3 in
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.294 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image b8bca106-0814-4401-a772-c94d18dd013b could not be found.
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID b8bca106-0814-4401-a772-c94d18dd013b
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver 
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver 
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image b8bca106-0814-4401-a772-c94d18dd013b could not be found.
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.444 2 ERROR nova.virt.libvirt.driver #033[00m
Oct  7 10:09:36 np0005473739 nova_compute[259550]: 2025-10-07 14:09:36.522 2 DEBUG nova.storage.rbd_utils [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] removing snapshot(snap) on rbd image(b8bca106-0814-4401-a772-c94d18dd013b) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:09:36 np0005473739 lucid_herschel[296119]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:09:36 np0005473739 lucid_herschel[296119]: --> relative data size: 1.0
Oct  7 10:09:36 np0005473739 lucid_herschel[296119]: --> All data devices are unavailable
Oct  7 10:09:36 np0005473739 systemd[1]: libpod-6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949.scope: Deactivated successfully.
Oct  7 10:09:36 np0005473739 systemd[1]: libpod-6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949.scope: Consumed 1.170s CPU time.
Oct  7 10:09:36 np0005473739 conmon[296119]: conmon 6fdcd26d13ccf20c58f9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949.scope/container/memory.events
Oct  7 10:09:36 np0005473739 podman[296083]: 2025-10-07 14:09:36.690314915 +0000 UTC m=+1.414659621 container died 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 10:09:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5ad38c446b03bee7e834c53a7ade6376b48e164f34f30f5f58c3b9ccfb6ac37f-merged.mount: Deactivated successfully.
Oct  7 10:09:36 np0005473739 podman[296083]: 2025-10-07 14:09:36.755087415 +0000 UTC m=+1.479432131 container remove 6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=lucid_herschel, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:09:36 np0005473739 systemd[1]: libpod-conmon-6fdcd26d13ccf20c58f9f1bbf7f6269095056b140f6acbcb605a6a1eafa5a949.scope: Deactivated successfully.
Oct  7 10:09:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e161 do_prune osdmap full prune enabled
Oct  7 10:09:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e162 e162: 3 total, 3 up, 3 in
Oct  7 10:09:37 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e162: 3 total, 3 up, 3 in
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.406 2 DEBUG nova.network.neutron [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Updating instance_info_cache with network_info: [{"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.488 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846177.4881973, b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.490 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.494 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.495 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.495 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.495 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Instance network_info: |[{"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.496 2 DEBUG oslo_concurrency.lockutils [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.496 2 DEBUG nova.network.neutron [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Refreshing network info cache for port 63d67e4f-8bad-4402-b835-b12010348a29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.499 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Start _get_guest_xml network_info=[{"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.507 2 INFO nova.virt.libvirt.driver [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance spawned successfully.#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.508 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.512 2 WARNING nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.515 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.520 2 DEBUG nova.virt.libvirt.host [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.520 2 DEBUG nova.virt.libvirt.host [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:09:37 np0005473739 podman[296411]: 2025-10-07 14:09:37.538322068 +0000 UTC m=+0.069259801 container create 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 10:09:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1300: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 240 MiB data, 429 MiB used, 60 GiB / 60 GiB avail; 10 MiB/s rd, 20 MiB/s wr, 417 op/s
Oct  7 10:09:37 np0005473739 systemd[1]: Started libpod-conmon-800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7.scope.
Oct  7 10:09:37 np0005473739 podman[296411]: 2025-10-07 14:09:37.505136642 +0000 UTC m=+0.036074395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:09:37 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:09:37 np0005473739 podman[296411]: 2025-10-07 14:09:37.648107703 +0000 UTC m=+0.179045446 container init 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:09:37 np0005473739 podman[296411]: 2025-10-07 14:09:37.65920606 +0000 UTC m=+0.190143783 container start 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:09:37 np0005473739 friendly_morse[296429]: 167 167
Oct  7 10:09:37 np0005473739 systemd[1]: libpod-800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7.scope: Deactivated successfully.
Oct  7 10:09:37 np0005473739 conmon[296429]: conmon 800c2d36c7cdc4df3330 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7.scope/container/memory.events
Oct  7 10:09:37 np0005473739 podman[296411]: 2025-10-07 14:09:37.664224703 +0000 UTC m=+0.195162426 container attach 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 10:09:37 np0005473739 podman[296411]: 2025-10-07 14:09:37.673526582 +0000 UTC m=+0.204464305 container died 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:09:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f5a6dd4675455ae89680f8f4ceb13283ce24670b3a7008b60008c2589bdb1350-merged.mount: Deactivated successfully.
Oct  7 10:09:37 np0005473739 podman[296411]: 2025-10-07 14:09:37.72805739 +0000 UTC m=+0.258995103 container remove 800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_morse, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:09:37 np0005473739 systemd[1]: libpod-conmon-800c2d36c7cdc4df3330da7d15715a2d5328a4e0ddb21c282c77fe2ff17112c7.scope: Deactivated successfully.
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.826 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.827 2 DEBUG nova.virt.libvirt.host [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.828 2 DEBUG nova.virt.libvirt.host [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.828 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.828 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.829 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.829 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.829 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.830 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.830 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.830 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.831 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.831 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.831 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.831 2 DEBUG nova.virt.hardware [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.834 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.878 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.880 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.881 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.881 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.882 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:37 np0005473739 nova_compute[259550]: 2025-10-07 14:09:37.882 2 DEBUG nova.virt.libvirt.driver [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:37 np0005473739 podman[296455]: 2025-10-07 14:09:37.940212579 +0000 UTC m=+0.052947455 container create f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 10:09:37 np0005473739 systemd[1]: Started libpod-conmon-f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226.scope.
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.008 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.009 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846177.4894216, b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.010 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] VM Started (Lifecycle Event)#033[00m
Oct  7 10:09:38 np0005473739 podman[296455]: 2025-10-07 14:09:37.914236736 +0000 UTC m=+0.026971632 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:09:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:09:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3591a41247c2c019a0ef0e7f50799233ec29b7764d631a76bec8f01b71e125/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3591a41247c2c019a0ef0e7f50799233ec29b7764d631a76bec8f01b71e125/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3591a41247c2c019a0ef0e7f50799233ec29b7764d631a76bec8f01b71e125/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cb3591a41247c2c019a0ef0e7f50799233ec29b7764d631a76bec8f01b71e125/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:38 np0005473739 podman[296455]: 2025-10-07 14:09:38.041958279 +0000 UTC m=+0.154693175 container init f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:09:38 np0005473739 podman[296455]: 2025-10-07 14:09:38.050290742 +0000 UTC m=+0.163025618 container start f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:09:38 np0005473739 podman[296455]: 2025-10-07 14:09:38.055558173 +0000 UTC m=+0.168293049 container attach f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.068 2 WARNING nova.compute.manager [None req-cb61af02-dcda-44da-93b0-bccd4a184398 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Image not found during snapshot: nova.exception.ImageNotFound: Image b8bca106-0814-4401-a772-c94d18dd013b could not be found.#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.094 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.101 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.259 2 INFO nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Took 5.83 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.261 2 DEBUG nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.320 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:09:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1075208629' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.349 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.380 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.392 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.449 2 INFO nova.compute.manager [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Took 10.70 seconds to build instance.#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.555 2 DEBUG oslo_concurrency.lockutils [None req-53b6bdd5-de11-49e8-a83c-23ac50dca6b2 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1257779255' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.857 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.862 2 DEBUG nova.virt.libvirt.vif [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-725999350',display_name='tempest-ImagesTestJSON-server-725999350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-725999350',id=28,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-1i0xrx4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,t
ask_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:33Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=f707575c-3219-44e4-9655-ccc194e7385d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.863 2 DEBUG nova.network.os_vif_util [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.864 2 DEBUG nova.network.os_vif_util [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.866 2 DEBUG nova.objects.instance [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid f707575c-3219-44e4-9655-ccc194e7385d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.900 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  <uuid>f707575c-3219-44e4-9655-ccc194e7385d</uuid>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  <name>instance-0000001c</name>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <nova:name>tempest-ImagesTestJSON-server-725999350</nova:name>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:09:37</nova:creationTime>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <nova:port uuid="63d67e4f-8bad-4402-b835-b12010348a29">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <entry name="serial">f707575c-3219-44e4-9655-ccc194e7385d</entry>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <entry name="uuid">f707575c-3219-44e4-9655-ccc194e7385d</entry>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f707575c-3219-44e4-9655-ccc194e7385d_disk">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f707575c-3219-44e4-9655-ccc194e7385d_disk.config">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:20:81:bb"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <target dev="tap63d67e4f-8b"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/console.log" append="off"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:09:38 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:09:38 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:09:38 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:09:38 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.906 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Preparing to wait for external event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.907 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.908 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.908 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.909 2 DEBUG nova.virt.libvirt.vif [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-725999350',display_name='tempest-ImagesTestJSON-server-725999350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-725999350',id=28,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-1i0xrx4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags
=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:33Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=f707575c-3219-44e4-9655-ccc194e7385d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.910 2 DEBUG nova.network.os_vif_util [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.911 2 DEBUG nova.network.os_vif_util [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.911 2 DEBUG os_vif [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.912 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.913 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.914 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.921 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.922 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.922 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.923 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.923 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.924 2 INFO nova.compute.manager [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Terminating instance#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.925 2 DEBUG nova.compute.manager [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]: {
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:    "0": [
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:        {
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "devices": [
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "/dev/loop3"
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            ],
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_name": "ceph_lv0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_size": "21470642176",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "name": "ceph_lv0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "tags": {
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.cluster_name": "ceph",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.crush_device_class": "",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.encrypted": "0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.osd_id": "0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.type": "block",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.vdo": "0"
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            },
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "type": "block",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "vg_name": "ceph_vg0"
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:        }
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:    ],
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:    "1": [
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:        {
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "devices": [
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "/dev/loop4"
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            ],
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_name": "ceph_lv1",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_size": "21470642176",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "name": "ceph_lv1",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "tags": {
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.cluster_name": "ceph",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.crush_device_class": "",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.encrypted": "0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.osd_id": "1",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.type": "block",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.vdo": "0"
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            },
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "type": "block",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "vg_name": "ceph_vg1"
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:        }
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:    ],
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:    "2": [
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:        {
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "devices": [
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "/dev/loop5"
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            ],
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_name": "ceph_lv2",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_size": "21470642176",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "name": "ceph_lv2",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "tags": {
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.cluster_name": "ceph",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.crush_device_class": "",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.encrypted": "0",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.osd_id": "2",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.type": "block",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:                "ceph.vdo": "0"
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            },
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "type": "block",
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:            "vg_name": "ceph_vg2"
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:        }
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]:    ]
Oct  7 10:09:38 np0005473739 eager_elgamal[296485]: }
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap63d67e4f-8b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.936 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap63d67e4f-8b, col_values=(('external_ids', {'iface-id': '63d67e4f-8bad-4402-b835-b12010348a29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:20:81:bb', 'vm-uuid': 'f707575c-3219-44e4-9655-ccc194e7385d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:38 np0005473739 NetworkManager[44949]: <info>  [1759846178.9397] manager: (tap63d67e4f-8b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/92)
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:38 np0005473739 nova_compute[259550]: 2025-10-07 14:09:38.953 2 INFO os_vif [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b')#033[00m
Oct  7 10:09:38 np0005473739 systemd[1]: libpod-f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226.scope: Deactivated successfully.
Oct  7 10:09:38 np0005473739 podman[296455]: 2025-10-07 14:09:38.982668191 +0000 UTC m=+1.095403067 container died f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.079 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.080 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.080 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:20:81:bb, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.081 2 INFO nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Using config drive#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.104 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cb3591a41247c2c019a0ef0e7f50799233ec29b7764d631a76bec8f01b71e125-merged.mount: Deactivated successfully.
Oct  7 10:09:39 np0005473739 kernel: tap1a36fc68-b9 (unregistering): left promiscuous mode
Oct  7 10:09:39 np0005473739 NetworkManager[44949]: <info>  [1759846179.1768] device (tap1a36fc68-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.175 2 DEBUG nova.network.neutron [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Updated VIF entry in instance network info cache for port 63d67e4f-8bad-4402-b835-b12010348a29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.175 2 DEBUG nova.network.neutron [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Updating instance_info_cache with network_info: [{"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:39Z|00178|binding|INFO|Releasing lport 1a36fc68-b90f-4e28-a866-7dfb6f218d37 from this chassis (sb_readonly=0)
Oct  7 10:09:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:39Z|00179|binding|INFO|Setting lport 1a36fc68-b90f-4e28-a866-7dfb6f218d37 down in Southbound
Oct  7 10:09:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:39Z|00180|binding|INFO|Removing iface tap1a36fc68-b9 ovn-installed in OVS
Oct  7 10:09:39 np0005473739 podman[296455]: 2025-10-07 14:09:39.203958175 +0000 UTC m=+1.316693051 container remove f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:39 np0005473739 systemd[1]: libpod-conmon-f3202165179d021205e574f37c0ceecde9770aae064512b7b62d19fdf2eed226.scope: Deactivated successfully.
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:39 np0005473739 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Deactivated successfully.
Oct  7 10:09:39 np0005473739 systemd[1]: machine-qemu\x2d30\x2dinstance\x2d0000001a.scope: Consumed 13.783s CPU time.
Oct  7 10:09:39 np0005473739 systemd-machined[214580]: Machine qemu-30-instance-0000001a terminated.
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.333 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:30:ce:2c 10.100.0.14'], port_security=['fa:16:3e:30:ce:2c 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '7ee2904f-492a-4ffe-bdc2-6f4ec3285851', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '4', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1a36fc68-b90f-4e28-a866-7dfb6f218d37) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.336 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1a36fc68-b90f-4e28-a866-7dfb6f218d37 in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d unbound from our chassis#033[00m
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.337 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.341 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3d12efc0-c213-4b23-8b62-dfffa5691462]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.342 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace which is not needed anymore#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.353 2 DEBUG oslo_concurrency.lockutils [req-602cba1f-fa1b-44d6-ab4a-30305c5ffcde req-78c0d575-f4b8-4292-b523-4f1860945ad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f707575c-3219-44e4-9655-ccc194e7385d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.408 2 INFO nova.virt.libvirt.driver [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Instance destroyed successfully.#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.409 2 DEBUG nova.objects.instance [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'resources' on Instance uuid 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.460 2 DEBUG nova.virt.libvirt.vif [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:09:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-527928282',display_name='tempest-ImagesOneServerNegativeTestJSON-server-527928282',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-527928282',id=26,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:09:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7pnh0023',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:09:38Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=7ee2904f-492a-4ffe-bdc2-6f4ec3285851,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.463 2 DEBUG nova.network.os_vif_util [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "address": "fa:16:3e:30:ce:2c", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a36fc68-b9", "ovs_interfaceid": "1a36fc68-b90f-4e28-a866-7dfb6f218d37", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.466 2 DEBUG nova.network.os_vif_util [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.467 2 DEBUG os_vif [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.470 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a36fc68-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.484 2 INFO os_vif [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:30:ce:2c,bridge_name='br-int',has_traffic_filtering=True,id=1a36fc68-b90f-4e28-a866-7dfb6f218d37,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a36fc68-b9')#033[00m
Oct  7 10:09:39 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [NOTICE]   (294879) : haproxy version is 2.8.14-c23fe91
Oct  7 10:09:39 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [NOTICE]   (294879) : path to executable is /usr/sbin/haproxy
Oct  7 10:09:39 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [WARNING]  (294879) : Exiting Master process...
Oct  7 10:09:39 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [ALERT]    (294879) : Current worker (294884) exited with code 143 (Terminated)
Oct  7 10:09:39 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[294853]: [WARNING]  (294879) : All workers exited. Exiting... (0)
Oct  7 10:09:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1301: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 225 MiB data, 431 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 19 MiB/s wr, 558 op/s
Oct  7 10:09:39 np0005473739 systemd[1]: libpod-757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0.scope: Deactivated successfully.
Oct  7 10:09:39 np0005473739 podman[296678]: 2025-10-07 14:09:39.583487269 +0000 UTC m=+0.066395825 container died 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  7 10:09:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0-userdata-shm.mount: Deactivated successfully.
Oct  7 10:09:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-fc2a49701af0e481b1ad6d0cc40f5f1da2f9ada3657154d7d31efc1fe61be3f5-merged.mount: Deactivated successfully.
Oct  7 10:09:39 np0005473739 podman[296678]: 2025-10-07 14:09:39.646313608 +0000 UTC m=+0.129222164 container cleanup 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:09:39 np0005473739 systemd[1]: libpod-conmon-757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0.scope: Deactivated successfully.
Oct  7 10:09:39 np0005473739 podman[296755]: 2025-10-07 14:09:39.764379373 +0000 UTC m=+0.079579818 container remove 757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.773 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[29985c4d-c466-41d7-b09d-4c4a0195c234]: (4, ('Tue Oct  7 02:09:39 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0)\n757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0\nTue Oct  7 02:09:39 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0)\n757bed424aa29dd8c795522a600099dba99ebc3f72f51673ca1e7cda54cb48e0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.775 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[867aa254-ab72-4c4d-ae53-8e63b54a35ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.776 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:39 np0005473739 kernel: tap0a5c95d4-10: left promiscuous mode
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.800 2 INFO nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Creating config drive at /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config#033[00m
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.812 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9526cc21-07d9-4225-9c0e-bd770822cf0d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.816 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrocjkop execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.843 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7403b7-bd48-4a1a-ae8d-c1a6c6af7df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.847 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7991d7c5-eab8-496e-bda8-04acd728e7c7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.867 2 DEBUG nova.compute.manager [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-unplugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.868 2 DEBUG oslo_concurrency.lockutils [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.868 2 DEBUG oslo_concurrency.lockutils [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.868 2 DEBUG oslo_concurrency.lockutils [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.868 2 DEBUG nova.compute.manager [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] No waiting events found dispatching network-vif-unplugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.868 2 DEBUG nova.compute.manager [req-a62e7747-4bde-4a05-9ee0-2cae8ffc363c req-88ef4ee8-9a81-4f15-8297-7052ea7830c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-unplugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.870 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dcfcc14e-54ff-4083-be29-c61be0e61b50]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 674174, 'reachable_time': 37190, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296791, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:39 np0005473739 systemd[1]: run-netns-ovnmeta\x2d0a5c95d4\x2d1a77\x2d48f5\x2d83c0\x2dafa976b7583d.mount: Deactivated successfully.
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.879 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:09:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:39.879 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[770a260a-69cb-4f2d-ba52-5d592688bd33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:39 np0005473739 nova_compute[259550]: 2025-10-07 14:09:39.971 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyrocjkop" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.005 2 DEBUG nova.storage.rbd_utils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image f707575c-3219-44e4-9655-ccc194e7385d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.010 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config f707575c-3219-44e4-9655-ccc194e7385d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:40 np0005473739 podman[296831]: 2025-10-07 14:09:40.079011492 +0000 UTC m=+0.060984481 container create 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 10:09:40 np0005473739 systemd[1]: Started libpod-conmon-461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0.scope.
Oct  7 10:09:40 np0005473739 podman[296831]: 2025-10-07 14:09:40.052674568 +0000 UTC m=+0.034647577 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:09:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:09:40 np0005473739 podman[296831]: 2025-10-07 14:09:40.19976753 +0000 UTC m=+0.181740539 container init 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  7 10:09:40 np0005473739 podman[296831]: 2025-10-07 14:09:40.210030224 +0000 UTC m=+0.192003213 container start 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:09:40 np0005473739 crazy_ramanujan[296865]: 167 167
Oct  7 10:09:40 np0005473739 systemd[1]: libpod-461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0.scope: Deactivated successfully.
Oct  7 10:09:40 np0005473739 podman[296831]: 2025-10-07 14:09:40.224687666 +0000 UTC m=+0.206660685 container attach 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct  7 10:09:40 np0005473739 podman[296831]: 2025-10-07 14:09:40.226458493 +0000 UTC m=+0.208431512 container died 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 10:09:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-fb189ee410f28ddd3eb718b05419a4ec278b0268cae55153b0fde6af9c12a598-merged.mount: Deactivated successfully.
Oct  7 10:09:40 np0005473739 podman[296831]: 2025-10-07 14:09:40.334150471 +0000 UTC m=+0.316123500 container remove 461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_ramanujan, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:09:40 np0005473739 systemd[1]: libpod-conmon-461c5bb98066ad00005835ed6fb79064dfa3e90542988c706af6e39d0eb1fbd0.scope: Deactivated successfully.
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.366 2 INFO nova.virt.libvirt.driver [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Deleting instance files /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851_del#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.368 2 INFO nova.virt.libvirt.driver [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Deletion of /var/lib/nova/instances/7ee2904f-492a-4ffe-bdc2-6f4ec3285851_del complete#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.372 2 DEBUG oslo_concurrency.processutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config f707575c-3219-44e4-9655-ccc194e7385d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.373 2 INFO nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Deleting local config drive /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d/disk.config because it was imported into RBD.#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.438 2 INFO nova.compute.manager [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Took 1.51 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.440 2 DEBUG oslo.service.loopingcall [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.441 2 DEBUG nova.compute.manager [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.441 2 DEBUG nova.network.neutron [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:09:40 np0005473739 kernel: tap63d67e4f-8b: entered promiscuous mode
Oct  7 10:09:40 np0005473739 NetworkManager[44949]: <info>  [1759846180.4529] manager: (tap63d67e4f-8b): new Tun device (/org/freedesktop/NetworkManager/Devices/93)
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:40Z|00181|binding|INFO|Claiming lport 63d67e4f-8bad-4402-b835-b12010348a29 for this chassis.
Oct  7 10:09:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:40Z|00182|binding|INFO|63d67e4f-8bad-4402-b835-b12010348a29: Claiming fa:16:3e:20:81:bb 10.100.0.11
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.469 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:81:bb 10.100.0.11'], port_security=['fa:16:3e:20:81:bb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f707575c-3219-44e4-9655-ccc194e7385d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=63d67e4f-8bad-4402-b835-b12010348a29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.471 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 63d67e4f-8bad-4402-b835-b12010348a29 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb bound to our chassis#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.472 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.489 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[828517a6-4aae-4e51-a7e8-b00a1eb2a440]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.490 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f80456d-d1 in ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.494 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f80456d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.494 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2312bd5d-0c00-479e-8178-4ebcd49646db]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.496 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4e69f9f-6f35-45a0-a82a-1d93b8e5e6ba]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 systemd-udevd[296911]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.511 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[15543cfa-0137-493d-8900-61a854a0bfae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 NetworkManager[44949]: <info>  [1759846180.5207] device (tap63d67e4f-8b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:09:40 np0005473739 systemd-machined[214580]: New machine qemu-32-instance-0000001c.
Oct  7 10:09:40 np0005473739 NetworkManager[44949]: <info>  [1759846180.5224] device (tap63d67e4f-8b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:09:40 np0005473739 systemd[1]: Started Virtual Machine qemu-32-instance-0000001c.
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[734770dc-2a7d-4df6-8b30-7bc305204e80]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 podman[296901]: 2025-10-07 14:09:40.543143397 +0000 UTC m=+0.058834754 container create d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:40Z|00183|binding|INFO|Setting lport 63d67e4f-8bad-4402-b835-b12010348a29 ovn-installed in OVS
Oct  7 10:09:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:40Z|00184|binding|INFO|Setting lport 63d67e4f-8bad-4402-b835-b12010348a29 up in Southbound
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.584 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f43dcc-c9be-45a2-b520-94a8fbdbdc95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 podman[296901]: 2025-10-07 14:09:40.520233535 +0000 UTC m=+0.035924872 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:09:40 np0005473739 NetworkManager[44949]: <info>  [1759846180.6165] manager: (tap9f80456d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/94)
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.615 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2dfa4437-70b3-49ac-83f8-47e1c57f0b8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 systemd[1]: Started libpod-conmon-d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979.scope.
Oct  7 10:09:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:09:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2f544be7afb74c4f86e728f0febccafb9406b6897f853b3ce50df11b2694a7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2f544be7afb74c4f86e728f0febccafb9406b6897f853b3ce50df11b2694a7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2f544be7afb74c4f86e728f0febccafb9406b6897f853b3ce50df11b2694a7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.669 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[28f6de18-a3e8-4324-b924-2130a112195b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b2f544be7afb74c4f86e728f0febccafb9406b6897f853b3ce50df11b2694a7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.678 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[31119c8b-3679-45cd-b7fa-063666857100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 podman[296901]: 2025-10-07 14:09:40.6932763 +0000 UTC m=+0.208967637 container init d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 10:09:40 np0005473739 podman[296901]: 2025-10-07 14:09:40.7044994 +0000 UTC m=+0.220190717 container start d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:09:40 np0005473739 NetworkManager[44949]: <info>  [1759846180.7091] device (tap9f80456d-d0): carrier: link connected
Oct  7 10:09:40 np0005473739 podman[296901]: 2025-10-07 14:09:40.710627093 +0000 UTC m=+0.226318420 container attach d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.722 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f46042db-bbfb-4e41-9085-15e2c021d224]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.748 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.749 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.755 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c32c43c8-6fce-4cc1-92b3-9b99fb07731c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676427, 'reachable_time': 19367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 296957, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.773 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.782 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f3299fe-90d3-41d3-9bc2-4b6490bb8b2e]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:18ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 676427, 'tstamp': 676427}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 296965, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.809 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f9b7d0e0-84b1-47db-bddd-0935e8401347]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 60], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676427, 'reachable_time': 19367, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 296974, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.852 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.853 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.857 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b59c88f2-0ac3-42c5-8d43-22e2365f6016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.862 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.863 2 INFO nova.compute.claims [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.917 2 DEBUG nova.compute.manager [req-7a5ce5a0-64a3-4335-a591-948ff53890ba req-20e5ea05-ed7b-4e5f-9bd0-da7d79d156fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.918 2 DEBUG oslo_concurrency.lockutils [req-7a5ce5a0-64a3-4335-a591-948ff53890ba req-20e5ea05-ed7b-4e5f-9bd0-da7d79d156fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.918 2 DEBUG oslo_concurrency.lockutils [req-7a5ce5a0-64a3-4335-a591-948ff53890ba req-20e5ea05-ed7b-4e5f-9bd0-da7d79d156fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.918 2 DEBUG oslo_concurrency.lockutils [req-7a5ce5a0-64a3-4335-a591-948ff53890ba req-20e5ea05-ed7b-4e5f-9bd0-da7d79d156fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.918 2 DEBUG nova.compute.manager [req-7a5ce5a0-64a3-4335-a591-948ff53890ba req-20e5ea05-ed7b-4e5f-9bd0-da7d79d156fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Processing event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[910235ff-7362-408e-99fc-fc7c48810d44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.939 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.940 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.940 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:40 np0005473739 kernel: tap9f80456d-d0: entered promiscuous mode
Oct  7 10:09:40 np0005473739 NetworkManager[44949]: <info>  [1759846180.9437] manager: (tap9f80456d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/95)
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.947 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:09:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:40Z|00185|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct  7 10:09:40 np0005473739 nova_compute[259550]: 2025-10-07 14:09:40.966 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.967 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.969 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bff38459-6902-4fb4-8898-5a4a2e70c625]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.970 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  7 10:09:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:40.972 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'env', 'PROCESS_TAG=haproxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.070 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:09:41 np0005473739 podman[297051]: 2025-10-07 14:09:41.422633563 +0000 UTC m=+0.076231679 container create d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.455 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846181.4548469, f707575c-3219-44e4-9655-ccc194e7385d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.456 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] VM Started (Lifecycle Event)
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.459 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.466 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:09:41 np0005473739 podman[297051]: 2025-10-07 14:09:41.380168488 +0000 UTC m=+0.033766634 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.479 2 INFO nova.virt.libvirt.driver [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Instance spawned successfully.
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.480 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:09:41 np0005473739 systemd[1]: Started libpod-conmon-d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82.scope.
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.497 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.500 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:09:41 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:09:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6c453bc14a40343f7d04b0c2d60dac70d117a7f556b8366c9ea1b23c523770b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:41 np0005473739 podman[297051]: 2025-10-07 14:09:41.544710066 +0000 UTC m=+0.198308222 container init d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:09:41 np0005473739 podman[297051]: 2025-10-07 14:09:41.550609343 +0000 UTC m=+0.204207459 container start d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  7 10:09:41 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [NOTICE]   (297074) : New worker (297077) forked
Oct  7 10:09:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1302: 305 pgs: 305 active+clean; 173 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 8.6 MiB/s rd, 8.7 MiB/s wr, 486 op/s
Oct  7 10:09:41 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [NOTICE]   (297074) : Loading success.
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.584 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.584 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846181.4551175, f707575c-3219-44e4-9655-ccc194e7385d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.585 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] VM Paused (Lifecycle Event)
Oct  7 10:09:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3979242475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.590 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.591 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.591 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.591 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.592 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.592 2 DEBUG nova.virt.libvirt.driver [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.623 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.633 2 DEBUG nova.compute.provider_tree [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.651 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.660 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846181.464237, f707575c-3219-44e4-9655-ccc194e7385d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.660 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] VM Resumed (Lifecycle Event)
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.678 2 DEBUG nova.scheduler.client.report [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.699 2 INFO nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Took 8.28 seconds to spawn the instance on the hypervisor.
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.699 2 DEBUG nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.710 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.714 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.721 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.869s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.722 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.775 2 INFO nova.compute.manager [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Took 13.66 seconds to build instance.
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.797 2 DEBUG nova.network.neutron [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.803 2 DEBUG oslo_concurrency.lockutils [None req-7d6e24e2-3ab6-4629-955e-bc041302e055 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.810 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.810 2 DEBUG nova.network.neutron [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.816 2 INFO nova.compute.manager [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Took 1.37 seconds to deallocate network for instance.
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.837 2 INFO nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.862 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.863 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.864 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]: {
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "osd_id": 2,
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "type": "bluestore"
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:    },
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "osd_id": 1,
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "type": "bluestore"
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:    },
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "osd_id": 0,
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:        "type": "bluestore"
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]:    }
Oct  7 10:09:41 np0005473739 affectionate_dubinsky[296935]: }
Oct  7 10:09:41 np0005473739 systemd[1]: libpod-d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979.scope: Deactivated successfully.
Oct  7 10:09:41 np0005473739 systemd[1]: libpod-d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979.scope: Consumed 1.163s CPU time.
Oct  7 10:09:41 np0005473739 podman[296901]: 2025-10-07 14:09:41.908118738 +0000 UTC m=+1.423810055 container died d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:09:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2b2f544be7afb74c4f86e728f0febccafb9406b6897f853b3ce50df11b2694a7-merged.mount: Deactivated successfully.
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.955 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.958 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.958 2 INFO nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Creating image(s)
Oct  7 10:09:41 np0005473739 podman[296901]: 2025-10-07 14:09:41.9856108 +0000 UTC m=+1.501302117 container remove d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:09:41 np0005473739 nova_compute[259550]: 2025-10-07 14:09:41.996 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:42 np0005473739 systemd[1]: libpod-conmon-d3bb0b8902cce29c188b18621505133383e3dac9a673b1cc9e00a13cd853b979.scope: Deactivated successfully.
Oct  7 10:09:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.053 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:09:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:09:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:09:42 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1e25390b-c3cf-4537-9cc8-c3d5877ea27f does not exist
Oct  7 10:09:42 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9a406c8e-ad54-4b25-8242-ad473bd1b6ff does not exist
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.092 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.097 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.129 2 DEBUG nova.network.neutron [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.130 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.131 2 DEBUG oslo_concurrency.processutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.172 2 DEBUG nova.compute.manager [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.172 2 DEBUG oslo_concurrency.lockutils [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.173 2 DEBUG oslo_concurrency.lockutils [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.173 2 DEBUG oslo_concurrency.lockutils [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.173 2 DEBUG nova.compute.manager [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] No waiting events found dispatching network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.173 2 WARNING nova.compute.manager [req-531f63e8-bfdd-4337-a296-f41d5ac99748 req-2b682af8-3c22-4a61-addf-024ebbac31dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received unexpected event network-vif-plugged-1a36fc68-b90f-4e28-a866-7dfb6f218d37 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.179 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.179 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.180 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.180 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.206 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.216 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.603 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.387s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1049852527' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:09:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.679 2 DEBUG oslo_concurrency.processutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.547s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.684 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] resizing rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.742 2 DEBUG nova.compute.provider_tree [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.814 2 DEBUG nova.scheduler.client.report [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.822 2 DEBUG nova.objects.instance [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'migration_context' on Instance uuid 643096ea-a4a9-4b3f-b8c1-130423a4248e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.899 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.918 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.919 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Ensure instance console log exists: /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.919 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.920 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.920 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.921 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.930 2 WARNING nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.936 2 DEBUG nova.virt.libvirt.host [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.937 2 DEBUG nova.virt.libvirt.host [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.941 2 DEBUG nova.virt.libvirt.host [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.942 2 DEBUG nova.virt.libvirt.host [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.942 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.943 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.943 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.943 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.944 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.945 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.945 2 DEBUG nova.virt.hardware [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:09:42 np0005473739 nova_compute[259550]: 2025-10-07 14:09:42.948 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.000 2 INFO nova.scheduler.client.report [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Deleted allocations for instance 7ee2904f-492a-4ffe-bdc2-6f4ec3285851#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.105 2 DEBUG oslo_concurrency.lockutils [None req-ca517157-a8fc-4cf3-9878-cc196245c4bc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "7ee2904f-492a-4ffe-bdc2-6f4ec3285851" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.246 2 DEBUG nova.compute.manager [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.247 2 DEBUG oslo_concurrency.lockutils [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.247 2 DEBUG oslo_concurrency.lockutils [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.247 2 DEBUG oslo_concurrency.lockutils [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.247 2 DEBUG nova.compute.manager [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] No waiting events found dispatching network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.247 2 WARNING nova.compute.manager [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received unexpected event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.247 2 DEBUG nova.compute.manager [req-c19b2554-c24b-4c2b-8c3d-7179380abf1c req-7aa9c36d-f400-41a4-a165-6d7f5e4631df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Received event network-vif-deleted-1a36fc68-b90f-4e28-a866-7dfb6f218d37 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/913645023' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.463 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.496 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.503 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1303: 305 pgs: 305 active+clean; 173 MiB data, 396 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.3 MiB/s wr, 219 op/s
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.752 2 DEBUG nova.objects.instance [None req-dc0aa1dd-d6fc-498d-a6d7-1380c1d1c3bf a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid f707575c-3219-44e4-9655-ccc194e7385d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.808 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846183.8070326, f707575c-3219-44e4-9655-ccc194e7385d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.808 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.858 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.866 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:09:43 np0005473739 nova_compute[259550]: 2025-10-07 14:09:43.943 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  7 10:09:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3200452805' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.058 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.555s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.060 2 DEBUG nova.objects.instance [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'pci_devices' on Instance uuid 643096ea-a4a9-4b3f-b8c1-130423a4248e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.108 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  <uuid>643096ea-a4a9-4b3f-b8c1-130423a4248e</uuid>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  <name>instance-0000001d</name>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersAdminNegativeTestJSON-server-127110333</nova:name>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:09:42</nova:creationTime>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:        <nova:user uuid="61181e933b9841959e5a4a707bc48adf">tempest-ServersAdminNegativeTestJSON-484006704-project-member</nova:user>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:        <nova:project uuid="5fb6da28aeda44c5b898c384c6853b38">tempest-ServersAdminNegativeTestJSON-484006704</nova:project>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <entry name="serial">643096ea-a4a9-4b3f-b8c1-130423a4248e</entry>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <entry name="uuid">643096ea-a4a9-4b3f-b8c1-130423a4248e</entry>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/643096ea-a4a9-4b3f-b8c1-130423a4248e_disk">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/console.log" append="off"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:09:44 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:09:44 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:09:44 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:09:44 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:09:44 np0005473739 kernel: tap63d67e4f-8b (unregistering): left promiscuous mode
Oct  7 10:09:44 np0005473739 NetworkManager[44949]: <info>  [1759846184.3016] device (tap63d67e4f-8b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:44Z|00186|binding|INFO|Releasing lport 63d67e4f-8bad-4402-b835-b12010348a29 from this chassis (sb_readonly=0)
Oct  7 10:09:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:44Z|00187|binding|INFO|Setting lport 63d67e4f-8bad-4402-b835-b12010348a29 down in Southbound
Oct  7 10:09:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:44Z|00188|binding|INFO|Removing iface tap63d67e4f-8b ovn-installed in OVS
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.319 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:20:81:bb 10.100.0.11'], port_security=['fa:16:3e:20:81:bb 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'f707575c-3219-44e4-9655-ccc194e7385d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=63d67e4f-8bad-4402-b835-b12010348a29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.321 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 63d67e4f-8bad-4402-b835-b12010348a29 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.322 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.324 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b945dd7-f151-4ed1-8b3d-9bcc4d744303]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.324 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace which is not needed anymore#033[00m
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.341 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.341 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.341 2 INFO nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Using config drive#033[00m
Oct  7 10:09:44 np0005473739 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Deactivated successfully.
Oct  7 10:09:44 np0005473739 systemd[1]: machine-qemu\x2d32\x2dinstance\x2d0000001c.scope: Consumed 3.210s CPU time.
Oct  7 10:09:44 np0005473739 systemd-machined[214580]: Machine qemu-32-instance-0000001c terminated.
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.370 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:44 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [NOTICE]   (297074) : haproxy version is 2.8.14-c23fe91
Oct  7 10:09:44 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [NOTICE]   (297074) : path to executable is /usr/sbin/haproxy
Oct  7 10:09:44 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [WARNING]  (297074) : Exiting Master process...
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:44 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [ALERT]    (297074) : Current worker (297077) exited with code 143 (Terminated)
Oct  7 10:09:44 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[297066]: [WARNING]  (297074) : All workers exited. Exiting... (0)
Oct  7 10:09:44 np0005473739 systemd[1]: libpod-d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82.scope: Deactivated successfully.
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.533 2 DEBUG nova.compute.manager [None req-dc0aa1dd-d6fc-498d-a6d7-1380c1d1c3bf a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:44 np0005473739 podman[297467]: 2025-10-07 14:09:44.536197047 +0000 UTC m=+0.103140517 container died d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:09:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82-userdata-shm.mount: Deactivated successfully.
Oct  7 10:09:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e6c453bc14a40343f7d04b0c2d60dac70d117a7f556b8366c9ea1b23c523770b-merged.mount: Deactivated successfully.
Oct  7 10:09:44 np0005473739 podman[297467]: 2025-10-07 14:09:44.594924317 +0000 UTC m=+0.161867787 container cleanup d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:09:44 np0005473739 systemd[1]: libpod-conmon-d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82.scope: Deactivated successfully.
Oct  7 10:09:44 np0005473739 podman[297501]: 2025-10-07 14:09:44.675023057 +0000 UTC m=+0.052408241 container remove d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.685 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd6e75d6-f1e2-4446-b678-decd39795a1a]: (4, ('Tue Oct  7 02:09:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82)\nd8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82\nTue Oct  7 02:09:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (d8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82)\nd8c099cf62df37d27ab7d73c0ed0e1563b8419a584fbf8fc5aa63a2d14e09a82\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[71df4dd0-1fd4-4553-87af-10c724db0a8a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.689 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:44 np0005473739 kernel: tap9f80456d-d0: left promiscuous mode
Oct  7 10:09:44 np0005473739 nova_compute[259550]: 2025-10-07 14:09:44.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.719 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7abf02d7-9155-4c3c-94f0-24621b316f53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.758 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67444a6c-4ecb-4f8d-a7f8-ba9312b9a085]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.761 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a0ace23-918d-4bb3-bb55-ad4ee64702cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.783 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[efebdc28-84c4-430b-842b-0a1ec1a3f08e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 676414, 'reachable_time': 34167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 297520, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:44 np0005473739 systemd[1]: run-netns-ovnmeta\x2d9f80456d\x2dd8a6\x2d4e61\x2db6cb\x2db509cd650dbb.mount: Deactivated successfully.
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.787 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:09:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:44.787 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e80ec2bd-b7aa-4626-9f36-9a60729bc2a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.024 2 INFO nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Creating config drive at /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.029 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplayu_dmu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.167 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplayu_dmu" returned: 0 in 0.138s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.201 2 DEBUG nova.storage.rbd_utils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] rbd image 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.205 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.384 2 DEBUG oslo_concurrency.processutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config 643096ea-a4a9-4b3f-b8c1-130423a4248e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.179s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.385 2 INFO nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Deleting local config drive /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e/disk.config because it was imported into RBD.#033[00m
Oct  7 10:09:45 np0005473739 systemd-machined[214580]: New machine qemu-33-instance-0000001d.
Oct  7 10:09:45 np0005473739 systemd[1]: Started Virtual Machine qemu-33-instance-0000001d.
Oct  7 10:09:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1304: 305 pgs: 305 active+clean; 181 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 5.1 MiB/s rd, 4.3 MiB/s wr, 325 op/s
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e162 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e162 do_prune osdmap full prune enabled
Oct  7 10:09:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e163 e163: 3 total, 3 up, 3 in
Oct  7 10:09:45 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e163: 3 total, 3 up, 3 in
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.901 2 DEBUG nova.compute.manager [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-vif-unplugged-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.902 2 DEBUG oslo_concurrency.lockutils [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.902 2 DEBUG oslo_concurrency.lockutils [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.902 2 DEBUG oslo_concurrency.lockutils [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.902 2 DEBUG nova.compute.manager [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] No waiting events found dispatching network-vif-unplugged-63d67e4f-8bad-4402-b835-b12010348a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:09:45 np0005473739 nova_compute[259550]: 2025-10-07 14:09:45.903 2 WARNING nova.compute.manager [req-9ff14bf0-c6c2-4be4-9a13-97891c078925 req-0deb487d-85a2-4717-9639-4e2bc68ac5fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received unexpected event network-vif-unplugged-63d67e4f-8bad-4402-b835-b12010348a29 for instance with vm_state suspended and task_state None.#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.763 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.764 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.764 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846186.7629943, 643096ea-a4a9-4b3f-b8c1-130423a4248e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.764 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.770 2 INFO nova.virt.libvirt.driver [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance spawned successfully.#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.770 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.837 2 DEBUG nova.compute.manager [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.986 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.990 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.991 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.991 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.992 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.992 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.992 2 DEBUG nova.virt.libvirt.driver [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:09:46 np0005473739 nova_compute[259550]: 2025-10-07 14:09:46.999 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.055 2 INFO nova.compute.manager [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] instance snapshotting#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.055 2 WARNING nova.compute.manager [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] trying to snapshot a non-running instance: (state: 4 expected: 1)#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.159 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.160 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846186.7641366, 643096ea-a4a9-4b3f-b8c1-130423a4248e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.160 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] VM Started (Lifecycle Event)#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.241 2 INFO nova.virt.libvirt.driver [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Beginning cold snapshot process#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.250 2 INFO nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Took 5.29 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.251 2 DEBUG nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.252 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.261 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.414 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.469 2 INFO nova.compute.manager [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Took 6.65 seconds to build instance.#033[00m
Oct  7 10:09:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1306: 305 pgs: 305 active+clean; 181 MiB data, 390 MiB used, 60 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.0 MiB/s wr, 305 op/s
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.610 2 DEBUG oslo_concurrency.lockutils [None req-e2cc8a48-f505-4bb2-8147-37047534c786 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.682 2 DEBUG nova.virt.libvirt.imagebackend [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:09:47 np0005473739 nova_compute[259550]: 2025-10-07 14:09:47.869 2 DEBUG nova.storage.rbd_utils [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(38b3ebd818224332bf23562c9421071f) on rbd image(f707575c-3219-44e4-9655-ccc194e7385d_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.170 2 DEBUG nova.compute.manager [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.171 2 DEBUG oslo_concurrency.lockutils [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.171 2 DEBUG oslo_concurrency.lockutils [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.171 2 DEBUG oslo_concurrency.lockutils [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.171 2 DEBUG nova.compute.manager [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] No waiting events found dispatching network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.172 2 WARNING nova.compute.manager [req-d0b94a1a-4463-4099-ba96-3b5eae06603b req-aab40862-4fcb-4227-a9c5-33969db20c0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received unexpected event network-vif-plugged-63d67e4f-8bad-4402-b835-b12010348a29 for instance with vm_state suspended and task_state image_uploading.#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.535 2 DEBUG nova.objects.instance [None req-f5eba65c-e67e-43ad-8942-fe521d19dd61 f7c51012ac1149aa954786ee7320bd34 856d6220bca94fb080cb16cc9aed720e - - default default] Lazy-loading 'pci_devices' on Instance uuid 643096ea-a4a9-4b3f-b8c1-130423a4248e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.559 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846188.559075, 643096ea-a4a9-4b3f-b8c1-130423a4248e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.560 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.581 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.585 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:09:48 np0005473739 nova_compute[259550]: 2025-10-07 14:09:48.604 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  7 10:09:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e163 do_prune osdmap full prune enabled
Oct  7 10:09:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e164 e164: 3 total, 3 up, 3 in
Oct  7 10:09:49 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e164: 3 total, 3 up, 3 in
Oct  7 10:09:49 np0005473739 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Deactivated successfully.
Oct  7 10:09:49 np0005473739 systemd[1]: machine-qemu\x2d33\x2dinstance\x2d0000001d.scope: Consumed 3.161s CPU time.
Oct  7 10:09:49 np0005473739 systemd-machined[214580]: Machine qemu-33-instance-0000001d terminated.
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.150 2 DEBUG nova.storage.rbd_utils [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning vms/f707575c-3219-44e4-9655-ccc194e7385d_disk@38b3ebd818224332bf23562c9421071f to images/e10d002e-132f-4664-81e6-2c97f55f74ab clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.195 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.196 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.221 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.240 2 DEBUG nova.compute.manager [None req-f5eba65c-e67e-43ad-8942-fe521d19dd61 f7c51012ac1149aa954786ee7320bd34 856d6220bca94fb080cb16cc9aed720e - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.317 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.318 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.327 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.328 2 INFO nova.compute.claims [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.444 2 DEBUG nova.storage.rbd_utils [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] flattening images/e10d002e-132f-4664-81e6-2c97f55f74ab flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.527 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1308: 305 pgs: 305 active+clean; 181 MiB data, 391 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 2.7 MiB/s wr, 235 op/s
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.784 2 DEBUG nova.storage.rbd_utils [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] removing snapshot(38b3ebd818224332bf23562c9421071f) on rbd image(f707575c-3219-44e4-9655-ccc194e7385d_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:09:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/415437241' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.977 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:09:49 np0005473739 nova_compute[259550]: 2025-10-07 14:09:49.984 2 DEBUG nova.compute.provider_tree [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.006 2 DEBUG nova.scheduler.client.report [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.035 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.036 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:09:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e164 do_prune osdmap full prune enabled
Oct  7 10:09:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e165 e165: 3 total, 3 up, 3 in
Oct  7 10:09:50 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e165: 3 total, 3 up, 3 in
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.096 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.097 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.107 2 DEBUG nova.storage.rbd_utils [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(snap) on rbd image(e10d002e-132f-4664-81e6-2c97f55f74ab) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.145 2 INFO nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.166 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.259 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.261 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.265 2 INFO nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Creating image(s)
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.297 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.335 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.363 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.367 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.447 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.449 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.449 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.450 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.473 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.477 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6a715eda-3357-4298-88cc-596cc9986690_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.769 2 DEBUG nova.policy [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '21b4c507f5c443f4b43306c884b1d67f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:09:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e165 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.898 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6a715eda-3357-4298-88cc-596cc9986690_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:09:50 np0005473739 nova_compute[259550]: 2025-10-07 14:09:50.977 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] resizing rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:09:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e165 do_prune osdmap full prune enabled
Oct  7 10:09:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e166 e166: 3 total, 3 up, 3 in
Oct  7 10:09:51 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e166: 3 total, 3 up, 3 in
Oct  7 10:09:51 np0005473739 nova_compute[259550]: 2025-10-07 14:09:51.141 2 DEBUG nova.objects.instance [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'migration_context' on Instance uuid 6a715eda-3357-4298-88cc-596cc9986690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:09:51 np0005473739 nova_compute[259550]: 2025-10-07 14:09:51.156 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:09:51 np0005473739 nova_compute[259550]: 2025-10-07 14:09:51.156 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Ensure instance console log exists: /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:09:51 np0005473739 nova_compute[259550]: 2025-10-07 14:09:51.157 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:51 np0005473739 nova_compute[259550]: 2025-10-07 14:09:51.157 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:51 np0005473739 nova_compute[259550]: 2025-10-07 14:09:51.158 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1311: 305 pgs: 305 active+clean; 196 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 1.4 MiB/s wr, 230 op/s
Oct  7 10:09:52 np0005473739 podman[297953]: 2025-10-07 14:09:52.080173683 +0000 UTC m=+0.065699547 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 10:09:52 np0005473739 podman[297954]: 2025-10-07 14:09:52.084237742 +0000 UTC m=+0.068400140 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.348 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Successfully created port: 18b5959f-439a-440f-8a00-0709f1f9730b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.729 2 INFO nova.virt.libvirt.driver [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Snapshot image upload complete
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.729 2 INFO nova.compute.manager [None req-4558f91f-2fee-4c0f-a315-d71e8be0dd0b a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Took 5.67 seconds to snapshot the instance on the hypervisor.
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.820 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.821 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.821 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "643096ea-a4a9-4b3f-b8c1-130423a4248e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.821 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.821 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.823 2 INFO nova.compute.manager [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Terminating instance
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.824 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "refresh_cache-643096ea-a4a9-4b3f-b8c1-130423a4248e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.824 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquired lock "refresh_cache-643096ea-a4a9-4b3f-b8c1-130423a4248e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:09:52 np0005473739 nova_compute[259550]: 2025-10-07 14:09:52.824 2 DEBUG nova.network.neutron [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.036 2 DEBUG nova.network.neutron [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.290 2 DEBUG nova.network.neutron [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.345 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Successfully updated port: 18b5959f-439a-440f-8a00-0709f1f9730b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.348 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Releasing lock "refresh_cache-643096ea-a4a9-4b3f-b8c1-130423a4248e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.348 2 DEBUG nova.compute.manager [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.355 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.355 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.362 2 INFO nova.virt.libvirt.driver [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance destroyed successfully.
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.362 2 DEBUG nova.objects.instance [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'resources' on Instance uuid 643096ea-a4a9-4b3f-b8c1-130423a4248e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.364 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.364 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquired lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.365 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.384 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.444 2 DEBUG nova.compute.manager [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-changed-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.445 2 DEBUG nova.compute.manager [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Refreshing instance network info cache due to event network-changed-18b5959f-439a-440f-8a00-0709f1f9730b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.445 2 DEBUG oslo_concurrency.lockutils [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.467 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.467 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.475 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.476 2 INFO nova.compute.claims [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.515 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:09:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1312: 305 pgs: 305 active+clean; 196 MiB data, 399 MiB used, 60 GiB / 60 GiB avail; 6.1 MiB/s rd, 1.4 MiB/s wr, 221 op/s
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.626 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.988 2 INFO nova.virt.libvirt.driver [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Deleting instance files /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e_del#033[00m
Oct  7 10:09:53 np0005473739 nova_compute[259550]: 2025-10-07 14:09:53.990 2 INFO nova.virt.libvirt.driver [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Deletion of /var/lib/nova/instances/643096ea-a4a9-4b3f-b8c1-130423a4248e_del complete#033[00m
Oct  7 10:09:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3347357936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.098 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.104 2 DEBUG nova.compute.provider_tree [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.154 2 DEBUG nova.scheduler.client.report [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.161 2 INFO nova.compute.manager [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.162 2 DEBUG oslo.service.loopingcall [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.162 2 DEBUG nova.compute.manager [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.162 2 DEBUG nova.network.neutron [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.272 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.804s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.273 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.386 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.387 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.395 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846179.3892694, 7ee2904f-492a-4ffe-bdc2-6f4ec3285851 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.395 2 INFO nova.compute.manager [-] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.434 2 DEBUG nova.compute.manager [None req-aff8345b-8cbe-480a-9ae1-c7283f88e60b - - - - - -] [instance: 7ee2904f-492a-4ffe-bdc2-6f4ec3285851] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.444 2 INFO nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.495 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.622 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.624 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.624 2 INFO nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Creating image(s)#033[00m
Oct  7 10:09:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e166 do_prune osdmap full prune enabled
Oct  7 10:09:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e167 e167: 3 total, 3 up, 3 in
Oct  7 10:09:54 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e167: 3 total, 3 up, 3 in
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.661 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.693 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.723 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.728 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.770 2 DEBUG nova.network.neutron [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Updating instance_info_cache with network_info: [{"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.775 2 DEBUG nova.policy [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8656e87752c0498891bd00461451ea40', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'da79ea7f82af425b975ddff6ef03a59a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.779 2 DEBUG nova.network.neutron [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.798 2 DEBUG nova.network.neutron [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.801 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Releasing lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.801 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Instance network_info: |[{"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.802 2 DEBUG oslo_concurrency.lockutils [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.802 2 DEBUG nova.network.neutron [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Refreshing network info cache for port 18b5959f-439a-440f-8a00-0709f1f9730b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.805 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Start _get_guest_xml network_info=[{"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.811 2 WARNING nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.817 2 DEBUG nova.virt.libvirt.host [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.818 2 DEBUG nova.virt.libvirt.host [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.822 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.824 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.825 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.825 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.848 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.854 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 75fea627-8d48-4772-86a2-ca6bc47ed597_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.887 2 INFO nova.compute.manager [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Took 0.72 seconds to deallocate network for instance.#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.894 2 DEBUG nova.virt.libvirt.host [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.895 2 DEBUG nova.virt.libvirt.host [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.895 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.896 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.896 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.897 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.897 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.897 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.898 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.898 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.898 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.899 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.899 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.899 2 DEBUG nova.virt.hardware [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:09:54 np0005473739 nova_compute[259550]: 2025-10-07 14:09:54.903 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.019 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.020 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.132 2 DEBUG oslo_concurrency.processutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.248 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 75fea627-8d48-4772-86a2-ca6bc47ed597_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.335 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.336 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.336 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "f707575c-3219-44e4-9655-ccc194e7385d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.336 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.336 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.337 2 INFO nova.compute.manager [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Terminating instance#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.338 2 DEBUG nova.compute.manager [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.345 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] resizing rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:09:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2687494201' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.375 2 INFO nova.virt.libvirt.driver [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Instance destroyed successfully.#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.376 2 DEBUG nova.objects.instance [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'resources' on Instance uuid f707575c-3219-44e4-9655-ccc194e7385d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.395 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.413 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.416 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.444 2 DEBUG nova.virt.libvirt.vif [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:09:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-725999350',display_name='tempest-ImagesTestJSON-server-725999350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-725999350',id=28,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:09:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-1i0xrx4z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:09:52Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=f707575c-3219-44e4-9655-ccc194e7385d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.445 2 DEBUG nova.network.os_vif_util [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "63d67e4f-8bad-4402-b835-b12010348a29", "address": "fa:16:3e:20:81:bb", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap63d67e4f-8b", "ovs_interfaceid": "63d67e4f-8bad-4402-b835-b12010348a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.445 2 DEBUG nova.network.os_vif_util [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.446 2 DEBUG os_vif [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap63d67e4f-8b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.460 2 INFO os_vif [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:20:81:bb,bridge_name='br-int',has_traffic_filtering=True,id=63d67e4f-8bad-4402-b835-b12010348a29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap63d67e4f-8b')#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.521 2 DEBUG nova.objects.instance [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lazy-loading 'migration_context' on Instance uuid 75fea627-8d48-4772-86a2-ca6bc47ed597 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515382844' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1314: 305 pgs: 305 active+clean; 267 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 6.2 MiB/s rd, 13 MiB/s wr, 446 op/s
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.590 2 DEBUG oslo_concurrency.processutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.596 2 DEBUG nova.compute.provider_tree [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.607 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.608 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Ensure instance console log exists: /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.608 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.608 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.609 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.648 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Successfully created port: cdae5f4d-2069-4d30-bd74-31e9459986db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.660 2 DEBUG nova.scheduler.client.report [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.696 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.736 2 INFO nova.scheduler.client.report [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Deleted allocations for instance 643096ea-a4a9-4b3f-b8c1-130423a4248e#033[00m
Oct  7 10:09:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e167 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:09:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e167 do_prune osdmap full prune enabled
Oct  7 10:09:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e168 e168: 3 total, 3 up, 3 in
Oct  7 10:09:55 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e168: 3 total, 3 up, 3 in
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.852 2 DEBUG oslo_concurrency.lockutils [None req-c532dd2b-adf9-4273-9af2-6a33e1798b69 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "643096ea-a4a9-4b3f-b8c1-130423a4248e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.031s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3062624384' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.888 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.889 2 DEBUG nova.virt.libvirt.vif [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-799592627',display_name='tempest-ImagesOneServerNegativeTestJSON-server-799592627',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-799592627',id=30,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7fs0zddu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:50Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=6a715eda-3357-4298-88cc-596cc9986690,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.889 2 DEBUG nova.network.os_vif_util [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.890 2 DEBUG nova.network.os_vif_util [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.890 2 DEBUG nova.objects.instance [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6a715eda-3357-4298-88cc-596cc9986690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.938 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  <uuid>6a715eda-3357-4298-88cc-596cc9986690</uuid>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  <name>instance-0000001e</name>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-799592627</nova:name>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:09:54</nova:creationTime>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <nova:user uuid="21b4c507f5c443f4b43306c884b1d67f">tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member</nova:user>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <nova:project uuid="c166ae9e4e0f43d38afaa35966f84b05">tempest-ImagesOneServerNegativeTestJSON-2130756304</nova:project>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <nova:port uuid="18b5959f-439a-440f-8a00-0709f1f9730b">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <entry name="serial">6a715eda-3357-4298-88cc-596cc9986690</entry>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <entry name="uuid">6a715eda-3357-4298-88cc-596cc9986690</entry>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/6a715eda-3357-4298-88cc-596cc9986690_disk">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/6a715eda-3357-4298-88cc-596cc9986690_disk.config">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:2a:94:47"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <target dev="tap18b5959f-43"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/console.log" append="off"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:09:55 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:09:55 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:09:55 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:09:55 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.939 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Preparing to wait for external event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.939 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.939 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.939 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.940 2 DEBUG nova.virt.libvirt.vif [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-799592627',display_name='tempest-ImagesOneServerNegativeTestJSON-server-799592627',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-799592627',id=30,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7fs0zddu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:50Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=6a715eda-3357-4298-88cc-596cc9986690,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.941 2 DEBUG nova.network.os_vif_util [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.943 2 DEBUG nova.network.os_vif_util [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.943 2 DEBUG os_vif [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.944 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.944 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.948 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap18b5959f-43, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.948 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap18b5959f-43, col_values=(('external_ids', {'iface-id': '18b5959f-439a-440f-8a00-0709f1f9730b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2a:94:47', 'vm-uuid': '6a715eda-3357-4298-88cc-596cc9986690'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:55 np0005473739 NetworkManager[44949]: <info>  [1759846195.9508] manager: (tap18b5959f-43): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/96)
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:55 np0005473739 nova_compute[259550]: 2025-10-07 14:09:55.957 2 INFO os_vif [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43')#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.004 2 INFO nova.virt.libvirt.driver [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Deleting instance files /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d_del#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.004 2 INFO nova.virt.libvirt.driver [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Deletion of /var/lib/nova/instances/f707575c-3219-44e4-9655-ccc194e7385d_del complete#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.033 2 DEBUG nova.network.neutron [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Updated VIF entry in instance network info cache for port 18b5959f-439a-440f-8a00-0709f1f9730b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.034 2 DEBUG nova.network.neutron [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Updating instance_info_cache with network_info: [{"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.053 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.053 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.054 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No VIF found with MAC fa:16:3e:2a:94:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.054 2 INFO nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Using config drive#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.074 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.082 2 DEBUG oslo_concurrency.lockutils [req-d0c33a7f-4a01-43ba-a47a-0080439e935c req-3b9332f6-3f56-4320-acff-6b9e5d0814aa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6a715eda-3357-4298-88cc-596cc9986690" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.136 2 INFO nova.compute.manager [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.137 2 DEBUG oslo.service.loopingcall [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.137 2 DEBUG nova.compute.manager [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.137 2 DEBUG nova.network.neutron [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.909 2 INFO nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Creating config drive at /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.914 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjbqhg7my execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:56.991 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:09:56 np0005473739 nova_compute[259550]: 2025-10-07 14:09:56.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:56.993 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.046 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Successfully updated port: cdae5f4d-2069-4d30-bd74-31e9459986db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.055 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjbqhg7my" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.087 2 DEBUG nova.storage.rbd_utils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 6a715eda-3357-4298-88cc-596cc9986690_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.092 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config 6a715eda-3357-4298-88cc-596cc9986690_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.133 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.133 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquired lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.134 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.248 2 DEBUG nova.network.neutron [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.270 2 INFO nova.compute.manager [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Took 1.13 seconds to deallocate network for instance.#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.291 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.318 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.319 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.434 2 DEBUG oslo_concurrency.processutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1316: 305 pgs: 305 active+clean; 267 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 1.9 MiB/s rd, 11 MiB/s wr, 297 op/s
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.716 2 DEBUG nova.compute.manager [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received event network-changed-cdae5f4d-2069-4d30-bd74-31e9459986db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.716 2 DEBUG nova.compute.manager [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Refreshing instance network info cache due to event network-changed-cdae5f4d-2069-4d30-bd74-31e9459986db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.717 2 DEBUG oslo_concurrency.lockutils [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.717 2 DEBUG nova.compute.manager [req-0a9234c8-2fbf-4b0d-99e9-b70bf95a9e61 req-ea419f89-07de-4b36-a0ef-f2de0a3d80ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Received event network-vif-deleted-63d67e4f-8bad-4402-b835-b12010348a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.845 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.846 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.846 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.846 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.846 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.848 2 INFO nova.compute.manager [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Terminating instance#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.848 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "refresh_cache-b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.848 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquired lock "refresh_cache-b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.849 2 DEBUG nova.network.neutron [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:09:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2096327' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.932 2 DEBUG oslo_concurrency.processutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.940 2 DEBUG nova.compute.provider_tree [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.944 2 DEBUG oslo_concurrency.processutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config 6a715eda-3357-4298-88cc-596cc9986690_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.852s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.944 2 INFO nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Deleting local config drive /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690/disk.config because it was imported into RBD.#033[00m
Oct  7 10:09:57 np0005473739 nova_compute[259550]: 2025-10-07 14:09:57.957 2 DEBUG nova.scheduler.client.report [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:09:58 np0005473739 kernel: tap18b5959f-43: entered promiscuous mode
Oct  7 10:09:58 np0005473739 NetworkManager[44949]: <info>  [1759846198.0051] manager: (tap18b5959f-43): new Tun device (/org/freedesktop/NetworkManager/Devices/97)
Oct  7 10:09:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:58Z|00189|binding|INFO|Claiming lport 18b5959f-439a-440f-8a00-0709f1f9730b for this chassis.
Oct  7 10:09:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:58Z|00190|binding|INFO|18b5959f-439a-440f-8a00-0709f1f9730b: Claiming fa:16:3e:2a:94:47 10.100.0.3
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.023 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:58 np0005473739 systemd-udevd[298397]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:09:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:58Z|00191|binding|INFO|Setting lport 18b5959f-439a-440f-8a00-0709f1f9730b ovn-installed in OVS
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.044 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:58 np0005473739 NetworkManager[44949]: <info>  [1759846198.0601] device (tap18b5959f-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:09:58 np0005473739 NetworkManager[44949]: <info>  [1759846198.0611] device (tap18b5959f-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:09:58 np0005473739 systemd-machined[214580]: New machine qemu-34-instance-0000001e.
Oct  7 10:09:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:58Z|00192|binding|INFO|Setting lport 18b5959f-439a-440f-8a00-0709f1f9730b up in Southbound
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.073 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:94:47 10.100.0.3'], port_security=['fa:16:3e:2a:94:47 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6a715eda-3357-4298-88cc-596cc9986690', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '2', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=18b5959f-439a-440f-8a00-0709f1f9730b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:09:58 np0005473739 systemd[1]: Started Virtual Machine qemu-34-instance-0000001e.
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.074 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 18b5959f-439a-440f-8a00-0709f1f9730b in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d bound to our chassis#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.075 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.088 2 INFO nova.scheduler.client.report [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Deleted allocations for instance f707575c-3219-44e4-9655-ccc194e7385d#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.090 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7288b2f6-6516-4f01-9c15-7b5bcc0b7a0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.092 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a5c95d4-11 in ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.094 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a5c95d4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.095 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[09968225-31cb-4561-a445-2eaa5548128f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.096 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc23f60-bbca-4ae8-8aa6-ecc34724996b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.109 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a50e53f5-996f-4d07-b498-5a2dc6656bc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.126 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9bdf761d-2384-4774-ac2d-2f6ab10769c0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.166 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[313c7143-36cf-426b-96df-2d4d12d9483f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 NetworkManager[44949]: <info>  [1759846198.1742] manager: (tap0a5c95d4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/98)
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.173 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a76fe3d7-b43d-4e69-ac33-14195ee3432d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.219 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3b8c51f0-9c74-40e3-9a17-16508b57a960]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.223 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[087dd499-8132-4ede-97af-4f8479778019]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.240 2 DEBUG nova.network.neutron [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.250 2 DEBUG oslo_concurrency.lockutils [None req-25da4ca4-5820-4cec-8403-465f9c8f778e a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "f707575c-3219-44e4-9655-ccc194e7385d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:58 np0005473739 NetworkManager[44949]: <info>  [1759846198.2555] device (tap0a5c95d4-10): carrier: link connected
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.261 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1465a50c-dbf0-497a-910a-798cb6310206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.281 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab22a44-3755-446e-9968-911770344643]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678182, 'reachable_time': 28794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298433, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.302 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[069ea50f-fa3d-400c-ac86-a475b22db606]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:6312'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678182, 'tstamp': 678182}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298434, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.321 2 DEBUG nova.network.neutron [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updating instance_info_cache with network_info: [{"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.325 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3060eb52-fe4c-4d98-a0c8-2c521bb7c267]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 63], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678182, 'reachable_time': 28794, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298435, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.346 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Releasing lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.346 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Instance network_info: |[{"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.347 2 DEBUG oslo_concurrency.lockutils [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.347 2 DEBUG nova.network.neutron [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Refreshing network info cache for port cdae5f4d-2069-4d30-bd74-31e9459986db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.350 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Start _get_guest_xml network_info=[{"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.357 2 WARNING nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.366 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78667487-b744-408b-9b73-65c782157503]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.367 2 DEBUG nova.virt.libvirt.host [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.368 2 DEBUG nova.virt.libvirt.host [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.373 2 DEBUG nova.virt.libvirt.host [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.373 2 DEBUG nova.virt.libvirt.host [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.374 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.374 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.374 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.375 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.376 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.376 2 DEBUG nova.virt.hardware [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.379 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.433 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7160d2e8-5e6b-49f9-9fd3-34b0f2a3b66d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.435 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.435 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.437 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a5c95d4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:58 np0005473739 kernel: tap0a5c95d4-10: entered promiscuous mode
Oct  7 10:09:58 np0005473739 NetworkManager[44949]: <info>  [1759846198.4398] manager: (tap0a5c95d4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/99)
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.442 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a5c95d4-10, col_values=(('external_ids', {'iface-id': 'a8291172-baf1-4252-9a0d-af7ef7ffa931'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:09:58Z|00193|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.463 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.464 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[09beeb1f-59fe-4bfd-9fae-f3c4689c4463]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.464 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:09:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:09:58.465 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'env', 'PROCESS_TAG=haproxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a5c95d4-1a77-48f5-83c0-afa976b7583d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.471 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.471 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.486 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.564 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.565 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.571 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.572 2 INFO nova.compute.claims [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.584 2 DEBUG nova.network.neutron [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.602 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Releasing lock "refresh_cache-b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.603 2 DEBUG nova.compute.manager [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:09:58 np0005473739 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Deactivated successfully.
Oct  7 10:09:58 np0005473739 systemd[1]: machine-qemu\x2d31\x2dinstance\x2d0000001b.scope: Consumed 14.716s CPU time.
Oct  7 10:09:58 np0005473739 systemd-machined[214580]: Machine qemu-31-instance-0000001b terminated.
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.747 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.834 2 INFO nova.virt.libvirt.driver [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance destroyed successfully.#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.835 2 DEBUG nova.objects.instance [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lazy-loading 'resources' on Instance uuid b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3755273547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.875 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:58 np0005473739 podman[298528]: 2025-10-07 14:09:58.903372761 +0000 UTC m=+0.071963712 container create 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.908 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:58 np0005473739 nova_compute[259550]: 2025-10-07 14:09:58.925 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:58 np0005473739 systemd[1]: Started libpod-conmon-73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2.scope.
Oct  7 10:09:58 np0005473739 podman[298528]: 2025-10-07 14:09:58.863905894 +0000 UTC m=+0.032496905 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:09:58 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:09:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b77270b12605ac3132ea4eef6c757661be710663f1f44a5cebdb8882546cdf8c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:09:59 np0005473739 podman[298528]: 2025-10-07 14:09:59.009654365 +0000 UTC m=+0.178245356 container init 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct  7 10:09:59 np0005473739 podman[298528]: 2025-10-07 14:09:59.016232198 +0000 UTC m=+0.184823159 container start 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:09:59 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [NOTICE]   (298607) : New worker (298609) forked
Oct  7 10:09:59 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [NOTICE]   (298607) : Loading success.
Oct  7 10:09:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:09:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3207458592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.300 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.306 2 DEBUG nova.compute.provider_tree [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.310 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846199.3104303, 6a715eda-3357-4298-88cc-596cc9986690 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.311 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] VM Started (Lifecycle Event)#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.325 2 DEBUG nova.scheduler.client.report [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.330 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.337 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846199.3114696, 6a715eda-3357-4298-88cc-596cc9986690 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.337 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.341 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.342 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.355 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.359 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.387 2 INFO nova.virt.libvirt.driver [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Deleting instance files /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_del#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.389 2 INFO nova.virt.libvirt.driver [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Deletion of /var/lib/nova/instances/b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca_del complete#033[00m
Oct  7 10:09:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:09:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2694528085' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.394 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.401 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.401 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.412 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.413 2 DEBUG nova.virt.libvirt.vif [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1141910716',display_name='tempest-ServersTestJSON-server-1141910716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1141910716',id=31,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB8dBtsZMOusOa4IijROkUdbBylBDiqYzwyIPuD+vzYyGEcKB5fbetYwHVvk9xfq9490L3yO+927Gy6izYCdieX2E8nhzbrcXYwgrKrlKM3UVrnT4xZuHJ7KYzTZGsV5OA==',key_name='tempest-keypair-878357788',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da79ea7f82af425b975ddff6ef03a59a',ramdisk_id='',reservation_id='r-h9cj7vml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-218965943',owner_user_name='tempest-ServersTestJSON-218965943-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8656e87752c0498891bd00461451ea40',uuid=75fea627-8d48-4772-86a2-ca6bc47ed597,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.413 2 DEBUG nova.network.os_vif_util [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converting VIF {"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.414 2 DEBUG nova.network.os_vif_util [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.415 2 DEBUG nova.objects.instance [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lazy-loading 'pci_devices' on Instance uuid 75fea627-8d48-4772-86a2-ca6bc47ed597 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.430 2 INFO nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.440 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  <uuid>75fea627-8d48-4772-86a2-ca6bc47ed597</uuid>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  <name>instance-0000001f</name>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersTestJSON-server-1141910716</nova:name>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:09:58</nova:creationTime>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <nova:user uuid="8656e87752c0498891bd00461451ea40">tempest-ServersTestJSON-218965943-project-member</nova:user>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <nova:project uuid="da79ea7f82af425b975ddff6ef03a59a">tempest-ServersTestJSON-218965943</nova:project>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <nova:port uuid="cdae5f4d-2069-4d30-bd74-31e9459986db">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <entry name="serial">75fea627-8d48-4772-86a2-ca6bc47ed597</entry>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <entry name="uuid">75fea627-8d48-4772-86a2-ca6bc47ed597</entry>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/75fea627-8d48-4772-86a2-ca6bc47ed597_disk">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:6e:ba:59"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <target dev="tapcdae5f4d-20"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/console.log" append="off"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:09:59 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:09:59 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:09:59 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:09:59 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.442 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Preparing to wait for external event network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.443 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.443 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.443 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.445 2 DEBUG nova.virt.libvirt.vif [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1141910716',display_name='tempest-ServersTestJSON-server-1141910716',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1141910716',id=31,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB8dBtsZMOusOa4IijROkUdbBylBDiqYzwyIPuD+vzYyGEcKB5fbetYwHVvk9xfq9490L3yO+927Gy6izYCdieX2E8nhzbrcXYwgrKrlKM3UVrnT4xZuHJ7KYzTZGsV5OA==',key_name='tempest-keypair-878357788',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='da79ea7f82af425b975ddff6ef03a59a',ramdisk_id='',reservation_id='r-h9cj7vml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-218965943',owner_user_name='tempest-ServersTestJSON-218965943-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8656e87752c0498891bd00461451ea40',uuid=75fea627-8d48-4772-86a2-ca6bc47ed597,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.445 2 DEBUG nova.network.os_vif_util [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converting VIF {"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.446 2 DEBUG nova.network.os_vif_util [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.446 2 DEBUG os_vif [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.448 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.448 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.452 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcdae5f4d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcdae5f4d-20, col_values=(('external_ids', {'iface-id': 'cdae5f4d-2069-4d30-bd74-31e9459986db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6e:ba:59', 'vm-uuid': '75fea627-8d48-4772-86a2-ca6bc47ed597'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:09:59 np0005473739 NetworkManager[44949]: <info>  [1759846199.4551] manager: (tapcdae5f4d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/100)
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.460 2 INFO nova.compute.manager [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Took 0.86 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.460 2 DEBUG oslo.service.loopingcall [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.460 2 DEBUG nova.compute.manager [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.460 2 DEBUG nova.network.neutron [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.464 2 INFO os_vif [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20')#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.531 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.539 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846184.5341327, f707575c-3219-44e4-9655-ccc194e7385d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.539 2 INFO nova.compute.manager [-] [instance: f707575c-3219-44e4-9655-ccc194e7385d] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:09:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1317: 305 pgs: 305 active+clean; 239 MiB data, 450 MiB used, 60 GiB / 60 GiB avail; 1.7 MiB/s rd, 10 MiB/s wr, 341 op/s
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.592 2 DEBUG nova.compute.manager [None req-07f7a84b-4d3b-4a73-bd2e-52acc3dc4c68 - - - - - -] [instance: f707575c-3219-44e4-9655-ccc194e7385d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.618 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.619 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.620 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] No VIF found with MAC fa:16:3e:6e:ba:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.621 2 INFO nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Using config drive#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.654 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.745 2 DEBUG nova.network.neutron [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.786 2 DEBUG nova.policy [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.802 2 DEBUG nova.network.neutron [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.854 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.856 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.856 2 INFO nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Creating image(s)#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.881 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.911 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.947 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.952 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.989 2 DEBUG nova.network.neutron [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updated VIF entry in instance network info cache for port cdae5f4d-2069-4d30-bd74-31e9459986db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.990 2 DEBUG nova.network.neutron [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updating instance_info_cache with network_info: [{"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.993 2 DEBUG nova.compute.manager [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.994 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.994 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.994 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.995 2 DEBUG nova.compute.manager [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Processing event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.995 2 DEBUG nova.compute.manager [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.995 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.996 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.996 2 DEBUG oslo_concurrency.lockutils [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.996 2 DEBUG nova.compute.manager [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] No waiting events found dispatching network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.996 2 WARNING nova.compute.manager [req-0209439a-88e6-498c-b8fb-ef73e7b5b6c8 req-f5676d57-73bb-4b4e-b7a8-d60d54f764a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received unexpected event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.998 2 INFO nova.compute.manager [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Took 0.54 seconds to deallocate network for instance.#033[00m
Oct  7 10:09:59 np0005473739 nova_compute[259550]: 2025-10-07 14:09:59.998 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.006 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.007 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846200.0065184, 6a715eda-3357-4298-88cc-596cc9986690 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.007 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.012 2 INFO nova.virt.libvirt.driver [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Instance spawned successfully.#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.013 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.016 2 DEBUG oslo_concurrency.lockutils [req-a3dbefeb-e5ea-4f37-8e18-9cda8f7a7cb6 req-92d4d666-856e-4736-a622-093c10cb42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.020 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.020 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.021 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.021 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.043 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.044 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.045 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.050 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.054 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.092 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.098 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.098 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.104 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.109 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.110 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.110 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.111 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.111 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.111 2 DEBUG nova.virt.libvirt.driver [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.137 2 INFO nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Creating config drive at /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.142 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp31881ybv execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.256 2 DEBUG oslo_concurrency.processutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.293 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.300 2 INFO nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Took 10.04 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.301 2 DEBUG nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.303 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp31881ybv" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.332 2 DEBUG nova.storage.rbd_utils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] rbd image 75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.338 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config 75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.370 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Successfully created port: 7f14c952-52c5-4957-ae99-09ff563a62a9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.405 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.501 2 INFO nova.compute.manager [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Took 11.22 seconds to build instance.#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.508 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] resizing rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.542 2 DEBUG oslo_concurrency.lockutils [None req-811b0d7d-4519-4d33-b087-d53005d1d1fc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.544 2 DEBUG oslo_concurrency.processutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config 75fea627-8d48-4772-86a2-ca6bc47ed597_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.206s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.544 2 INFO nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Deleting local config drive /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597/disk.config because it was imported into RBD.#033[00m
Oct  7 10:10:00 np0005473739 NetworkManager[44949]: <info>  [1759846200.6116] manager: (tapcdae5f4d-20): new Tun device (/org/freedesktop/NetworkManager/Devices/101)
Oct  7 10:10:00 np0005473739 kernel: tapcdae5f4d-20: entered promiscuous mode
Oct  7 10:10:00 np0005473739 systemd-udevd[298427]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:10:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:00Z|00194|binding|INFO|Claiming lport cdae5f4d-2069-4d30-bd74-31e9459986db for this chassis.
Oct  7 10:10:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:00Z|00195|binding|INFO|cdae5f4d-2069-4d30-bd74-31e9459986db: Claiming fa:16:3e:6e:ba:59 10.100.0.11
Oct  7 10:10:00 np0005473739 NetworkManager[44949]: <info>  [1759846200.6336] device (tapcdae5f4d-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.633 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:ba:59 10.100.0.11'], port_security=['fa:16:3e:6e:ba:59 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '75fea627-8d48-4772-86a2-ca6bc47ed597', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da79ea7f82af425b975ddff6ef03a59a', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f73ed2d1-0e14-4944-923b-e8979adde65e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d43ba345-e1ae-4a7f-bf78-bc4ea18d4057, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=cdae5f4d-2069-4d30-bd74-31e9459986db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:10:00 np0005473739 NetworkManager[44949]: <info>  [1759846200.6352] device (tapcdae5f4d-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.635 161536 INFO neutron.agent.ovn.metadata.agent [-] Port cdae5f4d-2069-4d30-bd74-31e9459986db in datapath f753e53f-89de-4a31-8fea-3e9e1684d72a bound to our chassis#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.637 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network f753e53f-89de-4a31-8fea-3e9e1684d72a#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.655 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[006d84fe-711a-4294-8601-38eab7e4cd76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.656 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapf753e53f-81 in ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.659 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapf753e53f-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.659 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f034ca9-e138-4081-ac67-897357871b77]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 systemd-machined[214580]: New machine qemu-35-instance-0000001f.
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.660 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0268c978-87dc-4a2b-a4c0-74c9c2c9953c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.662 2 DEBUG nova.objects.instance [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:00 np0005473739 systemd[1]: Started Virtual Machine qemu-35-instance-0000001f.
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.673 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e41d515b-2ac9-4a9d-a72a-87f7fddaf361]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.689 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.689 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Ensure instance console log exists: /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.690 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.690 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.691 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.703 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5a01c1-2eda-41c8-a5e3-43dd4aeaf3d5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:00Z|00196|binding|INFO|Setting lport cdae5f4d-2069-4d30-bd74-31e9459986db ovn-installed in OVS
Oct  7 10:10:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:00Z|00197|binding|INFO|Setting lport cdae5f4d-2069-4d30-bd74-31e9459986db up in Southbound
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.739 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fe551248-7767-410d-bf0e-47cffb7c4dab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.747 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f4009113-d290-4b5d-b09f-4a571f29c78f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 NetworkManager[44949]: <info>  [1759846200.7488] manager: (tapf753e53f-80): new Veth device (/org/freedesktop/NetworkManager/Devices/102)
Oct  7 10:10:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:10:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2950897051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.792 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e34b23a6-bb9c-4190-a65d-f240d292eb06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.794 2 DEBUG oslo_concurrency.processutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.538s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.796 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ef6c4122-5c10-4c8d-a12e-7dbb44505a45]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e168 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e168 do_prune osdmap full prune enabled
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.803 2 DEBUG nova.compute.provider_tree [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:10:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e169 e169: 3 total, 3 up, 3 in
Oct  7 10:10:00 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e169: 3 total, 3 up, 3 in
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.833 2 DEBUG nova.scheduler.client.report [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:10:00 np0005473739 NetworkManager[44949]: <info>  [1759846200.8350] device (tapf753e53f-80): carrier: link connected
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.841 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1a7587a8-18e2-4ed7-9f9b-eea2cde19a52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.863 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[340c894d-6978-4b2a-94f8-f2a6bd384092]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf753e53f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:5c:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678439, 'reachable_time': 36927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 298921, 'error': None, 'target': 'ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.882 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.784s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.883 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7086cd00-4aca-4cd2-85c1-6b099eb77061]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed8:5ca8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678439, 'tstamp': 678439}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 298922, 'error': None, 'target': 'ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.907 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[77d77f33-8f73-4e99-9f95-017f6cc02555]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapf753e53f-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d8:5c:a8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 65], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678439, 'reachable_time': 36927, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 298923, 'error': None, 'target': 'ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:00 np0005473739 nova_compute[259550]: 2025-10-07 14:10:00.940 2 INFO nova.scheduler.client.report [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Deleted allocations for instance b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca#033[00m
Oct  7 10:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:00.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6cda73f0-466f-40de-a68e-eca089f92d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.023 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6278cd10-9ad6-4bc0-a7a3-7151065d9954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf753e53f-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf753e53f-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:01 np0005473739 NetworkManager[44949]: <info>  [1759846201.0281] manager: (tapf753e53f-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/103)
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:01 np0005473739 kernel: tapf753e53f-80: entered promiscuous mode
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.032 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapf753e53f-80, col_values=(('external_ids', {'iface-id': 'd1eb01c1-56fa-4d3a-85d1-57cfa37e8fb9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:01Z|00198|binding|INFO|Releasing lport d1eb01c1-56fa-4d3a-85d1-57cfa37e8fb9 from this chassis (sb_readonly=0)
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.036 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/f753e53f-89de-4a31-8fea-3e9e1684d72a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/f753e53f-89de-4a31-8fea-3e9e1684d72a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.037 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4405c6-b259-45a2-a584-8ffa913e4558]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.038 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-f753e53f-89de-4a31-8fea-3e9e1684d72a
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/f753e53f-89de-4a31-8fea-3e9e1684d72a.pid.haproxy
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID f753e53f-89de-4a31-8fea-3e9e1684d72a
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:10:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:01.039 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'env', 'PROCESS_TAG=haproxy-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/f753e53f-89de-4a31-8fea-3e9e1684d72a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.111 2 DEBUG oslo_concurrency.lockutils [None req-51d964b5-feca-4d94-83a1-30cc63291a3d 61181e933b9841959e5a4a707bc48adf 5fb6da28aeda44c5b898c384c6853b38 - - default default] Lock "b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:01 np0005473739 podman[298997]: 2025-10-07 14:10:01.509091924 +0000 UTC m=+0.065760529 container create 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:10:01 np0005473739 systemd[1]: Started libpod-conmon-01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643.scope.
Oct  7 10:10:01 np0005473739 podman[298997]: 2025-10-07 14:10:01.475612994 +0000 UTC m=+0.032281619 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:10:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1319: 305 pgs: 305 active+clean; 212 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 210 KiB/s rd, 4.8 MiB/s wr, 253 op/s
Oct  7 10:10:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:10:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b521a0ec4b82aa4091ed78acfb32e87147f7bbf599891bee81258eeecedcd051/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:01 np0005473739 podman[298997]: 2025-10-07 14:10:01.618336576 +0000 UTC m=+0.175005211 container init 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:10:01 np0005473739 podman[298997]: 2025-10-07 14:10:01.627392163 +0000 UTC m=+0.184060778 container start 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:10:01 np0005473739 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [NOTICE]   (299017) : New worker (299019) forked
Oct  7 10:10:01 np0005473739 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [NOTICE]   (299017) : Loading success.
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.685 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846201.684147, 75fea627-8d48-4772-86a2-ca6bc47ed597 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.685 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] VM Started (Lifecycle Event)#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.718 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.721 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Successfully updated port: 7f14c952-52c5-4957-ae99-09ff563a62a9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.725 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846201.6876004, 75fea627-8d48-4772-86a2-ca6bc47ed597 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.725 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.918 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.919 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.919 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.920 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.924 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:10:01 np0005473739 nova_compute[259550]: 2025-10-07 14:10:01.952 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.051 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.127 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received event network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.127 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.128 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.128 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.128 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Processing event network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.128 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-changed-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.128 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Refreshing instance network info cache due to event network-changed-7f14c952-52c5-4957-ae99-09ff563a62a9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.129 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.129 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.136 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846202.1358852, 75fea627-8d48-4772-86a2-ca6bc47ed597 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.136 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.141 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.146 2 INFO nova.virt.libvirt.driver [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Instance spawned successfully.#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.146 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.155 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.173 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.182 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.183 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.183 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.184 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.185 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.185 2 DEBUG nova.virt.libvirt.driver [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.198 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.253 2 INFO nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Took 7.63 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.254 2 DEBUG nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.319 2 INFO nova.compute.manager [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Took 8.87 seconds to build instance.#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.336 2 DEBUG oslo_concurrency.lockutils [None req-110bfce5-9572-4c70-99e5-1a01db53c38e 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.981s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.908 2 DEBUG nova.network.neutron [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updating instance_info_cache with network_info: [{"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.977 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.977 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Instance network_info: |[{"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.978 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.978 2 DEBUG nova.network.neutron [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Refreshing network info cache for port 7f14c952-52c5-4957-ae99-09ff563a62a9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.981 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Start _get_guest_xml network_info=[{"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.989 2 WARNING nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.994 2 DEBUG nova.virt.libvirt.host [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.994 2 DEBUG nova.virt.libvirt.host [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.997 2 DEBUG nova.virt.libvirt.host [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.998 2 DEBUG nova.virt.libvirt.host [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.998 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.998 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:10:02 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.999 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.999 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.999 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.999 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:02.999 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.000 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.000 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.000 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.000 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.000 2 DEBUG nova.virt.hardware [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.003 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:03 np0005473739 podman[299028]: 2025-10-07 14:10:03.127392692 +0000 UTC m=+0.108677317 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.225 2 DEBUG nova.compute.manager [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.273 2 INFO nova.compute.manager [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] instance snapshotting#033[00m
Oct  7 10:10:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:10:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931805357' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.528 2 INFO nova.virt.libvirt.driver [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Beginning live snapshot process#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.549 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.546s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1320: 305 pgs: 305 active+clean; 212 MiB data, 438 MiB used, 60 GiB / 60 GiB avail; 81 KiB/s rd, 1.7 MiB/s wr, 121 op/s
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.589 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:03 np0005473739 nova_compute[259550]: 2025-10-07 14:10:03.595 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:10:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/340070330' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.100 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.101 2 DEBUG nova.virt.libvirt.vif [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-85849554',display_name='tempest-ImagesTestJSON-server-85849554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-85849554',id=32,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-3510ii32',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:59Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=20fd8b19-2eb6-4c42-a320-d9d23f6f4912,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.102 2 DEBUG nova.network.os_vif_util [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.102 2 DEBUG nova.network.os_vif_util [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.104 2 DEBUG nova.objects.instance [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.132 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  <uuid>20fd8b19-2eb6-4c42-a320-d9d23f6f4912</uuid>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  <name>instance-00000020</name>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <nova:name>tempest-ImagesTestJSON-server-85849554</nova:name>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:10:02</nova:creationTime>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <nova:port uuid="7f14c952-52c5-4957-ae99-09ff563a62a9">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <entry name="serial">20fd8b19-2eb6-4c42-a320-d9d23f6f4912</entry>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <entry name="uuid">20fd8b19-2eb6-4c42-a320-d9d23f6f4912</entry>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:35:62:58"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <target dev="tap7f14c952-52"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/console.log" append="off"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:10:04 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:10:04 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:10:04 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:10:04 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.133 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Preparing to wait for external event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.133 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.133 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.134 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.135 2 DEBUG nova.virt.libvirt.vif [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:09:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-85849554',display_name='tempest-ImagesTestJSON-server-85849554',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-85849554',id=32,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-3510ii32',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:09:59Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=20fd8b19-2eb6-4c42-a320-d9d23f6f4912,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.135 2 DEBUG nova.network.os_vif_util [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.136 2 DEBUG nova.network.os_vif_util [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.136 2 DEBUG os_vif [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7f14c952-52, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7f14c952-52, col_values=(('external_ids', {'iface-id': '7f14c952-52c5-4957-ae99-09ff563a62a9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:35:62:58', 'vm-uuid': '20fd8b19-2eb6-4c42-a320-d9d23f6f4912'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:04 np0005473739 NetworkManager[44949]: <info>  [1759846204.2212] manager: (tap7f14c952-52): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/104)
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.234 2 DEBUG nova.virt.libvirt.imagebackend [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.242 2 INFO os_vif [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52')#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.243 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846189.241949, 643096ea-a4a9-4b3f-b8c1-130423a4248e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.244 2 INFO nova.compute.manager [-] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.314 2 DEBUG nova.compute.manager [None req-b71628a9-9a1c-46bc-9206-06b4c81d5ec3 - - - - - -] [instance: 643096ea-a4a9-4b3f-b8c1-130423a4248e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:04 np0005473739 podman[299151]: 2025-10-07 14:10:04.344989127 +0000 UTC m=+0.059409292 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent)
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.435 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] creating snapshot(ce057ab149974b87af48d06eea75efe6) on rbd image(6a715eda-3357-4298-88cc-596cc9986690_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.491 2 DEBUG nova.network.neutron [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updated VIF entry in instance network info cache for port 7f14c952-52c5-4957-ae99-09ff563a62a9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.494 2 DEBUG nova.network.neutron [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updating instance_info_cache with network_info: [{"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e169 do_prune osdmap full prune enabled
Oct  7 10:10:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e170 e170: 3 total, 3 up, 3 in
Oct  7 10:10:04 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e170: 3 total, 3 up, 3 in
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.601 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.603 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.603 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:35:62:58, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.604 2 INFO nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Using config drive#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.642 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.648 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.649 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received event network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.649 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.649 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.649 2 DEBUG oslo_concurrency.lockutils [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.649 2 DEBUG nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] No waiting events found dispatching network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.650 2 WARNING nova.compute.manager [req-930d1609-9d64-4d6b-877c-3fcc4461f398 req-f5ebbd14-5890-4b56-a34e-7ca73467c428 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received unexpected event network-vif-plugged-cdae5f4d-2069-4d30-bd74-31e9459986db for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.678 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] cloning vms/6a715eda-3357-4298-88cc-596cc9986690_disk@ce057ab149974b87af48d06eea75efe6 to images/b9cff922-c050-4d43-8b7e-2053a2028ec0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.787 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] flattening images/b9cff922-c050-4d43-8b7e-2053a2028ec0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:10:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:04Z|00199|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct  7 10:10:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:04Z|00200|binding|INFO|Releasing lport d1eb01c1-56fa-4d3a-85d1-57cfa37e8fb9 from this chassis (sb_readonly=0)
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:04 np0005473739 NetworkManager[44949]: <info>  [1759846204.8887] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/105)
Oct  7 10:10:04 np0005473739 NetworkManager[44949]: <info>  [1759846204.8897] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/106)
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:04Z|00201|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct  7 10:10:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:04Z|00202|binding|INFO|Releasing lport d1eb01c1-56fa-4d3a-85d1-57cfa37e8fb9 from this chassis (sb_readonly=0)
Oct  7 10:10:04 np0005473739 nova_compute[259550]: 2025-10-07 14:10:04.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:04.995 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.073 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] removing snapshot(ce057ab149974b87af48d06eea75efe6) on rbd image(6a715eda-3357-4298-88cc-596cc9986690_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.217 2 INFO nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Creating config drive at /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.223 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw3qoabcg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.365 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw3qoabcg" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.392 2 DEBUG nova.storage.rbd_utils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.396 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.434 2 DEBUG nova.compute.manager [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received event network-changed-cdae5f4d-2069-4d30-bd74-31e9459986db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.435 2 DEBUG nova.compute.manager [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Refreshing instance network info cache due to event network-changed-cdae5f4d-2069-4d30-bd74-31e9459986db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.435 2 DEBUG oslo_concurrency.lockutils [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.436 2 DEBUG oslo_concurrency.lockutils [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.436 2 DEBUG nova.network.neutron [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Refreshing network info cache for port cdae5f4d-2069-4d30-bd74-31e9459986db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.578 2 DEBUG oslo_concurrency.processutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config 20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.579 2 INFO nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Deleting local config drive /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912/disk.config because it was imported into RBD.#033[00m
Oct  7 10:10:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1322: 305 pgs: 305 active+clean; 204 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 6.8 MiB/s rd, 4.9 MiB/s wr, 440 op/s
Oct  7 10:10:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e170 do_prune osdmap full prune enabled
Oct  7 10:10:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e171 e171: 3 total, 3 up, 3 in
Oct  7 10:10:05 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e171: 3 total, 3 up, 3 in
Oct  7 10:10:05 np0005473739 NetworkManager[44949]: <info>  [1759846205.6485] manager: (tap7f14c952-52): new Tun device (/org/freedesktop/NetworkManager/Devices/107)
Oct  7 10:10:05 np0005473739 kernel: tap7f14c952-52: entered promiscuous mode
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:05Z|00203|binding|INFO|Claiming lport 7f14c952-52c5-4957-ae99-09ff563a62a9 for this chassis.
Oct  7 10:10:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:05Z|00204|binding|INFO|7f14c952-52c5-4957-ae99-09ff563a62a9: Claiming fa:16:3e:35:62:58 10.100.0.12
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.678 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] creating snapshot(snap) on rbd image(b9cff922-c050-4d43-8b7e-2053a2028ec0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:10:05 np0005473739 systemd-machined[214580]: New machine qemu-36-instance-00000020.
Oct  7 10:10:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:05Z|00205|binding|INFO|Setting lport 7f14c952-52c5-4957-ae99-09ff563a62a9 ovn-installed in OVS
Oct  7 10:10:05 np0005473739 systemd[1]: Started Virtual Machine qemu-36-instance-00000020.
Oct  7 10:10:05 np0005473739 systemd-udevd[299348]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:10:05 np0005473739 NetworkManager[44949]: <info>  [1759846205.7304] device (tap7f14c952-52): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:10:05 np0005473739 NetworkManager[44949]: <info>  [1759846205.7320] device (tap7f14c952-52): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:10:05 np0005473739 nova_compute[259550]: 2025-10-07 14:10:05.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.753 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:62:58 10.100.0.12'], port_security=['fa:16:3e:35:62:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20fd8b19-2eb6-4c42-a320-d9d23f6f4912', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=7f14c952-52c5-4957-ae99-09ff563a62a9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:10:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:05Z|00206|binding|INFO|Setting lport 7f14c952-52c5-4957-ae99-09ff563a62a9 up in Southbound
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.754 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 7f14c952-52c5-4957-ae99-09ff563a62a9 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb bound to our chassis#033[00m
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.755 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb#033[00m
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.770 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17f4413c-c61a-4334-aa71-1aaea29e911b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.771 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f80456d-d1 in ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.774 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f80456d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.774 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba618198-dd0f-4fcc-a215-9f4a47ab1881]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.775 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[879075b2-32e4-4877-9ab3-2c00c2621a67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.793 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6d45ac08-1737-42a1-808f-44ccd1a27bf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e171 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.822 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12902a3f-c252-42bb-8639-b0e97c839fcb]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.870 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[07e0d300-c03f-4728-a48b-7e608e6ae8dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:05 np0005473739 systemd-udevd[299352]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:10:05 np0005473739 NetworkManager[44949]: <info>  [1759846205.8842] manager: (tap9f80456d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/108)
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.882 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1414cfb1-35a9-4a92-9050-71ae77b9747a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.927 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6ac795d1-cb70-45f9-990f-875363535c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.930 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7be9f388-4322-4ecd-add4-911b6b6077cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:05 np0005473739 NetworkManager[44949]: <info>  [1759846205.9651] device (tap9f80456d-d0): carrier: link connected
Oct  7 10:10:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:05.973 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a1aceb67-0dde-48a1-81be-5ce91d8a11cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.001 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdfb177-c375-4d6f-a296-117f009f1b31]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678952, 'reachable_time': 35158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299390, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.020 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68496312-09d5-430f-ac91-db859f70d622]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:18ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678952, 'tstamp': 678952}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 299409, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.040 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aed382a4-9f71-4733-843e-7183b345d4d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678952, 'reachable_time': 35158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 299418, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.099 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f0a595c8-a5d0-49dc-97cf-f8f323bcb397]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.172 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea12977-433b-48d2-8ad0-2e959ff68723]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.175 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.175 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.176 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:06 np0005473739 NetworkManager[44949]: <info>  [1759846206.1793] manager: (tap9f80456d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/109)
Oct  7 10:10:06 np0005473739 kernel: tap9f80456d-d0: entered promiscuous mode
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.183 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:06Z|00207|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.208 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.209 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3e84846a-caff-4909-b1fa-a6c6f2a3fa56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.210 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:10:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:06.212 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'env', 'PROCESS_TAG=haproxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:10:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e171 do_prune osdmap full prune enabled
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.617 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846206.6166348, 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.619 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] VM Started (Lifecycle Event)#033[00m
Oct  7 10:10:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e172 e172: 3 total, 3 up, 3 in
Oct  7 10:10:06 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e172: 3 total, 3 up, 3 in
Oct  7 10:10:06 np0005473739 podman[299457]: 2025-10-07 14:10:06.644602434 +0000 UTC m=+0.070800152 container create d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.688 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:06 np0005473739 systemd[1]: Started libpod-conmon-d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d.scope.
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.695 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846206.6170514, 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.696 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:10:06 np0005473739 podman[299457]: 2025-10-07 14:10:06.608462335 +0000 UTC m=+0.034660083 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:10:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:10:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cdf6c25b6716cbbe88cb0b1eb25df944f3562caf1cdf4795012d2140fef4cae0/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:06 np0005473739 podman[299457]: 2025-10-07 14:10:06.74446374 +0000 UTC m=+0.170661478 container init d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  7 10:10:06 np0005473739 podman[299457]: 2025-10-07 14:10:06.750318993 +0000 UTC m=+0.176516711 container start d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:10:06 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [NOTICE]   (299476) : New worker (299478) forked
Oct  7 10:10:06 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [NOTICE]   (299476) : Loading success.
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.804 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.809 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image b9cff922-c050-4d43-8b7e-2053a2028ec0 could not be found.
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID b9cff922-c050-4d43-8b7e-2053a2028ec0
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver 
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver 
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image b9cff922-c050-4d43-8b7e-2053a2028ec0 could not be found.
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.846 2 ERROR nova.virt.libvirt.driver #033[00m
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.893 2 DEBUG nova.storage.rbd_utils [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] removing snapshot(snap) on rbd image(b9cff922-c050-4d43-8b7e-2053a2028ec0) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.896 2 DEBUG nova.network.neutron [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updated VIF entry in instance network info cache for port cdae5f4d-2069-4d30-bd74-31e9459986db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:10:06 np0005473739 nova_compute[259550]: 2025-10-07 14:10:06.897 2 DEBUG nova.network.neutron [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updating instance_info_cache with network_info: [{"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.087 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.291 2 DEBUG oslo_concurrency.lockutils [req-aa44b4fe-f44c-4e5d-af9e-cd4d3d8efc5d req-3256a92d-f7ab-4702-925b-0fd6560b44c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-75fea627-8d48-4772-86a2-ca6bc47ed597" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.537 2 DEBUG nova.compute.manager [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.537 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.537 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.537 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG nova.compute.manager [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Processing event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG nova.compute.manager [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG oslo_concurrency.lockutils [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.538 2 DEBUG nova.compute.manager [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] No waiting events found dispatching network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.539 2 WARNING nova.compute.manager [req-2588af81-f97e-4426-b26b-e8a36de4ee01 req-43adbc73-e433-42bf-ab00-68323e42fca0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received unexpected event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.539 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.551 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846207.5510323, 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.551 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.553 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.556 2 INFO nova.virt.libvirt.driver [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Instance spawned successfully.#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.556 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.585 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1325: 305 pgs: 305 active+clean; 204 MiB data, 412 MiB used, 60 GiB / 60 GiB avail; 8.9 MiB/s rd, 4.3 MiB/s wr, 425 op/s
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.590 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.593 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.593 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.593 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.594 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.594 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.595 2 DEBUG nova.virt.libvirt.driver [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e172 do_prune osdmap full prune enabled
Oct  7 10:10:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e173 e173: 3 total, 3 up, 3 in
Oct  7 10:10:07 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e173: 3 total, 3 up, 3 in
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.711 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.803 2 INFO nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Took 7.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:10:07 np0005473739 nova_compute[259550]: 2025-10-07 14:10:07.804 2 DEBUG nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:08 np0005473739 nova_compute[259550]: 2025-10-07 14:10:08.119 2 INFO nova.compute.manager [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Took 9.58 seconds to build instance.#033[00m
Oct  7 10:10:08 np0005473739 nova_compute[259550]: 2025-10-07 14:10:08.194 2 WARNING nova.compute.manager [None req-e2c304b8-402a-43e3-9246-5d5441c9e953 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Image not found during snapshot: nova.exception.ImageNotFound: Image b9cff922-c050-4d43-8b7e-2053a2028ec0 could not be found.#033[00m
Oct  7 10:10:08 np0005473739 nova_compute[259550]: 2025-10-07 14:10:08.261 2 DEBUG oslo_concurrency.lockutils [None req-0beed423-f253-4be0-8175-19e84c2482da a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.790s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1327: 305 pgs: 305 active+clean; 204 MiB data, 420 MiB used, 60 GiB / 60 GiB avail; 12 MiB/s rd, 4.3 MiB/s wr, 403 op/s
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.701 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.702 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.703 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.703 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.703 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.705 2 INFO nova.compute.manager [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Terminating instance#033[00m
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.706 2 DEBUG nova.compute.manager [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:10:09 np0005473739 kernel: tap18b5959f-43 (unregistering): left promiscuous mode
Oct  7 10:10:09 np0005473739 NetworkManager[44949]: <info>  [1759846209.7542] device (tap18b5959f-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:10:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:09Z|00208|binding|INFO|Releasing lport 18b5959f-439a-440f-8a00-0709f1f9730b from this chassis (sb_readonly=0)
Oct  7 10:10:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:09Z|00209|binding|INFO|Setting lport 18b5959f-439a-440f-8a00-0709f1f9730b down in Southbound
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:09Z|00210|binding|INFO|Removing iface tap18b5959f-43 ovn-installed in OVS
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:09.817 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2a:94:47 10.100.0.3'], port_security=['fa:16:3e:2a:94:47 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '6a715eda-3357-4298-88cc-596cc9986690', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '4', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=18b5959f-439a-440f-8a00-0709f1f9730b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:10:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:09.819 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 18b5959f-439a-440f-8a00-0709f1f9730b in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d unbound from our chassis#033[00m
Oct  7 10:10:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:09.820 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:10:09 np0005473739 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Deactivated successfully.
Oct  7 10:10:09 np0005473739 systemd[1]: machine-qemu\x2d34\x2dinstance\x2d0000001e.scope: Consumed 10.641s CPU time.
Oct  7 10:10:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:09.822 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[afac75ad-f9a4-4aa7-a31f-7bc25a1f2dba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:09.822 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace which is not needed anymore#033[00m
Oct  7 10:10:09 np0005473739 systemd-machined[214580]: Machine qemu-34-instance-0000001e terminated.
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.874 2 DEBUG nova.compute.manager [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.955 2 INFO nova.compute.manager [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] instance snapshotting#033[00m
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.966 2 INFO nova.virt.libvirt.driver [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Instance destroyed successfully.#033[00m
Oct  7 10:10:09 np0005473739 nova_compute[259550]: 2025-10-07 14:10:09.967 2 DEBUG nova.objects.instance [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'resources' on Instance uuid 6a715eda-3357-4298-88cc-596cc9986690 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.049 2 DEBUG nova.virt.libvirt.vif [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:09:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-799592627',display_name='tempest-ImagesOneServerNegativeTestJSON-server-799592627',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-799592627',id=30,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:10:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-7fs0zddu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif
_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:10:08Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=6a715eda-3357-4298-88cc-596cc9986690,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.049 2 DEBUG nova.network.os_vif_util [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "18b5959f-439a-440f-8a00-0709f1f9730b", "address": "fa:16:3e:2a:94:47", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap18b5959f-43", "ovs_interfaceid": "18b5959f-439a-440f-8a00-0709f1f9730b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.050 2 DEBUG nova.network.os_vif_util [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.050 2 DEBUG os_vif [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.053 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap18b5959f-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.061 2 INFO os_vif [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2a:94:47,bridge_name='br-int',has_traffic_filtering=True,id=18b5959f-439a-440f-8a00-0709f1f9730b,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap18b5959f-43')#033[00m
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.114 2 INFO nova.virt.libvirt.driver [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Beginning live snapshot process#033[00m
Oct  7 10:10:10 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [NOTICE]   (298607) : haproxy version is 2.8.14-c23fe91
Oct  7 10:10:10 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [NOTICE]   (298607) : path to executable is /usr/sbin/haproxy
Oct  7 10:10:10 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [WARNING]  (298607) : Exiting Master process...
Oct  7 10:10:10 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [WARNING]  (298607) : Exiting Master process...
Oct  7 10:10:10 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [ALERT]    (298607) : Current worker (298609) exited with code 143 (Terminated)
Oct  7 10:10:10 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[298600]: [WARNING]  (298607) : All workers exited. Exiting... (0)
Oct  7 10:10:10 np0005473739 systemd[1]: libpod-73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2.scope: Deactivated successfully.
Oct  7 10:10:10 np0005473739 podman[299556]: 2025-10-07 14:10:10.217198672 +0000 UTC m=+0.242093794 container died 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.373 2 DEBUG nova.virt.libvirt.imagebackend [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.568 2 DEBUG nova.storage.rbd_utils [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(b032de4bb5ff4d25b63fdb13e521c53d) on rbd image(20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:10:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2-userdata-shm.mount: Deactivated successfully.
Oct  7 10:10:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b77270b12605ac3132ea4eef6c757661be710663f1f44a5cebdb8882546cdf8c-merged.mount: Deactivated successfully.
Oct  7 10:10:10 np0005473739 nova_compute[259550]: 2025-10-07 14:10:10.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e173 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:10 np0005473739 podman[299556]: 2025-10-07 14:10:10.851367602 +0000 UTC m=+0.876262744 container cleanup 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:10:10 np0005473739 systemd[1]: libpod-conmon-73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2.scope: Deactivated successfully.
Oct  7 10:10:11 np0005473739 nova_compute[259550]: 2025-10-07 14:10:11.074 2 DEBUG nova.compute.manager [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-unplugged-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:11 np0005473739 nova_compute[259550]: 2025-10-07 14:10:11.075 2 DEBUG oslo_concurrency.lockutils [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:11 np0005473739 nova_compute[259550]: 2025-10-07 14:10:11.076 2 DEBUG oslo_concurrency.lockutils [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:11 np0005473739 nova_compute[259550]: 2025-10-07 14:10:11.076 2 DEBUG oslo_concurrency.lockutils [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:11 np0005473739 nova_compute[259550]: 2025-10-07 14:10:11.076 2 DEBUG nova.compute.manager [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] No waiting events found dispatching network-vif-unplugged-18b5959f-439a-440f-8a00-0709f1f9730b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:10:11 np0005473739 nova_compute[259550]: 2025-10-07 14:10:11.076 2 DEBUG nova.compute.manager [req-cc0df120-9087-4756-adf5-a53dc44ee329 req-71541f96-3393-474e-9af7-159fb0aea20e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-unplugged-18b5959f-439a-440f-8a00-0709f1f9730b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:10:11 np0005473739 podman[299657]: 2025-10-07 14:10:11.556166477 +0000 UTC m=+0.663841800 container remove 73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:10:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.562 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d00552c3-ef44-48a2-b70b-2b05663edda8]: (4, ('Tue Oct  7 02:10:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2)\n73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2\nTue Oct  7 02:10:10 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2)\n73504acf84b2766cb603d822dd8a9ff6efac354822cfd24fc991c0d0263573c2\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.564 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a12158f0-6983-4c4c-a32a-6e18a9c80a48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.565 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:11 np0005473739 nova_compute[259550]: 2025-10-07 14:10:11.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:11 np0005473739 kernel: tap0a5c95d4-10: left promiscuous mode
Oct  7 10:10:11 np0005473739 nova_compute[259550]: 2025-10-07 14:10:11.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1328: 305 pgs: 305 active+clean; 181 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 2.1 MiB/s wr, 241 op/s
Oct  7 10:10:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.592 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5474952a-3d45-437e-bea5-2b39cd3ea89c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.625 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c73b979a-78c8-48c3-90a5-bc6b413ad55f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.627 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66361437-d505-4e5b-a7a0-49c613f94424]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[31f2b76a-c67e-4fe2-b89a-7c0aba4d7926]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678172, 'reachable_time': 44213, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 299673, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:11 np0005473739 systemd[1]: run-netns-ovnmeta\x2d0a5c95d4\x2d1a77\x2d48f5\x2d83c0\x2dafa976b7583d.mount: Deactivated successfully.
Oct  7 10:10:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.650 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:10:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:11.650 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6ed03f-6cf3-42ba-b5a5-07defe9502fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e173 do_prune osdmap full prune enabled
Oct  7 10:10:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e174 e174: 3 total, 3 up, 3 in
Oct  7 10:10:12 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e174: 3 total, 3 up, 3 in
Oct  7 10:10:12 np0005473739 nova_compute[259550]: 2025-10-07 14:10:12.548 2 DEBUG nova.storage.rbd_utils [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning vms/20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk@b032de4bb5ff4d25b63fdb13e521c53d to images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:10:12 np0005473739 nova_compute[259550]: 2025-10-07 14:10:12.700 2 DEBUG nova.storage.rbd_utils [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] flattening images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.068 2 INFO nova.virt.libvirt.driver [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Deleting instance files /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690_del#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.070 2 INFO nova.virt.libvirt.driver [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Deletion of /var/lib/nova/instances/6a715eda-3357-4298-88cc-596cc9986690_del complete#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.094 2 DEBUG nova.storage.rbd_utils [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] removing snapshot(b032de4bb5ff4d25b63fdb13e521c53d) on rbd image(20fd8b19-2eb6-4c42-a320-d9d23f6f4912_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.224 2 DEBUG nova.compute.manager [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.226 2 DEBUG oslo_concurrency.lockutils [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6a715eda-3357-4298-88cc-596cc9986690-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.226 2 DEBUG oslo_concurrency.lockutils [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.227 2 DEBUG oslo_concurrency.lockutils [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.227 2 DEBUG nova.compute.manager [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] No waiting events found dispatching network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.227 2 WARNING nova.compute.manager [req-4a6397f0-3608-4122-9c4b-44a0537c7b00 req-1f840e61-7c3f-43d1-a77b-814d7678ab1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received unexpected event network-vif-plugged-18b5959f-439a-440f-8a00-0709f1f9730b for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.235 2 INFO nova.compute.manager [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Took 3.53 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.236 2 DEBUG oslo.service.loopingcall [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.237 2 DEBUG nova.compute.manager [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.237 2 DEBUG nova.network.neutron [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:10:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1330: 305 pgs: 305 active+clean; 181 MiB data, 421 MiB used, 60 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 208 op/s
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.829 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846198.827514, b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.830 2 INFO nova.compute.manager [-] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.915 2 DEBUG nova.compute.manager [None req-9c0f6664-afda-49e2-a5c7-a70f4bac13fa - - - - - -] [instance: b3dfda48-6c7b-4961-b4e5-d29b9a60a7ca] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e174 do_prune osdmap full prune enabled
Oct  7 10:10:13 np0005473739 nova_compute[259550]: 2025-10-07 14:10:13.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e175 e175: 3 total, 3 up, 3 in
Oct  7 10:10:13 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e175: 3 total, 3 up, 3 in
Oct  7 10:10:14 np0005473739 nova_compute[259550]: 2025-10-07 14:10:14.058 2 DEBUG nova.storage.rbd_utils [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(snap) on rbd image(e82f6cd3-df6c-4f54-b0d1-baa8f3a03435) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:10:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e175 do_prune osdmap full prune enabled
Oct  7 10:10:15 np0005473739 nova_compute[259550]: 2025-10-07 14:10:15.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:15 np0005473739 nova_compute[259550]: 2025-10-07 14:10:15.199 2 DEBUG nova.network.neutron [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e176 e176: 3 total, 3 up, 3 in
Oct  7 10:10:15 np0005473739 nova_compute[259550]: 2025-10-07 14:10:15.308 2 DEBUG nova.compute.manager [req-7d984218-dbeb-4d48-be82-b19f01d0ce9d req-0f75ff97-3a72-4abf-b380-5d183704af77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Received event network-vif-deleted-18b5959f-439a-440f-8a00-0709f1f9730b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:15 np0005473739 nova_compute[259550]: 2025-10-07 14:10:15.308 2 INFO nova.compute.manager [req-7d984218-dbeb-4d48-be82-b19f01d0ce9d req-0f75ff97-3a72-4abf-b380-5d183704af77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Neutron deleted interface 18b5959f-439a-440f-8a00-0709f1f9730b; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:10:15 np0005473739 nova_compute[259550]: 2025-10-07 14:10:15.309 2 DEBUG nova.network.neutron [req-7d984218-dbeb-4d48-be82-b19f01d0ce9d req-0f75ff97-3a72-4abf-b380-5d183704af77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:15 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e176: 3 total, 3 up, 3 in
Oct  7 10:10:15 np0005473739 nova_compute[259550]: 2025-10-07 14:10:15.431 2 INFO nova.compute.manager [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Took 2.19 seconds to deallocate network for instance.#033[00m
Oct  7 10:10:15 np0005473739 nova_compute[259550]: 2025-10-07 14:10:15.502 2 DEBUG nova.compute.manager [req-7d984218-dbeb-4d48-be82-b19f01d0ce9d req-0f75ff97-3a72-4abf-b380-5d183704af77 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Detach interface failed, port_id=18b5959f-439a-440f-8a00-0709f1f9730b, reason: Instance 6a715eda-3357-4298-88cc-596cc9986690 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:10:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:15Z|00034|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6e:ba:59 10.100.0.11
Oct  7 10:10:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:15Z|00035|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6e:ba:59 10.100.0.11
Oct  7 10:10:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1333: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 198 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 7.0 MiB/s wr, 282 op/s
Oct  7 10:10:15 np0005473739 nova_compute[259550]: 2025-10-07 14:10:15.671 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:15 np0005473739 nova_compute[259550]: 2025-10-07 14:10:15.672 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:15 np0005473739 nova_compute[259550]: 2025-10-07 14:10:15.769 2 DEBUG oslo_concurrency.processutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e176 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e176 do_prune osdmap full prune enabled
Oct  7 10:10:15 np0005473739 nova_compute[259550]: 2025-10-07 14:10:15.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e177 e177: 3 total, 3 up, 3 in
Oct  7 10:10:15 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e177: 3 total, 3 up, 3 in
Oct  7 10:10:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:10:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1087346923' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:10:16 np0005473739 nova_compute[259550]: 2025-10-07 14:10:16.251 2 DEBUG oslo_concurrency.processutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:16 np0005473739 nova_compute[259550]: 2025-10-07 14:10:16.259 2 DEBUG nova.compute.provider_tree [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:10:16 np0005473739 nova_compute[259550]: 2025-10-07 14:10:16.335 2 DEBUG nova.scheduler.client.report [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:10:16 np0005473739 nova_compute[259550]: 2025-10-07 14:10:16.467 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.795s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:16 np0005473739 nova_compute[259550]: 2025-10-07 14:10:16.657 2 INFO nova.scheduler.client.report [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Deleted allocations for instance 6a715eda-3357-4298-88cc-596cc9986690#033[00m
Oct  7 10:10:16 np0005473739 nova_compute[259550]: 2025-10-07 14:10:16.746 2 DEBUG oslo_concurrency.lockutils [None req-01d0d7b3-587d-4702-8dbf-d3a5af1c3cbc 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "6a715eda-3357-4298-88cc-596cc9986690" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:16 np0005473739 nova_compute[259550]: 2025-10-07 14:10:16.862 2 INFO nova.virt.libvirt.driver [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Snapshot image upload complete#033[00m
Oct  7 10:10:16 np0005473739 nova_compute[259550]: 2025-10-07 14:10:16.863 2 INFO nova.compute.manager [None req-99dd7e8b-2e5f-4116-a5bb-64eb64c30bc7 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Took 6.91 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  7 10:10:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1335: 305 pgs: 5 active+clean+snaptrim_wait, 2 active+clean+snaptrim, 298 active+clean; 198 MiB data, 416 MiB used, 60 GiB / 60 GiB avail; 4.4 MiB/s rd, 7.8 MiB/s wr, 215 op/s
Oct  7 10:10:19 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #44. Immutable memtables: 1.
Oct  7 10:10:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1336: 305 pgs: 305 active+clean; 209 MiB data, 426 MiB used, 60 GiB / 60 GiB avail; 4.3 MiB/s rd, 7.8 MiB/s wr, 304 op/s
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.048 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.049 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.063 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.119 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.120 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.129 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.130 2 INFO nova.compute.claims [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.301 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:10:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2822149612' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.802 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.809 2 DEBUG nova.compute.provider_tree [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:10:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e177 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e177 do_prune osdmap full prune enabled
Oct  7 10:10:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e178 e178: 3 total, 3 up, 3 in
Oct  7 10:10:20 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e178: 3 total, 3 up, 3 in
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.829 2 DEBUG nova.scheduler.client.report [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.873 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.874 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:10:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:20Z|00036|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:35:62:58 10.100.0.12
Oct  7 10:10:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:20Z|00037|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:35:62:58 10.100.0.12
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.941 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.941 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:10:20 np0005473739 nova_compute[259550]: 2025-10-07 14:10:20.981 2 INFO nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.001 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.081 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.082 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.083 2 INFO nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Creating image(s)#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.114 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.138 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.165 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.173 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.212 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.213 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.228 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.239 2 DEBUG nova.policy [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '21b4c507f5c443f4b43306c884b1d67f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.247 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.248 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.248 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.248 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.269 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.274 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 30c3d75d-d021-411b-a277-f81ff1f707b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.337 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.337 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.345 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.345 2 INFO nova.compute.claims [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.507 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1338: 305 pgs: 305 active+clean; 227 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.1 MiB/s wr, 204 op/s
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.679 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 30c3d75d-d021-411b-a277-f81ff1f707b8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.405s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.760 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] resizing rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.869 2 DEBUG nova.objects.instance [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'migration_context' on Instance uuid 30c3d75d-d021-411b-a277-f81ff1f707b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.889 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.889 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Ensure instance console log exists: /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.890 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.890 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.890 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:10:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1669863368' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.990 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:21 np0005473739 nova_compute[259550]: 2025-10-07 14:10:21.997 2 DEBUG nova.compute.provider_tree [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.021 2 DEBUG nova.scheduler.client.report [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.051 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.713s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.052 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.116 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.117 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.143 2 INFO nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.165 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.269 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.270 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.271 2 INFO nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Creating image(s)#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.295 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.326 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.354 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.359 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "bcdb7363edcb468af4a4e5b5827ee7a05a3370a5" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.360 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "bcdb7363edcb468af4a4e5b5827ee7a05a3370a5" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.367 2 DEBUG nova.policy [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.389 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.390 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.390 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.390 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.391 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.392 2 INFO nova.compute.manager [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Terminating instance#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.393 2 DEBUG nova.compute.manager [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:10:22 np0005473739 kernel: tapcdae5f4d-20 (unregistering): left promiscuous mode
Oct  7 10:10:22 np0005473739 NetworkManager[44949]: <info>  [1759846222.4585] device (tapcdae5f4d-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:22Z|00211|binding|INFO|Releasing lport cdae5f4d-2069-4d30-bd74-31e9459986db from this chassis (sb_readonly=0)
Oct  7 10:10:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:22Z|00212|binding|INFO|Setting lport cdae5f4d-2069-4d30-bd74-31e9459986db down in Southbound
Oct  7 10:10:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:22Z|00213|binding|INFO|Removing iface tapcdae5f4d-20 ovn-installed in OVS
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.480 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:ba:59 10.100.0.11'], port_security=['fa:16:3e:6e:ba:59 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '75fea627-8d48-4772-86a2-ca6bc47ed597', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'da79ea7f82af425b975ddff6ef03a59a', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f73ed2d1-0e14-4944-923b-e8979adde65e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.192'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d43ba345-e1ae-4a7f-bf78-bc4ea18d4057, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=cdae5f4d-2069-4d30-bd74-31e9459986db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.482 161536 INFO neutron.agent.ovn.metadata.agent [-] Port cdae5f4d-2069-4d30-bd74-31e9459986db in datapath f753e53f-89de-4a31-8fea-3e9e1684d72a unbound from our chassis#033[00m
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.483 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network f753e53f-89de-4a31-8fea-3e9e1684d72a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.487 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c944664-18d0-4856-a2ab-e9654dc75a9d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.487 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a namespace which is not needed anymore#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:22 np0005473739 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Deactivated successfully.
Oct  7 10:10:22 np0005473739 systemd[1]: machine-qemu\x2d35\x2dinstance\x2d0000001f.scope: Consumed 13.883s CPU time.
Oct  7 10:10:22 np0005473739 systemd-machined[214580]: Machine qemu-35-instance-0000001f terminated.
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:22 np0005473739 podman[300054]: 2025-10-07 14:10:22.568647408 +0000 UTC m=+0.074152430 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid)
Oct  7 10:10:22 np0005473739 podman[300051]: 2025-10-07 14:10:22.578037584 +0000 UTC m=+0.089563875 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.593 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Successfully created port: 76ff867f-df0e-4526-b0c7-6bc2163fd52a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:10:22
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.control', 'images', '.mgr', 'cephfs.cephfs.data', 'default.rgw.meta', 'cephfs.cephfs.meta', 'vms', 'backups', 'default.rgw.log', 'volumes']
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.634 2 INFO nova.virt.libvirt.driver [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Instance destroyed successfully.#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.635 2 DEBUG nova.objects.instance [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lazy-loading 'resources' on Instance uuid 75fea627-8d48-4772-86a2-ca6bc47ed597 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:22 np0005473739 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [NOTICE]   (299017) : haproxy version is 2.8.14-c23fe91
Oct  7 10:10:22 np0005473739 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [NOTICE]   (299017) : path to executable is /usr/sbin/haproxy
Oct  7 10:10:22 np0005473739 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [WARNING]  (299017) : Exiting Master process...
Oct  7 10:10:22 np0005473739 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [ALERT]    (299017) : Current worker (299019) exited with code 143 (Terminated)
Oct  7 10:10:22 np0005473739 neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a[299013]: [WARNING]  (299017) : All workers exited. Exiting... (0)
Oct  7 10:10:22 np0005473739 systemd[1]: libpod-01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643.scope: Deactivated successfully.
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:10:22 np0005473739 podman[300110]: 2025-10-07 14:10:22.650404837 +0000 UTC m=+0.054270338 container died 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.652 2 DEBUG nova.virt.libvirt.vif [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:09:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1141910716',display_name='tempest-ServersTestJSON-server-1141910716',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1141910716',id=31,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB8dBtsZMOusOa4IijROkUdbBylBDiqYzwyIPuD+vzYyGEcKB5fbetYwHVvk9xfq9490L3yO+927Gy6izYCdieX2E8nhzbrcXYwgrKrlKM3UVrnT4xZuHJ7KYzTZGsV5OA==',key_name='tempest-keypair-878357788',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:10:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='da79ea7f82af425b975ddff6ef03a59a',ramdisk_id='',reservation_id='r-h9cj7vml',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-218965943',owner_user_name='tempest-ServersTestJSON-218965943-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:10:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8656e87752c0498891bd00461451ea40',uuid=75fea627-8d48-4772-86a2-ca6bc47ed597,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.653 2 DEBUG nova.network.os_vif_util [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converting VIF {"id": "cdae5f4d-2069-4d30-bd74-31e9459986db", "address": "fa:16:3e:6e:ba:59", "network": {"id": "f753e53f-89de-4a31-8fea-3e9e1684d72a", "bridge": "br-int", "label": "tempest-ServersTestJSON-965574313-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "da79ea7f82af425b975ddff6ef03a59a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcdae5f4d-20", "ovs_interfaceid": "cdae5f4d-2069-4d30-bd74-31e9459986db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.654 2 DEBUG nova.network.os_vif_util [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.654 2 DEBUG os_vif [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.656 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcdae5f4d-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.661 2 INFO os_vif [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6e:ba:59,bridge_name='br-int',has_traffic_filtering=True,id=cdae5f4d-2069-4d30-bd74-31e9459986db,network=Network(f753e53f-89de-4a31-8fea-3e9e1684d72a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcdae5f4d-20')#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.685 2 DEBUG nova.virt.libvirt.imagebackend [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  7 10:10:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643-userdata-shm.mount: Deactivated successfully.
Oct  7 10:10:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b521a0ec4b82aa4091ed78acfb32e87147f7bbf599891bee81258eeecedcd051-merged.mount: Deactivated successfully.
Oct  7 10:10:22 np0005473739 podman[300110]: 2025-10-07 14:10:22.699510128 +0000 UTC m=+0.103375629 container cleanup 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:10:22 np0005473739 systemd[1]: libpod-conmon-01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643.scope: Deactivated successfully.
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.738 2 DEBUG nova.virt.libvirt.imagebackend [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.739 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning images/e82f6cd3-df6c-4f54-b0d1-baa8f3a03435@snap to None/9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:10:22 np0005473739 podman[300179]: 2025-10-07 14:10:22.795197582 +0000 UTC m=+0.070874193 container remove 01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.802 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f164f8b9-7533-419c-9e0a-4fc338de55d9]: (4, ('Tue Oct  7 02:10:22 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a (01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643)\n01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643\nTue Oct  7 02:10:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a (01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643)\n01b976680b29898125c5da2882e3b9bf72b324757b10105240633d8b929bb643\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.805 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66754a4e-bc61-4700-a73f-06c5c76d78a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf753e53f-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:22 np0005473739 kernel: tapf753e53f-80: left promiscuous mode
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.835 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[666e2cdb-36f5-4172-96d1-1e3aa8edd5d3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.867 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f676115-3a4a-4ae6-842f-a41f1ace8caa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.870 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b8fbb0-b0a4-44ac-8857-f48194b81fe1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.891 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b2fc1cdf-a7b5-401a-bd85-38011d4ffe86]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678429, 'reachable_time': 40972, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300249, 'error': None, 'target': 'ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:22 np0005473739 systemd[1]: run-netns-ovnmeta\x2df753e53f\x2d89de\x2d4a31\x2d8fea\x2d3e9e1684d72a.mount: Deactivated successfully.
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.894 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-f753e53f-89de-4a31-8fea-3e9e1684d72a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:10:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:22.895 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c308055c-a7e2-4514-94ed-928d28d9e1ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:22 np0005473739 nova_compute[259550]: 2025-10-07 14:10:22.925 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "bcdb7363edcb468af4a4e5b5827ee7a05a3370a5" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.073 2 DEBUG nova.objects.instance [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid 9fd9e34f-3492-4726-ab46-1b4bab671dbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.093 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Successfully created port: 6e3a4d42-ab7b-4d31-93af-858be34d84f8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.104 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.104 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Ensure instance console log exists: /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.105 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.105 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.105 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.276 2 INFO nova.virt.libvirt.driver [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Deleting instance files /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597_del#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.277 2 INFO nova.virt.libvirt.driver [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Deletion of /var/lib/nova/instances/75fea627-8d48-4772-86a2-ca6bc47ed597_del complete#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.396 2 INFO nova.compute.manager [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.396 2 DEBUG oslo.service.loopingcall [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.397 2 DEBUG nova.compute.manager [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.397 2 DEBUG nova.network.neutron [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:10:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1339: 305 pgs: 305 active+clean; 227 MiB data, 451 MiB used, 60 GiB / 60 GiB avail; 453 KiB/s rd, 1.6 MiB/s wr, 122 op/s
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.791 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Successfully updated port: 76ff867f-df0e-4526-b0c7-6bc2163fd52a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.808 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.809 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquired lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.809 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:10:23 np0005473739 nova_compute[259550]: 2025-10-07 14:10:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.012 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.106 2 DEBUG nova.compute.manager [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-changed-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.107 2 DEBUG nova.compute.manager [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Refreshing instance network info cache due to event network-changed-76ff867f-df0e-4526-b0c7-6bc2163fd52a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.107 2 DEBUG oslo_concurrency.lockutils [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.524 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Successfully updated port: 6e3a4d42-ab7b-4d31-93af-858be34d84f8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.544 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.545 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.545 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.647 2 DEBUG nova.compute.manager [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-changed-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.647 2 DEBUG nova.compute.manager [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Refreshing instance network info cache due to event network-changed-6e3a4d42-ab7b-4d31-93af-858be34d84f8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.648 2 DEBUG oslo_concurrency.lockutils [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.710 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.952 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846209.9506023, 6a715eda-3357-4298-88cc-596cc9986690 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.953 2 INFO nova.compute.manager [-] [instance: 6a715eda-3357-4298-88cc-596cc9986690] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.975 2 DEBUG nova.compute.manager [None req-548fb4b0-6df4-4274-b933-869fddb6dcd6 - - - - - -] [instance: 6a715eda-3357-4298-88cc-596cc9986690] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:24 np0005473739 nova_compute[259550]: 2025-10-07 14:10:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.033 2 DEBUG nova.network.neutron [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.051 2 INFO nova.compute.manager [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Took 1.65 seconds to deallocate network for instance.#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.140 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.140 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.261 2 DEBUG oslo_concurrency.processutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.420 2 DEBUG nova.network.neutron [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Updating instance_info_cache with network_info: [{"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.440 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Releasing lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.440 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Instance network_info: |[{"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.441 2 DEBUG oslo_concurrency.lockutils [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.441 2 DEBUG nova.network.neutron [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Refreshing network info cache for port 76ff867f-df0e-4526-b0c7-6bc2163fd52a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.444 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Start _get_guest_xml network_info=[{"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.449 2 WARNING nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.457 2 DEBUG nova.virt.libvirt.host [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.458 2 DEBUG nova.virt.libvirt.host [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.461 2 DEBUG nova.virt.libvirt.host [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.461 2 DEBUG nova.virt.libvirt.host [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.462 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.462 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.462 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.462 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.463 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.464 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.464 2 DEBUG nova.virt.hardware [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.467 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1340: 305 pgs: 305 active+clean; 213 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 791 KiB/s rd, 5.3 MiB/s wr, 251 op/s
Oct  7 10:10:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:10:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3166834448' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.719 2 DEBUG oslo_concurrency.processutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.727 2 DEBUG nova.compute.provider_tree [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.745 2 DEBUG nova.scheduler.client.report [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.768 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.780 2 DEBUG nova.network.neutron [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Updating instance_info_cache with network_info: [{"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.796 2 INFO nova.scheduler.client.report [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Deleted allocations for instance 75fea627-8d48-4772-86a2-ca6bc47ed597#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.798 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.798 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Instance network_info: |[{"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.799 2 DEBUG oslo_concurrency.lockutils [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.799 2 DEBUG nova.network.neutron [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Refreshing network info cache for port 6e3a4d42-ab7b-4d31-93af-858be34d84f8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:10:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.801 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Start _get_guest_xml network_info=[{"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:10:09Z,direct_url=<?>,disk_format='raw',id=e82f6cd3-df6c-4f54-b0d1-baa8f3a03435,min_disk=1,min_ram=0,name='tempest-test-snap-1066436141',owner='1a6abfd8cc6f4507886ed10873d1f95c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:10:16Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': 'e82f6cd3-df6c-4f54-b0d1-baa8f3a03435'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.843 2 WARNING nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.851 2 DEBUG nova.virt.libvirt.host [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.852 2 DEBUG nova.virt.libvirt.host [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.855 2 DEBUG nova.virt.libvirt.host [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.855 2 DEBUG nova.virt.libvirt.host [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.855 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.856 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:10:09Z,direct_url=<?>,disk_format='raw',id=e82f6cd3-df6c-4f54-b0d1-baa8f3a03435,min_disk=1,min_ram=0,name='tempest-test-snap-1066436141',owner='1a6abfd8cc6f4507886ed10873d1f95c',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:10:16Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.856 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.856 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.856 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.856 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.857 2 DEBUG nova.virt.hardware [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.861 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:10:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3874137499' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.898 2 DEBUG oslo_concurrency.lockutils [None req-996c3e07-8bb1-428a-a61e-8889eda16a50 8656e87752c0498891bd00461451ea40 da79ea7f82af425b975ddff6ef03a59a - - default default] Lock "75fea627-8d48-4772-86a2-ca6bc47ed597" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.903 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.925 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.928 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:25 np0005473739 nova_compute[259550]: 2025-10-07 14:10:25.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.190 2 DEBUG nova.compute.manager [req-43fa91e8-8db0-43a2-b5e8-e153ae7cbb9a req-f1e523b4-c166-4355-a270-9251e73ea00c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Received event network-vif-deleted-cdae5f4d-2069-4d30-bd74-31e9459986db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2133751308' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.329 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.354 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.359 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2194597908' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.400 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.402 2 DEBUG nova.virt.libvirt.vif [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1409154713',id=33,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-20bg9heg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:21Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=30c3d75d-d021-411b-a277-f81ff1f707b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.402 2 DEBUG nova.network.os_vif_util [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.403 2 DEBUG nova.network.os_vif_util [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.405 2 DEBUG nova.objects.instance [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30c3d75d-d021-411b-a277-f81ff1f707b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.424 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <uuid>30c3d75d-d021-411b-a277-f81ff1f707b8</uuid>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <name>instance-00000021</name>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:name>tempest-ImagesOneServerNegativeTestJSON-server-1409154713</nova:name>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:10:25</nova:creationTime>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:user uuid="21b4c507f5c443f4b43306c884b1d67f">tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member</nova:user>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:project uuid="c166ae9e4e0f43d38afaa35966f84b05">tempest-ImagesOneServerNegativeTestJSON-2130756304</nova:project>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:port uuid="76ff867f-df0e-4526-b0c7-6bc2163fd52a">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="serial">30c3d75d-d021-411b-a277-f81ff1f707b8</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="uuid">30c3d75d-d021-411b-a277-f81ff1f707b8</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/30c3d75d-d021-411b-a277-f81ff1f707b8_disk">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:dc:66:96"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <target dev="tap76ff867f-df"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/console.log" append="off"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:10:26 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:10:26 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.424 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Preparing to wait for external event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.425 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.425 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.425 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.426 2 DEBUG nova.virt.libvirt.vif [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1409154713',id=33,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-20bg9heg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:21Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=30c3d75d-d021-411b-a277-f81ff1f707b8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.426 2 DEBUG nova.network.os_vif_util [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.427 2 DEBUG nova.network.os_vif_util [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.427 2 DEBUG os_vif [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.429 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.432 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76ff867f-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.432 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76ff867f-df, col_values=(('external_ids', {'iface-id': '76ff867f-df0e-4526-b0c7-6bc2163fd52a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:66:96', 'vm-uuid': '30c3d75d-d021-411b-a277-f81ff1f707b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:26 np0005473739 NetworkManager[44949]: <info>  [1759846226.4349] manager: (tap76ff867f-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/110)
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.440 2 INFO os_vif [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df')#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.499 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.500 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.500 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] No VIF found with MAC fa:16:3e:dc:66:96, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.500 2 INFO nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Using config drive#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.522 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1827650791' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.798 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.800 2 DEBUG nova.virt.libvirt.vif [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1746458742',display_name='tempest-ImagesTestJSON-server-1746458742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1746458742',id=34,image_ref='e82f6cd3-df6c-4f54-b0d1-baa8f3a03435',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-ooe0chln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='20fd8b19-2eb6-4c42-a320-d9d23f6f4912',image_min_disk='1',image_min_ram='0',image_owner_id='1a6abfd8cc6f4507886ed10873d1f95c',image_owner_project_name='tempest-ImagesTestJSON-194092869',image_owner_user_name='tempest-ImagesTestJSON-194092869-project-member',image_user_id='a27a7178326846e69ab9eaae7c70b274',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:22Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=9fd9e34f-3492-4726-ab46-1b4bab671dbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.800 2 DEBUG nova.network.os_vif_util [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.801 2 DEBUG nova.network.os_vif_util [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.802 2 DEBUG nova.objects.instance [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid 9fd9e34f-3492-4726-ab46-1b4bab671dbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.820 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <uuid>9fd9e34f-3492-4726-ab46-1b4bab671dbc</uuid>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <name>instance-00000022</name>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:name>tempest-ImagesTestJSON-server-1746458742</nova:name>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:10:25</nova:creationTime>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="e82f6cd3-df6c-4f54-b0d1-baa8f3a03435"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <nova:port uuid="6e3a4d42-ab7b-4d31-93af-858be34d84f8">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="serial">9fd9e34f-3492-4726-ab46-1b4bab671dbc</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="uuid">9fd9e34f-3492-4726-ab46-1b4bab671dbc</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:49:e0:a9"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <target dev="tap6e3a4d42-ab"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/console.log" append="off"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <input type="keyboard" bus="usb"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:10:26 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:10:26 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:10:26 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:10:26 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.821 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Preparing to wait for external event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.821 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.821 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.821 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.822 2 DEBUG nova.virt.libvirt.vif [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1746458742',display_name='tempest-ImagesTestJSON-server-1746458742',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1746458742',id=34,image_ref='e82f6cd3-df6c-4f54-b0d1-baa8f3a03435',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-ooe0chln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='20fd8b19-2eb6-4c42-a320-d9d23f6f4912',image_min_disk='1',image_min_ram='0',image_owner_id='1a6abfd8cc6f4507886ed10873d1f95c',image_owner_project_name='tempest-ImagesTestJSON-194092869',image_owner_user_name='tempest-ImagesTestJSON-194092869-project-member',image_user_id='a27a7178326846e69ab9eaae7c70b274',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:22Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=9fd9e34f-3492-4726-ab46-1b4bab671dbc,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.822 2 DEBUG nova.network.os_vif_util [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.823 2 DEBUG nova.network.os_vif_util [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.823 2 DEBUG os_vif [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e3a4d42-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.827 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e3a4d42-ab, col_values=(('external_ids', {'iface-id': '6e3a4d42-ab7b-4d31-93af-858be34d84f8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:49:e0:a9', 'vm-uuid': '9fd9e34f-3492-4726-ab46-1b4bab671dbc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:26 np0005473739 NetworkManager[44949]: <info>  [1759846226.8295] manager: (tap6e3a4d42-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/111)
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.838 2 INFO os_vif [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab')
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #60. Immutable memtables: 0.
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.867482) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 31] Flushing memtable with next log file: 60
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226867522, "job": 31, "event": "flush_started", "num_memtables": 1, "num_entries": 2326, "num_deletes": 268, "total_data_size": 3325810, "memory_usage": 3370256, "flush_reason": "Manual Compaction"}
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 31] Level-0 flush table #61: started
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.886 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.886 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.887 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:49:e0:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.887 2 INFO nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Using config drive
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226888455, "cf_name": "default", "job": 31, "event": "table_file_creation", "file_number": 61, "file_size": 3271313, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 25659, "largest_seqno": 27984, "table_properties": {"data_size": 3260618, "index_size": 6933, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2693, "raw_key_size": 22790, "raw_average_key_size": 21, "raw_value_size": 3238993, "raw_average_value_size": 3018, "num_data_blocks": 301, "num_entries": 1073, "num_filter_entries": 1073, "num_deletions": 268, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846052, "oldest_key_time": 1759846052, "file_creation_time": 1759846226, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 61, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 31] Flush lasted 21041 microseconds, and 7912 cpu microseconds.
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.888518) [db/flush_job.cc:967] [default] [JOB 31] Level-0 flush table #61: 3271313 bytes OK
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.888550) [db/memtable_list.cc:519] [default] Level-0 commit table #61 started
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.890199) [db/memtable_list.cc:722] [default] Level-0 commit table #61: memtable #1 done
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.890213) EVENT_LOG_v1 {"time_micros": 1759846226890209, "job": 31, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.890232) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 31] Try to delete WAL files size 3315800, prev total WAL file size 3315800, number of live WAL files 2.
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000057.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.891277) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032323539' seq:72057594037927935, type:22 .. '7061786F730032353131' seq:0, type:0; will stop at (end)
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 32] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 31 Base level 0, inputs: [61(3194KB)], [59(7053KB)]
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226891364, "job": 32, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [61], "files_L6": [59], "score": -1, "input_data_size": 10493595, "oldest_snapshot_seqno": -1}
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.913 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.923 2 DEBUG nova.network.neutron [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Updated VIF entry in instance network info cache for port 6e3a4d42-ab7b-4d31-93af-858be34d84f8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.924 2 DEBUG nova.network.neutron [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Updating instance_info_cache with network_info: [{"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 32] Generated table #62: 5261 keys, 8772442 bytes, temperature: kUnknown
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226959909, "cf_name": "default", "job": 32, "event": "table_file_creation", "file_number": 62, "file_size": 8772442, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8735509, "index_size": 22679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13189, "raw_key_size": 130819, "raw_average_key_size": 24, "raw_value_size": 8638990, "raw_average_value_size": 1642, "num_data_blocks": 932, "num_entries": 5261, "num_filter_entries": 5261, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846226, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 62, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.960220) [db/compaction/compaction_job.cc:1663] [default] [JOB 32] Compacted 1@0 + 1@6 files to L6 => 8772442 bytes
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.961905) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 152.9 rd, 127.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 6.9 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(5.9) write-amplify(2.7) OK, records in: 5798, records dropped: 537 output_compression: NoCompression
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.961921) EVENT_LOG_v1 {"time_micros": 1759846226961913, "job": 32, "event": "compaction_finished", "compaction_time_micros": 68650, "compaction_time_cpu_micros": 24321, "output_level": 6, "num_output_files": 1, "total_output_size": 8772442, "num_input_records": 5798, "num_output_records": 5261, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000061.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226962592, "job": 32, "event": "table_file_deletion", "file_number": 61}
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000059.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846226963873, "job": 32, "event": "table_file_deletion", "file_number": 59}
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.891129) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.964071) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.964079) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.964081) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.964082) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:10:26 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:10:26.964084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.965 2 DEBUG oslo_concurrency.lockutils [req-a7887907-5634-42ab-afe9-3d28430cbfae req-cc5f6736-a65f-4dbc-92c1-b73f30fa64fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9fd9e34f-3492-4726-ab46-1b4bab671dbc" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:10:26 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.992 2 INFO nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Creating config drive at /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:26.999 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf5ez1q30 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.050 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.051 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.051 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.052 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.052 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.158 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf5ez1q30" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.192 2 DEBUG nova.storage.rbd_utils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] rbd image 30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.201 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config 30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.365 2 DEBUG oslo_concurrency.processutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config 30c3d75d-d021-411b-a277-f81ff1f707b8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.366 2 INFO nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Deleting local config drive /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8/disk.config because it was imported into RBD.
Oct  7 10:10:27 np0005473739 NetworkManager[44949]: <info>  [1759846227.4190] manager: (tap76ff867f-df): new Tun device (/org/freedesktop/NetworkManager/Devices/112)
Oct  7 10:10:27 np0005473739 kernel: tap76ff867f-df: entered promiscuous mode
Oct  7 10:10:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:27Z|00214|binding|INFO|Claiming lport 76ff867f-df0e-4526-b0c7-6bc2163fd52a for this chassis.
Oct  7 10:10:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:27Z|00215|binding|INFO|76ff867f-df0e-4526-b0c7-6bc2163fd52a: Claiming fa:16:3e:dc:66:96 10.100.0.8
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.434 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:66:96 10.100.0.8'], port_security=['fa:16:3e:dc:66:96 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '30c3d75d-d021-411b-a277-f81ff1f707b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '2', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=76ff867f-df0e-4526-b0c7-6bc2163fd52a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.435 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 76ff867f-df0e-4526-b0c7-6bc2163fd52a in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d bound to our chassis
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.437 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct  7 10:10:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:27Z|00216|binding|INFO|Setting lport 76ff867f-df0e-4526-b0c7-6bc2163fd52a ovn-installed in OVS
Oct  7 10:10:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:27Z|00217|binding|INFO|Setting lport 76ff867f-df0e-4526-b0c7-6bc2163fd52a up in Southbound
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.453 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8b3b0720-c443-47d4-abe7-6f4ab3cc23dc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.457 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap0a5c95d4-11 in ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.462 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap0a5c95d4-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.462 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2fb441c3-fd87-47f4-99ea-872cc17e216e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.464 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d4957d74-d585-4ce8-8f7c-c025a45ceba6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 systemd-machined[214580]: New machine qemu-37-instance-00000021.
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:27 np0005473739 systemd[1]: Started Virtual Machine qemu-37-instance-00000021.
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.484 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e35acc90-8dba-4c8c-b249-0a94e4320cca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 systemd-udevd[300570]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:10:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:10:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1961907499' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.511 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d794bc42-aa91-4d23-b34e-55fdb313f769]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 NetworkManager[44949]: <info>  [1759846227.5253] device (tap76ff867f-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:10:27 np0005473739 NetworkManager[44949]: <info>  [1759846227.5264] device (tap76ff867f-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.537 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.553 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[998ccf25-5c39-4982-a86e-9f1b08ad3907]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 NetworkManager[44949]: <info>  [1759846227.5613] manager: (tap0a5c95d4-10): new Veth device (/org/freedesktop/NetworkManager/Devices/113)
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.562 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23c48f02-c1c2-4256-b689-6a3aae9095aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.583 2 INFO nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Creating config drive at /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.591 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjbqgkq4f execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:10:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1341: 305 pgs: 305 active+clean; 213 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 773 KiB/s rd, 5.2 MiB/s wr, 245 op/s
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.618 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c2db0790-b81d-4946-976d-82d94c581d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.623 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ef26bd9b-1ed0-4fb1-91fd-c7757a2539b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.631 2 DEBUG nova.network.neutron [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Updated VIF entry in instance network info cache for port 76ff867f-df0e-4526-b0c7-6bc2163fd52a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.633 2 DEBUG nova.network.neutron [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Updating instance_info_cache with network_info: [{"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:10:27 np0005473739 NetworkManager[44949]: <info>  [1759846227.6559] device (tap0a5c95d4-10): carrier: link connected
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.658 2 DEBUG oslo_concurrency.lockutils [req-f3a32691-a391-45ce-8b69-a23ff7d65e85 req-1344a254-7c5e-4fa4-8917-181c061fb0a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30c3d75d-d021-411b-a277-f81ff1f707b8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.665 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4807642a-ed6d-4b08-8402-436ebb2748fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.685 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a579270d-4969-4e2c-81ce-d1b894d782b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681122, 'reachable_time': 24103, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300608, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.689 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.690 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000021 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.695 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.695 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000020 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.698 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.698 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000022 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.702 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd7d5998-3e92-4632-a0b7-7687e305e547]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe34:6312'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 681122, 'tstamp': 681122}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300609, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.719 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8407f9f0-2657-4a52-a99b-447635cc8aa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap0a5c95d4-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:34:63:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 71], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681122, 'reachable_time': 24103, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 300610, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.741 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjbqgkq4f" returned: 0 in 0.151s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.755 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[677b6865-3450-4d60-8151-83b7310e2600]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.774 2 DEBUG nova.storage.rbd_utils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.786 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.834 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[edc1949d-5b6f-4c7b-bcbe-ad06981f2dcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.837 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.838 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.839 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a5c95d4-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:10:27 np0005473739 NetworkManager[44949]: <info>  [1759846227.8430] manager: (tap0a5c95d4-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/114)
Oct  7 10:10:27 np0005473739 kernel: tap0a5c95d4-10: entered promiscuous mode
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.852 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap0a5c95d4-10, col_values=(('external_ids', {'iface-id': 'a8291172-baf1-4252-9a0d-af7ef7ffa931'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:27Z|00218|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.895 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.897 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e0e7e689-bb4a-48fc-8001-1c0602a8c07a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.898 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/0a5c95d4-1a77-48f5-83c0-afa976b7583d.pid.haproxy
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 0a5c95d4-1a77-48f5-83c0-afa976b7583d
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  7 10:10:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:27.899 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'env', 'PROCESS_TAG=haproxy-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/0a5c95d4-1a77-48f5-83c0-afa976b7583d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.969 2 DEBUG oslo_concurrency.processutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config 9fd9e34f-3492-4726-ab46-1b4bab671dbc_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:10:27 np0005473739 nova_compute[259550]: 2025-10-07 14:10:27.970 2 INFO nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Deleting local config drive /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc/disk.config because it was imported into RBD.
Oct  7 10:10:28 np0005473739 kernel: tap6e3a4d42-ab: entered promiscuous mode
Oct  7 10:10:28 np0005473739 systemd-udevd[300598]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:10:28 np0005473739 NetworkManager[44949]: <info>  [1759846228.0358] manager: (tap6e3a4d42-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/115)
Oct  7 10:10:28 np0005473739 NetworkManager[44949]: <info>  [1759846228.0498] device (tap6e3a4d42-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:10:28 np0005473739 NetworkManager[44949]: <info>  [1759846228.0509] device (tap6e3a4d42-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:10:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:28Z|00219|binding|INFO|Claiming lport 6e3a4d42-ab7b-4d31-93af-858be34d84f8 for this chassis.
Oct  7 10:10:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:28Z|00220|binding|INFO|6e3a4d42-ab7b-4d31-93af-858be34d84f8: Claiming fa:16:3e:49:e0:a9 10.100.0.8
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.098 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:e0:a9 10.100.0.8'], port_security=['fa:16:3e:49:e0:a9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9fd9e34f-3492-4726-ab46-1b4bab671dbc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e3a4d42-ab7b-4d31-93af-858be34d84f8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:10:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:28Z|00221|binding|INFO|Setting lport 6e3a4d42-ab7b-4d31-93af-858be34d84f8 ovn-installed in OVS
Oct  7 10:10:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:28Z|00222|binding|INFO|Setting lport 6e3a4d42-ab7b-4d31-93af-858be34d84f8 up in Southbound
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:28 np0005473739 systemd-machined[214580]: New machine qemu-38-instance-00000022.
Oct  7 10:10:28 np0005473739 systemd[1]: Started Virtual Machine qemu-38-instance-00000022.
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.170 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.171 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4148MB free_disk=59.92217254638672GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": 
"label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.172 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.173 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.269 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 30c3d75d-d021-411b-a277-f81ff1f707b8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 9fd9e34f-3492-4726-ab46-1b4bab671dbc actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.271 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.271 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.299 2 DEBUG nova.compute.manager [req-1d280c00-0c40-46dc-873c-9de896087780 req-fad27fc9-5b4d-4a52-a3b8-5118eb83c363 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.300 2 DEBUG oslo_concurrency.lockutils [req-1d280c00-0c40-46dc-873c-9de896087780 req-fad27fc9-5b4d-4a52-a3b8-5118eb83c363 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.300 2 DEBUG oslo_concurrency.lockutils [req-1d280c00-0c40-46dc-873c-9de896087780 req-fad27fc9-5b4d-4a52-a3b8-5118eb83c363 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.300 2 DEBUG oslo_concurrency.lockutils [req-1d280c00-0c40-46dc-873c-9de896087780 req-fad27fc9-5b4d-4a52-a3b8-5118eb83c363 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.301 2 DEBUG nova.compute.manager [req-1d280c00-0c40-46dc-873c-9de896087780 req-fad27fc9-5b4d-4a52-a3b8-5118eb83c363 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Processing event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:10:28 np0005473739 podman[300745]: 2025-10-07 14:10:28.340080156 +0000 UTC m=+0.063835419 container create 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.375 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:28 np0005473739 systemd[1]: Started libpod-conmon-63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f.scope.
Oct  7 10:10:28 np0005473739 podman[300745]: 2025-10-07 14:10:28.306213965 +0000 UTC m=+0.029969238 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.421 2 DEBUG nova.compute.manager [req-74547d21-f5f0-4c51-874c-0db5f0e0076a req-7176c6d8-0327-4182-b18c-90c1af7e9d74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.421 2 DEBUG oslo_concurrency.lockutils [req-74547d21-f5f0-4c51-874c-0db5f0e0076a req-7176c6d8-0327-4182-b18c-90c1af7e9d74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.422 2 DEBUG oslo_concurrency.lockutils [req-74547d21-f5f0-4c51-874c-0db5f0e0076a req-7176c6d8-0327-4182-b18c-90c1af7e9d74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.422 2 DEBUG oslo_concurrency.lockutils [req-74547d21-f5f0-4c51-874c-0db5f0e0076a req-7176c6d8-0327-4182-b18c-90c1af7e9d74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.422 2 DEBUG nova.compute.manager [req-74547d21-f5f0-4c51-874c-0db5f0e0076a req-7176c6d8-0327-4182-b18c-90c1af7e9d74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Processing event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:10:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:10:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c23965c9c9ac11f5345effbb29f6f66907d2d2062094a35123a593e4f89e7ac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:28 np0005473739 podman[300745]: 2025-10-07 14:10:28.464732393 +0000 UTC m=+0.188487676 container init 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct  7 10:10:28 np0005473739 podman[300745]: 2025-10-07 14:10:28.475485275 +0000 UTC m=+0.199240528 container start 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct  7 10:10:28 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [NOTICE]   (300784) : New worker (300786) forked
Oct  7 10:10:28 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [NOTICE]   (300784) : Loading success.
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.568 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e3a4d42-ab7b-4d31-93af-858be34d84f8 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.571 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.591 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[389746a1-03a6-496f-ac10-c9adcb14bc29]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.623 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d8937223-bc85-4477-a3ec-efb446f5b0f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.628 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f80a6129-74c0-45ae-8b6e-231540f254c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.629 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.631 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846228.6286561, 30c3d75d-d021-411b-a277-f81ff1f707b8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.632 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] VM Started (Lifecycle Event)#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.637 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.646 2 INFO nova.virt.libvirt.driver [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Instance spawned successfully.#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.646 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.655 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.659 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.665 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1f9e6b-5fc7-4143-9026-1d1ce762f352]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.669 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.669 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.670 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.670 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.670 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.671 2 DEBUG nova.virt.libvirt.driver [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.679 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.679 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846228.630425, 30c3d75d-d021-411b-a277-f81ff1f707b8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.679 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.686 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e245435f-a31c-4466-9724-c9b53fb9863c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678952, 'reachable_time': 35158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300843, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.702 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.705 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fedd1b4-107b-47de-8d32-f5ff5bcaf197]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9f80456d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678969, 'tstamp': 678969}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300844, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f80456d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678972, 'tstamp': 678972}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300844, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.707 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.707 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846228.6373038, 30c3d75d-d021-411b-a277-f81ff1f707b8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.707 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:28.712 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.734 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.740 2 INFO nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Took 7.66 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.740 2 DEBUG nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.743 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.782 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.834 2 INFO nova.compute.manager [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Took 8.73 seconds to build instance.#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.851 2 DEBUG oslo_concurrency.lockutils [None req-879f73fd-25ed-47af-8694-c8c6d4715391 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:10:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/303149936' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.895 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.902 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.924 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.953 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:10:28 np0005473739 nova_compute[259550]: 2025-10-07 14:10:28.953 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.781s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:29Z|00223|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct  7 10:10:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:29Z|00224|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.077 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846229.0773168, 9fd9e34f-3492-4726-ab46-1b4bab671dbc => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.078 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] VM Started (Lifecycle Event)#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.080 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.085 2 DEBUG nova.virt.libvirt.driver [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.088 2 INFO nova.virt.libvirt.driver [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Instance spawned successfully.#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.089 2 INFO nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Took 6.82 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.090 2 DEBUG nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.101 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.108 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.155 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.156 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846229.0776527, 9fd9e34f-3492-4726-ab46-1b4bab671dbc => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.156 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.171 2 INFO nova.compute.manager [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Took 7.85 seconds to build instance.#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.192 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.196 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846229.0843832, 9fd9e34f-3492-4726-ab46-1b4bab671dbc => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.196 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.198 2 DEBUG oslo_concurrency.lockutils [None req-9345fbf2-a596-40a3-99d5-e3b74839a7a9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.986s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:29Z|00225|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct  7 10:10:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:29Z|00226|binding|INFO|Releasing lport a8291172-baf1-4252-9a0d-af7ef7ffa931 from this chassis (sb_readonly=0)
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.218 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.223 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:10:29 np0005473739 nova_compute[259550]: 2025-10-07 14:10:29.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1342: 305 pgs: 305 active+clean; 213 MiB data, 483 MiB used, 60 GiB / 60 GiB avail; 554 KiB/s rd, 4.7 MiB/s wr, 199 op/s
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.102 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.103 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.103 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.103 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.104 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.105 2 INFO nova.compute.manager [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Terminating instance#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.106 2 DEBUG nova.compute.manager [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:10:30 np0005473739 kernel: tap6e3a4d42-ab (unregistering): left promiscuous mode
Oct  7 10:10:30 np0005473739 NetworkManager[44949]: <info>  [1759846230.1610] device (tap6e3a4d42-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:10:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:30Z|00227|binding|INFO|Releasing lport 6e3a4d42-ab7b-4d31-93af-858be34d84f8 from this chassis (sb_readonly=0)
Oct  7 10:10:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:30Z|00228|binding|INFO|Setting lport 6e3a4d42-ab7b-4d31-93af-858be34d84f8 down in Southbound
Oct  7 10:10:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:30Z|00229|binding|INFO|Removing iface tap6e3a4d42-ab ovn-installed in OVS
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.187 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:49:e0:a9 10.100.0.8'], port_security=['fa:16:3e:49:e0:a9 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '9fd9e34f-3492-4726-ab46-1b4bab671dbc', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e3a4d42-ab7b-4d31-93af-858be34d84f8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.188 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e3a4d42-ab7b-4d31-93af-858be34d84f8 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.189 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.208 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bcc224c1-d42e-4fe9-b81e-d6d69020b5bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:30 np0005473739 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Deactivated successfully.
Oct  7 10:10:30 np0005473739 systemd[1]: machine-qemu\x2d38\x2dinstance\x2d00000022.scope: Consumed 1.807s CPU time.
Oct  7 10:10:30 np0005473739 systemd-machined[214580]: Machine qemu-38-instance-00000022 terminated.
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.247 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[70c5fbe6-5b9a-4f19-834e-8ffface3a158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.251 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b382d5d4-b1ce-4c76-9d12-2308149538c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.286 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b6f6aca4-f3f8-4426-9812-42822d09cc46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.307 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4097a6e-e8ed-453c-aa94-9dea546e78f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 67], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678952, 'reachable_time': 35158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300856, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.328 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[35708e14-1c58-4195-bd3c-b0fed74dc5e9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9f80456d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678969, 'tstamp': 678969}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300857, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9f80456d-d1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 678972, 'tstamp': 678972}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 300857, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.332 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.351 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.352 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.353 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:30.353 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.357 2 INFO nova.virt.libvirt.driver [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Instance destroyed successfully.#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.357 2 DEBUG nova.objects.instance [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'resources' on Instance uuid 9fd9e34f-3492-4726-ab46-1b4bab671dbc obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.380 2 DEBUG nova.virt.libvirt.vif [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:10:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1746458742',display_name='tempest-ImagesTestJSON-server-1746458742',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1746458742',id=34,image_ref='e82f6cd3-df6c-4f54-b0d1-baa8f3a03435',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:10:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-ooe0chln',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',i
mage_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='20fd8b19-2eb6-4c42-a320-d9d23f6f4912',image_min_disk='1',image_min_ram='0',image_owner_id='1a6abfd8cc6f4507886ed10873d1f95c',image_owner_project_name='tempest-ImagesTestJSON-194092869',image_owner_user_name='tempest-ImagesTestJSON-194092869-project-member',image_user_id='a27a7178326846e69ab9eaae7c70b274',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:10:29Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=9fd9e34f-3492-4726-ab46-1b4bab671dbc,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.382 2 DEBUG nova.network.os_vif_util [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "address": "fa:16:3e:49:e0:a9", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e3a4d42-ab", "ovs_interfaceid": "6e3a4d42-ab7b-4d31-93af-858be34d84f8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.382 2 DEBUG nova.network.os_vif_util [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.383 2 DEBUG os_vif [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.386 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e3a4d42-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.393 2 INFO os_vif [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:49:e0:a9,bridge_name='br-int',has_traffic_filtering=True,id=6e3a4d42-ab7b-4d31-93af-858be34d84f8,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e3a4d42-ab')
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.755 2 INFO nova.virt.libvirt.driver [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Deleting instance files /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc_del
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.756 2 INFO nova.virt.libvirt.driver [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Deletion of /var/lib/nova/instances/9fd9e34f-3492-4726-ab46-1b4bab671dbc_del complete
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.790 2 DEBUG nova.compute.manager [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.790 2 DEBUG oslo_concurrency.lockutils [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.791 2 DEBUG oslo_concurrency.lockutils [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.791 2 DEBUG oslo_concurrency.lockutils [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.791 2 DEBUG nova.compute.manager [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] No waiting events found dispatching network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.791 2 WARNING nova.compute.manager [req-e024119f-2d03-4546-87a3-dc6daefd3d65 req-9a37571d-5069-40ee-bfda-4a944f33bbd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received unexpected event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a for instance with vm_state active and task_state None.
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.813 2 INFO nova.compute.manager [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Took 0.71 seconds to destroy the instance on the hypervisor.
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.814 2 DEBUG oslo.service.loopingcall [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.814 2 DEBUG nova.compute.manager [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.814 2 DEBUG nova.network.neutron [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:10:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e178 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.865 2 DEBUG nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.865 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.865 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.866 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.866 2 DEBUG nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] No waiting events found dispatching network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.866 2 WARNING nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received unexpected event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 for instance with vm_state active and task_state deleting.
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.866 2 DEBUG nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-unplugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.866 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.866 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.867 2 DEBUG oslo_concurrency.lockutils [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.867 2 DEBUG nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] No waiting events found dispatching network-vif-unplugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:10:30 np0005473739 nova_compute[259550]: 2025-10-07 14:10:30.867 2 DEBUG nova.compute.manager [req-c4b8326b-2ea2-4caf-ae58-83e8da3e8a8a req-9dcdfff0-e8b4-483c-9c2a-9afe1f603e6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-unplugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.036 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.037 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.037 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.037 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.037 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.038 2 INFO nova.compute.manager [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Terminating instance
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.039 2 DEBUG nova.compute.manager [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:10:31 np0005473739 kernel: tap76ff867f-df (unregistering): left promiscuous mode
Oct  7 10:10:31 np0005473739 NetworkManager[44949]: <info>  [1759846231.0778] device (tap76ff867f-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:10:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:31Z|00230|binding|INFO|Releasing lport 76ff867f-df0e-4526-b0c7-6bc2163fd52a from this chassis (sb_readonly=0)
Oct  7 10:10:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:31Z|00231|binding|INFO|Setting lport 76ff867f-df0e-4526-b0c7-6bc2163fd52a down in Southbound
Oct  7 10:10:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:31Z|00232|binding|INFO|Removing iface tap76ff867f-df ovn-installed in OVS
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:31 np0005473739 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Deactivated successfully.
Oct  7 10:10:31 np0005473739 systemd[1]: machine-qemu\x2d37\x2dinstance\x2d00000021.scope: Consumed 3.264s CPU time.
Oct  7 10:10:31 np0005473739 systemd-machined[214580]: Machine qemu-37-instance-00000021 terminated.
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.232 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:66:96 10.100.0.8'], port_security=['fa:16:3e:dc:66:96 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '30c3d75d-d021-411b-a277-f81ff1f707b8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c166ae9e4e0f43d38afaa35966f84b05', 'neutron:revision_number': '4', 'neutron:security_group_ids': '306ac68f-7d3a-41d3-a9d1-b809ff5ece38', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f6dab0b8-b058-4fe6-95e9-ca808f08d05f, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=76ff867f-df0e-4526-b0c7-6bc2163fd52a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.234 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 76ff867f-df0e-4526-b0c7-6bc2163fd52a in datapath 0a5c95d4-1a77-48f5-83c0-afa976b7583d unbound from our chassis
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.235 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 0a5c95d4-1a77-48f5-83c0-afa976b7583d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.236 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[322e670b-6c64-4bb4-b19f-7b6ed16c37d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.236 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d namespace which is not needed anymore
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.281 2 INFO nova.virt.libvirt.driver [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Instance destroyed successfully.
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.283 2 DEBUG nova.objects.instance [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lazy-loading 'resources' on Instance uuid 30c3d75d-d021-411b-a277-f81ff1f707b8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.378 2 DEBUG nova.virt.libvirt.vif [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:10:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',display_name='tempest-ImagesOneServerNegativeTestJSON-server-1409154713',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagesoneservernegativetestjson-server-1409154713',id=33,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:10:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c166ae9e4e0f43d38afaa35966f84b05',ramdisk_id='',reservation_id='r-20bg9heg',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesOneServerNegativeTestJSON-2130756304',owner_user_name='tempest-ImagesOneServerNegativeTestJSON-2130756304-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:10:28Z,user_data=None,user_id='21b4c507f5c443f4b43306c884b1d67f',uuid=30c3d75d-d021-411b-a277-f81ff1f707b8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.380 2 DEBUG nova.network.os_vif_util [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converting VIF {"id": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "address": "fa:16:3e:dc:66:96", "network": {"id": "0a5c95d4-1a77-48f5-83c0-afa976b7583d", "bridge": "br-int", "label": "tempest-ImagesOneServerNegativeTestJSON-1287734888-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "c166ae9e4e0f43d38afaa35966f84b05", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76ff867f-df", "ovs_interfaceid": "76ff867f-df0e-4526-b0c7-6bc2163fd52a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.381 2 DEBUG nova.network.os_vif_util [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.382 2 DEBUG os_vif [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.384 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76ff867f-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.390 2 INFO os_vif [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:66:96,bridge_name='br-int',has_traffic_filtering=True,id=76ff867f-df0e-4526-b0c7-6bc2163fd52a,network=Network(0a5c95d4-1a77-48f5-83c0-afa976b7583d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76ff867f-df')
Oct  7 10:10:31 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [NOTICE]   (300784) : haproxy version is 2.8.14-c23fe91
Oct  7 10:10:31 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [NOTICE]   (300784) : path to executable is /usr/sbin/haproxy
Oct  7 10:10:31 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [WARNING]  (300784) : Exiting Master process...
Oct  7 10:10:31 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [ALERT]    (300784) : Current worker (300786) exited with code 143 (Terminated)
Oct  7 10:10:31 np0005473739 neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d[300762]: [WARNING]  (300784) : All workers exited. Exiting... (0)
Oct  7 10:10:31 np0005473739 systemd[1]: libpod-63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f.scope: Deactivated successfully.
Oct  7 10:10:31 np0005473739 podman[300922]: 2025-10-07 14:10:31.40426372 +0000 UTC m=+0.049612665 container died 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:10:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f-userdata-shm.mount: Deactivated successfully.
Oct  7 10:10:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4c23965c9c9ac11f5345effbb29f6f66907d2d2062094a35123a593e4f89e7ac-merged.mount: Deactivated successfully.
Oct  7 10:10:31 np0005473739 podman[300922]: 2025-10-07 14:10:31.457810997 +0000 UTC m=+0.103159922 container cleanup 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:10:31 np0005473739 systemd[1]: libpod-conmon-63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f.scope: Deactivated successfully.
Oct  7 10:10:31 np0005473739 podman[300969]: 2025-10-07 14:10:31.536180277 +0000 UTC m=+0.052790208 container remove 63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.545 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[efd5f975-2c37-46b7-ada7-3cac08f319fd]: (4, ('Tue Oct  7 02:10:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f)\n63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f\nTue Oct  7 02:10:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d (63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f)\n63d7f54cd84fc0e0895445bb14fc9b14d3e377d304684adb0a7468eb7d1ba71f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.548 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[155ecb57-2372-4455-a4a6-58522bf0df51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.549 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a5c95d4-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:31 np0005473739 kernel: tap0a5c95d4-10: left promiscuous mode
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.575 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[573cfc31-d68d-41d3-9446-a1f667c98588]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.600 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78da40c4-fcc7-4cf9-948f-ebc508a17deb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.602 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac78187c-7e4d-40b7-9600-ba95577f4a3b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1343: 305 pgs: 305 active+clean; 214 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 196 op/s
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.623 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe99e09-5b7e-4a08-954b-f4a220daadde]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 681111, 'reachable_time': 24043, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 300984, 'error': None, 'target': 'ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.626 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-0a5c95d4-1a77-48f5-83c0-afa976b7583d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:31.626 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f17baa-a806-48e3-a424-403aad175a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:31 np0005473739 systemd[1]: run-netns-ovnmeta\x2d0a5c95d4\x2d1a77\x2d48f5\x2d83c0\x2dafa976b7583d.mount: Deactivated successfully.
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.833 2 INFO nova.virt.libvirt.driver [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Deleting instance files /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8_del#033[00m
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.833 2 INFO nova.virt.libvirt.driver [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Deletion of /var/lib/nova/instances/30c3d75d-d021-411b-a277-f81ff1f707b8_del complete#033[00m
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.931 2 INFO nova.compute.manager [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Took 0.89 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.932 2 DEBUG oslo.service.loopingcall [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.932 2 DEBUG nova.compute.manager [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.932 2 DEBUG nova.network.neutron [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.954 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.954 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:10:31 np0005473739 nova_compute[259550]: 2025-10-07 14:10:31.954 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.065 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.065 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011091491258058215 of space, bias 1.0, pg target 0.33274473774174645 quantized to 32 (current 32)
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0010121097056716806 of space, bias 1.0, pg target 0.3036329117015042 quantized to 32 (current 32)
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.339 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.340 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.340 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.341 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.477 2 DEBUG nova.network.neutron [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.508 2 INFO nova.compute.manager [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Took 1.69 seconds to deallocate network for instance.#033[00m
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.591 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.591 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:10:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3592856576' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:10:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:10:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3592856576' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.680 2 DEBUG oslo_concurrency.processutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.917 2 DEBUG nova.network.neutron [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:32 np0005473739 nova_compute[259550]: 2025-10-07 14:10:32.983 2 INFO nova.compute.manager [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Took 1.05 seconds to deallocate network for instance.#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.076 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-unplugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.077 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.077 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.077 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.078 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] No waiting events found dispatching network-vif-unplugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.078 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-unplugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.078 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-deleted-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.078 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.079 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.079 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.080 2 DEBUG oslo_concurrency.lockutils [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.080 2 DEBUG nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] No waiting events found dispatching network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.080 2 WARNING nova.compute.manager [req-7179c28b-88bb-4e2c-9c7d-c3fb492c0db1 req-ee53979f-b0e8-497f-af29-3f66b1ab4d9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received unexpected event network-vif-plugged-76ff867f-df0e-4526-b0c7-6bc2163fd52a for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.082 2 DEBUG nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.082 2 DEBUG oslo_concurrency.lockutils [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.083 2 DEBUG oslo_concurrency.lockutils [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.083 2 DEBUG oslo_concurrency.lockutils [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.083 2 DEBUG nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] No waiting events found dispatching network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.083 2 WARNING nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Received unexpected event network-vif-plugged-6e3a4d42-ab7b-4d31-93af-858be34d84f8 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.084 2 DEBUG nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Received event network-vif-deleted-76ff867f-df0e-4526-b0c7-6bc2163fd52a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.084 2 INFO nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Neutron deleted interface 76ff867f-df0e-4526-b0c7-6bc2163fd52a; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.084 2 DEBUG nova.network.neutron [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:10:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3513709040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.147 2 DEBUG oslo_concurrency.processutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.159 2 DEBUG nova.compute.provider_tree [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.172 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.176 2 DEBUG nova.compute.manager [req-13ba572b-8a9b-4891-9baf-3d8764d83a9c req-535ad52c-8dc1-4071-ba5c-cd9f8cf6f411 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Detach interface failed, port_id=76ff867f-df0e-4526-b0c7-6bc2163fd52a, reason: Instance 30c3d75d-d021-411b-a277-f81ff1f707b8 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.253 2 DEBUG nova.scheduler.client.report [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.366 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.368 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.440 2 INFO nova.scheduler.client.report [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Deleted allocations for instance 9fd9e34f-3492-4726-ab46-1b4bab671dbc#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.447 2 DEBUG oslo_concurrency.processutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1344: 305 pgs: 305 active+clean; 214 MiB data, 461 MiB used, 60 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.3 MiB/s wr, 176 op/s
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.626 2 DEBUG oslo_concurrency.lockutils [None req-b229311a-2484-4893-9736-38005d074bc9 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "9fd9e34f-3492-4726-ab46-1b4bab671dbc" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.523s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:10:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3779656676' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.911 2 DEBUG oslo_concurrency.processutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.918 2 DEBUG nova.compute.provider_tree [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:10:33 np0005473739 nova_compute[259550]: 2025-10-07 14:10:33.987 2 DEBUG nova.scheduler.client.report [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:10:34 np0005473739 nova_compute[259550]: 2025-10-07 14:10:34.089 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:34 np0005473739 nova_compute[259550]: 2025-10-07 14:10:34.146 2 INFO nova.scheduler.client.report [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Deleted allocations for instance 30c3d75d-d021-411b-a277-f81ff1f707b8#033[00m
Oct  7 10:10:34 np0005473739 podman[301030]: 2025-10-07 14:10:34.1620583 +0000 UTC m=+0.125627853 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Oct  7 10:10:34 np0005473739 nova_compute[259550]: 2025-10-07 14:10:34.182 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updating instance_info_cache with network_info: [{"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:34 np0005473739 nova_compute[259550]: 2025-10-07 14:10:34.342 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-20fd8b19-2eb6-4c42-a320-d9d23f6f4912" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:10:34 np0005473739 nova_compute[259550]: 2025-10-07 14:10:34.343 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:10:34 np0005473739 nova_compute[259550]: 2025-10-07 14:10:34.343 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:10:34 np0005473739 nova_compute[259550]: 2025-10-07 14:10:34.344 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:10:34 np0005473739 nova_compute[259550]: 2025-10-07 14:10:34.364 2 DEBUG oslo_concurrency.lockutils [None req-ae27b29e-a804-455c-8345-dab14c7b83ec 21b4c507f5c443f4b43306c884b1d67f c166ae9e4e0f43d38afaa35966f84b05 - - default default] Lock "30c3d75d-d021-411b-a277-f81ff1f707b8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.327s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:35 np0005473739 podman[301058]: 2025-10-07 14:10:35.060641539 +0000 UTC m=+0.051465673 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  7 10:10:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:10:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 6207 writes, 27K keys, 6207 commit groups, 1.0 writes per commit group, ingest: 0.04 GB, 0.02 MB/s#012Cumulative WAL: 6207 writes, 6207 syncs, 1.00 writes per sync, written: 0.04 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1590 writes, 7380 keys, 1590 commit groups, 1.0 writes per commit group, ingest: 9.67 MB, 0.02 MB/s#012Interval WAL: 1590 writes, 1590 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0     81.9      0.41              0.10        16    0.025       0      0       0.0       0.0#012  L6      1/0    8.37 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   3.2    137.9    112.0      0.97              0.32        15    0.064     70K   8353       0.0       0.0#012 Sum      1/0    8.37 MB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   4.2     97.0    103.1      1.37              0.42        31    0.044     70K   8353       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   4.4    126.7    130.7      0.37              0.14        10    0.037     26K   3092       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   0.0    137.9    112.0      0.97              0.32        15    0.064     70K   8353       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     82.5      0.40              0.10        15    0.027       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 2400.0 total, 600.0 interval#012Flush(GB): cumulative 0.033, interval 0.011#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.14 GB write, 0.06 MB/s write, 0.13 GB read, 0.06 MB/s read, 1.4 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.4 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 15.32 MB table_size: 0 occupancy: 18446744073709551615 collections: 5 last_copies: 0 last_secs: 0.000172 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(996,14.75 MB,4.85334%) FilterBlock(32,201.23 KB,0.0646441%) IndexBlock(32,374.61 KB,0.120339%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  7 10:10:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e178 do_prune osdmap full prune enabled
Oct  7 10:10:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e179 e179: 3 total, 3 up, 3 in
Oct  7 10:10:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e179: 3 total, 3 up, 3 in
Oct  7 10:10:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1346: 305 pgs: 305 active+clean; 159 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 48 KiB/s wr, 202 op/s
Oct  7 10:10:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:35 np0005473739 nova_compute[259550]: 2025-10-07 14:10:35.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:36 np0005473739 nova_compute[259550]: 2025-10-07 14:10:36.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:36 np0005473739 nova_compute[259550]: 2025-10-07 14:10:36.847 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:36 np0005473739 nova_compute[259550]: 2025-10-07 14:10:36.848 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:36 np0005473739 nova_compute[259550]: 2025-10-07 14:10:36.848 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:36 np0005473739 nova_compute[259550]: 2025-10-07 14:10:36.849 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:36 np0005473739 nova_compute[259550]: 2025-10-07 14:10:36.849 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:36 np0005473739 nova_compute[259550]: 2025-10-07 14:10:36.850 2 INFO nova.compute.manager [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Terminating instance#033[00m
Oct  7 10:10:36 np0005473739 nova_compute[259550]: 2025-10-07 14:10:36.851 2 DEBUG nova.compute.manager [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:10:36 np0005473739 kernel: tap7f14c952-52 (unregistering): left promiscuous mode
Oct  7 10:10:36 np0005473739 NetworkManager[44949]: <info>  [1759846236.9116] device (tap7f14c952-52): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:10:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:36Z|00233|binding|INFO|Releasing lport 7f14c952-52c5-4957-ae99-09ff563a62a9 from this chassis (sb_readonly=0)
Oct  7 10:10:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:36Z|00234|binding|INFO|Setting lport 7f14c952-52c5-4957-ae99-09ff563a62a9 down in Southbound
Oct  7 10:10:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:36Z|00235|binding|INFO|Removing iface tap7f14c952-52 ovn-installed in OVS
Oct  7 10:10:36 np0005473739 nova_compute[259550]: 2025-10-07 14:10:36.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:36.965 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:35:62:58 10.100.0.12'], port_security=['fa:16:3e:35:62:58 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '20fd8b19-2eb6-4c42-a320-d9d23f6f4912', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '4', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=7f14c952-52c5-4957-ae99-09ff563a62a9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:10:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:36.966 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 7f14c952-52c5-4957-ae99-09ff563a62a9 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb unbound from our chassis#033[00m
Oct  7 10:10:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:36.967 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:10:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:36.968 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6d2e61c2-ccc9-40cb-82af-cf66fc58af9f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:36.969 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace which is not needed anymore#033[00m
Oct  7 10:10:36 np0005473739 nova_compute[259550]: 2025-10-07 14:10:36.974 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:37 np0005473739 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Deactivated successfully.
Oct  7 10:10:37 np0005473739 systemd[1]: machine-qemu\x2d36\x2dinstance\x2d00000020.scope: Consumed 13.885s CPU time.
Oct  7 10:10:37 np0005473739 systemd-machined[214580]: Machine qemu-36-instance-00000020 terminated.
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.094 2 INFO nova.virt.libvirt.driver [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Instance destroyed successfully.#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.095 2 DEBUG nova.objects.instance [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'resources' on Instance uuid 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.114 2 DEBUG nova.virt.libvirt.vif [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:09:57Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-85849554',display_name='tempest-ImagesTestJSON-server-85849554',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-85849554',id=32,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:10:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-3510ii32',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:10:16Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=20fd8b19-2eb6-4c42-a320-d9d23f6f4912,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.116 2 DEBUG nova.network.os_vif_util [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "7f14c952-52c5-4957-ae99-09ff563a62a9", "address": "fa:16:3e:35:62:58", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7f14c952-52", "ovs_interfaceid": "7f14c952-52c5-4957-ae99-09ff563a62a9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:10:37 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [NOTICE]   (299476) : haproxy version is 2.8.14-c23fe91
Oct  7 10:10:37 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [NOTICE]   (299476) : path to executable is /usr/sbin/haproxy
Oct  7 10:10:37 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [WARNING]  (299476) : Exiting Master process...
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.119 2 DEBUG nova.network.os_vif_util [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.120 2 DEBUG os_vif [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:10:37 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [ALERT]    (299476) : Current worker (299478) exited with code 143 (Terminated)
Oct  7 10:10:37 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[299472]: [WARNING]  (299476) : All workers exited. Exiting... (0)
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.122 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7f14c952-52, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:37 np0005473739 systemd[1]: libpod-d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d.scope: Deactivated successfully.
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.129 2 INFO os_vif [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:35:62:58,bridge_name='br-int',has_traffic_filtering=True,id=7f14c952-52c5-4957-ae99-09ff563a62a9,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7f14c952-52')#033[00m
Oct  7 10:10:37 np0005473739 podman[301101]: 2025-10-07 14:10:37.130328043 +0000 UTC m=+0.055156932 container died d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:10:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d-userdata-shm.mount: Deactivated successfully.
Oct  7 10:10:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cdf6c25b6716cbbe88cb0b1eb25df944f3562caf1cdf4795012d2140fef4cae0-merged.mount: Deactivated successfully.
Oct  7 10:10:37 np0005473739 podman[301101]: 2025-10-07 14:10:37.186214932 +0000 UTC m=+0.111043821 container cleanup d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  7 10:10:37 np0005473739 systemd[1]: libpod-conmon-d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d.scope: Deactivated successfully.
Oct  7 10:10:37 np0005473739 podman[301160]: 2025-10-07 14:10:37.259642682 +0000 UTC m=+0.048255209 container remove d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:10:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.266 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[88996f9f-1a2e-4897-bf13-32131c9db4b1]: (4, ('Tue Oct  7 02:10:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d)\nd54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d\nTue Oct  7 02:10:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb (d54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d)\nd54e7a0e62ebc3df8ec12724721e2c4bd6f26536e62d89807d5d925c754d2c8d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.270 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[812ecd78-e7af-4b5e-90b1-f079a8f011f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.271 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:37 np0005473739 kernel: tap9f80456d-d0: left promiscuous mode
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.273 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.296 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a43564e9-dd09-44c1-83cd-a245f1c047d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.326 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[adc525f2-97fc-48b7-92d8-d97402ee3e02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.327 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17fbf39e-9d4c-4521-98ea-aaf55b553d4a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.338 2 DEBUG nova.compute.manager [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-unplugged-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.338 2 DEBUG oslo_concurrency.lockutils [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.338 2 DEBUG oslo_concurrency.lockutils [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.339 2 DEBUG oslo_concurrency.lockutils [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.339 2 DEBUG nova.compute.manager [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] No waiting events found dispatching network-vif-unplugged-7f14c952-52c5-4957-ae99-09ff563a62a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.339 2 DEBUG nova.compute.manager [req-5858ada1-2c5e-4b43-9f75-cb7cf6149fe0 req-f0eb64f8-3db5-401e-88b4-5f015aac7cae 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-unplugged-7f14c952-52c5-4957-ae99-09ff563a62a9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:10:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.344 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5218af23-35e1-432a-b902-4a40d8f56a6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 678942, 'reachable_time': 36542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 301175, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:37 np0005473739 systemd[1]: run-netns-ovnmeta\x2d9f80456d\x2dd8a6\x2d4e61\x2db6cb\x2db509cd650dbb.mount: Deactivated successfully.
Oct  7 10:10:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.349 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:10:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:37.349 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f71c98-2c1c-4408-9c58-ae20730c554d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.557 2 INFO nova.virt.libvirt.driver [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Deleting instance files /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912_del#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.558 2 INFO nova.virt.libvirt.driver [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Deletion of /var/lib/nova/instances/20fd8b19-2eb6-4c42-a320-d9d23f6f4912_del complete#033[00m
Oct  7 10:10:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1347: 305 pgs: 305 active+clean; 159 MiB data, 440 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 48 KiB/s wr, 202 op/s
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.633 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846222.6328354, 75fea627-8d48-4772-86a2-ca6bc47ed597 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.634 2 INFO nova.compute.manager [-] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.989 2 INFO nova.compute.manager [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Took 1.14 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.989 2 DEBUG oslo.service.loopingcall [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.990 2 DEBUG nova.compute.manager [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:10:37 np0005473739 nova_compute[259550]: 2025-10-07 14:10:37.990 2 DEBUG nova.network.neutron [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:10:38 np0005473739 nova_compute[259550]: 2025-10-07 14:10:38.058 2 DEBUG nova.compute.manager [None req-60e0e5a4-c64a-424b-8a6f-126e49b5d641 - - - - - -] [instance: 75fea627-8d48-4772-86a2-ca6bc47ed597] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.234 2 DEBUG nova.network.neutron [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.485 2 INFO nova.compute.manager [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Took 1.50 seconds to deallocate network for instance.#033[00m
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.564 2 DEBUG nova.compute.manager [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.565 2 DEBUG oslo_concurrency.lockutils [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.565 2 DEBUG oslo_concurrency.lockutils [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.566 2 DEBUG oslo_concurrency.lockutils [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.566 2 DEBUG nova.compute.manager [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] No waiting events found dispatching network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.566 2 WARNING nova.compute.manager [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received unexpected event network-vif-plugged-7f14c952-52c5-4957-ae99-09ff563a62a9 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.567 2 DEBUG nova.compute.manager [req-c07cda13-fe41-4699-bcc5-67b16ad94f4d req-2b09246d-fcd1-4316-825e-b487717e6182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Received event network-vif-deleted-7f14c952-52c5-4957-ae99-09ff563a62a9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1348: 305 pgs: 305 active+clean; 67 MiB data, 389 MiB used, 60 GiB / 60 GiB avail; 3.4 MiB/s rd, 22 KiB/s wr, 227 op/s
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.625 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.625 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:39 np0005473739 nova_compute[259550]: 2025-10-07 14:10:39.672 2 DEBUG oslo_concurrency.processutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:10:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667987412' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:10:40 np0005473739 nova_compute[259550]: 2025-10-07 14:10:40.128 2 DEBUG oslo_concurrency.processutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:40 np0005473739 nova_compute[259550]: 2025-10-07 14:10:40.136 2 DEBUG nova.compute.provider_tree [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:10:40 np0005473739 nova_compute[259550]: 2025-10-07 14:10:40.198 2 DEBUG nova.scheduler.client.report [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:10:40 np0005473739 nova_compute[259550]: 2025-10-07 14:10:40.237 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:40 np0005473739 nova_compute[259550]: 2025-10-07 14:10:40.298 2 INFO nova.scheduler.client.report [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Deleted allocations for instance 20fd8b19-2eb6-4c42-a320-d9d23f6f4912#033[00m
Oct  7 10:10:40 np0005473739 nova_compute[259550]: 2025-10-07 14:10:40.668 2 DEBUG oslo_concurrency.lockutils [None req-bbb32f1f-7f4b-43c6-8597-cced3e7f7f54 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "20fd8b19-2eb6-4c42-a320-d9d23f6f4912" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e179 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e179 do_prune osdmap full prune enabled
Oct  7 10:10:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e180 e180: 3 total, 3 up, 3 in
Oct  7 10:10:40 np0005473739 nova_compute[259550]: 2025-10-07 14:10:40.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:40 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e180: 3 total, 3 up, 3 in
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.166 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.168 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.199 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.280 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.280 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.287 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.287 2 INFO nova.compute.claims [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.437 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:10:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1350: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 3.0 MiB/s rd, 6.7 KiB/s wr, 245 op/s
Oct  7 10:10:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:10:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3171583419' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.893 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.900 2 DEBUG nova.compute.provider_tree [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.949 2 DEBUG nova.scheduler.client.report [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.988 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:10:41 np0005473739 nova_compute[259550]: 2025-10-07 14:10:41.989 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.054 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.055 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.145 2 INFO nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.190 2 DEBUG nova.policy [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a27a7178326846e69ab9eaae7c70b274', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.338 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.735 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.738 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.738 2 INFO nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Creating image(s)
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.760 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.782 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.804 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.808 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.885 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.886 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.887 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.887 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.914 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:10:42 np0005473739 nova_compute[259550]: 2025-10-07 14:10:42.919 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ade69915-c1e6-4b99-b8e6-8031c7f04049_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:10:43 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8e9c7d61-31bd-4982-87ef-55537c5c580c does not exist
Oct  7 10:10:43 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1fd5de01-3e39-4529-990e-f4c6a884b8cf does not exist
Oct  7 10:10:43 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8a6053b5-6adb-4a82-9d66-35c8f6a28269 does not exist
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:10:43 np0005473739 nova_compute[259550]: 2025-10-07 14:10:43.356 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ade69915-c1e6-4b99-b8e6-8031c7f04049_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.436s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:10:43 np0005473739 nova_compute[259550]: 2025-10-07 14:10:43.384 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Successfully created port: 3f4a637c-ddfa-49f0-b263-224957faec29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:10:43 np0005473739 nova_compute[259550]: 2025-10-07 14:10:43.421 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] resizing rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:10:43 np0005473739 nova_compute[259550]: 2025-10-07 14:10:43.515 2 DEBUG nova.objects.instance [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'migration_context' on Instance uuid ade69915-c1e6-4b99-b8e6-8031c7f04049 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:10:43 np0005473739 nova_compute[259550]: 2025-10-07 14:10:43.538 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:10:43 np0005473739 nova_compute[259550]: 2025-10-07 14:10:43.539 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Ensure instance console log exists: /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:10:43 np0005473739 nova_compute[259550]: 2025-10-07 14:10:43.540 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:10:43 np0005473739 nova_compute[259550]: 2025-10-07 14:10:43.540 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:10:43 np0005473739 nova_compute[259550]: 2025-10-07 14:10:43.541 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:10:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1351: 305 pgs: 305 active+clean; 41 MiB data, 372 MiB used, 60 GiB / 60 GiB avail; 60 KiB/s rd, 3.4 KiB/s wr, 86 op/s
Oct  7 10:10:43 np0005473739 podman[301658]: 2025-10-07 14:10:43.64942099 +0000 UTC m=+0.037555038 container create 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 10:10:43 np0005473739 systemd[1]: Started libpod-conmon-365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce.scope.
Oct  7 10:10:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:10:43 np0005473739 podman[301658]: 2025-10-07 14:10:43.63417621 +0000 UTC m=+0.022310278 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:10:43 np0005473739 podman[301658]: 2025-10-07 14:10:43.746338228 +0000 UTC m=+0.134472306 container init 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 10:10:43 np0005473739 podman[301658]: 2025-10-07 14:10:43.755617352 +0000 UTC m=+0.143751400 container start 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 10:10:43 np0005473739 beautiful_bell[301675]: 167 167
Oct  7 10:10:43 np0005473739 systemd[1]: libpod-365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce.scope: Deactivated successfully.
Oct  7 10:10:43 np0005473739 podman[301658]: 2025-10-07 14:10:43.768008167 +0000 UTC m=+0.156142235 container attach 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 10:10:43 np0005473739 podman[301658]: 2025-10-07 14:10:43.768722896 +0000 UTC m=+0.156856944 container died 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 10:10:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f67d2389a839e8824a6e2278bc254cd2a95fd06f2f8e538bdfc3a91b4928b99f-merged.mount: Deactivated successfully.
Oct  7 10:10:43 np0005473739 podman[301658]: 2025-10-07 14:10:43.84498493 +0000 UTC m=+0.233118978 container remove 365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:10:43 np0005473739 systemd[1]: libpod-conmon-365c479e7389555fc7ff437eb5ce7f11a149e9ab4b8c3dc726f48306816dc9ce.scope: Deactivated successfully.
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:10:43 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:10:44 np0005473739 podman[301699]: 2025-10-07 14:10:44.029193763 +0000 UTC m=+0.059353441 container create 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default)
Oct  7 10:10:44 np0005473739 systemd[1]: Started libpod-conmon-70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341.scope.
Oct  7 10:10:44 np0005473739 podman[301699]: 2025-10-07 14:10:43.997165811 +0000 UTC m=+0.027325509 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:10:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:10:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:44 np0005473739 podman[301699]: 2025-10-07 14:10:44.141233148 +0000 UTC m=+0.171392846 container init 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 10:10:44 np0005473739 podman[301699]: 2025-10-07 14:10:44.150997595 +0000 UTC m=+0.181157273 container start 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 10:10:44 np0005473739 podman[301699]: 2025-10-07 14:10:44.154304271 +0000 UTC m=+0.184463969 container attach 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 10:10:44 np0005473739 nova_compute[259550]: 2025-10-07 14:10:44.824 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Successfully updated port: 3f4a637c-ddfa-49f0-b263-224957faec29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:10:44 np0005473739 nova_compute[259550]: 2025-10-07 14:10:44.849 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:10:44 np0005473739 nova_compute[259550]: 2025-10-07 14:10:44.849 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquired lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:10:44 np0005473739 nova_compute[259550]: 2025-10-07 14:10:44.849 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:10:44 np0005473739 nova_compute[259550]: 2025-10-07 14:10:44.932 2 DEBUG nova.compute.manager [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received event network-changed-3f4a637c-ddfa-49f0-b263-224957faec29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:44 np0005473739 nova_compute[259550]: 2025-10-07 14:10:44.933 2 DEBUG nova.compute.manager [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Refreshing instance network info cache due to event network-changed-3f4a637c-ddfa-49f0-b263-224957faec29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:10:44 np0005473739 nova_compute[259550]: 2025-10-07 14:10:44.933 2 DEBUG oslo_concurrency.lockutils [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.008 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.353 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846230.3524852, 9fd9e34f-3492-4726-ab46-1b4bab671dbc => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.354 2 INFO nova.compute.manager [-] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:10:45 np0005473739 happy_lamport[301715]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:10:45 np0005473739 happy_lamport[301715]: --> relative data size: 1.0
Oct  7 10:10:45 np0005473739 happy_lamport[301715]: --> All data devices are unavailable
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.371 2 DEBUG nova.compute.manager [None req-9d290afd-4bda-4424-a241-fd1607a95526 - - - - - -] [instance: 9fd9e34f-3492-4726-ab46-1b4bab671dbc] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:45 np0005473739 systemd[1]: libpod-70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341.scope: Deactivated successfully.
Oct  7 10:10:45 np0005473739 systemd[1]: libpod-70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341.scope: Consumed 1.141s CPU time.
Oct  7 10:10:45 np0005473739 podman[301699]: 2025-10-07 14:10:45.401367161 +0000 UTC m=+1.431526869 container died 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:10:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9c1215145b27a16fc1a943db5afc1d7bacf5f6e6892e883ea93251d82314325d-merged.mount: Deactivated successfully.
Oct  7 10:10:45 np0005473739 podman[301699]: 2025-10-07 14:10:45.468784583 +0000 UTC m=+1.498944261 container remove 70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_lamport, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 10:10:45 np0005473739 systemd[1]: libpod-conmon-70567af0335b0444f10cccdf99f5855f116d594fcbc815a0f21e9a5bae0d3341.scope: Deactivated successfully.
Oct  7 10:10:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1352: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.639 2 DEBUG nova.network.neutron [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Updating instance_info_cache with network_info: [{"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.665 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Releasing lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.665 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Instance network_info: |[{"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.666 2 DEBUG oslo_concurrency.lockutils [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.666 2 DEBUG nova.network.neutron [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Refreshing network info cache for port 3f4a637c-ddfa-49f0-b263-224957faec29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.668 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Start _get_guest_xml network_info=[{"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.675 2 WARNING nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.687 2 DEBUG nova.virt.libvirt.host [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.689 2 DEBUG nova.virt.libvirt.host [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.695 2 DEBUG nova.virt.libvirt.host [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.696 2 DEBUG nova.virt.libvirt.host [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.697 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.697 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.698 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.699 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.699 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.700 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.700 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.701 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.701 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.702 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.702 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.703 2 DEBUG nova.virt.hardware [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.709 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:45 np0005473739 nova_compute[259550]: 2025-10-07 14:10:45.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:10:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3474544031' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:10:46 np0005473739 podman[301920]: 2025-10-07 14:10:46.198128205 +0000 UTC m=+0.059979058 container create f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.200 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.233 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.241 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:46 np0005473739 systemd[1]: Started libpod-conmon-f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540.scope.
Oct  7 10:10:46 np0005473739 podman[301920]: 2025-10-07 14:10:46.16711589 +0000 UTC m=+0.028966803 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.279 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846231.278565, 30c3d75d-d021-411b-a277-f81ff1f707b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.281 2 INFO nova.compute.manager [-] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:10:46 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:10:46 np0005473739 podman[301920]: 2025-10-07 14:10:46.323379646 +0000 UTC m=+0.185230479 container init f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:10:46 np0005473739 podman[301920]: 2025-10-07 14:10:46.333398721 +0000 UTC m=+0.195249544 container start f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 10:10:46 np0005473739 trusting_shockley[301957]: 167 167
Oct  7 10:10:46 np0005473739 systemd[1]: libpod-f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540.scope: Deactivated successfully.
Oct  7 10:10:46 np0005473739 podman[301920]: 2025-10-07 14:10:46.3440592 +0000 UTC m=+0.205910023 container attach f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 10:10:46 np0005473739 podman[301920]: 2025-10-07 14:10:46.344483212 +0000 UTC m=+0.206334035 container died f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:10:46 np0005473739 systemd[1]: var-lib-containers-storage-overlay-11554c8187328bfa4866e24a142c7ce5304a918dc86495f24e5ab4dc6128e285-merged.mount: Deactivated successfully.
Oct  7 10:10:46 np0005473739 podman[301920]: 2025-10-07 14:10:46.439775147 +0000 UTC m=+0.301625960 container remove f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_shockley, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:10:46 np0005473739 systemd[1]: libpod-conmon-f37b571df4622975a76000f03cf5500c929a38b2f588bbeb9d615cce3b978540.scope: Deactivated successfully.
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.622 2 DEBUG nova.compute.manager [None req-3c22aed7-3298-4543-857d-808663300e77 - - - - - -] [instance: 30c3d75d-d021-411b-a277-f81ff1f707b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:46 np0005473739 podman[301999]: 2025-10-07 14:10:46.688448153 +0000 UTC m=+0.057095212 container create fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 10:10:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:10:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3549352093' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.717 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.719 2 DEBUG nova.virt.libvirt.vif [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1443725263',display_name='tempest-ImagesTestJSON-server-1443725263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1443725263',id=35,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-pcdh9zoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:42Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=ade69915-c1e6-4b99-b8e6-8031c7f04049,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.720 2 DEBUG nova.network.os_vif_util [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.720 2 DEBUG nova.network.os_vif_util [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:f0:a5,bridge_name='br-int',has_traffic_filtering=True,id=3f4a637c-ddfa-49f0-b263-224957faec29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f4a637c-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.722 2 DEBUG nova.objects.instance [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lazy-loading 'pci_devices' on Instance uuid ade69915-c1e6-4b99-b8e6-8031c7f04049 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:10:46 np0005473739 systemd[1]: Started libpod-conmon-fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850.scope.
Oct  7 10:10:46 np0005473739 podman[301999]: 2025-10-07 14:10:46.662966673 +0000 UTC m=+0.031613762 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:10:46 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.780 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  <uuid>ade69915-c1e6-4b99-b8e6-8031c7f04049</uuid>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  <name>instance-00000023</name>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <nova:name>tempest-ImagesTestJSON-server-1443725263</nova:name>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:10:45</nova:creationTime>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <nova:user uuid="a27a7178326846e69ab9eaae7c70b274">tempest-ImagesTestJSON-194092869-project-member</nova:user>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <nova:project uuid="1a6abfd8cc6f4507886ed10873d1f95c">tempest-ImagesTestJSON-194092869</nova:project>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <nova:port uuid="3f4a637c-ddfa-49f0-b263-224957faec29">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <entry name="serial">ade69915-c1e6-4b99-b8e6-8031c7f04049</entry>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <entry name="uuid">ade69915-c1e6-4b99-b8e6-8031c7f04049</entry>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ade69915-c1e6-4b99-b8e6-8031c7f04049_disk">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:10:46 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97eb45926d9350bedb3b4379deea2431b2c80f627c2461d79f3c293bb9fa2d07/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:a9:f0:a5"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <target dev="tap3f4a637c-dd"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/console.log" append="off"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:10:46 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:10:46 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:10:46 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:10:46 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.782 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Preparing to wait for external event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.783 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.783 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.783 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.784 2 DEBUG nova.virt.libvirt.vif [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:10:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ImagesTestJSON-server-1443725263',display_name='tempest-ImagesTestJSON-server-1443725263',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-imagestestjson-server-1443725263',id=35,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a6abfd8cc6f4507886ed10873d1f95c',ramdisk_id='',reservation_id='r-pcdh9zoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ImagesTestJSON-194092869',owner_user_name='tempest-ImagesTestJSON-194092869-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:10:42Z,user_data=None,user_id='a27a7178326846e69ab9eaae7c70b274',uuid=ade69915-c1e6-4b99-b8e6-8031c7f04049,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.784 2 DEBUG nova.network.os_vif_util [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converting VIF {"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.785 2 DEBUG nova.network.os_vif_util [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a9:f0:a5,bridge_name='br-int',has_traffic_filtering=True,id=3f4a637c-ddfa-49f0-b263-224957faec29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f4a637c-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.785 2 DEBUG os_vif [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:f0:a5,bridge_name='br-int',has_traffic_filtering=True,id=3f4a637c-ddfa-49f0-b263-224957faec29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f4a637c-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.786 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.787 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:10:46 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97eb45926d9350bedb3b4379deea2431b2c80f627c2461d79f3c293bb9fa2d07/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:46 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97eb45926d9350bedb3b4379deea2431b2c80f627c2461d79f3c293bb9fa2d07/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.792 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3f4a637c-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:46 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97eb45926d9350bedb3b4379deea2431b2c80f627c2461d79f3c293bb9fa2d07/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.793 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3f4a637c-dd, col_values=(('external_ids', {'iface-id': '3f4a637c-ddfa-49f0-b263-224957faec29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a9:f0:a5', 'vm-uuid': 'ade69915-c1e6-4b99-b8e6-8031c7f04049'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:46 np0005473739 NetworkManager[44949]: <info>  [1759846246.7971] manager: (tap3f4a637c-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/116)
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:10:46 np0005473739 podman[301999]: 2025-10-07 14:10:46.805386857 +0000 UTC m=+0.174033916 container init fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:46 np0005473739 nova_compute[259550]: 2025-10-07 14:10:46.809 2 INFO os_vif [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a9:f0:a5,bridge_name='br-int',has_traffic_filtering=True,id=3f4a637c-ddfa-49f0-b263-224957faec29,network=Network(9f80456d-d8a6-4e61-b6cb-b509cd650dbb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3f4a637c-dd')#033[00m
Oct  7 10:10:46 np0005473739 podman[301999]: 2025-10-07 14:10:46.813735047 +0000 UTC m=+0.182382086 container start fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct  7 10:10:46 np0005473739 podman[301999]: 2025-10-07 14:10:46.828789142 +0000 UTC m=+0.197436181 container attach fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 10:10:47 np0005473739 nova_compute[259550]: 2025-10-07 14:10:47.042 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:10:47 np0005473739 nova_compute[259550]: 2025-10-07 14:10:47.045 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:10:47 np0005473739 nova_compute[259550]: 2025-10-07 14:10:47.045 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No VIF found with MAC fa:16:3e:a9:f0:a5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:10:47 np0005473739 nova_compute[259550]: 2025-10-07 14:10:47.048 2 INFO nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Using config drive#033[00m
Oct  7 10:10:47 np0005473739 nova_compute[259550]: 2025-10-07 14:10:47.100 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]: {
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:    "0": [
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:        {
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "devices": [
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "/dev/loop3"
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            ],
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_name": "ceph_lv0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_size": "21470642176",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "name": "ceph_lv0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "tags": {
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.cluster_name": "ceph",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.crush_device_class": "",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.encrypted": "0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.osd_id": "0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.type": "block",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.vdo": "0"
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            },
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "type": "block",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "vg_name": "ceph_vg0"
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:        }
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:    ],
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:    "1": [
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:        {
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "devices": [
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "/dev/loop4"
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            ],
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_name": "ceph_lv1",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_size": "21470642176",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "name": "ceph_lv1",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "tags": {
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.cluster_name": "ceph",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.crush_device_class": "",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.encrypted": "0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.osd_id": "1",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.type": "block",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.vdo": "0"
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            },
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "type": "block",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "vg_name": "ceph_vg1"
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:        }
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:    ],
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:    "2": [
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:        {
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "devices": [
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "/dev/loop5"
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            ],
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_name": "ceph_lv2",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_size": "21470642176",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "name": "ceph_lv2",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "tags": {
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.cluster_name": "ceph",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.crush_device_class": "",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.encrypted": "0",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.osd_id": "2",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.type": "block",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:                "ceph.vdo": "0"
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            },
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "type": "block",
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:            "vg_name": "ceph_vg2"
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:        }
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]:    ]
Oct  7 10:10:47 np0005473739 intelligent_shirley[302017]: }
Oct  7 10:10:47 np0005473739 systemd[1]: libpod-fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850.scope: Deactivated successfully.
Oct  7 10:10:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1353: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 61 KiB/s rd, 2.1 MiB/s wr, 90 op/s
Oct  7 10:10:47 np0005473739 podman[302047]: 2025-10-07 14:10:47.642583413 +0000 UTC m=+0.033563133 container died fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:10:47 np0005473739 nova_compute[259550]: 2025-10-07 14:10:47.688 2 DEBUG nova.network.neutron [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Updated VIF entry in instance network info cache for port 3f4a637c-ddfa-49f0-b263-224957faec29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:10:47 np0005473739 nova_compute[259550]: 2025-10-07 14:10:47.690 2 DEBUG nova.network.neutron [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Updating instance_info_cache with network_info: [{"id": "3f4a637c-ddfa-49f0-b263-224957faec29", "address": "fa:16:3e:a9:f0:a5", "network": {"id": "9f80456d-d8a6-4e61-b6cb-b509cd650dbb", "bridge": "br-int", "label": "tempest-ImagesTestJSON-385162121-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a6abfd8cc6f4507886ed10873d1f95c", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3f4a637c-dd", "ovs_interfaceid": "3f4a637c-ddfa-49f0-b263-224957faec29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:10:47 np0005473739 nova_compute[259550]: 2025-10-07 14:10:47.728 2 DEBUG oslo_concurrency.lockutils [req-6fff1c39-337d-4c0f-8682-3ead0600bc17 req-4772288d-4346-4eb9-b561-1cb7b01e5b0c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-ade69915-c1e6-4b99-b8e6-8031c7f04049" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:10:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-97eb45926d9350bedb3b4379deea2431b2c80f627c2461d79f3c293bb9fa2d07-merged.mount: Deactivated successfully.
Oct  7 10:10:47 np0005473739 nova_compute[259550]: 2025-10-07 14:10:47.883 2 INFO nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Creating config drive at /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config#033[00m
Oct  7 10:10:47 np0005473739 nova_compute[259550]: 2025-10-07 14:10:47.888 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntermyhp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:48 np0005473739 nova_compute[259550]: 2025-10-07 14:10:48.047 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpntermyhp" returned: 0 in 0.159s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:48 np0005473739 podman[302047]: 2025-10-07 14:10:48.076415206 +0000 UTC m=+0.467394926 container remove fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_shirley, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:10:48 np0005473739 systemd[1]: libpod-conmon-fdd59f3a67f7d60279e4529b06cabe68319e5063c267c94ed7204da48ecea850.scope: Deactivated successfully.
Oct  7 10:10:48 np0005473739 nova_compute[259550]: 2025-10-07 14:10:48.127 2 DEBUG nova.storage.rbd_utils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] rbd image ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:10:48 np0005473739 nova_compute[259550]: 2025-10-07 14:10:48.132 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:10:48 np0005473739 podman[302244]: 2025-10-07 14:10:48.753867194 +0000 UTC m=+0.027283738 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:10:48 np0005473739 podman[302244]: 2025-10-07 14:10:48.975343665 +0000 UTC m=+0.248760149 container create 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:10:48 np0005473739 nova_compute[259550]: 2025-10-07 14:10:48.986 2 DEBUG oslo_concurrency.processutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config ade69915-c1e6-4b99-b8e6-8031c7f04049_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.854s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:10:48 np0005473739 nova_compute[259550]: 2025-10-07 14:10:48.989 2 INFO nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Deleting local config drive /var/lib/nova/instances/ade69915-c1e6-4b99-b8e6-8031c7f04049/disk.config because it was imported into RBD.#033[00m
Oct  7 10:10:49 np0005473739 kernel: tap3f4a637c-dd: entered promiscuous mode
Oct  7 10:10:49 np0005473739 NetworkManager[44949]: <info>  [1759846249.0672] manager: (tap3f4a637c-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/117)
Oct  7 10:10:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:49Z|00236|binding|INFO|Claiming lport 3f4a637c-ddfa-49f0-b263-224957faec29 for this chassis.
Oct  7 10:10:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:49Z|00237|binding|INFO|3f4a637c-ddfa-49f0-b263-224957faec29: Claiming fa:16:3e:a9:f0:a5 10.100.0.4
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.085 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a9:f0:a5 10.100.0.4'], port_security=['fa:16:3e:a9:f0:a5 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'ade69915-c1e6-4b99-b8e6-8031c7f04049', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a6abfd8cc6f4507886ed10873d1f95c', 'neutron:revision_number': '2', 'neutron:security_group_ids': '407c951d-89f8-4ecd-9c4f-22770721088e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd54fd3b-aa1b-4c47-bd66-2e5553ec4906, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3f4a637c-ddfa-49f0-b263-224957faec29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.087 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3f4a637c-ddfa-49f0-b263-224957faec29 in datapath 9f80456d-d8a6-4e61-b6cb-b509cd650dbb bound to our chassis#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.089 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9f80456d-d8a6-4e61-b6cb-b509cd650dbb#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.105 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1cfa3292-e8ff-479a-9824-05a985046dfc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.107 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9f80456d-d1 in ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:10:49 np0005473739 systemd-udevd[302273]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.108 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9f80456d-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.109 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[00566a89-ba14-4a93-8cf2-ebbe3c15c2a0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.109 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1b6d4452-661e-4a24-9216-188ee97698d6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 NetworkManager[44949]: <info>  [1759846249.1297] device (tap3f4a637c-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:10:49 np0005473739 NetworkManager[44949]: <info>  [1759846249.1312] device (tap3f4a637c-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.130 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[58938916-69d7-4180-ae4a-59d0901b37de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 systemd[1]: Started libpod-conmon-701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821.scope.
Oct  7 10:10:49 np0005473739 systemd-machined[214580]: New machine qemu-39-instance-00000023.
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.166 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f5eb224a-6397-4949-84d8-bfa864d613c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 systemd[1]: Started Virtual Machine qemu-39-instance-00000023.
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:49 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:10:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:49Z|00238|binding|INFO|Setting lport 3f4a637c-ddfa-49f0-b263-224957faec29 ovn-installed in OVS
Oct  7 10:10:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:49Z|00239|binding|INFO|Setting lport 3f4a637c-ddfa-49f0-b263-224957faec29 up in Southbound
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.205 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5a4143e7-a61a-4452-ba86-8180e5319f91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 NetworkManager[44949]: <info>  [1759846249.2151] manager: (tap9f80456d-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/118)
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.213 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ef96bb8-52bc-45f8-8e6c-a95b318b181d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.254 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[be9f2b71-048a-4150-b8e0-197cf587a706]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.257 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f5b76f9c-fd55-43bc-8b90-15cac15fa3a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 podman[302244]: 2025-10-07 14:10:49.272096915 +0000 UTC m=+0.545513379 container init 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 10:10:49 np0005473739 podman[302244]: 2025-10-07 14:10:49.282194182 +0000 UTC m=+0.555610626 container start 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 10:10:49 np0005473739 vigilant_ritchie[302279]: 167 167
Oct  7 10:10:49 np0005473739 systemd[1]: libpod-701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821.scope: Deactivated successfully.
Oct  7 10:10:49 np0005473739 NetworkManager[44949]: <info>  [1759846249.2946] device (tap9f80456d-d0): carrier: link connected
Oct  7 10:10:49 np0005473739 podman[302244]: 2025-10-07 14:10:49.297652788 +0000 UTC m=+0.571069262 container attach 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 10:10:49 np0005473739 podman[302244]: 2025-10-07 14:10:49.299851245 +0000 UTC m=+0.573267699 container died 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.300 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d3f9b031-ded8-432f-8573-f29f17d974aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.320 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[007583c6-bb93-4503-aa24-415f95ceddaf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683285, 'reachable_time': 33985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 302312, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 systemd[1]: var-lib-containers-storage-overlay-239a97716585be1fdfb668d9f1ff41246e43d97d20ec68ef553d16d5b3631d8e-merged.mount: Deactivated successfully.
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.344 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5fba90da-57b6-4319-9d97-35e9d20a91d7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe36:18ea'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 683285, 'tstamp': 683285}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 302320, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 podman[302244]: 2025-10-07 14:10:49.355998861 +0000 UTC m=+0.629415325 container remove 701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_ritchie, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.366 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4abc2b2f-eab3-46cc-a65c-f1e56813a24a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9f80456d-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:36:18:ea'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 77], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 683285, 'reachable_time': 33985, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 302324, 'error': None, 'target': 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 systemd[1]: libpod-conmon-701ca9aceb7755af3ffda173f28f26cac5c4f18467d3b80c397b212e58df9821.scope: Deactivated successfully.
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.403 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[65c2f072-d090-4e7d-b6c3-f9b90b4728a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.475 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[029d7506-d488-47c5-9b8c-895af94c1896]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.477 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f80456d-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.477 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.477 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f80456d-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:49 np0005473739 kernel: tap9f80456d-d0: entered promiscuous mode
Oct  7 10:10:49 np0005473739 NetworkManager[44949]: <info>  [1759846249.4807] manager: (tap9f80456d-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/119)
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.483 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9f80456d-d0, col_values=(('external_ids', {'iface-id': 'aff8269b-7a34-4fc6-ae31-f73de236b2d6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:10:49Z|00240|binding|INFO|Releasing lport aff8269b-7a34-4fc6-ae31-f73de236b2d6 from this chassis (sb_readonly=0)
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.485 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.486 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d82194f0-3cc7-45b7-b232-a67fe8507930]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.487 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.pid.haproxy
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 9f80456d-d8a6-4e61-b6cb-b509cd650dbb
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:10:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:10:49.488 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'env', 'PROCESS_TAG=haproxy-9f80456d-d8a6-4e61-b6cb-b509cd650dbb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9f80456d-d8a6-4e61-b6cb-b509cd650dbb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:49 np0005473739 podman[302339]: 2025-10-07 14:10:49.563221718 +0000 UTC m=+0.054721929 container create 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:10:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1354: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 32 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Oct  7 10:10:49 np0005473739 systemd[1]: Started libpod-conmon-3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895.scope.
Oct  7 10:10:49 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:10:49 np0005473739 podman[302339]: 2025-10-07 14:10:49.542785821 +0000 UTC m=+0.034286052 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:10:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca76e4db4c88349d0faec3a83404c284b199856d9ce9753cb1b24e6ef7e9c052/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca76e4db4c88349d0faec3a83404c284b199856d9ce9753cb1b24e6ef7e9c052/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca76e4db4c88349d0faec3a83404c284b199856d9ce9753cb1b24e6ef7e9c052/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ca76e4db4c88349d0faec3a83404c284b199856d9ce9753cb1b24e6ef7e9c052/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:49 np0005473739 podman[302339]: 2025-10-07 14:10:49.672452809 +0000 UTC m=+0.163953020 container init 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 10:10:49 np0005473739 podman[302339]: 2025-10-07 14:10:49.680682816 +0000 UTC m=+0.172183027 container start 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:10:49 np0005473739 podman[302339]: 2025-10-07 14:10:49.684629539 +0000 UTC m=+0.176129750 container attach 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.913 2 DEBUG nova.compute.manager [req-b972e6f0-cd93-43f8-8ac0-c1c0fe3b12df req-eb10c370-7557-47d0-842b-818da860ab23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.915 2 DEBUG oslo_concurrency.lockutils [req-b972e6f0-cd93-43f8-8ac0-c1c0fe3b12df req-eb10c370-7557-47d0-842b-818da860ab23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.915 2 DEBUG oslo_concurrency.lockutils [req-b972e6f0-cd93-43f8-8ac0-c1c0fe3b12df req-eb10c370-7557-47d0-842b-818da860ab23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.915 2 DEBUG oslo_concurrency.lockutils [req-b972e6f0-cd93-43f8-8ac0-c1c0fe3b12df req-eb10c370-7557-47d0-842b-818da860ab23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:49 np0005473739 nova_compute[259550]: 2025-10-07 14:10:49.916 2 DEBUG nova.compute.manager [req-b972e6f0-cd93-43f8-8ac0-c1c0fe3b12df req-eb10c370-7557-47d0-842b-818da860ab23 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Processing event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:10:49 np0005473739 podman[302427]: 2025-10-07 14:10:49.951823613 +0000 UTC m=+0.080605380 container create 67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:10:50 np0005473739 systemd[1]: Started libpod-conmon-67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154.scope.
Oct  7 10:10:50 np0005473739 podman[302427]: 2025-10-07 14:10:49.91824433 +0000 UTC m=+0.047026187 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:10:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:10:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc49fbb0f3c7ff3dc9220d2764fd372dbc613429af02b3304af3c7bad83bf708/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:10:50 np0005473739 podman[302427]: 2025-10-07 14:10:50.060432547 +0000 UTC m=+0.189214374 container init 67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:10:50 np0005473739 podman[302427]: 2025-10-07 14:10:50.066881707 +0000 UTC m=+0.195663484 container start 67aadd87c0dea632c211ced9243f4d583d8d061cd4ebb81c028ccce2ae71f154 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:10:50 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[302442]: [NOTICE]   (302446) : New worker (302448) forked
Oct  7 10:10:50 np0005473739 neutron-haproxy-ovnmeta-9f80456d-d8a6-4e61-b6cb-b509cd650dbb[302442]: [NOTICE]   (302446) : Loading success.
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.177 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.179 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846250.1767797, ade69915-c1e6-4b99-b8e6-8031c7f04049 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.179 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] VM Started (Lifecycle Event)#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.186 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.193 2 INFO nova.virt.libvirt.driver [-] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Instance spawned successfully.#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.194 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.230 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.232 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.232 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.233 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.233 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.234 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.234 2 DEBUG nova.virt.libvirt.driver [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.241 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.276 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.277 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846250.1779625, ade69915-c1e6-4b99-b8e6-8031c7f04049 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.277 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.302 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.306 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846250.1860697, ade69915-c1e6-4b99-b8e6-8031c7f04049 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.307 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.315 2 INFO nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Took 7.58 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.315 2 DEBUG nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.327 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.330 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.363 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.393 2 INFO nova.compute.manager [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Took 9.14 seconds to build instance.#033[00m
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.408 2 DEBUG oslo_concurrency.lockutils [None req-c9ba20da-719c-4d54-aef8-314062671eae a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]: {
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "osd_id": 2,
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "type": "bluestore"
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:    },
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "osd_id": 1,
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "type": "bluestore"
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:    },
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "osd_id": 0,
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:        "type": "bluestore"
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]:    }
Oct  7 10:10:50 np0005473739 thirsty_cori[302394]: }
Oct  7 10:10:50 np0005473739 systemd[1]: libpod-3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895.scope: Deactivated successfully.
Oct  7 10:10:50 np0005473739 systemd[1]: libpod-3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895.scope: Consumed 1.083s CPU time.
Oct  7 10:10:50 np0005473739 conmon[302394]: conmon 3631ac98888937be144d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895.scope/container/memory.events
Oct  7 10:10:50 np0005473739 podman[302339]: 2025-10-07 14:10:50.806070247 +0000 UTC m=+1.297570458 container died 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:10:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ca76e4db4c88349d0faec3a83404c284b199856d9ce9753cb1b24e6ef7e9c052-merged.mount: Deactivated successfully.
Oct  7 10:10:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e180 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:50 np0005473739 nova_compute[259550]: 2025-10-07 14:10:50.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:50 np0005473739 podman[302339]: 2025-10-07 14:10:50.878639405 +0000 UTC m=+1.370139606 container remove 3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=thirsty_cori, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 10:10:50 np0005473739 systemd[1]: libpod-conmon-3631ac98888937be144db128fca73aa7a57711c4a0c0067b4395f030bc7a7895.scope: Deactivated successfully.
Oct  7 10:10:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:10:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:10:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:10:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:10:50 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 6ffa1f0b-d345-4090-93d1-904fcc816c03 does not exist
Oct  7 10:10:50 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7f0d8731-ebcd-45f5-bbb0-ee29338ea039 does not exist
Oct  7 10:10:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1355: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 25 KiB/s rd, 2.0 MiB/s wr, 38 op/s
Oct  7 10:10:51 np0005473739 nova_compute[259550]: 2025-10-07 14:10:51.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:10:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:10:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:10:52 np0005473739 nova_compute[259550]: 2025-10-07 14:10:52.037 2 DEBUG nova.compute.manager [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:10:52 np0005473739 nova_compute[259550]: 2025-10-07 14:10:52.038 2 DEBUG oslo_concurrency.lockutils [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:10:52 np0005473739 nova_compute[259550]: 2025-10-07 14:10:52.038 2 DEBUG oslo_concurrency.lockutils [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:10:52 np0005473739 nova_compute[259550]: 2025-10-07 14:10:52.039 2 DEBUG oslo_concurrency.lockutils [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ade69915-c1e6-4b99-b8e6-8031c7f04049-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:10:52 np0005473739 nova_compute[259550]: 2025-10-07 14:10:52.039 2 DEBUG nova.compute.manager [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] No waiting events found dispatching network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:10:52 np0005473739 nova_compute[259550]: 2025-10-07 14:10:52.039 2 WARNING nova.compute.manager [req-b0fc59eb-004a-4647-bdb3-fad3bb32c538 req-c948a720-7223-4bee-89d7-43888e6ff80c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Received unexpected event network-vif-plugged-3f4a637c-ddfa-49f0-b263-224957faec29 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:10:52 np0005473739 nova_compute[259550]: 2025-10-07 14:10:52.093 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846237.0919805, 20fd8b19-2eb6-4c42-a320-d9d23f6f4912 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:10:52 np0005473739 nova_compute[259550]: 2025-10-07 14:10:52.094 2 INFO nova.compute.manager [-] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:10:52 np0005473739 nova_compute[259550]: 2025-10-07 14:10:52.145 2 DEBUG nova.compute.manager [None req-9f1ba29d-32e1-4f64-83ba-0f404b7c80fc - - - - - -] [instance: 20fd8b19-2eb6-4c42-a320-d9d23f6f4912] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:10:52 np0005473739 nova_compute[259550]: 2025-10-07 14:10:52.657 2 DEBUG nova.compute.manager [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:10:52 np0005473739 nova_compute[259550]: 2025-10-07 14:10:52.699 2 INFO nova.compute.manager [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] instance snapshotting#033[00m
Oct  7 10:10:53 np0005473739 nova_compute[259550]: 2025-10-07 14:10:53.000 2 INFO nova.virt.libvirt.driver [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] [instance: ade69915-c1e6-4b99-b8e6-8031c7f04049] Beginning live snapshot process#033[00m
Oct  7 10:10:53 np0005473739 podman[302546]: 2025-10-07 14:10:53.089614012 +0000 UTC m=+0.073451222 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 10:10:53 np0005473739 podman[302547]: 2025-10-07 14:10:53.108682543 +0000 UTC m=+0.078691640 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:10:53 np0005473739 nova_compute[259550]: 2025-10-07 14:10:53.166 2 DEBUG nova.virt.libvirt.imagebackend [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:10:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e180 do_prune osdmap full prune enabled
Oct  7 10:10:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e181 e181: 3 total, 3 up, 3 in
Oct  7 10:10:53 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e181: 3 total, 3 up, 3 in
Oct  7 10:10:53 np0005473739 nova_compute[259550]: 2025-10-07 14:10:53.425 2 DEBUG nova.storage.rbd_utils [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(033204b4218f46faa810881a075a08b2) on rbd image(ade69915-c1e6-4b99-b8e6-8031c7f04049_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:10:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1357: 305 pgs: 305 active+clean; 88 MiB data, 393 MiB used, 60 GiB / 60 GiB avail; 27 KiB/s rd, 2.1 MiB/s wr, 41 op/s
Oct  7 10:10:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e181 do_prune osdmap full prune enabled
Oct  7 10:10:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e182 e182: 3 total, 3 up, 3 in
Oct  7 10:10:54 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e182: 3 total, 3 up, 3 in
Oct  7 10:10:54 np0005473739 nova_compute[259550]: 2025-10-07 14:10:54.420 2 DEBUG nova.storage.rbd_utils [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] cloning vms/ade69915-c1e6-4b99-b8e6-8031c7f04049_disk@033204b4218f46faa810881a075a08b2 to images/4dd3cc09-1db8-41d2-819d-1b5662935e7d clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  7 10:10:54 np0005473739 nova_compute[259550]: 2025-10-07 14:10:54.553 2 DEBUG nova.storage.rbd_utils [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] flattening images/4dd3cc09-1db8-41d2-819d-1b5662935e7d flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct  7 10:10:54 np0005473739 nova_compute[259550]: 2025-10-07 14:10:54.826 2 DEBUG nova.storage.rbd_utils [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] removing snapshot(033204b4218f46faa810881a075a08b2) on rbd image(ade69915-c1e6-4b99-b8e6-8031c7f04049_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct  7 10:10:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e182 do_prune osdmap full prune enabled
Oct  7 10:10:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e183 e183: 3 total, 3 up, 3 in
Oct  7 10:10:55 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e183: 3 total, 3 up, 3 in
Oct  7 10:10:55 np0005473739 nova_compute[259550]: 2025-10-07 14:10:55.407 2 DEBUG nova.storage.rbd_utils [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] creating snapshot(snap) on rbd image(4dd3cc09-1db8-41d2-819d-1b5662935e7d) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  7 10:10:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1360: 305 pgs: 2 active+clean+snaptrim, 303 active+clean; 99 MiB data, 394 MiB used, 60 GiB / 60 GiB avail; 5.5 MiB/s rd, 185 KiB/s wr, 213 op/s
Oct  7 10:10:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e183 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:10:55 np0005473739 nova_compute[259550]: 2025-10-07 14:10:55.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:10:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e183 do_prune osdmap full prune enabled
Oct  7 10:10:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e184 e184: 3 total, 3 up, 3 in
Oct  7 10:10:56 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e184: 3 total, 3 up, 3 in
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver [None req-51196028-2c52-4454-9a12-f6cdc21f9211 a27a7178326846e69ab9eaae7c70b274 1a6abfd8cc6f4507886ed10873d1f95c - - default default] Failed to snapshot image: nova.exception.ImageNotFound: Image 4dd3cc09-1db8-41d2-819d-1b5662935e7d could not be found.
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver glanceclient.exc.HTTPNotFound: HTTP 404 Not Found: No image found with ID 4dd3cc09-1db8-41d2-819d-1b5662935e7d
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver 
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver During handling of the above exception, another exception occurred:
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver 
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver Traceback (most recent call last):
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py", line 3082, in snapshot
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     self._image_api.update(context, image_id, metadata,
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1243, in update
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return session.update(context, image_id, image_info, data=data,
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 693, in update
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     _reraise_translated_image_exception(image_id)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     raise new_exc.with_traceback(exc_trace)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 691, in update
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     image = self._update_v2(context, sent_service_image_meta, data)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 700, in _update_v2
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     image = self._client.call(
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/nova/image/glance.py", line 191, in call
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     result = getattr(controller, method)(*args, **kwargs)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 440, in update
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     unvalidated_image = self.get(image_id)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 197, in get
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self._get(image_id)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/utils.py", line 649, in inner
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return RequestIdProxy(wrapped(*args, **kwargs))
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/v2/images.py", line 190, in _get
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     resp, body = self.http_client.get(url, headers=header)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/keystoneauth1/adapter.py", line 395, in get
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self.request(url, 'GET', **kwargs)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 380, in request
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     return self._handle_response(resp)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver   File "/usr/lib/python3.9/site-packages/glanceclient/common/http.py", line 120, in _handle_response
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver     raise exc.from_response(resp, resp.content)
Oct  7 10:10:56 np0005473739 nova_compute[259550]: 2025-10-07 14:10:56.586 2 ERROR nova.virt.libvirt.driver nova.exception.ImageNotFound: Image 4dd3cc09-1db8-41d2-819d-1b5662935e7d could not be found.
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.735 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846320.7352803, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.737 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Started (Lifecycle Event)
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.739 2 DEBUG nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.744 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.748 2 INFO nova.virt.libvirt.driver [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance spawned successfully.
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.749 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.760 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.770 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.775 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.776 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.776 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.777 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.777 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.778 2 DEBUG nova.virt.libvirt.driver [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.806 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.806 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846320.7354438, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.807 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Paused (Lifecycle Event)
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.839 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.843 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846320.7433336, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.843 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Resumed (Lifecycle Event)
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.852 2 INFO nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Creating config drive at /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.857 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpczr9cuan execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:12:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e203 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e203 do_prune osdmap full prune enabled
Oct  7 10:12:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e204 e204: 3 total, 3 up, 3 in
Oct  7 10:12:00 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e204: 3 total, 3 up, 3 in
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.904 2 INFO nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Took 9.48 seconds to spawn the instance on the hypervisor.
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.905 2 DEBUG nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.906 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.923 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:12:00 np0005473739 nova_compute[259550]: 2025-10-07 14:12:00.960 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.000 2 INFO nova.compute.manager [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Took 10.46 seconds to build instance.
Oct  7 10:12:01 np0005473739 rsyslogd[1004]: imjournal: 2832 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.004 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpczr9cuan" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.038 2 DEBUG nova.storage.rbd_utils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.044 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.084 2 DEBUG oslo_concurrency.lockutils [None req-d6d2921e-31a9-4a04-90a9-5229b5084637 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.617s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.277 2 DEBUG oslo_concurrency.processutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config 9197df26-c585-4b8f-8ed6-695ee8e233a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.233s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.278 2 INFO nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Deleting local config drive /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8/disk.config because it was imported into RBD.
Oct  7 10:12:01 np0005473739 kernel: tap61102ef2-43: entered promiscuous mode
Oct  7 10:12:01 np0005473739 NetworkManager[44949]: <info>  [1759846321.3675] manager: (tap61102ef2-43): new Tun device (/org/freedesktop/NetworkManager/Devices/135)
Oct  7 10:12:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:01Z|00270|binding|INFO|Claiming lport 61102ef2-439c-478d-a441-fe00e54b7ff7 for this chassis.
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:12:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:01Z|00271|binding|INFO|61102ef2-439c-478d-a441-fe00e54b7ff7: Claiming fa:16:3e:6b:88:da 10.100.0.3
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.377 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:88:da 10.100.0.3'], port_security=['fa:16:3e:6b:88:da 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9197df26-c585-4b8f-8ed6-695ee8e233a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=61102ef2-439c-478d-a441-fe00e54b7ff7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.379 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 61102ef2-439c-478d-a441-fe00e54b7ff7 in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f bound to our chassis
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.381 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct  7 10:12:01 np0005473739 systemd-udevd[307154]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:12:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:01Z|00272|binding|INFO|Setting lport 61102ef2-439c-478d-a441-fe00e54b7ff7 ovn-installed in OVS
Oct  7 10:12:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:01Z|00273|binding|INFO|Setting lport 61102ef2-439c-478d-a441-fe00e54b7ff7 up in Southbound
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.397 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d5346226-27f0-429e-8438-393bc950b11d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.398 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8accac57-a1 in ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.400 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8accac57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.400 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[52a99fbf-ca22-4544-b90e-200d92d0fbd1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.403 2 DEBUG nova.network.neutron [req-913d747b-b685-4430-996e-151798b15b00 req-b43badf7-84d1-4667-a305-959b564f508a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Updated VIF entry in instance network info cache for port 61102ef2-439c-478d-a441-fe00e54b7ff7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.403 2 DEBUG nova.network.neutron [req-913d747b-b685-4430-996e-151798b15b00 req-b43badf7-84d1-4667-a305-959b564f508a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Updating instance_info_cache with network_info: [{"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.402 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2fd85b-7017-459c-b6be-679061280c13]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:12:01 np0005473739 NetworkManager[44949]: <info>  [1759846321.4154] device (tap61102ef2-43): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:12:01 np0005473739 NetworkManager[44949]: <info>  [1759846321.4166] device (tap61102ef2-43): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:12:01 np0005473739 systemd-machined[214580]: New machine qemu-45-instance-00000029.
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.427 2 DEBUG oslo_concurrency.lockutils [req-913d747b-b685-4430-996e-151798b15b00 req-b43badf7-84d1-4667-a305-959b564f508a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.429 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b2c35f82-0d05-41c6-b16a-48d8f3ce53c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 systemd[1]: Started Virtual Machine qemu-45-instance-00000029.
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.450 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eb4ca715-54a7-409f-8d59-d01c7426a289]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.481 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e6fc7723-ac6e-4a5f-be8e-437d011a2765]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.487 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1df86b74-0c9c-4ea9-8373-54f8e5eeca6d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 NetworkManager[44949]: <info>  [1759846321.4885] manager: (tap8accac57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/136)
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.525 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[19867c17-bdb3-489f-b1c3-ff9646bdd1fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.529 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[36a657a4-bc72-44b0-9808-67c36ba73f5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 NetworkManager[44949]: <info>  [1759846321.5575] device (tap8accac57-a0): carrier: link connected
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.566 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5a8e8db6-70b8-4505-b1d8-e94d94bc7813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.584 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1f6bd02-4834-4b1e-9b34-86e94a1b80f8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690512, 'reachable_time': 44706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307189, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.604 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[361785b6-d0dd-4970-adf8-4d1f171804e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:e89f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 690512, 'tstamp': 690512}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 307190, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.627 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de234898-1845-4cbb-a6d7-9103182d0b4a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 88], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690512, 'reachable_time': 44706, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 307191, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1414: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 133 KiB/s rd, 5.3 MiB/s wr, 199 op/s
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.668 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1018653c-d832-4620-993a-4cb22669462b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.745 2 DEBUG nova.compute.manager [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.746 2 DEBUG oslo_concurrency.lockutils [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.746 2 DEBUG oslo_concurrency.lockutils [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.747 2 DEBUG oslo_concurrency.lockutils [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.747 2 DEBUG nova.compute.manager [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] No waiting events found dispatching network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.747 2 WARNING nova.compute.manager [req-a6d64eaa-7652-44ef-9f84-b655b649ee7d req-d81d669f-8db1-4759-8bdc-f4c9e35947a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received unexpected event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad for instance with vm_state active and task_state None.#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.756 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e6bb88-44a4-4e24-be47-5de8290b430f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8accac57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:01 np0005473739 NetworkManager[44949]: <info>  [1759846321.7611] manager: (tap8accac57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/137)
Oct  7 10:12:01 np0005473739 kernel: tap8accac57-a0: entered promiscuous mode
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.766 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8accac57-a0, col_values=(('external_ids', {'iface-id': 'a487ff40-6fa2-404e-b7fc-dbcc968fecc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:01Z|00274|binding|INFO|Releasing lport a487ff40-6fa2-404e-b7fc-dbcc968fecc3 from this chassis (sb_readonly=0)
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.787 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.788 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8f29a6-800e-4202-b589-416bf7a1a26f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.789 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:12:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:01.790 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'env', 'PROCESS_TAG=haproxy-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8accac57-ab45-4b9b-95ed-86c2c65f202f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.850 2 DEBUG nova.compute.manager [req-40464907-8f6c-49ed-93bf-71189ac30f7c req-638d4e05-28d2-4400-852a-51ede1f0d4fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.850 2 DEBUG oslo_concurrency.lockutils [req-40464907-8f6c-49ed-93bf-71189ac30f7c req-638d4e05-28d2-4400-852a-51ede1f0d4fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.851 2 DEBUG oslo_concurrency.lockutils [req-40464907-8f6c-49ed-93bf-71189ac30f7c req-638d4e05-28d2-4400-852a-51ede1f0d4fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.851 2 DEBUG oslo_concurrency.lockutils [req-40464907-8f6c-49ed-93bf-71189ac30f7c req-638d4e05-28d2-4400-852a-51ede1f0d4fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:01 np0005473739 nova_compute[259550]: 2025-10-07 14:12:01.852 2 DEBUG nova.compute.manager [req-40464907-8f6c-49ed-93bf-71189ac30f7c req-638d4e05-28d2-4400-852a-51ede1f0d4fe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Processing event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:12:02 np0005473739 podman[307265]: 2025-10-07 14:12:02.201431611 +0000 UTC m=+0.023172920 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.446 2 DEBUG nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.447 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846322.445502, 9197df26-c585-4b8f-8ed6-695ee8e233a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.447 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] VM Started (Lifecycle Event)#033[00m
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.452 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.458 2 INFO nova.virt.libvirt.driver [-] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance spawned successfully.#033[00m
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.458 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:12:02 np0005473739 podman[307265]: 2025-10-07 14:12:02.463590223 +0000 UTC m=+0.285331512 container create 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.464 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.468 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.481 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.482 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.483 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.483 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.484 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.485 2 DEBUG nova.virt.libvirt.driver [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.489 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.489 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846322.4458292, 9197df26-c585-4b8f-8ed6-695ee8e233a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.490 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] VM Paused (Lifecycle Event)
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.517 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:12:02 np0005473739 systemd[1]: Started libpod-conmon-13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da.scope.
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.523 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846322.4492142, 9197df26-c585-4b8f-8ed6-695ee8e233a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.525 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] VM Resumed (Lifecycle Event)
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.546 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.551 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.553 2 INFO nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Took 9.11 seconds to spawn the instance on the hypervisor.
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.554 2 DEBUG nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:12:02 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:12:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d3f9a3ddea3ddccc76f392646315e89a823382f99eda108d937fed0f1f005d5d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.578 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:12:02 np0005473739 podman[307265]: 2025-10-07 14:12:02.579138279 +0000 UTC m=+0.400879588 container init 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:12:02 np0005473739 podman[307265]: 2025-10-07 14:12:02.584820829 +0000 UTC m=+0.406562118 container start 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:12:02 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [NOTICE]   (307283) : New worker (307285) forked
Oct  7 10:12:02 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [NOTICE]   (307283) : Loading success.
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.611 2 INFO nova.compute.manager [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Took 10.26 seconds to build instance.
Oct  7 10:12:02 np0005473739 nova_compute[259550]: 2025-10-07 14:12:02.627 2 DEBUG oslo_concurrency.lockutils [None req-bc5e0ec5-affb-49ba-96c2-9f65423a6ec2 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.507s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:12:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e204 do_prune osdmap full prune enabled
Oct  7 10:12:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e205 e205: 3 total, 3 up, 3 in
Oct  7 10:12:02 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e205: 3 total, 3 up, 3 in
Oct  7 10:12:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1416: 305 pgs: 305 active+clean; 134 MiB data, 464 MiB used, 60 GiB / 60 GiB avail; 40 KiB/s rd, 1.2 MiB/s wr, 63 op/s
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.308 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.309 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.309 2 INFO nova.compute.manager [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Shelving
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.331 2 DEBUG nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.346 2 DEBUG nova.compute.manager [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.346 2 DEBUG oslo_concurrency.lockutils [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.347 2 DEBUG oslo_concurrency.lockutils [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.347 2 DEBUG oslo_concurrency.lockutils [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.347 2 DEBUG nova.compute.manager [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] No waiting events found dispatching network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.348 2 WARNING nova.compute.manager [req-1f7616a7-f37f-4c86-a39e-e711993ea1b4 req-ccaa44d2-53ed-4427-ba20-1a58d26bc19f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received unexpected event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 for instance with vm_state active and task_state shelving.
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.923 2 INFO nova.compute.manager [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Rebuilding instance
Oct  7 10:12:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e205 do_prune osdmap full prune enabled
Oct  7 10:12:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e206 e206: 3 total, 3 up, 3 in
Oct  7 10:12:04 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e206: 3 total, 3 up, 3 in
Oct  7 10:12:04 np0005473739 nova_compute[259550]: 2025-10-07 14:12:04.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.081 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.096 2 DEBUG nova.compute.manager [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.137 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_requests' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.148 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.162 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'resources' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.171 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'migration_context' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.181 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.185 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  7 10:12:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1418: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 7.7 MiB/s rd, 56 KiB/s wr, 412 op/s
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.811 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846310.8107893, cf0c1f82-7b3b-4b62-a766-c12de66a966a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.813 2 INFO nova.compute.manager [-] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] VM Stopped (Lifecycle Event)
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.836 2 DEBUG nova.compute.manager [None req-784f1500-68bd-4d55-93f2-77be0e9e5cc2 - - - - - -] [instance: cf0c1f82-7b3b-4b62-a766-c12de66a966a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:12:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:05 np0005473739 nova_compute[259550]: 2025-10-07 14:12:05.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:12:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:07.030 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:12:07 np0005473739 podman[307294]: 2025-10-07 14:12:07.129211721 +0000 UTC m=+0.105566365 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent)
Oct  7 10:12:07 np0005473739 podman[307295]: 2025-10-07 14:12:07.1348832 +0000 UTC m=+0.111505492 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:12:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1419: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 6.9 MiB/s rd, 28 KiB/s wr, 337 op/s
Oct  7 10:12:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1420: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 24 KiB/s wr, 300 op/s
Oct  7 10:12:09 np0005473739 nova_compute[259550]: 2025-10-07 14:12:09.992 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:12:10 np0005473739 nova_compute[259550]: 2025-10-07 14:12:10.331 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846315.3309925, 34abf10d-dbb0-41fb-abde-a52be331cc12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:12:10 np0005473739 nova_compute[259550]: 2025-10-07 14:12:10.332 2 INFO nova.compute.manager [-] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] VM Stopped (Lifecycle Event)
Oct  7 10:12:10 np0005473739 nova_compute[259550]: 2025-10-07 14:12:10.350 2 DEBUG nova.compute.manager [None req-a36e1fd4-9130-451b-9170-b38ea665e286 - - - - - -] [instance: 34abf10d-dbb0-41fb-abde-a52be331cc12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:12:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e206 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e206 do_prune osdmap full prune enabled
Oct  7 10:12:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e207 e207: 3 total, 3 up, 3 in
Oct  7 10:12:10 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e207: 3 total, 3 up, 3 in
Oct  7 10:12:10 np0005473739 nova_compute[259550]: 2025-10-07 14:12:10.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:12:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1422: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 5.8 MiB/s rd, 24 KiB/s wr, 300 op/s
Oct  7 10:12:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1423: 305 pgs: 305 active+clean; 134 MiB data, 465 MiB used, 60 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.3 KiB/s wr, 77 op/s
Oct  7 10:12:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:12:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.0 total, 600.0 interval#012Cumulative writes: 16K writes, 66K keys, 16K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 16K writes, 4982 syncs, 3.27 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 44.80 MB, 0.07 MB/s#012Interval WAL: 10K writes, 3892 syncs, 2.64 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:12:14 np0005473739 nova_compute[259550]: 2025-10-07 14:12:14.376 2 DEBUG nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct  7 10:12:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:14Z|00038|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5e:93:42 10.100.0.13
Oct  7 10:12:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:14Z|00039|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5e:93:42 10.100.0.13
Oct  7 10:12:14 np0005473739 nova_compute[259550]: 2025-10-07 14:12:14.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:12:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:15Z|00040|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6b:88:da 10.100.0.3
Oct  7 10:12:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:15Z|00041|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6b:88:da 10.100.0.3
Oct  7 10:12:15 np0005473739 nova_compute[259550]: 2025-10-07 14:12:15.235 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101
Oct  7 10:12:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1424: 305 pgs: 305 active+clean; 190 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 481 KiB/s rd, 5.0 MiB/s wr, 121 op/s
Oct  7 10:12:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e207 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:15 np0005473739 nova_compute[259550]: 2025-10-07 14:12:15.964 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:12:17 np0005473739 kernel: tap51e23698-81 (unregistering): left promiscuous mode
Oct  7 10:12:17 np0005473739 NetworkManager[44949]: <info>  [1759846337.5139] device (tap51e23698-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:12:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:17Z|00275|binding|INFO|Releasing lport 51e23698-81ea-413b-922d-5c9081ccd2ad from this chassis (sb_readonly=0)
Oct  7 10:12:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:17Z|00276|binding|INFO|Setting lport 51e23698-81ea-413b-922d-5c9081ccd2ad down in Southbound
Oct  7 10:12:17 np0005473739 nova_compute[259550]: 2025-10-07 14:12:17.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:12:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:17Z|00277|binding|INFO|Removing iface tap51e23698-81 ovn-installed in OVS
Oct  7 10:12:17 np0005473739 nova_compute[259550]: 2025-10-07 14:12:17.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.566 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:93:42 10.100.0.13'], port_security=['fa:16:3e:5e:93:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7bf9b41-6a09-4d71-8e33-cb428d71b2a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=51e23698-81ea-413b-922d-5c9081ccd2ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.567 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 51e23698-81ea-413b-922d-5c9081ccd2ad in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.569 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.571 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[658c510d-3b82-4755-a9b3-d374fa30c4f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.571 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace which is not needed anymore#033[00m
Oct  7 10:12:17 np0005473739 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct  7 10:12:17 np0005473739 systemd[1]: machine-qemu\x2d44\x2dinstance\x2d00000028.scope: Consumed 15.097s CPU time.
Oct  7 10:12:17 np0005473739 systemd-machined[214580]: Machine qemu-44-instance-00000028 terminated.
Oct  7 10:12:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1425: 305 pgs: 305 active+clean; 190 MiB data, 501 MiB used, 59 GiB / 60 GiB avail; 481 KiB/s rd, 5.0 MiB/s wr, 121 op/s
Oct  7 10:12:17 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [NOTICE]   (306985) : haproxy version is 2.8.14-c23fe91
Oct  7 10:12:17 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [NOTICE]   (306985) : path to executable is /usr/sbin/haproxy
Oct  7 10:12:17 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [WARNING]  (306985) : Exiting Master process...
Oct  7 10:12:17 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [ALERT]    (306985) : Current worker (306987) exited with code 143 (Terminated)
Oct  7 10:12:17 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[306980]: [WARNING]  (306985) : All workers exited. Exiting... (0)
Oct  7 10:12:17 np0005473739 systemd[1]: libpod-abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be.scope: Deactivated successfully.
Oct  7 10:12:17 np0005473739 podman[307362]: 2025-10-07 14:12:17.752916842 +0000 UTC m=+0.052462780 container died abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  7 10:12:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be-userdata-shm.mount: Deactivated successfully.
Oct  7 10:12:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1832c0cf068be4050196c94b34b8807b44bf95e594cb825be8b96ba74a72be4e-merged.mount: Deactivated successfully.
Oct  7 10:12:17 np0005473739 podman[307362]: 2025-10-07 14:12:17.805511424 +0000 UTC m=+0.105057372 container cleanup abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  7 10:12:17 np0005473739 systemd[1]: libpod-conmon-abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be.scope: Deactivated successfully.
Oct  7 10:12:17 np0005473739 podman[307402]: 2025-10-07 14:12:17.873598074 +0000 UTC m=+0.045843416 container remove abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.879 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[74d041a4-75fa-4b3a-bfcc-7be1d8cbcee1]: (4, ('Tue Oct  7 02:12:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be)\nabcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be\nTue Oct  7 02:12:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (abcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be)\nabcf3476bd94c1613240e05c947cd71e1d9875d5103b0b8f636767a0e3e2d7be\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.881 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b2d98d0-410e-4322-a2bf-f7d00d7a8a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.884 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:17 np0005473739 nova_compute[259550]: 2025-10-07 14:12:17.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:17 np0005473739 kernel: tapd2cb8ca0-10: left promiscuous mode
Oct  7 10:12:17 np0005473739 nova_compute[259550]: 2025-10-07 14:12:17.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.913 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[91fd8d1a-f3c2-4337-a9e1-3cdfdff7debb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.939 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9b6a5e0-170e-4f35-91e8-89c23bc0cf4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[364b834a-c69c-4eae-a1b0-df8d7c227660]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.960 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[77521962-2a58-49c4-88f3-853c5d22a036]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690198, 'reachable_time': 29804, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307420, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:17 np0005473739 systemd[1]: run-netns-ovnmeta\x2dd2cb8ca0\x2d1272\x2d4fa9\x2db4ed\x2d8d0a1e3df777.mount: Deactivated successfully.
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.967 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:12:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:17.968 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[8d538371-432e-4c8d-baa5-a68a890caf9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:17 np0005473739 kernel: tap61102ef2-43 (unregistering): left promiscuous mode
Oct  7 10:12:17 np0005473739 NetworkManager[44949]: <info>  [1759846337.9728] device (tap61102ef2-43): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:12:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:17Z|00278|binding|INFO|Releasing lport 61102ef2-439c-478d-a441-fe00e54b7ff7 from this chassis (sb_readonly=0)
Oct  7 10:12:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:17Z|00279|binding|INFO|Setting lport 61102ef2-439c-478d-a441-fe00e54b7ff7 down in Southbound
Oct  7 10:12:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:17Z|00280|binding|INFO|Removing iface tap61102ef2-43 ovn-installed in OVS
Oct  7 10:12:17 np0005473739 nova_compute[259550]: 2025-10-07 14:12:17.980 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.002 2 DEBUG nova.compute.manager [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-unplugged-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.003 2 DEBUG oslo_concurrency.lockutils [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.003 2 DEBUG oslo_concurrency.lockutils [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.004 2 DEBUG oslo_concurrency.lockutils [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.004 2 DEBUG nova.compute.manager [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] No waiting events found dispatching network-vif-unplugged-51e23698-81ea-413b-922d-5c9081ccd2ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.004 2 WARNING nova.compute.manager [req-9ebb3062-56dc-4da5-b1f4-eba83d2e4aca req-992be93c-ebc7-41d4-a473-2a7c9933475e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received unexpected event network-vif-unplugged-51e23698-81ea-413b-922d-5c9081ccd2ad for instance with vm_state active and task_state rebuilding.#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.026 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6b:88:da 10.100.0.3'], port_security=['fa:16:3e:6b:88:da 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '9197df26-c585-4b8f-8ed6-695ee8e233a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=61102ef2-439c-478d-a441-fe00e54b7ff7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.027 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 61102ef2-439c-478d-a441-fe00e54b7ff7 in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.028 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.029 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b2cad69-2a23-4eda-a217-0d7b68eaaa47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.029 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace which is not needed anymore#033[00m
Oct  7 10:12:18 np0005473739 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Deactivated successfully.
Oct  7 10:12:18 np0005473739 systemd[1]: machine-qemu\x2d45\x2dinstance\x2d00000029.scope: Consumed 13.408s CPU time.
Oct  7 10:12:18 np0005473739 systemd-machined[214580]: Machine qemu-45-instance-00000029 terminated.
Oct  7 10:12:18 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [NOTICE]   (307283) : haproxy version is 2.8.14-c23fe91
Oct  7 10:12:18 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [NOTICE]   (307283) : path to executable is /usr/sbin/haproxy
Oct  7 10:12:18 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [WARNING]  (307283) : Exiting Master process...
Oct  7 10:12:18 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [ALERT]    (307283) : Current worker (307285) exited with code 143 (Terminated)
Oct  7 10:12:18 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[307279]: [WARNING]  (307283) : All workers exited. Exiting... (0)
Oct  7 10:12:18 np0005473739 systemd[1]: libpod-13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da.scope: Deactivated successfully.
Oct  7 10:12:18 np0005473739 podman[307445]: 2025-10-07 14:12:18.169241695 +0000 UTC m=+0.047753716 container died 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 10:12:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:12:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 18K writes, 74K keys, 18K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.03 MB/s#012Cumulative WAL: 18K writes, 6001 syncs, 3.16 writes per sync, written: 0.06 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 11K writes, 45K keys, 11K commit groups, 1.0 writes per commit group, ingest: 45.99 MB, 0.08 MB/s#012Interval WAL: 11K writes, 4529 syncs, 2.58 writes per sync, written: 0.04 GB, 0.08 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:12:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da-userdata-shm.mount: Deactivated successfully.
Oct  7 10:12:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d3f9a3ddea3ddccc76f392646315e89a823382f99eda108d937fed0f1f005d5d-merged.mount: Deactivated successfully.
Oct  7 10:12:18 np0005473739 podman[307445]: 2025-10-07 14:12:18.209782991 +0000 UTC m=+0.088295012 container cleanup 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:12:18 np0005473739 NetworkManager[44949]: <info>  [1759846338.2143] manager: (tap61102ef2-43): new Tun device (/org/freedesktop/NetworkManager/Devices/138)
Oct  7 10:12:18 np0005473739 systemd[1]: libpod-conmon-13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da.scope: Deactivated successfully.
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.252 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance shutdown successfully after 13 seconds.#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.258 2 INFO nova.virt.libvirt.driver [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance destroyed successfully.#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.267 2 INFO nova.virt.libvirt.driver [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance destroyed successfully.#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.268 2 DEBUG nova.virt.libvirt.vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1971655449',display_name='tempest-ServerDiskConfigTestJSON-server-1971655449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1971655449',id=40,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-sgyxuhwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member
'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:04Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=d7bf9b41-6a09-4d71-8e33-cb428d71b2a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.269 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.269 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.270 2 DEBUG os_vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.271 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51e23698-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.275 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.281 2 INFO os_vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81')#033[00m
Oct  7 10:12:18 np0005473739 podman[307481]: 2025-10-07 14:12:18.295783451 +0000 UTC m=+0.054534664 container remove 13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.300 2 DEBUG nova.compute.manager [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received event network-vif-unplugged-61102ef2-439c-478d-a441-fe00e54b7ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.301 2 DEBUG oslo_concurrency.lockutils [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.301 2 DEBUG oslo_concurrency.lockutils [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.302 2 DEBUG oslo_concurrency.lockutils [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.302 2 DEBUG nova.compute.manager [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] No waiting events found dispatching network-vif-unplugged-61102ef2-439c-478d-a441-fe00e54b7ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.302 2 WARNING nova.compute.manager [req-10e35511-f946-468a-8bc4-25bfc0a4157c req-725ae496-a225-450d-bf88-609cc02b0db6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received unexpected event network-vif-unplugged-61102ef2-439c-478d-a441-fe00e54b7ff7 for instance with vm_state active and task_state shelving.#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.304 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[73052105-442b-44fc-a302-8023a3767018]: (4, ('Tue Oct  7 02:12:18 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da)\n13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da\nTue Oct  7 02:12:18 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da)\n13b32ce42d44f69e0c94046a3d87193f2505e8b5cdd1417e83fac32bc3cd05da\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.307 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[15120734-33f3-46ce-bad5-6e875aeeb210]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.308 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:18 np0005473739 kernel: tap8accac57-a0: left promiscuous mode
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.333 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0fc6b01b-d251-423f-aec8-7e2e1b640dcf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.362 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3f295dcc-bf56-44cb-baca-f911567c1592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.364 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7dad0d7d-b83e-4556-92c1-f8adb29868df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.379 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b841e9f-1193-49c4-a076-116d4e8db973]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 690504, 'reachable_time': 43210, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 307523, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.381 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:12:18 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:18.381 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[abd2916c-5454-4266-ae28-5ef041f28e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.397 2 INFO nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance shutdown successfully after 14 seconds.#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.404 2 INFO nova.virt.libvirt.driver [-] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance destroyed successfully.#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.405 2 DEBUG nova.objects.instance [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9197df26-c585-4b8f-8ed6-695ee8e233a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.710 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deleting instance files /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_del#033[00m
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.711 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deletion of /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_del complete#033[00m
Oct  7 10:12:18 np0005473739 systemd[1]: run-netns-ovnmeta\x2d8accac57\x2dab45\x2d4b9b\x2d95ed\x2d86c2c65f202f.mount: Deactivated successfully.
Oct  7 10:12:18 np0005473739 nova_compute[259550]: 2025-10-07 14:12:18.928 2 INFO nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Beginning cold snapshot process#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.126 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.127 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Creating image(s)#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.147 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.168 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.188 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.191 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.282 2 DEBUG nova.virt.libvirt.imagebackend [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.286 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.287 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.290 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.290 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.310 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.315 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.573 2 DEBUG nova.storage.rbd_utils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] creating snapshot(1f97b8b6de8848e0ac87f9a9349cc406) on rbd image(9197df26-c585-4b8f-8ed6-695ee8e233a8_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:12:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1426: 305 pgs: 305 active+clean; 140 MiB data, 496 MiB used, 60 GiB / 60 GiB avail; 796 KiB/s rd, 5.2 MiB/s wr, 164 op/s
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.653 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.338s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.709 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] resizing rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:12:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e207 do_prune osdmap full prune enabled
Oct  7 10:12:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e208 e208: 3 total, 3 up, 3 in
Oct  7 10:12:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e208: 3 total, 3 up, 3 in
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.827 2 DEBUG nova.storage.rbd_utils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] cloning vms/9197df26-c585-4b8f-8ed6-695ee8e233a8_disk@1f97b8b6de8848e0ac87f9a9349cc406 to images/afce8312-1adb-47e3-b920-51cdbb68b862 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.861 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.862 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Ensure instance console log exists: /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.863 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.863 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.863 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.865 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Start _get_guest_xml network_info=[{"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.870 2 WARNING nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.881 2 DEBUG nova.virt.libvirt.host [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.882 2 DEBUG nova.virt.libvirt.host [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.889 2 DEBUG nova.virt.libvirt.host [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.890 2 DEBUG nova.virt.libvirt.host [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.890 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.890 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.891 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.891 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.891 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.892 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.892 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.892 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.892 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.892 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.893 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.893 2 DEBUG nova.virt.hardware [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.893 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.919 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:19 np0005473739 nova_compute[259550]: 2025-10-07 14:12:19.967 2 DEBUG nova.storage.rbd_utils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] flattening images/afce8312-1adb-47e3-b920-51cdbb68b862 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.104 2 DEBUG nova.compute.manager [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.105 2 DEBUG oslo_concurrency.lockutils [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.106 2 DEBUG oslo_concurrency.lockutils [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.106 2 DEBUG oslo_concurrency.lockutils [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.106 2 DEBUG nova.compute.manager [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] No waiting events found dispatching network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.107 2 WARNING nova.compute.manager [req-d6558eaa-01a8-479f-ae9e-a4c598445123 req-0a6999d0-a056-4d4d-b880-a0b8789b27c2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received unexpected event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.408 2 DEBUG nova.compute.manager [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.408 2 DEBUG oslo_concurrency.lockutils [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.409 2 DEBUG oslo_concurrency.lockutils [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.409 2 DEBUG oslo_concurrency.lockutils [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.409 2 DEBUG nova.compute.manager [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] No waiting events found dispatching network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.409 2 WARNING nova.compute.manager [req-981ffef8-1bb5-4aa1-b31a-f894bf4ef33c req-4c059105-aa22-479d-9cfa-8d9268d01101 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received unexpected event network-vif-plugged-61102ef2-439c-478d-a441-fe00e54b7ff7 for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.422 2 DEBUG nova.storage.rbd_utils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] removing snapshot(1f97b8b6de8848e0ac87f9a9349cc406) on rbd image(9197df26-c585-4b8f-8ed6-695ee8e233a8_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:12:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:12:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2820408890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.446 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.472 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.477 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e208 do_prune osdmap full prune enabled
Oct  7 10:12:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e209 e209: 3 total, 3 up, 3 in
Oct  7 10:12:20 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e209: 3 total, 3 up, 3 in
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.793 2 DEBUG nova.storage.rbd_utils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] creating snapshot(snap) on rbd image(afce8312-1adb-47e3-b920-51cdbb68b862) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:12:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e209 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:12:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3663148886' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.969 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.970 2 DEBUG nova.virt.libvirt.vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1971655449',display_name='tempest-ServerDiskConfigTestJSON-server-1971655449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1971655449',id=40,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-sgyxuhwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:18Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=d7bf9b41-6a09-4d71-8e33-cb428d71b2a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.970 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.971 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.973 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  <uuid>d7bf9b41-6a09-4d71-8e33-cb428d71b2a8</uuid>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  <name>instance-00000028</name>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1971655449</nova:name>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:12:19</nova:creationTime>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <nova:user uuid="7bf568df6a8d461a83d287493b393589">tempest-ServerDiskConfigTestJSON-831175870-project-member</nova:user>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <nova:project uuid="de6794f6448744329cf2081eb5b889a5">tempest-ServerDiskConfigTestJSON-831175870</nova:project>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <nova:port uuid="51e23698-81ea-413b-922d-5c9081ccd2ad">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <entry name="serial">d7bf9b41-6a09-4d71-8e33-cb428d71b2a8</entry>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <entry name="uuid">d7bf9b41-6a09-4d71-8e33-cb428d71b2a8</entry>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:5e:93:42"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <target dev="tap51e23698-81"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/console.log" append="off"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:12:20 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:12:20 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:12:20 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:12:20 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.975 2 DEBUG nova.compute.manager [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Preparing to wait for external event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.975 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.975 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.975 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.976 2 DEBUG nova.virt.libvirt.vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1971655449',display_name='tempest-ServerDiskConfigTestJSON-server-1971655449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1971655449',id=40,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:00Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-sgyxuhwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-Ser
verDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:18Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=d7bf9b41-6a09-4d71-8e33-cb428d71b2a8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.976 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.977 2 DEBUG nova.network.os_vif_util [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.977 2 DEBUG os_vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.978 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap51e23698-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.982 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap51e23698-81, col_values=(('external_ids', {'iface-id': '51e23698-81ea-413b-922d-5c9081ccd2ad', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5e:93:42', 'vm-uuid': 'd7bf9b41-6a09-4d71-8e33-cb428d71b2a8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:20 np0005473739 NetworkManager[44949]: <info>  [1759846340.9854] manager: (tap51e23698-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/139)
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:20 np0005473739 nova_compute[259550]: 2025-10-07 14:12:20.998 2 INFO os_vif [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81')#033[00m
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.192 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.193 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.194 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No VIF found with MAC fa:16:3e:5e:93:42, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.194 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Using config drive#033[00m
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.218 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.236 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.261 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'keypairs' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1429: 305 pgs: 305 active+clean; 157 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 8.9 MiB/s wr, 299 op/s
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.675 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Creating config drive at /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config#033[00m
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.680 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxi1kwsto execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e209 do_prune osdmap full prune enabled
Oct  7 10:12:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e210 e210: 3 total, 3 up, 3 in
Oct  7 10:12:21 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e210: 3 total, 3 up, 3 in
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.840 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxi1kwsto" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.875 2 DEBUG nova.storage.rbd_utils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:21 np0005473739 nova_compute[259550]: 2025-10-07 14:12:21.880 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.081 2 DEBUG oslo_concurrency.processutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.082 2 INFO nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deleting local config drive /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8/disk.config because it was imported into RBD.#033[00m
Oct  7 10:12:22 np0005473739 kernel: tap51e23698-81: entered promiscuous mode
Oct  7 10:12:22 np0005473739 NetworkManager[44949]: <info>  [1759846342.1527] manager: (tap51e23698-81): new Tun device (/org/freedesktop/NetworkManager/Devices/140)
Oct  7 10:12:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:22Z|00281|binding|INFO|Claiming lport 51e23698-81ea-413b-922d-5c9081ccd2ad for this chassis.
Oct  7 10:12:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:22Z|00282|binding|INFO|51e23698-81ea-413b-922d-5c9081ccd2ad: Claiming fa:16:3e:5e:93:42 10.100.0.13
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.167 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:93:42 10.100.0.13'], port_security=['fa:16:3e:5e:93:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7bf9b41-6a09-4d71-8e33-cb428d71b2a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=51e23698-81ea-413b-922d-5c9081ccd2ad) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.168 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 51e23698-81ea-413b-922d-5c9081ccd2ad in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 bound to our chassis#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.169 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.186 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b15b6325-57ab-474e-8d0d-67b8426cee1c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.187 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2cb8ca0-11 in ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:12:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:22Z|00283|binding|INFO|Setting lport 51e23698-81ea-413b-922d-5c9081ccd2ad ovn-installed in OVS
Oct  7 10:12:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:22Z|00284|binding|INFO|Setting lport 51e23698-81ea-413b-922d-5c9081ccd2ad up in Southbound
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.188 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2cb8ca0-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.188 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41079891-9f26-4f06-af71-140aa9778881]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:22 np0005473739 systemd-udevd[307972]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.189 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c45044fd-17c8-4531-8b43-9f67a54ac01e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 systemd-machined[214580]: New machine qemu-46-instance-00000028.
Oct  7 10:12:22 np0005473739 systemd[1]: Started Virtual Machine qemu-46-instance-00000028.
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.212 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e4ba10a7-3825-4943-b927-0d6d272f6868]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 NetworkManager[44949]: <info>  [1759846342.2180] device (tap51e23698-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:12:22 np0005473739 NetworkManager[44949]: <info>  [1759846342.2193] device (tap51e23698-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.233 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[56e162a2-48aa-4d4a-b345-8314e9156ec7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.271 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab236e8-836e-4521-9c57-7a6f8951e707]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 NetworkManager[44949]: <info>  [1759846342.2781] manager: (tapd2cb8ca0-10): new Veth device (/org/freedesktop/NetworkManager/Devices/141)
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.279 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[378641a8-8515-4006-a7ca-a65ab8488460]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.322 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfbe253-24b7-4b90-8da7-64b433a6d1c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.325 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f08389-bb6f-4bed-a049-bba53a0517ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 NetworkManager[44949]: <info>  [1759846342.3563] device (tapd2cb8ca0-10): carrier: link connected
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.365 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[23d95bda-a3b4-4f22-ad9a-49c150fb9f42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.385 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[33a6949e-7907-43c5-b8c7-876b428a40a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692592, 'reachable_time': 27735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308009, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.405 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[374b6cf9-9588-4b5b-9d1b-74a37f931943]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:eb7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 692592, 'tstamp': 692592}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 308010, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.426 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a3fed8-9d1d-4c69-aceb-e564c17a194e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 92], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692592, 'reachable_time': 27735, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 308011, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:12:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 2400.1 total, 600.0 interval#012Cumulative writes: 14K writes, 57K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.05 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 4390 syncs, 3.30 writes per sync, written: 0.05 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 8519 writes, 33K keys, 8519 commit groups, 1.0 writes per commit group, ingest: 32.79 MB, 0.05 MB/s#012Interval WAL: 8519 writes, 3358 syncs, 2.54 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4ee4a63d-c4c0-474d-87a1-61053116b61b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.542 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc95339d-130f-44bb-a5c0-76d86dcf8129]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2cb8ca0-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:22 np0005473739 kernel: tapd2cb8ca0-10: entered promiscuous mode
Oct  7 10:12:22 np0005473739 NetworkManager[44949]: <info>  [1759846342.5852] manager: (tapd2cb8ca0-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/142)
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2cb8ca0-10, col_values=(('external_ids', {'iface-id': '93001468-74a0-4bac-94dd-0978737be6e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:22Z|00285|binding|INFO|Releasing lport 93001468-74a0-4bac-94dd-0978737be6e2 from this chassis (sb_readonly=0)
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.630 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:12:22
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'vms', '.mgr', 'default.rgw.control', 'images', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.data', '.rgw.root', 'cephfs.cephfs.meta']
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.635 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[160ef450-817d-4222-ba47-2763b7d7c66a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.636 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:12:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:22.636 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'env', 'PROCESS_TAG=haproxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:12:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.972 2 DEBUG nova.compute.manager [req-b95e1aa4-3448-4efb-a599-b762c3ec1834 req-c9b2fccf-d5ed-4d73-8c1e-80174c7a438a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.973 2 DEBUG oslo_concurrency.lockutils [req-b95e1aa4-3448-4efb-a599-b762c3ec1834 req-c9b2fccf-d5ed-4d73-8c1e-80174c7a438a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.973 2 DEBUG oslo_concurrency.lockutils [req-b95e1aa4-3448-4efb-a599-b762c3ec1834 req-c9b2fccf-d5ed-4d73-8c1e-80174c7a438a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.973 2 DEBUG oslo_concurrency.lockutils [req-b95e1aa4-3448-4efb-a599-b762c3ec1834 req-c9b2fccf-d5ed-4d73-8c1e-80174c7a438a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:22 np0005473739 nova_compute[259550]: 2025-10-07 14:12:22.974 2 DEBUG nova.compute.manager [req-b95e1aa4-3448-4efb-a599-b762c3ec1834 req-c9b2fccf-d5ed-4d73-8c1e-80174c7a438a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Processing event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:12:23 np0005473739 podman[308047]: 2025-10-07 14:12:23.092908316 +0000 UTC m=+0.071285164 container create 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:12:23 np0005473739 podman[308047]: 2025-10-07 14:12:23.047982445 +0000 UTC m=+0.026359323 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:12:23 np0005473739 systemd[1]: Started libpod-conmon-96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e.scope.
Oct  7 10:12:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:12:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b4b4a21acf9e777111cfeaa2eb1f82dd98dbafde04206345d29037f6c6bf1ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:12:23 np0005473739 podman[308047]: 2025-10-07 14:12:23.21172763 +0000 UTC m=+0.190104508 container init 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:12:23 np0005473739 podman[308047]: 2025-10-07 14:12:23.218799935 +0000 UTC m=+0.197176783 container start 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.225 2 INFO nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Snapshot image upload complete#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.226 2 DEBUG nova.compute.manager [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:23 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [NOTICE]   (308083) : New worker (308087) forked
Oct  7 10:12:23 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [NOTICE]   (308083) : Loading success.
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.289 2 INFO nova.compute.manager [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Shelve offloading#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.304 2 INFO nova.virt.libvirt.driver [-] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance destroyed successfully.#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.305 2 DEBUG nova.compute.manager [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.308 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.308 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquired lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.309 2 DEBUG nova.network.neutron [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:12:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1431: 305 pgs: 305 active+clean; 157 MiB data, 508 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 217 op/s
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.798 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.798 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846343.7974691, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.798 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Started (Lifecycle Event)#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.801 2 DEBUG nova.compute.manager [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.805 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.810 2 INFO nova.virt.libvirt.driver [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance spawned successfully.#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.810 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.828 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.832 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.841 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.842 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.843 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.843 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.844 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.844 2 DEBUG nova.virt.libvirt.driver [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.853 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.854 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846343.80283, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.854 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.885 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.889 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846343.8048258, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.889 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.912 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.920 2 DEBUG nova.compute.manager [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.921 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.951 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.984 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.985 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:23 np0005473739 nova_compute[259550]: 2025-10-07 14:12:23.985 2 DEBUG nova.objects.instance [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:12:24 np0005473739 nova_compute[259550]: 2025-10-07 14:12:24.041 2 DEBUG oslo_concurrency.lockutils [None req-7cae0597-c025-408c-98c4-9d719aad55ac 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:24 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 10:12:24 np0005473739 nova_compute[259550]: 2025-10-07 14:12:24.990 2 DEBUG nova.network.neutron [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Updating instance_info_cache with network_info: [{"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.019 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Releasing lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:12:25 np0005473739 podman[308122]: 2025-10-07 14:12:25.097106379 +0000 UTC m=+0.080071686 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:12:25 np0005473739 podman[308123]: 2025-10-07 14:12:25.097555621 +0000 UTC m=+0.074163031 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.598 2 DEBUG nova.compute.manager [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.599 2 DEBUG oslo_concurrency.lockutils [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.599 2 DEBUG oslo_concurrency.lockutils [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.600 2 DEBUG oslo_concurrency.lockutils [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.600 2 DEBUG nova.compute.manager [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] No waiting events found dispatching network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.601 2 WARNING nova.compute.manager [req-40040635-73f2-49f3-9870-f263e894a633 req-a02a4fb2-4b94-43e2-b1dc-fac81e1bb9b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received unexpected event network-vif-plugged-51e23698-81ea-413b-922d-5c9081ccd2ad for instance with vm_state active and task_state None.#033[00m
Oct  7 10:12:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1432: 305 pgs: 305 active+clean; 246 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 8.9 MiB/s rd, 11 MiB/s wr, 323 op/s
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.663 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.664 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.665 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.666 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.666 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.668 2 INFO nova.compute.manager [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Terminating instance#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.670 2 DEBUG nova.compute.manager [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:12:25 np0005473739 kernel: tap51e23698-81 (unregistering): left promiscuous mode
Oct  7 10:12:25 np0005473739 NetworkManager[44949]: <info>  [1759846345.7291] device (tap51e23698-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:12:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:25Z|00286|binding|INFO|Releasing lport 51e23698-81ea-413b-922d-5c9081ccd2ad from this chassis (sb_readonly=0)
Oct  7 10:12:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:25Z|00287|binding|INFO|Setting lport 51e23698-81ea-413b-922d-5c9081ccd2ad down in Southbound
Oct  7 10:12:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:25Z|00288|binding|INFO|Removing iface tap51e23698-81 ovn-installed in OVS
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:25.755 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5e:93:42 10.100.0.13'], port_security=['fa:16:3e:5e:93:42 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'd7bf9b41-6a09-4d71-8e33-cb428d71b2a8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=51e23698-81ea-413b-922d-5c9081ccd2ad) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:12:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:25.756 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 51e23698-81ea-413b-922d-5c9081ccd2ad in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis#033[00m
Oct  7 10:12:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:25.757 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:12:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:25.759 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1a00aa0e-1910-47e9-ade6-c92999e35875]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:25.761 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace which is not needed anymore#033[00m
Oct  7 10:12:25 np0005473739 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Deactivated successfully.
Oct  7 10:12:25 np0005473739 systemd[1]: machine-qemu\x2d46\x2dinstance\x2d00000028.scope: Consumed 3.401s CPU time.
Oct  7 10:12:25 np0005473739 systemd-machined[214580]: Machine qemu-46-instance-00000028 terminated.
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e210 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e210 do_prune osdmap full prune enabled
Oct  7 10:12:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e211 e211: 3 total, 3 up, 3 in
Oct  7 10:12:25 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e211: 3 total, 3 up, 3 in
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.913 2 INFO nova.virt.libvirt.driver [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Instance destroyed successfully.#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.914 2 DEBUG nova.objects.instance [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'resources' on Instance uuid d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.924 2 DEBUG nova.virt.libvirt.vif [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:11:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1971655449',display_name='tempest-ServerDiskConfigTestJSON-server-1971655449',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1971655449',id=40,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-sgyxuhwo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_
model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:12:24Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=d7bf9b41-6a09-4d71-8e33-cb428d71b2a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.925 2 DEBUG nova.network.os_vif_util [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "51e23698-81ea-413b-922d-5c9081ccd2ad", "address": "fa:16:3e:5e:93:42", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap51e23698-81", "ovs_interfaceid": "51e23698-81ea-413b-922d-5c9081ccd2ad", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.925 2 DEBUG nova.network.os_vif_util [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.926 2 DEBUG os_vif [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.928 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.928 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap51e23698-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:25 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [NOTICE]   (308083) : haproxy version is 2.8.14-c23fe91
Oct  7 10:12:25 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [NOTICE]   (308083) : path to executable is /usr/sbin/haproxy
Oct  7 10:12:25 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [WARNING]  (308083) : Exiting Master process...
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.944 2 INFO os_vif [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5e:93:42,bridge_name='br-int',has_traffic_filtering=True,id=51e23698-81ea-413b-922d-5c9081ccd2ad,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap51e23698-81')#033[00m
Oct  7 10:12:25 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [ALERT]    (308083) : Current worker (308087) exited with code 143 (Terminated)
Oct  7 10:12:25 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[308063]: [WARNING]  (308083) : All workers exited. Exiting... (0)
Oct  7 10:12:25 np0005473739 systemd[1]: libpod-96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e.scope: Deactivated successfully.
Oct  7 10:12:25 np0005473739 podman[308187]: 2025-10-07 14:12:25.956708124 +0000 UTC m=+0.056496887 container died 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:12:25 np0005473739 nova_compute[259550]: 2025-10-07 14:12:25.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:12:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e-userdata-shm.mount: Deactivated successfully.
Oct  7 10:12:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5b4b4a21acf9e777111cfeaa2eb1f82dd98dbafde04206345d29037f6c6bf1ad-merged.mount: Deactivated successfully.
Oct  7 10:12:26 np0005473739 podman[308187]: 2025-10-07 14:12:26.003626827 +0000 UTC m=+0.103415590 container cleanup 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 10:12:26 np0005473739 systemd[1]: libpod-conmon-96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e.scope: Deactivated successfully.
Oct  7 10:12:26 np0005473739 podman[308244]: 2025-10-07 14:12:26.072272701 +0000 UTC m=+0.043868074 container remove 96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:12:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.078 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bedbcec3-ded7-4eba-b39f-1d1e45aab58b]: (4, ('Tue Oct  7 02:12:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e)\n96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e\nTue Oct  7 02:12:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e)\n96fd378b465bde9f8d312c66336981c816e9ce28c1b3b31e958ff9feb4335c6e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.080 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fbaafb79-3481-4c51-968b-18ebfd30c1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.081 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:26 np0005473739 kernel: tapd2cb8ca0-10: left promiscuous mode
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.105 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1dbb0100-b25b-4768-badd-cbcb35b00ad1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.141 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c5f913e-9d8c-4e17-963d-22da99dcebcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.152 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c10900f6-d974-411b-a854-bf23a05a8b8d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.171 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[18712b0a-9d57-4bff-9055-132e7487e668]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 692582, 'reachable_time': 43051, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 308262, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.174 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:12:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:26.175 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b6c7bdee-ad9e-4098-98a8-457a96487cf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:26 np0005473739 systemd[1]: run-netns-ovnmeta\x2dd2cb8ca0\x2d1272\x2d4fa9\x2db4ed\x2d8d0a1e3df777.mount: Deactivated successfully.
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.369 2 INFO nova.virt.libvirt.driver [-] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Instance destroyed successfully.#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.369 2 DEBUG nova.objects.instance [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'resources' on Instance uuid 9197df26-c585-4b8f-8ed6-695ee8e233a8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.384 2 DEBUG nova.virt.libvirt.vif [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:11:50Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-2085542773',display_name='tempest-DeleteServersTestJSON-server-2085542773',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-2085542773',id=41,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-2g13mdnp',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member',shelved_at='2025-10-07T14:12:23.226411',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='afce8312-1adb-47e3-b920-51cdbb68b862'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:12:19Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=9197df26-c585-4b8f-8ed6-695ee8e233a8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.385 2 DEBUG nova.network.os_vif_util [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap61102ef2-43", "ovs_interfaceid": "61102ef2-439c-478d-a441-fe00e54b7ff7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.386 2 DEBUG nova.network.os_vif_util [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6b:88:da,bridge_name='br-int',has_traffic_filtering=True,id=61102ef2-439c-478d-a441-fe00e54b7ff7,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61102ef2-43') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.387 2 DEBUG os_vif [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:88:da,bridge_name='br-int',has_traffic_filtering=True,id=61102ef2-439c-478d-a441-fe00e54b7ff7,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61102ef2-43') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.389 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap61102ef2-43, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.402 2 INFO nova.virt.libvirt.driver [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deleting instance files /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_del#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.403 2 INFO nova.virt.libvirt.driver [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deletion of /var/lib/nova/instances/d7bf9b41-6a09-4d71-8e33-cb428d71b2a8_del complete#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.406 2 INFO os_vif [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6b:88:da,bridge_name='br-int',has_traffic_filtering=True,id=61102ef2-439c-478d-a441-fe00e54b7ff7,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap61102ef2-43')#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.577 2 INFO nova.compute.manager [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Took 0.91 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.579 2 DEBUG oslo.service.loopingcall [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.579 2 DEBUG nova.compute.manager [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.580 2 DEBUG nova.network.neutron [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.885 2 INFO nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Deleting instance files /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8_del#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.886 2 INFO nova.virt.libvirt.driver [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Deletion of /var/lib/nova/instances/9197df26-c585-4b8f-8ed6-695ee8e233a8_del complete#033[00m
Oct  7 10:12:26 np0005473739 nova_compute[259550]: 2025-10-07 14:12:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.212 2 INFO nova.scheduler.client.report [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Deleted allocations for instance 9197df26-c585-4b8f-8ed6-695ee8e233a8#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.367 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.368 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.422 2 DEBUG oslo_concurrency.processutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1434: 305 pgs: 305 active+clean; 246 MiB data, 537 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 7.0 MiB/s wr, 172 op/s
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.763 2 DEBUG nova.compute.manager [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Received event network-changed-61102ef2-439c-478d-a441-fe00e54b7ff7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.764 2 DEBUG nova.compute.manager [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Refreshing instance network info cache due to event network-changed-61102ef2-439c-478d-a441-fe00e54b7ff7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.764 2 DEBUG oslo_concurrency.lockutils [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.764 2 DEBUG oslo_concurrency.lockutils [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.765 2 DEBUG nova.network.neutron [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Refreshing network info cache for port 61102ef2-439c-478d-a441-fe00e54b7ff7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:12:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:12:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3602547399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.946 2 DEBUG oslo_concurrency.processutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.957 2 DEBUG nova.compute.provider_tree [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:12:27 np0005473739 nova_compute[259550]: 2025-10-07 14:12:27.983 2 DEBUG nova.network.neutron [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.016 2 DEBUG nova.scheduler.client.report [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.035 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.097 2 INFO nova.compute.manager [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Took 1.52 seconds to deallocate network for instance.#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.148 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.780s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.151 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.151 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.151 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.151 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.295 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.297 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.377 2 DEBUG oslo_concurrency.processutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.448 2 DEBUG oslo_concurrency.lockutils [None req-4df59c56-4e2c-4842-82d4-6b725d391d54 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "9197df26-c585-4b8f-8ed6-695ee8e233a8" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 24.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:12:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3637371592' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.625 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.776 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.779 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.832 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.834 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4196MB free_disk=59.92162322998047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.834 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:12:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/502396541' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.868 2 DEBUG oslo_concurrency.processutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.874 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:12:28 np0005473739 nova_compute[259550]: 2025-10-07 14:12:28.880 2 DEBUG nova.compute.provider_tree [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.015 2 DEBUG nova.scheduler.client.report [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.080 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.137 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.140 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.183 2 DEBUG nova.network.neutron [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Updated VIF entry in instance network info cache for port 61102ef2-439c-478d-a441-fe00e54b7ff7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.184 2 DEBUG nova.network.neutron [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Updating instance_info_cache with network_info: [{"id": "61102ef2-439c-478d-a441-fe00e54b7ff7", "address": "fa:16:3e:6b:88:da", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": null, "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap61102ef2-43", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.224 2 INFO nova.scheduler.client.report [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Deleted allocations for instance d7bf9b41-6a09-4d71-8e33-cb428d71b2a8#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.310 2 DEBUG oslo_concurrency.lockutils [req-5414bdbb-b6e7-48aa-a10b-08ffa375ba23 req-85c8c1e9-8d5b-468a-bc51-d8e02b938675 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9197df26-c585-4b8f-8ed6-695ee8e233a8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.341 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4e86b418-6e7f-4e2e-9146-a847920ed11f has been scheduled to this compute host, the scheduler has made an allocation against this compute node but the instance has yet to start. Skipping heal of allocation: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1692#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.341 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.342 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.386 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.545 2 DEBUG oslo_concurrency.lockutils [None req-da12bf03-431b-4356-834f-7d155be9cd9b 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "d7bf9b41-6a09-4d71-8e33-cb428d71b2a8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.880s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1435: 305 pgs: 305 active+clean; 169 MiB data, 498 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.1 MiB/s wr, 278 op/s
Oct  7 10:12:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:12:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2842463122' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.848 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.854 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:12:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e211 do_prune osdmap full prune enabled
Oct  7 10:12:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e212 e212: 3 total, 3 up, 3 in
Oct  7 10:12:29 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e212: 3 total, 3 up, 3 in
Oct  7 10:12:29 np0005473739 nova_compute[259550]: 2025-10-07 14:12:29.936 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.011 2 DEBUG nova.compute.manager [req-b5e4d3be-5034-421e-b529-a7e34246483f req-2d99fe3d-9400-4ceb-b34e-55cca40ad3eb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Received event network-vif-deleted-51e23698-81ea-413b-922d-5c9081ccd2ad external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.022 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.883s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.024 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.030 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.031 2 INFO nova.compute.claims [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.350 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:12:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4006067363' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.836 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.845 2 DEBUG nova.compute.provider_tree [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.889 2 DEBUG nova.scheduler.client.report [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:12:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.926 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.903s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.927 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.981 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:12:30 np0005473739 nova_compute[259550]: 2025-10-07 14:12:30.981 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.025 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.043 2 INFO nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.094 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.207 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.211 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.212 2 INFO nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating image(s)#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.234 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.259 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.286 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.290 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.374 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.375 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.376 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.376 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.399 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.403 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1437: 305 pgs: 305 active+clean; 95 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 6.3 MiB/s rd, 6.1 MiB/s wr, 313 op/s
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.778 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.375s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.849 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] resizing rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.886 2 DEBUG nova.policy [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bf568df6a8d461a83d287493b393589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de6794f6448744329cf2081eb5b889a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.974 2 DEBUG nova.objects.instance [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.993 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.993 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Ensure instance console log exists: /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.994 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.994 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:31 np0005473739 nova_compute[259550]: 2025-10-07 14:12:31.994 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0011979217539232923 of space, bias 1.0, pg target 0.3593765261769877 quantized to 32 (current 32)
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:12:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:12:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:12:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3783419720' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:12:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:12:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3783419720' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:12:32 np0005473739 nova_compute[259550]: 2025-10-07 14:12:32.995 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Successfully created port: 75d3896e-b08f-4485-b4d7-dff914242597 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:12:33 np0005473739 nova_compute[259550]: 2025-10-07 14:12:33.227 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846338.2266135, 9197df26-c585-4b8f-8ed6-695ee8e233a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:33 np0005473739 nova_compute[259550]: 2025-10-07 14:12:33.228 2 INFO nova.compute.manager [-] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:12:33 np0005473739 nova_compute[259550]: 2025-10-07 14:12:33.248 2 DEBUG nova.compute.manager [None req-365da1a0-58e6-4ab1-bfe0-63033f49af11 - - - - - -] [instance: 9197df26-c585-4b8f-8ed6-695ee8e233a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1438: 305 pgs: 305 active+clean; 95 MiB data, 472 MiB used, 60 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.5 KiB/s wr, 164 op/s
Oct  7 10:12:33 np0005473739 nova_compute[259550]: 2025-10-07 14:12:33.800 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Successfully updated port: 75d3896e-b08f-4485-b4d7-dff914242597 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:12:33 np0005473739 nova_compute[259550]: 2025-10-07 14:12:33.816 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:12:33 np0005473739 nova_compute[259550]: 2025-10-07 14:12:33.817 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquired lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:12:33 np0005473739 nova_compute[259550]: 2025-10-07 14:12:33.817 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:12:34 np0005473739 nova_compute[259550]: 2025-10-07 14:12:34.779 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:12:34 np0005473739 nova_compute[259550]: 2025-10-07 14:12:34.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:12:34 np0005473739 nova_compute[259550]: 2025-10-07 14:12:34.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.007 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.020 2 DEBUG nova.compute.manager [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-changed-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.021 2 DEBUG nova.compute.manager [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Refreshing instance network info cache due to event network-changed-75d3896e-b08f-4485-b4d7-dff914242597. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.021 2 DEBUG oslo_concurrency.lockutils [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.095 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.096 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.114 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.183 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.184 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.194 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.194 2 INFO nova.compute.claims [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.304 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.519 2 DEBUG nova.network.neutron [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Updating instance_info_cache with network_info: [{"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.540 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Releasing lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.541 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance network_info: |[{"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.542 2 DEBUG oslo_concurrency.lockutils [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.542 2 DEBUG nova.network.neutron [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Refreshing network info cache for port 75d3896e-b08f-4485-b4d7-dff914242597 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.545 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start _get_guest_xml network_info=[{"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.551 2 WARNING nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.562 2 DEBUG nova.virt.libvirt.host [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.562 2 DEBUG nova.virt.libvirt.host [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.566 2 DEBUG nova.virt.libvirt.host [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.567 2 DEBUG nova.virt.libvirt.host [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.567 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.568 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.568 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.568 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.569 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.569 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.569 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.570 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.570 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.570 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.570 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.571 2 DEBUG nova.virt.hardware [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.574 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1439: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 186 op/s
Oct  7 10:12:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:12:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1465989475' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.772 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.779 2 DEBUG nova.compute.provider_tree [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.798 2 DEBUG nova.scheduler.client.report [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.823 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.824 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.873 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.873 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.892 2 INFO nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:12:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e212 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e212 do_prune osdmap full prune enabled
Oct  7 10:12:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 e213: 3 total, 3 up, 3 in
Oct  7 10:12:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e213: 3 total, 3 up, 3 in
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.917 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.942 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.942 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.964 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:12:35 np0005473739 nova_compute[259550]: 2025-10-07 14:12:35.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.027 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.028 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.029 2 INFO nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Creating image(s)#033[00m
Oct  7 10:12:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:12:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2614234729' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.069 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.095 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.116 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.120 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.162 2 DEBUG nova.policy [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ba82e1a9f12417391d78758ae9bb83c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f5ee4e560ed4660a6685a086282a370', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.172 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.598s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.197 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.202 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.239 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.119s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.241 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.242 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.243 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.268 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.272 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.315 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.316 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.325 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.325 2 INFO nova.compute.claims [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.483 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.608 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.680 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] resizing rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:12:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:12:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1783457451' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.712 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Successfully created port: 130e9c49-53e8-495e-ac38-4d3e63f49011 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.720 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.724 2 DEBUG nova.virt.libvirt.vif [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:31Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.725 2 DEBUG nova.network.os_vif_util [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.726 2 DEBUG nova.network.os_vif_util [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.727 2 DEBUG nova.objects.instance [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.745 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  <uuid>4e86b418-6e7f-4e2e-9146-a847920ed11f</uuid>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  <name>instance-0000002a</name>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1086034539</nova:name>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:12:35</nova:creationTime>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <nova:user uuid="7bf568df6a8d461a83d287493b393589">tempest-ServerDiskConfigTestJSON-831175870-project-member</nova:user>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <nova:project uuid="de6794f6448744329cf2081eb5b889a5">tempest-ServerDiskConfigTestJSON-831175870</nova:project>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <nova:port uuid="75d3896e-b08f-4485-b4d7-dff914242597">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <entry name="serial">4e86b418-6e7f-4e2e-9146-a847920ed11f</entry>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <entry name="uuid">4e86b418-6e7f-4e2e-9146-a847920ed11f</entry>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4e86b418-6e7f-4e2e-9146-a847920ed11f_disk">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:a0:f3:88"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <target dev="tap75d3896e-b0"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/console.log" append="off"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:12:36 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:12:36 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:12:36 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:12:36 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.746 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Preparing to wait for external event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.747 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.747 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.747 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.748 2 DEBUG nova.virt.libvirt.vif [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:31Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.749 2 DEBUG nova.network.os_vif_util [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.749 2 DEBUG nova.network.os_vif_util [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.750 2 DEBUG os_vif [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.751 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.752 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.756 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75d3896e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.757 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75d3896e-b0, col_values=(('external_ids', {'iface-id': '75d3896e-b08f-4485-b4d7-dff914242597', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:f3:88', 'vm-uuid': '4e86b418-6e7f-4e2e-9146-a847920ed11f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:36 np0005473739 NetworkManager[44949]: <info>  [1759846356.7597] manager: (tap75d3896e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/143)
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.796 2 INFO os_vif [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0')#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.801 2 DEBUG nova.objects.instance [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'migration_context' on Instance uuid 867307d6-0b3f-4a3e-9dc4-a05221e2f080 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.839 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.839 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Ensure instance console log exists: /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.840 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.840 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.841 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.865 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.866 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.866 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No VIF found with MAC fa:16:3e:a0:f3:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.867 2 INFO nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Using config drive#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.893 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:36 np0005473739 nova_compute[259550]: 2025-10-07 14:12:36.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:12:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:12:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2437332903' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.047 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.054 2 DEBUG nova.compute.provider_tree [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.068 2 DEBUG nova.scheduler.client.report [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.099 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.782s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.099 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.153 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.153 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.178 2 INFO nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.202 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.219 2 DEBUG nova.network.neutron [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Updated VIF entry in instance network info cache for port 75d3896e-b08f-4485-b4d7-dff914242597. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.220 2 DEBUG nova.network.neutron [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Updating instance_info_cache with network_info: [{"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.264 2 DEBUG oslo_concurrency.lockutils [req-8162dcdc-9049-47a7-ae03-39a39c7d957d req-ab44f156-7fcc-44e6-bfa2-84274f647cb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4e86b418-6e7f-4e2e-9146-a847920ed11f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.307 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.309 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.309 2 INFO nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Creating image(s)#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.329 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.353 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.381 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.386 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.430 2 INFO nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating config drive at /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.436 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1kg0elkj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.477 2 DEBUG nova.policy [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a0452296b3a942e893961944a0203d98', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06322ecec4b94a5d94e34cc8632d4104', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.482 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Successfully updated port: 130e9c49-53e8-495e-ac38-4d3e63f49011 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.486 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.486 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.487 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.487 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.507 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.511 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.550 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.551 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquired lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.551 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.556 2 DEBUG nova.compute.manager [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.556 2 DEBUG nova.compute.manager [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing instance network info cache due to event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.556 2 DEBUG oslo_concurrency.lockutils [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.582 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1kg0elkj" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.607 2 DEBUG nova.storage.rbd_utils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.613 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1441: 305 pgs: 305 active+clean; 88 MiB data, 443 MiB used, 60 GiB / 60 GiB avail; 65 KiB/s rd, 2.7 MiB/s wr, 98 op/s
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.706 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.861 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.350s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.894 2 DEBUG oslo_concurrency.processutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.895 2 INFO nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deleting local config drive /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config because it was imported into RBD.#033[00m
Oct  7 10:12:37 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.941 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] resizing rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:12:37 np0005473739 kernel: tap75d3896e-b0: entered promiscuous mode
Oct  7 10:12:37 np0005473739 NetworkManager[44949]: <info>  [1759846357.9566] manager: (tap75d3896e-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/144)
Oct  7 10:12:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:37Z|00289|binding|INFO|Claiming lport 75d3896e-b08f-4485-b4d7-dff914242597 for this chassis.
Oct  7 10:12:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:37Z|00290|binding|INFO|75d3896e-b08f-4485-b4d7-dff914242597: Claiming fa:16:3e:a0:f3:88 10.100.0.7
Oct  7 10:12:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:37Z|00291|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 ovn-installed in OVS
Oct  7 10:12:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:37Z|00292|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 up in Southbound
Oct  7 10:12:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:37.998 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f3:88 10.100.0.7'], port_security=['fa:16:3e:a0:f3:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4e86b418-6e7f-4e2e-9146-a847920ed11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=75d3896e-b08f-4485-b4d7-dff914242597) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:37.997 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.000 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 75d3896e-b08f-4485-b4d7-dff914242597 in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 bound to our chassis#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.001 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777#033[00m
Oct  7 10:12:38 np0005473739 systemd-udevd[309067]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.015 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a96dd042-d359-4ad7-8b41-c80102392740]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.016 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2cb8ca0-11 in ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.019 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2cb8ca0-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.020 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3bdaba-8353-4263-9369-9909209f7be4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 systemd-machined[214580]: New machine qemu-47-instance-0000002a.
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.022 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b33304d6-1a9e-4ee3-9093-b947c0ca3975]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 systemd[1]: Started Virtual Machine qemu-47-instance-0000002a.
Oct  7 10:12:38 np0005473739 NetworkManager[44949]: <info>  [1759846358.0434] device (tap75d3896e-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.042 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5b45562a-edcd-43ad-9ec6-f32964238d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 NetworkManager[44949]: <info>  [1759846358.0450] device (tap75d3896e-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.058 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[896be126-b75c-4cb6-afe6-a1abdda2b944]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 podman[309047]: 2025-10-07 14:12:38.096198067 +0000 UTC m=+0.106237683 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.105 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b05db223-5c58-4851-9f9a-a923db842333]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 systemd-udevd[309075]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.118 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f285a6-586a-466e-8281-e8c021e1a4cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 NetworkManager[44949]: <info>  [1759846358.1204] manager: (tapd2cb8ca0-10): new Veth device (/org/freedesktop/NetworkManager/Devices/145)
Oct  7 10:12:38 np0005473739 podman[309051]: 2025-10-07 14:12:38.127439518 +0000 UTC m=+0.134288631 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:38.147 2 DEBUG nova.objects.instance [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'migration_context' on Instance uuid 4fef229d-c42d-43ac-a3ff-527ca68d3796 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.160 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[69e85286-7e6f-4cce-bbdc-5d623fcad6d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.163 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[74ee1f33-0d08-417b-a322-949c094a911d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:38.162 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:38.163 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Ensure instance console log exists: /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:38.163 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:38.163 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:38.164 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:38 np0005473739 NetworkManager[44949]: <info>  [1759846358.1921] device (tapd2cb8ca0-10): carrier: link connected
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.197 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4d76d6de-cf94-450f-b3d2-1b1aca648207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.217 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[54bbd03b-b374-437d-8496-da80f01d6f9c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694175, 'reachable_time': 41094, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309146, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.235 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[587225ff-c9b5-49c3-a4be-15acfbdeb94c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:eb7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694175, 'tstamp': 694175}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309147, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.255 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d447ab0e-4f2c-457e-b059-21f49cd98080]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 95], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694175, 'reachable_time': 41094, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309148, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.293 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c828c18-f29d-45d3-925d-8b533b61114f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.363 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e27e0677-e166-47dc-994b-b9684cc5d435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.365 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.366 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.366 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2cb8ca0-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:38 np0005473739 NetworkManager[44949]: <info>  [1759846358.3694] manager: (tapd2cb8ca0-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/146)
Oct  7 10:12:38 np0005473739 kernel: tapd2cb8ca0-10: entered promiscuous mode
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.371 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2cb8ca0-10, col_values=(('external_ids', {'iface-id': '93001468-74a0-4bac-94dd-0978737be6e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:38Z|00293|binding|INFO|Releasing lport 93001468-74a0-4bac-94dd-0978737be6e2 from this chassis (sb_readonly=0)
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:38.381 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:38.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.393 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.395 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e3ec9309-7cd1-4fc4-9c90-8b3d67a3b5ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.395 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:12:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:38.396 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'env', 'PROCESS_TAG=haproxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:12:38 np0005473739 podman[309222]: 2025-10-07 14:12:38.823030623 +0000 UTC m=+0.070384191 container create e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  7 10:12:38 np0005473739 systemd[1]: Started libpod-conmon-e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33.scope.
Oct  7 10:12:38 np0005473739 podman[309222]: 2025-10-07 14:12:38.786098482 +0000 UTC m=+0.033452100 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:12:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:12:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5fd8bb89be17d12218ba71ac6eeda91b03262ee72f2eb23233a59ffdba88634a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:12:38 np0005473739 podman[309222]: 2025-10-07 14:12:38.936325301 +0000 UTC m=+0.183678909 container init e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:12:38 np0005473739 podman[309222]: 2025-10-07 14:12:38.942814272 +0000 UTC m=+0.190167840 container start e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:38.955 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Successfully created port: 23002fad-2420-4b68-bfd7-2d90f8b5df6d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:12:38 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [NOTICE]   (309241) : New worker (309243) forked
Oct  7 10:12:38 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [NOTICE]   (309241) : Loading success.
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:38.984 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846358.9837368, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:38 np0005473739 nova_compute[259550]: 2025-10-07 14:12:38.985 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Started (Lifecycle Event)#033[00m
Oct  7 10:12:39 np0005473739 nova_compute[259550]: 2025-10-07 14:12:39.005 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:39 np0005473739 nova_compute[259550]: 2025-10-07 14:12:39.011 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846358.9840682, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:39 np0005473739 nova_compute[259550]: 2025-10-07 14:12:39.012 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:12:39 np0005473739 nova_compute[259550]: 2025-10-07 14:12:39.030 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:39 np0005473739 nova_compute[259550]: 2025-10-07 14:12:39.035 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:12:39 np0005473739 nova_compute[259550]: 2025-10-07 14:12:39.057 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:12:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1442: 305 pgs: 305 active+clean; 140 MiB data, 466 MiB used, 60 GiB / 60 GiB avail; 102 KiB/s rd, 4.5 MiB/s wr, 153 op/s
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.025 2 DEBUG nova.network.neutron [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.046 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Releasing lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.047 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Instance network_info: |[{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.047 2 DEBUG oslo_concurrency.lockutils [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.047 2 DEBUG nova.network.neutron [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.050 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Start _get_guest_xml network_info=[{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.054 2 WARNING nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.060 2 DEBUG nova.virt.libvirt.host [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.060 2 DEBUG nova.virt.libvirt.host [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.069 2 DEBUG nova.virt.libvirt.host [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.070 2 DEBUG nova.virt.libvirt.host [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.071 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.071 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.071 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.072 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.072 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.072 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.072 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.072 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.073 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.073 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.073 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.073 2 DEBUG nova.virt.hardware [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.076 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.189 2 DEBUG nova.compute.manager [req-693a5285-b2d9-4959-89d6-9a999cbda3bf req-3eb1edbb-ea37-47ff-b394-5960070bf802 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.190 2 DEBUG oslo_concurrency.lockutils [req-693a5285-b2d9-4959-89d6-9a999cbda3bf req-3eb1edbb-ea37-47ff-b394-5960070bf802 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.190 2 DEBUG oslo_concurrency.lockutils [req-693a5285-b2d9-4959-89d6-9a999cbda3bf req-3eb1edbb-ea37-47ff-b394-5960070bf802 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.190 2 DEBUG oslo_concurrency.lockutils [req-693a5285-b2d9-4959-89d6-9a999cbda3bf req-3eb1edbb-ea37-47ff-b394-5960070bf802 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.191 2 DEBUG nova.compute.manager [req-693a5285-b2d9-4959-89d6-9a999cbda3bf req-3eb1edbb-ea37-47ff-b394-5960070bf802 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Processing event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.191 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.195 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846360.1951532, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.195 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.198 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.207 2 INFO nova.virt.libvirt.driver [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance spawned successfully.#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.208 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.214 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.217 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.226 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.227 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.227 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.227 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.228 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.228 2 DEBUG nova.virt.libvirt.driver [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.239 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.282 2 INFO nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Took 9.07 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.283 2 DEBUG nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.353 2 INFO nova.compute.manager [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Took 11.29 seconds to build instance.#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.374 2 DEBUG oslo_concurrency.lockutils [None req-11abe5b9-1d16-4e4f-8095-54dd03db938a 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:12:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3813252651' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.552 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.573 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.578 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.624 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Successfully updated port: 23002fad-2420-4b68-bfd7-2d90f8b5df6d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.638 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.639 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquired lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.640 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.722 2 DEBUG nova.compute.manager [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-changed-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.723 2 DEBUG nova.compute.manager [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Refreshing instance network info cache due to event network-changed-23002fad-2420-4b68-bfd7-2d90f8b5df6d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.724 2 DEBUG oslo_concurrency.lockutils [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.847 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:12:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.912 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846345.9105232, d7bf9b41-6a09-4d71-8e33-cb428d71b2a8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.912 2 INFO nova.compute.manager [-] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.937 2 DEBUG nova.compute.manager [None req-fbc6f91f-5906-4f38-8ac6-41eb4102435c - - - - - -] [instance: d7bf9b41-6a09-4d71-8e33-cb428d71b2a8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:40 np0005473739 nova_compute[259550]: 2025-10-07 14:12:40.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:12:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2814303154' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.090 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.093 2 DEBUG nova.virt.libvirt.vif [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-84480482',display_name='tempest-SecurityGroupsTestJSON-server-84480482',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-84480482',id=43,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-9i3oosu3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-167
3626413-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:35Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=867307d6-0b3f-4a3e-9dc4-a05221e2f080,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.094 2 DEBUG nova.network.os_vif_util [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.095 2 DEBUG nova.network.os_vif_util [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.096 2 DEBUG nova.objects.instance [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'pci_devices' on Instance uuid 867307d6-0b3f-4a3e-9dc4-a05221e2f080 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.115 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  <uuid>867307d6-0b3f-4a3e-9dc4-a05221e2f080</uuid>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  <name>instance-0000002b</name>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <nova:name>tempest-SecurityGroupsTestJSON-server-84480482</nova:name>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:12:40</nova:creationTime>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <nova:user uuid="3ba82e1a9f12417391d78758ae9bb83c">tempest-SecurityGroupsTestJSON-1673626413-project-member</nova:user>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <nova:project uuid="1f5ee4e560ed4660a6685a086282a370">tempest-SecurityGroupsTestJSON-1673626413</nova:project>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <nova:port uuid="130e9c49-53e8-495e-ac38-4d3e63f49011">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <entry name="serial">867307d6-0b3f-4a3e-9dc4-a05221e2f080</entry>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <entry name="uuid">867307d6-0b3f-4a3e-9dc4-a05221e2f080</entry>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:03:73:59"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <target dev="tap130e9c49-53"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/console.log" append="off"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:12:41 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:12:41 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:12:41 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:12:41 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.123 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Preparing to wait for external event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.123 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.123 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.124 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.125 2 DEBUG nova.virt.libvirt.vif [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-84480482',display_name='tempest-SecurityGroupsTestJSON-server-84480482',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-84480482',id=43,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-9i3oosu3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:35Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=867307d6-0b3f-4a3e-9dc4-a05221e2f080,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.125 2 DEBUG nova.network.os_vif_util [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.126 2 DEBUG nova.network.os_vif_util [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.127 2 DEBUG os_vif [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.129 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.134 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.135 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap130e9c49-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.135 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap130e9c49-53, col_values=(('external_ids', {'iface-id': '130e9c49-53e8-495e-ac38-4d3e63f49011', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:73:59', 'vm-uuid': '867307d6-0b3f-4a3e-9dc4-a05221e2f080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:41 np0005473739 NetworkManager[44949]: <info>  [1759846361.1384] manager: (tap130e9c49-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/147)
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.145 2 INFO os_vif [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53')#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.224 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.225 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.225 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No VIF found with MAC fa:16:3e:03:73:59, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.226 2 INFO nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Using config drive#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.248 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1443: 305 pgs: 305 active+clean; 180 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.4 MiB/s wr, 127 op/s
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.813 2 DEBUG nova.network.neutron [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updated VIF entry in instance network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.814 2 DEBUG nova.network.neutron [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:12:41 np0005473739 nova_compute[259550]: 2025-10-07 14:12:41.837 2 DEBUG oslo_concurrency.lockutils [req-595fdeef-e987-4b94-a3b0-dfd4774661bb req-be68a0b6-3fb0-428b-a2a1-e73c6571a701 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.092 2 INFO nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Creating config drive at /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.100 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmja815un execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.249 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpmja815un" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.273 2 DEBUG nova.storage.rbd_utils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.277 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.314 2 DEBUG nova.network.neutron [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Updating instance_info_cache with network_info: [{"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.331 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Releasing lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.332 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance network_info: |[{"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.332 2 DEBUG oslo_concurrency.lockutils [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.333 2 DEBUG nova.network.neutron [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Refreshing network info cache for port 23002fad-2420-4b68-bfd7-2d90f8b5df6d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.336 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Start _get_guest_xml network_info=[{"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.343 2 WARNING nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.350 2 DEBUG nova.virt.libvirt.host [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.351 2 DEBUG nova.virt.libvirt.host [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.354 2 DEBUG nova.virt.libvirt.host [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.355 2 DEBUG nova.virt.libvirt.host [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.355 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.356 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.356 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.356 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.357 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.357 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.357 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.358 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.358 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.358 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.358 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.359 2 DEBUG nova.virt.hardware [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.362 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.451 2 DEBUG oslo_concurrency.processutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config 867307d6-0b3f-4a3e-9dc4-a05221e2f080_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.452 2 INFO nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Deleting local config drive /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080/disk.config because it was imported into RBD.#033[00m
Oct  7 10:12:42 np0005473739 NetworkManager[44949]: <info>  [1759846362.5086] manager: (tap130e9c49-53): new Tun device (/org/freedesktop/NetworkManager/Devices/148)
Oct  7 10:12:42 np0005473739 kernel: tap130e9c49-53: entered promiscuous mode
Oct  7 10:12:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:42Z|00294|binding|INFO|Claiming lport 130e9c49-53e8-495e-ac38-4d3e63f49011 for this chassis.
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:42Z|00295|binding|INFO|130e9c49-53e8-495e-ac38-4d3e63f49011: Claiming fa:16:3e:03:73:59 10.100.0.5
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.609 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:73:59 10.100.0.5'], port_security=['fa:16:3e:03:73:59 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '867307d6-0b3f-4a3e-9dc4-a05221e2f080', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '2', 'neutron:security_group_ids': '83afa0b2-d45d-4225-8e21-5474e9077205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=130e9c49-53e8-495e-ac38-4d3e63f49011) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.610 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 130e9c49-53e8-495e-ac38-4d3e63f49011 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 bound to our chassis#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.612 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71870f0f-c94f-4d32-8df4-00da4d6d4129#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.624 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b8619153-4e10-4746-a3a8-3a673d7940b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.626 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap71870f0f-c1 in ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:12:42 np0005473739 systemd-machined[214580]: New machine qemu-48-instance-0000002b.
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.629 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap71870f0f-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.630 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9cc04bae-0e59-4224-a1d7-a82680d1ebf5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.632 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b910d64-6c68-484d-a51c-a117e0b41e62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 systemd-udevd[309408]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:12:42 np0005473739 systemd[1]: Started Virtual Machine qemu-48-instance-0000002b.
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.647 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c029d2d3-fa43-468a-adf7-75d2bdc53674]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 NetworkManager[44949]: <info>  [1759846362.6516] device (tap130e9c49-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:12:42 np0005473739 NetworkManager[44949]: <info>  [1759846362.6527] device (tap130e9c49-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.676 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fdaf2188-e59f-49a8-b58b-91c7f5758372]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:42Z|00296|binding|INFO|Setting lport 130e9c49-53e8-495e-ac38-4d3e63f49011 ovn-installed in OVS
Oct  7 10:12:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:42Z|00297|binding|INFO|Setting lport 130e9c49-53e8-495e-ac38-4d3e63f49011 up in Southbound
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.708 2 DEBUG nova.compute.manager [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.709 2 DEBUG oslo_concurrency.lockutils [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.709 2 DEBUG oslo_concurrency.lockutils [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.709 2 DEBUG oslo_concurrency.lockutils [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.709 2 DEBUG nova.compute.manager [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.710 2 WARNING nova.compute.manager [req-bf4856d5-8948-43fe-8420-4e24fd4a0282 req-4089029b-4ce6-4fed-94aa-bb78a478dce6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received unexpected event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.713 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3eeac796-3a58-4e23-bcb8-41149ffdd70f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.718 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ecbc2c01-4212-4da8-93ac-87167256b71b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 NetworkManager[44949]: <info>  [1759846362.7202] manager: (tap71870f0f-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/149)
Oct  7 10:12:42 np0005473739 systemd-udevd[309412]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.768 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3b0cea11-4c7b-4383-b094-0c74c40c1ff1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.772 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4df21070-83d8-44bf-be3b-a6a12801fdac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 NetworkManager[44949]: <info>  [1759846362.8021] device (tap71870f0f-c0): carrier: link connected
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.810 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a53702-a771-4934-bf80-8edcaf425a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.838 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[75130151-e75d-48f6-a925-979966ea4936]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 24274, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309441, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.858 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5678df53-6c23-4015-a20e-980f1aa66fcb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe2f:d112'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694636, 'tstamp': 694636}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309442, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.883 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6955afc1-93c7-4694-ab55-c59134d601f3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 24274, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309443, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.892 2 DEBUG nova.compute.manager [req-d4e051a7-7cd6-47db-80ec-f03235658dfe req-f0b54fa2-855b-456c-a9ea-241d247e312b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.893 2 DEBUG oslo_concurrency.lockutils [req-d4e051a7-7cd6-47db-80ec-f03235658dfe req-f0b54fa2-855b-456c-a9ea-241d247e312b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.893 2 DEBUG oslo_concurrency.lockutils [req-d4e051a7-7cd6-47db-80ec-f03235658dfe req-f0b54fa2-855b-456c-a9ea-241d247e312b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.894 2 DEBUG oslo_concurrency.lockutils [req-d4e051a7-7cd6-47db-80ec-f03235658dfe req-f0b54fa2-855b-456c-a9ea-241d247e312b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.894 2 DEBUG nova.compute.manager [req-d4e051a7-7cd6-47db-80ec-f03235658dfe req-f0b54fa2-855b-456c-a9ea-241d247e312b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Processing event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:12:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:42.920 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4115224-a949-46fa-9c3c-3a2f3962be3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:12:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/141100417' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:12:42 np0005473739 nova_compute[259550]: 2025-10-07 14:12:42.974 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.612s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.008 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.010 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7edf2aef-dca2-4083-a557-54105ec9f262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.012 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.013 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.013 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71870f0f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:43 np0005473739 NetworkManager[44949]: <info>  [1759846363.0158] manager: (tap71870f0f-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/150)
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.014 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:43 np0005473739 kernel: tap71870f0f-c0: entered promiscuous mode
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.020 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71870f0f-c0, col_values=(('external_ids', {'iface-id': '7bd1effb-a353-4387-8382-bb3ef13fb3f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:43Z|00298|binding|INFO|Releasing lport 7bd1effb-a353-4387-8382-bb3ef13fb3f0 from this chassis (sb_readonly=0)
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.024 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/71870f0f-c94f-4d32-8df4-00da4d6d4129.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/71870f0f-c94f-4d32-8df4-00da4d6d4129.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.026 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[adf94ed1-4271-4b77-9177-4cac7355b365]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.027 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-71870f0f-c94f-4d32-8df4-00da4d6d4129
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/71870f0f-c94f-4d32-8df4-00da4d6d4129.pid.haproxy
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 71870f0f-c94f-4d32-8df4-00da4d6d4129
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:12:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:43.029 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'env', 'PROCESS_TAG=haproxy-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/71870f0f-c94f-4d32-8df4-00da4d6d4129.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:43 np0005473739 podman[309557]: 2025-10-07 14:12:43.462401162 +0000 UTC m=+0.072334153 container create 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:12:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:12:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/739791835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:12:43 np0005473739 podman[309557]: 2025-10-07 14:12:43.421834165 +0000 UTC m=+0.031767186 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.521 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:43 np0005473739 systemd[1]: Started libpod-conmon-14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31.scope.
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.524 2 DEBUG nova.virt.libvirt.vif [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1001011532',display_name='tempest-DeleteServersTestJSON-server-1001011532',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1001011532',id=44,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-rcycun96',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-18
71282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:37Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=4fef229d-c42d-43ac-a3ff-527ca68d3796,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.525 2 DEBUG nova.network.os_vif_util [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.525 2 DEBUG nova.network.os_vif_util [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.527 2 DEBUG nova.objects.instance [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fef229d-c42d-43ac-a3ff-527ca68d3796 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:12:43 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/820a27a5171c3bf28772eb4c1b067f6a72a5b80227d7b45bd095d8022b7260b2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.552 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.554 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846363.55407, 867307d6-0b3f-4a3e-9dc4-a05221e2f080 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.554 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] VM Started (Lifecycle Event)#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.557 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.561 2 INFO nova.virt.libvirt.driver [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Instance spawned successfully.#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.562 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:12:43 np0005473739 podman[309557]: 2025-10-07 14:12:43.573003029 +0000 UTC m=+0.182936040 container init 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  7 10:12:43 np0005473739 podman[309557]: 2025-10-07 14:12:43.578589666 +0000 UTC m=+0.188522657 container start 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:12:43 np0005473739 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [NOTICE]   (309579) : New worker (309581) forked
Oct  7 10:12:43 np0005473739 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [NOTICE]   (309579) : Loading success.
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.628 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  <uuid>4fef229d-c42d-43ac-a3ff-527ca68d3796</uuid>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  <name>instance-0000002c</name>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <nova:name>tempest-DeleteServersTestJSON-server-1001011532</nova:name>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:12:42</nova:creationTime>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <nova:user uuid="a0452296b3a942e893961944a0203d98">tempest-DeleteServersTestJSON-1871282594-project-member</nova:user>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <nova:project uuid="06322ecec4b94a5d94e34cc8632d4104">tempest-DeleteServersTestJSON-1871282594</nova:project>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <nova:port uuid="23002fad-2420-4b68-bfd7-2d90f8b5df6d">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <entry name="serial">4fef229d-c42d-43ac-a3ff-527ca68d3796</entry>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <entry name="uuid">4fef229d-c42d-43ac-a3ff-527ca68d3796</entry>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4fef229d-c42d-43ac-a3ff-527ca68d3796_disk">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:4c:78:88"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <target dev="tap23002fad-24"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/console.log" append="off"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:12:43 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:12:43 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:12:43 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:12:43 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.629 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Preparing to wait for external event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.630 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.630 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.630 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.631 2 DEBUG nova.virt.libvirt.vif [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1001011532',display_name='tempest-DeleteServersTestJSON-server-1001011532',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1001011532',id=44,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-rcycun96',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersT
estJSON-1871282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:37Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=4fef229d-c42d-43ac-a3ff-527ca68d3796,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.631 2 DEBUG nova.network.os_vif_util [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.631 2 DEBUG nova.network.os_vif_util [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.632 2 DEBUG os_vif [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.635 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap23002fad-24, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap23002fad-24, col_values=(('external_ids', {'iface-id': '23002fad-2420-4b68-bfd7-2d90f8b5df6d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4c:78:88', 'vm-uuid': '4fef229d-c42d-43ac-a3ff-527ca68d3796'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:43 np0005473739 NetworkManager[44949]: <info>  [1759846363.6387] manager: (tap23002fad-24): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/151)
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.646 2 INFO os_vif [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24')#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.662 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1444: 305 pgs: 305 active+clean; 180 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 82 KiB/s rd, 6.4 MiB/s wr, 127 op/s
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.671 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.675 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.676 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.676 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.677 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.684 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.685 2 DEBUG nova.virt.libvirt.driver [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.727 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.729 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846363.554585, 867307d6-0b3f-4a3e-9dc4-a05221e2f080 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.729 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.755 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.759 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.760 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.760 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No VIF found with MAC fa:16:3e:4c:78:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.762 2 INFO nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Using config drive#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.782 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.788 2 INFO nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Took 7.76 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.789 2 DEBUG nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.792 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846363.5568259, 867307d6-0b3f-4a3e-9dc4-a05221e2f080 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.793 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.809 2 DEBUG nova.network.neutron [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Updated VIF entry in instance network info cache for port 23002fad-2420-4b68-bfd7-2d90f8b5df6d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.810 2 DEBUG nova.network.neutron [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Updating instance_info_cache with network_info: [{"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.827 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.839 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.848 2 DEBUG oslo_concurrency.lockutils [req-e89e46c7-334c-42e8-8919-264238291857 req-a95c233b-0cef-4935-8899-b9ee34712168 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4fef229d-c42d-43ac-a3ff-527ca68d3796" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.873 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.891 2 INFO nova.compute.manager [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Took 8.73 seconds to build instance.#033[00m
Oct  7 10:12:43 np0005473739 nova_compute[259550]: 2025-10-07 14:12:43.909 2 DEBUG oslo_concurrency.lockutils [None req-3fa948d9-2b3b-4030-8087-68b093b4dc5a 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:44 np0005473739 nova_compute[259550]: 2025-10-07 14:12:44.354 2 INFO nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Creating config drive at /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config#033[00m
Oct  7 10:12:44 np0005473739 nova_compute[259550]: 2025-10-07 14:12:44.359 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpef6p6s81 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:44 np0005473739 nova_compute[259550]: 2025-10-07 14:12:44.503 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpef6p6s81" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:44 np0005473739 nova_compute[259550]: 2025-10-07 14:12:44.530 2 DEBUG nova.storage.rbd_utils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:44 np0005473739 nova_compute[259550]: 2025-10-07 14:12:44.534 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:44 np0005473739 nova_compute[259550]: 2025-10-07 14:12:44.665 2 INFO nova.compute.manager [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Rebuilding instance#033[00m
Oct  7 10:12:44 np0005473739 nova_compute[259550]: 2025-10-07 14:12:44.695 2 DEBUG oslo_concurrency.processutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config 4fef229d-c42d-43ac-a3ff-527ca68d3796_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:44 np0005473739 nova_compute[259550]: 2025-10-07 14:12:44.695 2 INFO nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Deleting local config drive /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796/disk.config because it was imported into RBD.#033[00m
Oct  7 10:12:44 np0005473739 kernel: tap23002fad-24: entered promiscuous mode
Oct  7 10:12:44 np0005473739 NetworkManager[44949]: <info>  [1759846364.7389] manager: (tap23002fad-24): new Tun device (/org/freedesktop/NetworkManager/Devices/152)
Oct  7 10:12:44 np0005473739 systemd-udevd[309437]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:12:44 np0005473739 nova_compute[259550]: 2025-10-07 14:12:44.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:44Z|00299|binding|INFO|Claiming lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d for this chassis.
Oct  7 10:12:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:44Z|00300|binding|INFO|23002fad-2420-4b68-bfd7-2d90f8b5df6d: Claiming fa:16:3e:4c:78:88 10.100.0.8
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.751 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:88 10.100.0.8'], port_security=['fa:16:3e:4c:78:88 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4fef229d-c42d-43ac-a3ff-527ca68d3796', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=23002fad-2420-4b68-bfd7-2d90f8b5df6d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.753 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 23002fad-2420-4b68-bfd7-2d90f8b5df6d in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f bound to our chassis#033[00m
Oct  7 10:12:44 np0005473739 NetworkManager[44949]: <info>  [1759846364.7560] device (tap23002fad-24): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.756 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8accac57-ab45-4b9b-95ed-86c2c65f202f#033[00m
Oct  7 10:12:44 np0005473739 NetworkManager[44949]: <info>  [1759846364.7572] device (tap23002fad-24): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:12:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:44Z|00301|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d ovn-installed in OVS
Oct  7 10:12:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:44Z|00302|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d up in Southbound
Oct  7 10:12:44 np0005473739 nova_compute[259550]: 2025-10-07 14:12:44.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.770 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4a8c43d-b178-4230-9812-80f169eae22f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.771 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8accac57-a1 in ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:12:44 np0005473739 nova_compute[259550]: 2025-10-07 14:12:44.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.775 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8accac57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.775 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4421c561-617d-4ba3-92e5-4a38dffde359]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.777 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f749f85-ce38-48a3-98b5-1959863e43da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:44 np0005473739 systemd-machined[214580]: New machine qemu-49-instance-0000002c.
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.799 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[42c8b25a-ef03-4f81-acb5-1d696fa33bc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:44 np0005473739 systemd[1]: Started Virtual Machine qemu-49-instance-0000002c.
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.829 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9156fe84-3ac0-4036-acf7-30c8f6d31abf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.863 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[71f5d012-01f1-4c93-829c-725b3f5e2c34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.869 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[82f4f805-fe60-47ca-9dcc-8aa1b2ee7009]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:44 np0005473739 NetworkManager[44949]: <info>  [1759846364.8704] manager: (tap8accac57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/153)
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.911 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1367d6c3-7f46-4ef4-9572-92ce269743d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.918 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[122c11b5-7080-42a4-9d46-2b2b6e0ece6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:44 np0005473739 NetworkManager[44949]: <info>  [1759846364.9448] device (tap8accac57-a0): carrier: link connected
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.952 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0c88f78c-4b14-4b72-b8a1-a23e292337bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.973 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c647058-2d74-4c86-a501-9439a23861a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694850, 'reachable_time': 16785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309680, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:44.994 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c464c7b9-8b97-4362-9ecd-de8e86a2d10b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:e89f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694850, 'tstamp': 694850}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 309681, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.015 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b9c0be2d-96d3-4bb9-bb3a-344b6d10d0f1]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 99], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694850, 'reachable_time': 16785, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 309682, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.025 2 DEBUG nova.compute.manager [req-c31c1b60-c7fa-4355-a99e-199cafb26217 req-cab00e07-b096-425e-b9d0-829c5a6ce79b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.027 2 DEBUG oslo_concurrency.lockutils [req-c31c1b60-c7fa-4355-a99e-199cafb26217 req-cab00e07-b096-425e-b9d0-829c5a6ce79b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.028 2 DEBUG oslo_concurrency.lockutils [req-c31c1b60-c7fa-4355-a99e-199cafb26217 req-cab00e07-b096-425e-b9d0-829c5a6ce79b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.028 2 DEBUG oslo_concurrency.lockutils [req-c31c1b60-c7fa-4355-a99e-199cafb26217 req-cab00e07-b096-425e-b9d0-829c5a6ce79b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.029 2 DEBUG nova.compute.manager [req-c31c1b60-c7fa-4355-a99e-199cafb26217 req-cab00e07-b096-425e-b9d0-829c5a6ce79b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Processing event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.052 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3a5360-9b06-461e-9a83-eed219c93137]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.092 2 DEBUG nova.compute.manager [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.092 2 DEBUG oslo_concurrency.lockutils [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.093 2 DEBUG oslo_concurrency.lockutils [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.093 2 DEBUG oslo_concurrency.lockutils [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.093 2 DEBUG nova.compute.manager [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] No waiting events found dispatching network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.093 2 WARNING nova.compute.manager [req-dcdd3326-86d2-4dbe-a7d9-e785eb3796c4 req-363e2b86-968c-41b5-889d-6945d3af6cda 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received unexpected event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.095 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.107 2 DEBUG nova.compute.manager [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.125 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aca2c0fb-39c1-4075-bf80-8bca0131925c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.127 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.127 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.128 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8accac57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:45 np0005473739 NetworkManager[44949]: <info>  [1759846365.1305] manager: (tap8accac57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/154)
Oct  7 10:12:45 np0005473739 kernel: tap8accac57-a0: entered promiscuous mode
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.133 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8accac57-a0, col_values=(('external_ids', {'iface-id': 'a487ff40-6fa2-404e-b7fc-dbcc968fecc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:45 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:45Z|00303|binding|INFO|Releasing lport a487ff40-6fa2-404e-b7fc-dbcc968fecc3 from this chassis (sb_readonly=0)
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.155 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.156 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bcb04cd4-4d2d-4367-beb6-8d0e75f918e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.157 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:12:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:45.158 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'env', 'PROCESS_TAG=haproxy-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8accac57-ab45-4b9b-95ed-86c2c65f202f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.175 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_requests' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.188 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.201 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'resources' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.219 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.234 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.242 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:12:45 np0005473739 podman[309755]: 2025-10-07 14:12:45.534186859 +0000 UTC m=+0.052654084 container create fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:12:45 np0005473739 systemd[1]: Started libpod-conmon-fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e.scope.
Oct  7 10:12:45 np0005473739 podman[309755]: 2025-10-07 14:12:45.505161637 +0000 UTC m=+0.023628892 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:12:45 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:12:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1da5c37de559ab2f97b903232e4ec93e0e34fbf169fac94209f83b9bd008f0ca/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:12:45 np0005473739 podman[309755]: 2025-10-07 14:12:45.634441495 +0000 UTC m=+0.152908750 container init fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:12:45 np0005473739 podman[309755]: 2025-10-07 14:12:45.640296818 +0000 UTC m=+0.158764043 container start fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:12:45 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [NOTICE]   (309773) : New worker (309775) forked
Oct  7 10:12:45 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [NOTICE]   (309773) : Loading success.
Oct  7 10:12:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1445: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 3.6 MiB/s rd, 4.3 MiB/s wr, 213 op/s
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.717 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846365.7164025, 4fef229d-c42d-43ac-a3ff-527ca68d3796 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.717 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] VM Started (Lifecycle Event)#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.720 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.724 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.728 2 INFO nova.virt.libvirt.driver [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance spawned successfully.#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.729 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.740 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.753 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.757 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.757 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.758 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.758 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.758 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.759 2 DEBUG nova.virt.libvirt.driver [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.784 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.784 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846365.7166505, 4fef229d-c42d-43ac-a3ff-527ca68d3796 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.785 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.849 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.853 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846365.7229667, 4fef229d-c42d-43ac-a3ff-527ca68d3796 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.853 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.859 2 INFO nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Took 8.55 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.860 2 DEBUG nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.877 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.881 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:12:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.936 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:12:45 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.963 2 INFO nova.compute.manager [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Took 9.80 seconds to build instance.#033[00m
Oct  7 10:12:46 np0005473739 nova_compute[259550]: 2025-10-07 14:12:45.978 2 DEBUG oslo_concurrency.lockutils [None req-7ea86ba3-c93a-4390-94f9-e80abdfc340e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.035s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:46 np0005473739 nova_compute[259550]: 2025-10-07 14:12:46.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.175 2 DEBUG nova.compute.manager [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.175 2 DEBUG oslo_concurrency.lockutils [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.175 2 DEBUG oslo_concurrency.lockutils [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.176 2 DEBUG oslo_concurrency.lockutils [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.176 2 DEBUG nova.compute.manager [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.176 2 WARNING nova.compute.manager [req-2dde8f81-bf21-4a4f-a15d-008251a7ec16 req-92b7d278-d643-4cc1-a041-56b3b113dac5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state active and task_state None.#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.279 2 DEBUG oslo_concurrency.lockutils [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.280 2 DEBUG oslo_concurrency.lockutils [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.280 2 DEBUG nova.compute.manager [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.284 2 DEBUG nova.compute.manager [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.284 2 DEBUG nova.objects.instance [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'flavor' on Instance uuid 4fef229d-c42d-43ac-a3ff-527ca68d3796 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.312 2 DEBUG nova.virt.libvirt.driver [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.508 2 DEBUG nova.compute.manager [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.509 2 DEBUG nova.compute.manager [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing instance network info cache due to event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.509 2 DEBUG oslo_concurrency.lockutils [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.509 2 DEBUG oslo_concurrency.lockutils [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:12:47 np0005473739 nova_compute[259550]: 2025-10-07 14:12:47.509 2 DEBUG nova.network.neutron [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:12:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1446: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.7 MiB/s wr, 181 op/s
Oct  7 10:12:48 np0005473739 nova_compute[259550]: 2025-10-07 14:12:48.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:49 np0005473739 nova_compute[259550]: 2025-10-07 14:12:49.050 2 DEBUG nova.network.neutron [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updated VIF entry in instance network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:12:49 np0005473739 nova_compute[259550]: 2025-10-07 14:12:49.051 2 DEBUG nova.network.neutron [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:12:49 np0005473739 nova_compute[259550]: 2025-10-07 14:12:49.085 2 DEBUG oslo_concurrency.lockutils [req-64a3365f-b1d2-4b96-b2b1-2b5c4518d606 req-3f0bf65e-0089-46c7-8d5b-d613c51aeb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:12:49 np0005473739 nova_compute[259550]: 2025-10-07 14:12:49.628 2 DEBUG nova.compute.manager [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:12:49 np0005473739 nova_compute[259550]: 2025-10-07 14:12:49.628 2 DEBUG nova.compute.manager [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing instance network info cache due to event network-changed-130e9c49-53e8-495e-ac38-4d3e63f49011. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:12:49 np0005473739 nova_compute[259550]: 2025-10-07 14:12:49.629 2 DEBUG oslo_concurrency.lockutils [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:12:49 np0005473739 nova_compute[259550]: 2025-10-07 14:12:49.629 2 DEBUG oslo_concurrency.lockutils [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:12:49 np0005473739 nova_compute[259550]: 2025-10-07 14:12:49.629 2 DEBUG nova.network.neutron [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Refreshing network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:12:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1447: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 4.9 MiB/s rd, 3.6 MiB/s wr, 244 op/s
Oct  7 10:12:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:51 np0005473739 nova_compute[259550]: 2025-10-07 14:12:51.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1448: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 1.7 MiB/s wr, 216 op/s
Oct  7 10:12:51 np0005473739 nova_compute[259550]: 2025-10-07 14:12:51.681 2 DEBUG nova.network.neutron [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updated VIF entry in instance network info cache for port 130e9c49-53e8-495e-ac38-4d3e63f49011. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:12:51 np0005473739 nova_compute[259550]: 2025-10-07 14:12:51.682 2 DEBUG nova.network.neutron [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:12:51 np0005473739 nova_compute[259550]: 2025-10-07 14:12:51.697 2 DEBUG oslo_concurrency.lockutils [req-77299da6-b361-4e66-a419-4cef7fcce021 req-62149483-44a8-469e-84a8-b6a7ef1ba498 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:12:53 np0005473739 nova_compute[259550]: 2025-10-07 14:12:53.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1449: 305 pgs: 305 active+clean; 181 MiB data, 486 MiB used, 60 GiB / 60 GiB avail; 5.7 MiB/s rd, 25 KiB/s wr, 211 op/s
Oct  7 10:12:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:53Z|00042|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a0:f3:88 10.100.0.7
Oct  7 10:12:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:53Z|00043|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a0:f3:88 10.100.0.7
Oct  7 10:12:55 np0005473739 nova_compute[259550]: 2025-10-07 14:12:55.294 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:12:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1450: 305 pgs: 305 active+clean; 213 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 6.1 MiB/s rd, 2.1 MiB/s wr, 275 op/s
Oct  7 10:12:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:12:56 np0005473739 nova_compute[259550]: 2025-10-07 14:12:56.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:56 np0005473739 podman[309786]: 2025-10-07 14:12:56.095554101 +0000 UTC m=+0.074335265 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid)
Oct  7 10:12:56 np0005473739 podman[309785]: 2025-10-07 14:12:56.101002205 +0000 UTC m=+0.084422220 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:12:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:56Z|00044|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:73:59 10.100.0.5
Oct  7 10:12:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:56Z|00045|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:73:59 10.100.0.5
Oct  7 10:12:57 np0005473739 nova_compute[259550]: 2025-10-07 14:12:57.355 2 DEBUG nova.virt.libvirt.driver [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:12:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1451: 305 pgs: 305 active+clean; 213 MiB data, 511 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 2.1 MiB/s wr, 162 op/s
Oct  7 10:12:58 np0005473739 kernel: tap75d3896e-b0 (unregistering): left promiscuous mode
Oct  7 10:12:58 np0005473739 NetworkManager[44949]: <info>  [1759846378.2251] device (tap75d3896e-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:12:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:58Z|00304|binding|INFO|Releasing lport 75d3896e-b08f-4485-b4d7-dff914242597 from this chassis (sb_readonly=0)
Oct  7 10:12:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:58Z|00305|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 down in Southbound
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:58Z|00306|binding|INFO|Removing iface tap75d3896e-b0 ovn-installed in OVS
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.248 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f3:88 10.100.0.7'], port_security=['fa:16:3e:a0:f3:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4e86b418-6e7f-4e2e-9146-a847920ed11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=75d3896e-b08f-4485-b4d7-dff914242597) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.250 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 75d3896e-b08f-4485-b4d7-dff914242597 in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.251 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a5d5bd-fa7c-42d3-a54e-ddd74fe7a0a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.255 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace which is not needed anymore#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:58 np0005473739 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct  7 10:12:58 np0005473739 systemd[1]: machine-qemu\x2d47\x2dinstance\x2d0000002a.scope: Consumed 13.308s CPU time.
Oct  7 10:12:58 np0005473739 systemd-machined[214580]: Machine qemu-47-instance-0000002a terminated.
Oct  7 10:12:58 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [NOTICE]   (309241) : haproxy version is 2.8.14-c23fe91
Oct  7 10:12:58 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [NOTICE]   (309241) : path to executable is /usr/sbin/haproxy
Oct  7 10:12:58 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [WARNING]  (309241) : Exiting Master process...
Oct  7 10:12:58 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [WARNING]  (309241) : Exiting Master process...
Oct  7 10:12:58 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [ALERT]    (309241) : Current worker (309243) exited with code 143 (Terminated)
Oct  7 10:12:58 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[309237]: [WARNING]  (309241) : All workers exited. Exiting... (0)
Oct  7 10:12:58 np0005473739 systemd[1]: libpod-e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33.scope: Deactivated successfully.
Oct  7 10:12:58 np0005473739 podman[309844]: 2025-10-07 14:12:58.429445699 +0000 UTC m=+0.057676546 container died e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 10:12:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33-userdata-shm.mount: Deactivated successfully.
Oct  7 10:12:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5fd8bb89be17d12218ba71ac6eeda91b03262ee72f2eb23233a59ffdba88634a-merged.mount: Deactivated successfully.
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:58 np0005473739 podman[309844]: 2025-10-07 14:12:58.485994746 +0000 UTC m=+0.114225583 container cleanup e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.492 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance shutdown successfully after 13 seconds.#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.499 2 INFO nova.virt.libvirt.driver [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance destroyed successfully.#033[00m
Oct  7 10:12:58 np0005473739 systemd[1]: libpod-conmon-e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33.scope: Deactivated successfully.
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.505 2 INFO nova.virt.libvirt.driver [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance destroyed successfully.#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.508 2 DEBUG nova.virt.libvirt.vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-membe
r'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:44Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.509 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.509 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.510 2 DEBUG os_vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.512 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75d3896e-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.521 2 INFO os_vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0')#033[00m
Oct  7 10:12:58 np0005473739 podman[309882]: 2025-10-07 14:12:58.565860046 +0000 UTC m=+0.050688015 container remove e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.573 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf03bfde-e4be-4ab8-ae6b-b6595ac2d596]: (4, ('Tue Oct  7 02:12:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33)\ne4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33\nTue Oct  7 02:12:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (e4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33)\ne4a701617602b68451261d750bfa5006f88a5ca8f489d0588b313dd6ee6c1f33\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.577 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7954e2ea-2a13-412d-b415-93f01412d241]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.579 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:12:58 np0005473739 kernel: tapd2cb8ca0-10: left promiscuous mode
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:58 np0005473739 nova_compute[259550]: 2025-10-07 14:12:58.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.602 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[709244a7-7e0f-4cf4-80ec-c29eb10f0082]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.636 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[526e4d05-997c-490a-9188-7cc7d05edd9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.638 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa952128-dcc0-4cbb-848c-c4215653e030]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.659 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[164c8782-fc65-4ce8-b7fe-70d299de46cd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694165, 'reachable_time': 40405, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 309916, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.663 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:12:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:12:58.663 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc12bc5-a147-4a89-a97d-802285888138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:12:58 np0005473739 systemd[1]: run-netns-ovnmeta\x2dd2cb8ca0\x2d1272\x2d4fa9\x2db4ed\x2d8d0a1e3df777.mount: Deactivated successfully.
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.063 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deleting instance files /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f_del#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.064 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deletion of /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f_del complete#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.186 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.187 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating image(s)#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.212 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.248 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.279 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.287 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:59Z|00046|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4c:78:88 10.100.0.8
Oct  7 10:12:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:12:59Z|00047|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4c:78:88 10.100.0.8
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.411 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.124s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.414 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.415 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.415 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.439 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.444 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:12:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1452: 305 pgs: 305 active+clean; 205 MiB data, 544 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 5.0 MiB/s wr, 261 op/s
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.757 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.313s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.831 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] resizing rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:12:59 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7b69cba8-b224-43b1-95fa-04df2af0c8e6 does not exist
Oct  7 10:12:59 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 124944d5-c723-4cfe-b8e3-5ab9595b6cf1 does not exist
Oct  7 10:12:59 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 909b8cf6-5f90-47da-a60d-3e5dee2a4673 does not exist
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:12:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.924 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.925 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Ensure instance console log exists: /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.925 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.926 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.926 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.930 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start _get_guest_xml network_info=[{"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.936 2 WARNING nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.946 2 DEBUG nova.virt.libvirt.host [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.946 2 DEBUG nova.virt.libvirt.host [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.950 2 DEBUG nova.virt.libvirt.host [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.950 2 DEBUG nova.virt.libvirt.host [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.951 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.951 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.951 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.951 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.952 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.952 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.952 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.952 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.952 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.953 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.953 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.953 2 DEBUG nova.virt.hardware [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.953 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:12:59 np0005473739 nova_compute[259550]: 2025-10-07 14:12:59.970 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.006 2 DEBUG nova.compute.manager [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.006 2 DEBUG oslo_concurrency.lockutils [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.009 2 DEBUG oslo_concurrency.lockutils [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.009 2 DEBUG oslo_concurrency.lockutils [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.009 2 DEBUG nova.compute.manager [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.010 2 WARNING nova.compute.manager [req-41929273-7ea1-42fa-92b6-525946db40d0 req-564aea9c-4215-4dc6-8f39-aed74bbd292e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received unexpected event network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with vm_state active and task_state rebuild_spawning.#033[00m
Oct  7 10:13:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:00.045 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.292 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.293 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.312 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.403 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.404 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.412 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.413 2 INFO nova.compute.claims [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:13:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3359051724' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.438 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.438 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.451 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.478 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.484 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.540 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:13:00 np0005473739 podman[310393]: 2025-10-07 14:13:00.544072924 +0000 UTC m=+0.046861653 container create 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 10:13:00 np0005473739 systemd[1]: Started libpod-conmon-43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438.scope.
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.613 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:00 np0005473739 podman[310393]: 2025-10-07 14:13:00.520463734 +0000 UTC m=+0.023252493 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:13:00 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:13:00 np0005473739 podman[310393]: 2025-10-07 14:13:00.665462944 +0000 UTC m=+0.168251693 container init 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 10:13:00 np0005473739 podman[310393]: 2025-10-07 14:13:00.676124155 +0000 UTC m=+0.178912884 container start 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 10:13:00 np0005473739 quizzical_noether[310410]: 167 167
Oct  7 10:13:00 np0005473739 systemd[1]: libpod-43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438.scope: Deactivated successfully.
Oct  7 10:13:00 np0005473739 conmon[310410]: conmon 43352a835a78b233acad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438.scope/container/memory.events
Oct  7 10:13:00 np0005473739 podman[310393]: 2025-10-07 14:13:00.68427274 +0000 UTC m=+0.187061469 container attach 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 10:13:00 np0005473739 podman[310393]: 2025-10-07 14:13:00.685042919 +0000 UTC m=+0.187831648 container died 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.697 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-597113cf9638aa70855daf7d775d33103ba228e1ed19fb47cc0f4cfc05b82b54-merged.mount: Deactivated successfully.
Oct  7 10:13:00 np0005473739 podman[310393]: 2025-10-07 14:13:00.732205579 +0000 UTC m=+0.234994308 container remove 43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_noether, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 10:13:00 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:13:00 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:13:00 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:13:00 np0005473739 systemd[1]: libpod-conmon-43352a835a78b233acad6f9be8070ea2d938fa853ab36c31998b2aa471766438.scope: Deactivated successfully.
Oct  7 10:13:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:00 np0005473739 podman[310472]: 2025-10-07 14:13:00.931739814 +0000 UTC m=+0.048024234 container create 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:13:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/657947331' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.983 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.988 2 DEBUG nova.virt.libvirt.vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:59Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.988 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.990 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:00 np0005473739 systemd[1]: Started libpod-conmon-1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5.scope.
Oct  7 10:13:00 np0005473739 nova_compute[259550]: 2025-10-07 14:13:00.995 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  <uuid>4e86b418-6e7f-4e2e-9146-a847920ed11f</uuid>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  <name>instance-0000002a</name>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-1086034539</nova:name>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:12:59</nova:creationTime>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <nova:user uuid="7bf568df6a8d461a83d287493b393589">tempest-ServerDiskConfigTestJSON-831175870-project-member</nova:user>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <nova:project uuid="de6794f6448744329cf2081eb5b889a5">tempest-ServerDiskConfigTestJSON-831175870</nova:project>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <nova:port uuid="75d3896e-b08f-4485-b4d7-dff914242597">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <entry name="serial">4e86b418-6e7f-4e2e-9146-a847920ed11f</entry>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <entry name="uuid">4e86b418-6e7f-4e2e-9146-a847920ed11f</entry>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4e86b418-6e7f-4e2e-9146-a847920ed11f_disk">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:a0:f3:88"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <target dev="tap75d3896e-b0"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/console.log" append="off"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:13:00 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:13:01 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:13:01 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:13:01 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:13:01 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:13:01 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.001 2 DEBUG nova.compute.manager [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Preparing to wait for external event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.002 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.002 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.003 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.004 2 DEBUG nova.virt.libvirt.vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:40Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:12:59Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:13:01 np0005473739 podman[310472]: 2025-10-07 14:13:00.911098122 +0000 UTC m=+0.027382542 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.004 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.006 2 DEBUG nova.network.os_vif_util [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.007 2 DEBUG os_vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.010 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap75d3896e-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap75d3896e-b0, col_values=(('external_ids', {'iface-id': '75d3896e-b08f-4485-b4d7-dff914242597', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a0:f3:88', 'vm-uuid': '4e86b418-6e7f-4e2e-9146-a847920ed11f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:01 np0005473739 NetworkManager[44949]: <info>  [1759846381.0187] manager: (tap75d3896e-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/155)
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.032 2 INFO os_vif [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0')#033[00m
Oct  7 10:13:01 np0005473739 podman[310472]: 2025-10-07 14:13:01.05447197 +0000 UTC m=+0.170756420 container init 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 10:13:01 np0005473739 podman[310472]: 2025-10-07 14:13:01.065026928 +0000 UTC m=+0.181311348 container start 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 10:13:01 np0005473739 podman[310472]: 2025-10-07 14:13:01.06966627 +0000 UTC m=+0.185950720 container attach 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.092 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.093 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.093 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No VIF found with MAC fa:16:3e:a0:f3:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.093 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Using config drive#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.118 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.137 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.165 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'keypairs' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/534977440' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.202 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.213 2 DEBUG nova.compute.provider_tree [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.228 2 DEBUG nova.scheduler.client.report [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.248 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.843s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.248 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.250 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.259 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.259 2 INFO nova.compute.claims [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.305 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.305 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.326 2 INFO nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.344 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.435 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.439 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.440 2 INFO nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Creating image(s)#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.462 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.489 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.513 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.518 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.555 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.595 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.598 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.599 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.599 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.623 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.627 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f563ffb7-1ade-4b71-ab68-115322eef141_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1453: 305 pgs: 305 active+clean; 211 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 6.7 MiB/s wr, 249 op/s
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.851 2 DEBUG nova.policy [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '3ba82e1a9f12417391d78758ae9bb83c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1f5ee4e560ed4660a6685a086282a370', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.912 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Creating config drive at /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config#033[00m
Oct  7 10:13:01 np0005473739 nova_compute[259550]: 2025-10-07 14:13:01.919 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx2ak3g_o execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.029 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f563ffb7-1ade-4b71-ab68-115322eef141_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.401s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3594847103' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.090 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.091 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx2ak3g_o" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.117 2 DEBUG nova.storage.rbd_utils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.122 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:02 np0005473739 zen_cerf[310490]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.162 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] resizing rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:13:02 np0005473739 zen_cerf[310490]: --> relative data size: 1.0
Oct  7 10:13:02 np0005473739 zen_cerf[310490]: --> All data devices are unavailable
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.199 2 DEBUG nova.compute.manager [req-bcf945f9-52b1-461f-8d9d-bdc13df2e9eb req-de33f2dc-1af6-4344-b4b8-0d7dedb65833 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:02 np0005473739 systemd[1]: libpod-1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5.scope: Deactivated successfully.
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.199 2 DEBUG oslo_concurrency.lockutils [req-bcf945f9-52b1-461f-8d9d-bdc13df2e9eb req-de33f2dc-1af6-4344-b4b8-0d7dedb65833 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.200 2 DEBUG oslo_concurrency.lockutils [req-bcf945f9-52b1-461f-8d9d-bdc13df2e9eb req-de33f2dc-1af6-4344-b4b8-0d7dedb65833 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.200 2 DEBUG oslo_concurrency.lockutils [req-bcf945f9-52b1-461f-8d9d-bdc13df2e9eb req-de33f2dc-1af6-4344-b4b8-0d7dedb65833 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:02 np0005473739 systemd[1]: libpod-1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5.scope: Consumed 1.049s CPU time.
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.200 2 DEBUG nova.compute.manager [req-bcf945f9-52b1-461f-8d9d-bdc13df2e9eb req-de33f2dc-1af6-4344-b4b8-0d7dedb65833 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Processing event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.204 2 DEBUG nova.compute.provider_tree [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.221 2 DEBUG nova.scheduler.client.report [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:13:02 np0005473739 podman[310743]: 2025-10-07 14:13:02.252490301 +0000 UTC m=+0.035039372 container died 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.274 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.275 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.288 2 DEBUG nova.objects.instance [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'migration_context' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-001cf69cb98e8d4c16b1d3ad2e60e6a93a732c7abbef350900745ef149b08716-merged.mount: Deactivated successfully.
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.302 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.302 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Ensure instance console log exists: /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.303 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.303 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.303 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.305 2 DEBUG oslo_concurrency.processutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config 4e86b418-6e7f-4e2e-9146-a847920ed11f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.183s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.306 2 INFO nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deleting local config drive /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f/disk.config because it was imported into RBD.#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.325 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.325 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:13:02 np0005473739 podman[310743]: 2025-10-07 14:13:02.34418385 +0000 UTC m=+0.126732901 container remove 1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_cerf, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.343 2 INFO nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:13:02 np0005473739 systemd[1]: libpod-conmon-1ad12952a43de1419be5378860ab961c41d7809b6f58a26da3feda490b29a2f5.scope: Deactivated successfully.
Oct  7 10:13:02 np0005473739 kernel: tap75d3896e-b0: entered promiscuous mode
Oct  7 10:13:02 np0005473739 NetworkManager[44949]: <info>  [1759846382.3663] manager: (tap75d3896e-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/156)
Oct  7 10:13:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:02Z|00307|binding|INFO|Claiming lport 75d3896e-b08f-4485-b4d7-dff914242597 for this chassis.
Oct  7 10:13:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:02Z|00308|binding|INFO|75d3896e-b08f-4485-b4d7-dff914242597: Claiming fa:16:3e:a0:f3:88 10.100.0.7
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.363 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.379 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f3:88 10.100.0.7'], port_security=['fa:16:3e:a0:f3:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4e86b418-6e7f-4e2e-9146-a847920ed11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '5', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=75d3896e-b08f-4485-b4d7-dff914242597) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.380 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 75d3896e-b08f-4485-b4d7-dff914242597 in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 bound to our chassis#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.382 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777#033[00m
Oct  7 10:13:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:02Z|00309|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 ovn-installed in OVS
Oct  7 10:13:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:02Z|00310|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 up in Southbound
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.394 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[604aa7d3-49e0-44cf-94f1-91904cfc5846]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.395 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2cb8ca0-11 in ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.397 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2cb8ca0-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.397 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7cf4ef29-2715-449e-a751-1d5232f13bc7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.400 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[79024905-b615-4739-aa6c-1b1741060fe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 systemd-machined[214580]: New machine qemu-50-instance-0000002a.
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.420 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b4135240-2667-4a90-a4b5-218d1e699c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 systemd[1]: Started Virtual Machine qemu-50-instance-0000002a.
Oct  7 10:13:02 np0005473739 systemd-udevd[310819]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.446 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[24333de2-8fe0-4a1c-acc2-f1a336d0649b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 NetworkManager[44949]: <info>  [1759846382.4646] device (tap75d3896e-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:13:02 np0005473739 NetworkManager[44949]: <info>  [1759846382.4657] device (tap75d3896e-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.474 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.475 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.476 2 INFO nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Creating image(s)#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.475 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4eb5f8-1eab-4db1-ab58-377b78407c1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 systemd-udevd[310831]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:13:02 np0005473739 NetworkManager[44949]: <info>  [1759846382.4884] manager: (tapd2cb8ca0-10): new Veth device (/org/freedesktop/NetworkManager/Devices/157)
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.487 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[46ef57e0-b4ff-42b3-9230-7ef019928e93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.537 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.539 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1c13b0-af8d-45f1-aa40-7562d15e2358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.543 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3656f1b8-72a1-4575-a464-de31cb1c2a31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 NetworkManager[44949]: <info>  [1759846382.5773] device (tapd2cb8ca0-10): carrier: link connected
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.578 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.585 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[39e950f7-b167-43e2-a1f4-65fcbbc63b74]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.603 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95acf9d1-2cff-45fc-b915-9b45d11dbb82]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696614, 'reachable_time': 38955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 310931, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[34c51176-235b-4234-b87b-43ece3d67019]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:eb7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 696614, 'tstamp': 696614}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 310957, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.623 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.634 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f166128f-3e7e-4561-a31f-8dac3de4b1af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 102], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696614, 'reachable_time': 38955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 310962, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.674 2 DEBUG nova.policy [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.681 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Successfully created port: 4f1564c5-865b-45e8-afe1-7f7c3748c854 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.684 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.685 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d5e848-c35d-4747-89dd-bae662c75b70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.725 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.726 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.727 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.727 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.755 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:02 np0005473739 kernel: tapd2cb8ca0-10: entered promiscuous mode
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.765 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9a53be-32b7-4664-b179-081ac930919f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.767 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.767 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.767 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2cb8ca0-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:02 np0005473739 NetworkManager[44949]: <info>  [1759846382.7704] manager: (tapd2cb8ca0-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/158)
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.772 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2cb8ca0-10, col_values=(('external_ids', {'iface-id': '93001468-74a0-4bac-94dd-0978737be6e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:02Z|00311|binding|INFO|Releasing lport 93001468-74a0-4bac-94dd-0978737be6e2 from this chassis (sb_readonly=0)
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.775 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.794 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.795 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b5e7e2aa-4d70-419b-9c80-011acff661f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.796 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:13:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:02.797 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'env', 'PROCESS_TAG=haproxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:13:02 np0005473739 nova_compute[259550]: 2025-10-07 14:13:02.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:03 np0005473739 podman[311116]: 2025-10-07 14:13:03.089513582 +0000 UTC m=+0.046319339 container create 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True)
Oct  7 10:13:03 np0005473739 systemd[1]: Started libpod-conmon-23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c.scope.
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.128 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.353s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:13:03 np0005473739 podman[311116]: 2025-10-07 14:13:03.067322569 +0000 UTC m=+0.024128346 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:13:03 np0005473739 podman[311116]: 2025-10-07 14:13:03.168895808 +0000 UTC m=+0.125701575 container init 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:13:03 np0005473739 podman[311116]: 2025-10-07 14:13:03.178581273 +0000 UTC m=+0.135387030 container start 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:13:03 np0005473739 podman[311116]: 2025-10-07 14:13:03.18304048 +0000 UTC m=+0.139846237 container attach 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:13:03 np0005473739 happy_cerf[311146]: 167 167
Oct  7 10:13:03 np0005473739 systemd[1]: libpod-23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c.scope: Deactivated successfully.
Oct  7 10:13:03 np0005473739 conmon[311146]: conmon 23016269311c43f82bb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c.scope/container/memory.events
Oct  7 10:13:03 np0005473739 podman[311116]: 2025-10-07 14:13:03.189290005 +0000 UTC m=+0.146095762 container died 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.205 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] resizing rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:13:03 np0005473739 systemd[1]: var-lib-containers-storage-overlay-32fd851fe58f29bf7b02f6bc60dc220c420964c1f270a7ab9ad59c64c3e0320b-merged.mount: Deactivated successfully.
Oct  7 10:13:03 np0005473739 podman[311116]: 2025-10-07 14:13:03.23019702 +0000 UTC m=+0.187002777 container remove 23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_cerf, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:13:03 np0005473739 systemd[1]: libpod-conmon-23016269311c43f82bb371a4541cb0dd059cd8ea31a7444e47bdd0c52c67852c.scope: Deactivated successfully.
Oct  7 10:13:03 np0005473739 podman[311170]: 2025-10-07 14:13:03.251475129 +0000 UTC m=+0.076095971 container create 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:13:03 np0005473739 systemd[1]: Started libpod-conmon-8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f.scope.
Oct  7 10:13:03 np0005473739 podman[311170]: 2025-10-07 14:13:03.211385606 +0000 UTC m=+0.036006468 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:13:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:13:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4717fac25a34c640330119c1bfd29f795e79bb034362e253f351ada395c9189e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.352 2 DEBUG nova.objects.instance [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'migration_context' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:03 np0005473739 podman[311170]: 2025-10-07 14:13:03.354731953 +0000 UTC m=+0.179352795 container init 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:13:03 np0005473739 podman[311170]: 2025-10-07 14:13:03.360728391 +0000 UTC m=+0.185349233 container start 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.376 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.377 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Ensure instance console log exists: /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.378 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.378 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.378 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:03 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [NOTICE]   (311263) : New worker (311271) forked
Oct  7 10:13:03 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [NOTICE]   (311263) : Loading success.
Oct  7 10:13:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:03.426 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:13:03 np0005473739 podman[311269]: 2025-10-07 14:13:03.444422671 +0000 UTC m=+0.051293870 container create d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:13:03 np0005473739 systemd[1]: Started libpod-conmon-d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046.scope.
Oct  7 10:13:03 np0005473739 podman[311269]: 2025-10-07 14:13:03.421381586 +0000 UTC m=+0.028252815 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:13:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:13:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788668318182723d67a9d81f36a00edb803297b7ba4578642ec1b5c0d8f35c1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788668318182723d67a9d81f36a00edb803297b7ba4578642ec1b5c0d8f35c1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788668318182723d67a9d81f36a00edb803297b7ba4578642ec1b5c0d8f35c1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c788668318182723d67a9d81f36a00edb803297b7ba4578642ec1b5c0d8f35c1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.555 2 DEBUG nova.compute.manager [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.556 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 4e86b418-6e7f-4e2e-9146-a847920ed11f due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.557 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846383.5547528, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.557 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Started (Lifecycle Event)#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.566 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:13:03 np0005473739 podman[311269]: 2025-10-07 14:13:03.572123037 +0000 UTC m=+0.178994326 container init d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.574 2 INFO nova.virt.libvirt.driver [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance spawned successfully.#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.574 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.578 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.582 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:03 np0005473739 podman[311269]: 2025-10-07 14:13:03.583496116 +0000 UTC m=+0.190367315 container start d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 10:13:03 np0005473739 podman[311269]: 2025-10-07 14:13:03.587605384 +0000 UTC m=+0.194476633 container attach d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.588 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Successfully created port: f208f539-2cf1-4007-8806-5b4836d43c4f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.594 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.595 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.595 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.595 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.596 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.596 2 DEBUG nova.virt.libvirt.driver [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.601 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.601 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846383.5551705, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.602 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.633 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.637 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846383.5637846, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.637 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.662 2 DEBUG nova.compute.manager [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.663 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.669 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1454: 305 pgs: 305 active+clean; 211 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 985 KiB/s rd, 6.7 MiB/s wr, 219 op/s
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.681 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Successfully updated port: 4f1564c5-865b-45e8-afe1-7f7c3748c854 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.710 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.710 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquired lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.710 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.712 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.750 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.752 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.752 2 DEBUG nova.objects.instance [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.761 2 DEBUG nova.compute.manager [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.761 2 DEBUG nova.compute.manager [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing instance network info cache due to event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.761 2 DEBUG oslo_concurrency.lockutils [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.813 2 DEBUG oslo_concurrency.lockutils [None req-206a25b3-a184-4222-9bf2-2e8dbba4efe3 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:03 np0005473739 nova_compute[259550]: 2025-10-07 14:13:03.849 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.230 2 DEBUG nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.230 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.230 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.231 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.231 2 DEBUG nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.231 2 WARNING nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received unexpected event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.231 2 DEBUG nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.231 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.231 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.232 2 DEBUG oslo_concurrency.lockutils [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.232 2 DEBUG nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:04 np0005473739 nova_compute[259550]: 2025-10-07 14:13:04.232 2 WARNING nova.compute.manager [req-47cb2a56-53a2-42ef-9b4a-af3238c2f8c8 req-7b0fcc9d-2af7-4ec2-a084-edfe0230e29d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received unexpected event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:13:04 np0005473739 angry_golick[311294]: {
Oct  7 10:13:04 np0005473739 angry_golick[311294]:    "0": [
Oct  7 10:13:04 np0005473739 angry_golick[311294]:        {
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "devices": [
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "/dev/loop3"
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            ],
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_name": "ceph_lv0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_size": "21470642176",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "name": "ceph_lv0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "tags": {
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.cluster_name": "ceph",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.crush_device_class": "",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.encrypted": "0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.osd_id": "0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.type": "block",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.vdo": "0"
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            },
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "type": "block",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "vg_name": "ceph_vg0"
Oct  7 10:13:04 np0005473739 angry_golick[311294]:        }
Oct  7 10:13:04 np0005473739 angry_golick[311294]:    ],
Oct  7 10:13:04 np0005473739 angry_golick[311294]:    "1": [
Oct  7 10:13:04 np0005473739 angry_golick[311294]:        {
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "devices": [
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "/dev/loop4"
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            ],
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_name": "ceph_lv1",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_size": "21470642176",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "name": "ceph_lv1",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "tags": {
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.cluster_name": "ceph",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.crush_device_class": "",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.encrypted": "0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.osd_id": "1",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.type": "block",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.vdo": "0"
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            },
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "type": "block",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "vg_name": "ceph_vg1"
Oct  7 10:13:04 np0005473739 angry_golick[311294]:        }
Oct  7 10:13:04 np0005473739 angry_golick[311294]:    ],
Oct  7 10:13:04 np0005473739 angry_golick[311294]:    "2": [
Oct  7 10:13:04 np0005473739 angry_golick[311294]:        {
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "devices": [
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "/dev/loop5"
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            ],
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_name": "ceph_lv2",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_size": "21470642176",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "name": "ceph_lv2",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "tags": {
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.cluster_name": "ceph",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.crush_device_class": "",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.encrypted": "0",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.osd_id": "2",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.type": "block",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:                "ceph.vdo": "0"
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            },
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "type": "block",
Oct  7 10:13:04 np0005473739 angry_golick[311294]:            "vg_name": "ceph_vg2"
Oct  7 10:13:04 np0005473739 angry_golick[311294]:        }
Oct  7 10:13:04 np0005473739 angry_golick[311294]:    ]
Oct  7 10:13:04 np0005473739 angry_golick[311294]: }
Oct  7 10:13:04 np0005473739 systemd[1]: libpod-d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046.scope: Deactivated successfully.
Oct  7 10:13:04 np0005473739 podman[311269]: 2025-10-07 14:13:04.459664527 +0000 UTC m=+1.066535726 container died d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 10:13:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c788668318182723d67a9d81f36a00edb803297b7ba4578642ec1b5c0d8f35c1-merged.mount: Deactivated successfully.
Oct  7 10:13:04 np0005473739 podman[311269]: 2025-10-07 14:13:04.534460164 +0000 UTC m=+1.141331363 container remove d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_golick, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 10:13:04 np0005473739 systemd[1]: libpod-conmon-d85a5d033ac350fe7edcf3037b2f41f79267502ec3d059c96d48cb90c79f3046.scope: Deactivated successfully.
Oct  7 10:13:05 np0005473739 podman[311456]: 2025-10-07 14:13:05.233767705 +0000 UTC m=+0.053495957 container create fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:13:05 np0005473739 systemd[1]: Started libpod-conmon-fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a.scope.
Oct  7 10:13:05 np0005473739 podman[311456]: 2025-10-07 14:13:05.211820038 +0000 UTC m=+0.031548310 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:13:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:13:05 np0005473739 podman[311456]: 2025-10-07 14:13:05.339787562 +0000 UTC m=+0.159515844 container init fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 10:13:05 np0005473739 podman[311456]: 2025-10-07 14:13:05.350701088 +0000 UTC m=+0.170429340 container start fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:13:05 np0005473739 podman[311456]: 2025-10-07 14:13:05.354452517 +0000 UTC m=+0.174180769 container attach fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 10:13:05 np0005473739 ecstatic_wescoff[311472]: 167 167
Oct  7 10:13:05 np0005473739 systemd[1]: libpod-fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a.scope: Deactivated successfully.
Oct  7 10:13:05 np0005473739 podman[311456]: 2025-10-07 14:13:05.359786917 +0000 UTC m=+0.179515169 container died fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 10:13:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0260dac6d8e0633de804a1b2c6476edbbeb14ccbb1d01272924ce9407e983279-merged.mount: Deactivated successfully.
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.396 2 DEBUG nova.network.neutron [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:05 np0005473739 podman[311456]: 2025-10-07 14:13:05.405410527 +0000 UTC m=+0.225138779 container remove fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_wescoff, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.416 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Releasing lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.416 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance network_info: |[{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.417 2 DEBUG oslo_concurrency.lockutils [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.417 2 DEBUG nova.network.neutron [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.419 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start _get_guest_xml network_info=[{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.432 2 WARNING nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:13:05 np0005473739 systemd[1]: libpod-conmon-fc10382d16642b1c891a25ed8b6f7151b6c7a5a3aeb81997cd8b990a019d310a.scope: Deactivated successfully.
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.438 2 DEBUG nova.virt.libvirt.host [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.439 2 DEBUG nova.virt.libvirt.host [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.448 2 DEBUG nova.virt.libvirt.host [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.449 2 DEBUG nova.virt.libvirt.host [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.449 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.449 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.450 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.450 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.450 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.451 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.451 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.451 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.451 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.451 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.452 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.452 2 DEBUG nova.virt.hardware [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.455 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:05 np0005473739 podman[311498]: 2025-10-07 14:13:05.639658624 +0000 UTC m=+0.065055262 container create 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:13:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1455: 305 pgs: 305 active+clean; 339 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 12 MiB/s wr, 322 op/s
Oct  7 10:13:05 np0005473739 systemd[1]: Started libpod-conmon-54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912.scope.
Oct  7 10:13:05 np0005473739 podman[311498]: 2025-10-07 14:13:05.612199772 +0000 UTC m=+0.037596450 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:13:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:13:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df00a8a2997698fb3066cc6be3b52439261d5ded822e07831e326585286c066/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df00a8a2997698fb3066cc6be3b52439261d5ded822e07831e326585286c066/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df00a8a2997698fb3066cc6be3b52439261d5ded822e07831e326585286c066/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7df00a8a2997698fb3066cc6be3b52439261d5ded822e07831e326585286c066/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:05 np0005473739 podman[311498]: 2025-10-07 14:13:05.762202505 +0000 UTC m=+0.187599173 container init 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:13:05 np0005473739 podman[311498]: 2025-10-07 14:13:05.771466678 +0000 UTC m=+0.196863326 container start 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:13:05 np0005473739 podman[311498]: 2025-10-07 14:13:05.775463924 +0000 UTC m=+0.200860562 container attach 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.848 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Successfully updated port: f208f539-2cf1-4007-8806-5b4836d43c4f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.864 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.865 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.865 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:13:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467823024' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.951 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.975 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:05 np0005473739 nova_compute[259550]: 2025-10-07 14:13:05.980 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.014 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.207 2 DEBUG nova.compute.manager [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-changed-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.208 2 DEBUG nova.compute.manager [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing instance network info cache due to event network-changed-f208f539-2cf1-4007-8806-5b4836d43c4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.208 2 DEBUG oslo_concurrency.lockutils [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.357 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.358 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.358 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.358 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.358 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.360 2 INFO nova.compute.manager [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Terminating instance#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.361 2 DEBUG nova.compute.manager [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:13:06 np0005473739 kernel: tap75d3896e-b0 (unregistering): left promiscuous mode
Oct  7 10:13:06 np0005473739 NetworkManager[44949]: <info>  [1759846386.4105] device (tap75d3896e-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:13:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2094748141' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:06Z|00312|binding|INFO|Releasing lport 75d3896e-b08f-4485-b4d7-dff914242597 from this chassis (sb_readonly=0)
Oct  7 10:13:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:06Z|00313|binding|INFO|Setting lport 75d3896e-b08f-4485-b4d7-dff914242597 down in Southbound
Oct  7 10:13:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:06Z|00314|binding|INFO|Removing iface tap75d3896e-b0 ovn-installed in OVS
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.488 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a0:f3:88 10.100.0.7'], port_security=['fa:16:3e:a0:f3:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '4e86b418-6e7f-4e2e-9146-a847920ed11f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '6', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=75d3896e-b08f-4485-b4d7-dff914242597) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.490 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 75d3896e-b08f-4485-b4d7-dff914242597 in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis#033[00m
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.492 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.494 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2a8032fa-ff01-4277-b79a-9b347a853779]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.496 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace which is not needed anymore#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.508 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.510 2 DEBUG nova.virt.libvirt.vif [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:01Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.510 2 DEBUG nova.network.os_vif_util [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.511 2 DEBUG nova.network.os_vif_util [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.513 2 DEBUG nova.objects.instance [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'pci_devices' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.528 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  <uuid>f563ffb7-1ade-4b71-ab68-115322eef141</uuid>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  <name>instance-0000002d</name>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1638787156</nova:name>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:13:05</nova:creationTime>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <nova:user uuid="3ba82e1a9f12417391d78758ae9bb83c">tempest-SecurityGroupsTestJSON-1673626413-project-member</nova:user>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <nova:project uuid="1f5ee4e560ed4660a6685a086282a370">tempest-SecurityGroupsTestJSON-1673626413</nova:project>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <nova:port uuid="4f1564c5-865b-45e8-afe1-7f7c3748c854">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <entry name="serial">f563ffb7-1ade-4b71-ab68-115322eef141</entry>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <entry name="uuid">f563ffb7-1ade-4b71-ab68-115322eef141</entry>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f563ffb7-1ade-4b71-ab68-115322eef141_disk">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f563ffb7-1ade-4b71-ab68-115322eef141_disk.config">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:6d:76:5c"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <target dev="tap4f1564c5-86"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/console.log" append="off"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:13:06 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:13:06 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:13:06 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:13:06 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.529 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Preparing to wait for external event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.529 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.529 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.529 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.530 2 DEBUG nova.virt.libvirt.vif [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:01Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.531 2 DEBUG nova.network.os_vif_util [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.532 2 DEBUG nova.network.os_vif_util [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.532 2 DEBUG os_vif [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:06 np0005473739 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002a.scope: Deactivated successfully.
Oct  7 10:13:06 np0005473739 systemd[1]: machine-qemu\x2d50\x2dinstance\x2d0000002a.scope: Consumed 3.731s CPU time.
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f1564c5-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:06 np0005473739 systemd-machined[214580]: Machine qemu-50-instance-0000002a terminated.
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.539 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f1564c5-86, col_values=(('external_ids', {'iface-id': '4f1564c5-865b-45e8-afe1-7f7c3748c854', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:76:5c', 'vm-uuid': 'f563ffb7-1ade-4b71-ab68-115322eef141'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 NetworkManager[44949]: <info>  [1759846386.5432] manager: (tap4f1564c5-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/159)
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.555 2 INFO os_vif [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86')#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.605 2 INFO nova.virt.libvirt.driver [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Instance destroyed successfully.#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.606 2 DEBUG nova.objects.instance [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'resources' on Instance uuid 4e86b418-6e7f-4e2e-9146-a847920ed11f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.619 2 DEBUG nova.virt.libvirt.vif [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:12:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-1086034539',display_name='tempest-ServerDiskConfigTestJSON-server-1086034539',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-1086034539',id=42,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:03Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-flb5tbhr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:03Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=4e86b418-6e7f-4e2e-9146-a847920ed11f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.620 2 DEBUG nova.network.os_vif_util [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "75d3896e-b08f-4485-b4d7-dff914242597", "address": "fa:16:3e:a0:f3:88", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap75d3896e-b0", "ovs_interfaceid": "75d3896e-b08f-4485-b4d7-dff914242597", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.620 2 DEBUG nova.network.os_vif_util [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.620 2 DEBUG os_vif [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.626 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap75d3896e-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.637 2 INFO os_vif [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a0:f3:88,bridge_name='br-int',has_traffic_filtering=True,id=75d3896e-b08f-4485-b4d7-dff914242597,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap75d3896e-b0')#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.655 2 DEBUG nova.network.neutron [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.660 2 DEBUG nova.compute.manager [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.661 2 DEBUG oslo_concurrency.lockutils [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.661 2 DEBUG oslo_concurrency.lockutils [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.661 2 DEBUG oslo_concurrency.lockutils [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.661 2 DEBUG nova.compute.manager [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.662 2 DEBUG nova.compute.manager [req-6fc89d12-dfe7-4d21-ad54-13c45fcb50e1 req-db0511df-140d-4c8a-bdc9-342ad2a05e9c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-unplugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.666 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.667 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.667 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] No VIF found with MAC fa:16:3e:6d:76:5c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.667 2 INFO nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Using config drive#033[00m
Oct  7 10:13:06 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [NOTICE]   (311263) : haproxy version is 2.8.14-c23fe91
Oct  7 10:13:06 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [NOTICE]   (311263) : path to executable is /usr/sbin/haproxy
Oct  7 10:13:06 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [WARNING]  (311263) : Exiting Master process...
Oct  7 10:13:06 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [ALERT]    (311263) : Current worker (311271) exited with code 143 (Terminated)
Oct  7 10:13:06 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[311240]: [WARNING]  (311263) : All workers exited. Exiting... (0)
Oct  7 10:13:06 np0005473739 systemd[1]: libpod-8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f.scope: Deactivated successfully.
Oct  7 10:13:06 np0005473739 podman[311630]: 2025-10-07 14:13:06.699662287 +0000 UTC m=+0.059017352 container died 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.702 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.711 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.712 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Instance network_info: |[{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.717 2 DEBUG oslo_concurrency.lockutils [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.718 2 DEBUG nova.network.neutron [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing network info cache for port f208f539-2cf1-4007-8806-5b4836d43c4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.723 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Start _get_guest_xml network_info=[{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.737 2 DEBUG nova.network.neutron [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updated VIF entry in instance network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.738 2 DEBUG nova.network.neutron [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f-userdata-shm.mount: Deactivated successfully.
Oct  7 10:13:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4717fac25a34c640330119c1bfd29f795e79bb034362e253f351ada395c9189e-merged.mount: Deactivated successfully.
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.756 2 DEBUG oslo_concurrency.lockutils [req-2bfd5414-43b0-4690-9d2c-dfb9368a00d3 req-b8a72dcb-6341-4369-ab56-0f69a8e414fd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.762 2 WARNING nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.768 2 DEBUG nova.virt.libvirt.host [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.768 2 DEBUG nova.virt.libvirt.host [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.772 2 DEBUG nova.virt.libvirt.host [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.773 2 DEBUG nova.virt.libvirt.host [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.773 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.773 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.774 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.774 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.774 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.774 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.774 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.775 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.775 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.775 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.775 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.775 2 DEBUG nova.virt.hardware [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.778 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:06 np0005473739 podman[311630]: 2025-10-07 14:13:06.789633081 +0000 UTC m=+0.148988146 container cleanup 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:13:06 np0005473739 systemd[1]: libpod-conmon-8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f.scope: Deactivated successfully.
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]: {
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "osd_id": 2,
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "type": "bluestore"
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:    },
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "osd_id": 1,
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "type": "bluestore"
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:    },
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "osd_id": 0,
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:        "type": "bluestore"
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]:    }
Oct  7 10:13:06 np0005473739 quizzical_keldysh[311533]: }
Oct  7 10:13:06 np0005473739 podman[311709]: 2025-10-07 14:13:06.876525345 +0000 UTC m=+0.056769282 container remove 8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.887 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1b567cdb-6e11-45f2-9944-754def1ca74d]: (4, ('Tue Oct  7 02:13:06 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f)\n8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f\nTue Oct  7 02:13:06 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f)\n8cb151d82287165caad3c4282a10be554a2e64b775e8a4670616393f8ebbe57f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.890 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6863cc61-c793-4fa2-b600-b0f44876cbe5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.892 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:06 np0005473739 kernel: tapd2cb8ca0-10: left promiscuous mode
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 systemd[1]: libpod-54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912.scope: Deactivated successfully.
Oct  7 10:13:06 np0005473739 systemd[1]: libpod-54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912.scope: Consumed 1.076s CPU time.
Oct  7 10:13:06 np0005473739 podman[311498]: 2025-10-07 14:13:06.917078721 +0000 UTC m=+1.342475399 container died 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.925 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.927 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d679649a-13da-47ab-b511-5f5f0779a7c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7df00a8a2997698fb3066cc6be3b52439261d5ded822e07831e326585286c066-merged.mount: Deactivated successfully.
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.958 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f385d32-0d7e-42f4-b998-9aab082032cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.961 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f9f1ba7-754d-4ce6-b8b3-9b3449bcf227]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.979 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[33c3a5c0-4750-48f3-8a86-0646869d2585]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 696603, 'reachable_time': 35160, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311764, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:06 np0005473739 systemd[1]: run-netns-ovnmeta\x2dd2cb8ca0\x2d1272\x2d4fa9\x2db4ed\x2d8d0a1e3df777.mount: Deactivated successfully.
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.988 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:13:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:06.988 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[36397058-2157-4d8c-aea1-43d3c1c631f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:06 np0005473739 podman[311498]: 2025-10-07 14:13:06.991425245 +0000 UTC m=+1.416821893 container remove 54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_keldysh, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.992 2 INFO nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Creating config drive at /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config#033[00m
Oct  7 10:13:06 np0005473739 nova_compute[259550]: 2025-10-07 14:13:06.999 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzcn7idot execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:07 np0005473739 systemd[1]: libpod-conmon-54aca9c5daf03ef5317b4391a6c1ab9ab1afcea1a44293ccc6e47f2dccc47912.scope: Deactivated successfully.
Oct  7 10:13:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:13:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:13:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:13:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:13:07 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 146635f0-3f06-409c-b19b-9c06b8ceeda6 does not exist
Oct  7 10:13:07 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 634f47c4-ba0c-42cf-b799-9c318d6238cc does not exist
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.154 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzcn7idot" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.189 2 DEBUG nova.storage.rbd_utils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] rbd image f563ffb7-1ade-4b71-ab68-115322eef141_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.194 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config f563ffb7-1ade-4b71-ab68-115322eef141_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.252 2 INFO nova.virt.libvirt.driver [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deleting instance files /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f_del#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.254 2 INFO nova.virt.libvirt.driver [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deletion of /var/lib/nova/instances/4e86b418-6e7f-4e2e-9146-a847920ed11f_del complete#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.316 2 INFO nova.compute.manager [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.317 2 DEBUG oslo.service.loopingcall [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.317 2 DEBUG nova.compute.manager [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.317 2 DEBUG nova.network.neutron [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.391 2 DEBUG oslo_concurrency.processutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config f563ffb7-1ade-4b71-ab68-115322eef141_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.391 2 INFO nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Deleting local config drive /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/disk.config because it was imported into RBD.#033[00m
Oct  7 10:13:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1848916736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.427 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.649s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.429 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.453 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:07 np0005473739 kernel: tap4f1564c5-86: entered promiscuous mode
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.459 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:07 np0005473739 systemd-udevd[311584]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:13:07 np0005473739 NetworkManager[44949]: <info>  [1759846387.4620] manager: (tap4f1564c5-86): new Tun device (/org/freedesktop/NetworkManager/Devices/160)
Oct  7 10:13:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:07Z|00315|binding|INFO|Claiming lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 for this chassis.
Oct  7 10:13:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:07Z|00316|binding|INFO|4f1564c5-865b-45e8-afe1-7f7c3748c854: Claiming fa:16:3e:6d:76:5c 10.100.0.3
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.471 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:76:5c 10.100.0.3'], port_security=['fa:16:3e:6d:76:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f563ffb7-1ade-4b71-ab68-115322eef141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '2', 'neutron:security_group_ids': '83afa0b2-d45d-4225-8e21-5474e9077205', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4f1564c5-865b-45e8-afe1-7f7c3748c854) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.472 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4f1564c5-865b-45e8-afe1-7f7c3748c854 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 bound to our chassis#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.474 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71870f0f-c94f-4d32-8df4-00da4d6d4129#033[00m
Oct  7 10:13:07 np0005473739 NetworkManager[44949]: <info>  [1759846387.4837] device (tap4f1564c5-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:13:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:07Z|00317|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 ovn-installed in OVS
Oct  7 10:13:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:07Z|00318|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 up in Southbound
Oct  7 10:13:07 np0005473739 NetworkManager[44949]: <info>  [1759846387.4883] device (tap4f1564c5-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.496 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30c49ccf-2f07-4a77-96a8-993ce9a7cfb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:07 np0005473739 systemd-machined[214580]: New machine qemu-51-instance-0000002d.
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:07 np0005473739 systemd[1]: Started Virtual Machine qemu-51-instance-0000002d.
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.549 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8c1038fc-6296-4d6a-8788-191cc6cecb32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.553 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[657ad1f0-a87d-4bc8-ae2c-ebca767d776d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.591 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[afebd86e-1a73-478c-9cd1-db5d83e05cee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.619 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6df1de18-cde9-4748-8d26-3c5a9cf5b339]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 33464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 311902, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.643 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef6c550-784a-4cab-aa24-6945a0f6f015]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694652, 'tstamp': 694652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311921, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694656, 'tstamp': 694656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 311921, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.645 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.648 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71870f0f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.648 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.649 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71870f0f-c0, col_values=(('external_ids', {'iface-id': '7bd1effb-a353-4387-8382-bb3ef13fb3f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:07.649 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1456: 305 pgs: 305 active+clean; 339 MiB data, 579 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 9.6 MiB/s wr, 258 op/s
Oct  7 10:13:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/760427212' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.952 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.954 2 DEBUG nova.virt.libvirt.vif [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.954 2 DEBUG nova.network.os_vif_util [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.955 2 DEBUG nova.network.os_vif_util [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.956 2 DEBUG nova.objects.instance [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_devices' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.991 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  <uuid>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</uuid>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  <name>instance-0000002e</name>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:13:06</nova:creationTime>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <entry name="serial">b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <entry name="uuid">b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:5f:75:d2"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <target dev="tapf208f539-2c"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log" append="off"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:13:07 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:13:07 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:13:07 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:13:07 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.992 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Preparing to wait for external event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.992 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.993 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.993 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.993 2 DEBUG nova.virt.libvirt.vif [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.994 2 DEBUG nova.network.os_vif_util [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.994 2 DEBUG nova.network.os_vif_util [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.995 2 DEBUG os_vif [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.995 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:07 np0005473739 nova_compute[259550]: 2025-10-07 14:13:07.996 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf208f539-2c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.000 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf208f539-2c, col_values=(('external_ids', {'iface-id': 'f208f539-2cf1-4007-8806-5b4836d43c4f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5f:75:d2', 'vm-uuid': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:08 np0005473739 NetworkManager[44949]: <info>  [1759846388.0032] manager: (tapf208f539-2c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/161)
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.010 2 INFO os_vif [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c')#033[00m
Oct  7 10:13:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:13:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.068 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.068 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.068 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:5f:75:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.069 2 INFO nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Using config drive#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.094 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.485 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846388.4849563, f563ffb7-1ade-4b71-ab68-115322eef141 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.486 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Started (Lifecycle Event)#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.501 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.507 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846388.4850912, f563ffb7-1ade-4b71-ab68-115322eef141 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.508 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.520 2 DEBUG nova.virt.libvirt.driver [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.523 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.526 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.541 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.775 2 DEBUG nova.network.neutron [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.780 2 INFO nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Creating config drive at /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.789 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc9ngyr8u execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.891 2 DEBUG nova.network.neutron [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updated VIF entry in instance network info cache for port f208f539-2cf1-4007-8806-5b4836d43c4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.892 2 DEBUG nova.network.neutron [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.894 2 INFO nova.compute.manager [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Took 1.58 seconds to deallocate network for instance.#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.920 2 DEBUG oslo_concurrency.lockutils [req-f185b233-d136-4c27-a97e-ec650909acbe req-7b0ae00f-35dc-4d21-b3dd-4297972ac05e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.941 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.941 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.953 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc9ngyr8u" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.979 2 DEBUG nova.storage.rbd_utils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:08 np0005473739 nova_compute[259550]: 2025-10-07 14:13:08.983 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:09 np0005473739 podman[312007]: 2025-10-07 14:13:09.107863767 +0000 UTC m=+0.083171077 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.132 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.133 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.133 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.133 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.134 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] No waiting events found dispatching network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.134 2 WARNING nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received unexpected event network-vif-plugged-75d3896e-b08f-4485-b4d7-dff914242597 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.134 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.134 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.134 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.134 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Processing event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG oslo_concurrency.lockutils [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.135 2 DEBUG nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.136 2 WARNING nova.compute.manager [req-66900bb7-06a5-4ab5-9a5f-febf838dc1a3 req-3e448747-2653-4683-b172-1489d7b4b190 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.136 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.141 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.142 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846389.1409862, f563ffb7-1ade-4b71-ab68-115322eef141 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.142 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.148 2 INFO nova.virt.libvirt.driver [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance spawned successfully.#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.149 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.175 2 DEBUG oslo_concurrency.processutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.175 2 INFO nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Deleting local config drive /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/disk.config because it was imported into RBD.#033[00m
Oct  7 10:13:09 np0005473739 podman[312009]: 2025-10-07 14:13:09.176615505 +0000 UTC m=+0.151448192 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.190 2 DEBUG oslo_concurrency.processutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.240 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:09 np0005473739 kernel: tapf208f539-2c: entered promiscuous mode
Oct  7 10:13:09 np0005473739 NetworkManager[44949]: <info>  [1759846389.2419] manager: (tapf208f539-2c): new Tun device (/org/freedesktop/NetworkManager/Devices/162)
Oct  7 10:13:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:09Z|00319|binding|INFO|Claiming lport f208f539-2cf1-4007-8806-5b4836d43c4f for this chassis.
Oct  7 10:13:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:09Z|00320|binding|INFO|f208f539-2cf1-4007-8806-5b4836d43c4f: Claiming fa:16:3e:5f:75:d2 10.100.0.12
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:09 np0005473739 NetworkManager[44949]: <info>  [1759846389.2597] device (tapf208f539-2c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:13:09 np0005473739 NetworkManager[44949]: <info>  [1759846389.2633] device (tapf208f539-2c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.271 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:75:d2 10.100.0.12'], port_security=['fa:16:3e:5f:75:d2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e8acaacb-1526-452b-a139-17a738541bb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f208f539-2cf1-4007-8806-5b4836d43c4f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.272 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.273 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.274 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.274 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.274 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.273 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f208f539-2cf1-4007-8806-5b4836d43c4f in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.274 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.275 2 DEBUG nova.virt.libvirt.driver [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:09 np0005473739 systemd-machined[214580]: New machine qemu-52-instance-0000002e.
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.291 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.289 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1f8ce32e-2ce4-4882-854f-a81d61b96434]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.290 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1d9f332-f1 in ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.293 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1d9f332-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.294 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78f1d8a4-046e-42f7-b88e-839bdcb18fe2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.295 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[76925a83-6dd8-43fc-aa2d-3685962d5949]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.309 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6bccdef0-3668-49f1-b1ea-1b9a4bbfe42c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.325 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.337 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68794ea4-e859-4def-91a5-7d4d939bcc36]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 systemd[1]: Started Virtual Machine qemu-52-instance-0000002e.
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.356 2 INFO nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Took 7.92 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.357 2 DEBUG nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:09Z|00321|binding|INFO|Setting lport f208f539-2cf1-4007-8806-5b4836d43c4f ovn-installed in OVS
Oct  7 10:13:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:09Z|00322|binding|INFO|Setting lport f208f539-2cf1-4007-8806-5b4836d43c4f up in Southbound
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.381 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eafedfa6-62e2-4b9a-8789-f413389c2b95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 NetworkManager[44949]: <info>  [1759846389.3882] manager: (tapb1d9f332-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/163)
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.387 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff6714f-3b79-4812-95b9-23e828478f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.430 2 INFO nova.compute.manager [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Took 9.07 seconds to build instance.#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.433 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3a44ccba-7dfd-4781-abaa-f77546d94865]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.444 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b807922d-433b-4934-bb8a-615c2744db55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.447 2 DEBUG oslo_concurrency.lockutils [None req-a1df2e11-3338-4068-8d7c-aacc301f4df4 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.155s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:09 np0005473739 NetworkManager[44949]: <info>  [1759846389.4891] device (tapb1d9f332-f0): carrier: link connected
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.500 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f2cef718-662c-41f6-967d-fb0cdf8c342e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.523 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[532be02d-8224-483a-815d-90bc9110bfff]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697305, 'reachable_time': 34760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312136, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.549 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2d7282e-6c37-4628-8ae2-1959e4fea58a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:be96'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697305, 'tstamp': 697305}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312137, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.571 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3115bfd9-1954-4483-9681-ea118416b68f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697305, 'reachable_time': 34760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312138, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.599 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12662821-7cab-4e04-9283-e4d03029df9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.615 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.616 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.634 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.653 2 DEBUG nova.compute.manager [req-f9af5bbf-a216-4901-a29a-75904f1120e6 req-c3a7f012-61fc-4e6a-aa00-366a0b4e13c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.653 2 DEBUG oslo_concurrency.lockutils [req-f9af5bbf-a216-4901-a29a-75904f1120e6 req-c3a7f012-61fc-4e6a-aa00-366a0b4e13c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.654 2 DEBUG oslo_concurrency.lockutils [req-f9af5bbf-a216-4901-a29a-75904f1120e6 req-c3a7f012-61fc-4e6a-aa00-366a0b4e13c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.654 2 DEBUG oslo_concurrency.lockutils [req-f9af5bbf-a216-4901-a29a-75904f1120e6 req-c3a7f012-61fc-4e6a-aa00-366a0b4e13c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.654 2 DEBUG nova.compute.manager [req-f9af5bbf-a216-4901-a29a-75904f1120e6 req-c3a7f012-61fc-4e6a-aa00-366a0b4e13c1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Processing event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.674 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[556a01e6-0ae6-4764-983a-695f413862db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.676 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.676 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1457: 305 pgs: 305 active+clean; 306 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 9.6 MiB/s wr, 330 op/s
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.677 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:09 np0005473739 NetworkManager[44949]: <info>  [1759846389.6801] manager: (tapb1d9f332-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/164)
Oct  7 10:13:09 np0005473739 kernel: tapb1d9f332-f0: entered promiscuous mode
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.683 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:09Z|00323|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.704 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.707 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a54aa580-a6be-4146-a12d-a2c7e22ec0e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.708 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:13:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:09.708 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'env', 'PROCESS_TAG=haproxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1d9f332-f920-4d6e-8e91-dd13ec334d51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.712 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1022203406' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.772 2 DEBUG oslo_concurrency.processutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.582s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.777 2 DEBUG nova.compute.provider_tree [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.796 2 DEBUG nova.scheduler.client.report [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.813 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.871s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.815 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.103s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.822 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.822 2 INFO nova.compute.claims [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.838 2 INFO nova.scheduler.client.report [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Deleted allocations for instance 4e86b418-6e7f-4e2e-9146-a847920ed11f#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.908 2 DEBUG oslo_concurrency.lockutils [None req-d262971c-7bcd-4998-ac4b-c69bb2d1151d 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "4e86b418-6e7f-4e2e-9146-a847920ed11f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:09 np0005473739 nova_compute[259550]: 2025-10-07 14:13:09.979 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:10 np0005473739 podman[312214]: 2025-10-07 14:13:10.165714973 +0000 UTC m=+0.053436475 container create 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  7 10:13:10 np0005473739 systemd[1]: Started libpod-conmon-51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba.scope.
Oct  7 10:13:10 np0005473739 podman[312214]: 2025-10-07 14:13:10.138341064 +0000 UTC m=+0.026062586 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:13:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:13:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea5f871018c8e56c6586c7466d5febec36105738892f6f280160d66dce07ac2e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:10 np0005473739 podman[312214]: 2025-10-07 14:13:10.263994547 +0000 UTC m=+0.151716079 container init 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:13:10 np0005473739 podman[312214]: 2025-10-07 14:13:10.274172755 +0000 UTC m=+0.161894257 container start 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.285 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846390.2852232, b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.286 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] VM Started (Lifecycle Event)#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.290 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.293 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:13:10 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [NOTICE]   (312253) : New worker (312255) forked
Oct  7 10:13:10 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [NOTICE]   (312253) : Loading success.
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.302 2 INFO nova.virt.libvirt.driver [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Instance spawned successfully.#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.303 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.309 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.314 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.324 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.325 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.325 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.326 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.327 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.327 2 DEBUG nova.virt.libvirt.driver [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.333 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.333 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846390.2866583, b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.333 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.363 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.366 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846390.293427, b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.367 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.388 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.393 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.399 2 INFO nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Took 7.92 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.399 2 DEBUG nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.409 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.450 2 INFO nova.compute.manager [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Took 9.86 seconds to build instance.#033[00m
Oct  7 10:13:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1501691277' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.463 2 DEBUG oslo_concurrency.lockutils [None req-040a740d-4e6a-4911-a264-03f02d201bc1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.025s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.488 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.494 2 DEBUG nova.compute.provider_tree [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.514 2 DEBUG nova.scheduler.client.report [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.546 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.547 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.585 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.586 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.607 2 INFO nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.647 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.751 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.755 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.755 2 INFO nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Creating image(s)#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.780 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:10 np0005473739 kernel: tap23002fad-24 (unregistering): left promiscuous mode
Oct  7 10:13:10 np0005473739 NetworkManager[44949]: <info>  [1759846390.7932] device (tap23002fad-24): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:13:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:10Z|00324|binding|INFO|Releasing lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d from this chassis (sb_readonly=0)
Oct  7 10:13:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:10Z|00325|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d down in Southbound
Oct  7 10:13:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:10Z|00326|binding|INFO|Removing iface tap23002fad-24 ovn-installed in OVS
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.819 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:10.820 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:88 10.100.0.8'], port_security=['fa:16:3e:4c:78:88 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4fef229d-c42d-43ac-a3ff-527ca68d3796', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=23002fad-2420-4b68-bfd7-2d90f8b5df6d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:10.822 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 23002fad-2420-4b68-bfd7-2d90f8b5df6d in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis#033[00m
Oct  7 10:13:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:10.825 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:13:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:10.826 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3b7c9f-aa9f-472f-a0ac-c84060bbbaf1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:10.826 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace which is not needed anymore#033[00m
Oct  7 10:13:10 np0005473739 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Deactivated successfully.
Oct  7 10:13:10 np0005473739 systemd[1]: machine-qemu\x2d49\x2dinstance\x2d0000002c.scope: Consumed 14.115s CPU time.
Oct  7 10:13:10 np0005473739 systemd-machined[214580]: Machine qemu-49-instance-0000002c terminated.
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.872 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.877 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.919 2 DEBUG nova.policy [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7bf568df6a8d461a83d287493b393589', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'de6794f6448744329cf2081eb5b889a5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:10 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [NOTICE]   (309773) : haproxy version is 2.8.14-c23fe91
Oct  7 10:13:10 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [NOTICE]   (309773) : path to executable is /usr/sbin/haproxy
Oct  7 10:13:10 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [WARNING]  (309773) : Exiting Master process...
Oct  7 10:13:10 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [WARNING]  (309773) : Exiting Master process...
Oct  7 10:13:10 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [ALERT]    (309773) : Current worker (309775) exited with code 143 (Terminated)
Oct  7 10:13:10 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[309769]: [WARNING]  (309773) : All workers exited. Exiting... (0)
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.974 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.975 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.975 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:10 np0005473739 nova_compute[259550]: 2025-10-07 14:13:10.976 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:10 np0005473739 systemd[1]: libpod-fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e.scope: Deactivated successfully.
Oct  7 10:13:10 np0005473739 conmon[309769]: conmon fa49b6bc16ac5ac81553 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e.scope/container/memory.events
Oct  7 10:13:10 np0005473739 podman[312343]: 2025-10-07 14:13:10.983107369 +0000 UTC m=+0.053683822 container died fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.006 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.011 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e-userdata-shm.mount: Deactivated successfully.
Oct  7 10:13:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1da5c37de559ab2f97b903232e4ec93e0e34fbf169fac94209f83b9bd008f0ca-merged.mount: Deactivated successfully.
Oct  7 10:13:11 np0005473739 podman[312343]: 2025-10-07 14:13:11.024304013 +0000 UTC m=+0.094880466 container cleanup fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:13:11 np0005473739 NetworkManager[44949]: <info>  [1759846391.0270] manager: (tap23002fad-24): new Tun device (/org/freedesktop/NetworkManager/Devices/165)
Oct  7 10:13:11 np0005473739 kernel: tap23002fad-24: entered promiscuous mode
Oct  7 10:13:11 np0005473739 systemd-udevd[312293]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:13:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:11Z|00327|binding|INFO|Claiming lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d for this chassis.
Oct  7 10:13:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:11Z|00328|binding|INFO|23002fad-2420-4b68-bfd7-2d90f8b5df6d: Claiming fa:16:3e:4c:78:88 10.100.0.8
Oct  7 10:13:11 np0005473739 kernel: tap23002fad-24 (unregistering): left promiscuous mode
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.046 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:88 10.100.0.8'], port_security=['fa:16:3e:4c:78:88 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4fef229d-c42d-43ac-a3ff-527ca68d3796', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=23002fad-2420-4b68-bfd7-2d90f8b5df6d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:11 np0005473739 systemd[1]: libpod-conmon-fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e.scope: Deactivated successfully.
Oct  7 10:13:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:11Z|00329|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d ovn-installed in OVS
Oct  7 10:13:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:11Z|00330|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d up in Southbound
Oct  7 10:13:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:11Z|00331|binding|INFO|Releasing lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d from this chassis (sb_readonly=1)
Oct  7 10:13:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:11Z|00332|if_status|INFO|Dropped 4 log messages in last 131 seconds (most recently, 131 seconds ago) due to excessive rate
Oct  7 10:13:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:11Z|00333|if_status|INFO|Not setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d down as sb is readonly
Oct  7 10:13:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:11Z|00334|binding|INFO|Removing iface tap23002fad-24 ovn-installed in OVS
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:11Z|00335|binding|INFO|Releasing lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d from this chassis (sb_readonly=0)
Oct  7 10:13:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:11Z|00336|binding|INFO|Setting lport 23002fad-2420-4b68-bfd7-2d90f8b5df6d down in Southbound
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.080 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4c:78:88 10.100.0.8'], port_security=['fa:16:3e:4c:78:88 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '4fef229d-c42d-43ac-a3ff-527ca68d3796', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=23002fad-2420-4b68-bfd7-2d90f8b5df6d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:11 np0005473739 podman[312391]: 2025-10-07 14:13:11.119092443 +0000 UTC m=+0.064635039 container remove fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.127 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1cbcd5cf-1bc5-4146-9710-1ff389edc619]: (4, ('Tue Oct  7 02:13:10 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e)\nfa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e\nTue Oct  7 02:13:11 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (fa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e)\nfa49b6bc16ac5ac815532ee889775387414de5480f67e2d06242cf7a0c0e5a3e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.129 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1c7f11-33da-43c3-8231-bba9fe9b51fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.130 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:11 np0005473739 kernel: tap8accac57-a0: left promiscuous mode
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.205 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cb028840-0d39-4567-b736-bee7febc6d6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.231 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6b4af6-0216-4874-9b02-0a69298fa2f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.232 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9867a75b-99b3-4cb2-ab7a-c553b37f8d8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.250 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c2e2e4a5-c30d-4f10-b606-33aeee76fca3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694842, 'reachable_time': 42720, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312426, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.254 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.255 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[343399e3-24bc-4ebb-adb1-ec31917c6e72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.255 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 23002fad-2420-4b68-bfd7-2d90f8b5df6d in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis#033[00m
Oct  7 10:13:11 np0005473739 systemd[1]: run-netns-ovnmeta\x2d8accac57\x2dab45\x2d4b9b\x2d95ed\x2d86c2c65f202f.mount: Deactivated successfully.
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.260 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.267 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b029e3d0-5a81-465d-8ba0-0607e485caed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.270 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 23002fad-2420-4b68-bfd7-2d90f8b5df6d in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.276 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:13:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:11.277 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c3a15ed-8a9f-46b8-bf71-28b9cbd3cb06]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.439 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.510 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] resizing rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.551 2 INFO nova.virt.libvirt.driver [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance shutdown successfully after 24 seconds.#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.564 2 INFO nova.virt.libvirt.driver [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance destroyed successfully.#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.565 2 DEBUG nova.objects.instance [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4fef229d-c42d-43ac-a3ff-527ca68d3796 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.582 2 DEBUG nova.compute.manager [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.637 2 DEBUG nova.objects.instance [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'migration_context' on Instance uuid cb7392fb-c42f-47e9-b661-7cbf3dfe1263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.656 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.657 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Ensure instance console log exists: /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.658 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.658 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.658 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.671 2 DEBUG oslo_concurrency.lockutils [None req-2a5b7e09-3fc7-4c66-ac8d-bb7b14c13576 a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 24.391s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1458: 305 pgs: 305 active+clean; 293 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 6.8 MiB/s wr, 271 op/s
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.735 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Successfully created port: c8a4482e-996f-427f-8071-48124530250e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.810 2 DEBUG nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Received event network-vif-deleted-75d3896e-b08f-4485-b4d7-dff914242597 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.810 2 DEBUG nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.810 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.811 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.811 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.811 2 DEBUG nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.811 2 WARNING nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.811 2 DEBUG nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.812 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.812 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.812 2 DEBUG oslo_concurrency.lockutils [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.812 2 DEBUG nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.812 2 WARNING nova.compute.manager [req-63f35aef-bbb2-4ef4-80cd-eee038b5e581 req-b552886d-9d27-4805-8c24-045600d08f48 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.966 2 DEBUG nova.compute.manager [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.966 2 DEBUG oslo_concurrency.lockutils [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.967 2 DEBUG oslo_concurrency.lockutils [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.967 2 DEBUG oslo_concurrency.lockutils [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.967 2 DEBUG nova.compute.manager [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:11 np0005473739 nova_compute[259550]: 2025-10-07 14:13:11.967 2 WARNING nova.compute.manager [req-82bec374-ed3a-45b4-b0d7-54a58980d4b0 req-4071a072-8ecb-4a40-8ff4-5a4bffe8e393 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f for instance with vm_state active and task_state None.#033[00m
Oct  7 10:13:12 np0005473739 NetworkManager[44949]: <info>  [1759846392.6796] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/166)
Oct  7 10:13:12 np0005473739 NetworkManager[44949]: <info>  [1759846392.6805] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/167)
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.722 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.723 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.723 2 INFO nova.compute.manager [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Rebooting instance#033[00m
Oct  7 10:13:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:12Z|00337|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:13:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:12Z|00338|binding|INFO|Releasing lport 7bd1effb-a353-4387-8382-bb3ef13fb3f0 from this chassis (sb_readonly=0)
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.736 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.737 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquired lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.737 2 DEBUG nova.network.neutron [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:12Z|00339|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:13:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:12Z|00340|binding|INFO|Releasing lport 7bd1effb-a353-4387-8382-bb3ef13fb3f0 from this chassis (sb_readonly=0)
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.839 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Successfully updated port: c8a4482e-996f-427f-8071-48124530250e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.854 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.854 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquired lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.854 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:13:12 np0005473739 nova_compute[259550]: 2025-10-07 14:13:12.987 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:13:13 np0005473739 nova_compute[259550]: 2025-10-07 14:13:13.002 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1459: 305 pgs: 305 active+clean; 293 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 5.1 MiB/s wr, 216 op/s
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.007 2 DEBUG nova.network.neutron [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Updating instance_info_cache with network_info: [{"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.031 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Releasing lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.032 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Instance network_info: |[{"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.034 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Start _get_guest_xml network_info=[{"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.040 2 WARNING nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.047 2 DEBUG nova.virt.libvirt.host [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.048 2 DEBUG nova.virt.libvirt.host [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.051 2 DEBUG nova.virt.libvirt.host [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.051 2 DEBUG nova.virt.libvirt.host [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.052 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.052 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.052 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.053 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.053 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.053 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.053 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.054 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.054 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.054 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.054 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.055 2 DEBUG nova.virt.hardware [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.057 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.097 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.098 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.098 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.099 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.099 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.099 2 WARNING nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.099 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.100 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.100 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.100 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.100 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.101 2 WARNING nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.101 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.101 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.101 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.102 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.102 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.102 2 WARNING nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-unplugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.102 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.103 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.103 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.103 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.103 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] No waiting events found dispatching network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.104 2 WARNING nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received unexpected event network-vif-plugged-23002fad-2420-4b68-bfd7-2d90f8b5df6d for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.104 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-changed-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.104 2 DEBUG nova.compute.manager [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing instance network info cache due to event network-changed-f208f539-2cf1-4007-8806-5b4836d43c4f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.105 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.105 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.105 2 DEBUG nova.network.neutron [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing network info cache for port f208f539-2cf1-4007-8806-5b4836d43c4f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.108 2 DEBUG nova.network.neutron [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.130 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Releasing lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.133 2 DEBUG nova.compute.manager [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.215 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.216 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.216 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.217 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.217 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.218 2 INFO nova.compute.manager [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Terminating instance#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.220 2 DEBUG nova.compute.manager [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.239 2 INFO nova.virt.libvirt.driver [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Instance destroyed successfully.#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.240 2 DEBUG nova.objects.instance [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'resources' on Instance uuid 4fef229d-c42d-43ac-a3ff-527ca68d3796 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.259 2 DEBUG nova.virt.libvirt.vif [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1001011532',display_name='tempest-DeleteServersTestJSON-server-1001011532',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1001011532',id=44,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-rcycun96',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:11Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=4fef229d-c42d-43ac-a3ff-527ca68d3796,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.259 2 DEBUG nova.network.os_vif_util [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "address": "fa:16:3e:4c:78:88", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap23002fad-24", "ovs_interfaceid": "23002fad-2420-4b68-bfd7-2d90f8b5df6d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.261 2 DEBUG nova.network.os_vif_util [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.261 2 DEBUG os_vif [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.264 2 DEBUG nova.compute.manager [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.264 2 DEBUG nova.compute.manager [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing instance network info cache due to event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.265 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.265 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.265 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.268 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap23002fad-24, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.274 2 INFO os_vif [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4c:78:88,bridge_name='br-int',has_traffic_filtering=True,id=23002fad-2420-4b68-bfd7-2d90f8b5df6d,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap23002fad-24')#033[00m
Oct  7 10:13:14 np0005473739 kernel: tap4f1564c5-86 (unregistering): left promiscuous mode
Oct  7 10:13:14 np0005473739 NetworkManager[44949]: <info>  [1759846394.3043] device (tap4f1564c5-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:13:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:14Z|00341|binding|INFO|Releasing lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 from this chassis (sb_readonly=0)
Oct  7 10:13:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:14Z|00342|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 down in Southbound
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:14Z|00343|binding|INFO|Removing iface tap4f1564c5-86 ovn-installed in OVS
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.320 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:76:5c 10.100.0.3'], port_security=['fa:16:3e:6d:76:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f563ffb7-1ade-4b71-ab68-115322eef141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '5', 'neutron:security_group_ids': '83afa0b2-d45d-4225-8e21-5474e9077205 fa91ff43-ea3c-45e4-a467-ff30d6e445a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4f1564c5-865b-45e8-afe1-7f7c3748c854) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.322 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4f1564c5-865b-45e8-afe1-7f7c3748c854 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 unbound from our chassis#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.324 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71870f0f-c94f-4d32-8df4-00da4d6d4129#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.340 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ef1c248-8e4a-4129-a1f1-4715bd5c1a7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:14 np0005473739 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct  7 10:13:14 np0005473739 systemd[1]: machine-qemu\x2d51\x2dinstance\x2d0000002d.scope: Consumed 5.999s CPU time.
Oct  7 10:13:14 np0005473739 systemd-machined[214580]: Machine qemu-51-instance-0000002d terminated.
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.379 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[83aaaead-bb55-4e73-95d4-47bb1901e9e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.386 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b2729305-db62-44b0-ae79-fdccef199a3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.424 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[68b6117a-e430-421a-a5a0-d40387c916d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.451 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4784cdde-ee7e-4240-a4e1-9a15c6149f23]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 33464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312550, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.485 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a5c46f0c-567e-4a0c-80b0-298e1797f0cd]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694652, 'tstamp': 694652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312553, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694656, 'tstamp': 694656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312553, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.487 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.490 2 INFO nova.virt.libvirt.driver [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance destroyed successfully.#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.491 2 DEBUG nova.objects.instance [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'resources' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71870f0f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.495 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.497 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71870f0f-c0, col_values=(('external_ids', {'iface-id': '7bd1effb-a353-4387-8382-bb3ef13fb3f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:14.497 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.503 2 DEBUG nova.virt.libvirt.vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:14Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.504 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.505 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.505 2 DEBUG os_vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.516 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f1564c5-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.525 2 INFO os_vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86')#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.533 2 DEBUG nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start _get_guest_xml network_info=[{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.538 2 WARNING nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.543 2 DEBUG nova.virt.libvirt.host [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.544 2 DEBUG nova.virt.libvirt.host [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:13:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3331525263' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.546 2 DEBUG nova.virt.libvirt.host [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.548 2 DEBUG nova.virt.libvirt.host [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.548 2 DEBUG nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.548 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.549 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.549 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.549 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.549 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.549 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.550 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.550 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.550 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.550 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.550 2 DEBUG nova.virt.hardware [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.551 2 DEBUG nova.objects.instance [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'vcpu_model' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.566 2 DEBUG oslo_concurrency.processutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.598 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.541s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.621 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.626 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.909 2 INFO nova.virt.libvirt.driver [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Deleting instance files /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796_del#033[00m
Oct  7 10:13:14 np0005473739 nova_compute[259550]: 2025-10-07 14:13:14.914 2 INFO nova.virt.libvirt.driver [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Deletion of /var/lib/nova/instances/4fef229d-c42d-43ac-a3ff-527ca68d3796_del complete#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.020 2 INFO nova.compute.manager [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.021 2 DEBUG oslo.service.loopingcall [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.021 2 DEBUG nova.compute.manager [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.022 2 DEBUG nova.network.neutron [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:13:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297930814' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.057 2 DEBUG oslo_concurrency.processutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.099 2 DEBUG oslo_concurrency.processutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/553660048' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.144 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.146 2 DEBUG nova.virt.libvirt.vif [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-738566473',display_name='tempest-ServerDiskConfigTestJSON-server-738566473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-738566473',id=47,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-zl7hxhm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:10Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=cb7392fb-c42f-47e9-b661-7cbf3dfe1263,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.147 2 DEBUG nova.network.os_vif_util [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.148 2 DEBUG nova.network.os_vif_util [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.149 2 DEBUG nova.objects.instance [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'pci_devices' on Instance uuid cb7392fb-c42f-47e9-b661-7cbf3dfe1263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.165 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <uuid>cb7392fb-c42f-47e9-b661-7cbf3dfe1263</uuid>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <name>instance-0000002f</name>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerDiskConfigTestJSON-server-738566473</nova:name>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:13:14</nova:creationTime>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:user uuid="7bf568df6a8d461a83d287493b393589">tempest-ServerDiskConfigTestJSON-831175870-project-member</nova:user>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:project uuid="de6794f6448744329cf2081eb5b889a5">tempest-ServerDiskConfigTestJSON-831175870</nova:project>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:port uuid="c8a4482e-996f-427f-8071-48124530250e">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="serial">cb7392fb-c42f-47e9-b661-7cbf3dfe1263</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="uuid">cb7392fb-c42f-47e9-b661-7cbf3dfe1263</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:c4:4b:9c"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <target dev="tapc8a4482e-99"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/console.log" append="off"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:13:15 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:13:15 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.166 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Preparing to wait for external event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.177 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.177 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.177 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.178 2 DEBUG nova.virt.libvirt.vif [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-738566473',display_name='tempest-ServerDiskConfigTestJSON-server-738566473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-738566473',id=47,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-zl7hxhm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:10Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=cb7392fb-c42f-47e9-b661-7cbf3dfe1263,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.178 2 DEBUG nova.network.os_vif_util [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.181 2 DEBUG nova.network.os_vif_util [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.181 2 DEBUG os_vif [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.183 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.188 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc8a4482e-99, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.188 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc8a4482e-99, col_values=(('external_ids', {'iface-id': 'c8a4482e-996f-427f-8071-48124530250e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c4:4b:9c', 'vm-uuid': 'cb7392fb-c42f-47e9-b661-7cbf3dfe1263'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:15 np0005473739 NetworkManager[44949]: <info>  [1759846395.1911] manager: (tapc8a4482e-99): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/168)
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.196 2 INFO os_vif [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99')#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.265 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.266 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.266 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] No VIF found with MAC fa:16:3e:c4:4b:9c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.266 2 INFO nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Using config drive#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.283 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2389833081' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.571 2 DEBUG oslo_concurrency.processutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.572 2 DEBUG nova.virt.libvirt.vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:14Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.572 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.573 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.574 2 DEBUG nova.objects.instance [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'pci_devices' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.613 2 DEBUG nova.network.neutron [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updated VIF entry in instance network info cache for port f208f539-2cf1-4007-8806-5b4836d43c4f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.613 2 DEBUG nova.network.neutron [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.641 2 DEBUG nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <uuid>f563ffb7-1ade-4b71-ab68-115322eef141</uuid>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <name>instance-0000002d</name>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:name>tempest-SecurityGroupsTestJSON-server-1638787156</nova:name>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:13:14</nova:creationTime>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:user uuid="3ba82e1a9f12417391d78758ae9bb83c">tempest-SecurityGroupsTestJSON-1673626413-project-member</nova:user>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:project uuid="1f5ee4e560ed4660a6685a086282a370">tempest-SecurityGroupsTestJSON-1673626413</nova:project>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <nova:port uuid="4f1564c5-865b-45e8-afe1-7f7c3748c854">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="serial">f563ffb7-1ade-4b71-ab68-115322eef141</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="uuid">f563ffb7-1ade-4b71-ab68-115322eef141</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f563ffb7-1ade-4b71-ab68-115322eef141_disk">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f563ffb7-1ade-4b71-ab68-115322eef141_disk.config">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:6d:76:5c"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <target dev="tap4f1564c5-86"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141/console.log" append="off"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <input type="keyboard" bus="usb"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:13:15 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:13:15 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:13:15 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:13:15 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.642 2 DEBUG nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.642 2 DEBUG nova.virt.libvirt.driver [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] skipping disk for instance-0000002d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.644 2 DEBUG nova.virt.libvirt.vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio
',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:14Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.644 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.645 2 DEBUG nova.network.os_vif_util [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.645 2 DEBUG os_vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.648 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4f1564c5-86, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.649 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4f1564c5-86, col_values=(('external_ids', {'iface-id': '4f1564c5-865b-45e8-afe1-7f7c3748c854', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:76:5c', 'vm-uuid': 'f563ffb7-1ade-4b71-ab68-115322eef141'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:15 np0005473739 NetworkManager[44949]: <info>  [1759846395.6514] manager: (tap4f1564c5-86): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/169)
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.660 2 INFO os_vif [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86')#033[00m
Oct  7 10:13:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1460: 305 pgs: 305 active+clean; 306 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 6.9 MiB/s wr, 372 op/s
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.709 2 INFO nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Creating config drive at /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.715 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_23wc5i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:15 np0005473739 kernel: tap4f1564c5-86: entered promiscuous mode
Oct  7 10:13:15 np0005473739 systemd-udevd[312539]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:13:15 np0005473739 NetworkManager[44949]: <info>  [1759846395.7316] manager: (tap4f1564c5-86): new Tun device (/org/freedesktop/NetworkManager/Devices/170)
Oct  7 10:13:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:15Z|00344|binding|INFO|Claiming lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 for this chassis.
Oct  7 10:13:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:15Z|00345|binding|INFO|4f1564c5-865b-45e8-afe1-7f7c3748c854: Claiming fa:16:3e:6d:76:5c 10.100.0.3
Oct  7 10:13:15 np0005473739 NetworkManager[44949]: <info>  [1759846395.7531] device (tap4f1564c5-86): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:13:15 np0005473739 NetworkManager[44949]: <info>  [1759846395.7545] device (tap4f1564c5-86): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.761 2 DEBUG nova.network.neutron [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.764 2 DEBUG oslo_concurrency.lockutils [req-a59ff7a4-4f15-4977-9380-a7e436cd1c39 req-bfd8a0b8-6a49-4286-a9d1-e90207bdab22 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:15Z|00346|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 ovn-installed in OVS
Oct  7 10:13:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:15Z|00347|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 up in Southbound
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.776 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:76:5c 10.100.0.3'], port_security=['fa:16:3e:6d:76:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f563ffb7-1ade-4b71-ab68-115322eef141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '6', 'neutron:security_group_ids': '83afa0b2-d45d-4225-8e21-5474e9077205 fa91ff43-ea3c-45e4-a467-ff30d6e445a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4f1564c5-865b-45e8-afe1-7f7c3748c854) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.782 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4f1564c5-865b-45e8-afe1-7f7c3748c854 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 bound to our chassis#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.785 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71870f0f-c94f-4d32-8df4-00da4d6d4129#033[00m
Oct  7 10:13:15 np0005473739 systemd-machined[214580]: New machine qemu-53-instance-0000002d.
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.797 2 INFO nova.compute.manager [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Took 0.78 seconds to deallocate network for instance.#033[00m
Oct  7 10:13:15 np0005473739 systemd[1]: Started Virtual Machine qemu-53-instance-0000002d.
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.812 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe77bf8-a841-44c5-bd62-183b9b5ebf1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.844 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.845 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.849 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b1254faf-7b79-4614-b053-cb6f86171111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.855 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[716855b3-c5ab-4cd3-ae2e-47f14125af6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.870 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpb_23wc5i" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.896 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[35d33956-b2e9-4015-b57a-53ab0a5baa58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.901 2 DEBUG nova.storage.rbd_utils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] rbd image cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.912 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.919 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2543e237-67c5-4cad-9bf8-384e4b61b098]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 33464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312738, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd21eb67-daa0-4c38-a2e2-7907f26a0928]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694652, 'tstamp': 694652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312740, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694656, 'tstamp': 694656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312740, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.940 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.948 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71870f0f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71870f0f-c0, col_values=(('external_ids', {'iface-id': '7bd1effb-a353-4387-8382-bb3ef13fb3f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:15.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.977 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updated VIF entry in instance network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.978 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.995 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.996 2 DEBUG nova.compute.manager [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-changed-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.996 2 DEBUG nova.compute.manager [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Refreshing instance network info cache due to event network-changed-c8a4482e-996f-427f-8071-48124530250e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.996 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.997 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:15 np0005473739 nova_compute[259550]: 2025-10-07 14:13:15.997 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Refreshing network info cache for port c8a4482e-996f-427f-8071-48124530250e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.031 2 DEBUG oslo_concurrency.processutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.077 2 DEBUG oslo_concurrency.processutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config cb7392fb-c42f-47e9-b661-7cbf3dfe1263_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.078 2 INFO nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Deleting local config drive /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263/disk.config because it was imported into RBD.#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:16 np0005473739 kernel: tapc8a4482e-99: entered promiscuous mode
Oct  7 10:13:16 np0005473739 NetworkManager[44949]: <info>  [1759846396.1429] manager: (tapc8a4482e-99): new Tun device (/org/freedesktop/NetworkManager/Devices/171)
Oct  7 10:13:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:16Z|00348|binding|INFO|Claiming lport c8a4482e-996f-427f-8071-48124530250e for this chassis.
Oct  7 10:13:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:16Z|00349|binding|INFO|c8a4482e-996f-427f-8071-48124530250e: Claiming fa:16:3e:c4:4b:9c 10.100.0.7
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.152 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:4b:9c 10.100.0.7'], port_security=['fa:16:3e:c4:4b:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cb7392fb-c42f-47e9-b661-7cbf3dfe1263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '2', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c8a4482e-996f-427f-8071-48124530250e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.154 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c8a4482e-996f-427f-8071-48124530250e in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 bound to our chassis#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.156 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777#033[00m
Oct  7 10:13:16 np0005473739 NetworkManager[44949]: <info>  [1759846396.1592] device (tapc8a4482e-99): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.159 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.160 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.160 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.161 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.161 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.161 2 WARNING nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.161 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:16 np0005473739 NetworkManager[44949]: <info>  [1759846396.1618] device (tapc8a4482e-99): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.161 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.162 2 WARNING nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.162 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.163 2 WARNING nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.163 2 DEBUG oslo_concurrency.lockutils [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.164 2 DEBUG nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.164 2 WARNING nova.compute.manager [req-934666fa-2711-46b8-93e8-74cbcaa00b4c req-3c5fb16f-8caf-4956-9879-b75798df024c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  7 10:13:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:16Z|00350|binding|INFO|Setting lport c8a4482e-996f-427f-8071-48124530250e ovn-installed in OVS
Oct  7 10:13:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:16Z|00351|binding|INFO|Setting lport c8a4482e-996f-427f-8071-48124530250e up in Southbound
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.174 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a812b60a-27bf-4a85-b60c-fd368b155a24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.178 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd2cb8ca0-11 in ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.181 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd2cb8ca0-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.181 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3d0dbec0-bb4a-46ca-bca7-4cb28f170fc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.184 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f6e59831-b5c3-4f2f-b18a-6c152a501fbb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 systemd-machined[214580]: New machine qemu-54-instance-0000002f.
Oct  7 10:13:16 np0005473739 systemd[1]: Started Virtual Machine qemu-54-instance-0000002f.
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.206 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[54690b49-c4e5-4011-be96-cbf99728e91e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.226 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fda2b321-4275-4a16-a284-81c51582685c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.260 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d10d3d0a-ef27-4675-9608-1828fc9d93f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 NetworkManager[44949]: <info>  [1759846396.2698] manager: (tapd2cb8ca0-10): new Veth device (/org/freedesktop/NetworkManager/Devices/172)
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[afc18324-2445-4450-8e24-d370477878d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.308 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0a20b331-1e61-4636-824b-6a1bba48b7d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.312 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe23f72-a67d-4f81-9ea4-d71d6d983923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.329 2 DEBUG nova.compute.manager [req-c14bde28-50d8-496d-abd4-66b85a6d281a req-7eb371fe-e48b-4789-8004-58814b18e06b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Received event network-vif-deleted-23002fad-2420-4b68-bfd7-2d90f8b5df6d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:16 np0005473739 NetworkManager[44949]: <info>  [1759846396.3401] device (tapd2cb8ca0-10): carrier: link connected
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.349 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[615e251e-ac42-401f-819a-da5047dfb253]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.370 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c23e61c6-a045-4e0d-87e1-7b963c139927]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697990, 'reachable_time': 23990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 312868, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.392 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[81b0e5a2-20e7-4fdf-a64e-d1c432f0c458]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8f:eb7e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697990, 'tstamp': 697990}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 312869, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.413 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37beefe7-2a5a-4b32-be6a-6b00c02218b7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd2cb8ca0-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8f:eb:7e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 111], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697990, 'reachable_time': 23990, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 312870, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.451 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f74f8eba-4e02-44b5-967c-9a7348e61bc2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.522 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[03ff7a10-17f6-4d92-94f3-74257110a362]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.524 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.524 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.525 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd2cb8ca0-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:16 np0005473739 NetworkManager[44949]: <info>  [1759846396.5281] manager: (tapd2cb8ca0-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/173)
Oct  7 10:13:16 np0005473739 kernel: tapd2cb8ca0-10: entered promiscuous mode
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.530 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd2cb8ca0-10, col_values=(('external_ids', {'iface-id': '93001468-74a0-4bac-94dd-0978737be6e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:16Z|00352|binding|INFO|Releasing lport 93001468-74a0-4bac-94dd-0978737be6e2 from this chassis (sb_readonly=0)
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3145046715' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.550 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.551 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[459ab3ce-2499-4b34-8df3-2a3a108a7bce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.552 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.pid.haproxy
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:13:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:16.554 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'env', 'PROCESS_TAG=haproxy-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.576 2 DEBUG oslo_concurrency.processutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.545s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.582 2 DEBUG nova.compute.provider_tree [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.600 2 DEBUG nova.scheduler.client.report [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.620 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.642 2 INFO nova.scheduler.client.report [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Deleted allocations for instance 4fef229d-c42d-43ac-a3ff-527ca68d3796#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.680 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for f563ffb7-1ade-4b71-ab68-115322eef141 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.681 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846396.6798258, f563ffb7-1ade-4b71-ab68-115322eef141 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.681 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.700 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.704 2 DEBUG oslo_concurrency.lockutils [None req-2fbafaad-a0e9-40cd-a464-2268894fee7f a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "4fef229d-c42d-43ac-a3ff-527ca68d3796" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.488s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.706 2 DEBUG nova.compute.manager [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.709 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.712 2 INFO nova.virt.libvirt.driver [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance rebooted successfully.#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.712 2 DEBUG nova.compute.manager [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.739 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.739 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846396.7039187, f563ffb7-1ade-4b71-ab68-115322eef141 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.740 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Started (Lifecycle Event)#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.763 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.768 2 DEBUG oslo_concurrency.lockutils [None req-4293b4d3-8d62-49df-9ab7-642982c99600 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:16 np0005473739 nova_compute[259550]: 2025-10-07 14:13:16.770 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:16 np0005473739 podman[312902]: 2025-10-07 14:13:16.978565324 +0000 UTC m=+0.061353414 container create 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 10:13:17 np0005473739 systemd[1]: Started libpod-conmon-6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985.scope.
Oct  7 10:13:17 np0005473739 podman[312902]: 2025-10-07 14:13:16.942167637 +0000 UTC m=+0.024955737 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:13:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:13:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/650a9a6c1f663f200a5eecd27e0eaaee72d722619b67ac6d8e8939c3ec7ae27d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:17 np0005473739 nova_compute[259550]: 2025-10-07 14:13:17.082 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Updated VIF entry in instance network info cache for port c8a4482e-996f-427f-8071-48124530250e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:13:17 np0005473739 nova_compute[259550]: 2025-10-07 14:13:17.084 2 DEBUG nova.network.neutron [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Updating instance_info_cache with network_info: [{"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:17 np0005473739 nova_compute[259550]: 2025-10-07 14:13:17.098 2 DEBUG oslo_concurrency.lockutils [req-c47cea3e-9dec-4fb3-b985-f72942c4e8c5 req-c103f893-7525-4837-b72f-e0137daf043e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-cb7392fb-c42f-47e9-b661-7cbf3dfe1263" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:17 np0005473739 podman[312902]: 2025-10-07 14:13:17.123453033 +0000 UTC m=+0.206241113 container init 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:13:17 np0005473739 podman[312902]: 2025-10-07 14:13:17.131124924 +0000 UTC m=+0.213913004 container start 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:13:17 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [NOTICE]   (312929) : New worker (312940) forked
Oct  7 10:13:17 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [NOTICE]   (312929) : Loading success.
Oct  7 10:13:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1461: 305 pgs: 305 active+clean; 306 MiB data, 559 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 1.8 MiB/s wr, 268 op/s
Oct  7 10:13:17 np0005473739 nova_compute[259550]: 2025-10-07 14:13:17.758 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846397.7582183, cb7392fb-c42f-47e9-b661-7cbf3dfe1263 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:17 np0005473739 nova_compute[259550]: 2025-10-07 14:13:17.759 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] VM Started (Lifecycle Event)#033[00m
Oct  7 10:13:17 np0005473739 nova_compute[259550]: 2025-10-07 14:13:17.784 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:17 np0005473739 nova_compute[259550]: 2025-10-07 14:13:17.788 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846397.7583191, cb7392fb-c42f-47e9-b661-7cbf3dfe1263 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:17 np0005473739 nova_compute[259550]: 2025-10-07 14:13:17.788 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:13:17 np0005473739 nova_compute[259550]: 2025-10-07 14:13:17.810 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:17 np0005473739 nova_compute[259550]: 2025-10-07 14:13:17.814 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:17 np0005473739 nova_compute[259550]: 2025-10-07 14:13:17.837 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:13:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1462: 305 pgs: 305 active+clean; 260 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 6.9 MiB/s rd, 1.8 MiB/s wr, 339 op/s
Oct  7 10:13:20 np0005473739 nova_compute[259550]: 2025-10-07 14:13:20.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.601 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846386.6002095, 4e86b418-6e7f-4e2e-9146-a847920ed11f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.602 2 INFO nova.compute.manager [-] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.622 2 DEBUG nova.compute.manager [None req-37e4caa2-fdc4-4fbe-a664-d5d550e681e9 - - - - - -] [instance: 4e86b418-6e7f-4e2e-9146-a847920ed11f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1463: 305 pgs: 305 active+clean; 260 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 5.8 MiB/s rd, 1.8 MiB/s wr, 292 op/s
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.897 2 DEBUG nova.compute.manager [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.897 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.898 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.898 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.898 2 DEBUG nova.compute.manager [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Processing event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.899 2 DEBUG nova.compute.manager [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.899 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.899 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.900 2 DEBUG oslo_concurrency.lockutils [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.900 2 DEBUG nova.compute.manager [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] No waiting events found dispatching network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.900 2 WARNING nova.compute.manager [req-5f653930-85e2-491f-a7aa-72f0022c3836 req-b6753feb-793b-436a-8bae-fdd9d1bce6a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received unexpected event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.901 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.905 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846401.9049726, cb7392fb-c42f-47e9-b661-7cbf3dfe1263 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.905 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.907 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.910 2 INFO nova.virt.libvirt.driver [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Instance spawned successfully.#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.911 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.929 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.931 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.940 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.940 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.940 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.941 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.941 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.942 2 DEBUG nova.virt.libvirt.driver [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:21 np0005473739 nova_compute[259550]: 2025-10-07 14:13:21.968 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:13:22 np0005473739 nova_compute[259550]: 2025-10-07 14:13:22.001 2 INFO nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Took 11.25 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:13:22 np0005473739 nova_compute[259550]: 2025-10-07 14:13:22.001 2 DEBUG nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:22 np0005473739 nova_compute[259550]: 2025-10-07 14:13:22.066 2 INFO nova.compute.manager [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Took 12.38 seconds to build instance.#033[00m
Oct  7 10:13:22 np0005473739 nova_compute[259550]: 2025-10-07 14:13:22.085 2 DEBUG oslo_concurrency.lockutils [None req-26c07b87-724d-4465-b824-6d0807673c41 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:13:22
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'vms', 'images', 'default.rgw.control', '.mgr']
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:13:22 np0005473739 nova_compute[259550]: 2025-10-07 14:13:22.756 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:22 np0005473739 nova_compute[259550]: 2025-10-07 14:13:22.756 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:22 np0005473739 nova_compute[259550]: 2025-10-07 14:13:22.772 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:13:22 np0005473739 nova_compute[259550]: 2025-10-07 14:13:22.854 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:22 np0005473739 nova_compute[259550]: 2025-10-07 14:13:22.855 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:22 np0005473739 nova_compute[259550]: 2025-10-07 14:13:22.863 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:13:22 np0005473739 nova_compute[259550]: 2025-10-07 14:13:22.863 2 INFO nova.compute.claims [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:13:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.114 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.549 2 DEBUG nova.compute.manager [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.550 2 DEBUG nova.compute.manager [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing instance network info cache due to event network-changed-4f1564c5-865b-45e8-afe1-7f7c3748c854. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.550 2 DEBUG oslo_concurrency.lockutils [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.550 2 DEBUG oslo_concurrency.lockutils [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.551 2 DEBUG nova.network.neutron [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Refreshing network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:13:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2779217877' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.663 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.550s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.669 2 DEBUG nova.compute.provider_tree [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:13:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1464: 305 pgs: 305 active+clean; 260 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 1.8 MiB/s wr, 252 op/s
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.685 2 DEBUG nova.scheduler.client.report [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.708 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.854s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.710 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.768 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.769 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:13:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:23Z|00048|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:5f:75:d2 10.100.0.12
Oct  7 10:13:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:23Z|00049|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:5f:75:d2 10.100.0.12
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.794 2 INFO nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.820 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.911 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.913 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.914 2 INFO nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Creating image(s)#033[00m
Oct  7 10:13:23 np0005473739 nova_compute[259550]: 2025-10-07 14:13:23.945 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.003 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.029 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.035 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.093 2 DEBUG nova.policy [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'a0452296b3a942e893961944a0203d98', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '06322ecec4b94a5d94e34cc8632d4104', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.127 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.128 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.128 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.129 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.129 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.131 2 INFO nova.compute.manager [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Terminating instance#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.132 2 DEBUG nova.compute.manager [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.134 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.135 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.136 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.136 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.160 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.165 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e2c535ed-c6cf-4684-82dc-ae59904e7381_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:24 np0005473739 kernel: tap4f1564c5-86 (unregistering): left promiscuous mode
Oct  7 10:13:24 np0005473739 NetworkManager[44949]: <info>  [1759846404.3496] device (tap4f1564c5-86): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:13:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:24Z|00353|binding|INFO|Releasing lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 from this chassis (sb_readonly=0)
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:24Z|00354|binding|INFO|Setting lport 4f1564c5-865b-45e8-afe1-7f7c3748c854 down in Southbound
Oct  7 10:13:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:24Z|00355|binding|INFO|Removing iface tap4f1564c5-86 ovn-installed in OVS
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.377 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:76:5c 10.100.0.3'], port_security=['fa:16:3e:6d:76:5c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'f563ffb7-1ade-4b71-ab68-115322eef141', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '8', 'neutron:security_group_ids': '2c8064f9-5492-4e10-8ae7-4680563dba64 83afa0b2-d45d-4225-8e21-5474e9077205 fa91ff43-ea3c-45e4-a467-ff30d6e445a2', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4f1564c5-865b-45e8-afe1-7f7c3748c854) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.379 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4f1564c5-865b-45e8-afe1-7f7c3748c854 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 unbound from our chassis#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.381 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 71870f0f-c94f-4d32-8df4-00da4d6d4129#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.412 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dba74272-74fe-41cf-bbbe-056269ffc8ab]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:24 np0005473739 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002d.scope: Deactivated successfully.
Oct  7 10:13:24 np0005473739 systemd[1]: machine-qemu\x2d53\x2dinstance\x2d0000002d.scope: Consumed 8.397s CPU time.
Oct  7 10:13:24 np0005473739 systemd-machined[214580]: Machine qemu-53-instance-0000002d terminated.
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.446 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f8eb6982-dec1-4c4d-9411-7f2474a4f624]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.450 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7603ba4f-83d8-488a-9130-128da66efcd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.488 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3fd12b0c-c68d-4ffd-ae61-c460c3f0a97c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.504 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b1c1f275-1397-44aa-9d08-9be52ebc9569]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap71870f0f-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:2f:d1:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 97], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694636, 'reachable_time': 33464, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313103, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.519 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f32ea8c8-2416-45a7-8d3c-8a8128b50daf]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694652, 'tstamp': 694652}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313104, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap71870f0f-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 694656, 'tstamp': 694656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313104, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.521 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.528 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71870f0f-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap71870f0f-c0, col_values=(('external_ids', {'iface-id': '7bd1effb-a353-4387-8382-bb3ef13fb3f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:24.530 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.658 2 INFO nova.virt.libvirt.driver [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Instance destroyed successfully.#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.659 2 DEBUG nova.objects.instance [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'resources' on Instance uuid f563ffb7-1ade-4b71-ab68-115322eef141 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.679 2 DEBUG nova.virt.libvirt.vif [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-1638787156',display_name='tempest-SecurityGroupsTestJSON-server-1638787156',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-1638787156',id=45,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-0vzib393',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:16Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=f563ffb7-1ade-4b71-ab68-115322eef141,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.680 2 DEBUG nova.network.os_vif_util [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.681 2 DEBUG nova.network.os_vif_util [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.681 2 DEBUG os_vif [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4f1564c5-86, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:24 np0005473739 nova_compute[259550]: 2025-10-07 14:13:24.693 2 INFO os_vif [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:76:5c,bridge_name='br-int',has_traffic_filtering=True,id=4f1564c5-865b-45e8-afe1-7f7c3748c854,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4f1564c5-86')#033[00m
Oct  7 10:13:25 np0005473739 nova_compute[259550]: 2025-10-07 14:13:25.598 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e2c535ed-c6cf-4684-82dc-ae59904e7381_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:13:25 np0005473739 nova_compute[259550]: 2025-10-07 14:13:25.652 2 DEBUG nova.compute.manager [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:13:25 np0005473739 nova_compute[259550]: 2025-10-07 14:13:25.653 2 DEBUG oslo_concurrency.lockutils [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:25 np0005473739 nova_compute[259550]: 2025-10-07 14:13:25.653 2 DEBUG oslo_concurrency.lockutils [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:25 np0005473739 nova_compute[259550]: 2025-10-07 14:13:25.654 2 DEBUG oslo_concurrency.lockutils [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:25 np0005473739 nova_compute[259550]: 2025-10-07 14:13:25.654 2 DEBUG nova.compute.manager [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:13:25 np0005473739 nova_compute[259550]: 2025-10-07 14:13:25.655 2 DEBUG nova.compute.manager [req-58d65c53-554c-4e6f-ba9a-be0c5c14c61e req-4c755963-66e7-4df2-9080-859781b0275a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-unplugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  7 10:13:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1465: 305 pgs: 305 active+clean; 311 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 4.6 MiB/s wr, 374 op/s
Oct  7 10:13:25 np0005473739 nova_compute[259550]: 2025-10-07 14:13:25.704 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] resizing rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:13:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.042 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Successfully created port: 98f1c539-b9b9-4abb-89cf-268c353264ed _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.071 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846391.052923, 4fef229d-c42d-43ac-a3ff-527ca68d3796 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.072 2 INFO nova.compute.manager [-] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] VM Stopped (Lifecycle Event)
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.090 2 DEBUG nova.compute.manager [None req-7e539d33-6695-4830-9c53-ec794008e86a - - - - - -] [instance: 4fef229d-c42d-43ac-a3ff-527ca68d3796] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.293 2 DEBUG nova.objects.instance [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'migration_context' on Instance uuid e2c535ed-c6cf-4684-82dc-ae59904e7381 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.311 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.312 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Ensure instance console log exists: /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.313 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.313 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.313 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.554 2 INFO nova.virt.libvirt.driver [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Deleting instance files /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141_del
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.555 2 INFO nova.virt.libvirt.driver [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Deletion of /var/lib/nova/instances/f563ffb7-1ade-4b71-ab68-115322eef141_del complete
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.604 2 INFO nova.compute.manager [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Took 2.47 seconds to destroy the instance on the hypervisor.
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.605 2 DEBUG oslo.service.loopingcall [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.606 2 DEBUG nova.compute.manager [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.606 2 DEBUG nova.network.neutron [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.916 2 DEBUG nova.network.neutron [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updated VIF entry in instance network info cache for port 4f1564c5-865b-45e8-afe1-7f7c3748c854. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.917 2 DEBUG nova.network.neutron [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [{"id": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "address": "fa:16:3e:6d:76:5c", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4f1564c5-86", "ovs_interfaceid": "4f1564c5-865b-45e8-afe1-7f7c3748c854", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.940 2 DEBUG oslo_concurrency.lockutils [req-c3ab3bf5-ae1e-4630-a5c7-0219564c1eb2 req-ea3df3c6-7b9f-4900-831c-054b3c1f5fe9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f563ffb7-1ade-4b71-ab68-115322eef141" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:13:26 np0005473739 nova_compute[259550]: 2025-10-07 14:13:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:13:27 np0005473739 podman[313209]: 2025-10-07 14:13:27.104778878 +0000 UTC m=+0.079306216 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:13:27 np0005473739 podman[313208]: 2025-10-07 14:13:27.133183985 +0000 UTC m=+0.109128340 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.261 2 DEBUG nova.network.neutron [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.293 2 INFO nova.compute.manager [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Took 0.69 seconds to deallocate network for instance.
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.377 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.377 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.402 2 DEBUG nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.439 2 DEBUG nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.440 2 DEBUG nova.compute.provider_tree [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.458 2 DEBUG nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.481 2 DEBUG nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.591 2 DEBUG oslo_concurrency.processutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:13:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1466: 305 pgs: 305 active+clean; 311 MiB data, 558 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 2.9 MiB/s wr, 218 op/s
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  7 10:13:27 np0005473739 nova_compute[259550]: 2025-10-07 14:13:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2915733219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.067 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Successfully updated port: 98f1c539-b9b9-4abb-89cf-268c353264ed _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.069 2 DEBUG oslo_concurrency.processutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.076 2 DEBUG nova.compute.provider_tree [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.080 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.080 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquired lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.080 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.093 2 DEBUG nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.116 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.119 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.107s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.119 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.119 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.119 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.159 2 DEBUG nova.compute.manager [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.159 2 DEBUG oslo_concurrency.lockutils [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.160 2 DEBUG oslo_concurrency.lockutils [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.160 2 DEBUG oslo_concurrency.lockutils [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.160 2 DEBUG nova.compute.manager [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] No waiting events found dispatching network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.160 2 WARNING nova.compute.manager [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received unexpected event network-vif-plugged-4f1564c5-865b-45e8-afe1-7f7c3748c854 for instance with vm_state deleted and task_state None.
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.160 2 DEBUG nova.compute.manager [req-ac48dd7a-8063-4c3c-b3a4-b29d66085268 req-0e371a6c-c7ea-48a2-a14d-230ad6f3a17d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Received event network-vif-deleted-4f1564c5-865b-45e8-afe1-7f7c3748c854 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.184 2 INFO nova.scheduler.client.report [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Deleted allocations for instance f563ffb7-1ade-4b71-ab68-115322eef141
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.211 2 DEBUG nova.compute.manager [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-changed-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.211 2 DEBUG nova.compute.manager [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Refreshing instance network info cache due to event network-changed-98f1c539-b9b9-4abb-89cf-268c353264ed. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.212 2 DEBUG oslo_concurrency.lockutils [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.239 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Instance cache missing network info. _get_preexisting_port_ids _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.262 2 DEBUG oslo_concurrency.lockutils [None req-40f5328d-6808-4f4c-b1ad-abda34f7fb64 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "f563ffb7-1ade-4b71-ab68-115322eef141" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2607777912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.616 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.700 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.703 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.707 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.707 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.710 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.711 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000002f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.931 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.932 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3618MB free_disk=59.84690475463867GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.932 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:28 np0005473739 nova_compute[259550]: 2025-10-07 14:13:28.933 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.008 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 867307d6-0b3f-4a3e-9dc4-a05221e2f080 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.009 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.009 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance cb7392fb-c42f-47e9-b661-7cbf3dfe1263 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.009 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance e2c535ed-c6cf-4684-82dc-ae59904e7381 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.009 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.009 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.015 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.015 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.015 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.016 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.016 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.017 2 INFO nova.compute.manager [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Terminating instance#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.018 2 DEBUG nova.compute.manager [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:13:29 np0005473739 kernel: tapc8a4482e-99 (unregistering): left promiscuous mode
Oct  7 10:13:29 np0005473739 NetworkManager[44949]: <info>  [1759846409.0948] device (tapc8a4482e-99): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:13:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:29Z|00356|binding|INFO|Releasing lport c8a4482e-996f-427f-8071-48124530250e from this chassis (sb_readonly=0)
Oct  7 10:13:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:29Z|00357|binding|INFO|Setting lport c8a4482e-996f-427f-8071-48124530250e down in Southbound
Oct  7 10:13:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:29Z|00358|binding|INFO|Removing iface tapc8a4482e-99 ovn-installed in OVS
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.129 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:4b:9c 10.100.0.7'], port_security=['fa:16:3e:c4:4b:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cb7392fb-c42f-47e9-b661-7cbf3dfe1263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c8a4482e-996f-427f-8071-48124530250e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.131 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c8a4482e-996f-427f-8071-48124530250e in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.132 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.133 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[899678fc-13cc-45f2-8d03-9ee82a4d5760]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.136 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 namespace which is not needed anymore#033[00m
Oct  7 10:13:29 np0005473739 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000002f.scope: Deactivated successfully.
Oct  7 10:13:29 np0005473739 systemd[1]: machine-qemu\x2d54\x2dinstance\x2d0000002f.scope: Consumed 8.464s CPU time.
Oct  7 10:13:29 np0005473739 systemd-machined[214580]: Machine qemu-54-instance-0000002f terminated.
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.210 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:29 np0005473739 kernel: tapc8a4482e-99: entered promiscuous mode
Oct  7 10:13:29 np0005473739 systemd-udevd[313292]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:13:29 np0005473739 NetworkManager[44949]: <info>  [1759846409.2444] manager: (tapc8a4482e-99): new Tun device (/org/freedesktop/NetworkManager/Devices/174)
Oct  7 10:13:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:29Z|00359|binding|INFO|Claiming lport c8a4482e-996f-427f-8071-48124530250e for this chassis.
Oct  7 10:13:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:29Z|00360|binding|INFO|c8a4482e-996f-427f-8071-48124530250e: Claiming fa:16:3e:c4:4b:9c 10.100.0.7
Oct  7 10:13:29 np0005473739 kernel: tapc8a4482e-99 (unregistering): left promiscuous mode
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.271 2 INFO nova.virt.libvirt.driver [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Instance destroyed successfully.#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.272 2 DEBUG nova.objects.instance [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lazy-loading 'resources' on Instance uuid cb7392fb-c42f-47e9-b661-7cbf3dfe1263 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:29Z|00361|binding|INFO|Releasing lport c8a4482e-996f-427f-8071-48124530250e from this chassis (sb_readonly=0)
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.280 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:4b:9c 10.100.0.7'], port_security=['fa:16:3e:c4:4b:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cb7392fb-c42f-47e9-b661-7cbf3dfe1263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c8a4482e-996f-427f-8071-48124530250e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.287 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c4:4b:9c 10.100.0.7'], port_security=['fa:16:3e:c4:4b:9c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'cb7392fb-c42f-47e9-b661-7cbf3dfe1263', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'de6794f6448744329cf2081eb5b889a5', 'neutron:revision_number': '4', 'neutron:security_group_ids': '87b0a1b1-e544-4abe-aca3-f2c86cefe8c7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cb35c390-e270-4bf1-8877-4c738e025b16, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c8a4482e-996f-427f-8071-48124530250e) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.289 2 DEBUG nova.virt.libvirt.vif [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerDiskConfigTestJSON-server-738566473',display_name='tempest-ServerDiskConfigTestJSON-server-738566473',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverdiskconfigtestjson-server-738566473',id=47,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:22Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='de6794f6448744329cf2081eb5b889a5',ramdisk_id='',reservation_id='r-zl7hxhm4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerDiskConfigTestJSON-831175870',owner_user_name='tempest-ServerDiskConfigTestJSON-831175870-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:27Z,user_data=None,user_id='7bf568df6a8d461a83d287493b393589',uuid=cb7392fb-c42f-47e9-b661-7cbf3dfe1263,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.289 2 DEBUG nova.network.os_vif_util [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converting VIF {"id": "c8a4482e-996f-427f-8071-48124530250e", "address": "fa:16:3e:c4:4b:9c", "network": {"id": "d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777", "bridge": "br-int", "label": "tempest-ServerDiskConfigTestJSON-2143494792-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "de6794f6448744329cf2081eb5b889a5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc8a4482e-99", "ovs_interfaceid": "c8a4482e-996f-427f-8071-48124530250e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.290 2 DEBUG nova.network.os_vif_util [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.291 2 DEBUG os_vif [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.295 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc8a4482e-99, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:29 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [NOTICE]   (312929) : haproxy version is 2.8.14-c23fe91
Oct  7 10:13:29 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [NOTICE]   (312929) : path to executable is /usr/sbin/haproxy
Oct  7 10:13:29 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [WARNING]  (312929) : Exiting Master process...
Oct  7 10:13:29 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [ALERT]    (312929) : Current worker (312940) exited with code 143 (Terminated)
Oct  7 10:13:29 np0005473739 neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777[312917]: [WARNING]  (312929) : All workers exited. Exiting... (0)
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:29 np0005473739 systemd[1]: libpod-6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985.scope: Deactivated successfully.
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.307 2 INFO os_vif [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c4:4b:9c,bridge_name='br-int',has_traffic_filtering=True,id=c8a4482e-996f-427f-8071-48124530250e,network=Network(d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc8a4482e-99')#033[00m
Oct  7 10:13:29 np0005473739 podman[313313]: 2025-10-07 14:13:29.313322181 +0000 UTC m=+0.073315468 container died 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  7 10:13:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985-userdata-shm.mount: Deactivated successfully.
Oct  7 10:13:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-650a9a6c1f663f200a5eecd27e0eaaee72d722619b67ac6d8e8939c3ec7ae27d-merged.mount: Deactivated successfully.
Oct  7 10:13:29 np0005473739 podman[313313]: 2025-10-07 14:13:29.369107547 +0000 UTC m=+0.129100834 container cleanup 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:13:29 np0005473739 systemd[1]: libpod-conmon-6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985.scope: Deactivated successfully.
Oct  7 10:13:29 np0005473739 podman[313369]: 2025-10-07 14:13:29.45709062 +0000 UTC m=+0.056272151 container remove 6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.463 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[537c8d9c-a3fb-400e-a52d-9bfc8a6822f7]: (4, ('Tue Oct  7 02:13:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985)\n6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985\nTue Oct  7 02:13:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 (6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985)\n6f4950305f31abfcfa7771d0b5ef59aa7b975ada4338c8d30b69180913abb985\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.466 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c67f2470-43f0-4598-94e8-d151ea670e35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.468 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd2cb8ca0-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:29 np0005473739 kernel: tapd2cb8ca0-10: left promiscuous mode
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.486 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5e1adab4-bbfe-4202-b8f0-750e60f5fae6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.514 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[968b5973-c742-48b1-b40d-4efcb83e2523]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.516 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[10f5ce2d-1fbc-489c-b2f4-9cc176bd3939]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.536 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92b6e708-04c1-4d99-96dd-45e040c19871]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697981, 'reachable_time': 44724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313397, 'error': None, 'target': 'ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.538 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.539 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[798cc4fd-1dd8-4c87-a4ec-1fafddd8e1bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:29 np0005473739 systemd[1]: run-netns-ovnmeta\x2dd2cb8ca0\x2d1272\x2d4fa9\x2db4ed\x2d8d0a1e3df777.mount: Deactivated successfully.
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.540 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c8a4482e-996f-427f-8071-48124530250e in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.542 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.543 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b47d8f-5b95-458a-867f-440a6a046865]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.544 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c8a4482e-996f-427f-8071-48124530250e in datapath d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777 unbound from our chassis#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.546 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d2cb8ca0-1272-4fa9-b4ed-8d0a1e3df777, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:13:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:29.547 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7f81e3-71f2-45f6-9d95-44a70384f013]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1467: 305 pgs: 305 active+clean; 303 MiB data, 554 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 277 op/s
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2315139432' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.757 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.548s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.763 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.772 2 INFO nova.virt.libvirt.driver [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Deleting instance files /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263_del#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.773 2 INFO nova.virt.libvirt.driver [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Deletion of /var/lib/nova/instances/cb7392fb-c42f-47e9-b661-7cbf3dfe1263_del complete#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.778 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.808 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.809 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #63. Immutable memtables: 0.
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.813537) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 33] Flushing memtable with next log file: 63
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409813589, "job": 33, "event": "flush_started", "num_memtables": 1, "num_entries": 2555, "num_deletes": 523, "total_data_size": 3402637, "memory_usage": 3459904, "flush_reason": "Manual Compaction"}
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 33] Level-0 flush table #64: started
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.822 2 INFO nova.compute.manager [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.823 2 DEBUG oslo.service.loopingcall [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.823 2 DEBUG nova.compute.manager [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.823 2 DEBUG nova.network.neutron [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409831271, "cf_name": "default", "job": 33, "event": "table_file_creation", "file_number": 64, "file_size": 3338260, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 27985, "largest_seqno": 30539, "table_properties": {"data_size": 3327064, "index_size": 6794, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 3333, "raw_key_size": 26751, "raw_average_key_size": 20, "raw_value_size": 3302348, "raw_average_value_size": 2492, "num_data_blocks": 296, "num_entries": 1325, "num_filter_entries": 1325, "num_deletions": 523, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846227, "oldest_key_time": 1759846227, "file_creation_time": 1759846409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 64, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 33] Flush lasted 17791 microseconds, and 7149 cpu microseconds.
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.831327) [db/flush_job.cc:967] [default] [JOB 33] Level-0 flush table #64: 3338260 bytes OK
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.831355) [db/memtable_list.cc:519] [default] Level-0 commit table #64 started
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.833555) [db/memtable_list.cc:722] [default] Level-0 commit table #64: memtable #1 done
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.833571) EVENT_LOG_v1 {"time_micros": 1759846409833565, "job": 33, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.833593) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 33] Try to delete WAL files size 3390673, prev total WAL file size 3390673, number of live WAL files 2.
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000060.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.834799) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730032353130' seq:72057594037927935, type:22 .. '7061786F730032373632' seq:0, type:0; will stop at (end)
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 34] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 33 Base level 0, inputs: [64(3260KB)], [62(8566KB)]
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409834882, "job": 34, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [64], "files_L6": [62], "score": -1, "input_data_size": 12110702, "oldest_snapshot_seqno": -1}
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.875 2 DEBUG nova.network.neutron [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Updating instance_info_cache with network_info: [{"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 34] Generated table #65: 5535 keys, 10369275 bytes, temperature: kUnknown
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409904579, "cf_name": "default", "job": 34, "event": "table_file_creation", "file_number": 65, "file_size": 10369275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10327740, "index_size": 26577, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13893, "raw_key_size": 138933, "raw_average_key_size": 25, "raw_value_size": 10223725, "raw_average_value_size": 1847, "num_data_blocks": 1087, "num_entries": 5535, "num_filter_entries": 5535, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846409, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 65, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.905 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Releasing lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.904860) [db/compaction/compaction_job.cc:1663] [default] [JOB 34] Compacted 1@0 + 1@6 files to L6 => 10369275 bytes
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.906534) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 173.6 rd, 148.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 8.4 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 6586, records dropped: 1051 output_compression: NoCompression
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.906553) EVENT_LOG_v1 {"time_micros": 1759846409906544, "job": 34, "event": "compaction_finished", "compaction_time_micros": 69782, "compaction_time_cpu_micros": 27007, "output_level": 6, "num_output_files": 1, "total_output_size": 10369275, "num_input_records": 6586, "num_output_records": 5535, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.906 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Instance network_info: |[{"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000064.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409907222, "job": 34, "event": "table_file_deletion", "file_number": 64}
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.907 2 DEBUG oslo_concurrency.lockutils [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.907 2 DEBUG nova.network.neutron [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Refreshing network info cache for port 98f1c539-b9b9-4abb-89cf-268c353264ed _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000062.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846409908742, "job": 34, "event": "table_file_deletion", "file_number": 62}
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.834678) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.908875) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.908882) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.908884) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.908886) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:29.908888) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.910 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Start _get_guest_xml network_info=[{"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.915 2 WARNING nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.922 2 DEBUG nova.virt.libvirt.host [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.923 2 DEBUG nova.virt.libvirt.host [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.930 2 DEBUG nova.virt.libvirt.host [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.930 2 DEBUG nova.virt.libvirt.host [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.931 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.931 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.931 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.931 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.932 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.932 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.932 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.932 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.932 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.933 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.933 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.933 2 DEBUG nova.virt.hardware [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:13:29 np0005473739 nova_compute[259550]: 2025-10-07 14:13:29.937 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422403577' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.446 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.475 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.481 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.519 2 DEBUG nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-unplugged-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.520 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.520 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.521 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.521 2 DEBUG nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] No waiting events found dispatching network-vif-unplugged-c8a4482e-996f-427f-8071-48124530250e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.521 2 DEBUG nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-unplugged-c8a4482e-996f-427f-8071-48124530250e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.521 2 DEBUG nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.522 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.522 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.522 2 DEBUG oslo_concurrency.lockutils [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.523 2 DEBUG nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] No waiting events found dispatching network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.523 2 WARNING nova.compute.manager [req-52807ed1-74e4-45fe-9a26-17c9d1d8134f req-ee139616-ba2a-4007-ae97-3905ccf8a620 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received unexpected event network-vif-plugged-c8a4482e-996f-427f-8071-48124530250e for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.757 2 DEBUG nova.network.neutron [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.788 2 INFO nova.compute.manager [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Took 0.96 seconds to deallocate network for instance.#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.809 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.842 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.843 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:13:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2838868307' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:13:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.920 2 DEBUG oslo_concurrency.processutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.968 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.971 2 DEBUG nova.virt.libvirt.vif [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1601957698',display_name='tempest-DeleteServersTestJSON-server-1601957698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1601957698',id=48,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-bm0zqvip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-18
71282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:23Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=e2c535ed-c6cf-4684-82dc-ae59904e7381,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.971 2 DEBUG nova.network.os_vif_util [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.972 2 DEBUG nova.network.os_vif_util [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.974 2 DEBUG nova.objects.instance [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2c535ed-c6cf-4684-82dc-ae59904e7381 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:30 np0005473739 nova_compute[259550]: 2025-10-07 14:13:30.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.004 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  <uuid>e2c535ed-c6cf-4684-82dc-ae59904e7381</uuid>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  <name>instance-00000030</name>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <nova:name>tempest-DeleteServersTestJSON-server-1601957698</nova:name>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:13:29</nova:creationTime>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <nova:user uuid="a0452296b3a942e893961944a0203d98">tempest-DeleteServersTestJSON-1871282594-project-member</nova:user>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <nova:project uuid="06322ecec4b94a5d94e34cc8632d4104">tempest-DeleteServersTestJSON-1871282594</nova:project>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <nova:port uuid="98f1c539-b9b9-4abb-89cf-268c353264ed">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <entry name="serial">e2c535ed-c6cf-4684-82dc-ae59904e7381</entry>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <entry name="uuid">e2c535ed-c6cf-4684-82dc-ae59904e7381</entry>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/e2c535ed-c6cf-4684-82dc-ae59904e7381_disk">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:41:a7:57"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <target dev="tap98f1c539-b9"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/console.log" append="off"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:13:31 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:13:31 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:13:31 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:13:31 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.010 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Preparing to wait for external event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.010 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.011 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.011 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.012 2 DEBUG nova.virt.libvirt.vif [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1601957698',display_name='tempest-DeleteServersTestJSON-server-1601957698',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1601957698',id=48,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-bm0zqvip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:23Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=e2c535ed-c6cf-4684-82dc-ae59904e7381,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.012 2 DEBUG nova.network.os_vif_util [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.013 2 DEBUG nova.network.os_vif_util [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.013 2 DEBUG os_vif [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.015 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.018 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap98f1c539-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.019 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap98f1c539-b9, col_values=(('external_ids', {'iface-id': '98f1c539-b9b9-4abb-89cf-268c353264ed', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:a7:57', 'vm-uuid': 'e2c535ed-c6cf-4684-82dc-ae59904e7381'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:31 np0005473739 NetworkManager[44949]: <info>  [1759846411.0482] manager: (tap98f1c539-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/175)
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.052 2 INFO os_vif [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9')#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.125 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.125 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.125 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] No VIF found with MAC fa:16:3e:41:a7:57, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.126 2 INFO nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Using config drive#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.146 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1169457586' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.425 2 DEBUG oslo_concurrency.processutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.430 2 DEBUG nova.compute.provider_tree [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.457 2 DEBUG nova.scheduler.client.report [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.483 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.641s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.514 2 INFO nova.scheduler.client.report [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Deleted allocations for instance cb7392fb-c42f-47e9-b661-7cbf3dfe1263#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.537 2 INFO nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Creating config drive at /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.542 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_0pvskb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.599 2 DEBUG nova.network.neutron [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Updated VIF entry in instance network info cache for port 98f1c539-b9b9-4abb-89cf-268c353264ed. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.600 2 DEBUG nova.network.neutron [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Updating instance_info_cache with network_info: [{"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.602 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.602 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.603 2 DEBUG nova.objects.instance [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.638 2 DEBUG oslo_concurrency.lockutils [req-5396cb91-019d-435a-a547-434f9a8f6e1e req-90f878b3-3cd6-40f7-923d-78b8b868e175 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e2c535ed-c6cf-4684-82dc-ae59904e7381" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.649 2 DEBUG oslo_concurrency.lockutils [None req-e27ab727-e670-48aa-9311-bdb9ca3f0e28 7bf568df6a8d461a83d287493b393589 de6794f6448744329cf2081eb5b889a5 - - default default] Lock "cb7392fb-c42f-47e9-b661-7cbf3dfe1263" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.682 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpn_0pvskb" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1468: 305 pgs: 305 active+clean; 274 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 3.9 MiB/s wr, 276 op/s
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.710 2 DEBUG nova.storage.rbd_utils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] rbd image e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.716 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.907 2 DEBUG oslo_concurrency.processutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config e2c535ed-c6cf-4684-82dc-ae59904e7381_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.908 2 INFO nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Deleting local config drive /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381/disk.config because it was imported into RBD.#033[00m
Oct  7 10:13:31 np0005473739 kernel: tap98f1c539-b9: entered promiscuous mode
Oct  7 10:13:31 np0005473739 NetworkManager[44949]: <info>  [1759846411.9680] manager: (tap98f1c539-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/176)
Oct  7 10:13:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:31Z|00362|binding|INFO|Claiming lport 98f1c539-b9b9-4abb-89cf-268c353264ed for this chassis.
Oct  7 10:13:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:31Z|00363|binding|INFO|98f1c539-b9b9-4abb-89cf-268c353264ed: Claiming fa:16:3e:41:a7:57 10.100.0.13
Oct  7 10:13:31 np0005473739 nova_compute[259550]: 2025-10-07 14:13:31.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:31 np0005473739 NetworkManager[44949]: <info>  [1759846411.9837] device (tap98f1c539-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:13:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:31.984 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:a7:57 10.100.0.13'], port_security=['fa:16:3e:41:a7:57 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e2c535ed-c6cf-4684-82dc-ae59904e7381', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=98f1c539-b9b9-4abb-89cf-268c353264ed) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:31 np0005473739 NetworkManager[44949]: <info>  [1759846411.9862] device (tap98f1c539-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:13:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:31.986 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 98f1c539-b9b9-4abb-89cf-268c353264ed in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f bound to our chassis#033[00m
Oct  7 10:13:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:31.989 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8accac57-ab45-4b9b-95ed-86c2c65f202f#033[00m
Oct  7 10:13:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:32Z|00364|binding|INFO|Setting lport 98f1c539-b9b9-4abb-89cf-268c353264ed ovn-installed in OVS
Oct  7 10:13:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:32Z|00365|binding|INFO|Setting lport 98f1c539-b9b9-4abb-89cf-268c353264ed up in Southbound
Oct  7 10:13:32 np0005473739 nova_compute[259550]: 2025-10-07 14:13:32.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.002 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ce51449-e3b2-4b2d-9bc5-ca09c97d63ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.003 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8accac57-a1 in ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:13:32 np0005473739 nova_compute[259550]: 2025-10-07 14:13:32.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.006 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8accac57-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.007 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4babb802-b248-407a-9f49-082794630d46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.008 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f937b052-8fb9-4200-b4e7-8ce4cc6a40cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 systemd-machined[214580]: New machine qemu-55-instance-00000030.
Oct  7 10:13:32 np0005473739 nova_compute[259550]: 2025-10-07 14:13:32.021 2 DEBUG nova.objects.instance [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:32 np0005473739 systemd[1]: Started Virtual Machine qemu-55-instance-00000030.
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.025 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[96e81977-d782-4ee5-8e7a-982be04ec529]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.043 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9bb17d-5068-4a18-8a57-53bb6b9b9d55]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 nova_compute[259550]: 2025-10-07 14:13:32.043 2 DEBUG nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.081 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6f2d52-9bd9-4b57-8feb-4d8a6399ee05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.092 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a29574d6-d314-4c93-8c87-bc21f3b62b3f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 NetworkManager[44949]: <info>  [1759846412.0937] manager: (tap8accac57-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/177)
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.147 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[71b149be-5149-4d50-bdb9-91119808c0b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.151 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[61c9ef59-7b17-440a-809f-aacc7babb882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 NetworkManager[44949]: <info>  [1759846412.1856] device (tap8accac57-a0): carrier: link connected
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.191 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[85aa8db2-01ad-4150-b046-b24b6315bef5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 nova_compute[259550]: 2025-10-07 14:13:32.209 2 DEBUG nova.policy [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.211 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d74a38ec-d88e-4fe1-bd33-2b4ae5eab4a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699574, 'reachable_time': 29541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313591, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.229 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd3504d6-365e-4cb4-b1db-d2c827c1f7db]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fecf:e89f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 699574, 'tstamp': 699574}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313592, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.249 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a9eb90-c1c4-4f30-9804-8e688dd3cec5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8accac57-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:cf:e8:9f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 115], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699574, 'reachable_time': 29541, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 313593, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.276 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e8ea3d-9118-45c2-8a32-08ac3e454d94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002005269380412841 of space, bias 1.0, pg target 0.6015808141238523 quantized to 32 (current 32)
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:13:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.342 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a393b27f-03f3-4a52-b09c-563ff6c05fdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.344 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.345 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.346 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8accac57-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:32 np0005473739 NetworkManager[44949]: <info>  [1759846412.3493] manager: (tap8accac57-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/178)
Oct  7 10:13:32 np0005473739 kernel: tap8accac57-a0: entered promiscuous mode
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.354 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8accac57-a0, col_values=(('external_ids', {'iface-id': 'a487ff40-6fa2-404e-b7fc-dbcc968fecc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:32Z|00366|binding|INFO|Releasing lport a487ff40-6fa2-404e-b7fc-dbcc968fecc3 from this chassis (sb_readonly=0)
Oct  7 10:13:32 np0005473739 nova_compute[259550]: 2025-10-07 14:13:32.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:32 np0005473739 nova_compute[259550]: 2025-10-07 14:13:32.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.380 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c6cb93c4-8b99-48e2-923e-e37fd72286ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.382 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/8accac57-ab45-4b9b-95ed-86c2c65f202f.pid.haproxy
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 8accac57-ab45-4b9b-95ed-86c2c65f202f
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:13:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:32.383 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'env', 'PROCESS_TAG=haproxy-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8accac57-ab45-4b9b-95ed-86c2c65f202f.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:13:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:13:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723362424' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:13:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:13:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/723362424' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:13:32 np0005473739 podman[313625]: 2025-10-07 14:13:32.793798227 +0000 UTC m=+0.059495025 container create 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:13:32 np0005473739 systemd[1]: Started libpod-conmon-175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147.scope.
Oct  7 10:13:32 np0005473739 podman[313625]: 2025-10-07 14:13:32.761709604 +0000 UTC m=+0.027406422 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:13:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:13:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/181ed1164f571f18c8510bf11989ae73eb1b54ec91d8a852ede1b73c5ee4b345/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:13:32 np0005473739 podman[313625]: 2025-10-07 14:13:32.886050882 +0000 UTC m=+0.151747700 container init 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:13:32 np0005473739 podman[313625]: 2025-10-07 14:13:32.89167033 +0000 UTC m=+0.157367128 container start 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:13:32 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [NOTICE]   (313686) : New worker (313688) forked
Oct  7 10:13:32 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [NOTICE]   (313686) : Loading success.
Oct  7 10:13:32 np0005473739 nova_compute[259550]: 2025-10-07 14:13:32.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.207 2 DEBUG nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Successfully created port: f66416c8-d3f2-4dfa-b30b-8505f9a9120c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.347 2 DEBUG nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Received event network-vif-deleted-c8a4482e-996f-427f-8071-48124530250e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.347 2 DEBUG nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.347 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.348 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.348 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.348 2 DEBUG nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Processing event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.349 2 DEBUG nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.349 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.350 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.350 2 DEBUG oslo_concurrency.lockutils [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.350 2 DEBUG nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] No waiting events found dispatching network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.350 2 WARNING nova.compute.manager [req-a1f1473b-ae52-4d7c-bf21-767f78faacfe req-68c35e99-d2f4-4487-ab2d-c8a623c094ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received unexpected event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.388 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846413.3875046, e2c535ed-c6cf-4684-82dc-ae59904e7381 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.388 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] VM Started (Lifecycle Event)#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.390 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.393 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.397 2 INFO nova.virt.libvirt.driver [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Instance spawned successfully.#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.397 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.421 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.424 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.595 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.595 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846413.3877447, e2c535ed-c6cf-4684-82dc-ae59904e7381 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.595 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.600 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.600 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.601 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.601 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.602 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.602 2 DEBUG nova.virt.libvirt.driver [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.629 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.632 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846413.3930922, e2c535ed-c6cf-4684-82dc-ae59904e7381 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.632 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.659 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.662 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.668 2 INFO nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Took 9.76 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.669 2 DEBUG nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.680 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:13:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1469: 305 pgs: 305 active+clean; 274 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.9 MiB/s wr, 251 op/s
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.727 2 INFO nova.compute.manager [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Took 10.91 seconds to build instance.#033[00m
Oct  7 10:13:33 np0005473739 nova_compute[259550]: 2025-10-07 14:13:33.751 2 DEBUG oslo_concurrency.lockutils [None req-5acdfc00-9c5d-40f1-901e-38d03e428e1c a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:34 np0005473739 nova_compute[259550]: 2025-10-07 14:13:34.087 2 DEBUG nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Successfully updated port: f66416c8-d3f2-4dfa-b30b-8505f9a9120c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:13:34 np0005473739 nova_compute[259550]: 2025-10-07 14:13:34.100 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:34 np0005473739 nova_compute[259550]: 2025-10-07 14:13:34.101 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:34 np0005473739 nova_compute[259550]: 2025-10-07 14:13:34.101 2 DEBUG nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:13:34 np0005473739 nova_compute[259550]: 2025-10-07 14:13:34.244 2 DEBUG nova.compute.manager [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-changed-f66416c8-d3f2-4dfa-b30b-8505f9a9120c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:34 np0005473739 nova_compute[259550]: 2025-10-07 14:13:34.244 2 DEBUG nova.compute.manager [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing instance network info cache due to event network-changed-f66416c8-d3f2-4dfa-b30b-8505f9a9120c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:13:34 np0005473739 nova_compute[259550]: 2025-10-07 14:13:34.245 2 DEBUG oslo_concurrency.lockutils [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:34 np0005473739 nova_compute[259550]: 2025-10-07 14:13:34.286 2 WARNING nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it#033[00m
Oct  7 10:13:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1470: 305 pgs: 305 active+clean; 246 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.9 MiB/s wr, 314 op/s
Oct  7 10:13:35 np0005473739 nova_compute[259550]: 2025-10-07 14:13:35.895 2 DEBUG nova.objects.instance [None req-94505fe2-3374-4c4c-ba18-e84125b5239a a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2c535ed-c6cf-4684-82dc-ae59904e7381 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:35 np0005473739 nova_compute[259550]: 2025-10-07 14:13:35.917 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846415.9173257, e2c535ed-c6cf-4684-82dc-ae59904e7381 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:35 np0005473739 nova_compute[259550]: 2025-10-07 14:13:35.918 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:13:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:35 np0005473739 nova_compute[259550]: 2025-10-07 14:13:35.937 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:35 np0005473739 nova_compute[259550]: 2025-10-07 14:13:35.940 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:13:35 np0005473739 nova_compute[259550]: 2025-10-07 14:13:35.973 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.251 2 DEBUG nova.network.neutron [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.272 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.273 2 DEBUG oslo_concurrency.lockutils [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.274 2 DEBUG nova.network.neutron [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Refreshing network info cache for port f66416c8-d3f2-4dfa-b30b-8505f9a9120c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.277 2 DEBUG nova.virt.libvirt.vif [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.278 2 DEBUG nova.network.os_vif_util [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.278 2 DEBUG nova.network.os_vif_util [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.279 2 DEBUG os_vif [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.279 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.280 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.284 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf66416c8-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.285 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf66416c8-d3, col_values=(('external_ids', {'iface-id': 'f66416c8-d3f2-4dfa-b30b-8505f9a9120c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:47:a0:0a', 'vm-uuid': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:36 np0005473739 NetworkManager[44949]: <info>  [1759846416.2878] manager: (tapf66416c8-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/179)
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.297 2 INFO os_vif [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3')#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.298 2 DEBUG nova.virt.libvirt.vif [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.298 2 DEBUG nova.network.os_vif_util [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.299 2 DEBUG nova.network.os_vif_util [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.302 2 DEBUG nova.virt.libvirt.guest [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:47:a0:0a"/>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <target dev="tapf66416c8-d3"/>
Oct  7 10:13:36 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:13:36 np0005473739 nova_compute[259550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  7 10:13:36 np0005473739 kernel: tapf66416c8-d3: entered promiscuous mode
Oct  7 10:13:36 np0005473739 NetworkManager[44949]: <info>  [1759846416.3162] manager: (tapf66416c8-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/180)
Oct  7 10:13:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:36Z|00367|binding|INFO|Claiming lport f66416c8-d3f2-4dfa-b30b-8505f9a9120c for this chassis.
Oct  7 10:13:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:36Z|00368|binding|INFO|f66416c8-d3f2-4dfa-b30b-8505f9a9120c: Claiming fa:16:3e:47:a0:0a 10.100.0.14
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.323 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:a0:0a 10.100.0.14'], port_security=['fa:16:3e:47:a0:0a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f66416c8-d3f2-4dfa-b30b-8505f9a9120c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.325 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f66416c8-d3f2-4dfa-b30b-8505f9a9120c in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.326 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:13:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:36Z|00369|binding|INFO|Setting lport f66416c8-d3f2-4dfa-b30b-8505f9a9120c ovn-installed in OVS
Oct  7 10:13:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:36Z|00370|binding|INFO|Setting lport f66416c8-d3f2-4dfa-b30b-8505f9a9120c up in Southbound
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.351 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8e361fcd-4ea5-45b8-b84e-6749b742c0c2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:36 np0005473739 systemd-udevd[313708]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:13:36 np0005473739 NetworkManager[44949]: <info>  [1759846416.3754] device (tapf66416c8-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:13:36 np0005473739 NetworkManager[44949]: <info>  [1759846416.3764] device (tapf66416c8-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.390 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a36ad1aa-ef67-472f-a49e-f311fd8f715d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.396 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[43fefce8-1b63-450e-bb66-ef63e2067e7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.433 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[480a11bb-6510-4c0b-9ad1-f12c9c9ce070]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.453 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a451be80-0310-43e5-8936-0d3324441f3e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697305, 'reachable_time': 34760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313715, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.474 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c80b7264-5a5d-4cc1-9819-ce0e5b601108]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697319, 'tstamp': 697319}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313716, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697323, 'tstamp': 697323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313716, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.476 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.480 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.481 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.481 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.482 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:36 np0005473739 kernel: tap98f1c539-b9 (unregistering): left promiscuous mode
Oct  7 10:13:36 np0005473739 NetworkManager[44949]: <info>  [1759846416.5503] device (tap98f1c539-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.560 2 DEBUG nova.virt.libvirt.driver [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.561 2 DEBUG nova.virt.libvirt.driver [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.561 2 DEBUG nova.virt.libvirt.driver [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:5f:75:d2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.562 2 DEBUG nova.virt.libvirt.driver [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:47:a0:0a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:36Z|00371|binding|INFO|Releasing lport 98f1c539-b9b9-4abb-89cf-268c353264ed from this chassis (sb_readonly=0)
Oct  7 10:13:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:36Z|00372|binding|INFO|Setting lport 98f1c539-b9b9-4abb-89cf-268c353264ed down in Southbound
Oct  7 10:13:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:36Z|00373|binding|INFO|Removing iface tap98f1c539-b9 ovn-installed in OVS
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.581 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:a7:57 10.100.0.13'], port_security=['fa:16:3e:41:a7:57 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'e2c535ed-c6cf-4684-82dc-ae59904e7381', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '06322ecec4b94a5d94e34cc8632d4104', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bc4243f5-ae46-415b-bf7d-438ed1b9d047', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1a4249a4-aa26-443d-945d-f02e79705aea, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=98f1c539-b9b9-4abb-89cf-268c353264ed) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.582 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 98f1c539-b9b9-4abb-89cf-268c353264ed in datapath 8accac57-ab45-4b9b-95ed-86c2c65f202f unbound from our chassis#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.584 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8accac57-ab45-4b9b-95ed-86c2c65f202f, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.586 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92add2a9-16c0-4da0-9373-e8e9351a6428]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:36.587 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f namespace which is not needed anymore#033[00m
Oct  7 10:13:36 np0005473739 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000030.scope: Deactivated successfully.
Oct  7 10:13:36 np0005473739 systemd[1]: machine-qemu\x2d55\x2dinstance\x2d00000030.scope: Consumed 3.934s CPU time.
Oct  7 10:13:36 np0005473739 systemd-machined[214580]: Machine qemu-55-instance-00000030 terminated.
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.618 2 DEBUG nova.virt.libvirt.guest [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:13:36</nova:creationTime>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:13:36 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:    <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct  7 10:13:36 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:    <nova:port uuid="f66416c8-d3f2-4dfa-b30b-8505f9a9120c">
Oct  7 10:13:36 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:13:36 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:13:36 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:13:36 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.687 2 DEBUG nova.compute.manager [None req-94505fe2-3374-4c4c-ba18-e84125b5239a a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.758 2 DEBUG oslo_concurrency.lockutils [None req-ff279274-4af7-4741-990f-aab0f612df20 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:36 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [NOTICE]   (313686) : haproxy version is 2.8.14-c23fe91
Oct  7 10:13:36 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [NOTICE]   (313686) : path to executable is /usr/sbin/haproxy
Oct  7 10:13:36 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [WARNING]  (313686) : Exiting Master process...
Oct  7 10:13:36 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [ALERT]    (313686) : Current worker (313688) exited with code 143 (Terminated)
Oct  7 10:13:36 np0005473739 neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f[313674]: [WARNING]  (313686) : All workers exited. Exiting... (0)
Oct  7 10:13:36 np0005473739 systemd[1]: libpod-175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147.scope: Deactivated successfully.
Oct  7 10:13:36 np0005473739 podman[313745]: 2025-10-07 14:13:36.806393121 +0000 UTC m=+0.096038486 container died 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:13:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-181ed1164f571f18c8510bf11989ae73eb1b54ec91d8a852ede1b73c5ee4b345-merged.mount: Deactivated successfully.
Oct  7 10:13:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147-userdata-shm.mount: Deactivated successfully.
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.903 2 DEBUG nova.compute.manager [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.903 2 DEBUG oslo_concurrency.lockutils [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.903 2 DEBUG oslo_concurrency.lockutils [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.904 2 DEBUG oslo_concurrency.lockutils [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.904 2 DEBUG nova.compute.manager [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.904 2 WARNING nova.compute.manager [req-4591b047-a6e7-416a-b13d-4dcf47fb5e8a req-18fc2566-a71c-4575-9d6a-aa7640d81e9d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c for instance with vm_state active and task_state None.#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:13:36 np0005473739 podman[313745]: 2025-10-07 14:13:36.990084819 +0000 UTC m=+0.279730194 container cleanup 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.999 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.999 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:13:36 np0005473739 nova_compute[259550]: 2025-10-07 14:13:36.999 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:13:37 np0005473739 systemd[1]: libpod-conmon-175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147.scope: Deactivated successfully.
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.070 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.070 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.071 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.071 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.072 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.075 2 INFO nova.compute.manager [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Terminating instance#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.076 2 DEBUG nova.compute.manager [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:13:37 np0005473739 podman[313776]: 2025-10-07 14:13:37.091108074 +0000 UTC m=+0.077937560 container remove 175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.101 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[58e11822-4d27-45cc-a465-48c47c497e4c]: (4, ('Tue Oct  7 02:13:36 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147)\n175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147\nTue Oct  7 02:13:36 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f (175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147)\n175cb3446d997a42f5b1028db43c0b43510c3d98ca1784f3e4602330ee026147\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.103 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[53e797fc-7aab-4088-90b8-5dd3d34f2544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.105 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8accac57-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:37 np0005473739 kernel: tap8accac57-a0: left promiscuous mode
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.139 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[023a78f5-c720-4dcf-918d-1a21526b68ee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 kernel: tap130e9c49-53 (unregistering): left promiscuous mode
Oct  7 10:13:37 np0005473739 NetworkManager[44949]: <info>  [1759846417.1583] device (tap130e9c49-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:37Z|00374|binding|INFO|Releasing lport 130e9c49-53e8-495e-ac38-4d3e63f49011 from this chassis (sb_readonly=0)
Oct  7 10:13:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:37Z|00375|binding|INFO|Setting lport 130e9c49-53e8-495e-ac38-4d3e63f49011 down in Southbound
Oct  7 10:13:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:37Z|00376|binding|INFO|Removing iface tap130e9c49-53 ovn-installed in OVS
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.179 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:73:59 10.100.0.5'], port_security=['fa:16:3e:03:73:59 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '867307d6-0b3f-4a3e-9dc4-a05221e2f080', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1f5ee4e560ed4660a6685a086282a370', 'neutron:revision_number': '6', 'neutron:security_group_ids': '83afa0b2-d45d-4225-8e21-5474e9077205 b548d2bc-ba8b-4cd7-964e-e85cc9260fba f3568815-0490-424c-b408-29e1102ff805', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=437744f3-38b8-4d96-80b1-8cef5bd6873b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=130e9c49-53e8-495e-ac38-4d3e63f49011) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.180 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d9d9e869-99d5-4f3d-94ea-b492ee29abb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.183 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b6a5473-3dee-435e-adb2-92432b23c218]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.207 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6dcae640-6b73-44db-b3c2-5968a064abf4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 699563, 'reachable_time': 32762, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313800, 'error': None, 'target': 'ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.212 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8accac57-ab45-4b9b-95ed-86c2c65f202f deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.212 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f5f39e67-cbd9-4742-a2a3-2fd093767df6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 systemd[1]: run-netns-ovnmeta\x2d8accac57\x2dab45\x2d4b9b\x2d95ed\x2d86c2c65f202f.mount: Deactivated successfully.
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.213 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 130e9c49-53e8-495e-ac38-4d3e63f49011 in datapath 71870f0f-c94f-4d32-8df4-00da4d6d4129 unbound from our chassis#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.214 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 71870f0f-c94f-4d32-8df4-00da4d6d4129, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.215 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[832f6d68-0843-4c11-8ba3-1a2268991714]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.216 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129 namespace which is not needed anymore#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.216 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.217 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.217 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.217 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 867307d6-0b3f-4a3e-9dc4-a05221e2f080 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:37 np0005473739 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Deactivated successfully.
Oct  7 10:13:37 np0005473739 systemd[1]: machine-qemu\x2d48\x2dinstance\x2d0000002b.scope: Consumed 14.927s CPU time.
Oct  7 10:13:37 np0005473739 systemd-machined[214580]: Machine qemu-48-instance-0000002b terminated.
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.329 2 INFO nova.virt.libvirt.driver [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Instance destroyed successfully.#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.330 2 DEBUG nova.objects.instance [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lazy-loading 'resources' on Instance uuid 867307d6-0b3f-4a3e-9dc4-a05221e2f080 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.354 2 DEBUG nova.virt.libvirt.vif [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-SecurityGroupsTestJSON-server-84480482',display_name='tempest-SecurityGroupsTestJSON-server-84480482',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-securitygroupstestjson-server-84480482',id=43,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:12:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1f5ee4e560ed4660a6685a086282a370',ramdisk_id='',reservation_id='r-9i3oosu3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk
='1',image_min_ram='0',owner_project_name='tempest-SecurityGroupsTestJSON-1673626413',owner_user_name='tempest-SecurityGroupsTestJSON-1673626413-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:12:43Z,user_data=None,user_id='3ba82e1a9f12417391d78758ae9bb83c',uuid=867307d6-0b3f-4a3e-9dc4-a05221e2f080,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.355 2 DEBUG nova.network.os_vif_util [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converting VIF {"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.355 2 DEBUG nova.network.os_vif_util [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.356 2 DEBUG os_vif [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.358 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap130e9c49-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:37 np0005473739 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [NOTICE]   (309579) : haproxy version is 2.8.14-c23fe91
Oct  7 10:13:37 np0005473739 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [NOTICE]   (309579) : path to executable is /usr/sbin/haproxy
Oct  7 10:13:37 np0005473739 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [WARNING]  (309579) : Exiting Master process...
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.369 2 INFO os_vif [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:73:59,bridge_name='br-int',has_traffic_filtering=True,id=130e9c49-53e8-495e-ac38-4d3e63f49011,network=Network(71870f0f-c94f-4d32-8df4-00da4d6d4129),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap130e9c49-53')#033[00m
Oct  7 10:13:37 np0005473739 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [ALERT]    (309579) : Current worker (309581) exited with code 143 (Terminated)
Oct  7 10:13:37 np0005473739 neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129[309575]: [WARNING]  (309579) : All workers exited. Exiting... (0)
Oct  7 10:13:37 np0005473739 systemd[1]: libpod-14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31.scope: Deactivated successfully.
Oct  7 10:13:37 np0005473739 podman[313824]: 2025-10-07 14:13:37.379423633 +0000 UTC m=+0.056520397 container died 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:13:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:37Z|00050|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:47:a0:0a 10.100.0.14
Oct  7 10:13:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:37Z|00051|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:47:a0:0a 10.100.0.14
Oct  7 10:13:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31-userdata-shm.mount: Deactivated successfully.
Oct  7 10:13:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-820a27a5171c3bf28772eb4c1b067f6a72a5b80227d7b45bd095d8022b7260b2-merged.mount: Deactivated successfully.
Oct  7 10:13:37 np0005473739 podman[313824]: 2025-10-07 14:13:37.490951694 +0000 UTC m=+0.168048448 container cleanup 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:13:37 np0005473739 systemd[1]: libpod-conmon-14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31.scope: Deactivated successfully.
Oct  7 10:13:37 np0005473739 podman[313880]: 2025-10-07 14:13:37.598900982 +0000 UTC m=+0.077077267 container remove 14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.607 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67c6f823-21f7-4df3-85db-f49c72ceda19]: (4, ('Tue Oct  7 02:13:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129 (14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31)\n14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31\nTue Oct  7 02:13:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129 (14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31)\n14f7d389627d354c92b6345ea20954d02730339eb8463f219c0d4a2ffef3be31\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.609 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fc7631fb-42ca-4fdd-95f3-5d5578637520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.610 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71870f0f-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:37 np0005473739 kernel: tap71870f0f-c0: left promiscuous mode
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.648 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8ac4a1ea-277f-4118-9abc-8cfb0e8caefc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.678 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[34b2dbfe-3ad2-40c9-a235-2c4185c24948]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.680 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dacc1dfa-c92c-46ff-924c-f3f0c14a624b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1471: 305 pgs: 305 active+clean; 246 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.1 MiB/s wr, 192 op/s
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.701 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[14c719b5-8d48-48a2-9634-7e1dc5cbb261]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 694626, 'reachable_time': 19654, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313899, 'error': None, 'target': 'ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.703 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-71870f0f-c94f-4d32-8df4-00da4d6d4129 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:13:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:37.704 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9898f63d-39ae-4c1e-9f8d-075b2a5d4225]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.843 2 DEBUG nova.compute.manager [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-unplugged-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.843 2 DEBUG oslo_concurrency.lockutils [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.844 2 DEBUG oslo_concurrency.lockutils [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.844 2 DEBUG oslo_concurrency.lockutils [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.844 2 DEBUG nova.compute.manager [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] No waiting events found dispatching network-vif-unplugged-130e9c49-53e8-495e-ac38-4d3e63f49011 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:37 np0005473739 nova_compute[259550]: 2025-10-07 14:13:37.844 2 DEBUG nova.compute.manager [req-2d1decd4-cedb-454d-8d69-e9e0ec0505c0 req-356cdbce-0da1-4ebf-be35-2f5888a4e103 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-unplugged-130e9c49-53e8-495e-ac38-4d3e63f49011 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:13:37 np0005473739 systemd[1]: run-netns-ovnmeta\x2d71870f0f\x2dc94f\x2d4d32\x2d8df4\x2d00da4d6d4129.mount: Deactivated successfully.
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.210 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.211 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.212 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.212 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.212 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.214 2 INFO nova.compute.manager [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Terminating instance#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.215 2 DEBUG nova.compute.manager [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.222 2 INFO nova.virt.libvirt.driver [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Instance destroyed successfully.#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.223 2 DEBUG nova.objects.instance [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lazy-loading 'resources' on Instance uuid e2c535ed-c6cf-4684-82dc-ae59904e7381 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.247 2 DEBUG nova.virt.libvirt.vif [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-DeleteServersTestJSON-server-1601957698',display_name='tempest-DeleteServersTestJSON-server-1601957698',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-deleteserverstestjson-server-1601957698',id=48,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='06322ecec4b94a5d94e34cc8632d4104',ramdisk_id='',reservation_id='r-bm0zqvip',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_d
isk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-DeleteServersTestJSON-1871282594',owner_user_name='tempest-DeleteServersTestJSON-1871282594-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:36Z,user_data=None,user_id='a0452296b3a942e893961944a0203d98',uuid=e2c535ed-c6cf-4684-82dc-ae59904e7381,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.247 2 DEBUG nova.network.os_vif_util [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converting VIF {"id": "98f1c539-b9b9-4abb-89cf-268c353264ed", "address": "fa:16:3e:41:a7:57", "network": {"id": "8accac57-ab45-4b9b-95ed-86c2c65f202f", "bridge": "br-int", "label": "tempest-DeleteServersTestJSON-1720593357-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "06322ecec4b94a5d94e34cc8632d4104", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap98f1c539-b9", "ovs_interfaceid": "98f1c539-b9b9-4abb-89cf-268c353264ed", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.248 2 DEBUG nova.network.os_vif_util [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.248 2 DEBUG os_vif [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.250 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap98f1c539-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.256 2 DEBUG nova.network.neutron [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updated VIF entry in instance network info cache for port f66416c8-d3f2-4dfa-b30b-8505f9a9120c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.256 2 DEBUG nova.network.neutron [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.260 2 INFO os_vif [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:a7:57,bridge_name='br-int',has_traffic_filtering=True,id=98f1c539-b9b9-4abb-89cf-268c353264ed,network=Network(8accac57-ab45-4b9b-95ed-86c2c65f202f),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap98f1c539-b9')#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.276 2 DEBUG oslo_concurrency.lockutils [req-fb069bcc-7a11-4ddc-b1d7-1fffa96cbb3b req-b7b256ce-5a88-4a5a-b9d7-8f73b62f5970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.284 2 INFO nova.virt.libvirt.driver [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Deleting instance files /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080_del#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.285 2 INFO nova.virt.libvirt.driver [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Deletion of /var/lib/nova/instances/867307d6-0b3f-4a3e-9dc4-a05221e2f080_del complete#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.348 2 INFO nova.compute.manager [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Took 1.27 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.349 2 DEBUG oslo.service.loopingcall [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.349 2 DEBUG nova.compute.manager [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.349 2 DEBUG nova.network.neutron [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.481 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [{"id": "130e9c49-53e8-495e-ac38-4d3e63f49011", "address": "fa:16:3e:03:73:59", "network": {"id": "71870f0f-c94f-4d32-8df4-00da4d6d4129", "bridge": "br-int", "label": "tempest-SecurityGroupsTestJSON-202014423-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1f5ee4e560ed4660a6685a086282a370", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap130e9c49-53", "ovs_interfaceid": "130e9c49-53e8-495e-ac38-4d3e63f49011", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.499 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-867307d6-0b3f-4a3e-9dc4-a05221e2f080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.500 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.500 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.569 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-f66416c8-d3f2-4dfa-b30b-8505f9a9120c" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.569 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-f66416c8-d3f2-4dfa-b30b-8505f9a9120c" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.593 2 DEBUG nova.objects.instance [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.789 2 DEBUG nova.virt.libvirt.vif [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.790 2 DEBUG nova.network.os_vif_util [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.791 2 DEBUG nova.network.os_vif_util [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.797 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.800 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.803 2 DEBUG nova.virt.libvirt.driver [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Attempting to detach device tapf66416c8-d3 from instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.803 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:47:a0:0a"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <target dev="tapf66416c8-d3"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:13:38 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  7 10:13:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:38Z|00377|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.881 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.886 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface>not found in domain: <domain type='kvm' id='52'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <name>instance-0000002e</name>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <uuid>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</uuid>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:13:36</nova:creationTime>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <nova:port uuid="f66416c8-d3f2-4dfa-b30b-8505f9a9120c">
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:13:38 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <entry name='serial'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <entry name='uuid'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk' index='2'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config' index='1'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:5f:75:d2'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target dev='tapf208f539-2c'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:47:a0:0a'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target dev='tapf66416c8-d3'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='net1'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <source path='/dev/pts/3'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/3'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <source path='/dev/pts/3'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c550,c569</label>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c550,c569</imagelabel>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:13:38 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:13:38 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.886 2 INFO nova.virt.libvirt.driver [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tapf66416c8-d3 from instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 from the persistent domain config.#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.887 2 DEBUG nova.virt.libvirt.driver [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] (1/8): Attempting to detach device tapf66416c8-d3 with device alias net1 from instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.887 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:47:a0:0a"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]:  <target dev="tapf66416c8-d3"/>
Oct  7 10:13:38 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:13:38 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:38 np0005473739 kernel: tapf66416c8-d3 (unregistering): left promiscuous mode
Oct  7 10:13:38 np0005473739 NetworkManager[44949]: <info>  [1759846418.9812] device (tapf66416c8-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:13:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:38Z|00378|binding|INFO|Releasing lport f66416c8-d3f2-4dfa-b30b-8505f9a9120c from this chassis (sb_readonly=0)
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:38Z|00379|binding|INFO|Setting lport f66416c8-d3f2-4dfa-b30b-8505f9a9120c down in Southbound
Oct  7 10:13:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:38Z|00380|binding|INFO|Removing iface tapf66416c8-d3 ovn-installed in OVS
Oct  7 10:13:38 np0005473739 nova_compute[259550]: 2025-10-07 14:13:38.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:38.998 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:47:a0:0a 10.100.0.14'], port_security=['fa:16:3e:47:a0:0a 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f66416c8-d3f2-4dfa-b30b-8505f9a9120c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:38.999 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f66416c8-d3f2-4dfa-b30b-8505f9a9120c in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.001 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.013 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759846419.0136461, b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.015 2 DEBUG nova.virt.libvirt.driver [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Start waiting for the detach event from libvirt for device tapf66416c8-d3 with device alias net1 for instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.016 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.017 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b95962d4-a5a2-46d1-be7e-e2c2a4142f3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.020 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface>not found in domain: <domain type='kvm' id='52'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <name>instance-0000002e</name>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <uuid>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</uuid>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:13:36</nova:creationTime>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:port uuid="f66416c8-d3f2-4dfa-b30b-8505f9a9120c">
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:13:39 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <entry name='serial'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <entry name='uuid'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk' index='2'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config' index='1'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:5f:75:d2'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target dev='tapf208f539-2c'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <source path='/dev/pts/3'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/3'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <source path='/dev/pts/3'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c550,c569</label>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c550,c569</imagelabel>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:13:39 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:13:39 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.020 2 INFO nova.virt.libvirt.driver [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tapf66416c8-d3 from instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 from the live domain config.
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.021 2 DEBUG nova.virt.libvirt.vif [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.021 2 DEBUG nova.network.os_vif_util [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.022 2 DEBUG nova.network.os_vif_util [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.022 2 DEBUG os_vif [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.024 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66416c8-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.030 2 INFO os_vif [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3')#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.031 2 DEBUG nova.virt.libvirt.guest [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:13:39</nova:creationTime>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct  7 10:13:39 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:13:39 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:13:39 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:13:39 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.059 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b9452725-8bcb-4a47-a9d1-3de4e19ad40d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.063 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[72fddc2a-73e3-4baa-95e4-ef78559d44e1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.112 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9ded921f-acd3-4436-b610-a6fbfddcde0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.131 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ce89472b-bffc-41df-9fc7-baa89398d3d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 8, 'rx_bytes': 916, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 106], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697305, 'reachable_time': 34760, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 313930, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:39 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.150 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[702761d2-9534-4b6a-8e1d-1daf90eddbfc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697319, 'tstamp': 697319}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313931, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 697323, 'tstamp': 697323}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 313931, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.153 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.156 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.156 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.156 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:39.157 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.222 2 DEBUG nova.network.neutron [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.246 2 INFO nova.compute.manager [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Took 0.90 seconds to deallocate network for instance.#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.292 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.292 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.387 2 DEBUG oslo_concurrency.processutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.521 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.522 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.522 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.522 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.522 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.523 2 WARNING nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c for instance with vm_state active and task_state None.#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.523 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-unplugged-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.523 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.523 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.523 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.523 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] No waiting events found dispatching network-vif-unplugged-98f1c539-b9b9-4abb-89cf-268c353264ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-unplugged-98f1c539-b9b9-4abb-89cf-268c353264ed for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG oslo_concurrency.lockutils [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.524 2 DEBUG nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] No waiting events found dispatching network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.525 2 WARNING nova.compute.manager [req-d75a0157-94d7-401e-81ab-c4ec04d8cb09 req-987acc87-6b7c-4b11-88b2-8845f5584747 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received unexpected event network-vif-plugged-98f1c539-b9b9-4abb-89cf-268c353264ed for instance with vm_state suspended and task_state deleting.#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.580 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.581 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.581 2 DEBUG nova.network.neutron [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.655 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846404.6546457, f563ffb7-1ade-4b71-ab68-115322eef141 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.656 2 INFO nova.compute.manager [-] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:13:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1472: 305 pgs: 305 active+clean; 170 MiB data, 514 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 252 op/s
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.693 2 DEBUG nova.compute.manager [None req-828f93e6-25e8-4001-8dfc-2a2028f277e5 - - - - - -] [instance: f563ffb7-1ade-4b71-ab68-115322eef141] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1629729109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.868 2 DEBUG oslo_concurrency.processutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.874 2 DEBUG nova.compute.provider_tree [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.893 2 DEBUG nova.scheduler.client.report [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.914 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.923 2 DEBUG nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.924 2 DEBUG oslo_concurrency.lockutils [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.924 2 DEBUG oslo_concurrency.lockutils [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.924 2 DEBUG oslo_concurrency.lockutils [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.924 2 DEBUG nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] No waiting events found dispatching network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.925 2 WARNING nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received unexpected event network-vif-plugged-130e9c49-53e8-495e-ac38-4d3e63f49011 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.925 2 DEBUG nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Received event network-vif-deleted-130e9c49-53e8-495e-ac38-4d3e63f49011 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.925 2 DEBUG nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-deleted-f66416c8-d3f2-4dfa-b30b-8505f9a9120c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.925 2 INFO nova.compute.manager [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Neutron deleted interface f66416c8-d3f2-4dfa-b30b-8505f9a9120c; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.925 2 DEBUG nova.network.neutron [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.941 2 INFO nova.scheduler.client.report [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Deleted allocations for instance 867307d6-0b3f-4a3e-9dc4-a05221e2f080#033[00m
Oct  7 10:13:39 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.957 2 DEBUG nova.objects.instance [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lazy-loading 'system_metadata' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:39.999 2 DEBUG nova.objects.instance [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lazy-loading 'flavor' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.014 2 INFO nova.virt.libvirt.driver [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Deleting instance files /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381_del#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.015 2 INFO nova.virt.libvirt.driver [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Deletion of /var/lib/nova/instances/e2c535ed-c6cf-4684-82dc-ae59904e7381_del complete#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.056 2 DEBUG nova.virt.libvirt.vif [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.056 2 DEBUG nova.network.os_vif_util [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.057 2 DEBUG nova.network.os_vif_util [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.061 2 DEBUG nova.virt.libvirt.guest [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.064 2 DEBUG nova.virt.libvirt.guest [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface>not found in domain: <domain type='kvm' id='52'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <name>instance-0000002e</name>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <uuid>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</uuid>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:13:39</nova:creationTime>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:13:40 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='serial'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='uuid'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk' index='2'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config' index='1'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:5f:75:d2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target dev='tapf208f539-2c'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <source path='/dev/pts/3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/3'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <source path='/dev/pts/3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c550,c569</label>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c550,c569</imagelabel>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:13:40 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:13:40 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.065 2 DEBUG nova.virt.libvirt.guest [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.067 2 DEBUG oslo_concurrency.lockutils [None req-94a8ebe5-b88a-4337-8204-6e8fef65d24c 3ba82e1a9f12417391d78758ae9bb83c 1f5ee4e560ed4660a6685a086282a370 - - default default] Lock "867307d6-0b3f-4a3e-9dc4-a05221e2f080" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.080 2 DEBUG nova.virt.libvirt.guest [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:47:a0:0a"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tapf66416c8-d3"/></interface>not found in domain: <domain type='kvm' id='52'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <name>instance-0000002e</name>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <uuid>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</uuid>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:13:39</nova:creationTime>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:13:40 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='serial'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='uuid'>b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk' index='2'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_disk.config' index='1'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 podman[313954]: 2025-10-07 14:13:40.084199 +0000 UTC m=+0.067963908 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:5f:75:d2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target dev='tapf208f539-2c'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <source path='/dev/pts/3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/3'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <source path='/dev/pts/3'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1/console.log' append='off'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5903' autoport='yes' listen='::0'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c550,c569</label>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c550,c569</imagelabel>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:13:40 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:13:40 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.081 2 WARNING nova.virt.libvirt.driver [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Detaching interface fa:16:3e:47:a0:0a failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tapf66416c8-d3' not found.#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.082 2 DEBUG nova.virt.libvirt.vif [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.082 2 DEBUG nova.network.os_vif_util [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converting VIF {"id": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "address": "fa:16:3e:47:a0:0a", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf66416c8-d3", "ovs_interfaceid": "f66416c8-d3f2-4dfa-b30b-8505f9a9120c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.083 2 DEBUG nova.network.os_vif_util [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.083 2 DEBUG os_vif [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf66416c8-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.090 2 INFO os_vif [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:47:a0:0a,bridge_name='br-int',has_traffic_filtering=True,id=f66416c8-d3f2-4dfa-b30b-8505f9a9120c,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf66416c8-d3')#033[00m
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.091 2 DEBUG nova.virt.libvirt.guest [req-d3d70c49-1121-4119-b886-3b6647528a59 req-32d3f9ed-74c8-4236-b112-4e4f14385c4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-83378302</nova:name>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:13:40</nova:creationTime>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    <nova:port uuid="f208f539-2cf1-4007-8806-5b4836d43c4f">
Oct  7 10:13:40 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:13:40 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:13:40 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:13:40 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.114 2 INFO nova.compute.manager [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Took 1.90 seconds to destroy the instance on the hypervisor.
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.114 2 DEBUG oslo.service.loopingcall [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.115 2 DEBUG nova.compute.manager [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:13:40 np0005473739 nova_compute[259550]: 2025-10-07 14:13:40.115 2 DEBUG nova.network.neutron [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:13:40 np0005473739 podman[313955]: 2025-10-07 14:13:40.130233639 +0000 UTC m=+0.108975555 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  7 10:13:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:40 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #66. Immutable memtables: 0.
Oct  7 10:13:40 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:40.939256) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:13:40 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 35] Flushing memtable with next log file: 66
Oct  7 10:13:40 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846420939347, "job": 35, "event": "flush_started", "num_memtables": 1, "num_entries": 344, "num_deletes": 250, "total_data_size": 162314, "memory_usage": 168400, "flush_reason": "Manual Compaction"}
Oct  7 10:13:40 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 35] Level-0 flush table #67: started
Oct  7 10:13:40 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846420986824, "cf_name": "default", "job": 35, "event": "table_file_creation", "file_number": 67, "file_size": 160352, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 30540, "largest_seqno": 30883, "table_properties": {"data_size": 158198, "index_size": 318, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5974, "raw_average_key_size": 20, "raw_value_size": 153937, "raw_average_value_size": 523, "num_data_blocks": 14, "num_entries": 294, "num_filter_entries": 294, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846410, "oldest_key_time": 1759846410, "file_creation_time": 1759846420, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 67, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:13:40 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 35] Flush lasted 47615 microseconds, and 1992 cpu microseconds.
Oct  7 10:13:40 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:40.986878) [db/flush_job.cc:967] [default] [JOB 35] Level-0 flush table #67: 160352 bytes OK
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:40.986907) [db/memtable_list.cc:519] [default] Level-0 commit table #67 started
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.011424) [db/memtable_list.cc:722] [default] Level-0 commit table #67: memtable #1 done
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.011455) EVENT_LOG_v1 {"time_micros": 1759846421011446, "job": 35, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.011479) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 35] Try to delete WAL files size 159968, prev total WAL file size 159968, number of live WAL files 2.
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000063.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.012056) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031303032' seq:72057594037927935, type:22 .. '6D6772737461740031323533' seq:0, type:0; will stop at (end)
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 36] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 35 Base level 0, inputs: [67(156KB)], [65(10126KB)]
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846421012098, "job": 36, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [67], "files_L6": [65], "score": -1, "input_data_size": 10529627, "oldest_snapshot_seqno": -1}
Oct  7 10:13:41 np0005473739 nova_compute[259550]: 2025-10-07 14:13:41.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 36] Generated table #68: 5322 keys, 7236991 bytes, temperature: kUnknown
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846421138254, "cf_name": "default", "job": 36, "event": "table_file_creation", "file_number": 68, "file_size": 7236991, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7201732, "index_size": 20877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 13317, "raw_key_size": 134722, "raw_average_key_size": 25, "raw_value_size": 7106160, "raw_average_value_size": 1335, "num_data_blocks": 849, "num_entries": 5322, "num_filter_entries": 5322, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759846421, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 68, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.138686) [db/compaction/compaction_job.cc:1663] [default] [JOB 36] Compacted 1@0 + 1@6 files to L6 => 7236991 bytes
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.150981) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 83.4 rd, 57.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.9 +0.0 blob) out(6.9 +0.0 blob), read-write-amplify(110.8) write-amplify(45.1) OK, records in: 5829, records dropped: 507 output_compression: NoCompression
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.151033) EVENT_LOG_v1 {"time_micros": 1759846421151012, "job": 36, "event": "compaction_finished", "compaction_time_micros": 126286, "compaction_time_cpu_micros": 27464, "output_level": 6, "num_output_files": 1, "total_output_size": 7236991, "num_input_records": 5829, "num_output_records": 5322, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000067.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846421151323, "job": 36, "event": "table_file_deletion", "file_number": 67}
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000065.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759846421152803, "job": 36, "event": "table_file_deletion", "file_number": 65}
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.011967) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.152847) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.152853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.152855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.152856) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:41 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:13:41.152857) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:13:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1473: 305 pgs: 305 active+clean; 121 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 35 KiB/s wr, 207 op/s
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.263 2 DEBUG nova.network.neutron [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.281 2 INFO nova.compute.manager [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Took 2.17 seconds to deallocate network for instance.
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.323 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.323 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.327 2 INFO nova.network.neutron [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Port f66416c8-d3f2-4dfa-b30b-8505f9a9120c from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.327 2 DEBUG nova.network.neutron [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [{"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.340 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.368 2 DEBUG nova.compute.manager [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.369 2 DEBUG oslo_concurrency.lockutils [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.369 2 DEBUG oslo_concurrency.lockutils [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.369 2 DEBUG oslo_concurrency.lockutils [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.369 2 DEBUG nova.compute.manager [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.370 2 WARNING nova.compute.manager [req-3cc0c793-ad21-49a8-8b17-6660a9a2fbad req-c63fabf0-49a1-4481-b0a3-cab839e7b503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-plugged-f66416c8-d3f2-4dfa-b30b-8505f9a9120c for instance with vm_state active and task_state None.
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.372 2 DEBUG oslo_concurrency.lockutils [None req-0cabecda-9e1d-4b24-9eb2-79468adf6bb4 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-f66416c8-d3f2-4dfa-b30b-8505f9a9120c" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.802s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.398 2 DEBUG oslo_concurrency.processutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.459 2 DEBUG nova.compute.manager [req-51859ba4-b350-4077-9088-190ac5433ed4 req-d5c53f1a-ad91-4792-83c1-731d1ede4182 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Received event network-vif-deleted-98f1c539-b9b9-4abb-89cf-268c353264ed external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.557 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.558 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.558 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.558 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.558 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.560 2 INFO nova.compute.manager [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Terminating instance
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.561 2 DEBUG nova.compute.manager [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:13:42 np0005473739 kernel: tapf208f539-2c (unregistering): left promiscuous mode
Oct  7 10:13:42 np0005473739 NetworkManager[44949]: <info>  [1759846422.6186] device (tapf208f539-2c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:13:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:42Z|00381|binding|INFO|Releasing lport f208f539-2cf1-4007-8806-5b4836d43c4f from this chassis (sb_readonly=0)
Oct  7 10:13:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:42Z|00382|binding|INFO|Setting lport f208f539-2cf1-4007-8806-5b4836d43c4f down in Southbound
Oct  7 10:13:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:13:42Z|00383|binding|INFO|Removing iface tapf208f539-2c ovn-installed in OVS
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.632 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5f:75:d2 10.100.0.12'], port_security=['fa:16:3e:5f:75:d2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e8acaacb-1526-452b-a139-17a738541bb7', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.217'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f208f539-2cf1-4007-8806-5b4836d43c4f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:13:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.634 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f208f539-2cf1-4007-8806-5b4836d43c4f in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis
Oct  7 10:13:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.635 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1d9f332-f920-4d6e-8e91-dd13ec334d51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  7 10:13:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.635 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1dfc8d9-66d2-4969-ac3b-348a2b781818]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:13:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.636 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 namespace which is not needed anymore
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:42 np0005473739 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002e.scope: Deactivated successfully.
Oct  7 10:13:42 np0005473739 systemd[1]: machine-qemu\x2d52\x2dinstance\x2d0000002e.scope: Consumed 14.667s CPU time.
Oct  7 10:13:42 np0005473739 systemd-machined[214580]: Machine qemu-52-instance-0000002e terminated.
Oct  7 10:13:42 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [NOTICE]   (312253) : haproxy version is 2.8.14-c23fe91
Oct  7 10:13:42 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [NOTICE]   (312253) : path to executable is /usr/sbin/haproxy
Oct  7 10:13:42 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [WARNING]  (312253) : Exiting Master process...
Oct  7 10:13:42 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [ALERT]    (312253) : Current worker (312255) exited with code 143 (Terminated)
Oct  7 10:13:42 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[312249]: [WARNING]  (312253) : All workers exited. Exiting... (0)
Oct  7 10:13:42 np0005473739 systemd[1]: libpod-51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba.scope: Deactivated successfully.
Oct  7 10:13:42 np0005473739 podman[314043]: 2025-10-07 14:13:42.785503095 +0000 UTC m=+0.053903439 container died 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.807 2 INFO nova.virt.libvirt.driver [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Instance destroyed successfully.#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.808 2 DEBUG nova.objects.instance [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'resources' on Instance uuid b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:13:42 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba-userdata-shm.mount: Deactivated successfully.
Oct  7 10:13:42 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ea5f871018c8e56c6586c7466d5febec36105738892f6f280160d66dce07ac2e-merged.mount: Deactivated successfully.
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.835 2 DEBUG nova.virt.libvirt.vif [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:12:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-83378302',display_name='tempest-AttachInterfacesTestJSON-server-83378302',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-83378302',id=46,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBhbNkb9RJCiI/GkFe04dtxG7xWcFO07VUrumVPfTEOgOBoJhuvmDv9cetrJ6XIQ/1YCI8CWruM93E49EXk00F5CtAbm3zc0bxYBCy7t665rpHSj0Q0DNaoPcuzsqKAXSw==',key_name='tempest-keypair-542533303',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:13:10Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ddh3gr00',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:13:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:13:42 np0005473739 podman[314043]: 2025-10-07 14:13:42.836151956 +0000 UTC m=+0.104552310 container cleanup 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.835 2 DEBUG nova.network.os_vif_util [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "f208f539-2cf1-4007-8806-5b4836d43c4f", "address": "fa:16:3e:5f:75:d2", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.217", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf208f539-2c", "ovs_interfaceid": "f208f539-2cf1-4007-8806-5b4836d43c4f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.837 2 DEBUG nova.network.os_vif_util [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.839 2 DEBUG os_vif [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.841 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf208f539-2c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1700400190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.849 2 INFO os_vif [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:5f:75:d2,bridge_name='br-int',has_traffic_filtering=True,id=f208f539-2cf1-4007-8806-5b4836d43c4f,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf208f539-2c')#033[00m
Oct  7 10:13:42 np0005473739 systemd[1]: libpod-conmon-51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba.scope: Deactivated successfully.
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.866 2 DEBUG oslo_concurrency.processutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.872 2 DEBUG nova.compute.provider_tree [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.894 2 DEBUG nova.scheduler.client.report [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.915 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.591s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:42 np0005473739 podman[314084]: 2025-10-07 14:13:42.929458209 +0000 UTC m=+0.063275914 container remove 51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:13:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.935 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb587ff-752f-4bab-8d99-5b78dd13743a]: (4, ('Tue Oct  7 02:13:42 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 (51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba)\n51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba\nTue Oct  7 02:13:42 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 (51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba)\n51e32ea7af7770bf0944d6fb9338875e516a18638763ed93987f33fa17e97aba\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[74e9daf5-e7dc-4546-b91b-4b2ae127805c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.938 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:42 np0005473739 kernel: tapb1d9f332-f0: left promiscuous mode
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.944 2 INFO nova.scheduler.client.report [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Deleted allocations for instance e2c535ed-c6cf-4684-82dc-ae59904e7381#033[00m
Oct  7 10:13:42 np0005473739 nova_compute[259550]: 2025-10-07 14:13:42.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:13:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.960 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e88a8acc-01b7-4734-bb47-7f82dc546f1b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.988 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c0141619-bc28-4588-9180-bdd8f4946ec5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:42.989 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac05583c-0bbe-44f3-b4c1-744f63892c61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:43.007 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[14e56edc-cb9c-41bf-8f87-dbfd0382ac34]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 697293, 'reachable_time': 28118, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314118, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:43 np0005473739 systemd[1]: run-netns-ovnmeta\x2db1d9f332\x2df920\x2d4d6e\x2d8e91\x2ddd13ec334d51.mount: Deactivated successfully.
Oct  7 10:13:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:43.009 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:13:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:13:43.009 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[2c444cae-2469-4f4e-85e2-6e3f88b832e4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:13:43 np0005473739 nova_compute[259550]: 2025-10-07 14:13:43.014 2 DEBUG oslo_concurrency.lockutils [None req-8f18ad7e-3ba6-4cd5-82fc-a3fd0928260e a0452296b3a942e893961944a0203d98 06322ecec4b94a5d94e34cc8632d4104 - - default default] Lock "e2c535ed-c6cf-4684-82dc-ae59904e7381" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.803s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:13:43 np0005473739 nova_compute[259550]: 2025-10-07 14:13:43.300 2 INFO nova.virt.libvirt.driver [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Deleting instance files /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_del#033[00m
Oct  7 10:13:43 np0005473739 nova_compute[259550]: 2025-10-07 14:13:43.301 2 INFO nova.virt.libvirt.driver [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Deletion of /var/lib/nova/instances/b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1_del complete#033[00m
Oct  7 10:13:43 np0005473739 nova_compute[259550]: 2025-10-07 14:13:43.376 2 INFO nova.compute.manager [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:13:43 np0005473739 nova_compute[259550]: 2025-10-07 14:13:43.376 2 DEBUG oslo.service.loopingcall [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:13:43 np0005473739 nova_compute[259550]: 2025-10-07 14:13:43.377 2 DEBUG nova.compute.manager [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:13:43 np0005473739 nova_compute[259550]: 2025-10-07 14:13:43.377 2 DEBUG nova.network.neutron [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:13:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1474: 305 pgs: 305 active+clean; 121 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 28 KiB/s wr, 138 op/s
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.267 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846409.265955, cb7392fb-c42f-47e9-b661-7cbf3dfe1263 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.268 2 INFO nova.compute.manager [-] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.286 2 DEBUG nova.compute.manager [None req-882c62fe-f037-4849-84c0-f59dfade20b3 - - - - - -] [instance: cb7392fb-c42f-47e9-b661-7cbf3dfe1263] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.398 2 DEBUG nova.network.neutron [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.427 2 INFO nova.compute.manager [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Took 1.05 seconds to deallocate network for instance.#033[00m
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.465 2 DEBUG nova.compute.manager [req-bbfa1ad6-ab88-4f8e-8de4-e429eabc95dc req-ba2bce58-d157-4995-9997-9b43f4f38841 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-deleted-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.479 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.480 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.537 2 DEBUG oslo_concurrency.processutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.579 2 DEBUG nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-unplugged-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.579 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.579 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.580 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.580 2 DEBUG nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-unplugged-f208f539-2cf1-4007-8806-5b4836d43c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.580 2 WARNING nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-unplugged-f208f539-2cf1-4007-8806-5b4836d43c4f for instance with vm_state deleted and task_state None.
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.580 2 DEBUG nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.581 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.581 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.581 2 DEBUG oslo_concurrency.lockutils [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.581 2 DEBUG nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] No waiting events found dispatching network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.581 2 WARNING nova.compute.manager [req-4476d93a-2d90-49fc-a5c6-75318132d0e9 req-538543f0-c110-4cd3-8a38-649ba7726883 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Received unexpected event network-vif-plugged-f208f539-2cf1-4007-8806-5b4836d43c4f for instance with vm_state deleted and task_state None.
Oct  7 10:13:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218718914' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.992 2 DEBUG oslo_concurrency.processutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:13:44 np0005473739 nova_compute[259550]: 2025-10-07 14:13:44.999 2 DEBUG nova.compute.provider_tree [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:13:45 np0005473739 nova_compute[259550]: 2025-10-07 14:13:45.017 2 DEBUG nova.scheduler.client.report [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:13:45 np0005473739 nova_compute[259550]: 2025-10-07 14:13:45.043 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.563s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:45 np0005473739 nova_compute[259550]: 2025-10-07 14:13:45.065 2 INFO nova.scheduler.client.report [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Deleted allocations for instance b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1
Oct  7 10:13:45 np0005473739 nova_compute[259550]: 2025-10-07 14:13:45.127 2 DEBUG oslo_concurrency.lockutils [None req-49b6eecf-d33b-4338-8a1b-9a6a101012db eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1475: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.0 MiB/s rd, 31 KiB/s wr, 176 op/s
Oct  7 10:13:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:45 np0005473739 nova_compute[259550]: 2025-10-07 14:13:45.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:46 np0005473739 nova_compute[259550]: 2025-10-07 14:13:46.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:46 np0005473739 nova_compute[259550]: 2025-10-07 14:13:46.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1476: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 1021 KiB/s rd, 4.5 KiB/s wr, 112 op/s
Oct  7 10:13:47 np0005473739 nova_compute[259550]: 2025-10-07 14:13:47.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1477: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 1021 KiB/s rd, 4.5 KiB/s wr, 112 op/s
Oct  7 10:13:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:51 np0005473739 nova_compute[259550]: 2025-10-07 14:13:51.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:51 np0005473739 nova_compute[259550]: 2025-10-07 14:13:51.688 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846416.6867077, e2c535ed-c6cf-4684-82dc-ae59904e7381 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:13:51 np0005473739 nova_compute[259550]: 2025-10-07 14:13:51.688 2 INFO nova.compute.manager [-] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] VM Stopped (Lifecycle Event)
Oct  7 10:13:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1478: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 35 KiB/s rd, 3.1 KiB/s wr, 52 op/s
Oct  7 10:13:51 np0005473739 nova_compute[259550]: 2025-10-07 14:13:51.711 2 DEBUG nova.compute.manager [None req-e4851b99-bdcb-4014-a776-61fa336244aa - - - - - -] [instance: e2c535ed-c6cf-4684-82dc-ae59904e7381] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:13:52 np0005473739 nova_compute[259550]: 2025-10-07 14:13:52.328 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846417.3231442, 867307d6-0b3f-4a3e-9dc4-a05221e2f080 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:13:52 np0005473739 nova_compute[259550]: 2025-10-07 14:13:52.328 2 INFO nova.compute.manager [-] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] VM Stopped (Lifecycle Event)
Oct  7 10:13:52 np0005473739 nova_compute[259550]: 2025-10-07 14:13:52.347 2 DEBUG nova.compute.manager [None req-147d87a4-92d7-426a-9e72-c7398a72874f - - - - - -] [instance: 867307d6-0b3f-4a3e-9dc4-a05221e2f080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:13:52 np0005473739 nova_compute[259550]: 2025-10-07 14:13:52.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:53 np0005473739 nova_compute[259550]: 2025-10-07 14:13:53.481 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:53 np0005473739 nova_compute[259550]: 2025-10-07 14:13:53.482 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:53 np0005473739 nova_compute[259550]: 2025-10-07 14:13:53.500 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:13:53 np0005473739 nova_compute[259550]: 2025-10-07 14:13:53.571 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:53 np0005473739 nova_compute[259550]: 2025-10-07 14:13:53.572 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:53 np0005473739 nova_compute[259550]: 2025-10-07 14:13:53.579 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:13:53 np0005473739 nova_compute[259550]: 2025-10-07 14:13:53.580 2 INFO nova.compute.claims [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:13:53 np0005473739 nova_compute[259550]: 2025-10-07 14:13:53.681 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:13:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1479: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 KiB/s wr, 37 op/s
Oct  7 10:13:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:13:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3981732767' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.144 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.151 2 DEBUG nova.compute.provider_tree [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.165 2 DEBUG nova.scheduler.client.report [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.183 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.611s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.184 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.273 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.274 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.301 2 INFO nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.344 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.472 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.474 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.474 2 INFO nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Creating image(s)
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.497 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.527 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.556 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.562 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.637 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.638 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.639 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.639 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.663 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.667 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 eb8777f3-5daa-49c7-8994-687012f20453_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:13:54 np0005473739 nova_compute[259550]: 2025-10-07 14:13:54.707 2 DEBUG nova.policy [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:13:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1480: 305 pgs: 305 active+clean; 41 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 26 KiB/s rd, 2.7 KiB/s wr, 37 op/s
Oct  7 10:13:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:13:55 np0005473739 nova_compute[259550]: 2025-10-07 14:13:55.995 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 eb8777f3-5daa-49c7-8994-687012f20453_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.328s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:13:56 np0005473739 nova_compute[259550]: 2025-10-07 14:13:56.063 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] resizing rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:13:56 np0005473739 nova_compute[259550]: 2025-10-07 14:13:56.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:56 np0005473739 nova_compute[259550]: 2025-10-07 14:13:56.165 2 DEBUG nova.objects.instance [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'migration_context' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:13:56 np0005473739 nova_compute[259550]: 2025-10-07 14:13:56.180 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:13:56 np0005473739 nova_compute[259550]: 2025-10-07 14:13:56.181 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Ensure instance console log exists: /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:13:56 np0005473739 nova_compute[259550]: 2025-10-07 14:13:56.181 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:13:56 np0005473739 nova_compute[259550]: 2025-10-07 14:13:56.182 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:13:56 np0005473739 nova_compute[259550]: 2025-10-07 14:13:56.182 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:13:57 np0005473739 nova_compute[259550]: 2025-10-07 14:13:57.566 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully created port: fdcb59f4-9f89-4147-941b-a28bfa0621bf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:13:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1481: 305 pgs: 305 active+clean; 47 MiB data, 436 MiB used, 60 GiB / 60 GiB avail; 2.0 KiB/s rd, 136 KiB/s wr, 3 op/s
Oct  7 10:13:57 np0005473739 nova_compute[259550]: 2025-10-07 14:13:57.804 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846422.8031216, b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:13:57 np0005473739 nova_compute[259550]: 2025-10-07 14:13:57.805 2 INFO nova.compute.manager [-] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] VM Stopped (Lifecycle Event)
Oct  7 10:13:57 np0005473739 nova_compute[259550]: 2025-10-07 14:13:57.828 2 DEBUG nova.compute.manager [None req-62949155-a9eb-4472-894e-8f5bc3ac401a - - - - - -] [instance: b46bf9ad-763d-4e0b-b55c-3a8ee1d869a1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:13:57 np0005473739 nova_compute[259550]: 2025-10-07 14:13:57.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:13:58 np0005473739 podman[314333]: 2025-10-07 14:13:58.078848012 +0000 UTC m=+0.064695113 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid)
Oct  7 10:13:58 np0005473739 podman[314332]: 2025-10-07 14:13:58.07954796 +0000 UTC m=+0.066433317 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 10:13:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1482: 305 pgs: 305 active+clean; 73 MiB data, 446 MiB used, 60 GiB / 60 GiB avail; 9.4 KiB/s rd, 1.2 MiB/s wr, 17 op/s
Oct  7 10:14:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:00 np0005473739 nova_compute[259550]: 2025-10-07 14:14:00.104 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully updated port: fdcb59f4-9f89-4147-941b-a28bfa0621bf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:14:00 np0005473739 nova_compute[259550]: 2025-10-07 14:14:00.133 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:00 np0005473739 nova_compute[259550]: 2025-10-07 14:14:00.134 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:00 np0005473739 nova_compute[259550]: 2025-10-07 14:14:00.134 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:14:00 np0005473739 nova_compute[259550]: 2025-10-07 14:14:00.290 2 DEBUG nova.compute.manager [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-changed-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:00 np0005473739 nova_compute[259550]: 2025-10-07 14:14:00.290 2 DEBUG nova.compute.manager [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing instance network info cache due to event network-changed-fdcb59f4-9f89-4147-941b-a28bfa0621bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:14:00 np0005473739 nova_compute[259550]: 2025-10-07 14:14:00.291 2 DEBUG oslo_concurrency.lockutils [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:00 np0005473739 nova_compute[259550]: 2025-10-07 14:14:00.428 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:14:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:01 np0005473739 nova_compute[259550]: 2025-10-07 14:14:01.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1483: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:14:01 np0005473739 nova_compute[259550]: 2025-10-07 14:14:01.955 2 DEBUG nova.network.neutron [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:01 np0005473739 nova_compute[259550]: 2025-10-07 14:14:01.974 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:01 np0005473739 nova_compute[259550]: 2025-10-07 14:14:01.974 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Instance network_info: |[{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:14:01 np0005473739 nova_compute[259550]: 2025-10-07 14:14:01.975 2 DEBUG oslo_concurrency.lockutils [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:01 np0005473739 nova_compute[259550]: 2025-10-07 14:14:01.975 2 DEBUG nova.network.neutron [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing network info cache for port fdcb59f4-9f89-4147-941b-a28bfa0621bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:14:01 np0005473739 nova_compute[259550]: 2025-10-07 14:14:01.979 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Start _get_guest_xml network_info=[{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:14:01 np0005473739 nova_compute[259550]: 2025-10-07 14:14:01.985 2 WARNING nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:14:01 np0005473739 nova_compute[259550]: 2025-10-07 14:14:01.995 2 DEBUG nova.virt.libvirt.host [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:14:01 np0005473739 nova_compute[259550]: 2025-10-07 14:14:01.996 2 DEBUG nova.virt.libvirt.host [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.000 2 DEBUG nova.virt.libvirt.host [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.000 2 DEBUG nova.virt.libvirt.host [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.001 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.001 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.002 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.002 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.003 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.003 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.003 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.004 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.004 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.005 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.005 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.005 2 DEBUG nova.virt.hardware [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.009 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:14:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1646856935' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.478 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.508 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.513 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:02 np0005473739 nova_compute[259550]: 2025-10-07 14:14:02.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:14:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1942696248' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.001 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.003 2 DEBUG nova.virt.libvirt.vif [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.003 2 DEBUG nova.network.os_vif_util [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.004 2 DEBUG nova.network.os_vif_util [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.005 2 DEBUG nova.objects.instance [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_devices' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.023 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  <uuid>eb8777f3-5daa-49c7-8994-687012f20453</uuid>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  <name>instance-00000031</name>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:14:01</nova:creationTime>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <entry name="serial">eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <entry name="uuid">eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/eb8777f3-5daa-49c7-8994-687012f20453_disk">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/eb8777f3-5daa-49c7-8994-687012f20453_disk.config">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:65:98:6b"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <target dev="tapfdcb59f4-9f"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log" append="off"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:14:03 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:14:03 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:14:03 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:14:03 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.025 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Preparing to wait for external event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.026 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.026 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.026 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.027 2 DEBUG nova.virt.libvirt.vif [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:13:54Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.027 2 DEBUG nova.network.os_vif_util [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.028 2 DEBUG nova.network.os_vif_util [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.029 2 DEBUG os_vif [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfdcb59f4-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.035 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfdcb59f4-9f, col_values=(('external_ids', {'iface-id': 'fdcb59f4-9f89-4147-941b-a28bfa0621bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:65:98:6b', 'vm-uuid': 'eb8777f3-5daa-49c7-8994-687012f20453'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:03 np0005473739 NetworkManager[44949]: <info>  [1759846443.0387] manager: (tapfdcb59f4-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/181)
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.044 2 INFO os_vif [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f')#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.098 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.099 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.099 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:65:98:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.100 2 INFO nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Using config drive#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.119 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.635 2 INFO nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Creating config drive at /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.643 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph735y8px execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.689 2 DEBUG nova.network.neutron [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updated VIF entry in instance network info cache for port fdcb59f4-9f89-4147-941b-a28bfa0621bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.690 2 DEBUG nova.network.neutron [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1484: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.707 2 DEBUG oslo_concurrency.lockutils [req-43464137-db81-4531-b8ff-be2176898bb2 req-ab6f429e-5264-4791-825a-cecfb4a45c2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.795 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph735y8px" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.828 2 DEBUG nova.storage.rbd_utils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image eb8777f3-5daa-49c7-8994-687012f20453_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.833 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config eb8777f3-5daa-49c7-8994-687012f20453_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:03.881 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:14:03 np0005473739 nova_compute[259550]: 2025-10-07 14:14:03.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:03.884 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:14:04 np0005473739 nova_compute[259550]: 2025-10-07 14:14:04.008 2 DEBUG oslo_concurrency.processutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config eb8777f3-5daa-49c7-8994-687012f20453_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:04 np0005473739 nova_compute[259550]: 2025-10-07 14:14:04.009 2 INFO nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Deleting local config drive /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/disk.config because it was imported into RBD.#033[00m
Oct  7 10:14:04 np0005473739 kernel: tapfdcb59f4-9f: entered promiscuous mode
Oct  7 10:14:04 np0005473739 NetworkManager[44949]: <info>  [1759846444.0752] manager: (tapfdcb59f4-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/182)
Oct  7 10:14:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:04Z|00384|binding|INFO|Claiming lport fdcb59f4-9f89-4147-941b-a28bfa0621bf for this chassis.
Oct  7 10:14:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:04Z|00385|binding|INFO|fdcb59f4-9f89-4147-941b-a28bfa0621bf: Claiming fa:16:3e:65:98:6b 10.100.0.13
Oct  7 10:14:04 np0005473739 nova_compute[259550]: 2025-10-07 14:14:04.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.091 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:98:6b 10.100.0.13'], port_security=['fa:16:3e:65:98:6b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '206b777f-07ac-463f-aac7-dcc9bac5a7aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fdcb59f4-9f89-4147-941b-a28bfa0621bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.092 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fdcb59f4-9f89-4147-941b-a28bfa0621bf in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.094 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c3dcce0a-3912-4b6f-b15b-c5b1528bb5cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.112 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1d9f332-f1 in ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.115 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1d9f332-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.115 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99aeea4c-051b-441b-b326-c84d94e92f65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.116 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bfd26903-f461-4e64-a795-c9a98d280a2f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 systemd-machined[214580]: New machine qemu-56-instance-00000031.
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.131 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[66fc4cb9-dd5e-406a-95ae-1ee98a46e9ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 systemd[1]: Started Virtual Machine qemu-56-instance-00000031.
Oct  7 10:14:04 np0005473739 nova_compute[259550]: 2025-10-07 14:14:04.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.160 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[47eb35fd-bb19-4e5d-a0e7-3b0e33fc0b53]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:04Z|00386|binding|INFO|Setting lport fdcb59f4-9f89-4147-941b-a28bfa0621bf ovn-installed in OVS
Oct  7 10:14:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:04Z|00387|binding|INFO|Setting lport fdcb59f4-9f89-4147-941b-a28bfa0621bf up in Southbound
Oct  7 10:14:04 np0005473739 systemd-udevd[314509]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:14:04 np0005473739 nova_compute[259550]: 2025-10-07 14:14:04.166 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:04 np0005473739 NetworkManager[44949]: <info>  [1759846444.1834] device (tapfdcb59f4-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:14:04 np0005473739 NetworkManager[44949]: <info>  [1759846444.1868] device (tapfdcb59f4-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.193 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[224985a4-7409-41d1-bf37-d8d3cb235ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 systemd-udevd[314513]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.199 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[00cb15ba-f1ac-43e7-b41e-ff9297fdfd5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 NetworkManager[44949]: <info>  [1759846444.2004] manager: (tapb1d9f332-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/183)
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.234 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a0401777-6bf1-4a29-9b38-faa5cc955966]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.237 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[47709525-fee0-4b40-a22f-df8f1a03a664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 NetworkManager[44949]: <info>  [1759846444.2624] device (tapb1d9f332-f0): carrier: link connected
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.270 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3a62b831-5b24-4726-bf58-e64758f1063a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.292 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[083fd49e-c354-47bd-9c34-54d762b53780]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 314539, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.312 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12698e4f-8132-41fe-a18f-b26ea9c59d9c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:be96'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702782, 'tstamp': 702782}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 314540, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.332 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c07a3c84-2f24-4a8e-8292-b5f2100ebd2c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 314541, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.371 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[367ef44c-1c3e-41dd-adbe-d0eae43611c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.425 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c34c97bc-8baf-4f8a-b031-7b2a05138ec0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.427 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.427 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.428 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:04 np0005473739 kernel: tapb1d9f332-f0: entered promiscuous mode
Oct  7 10:14:04 np0005473739 NetworkManager[44949]: <info>  [1759846444.4310] manager: (tapb1d9f332-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/184)
Oct  7 10:14:04 np0005473739 nova_compute[259550]: 2025-10-07 14:14:04.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.433 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:04Z|00388|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:14:04 np0005473739 nova_compute[259550]: 2025-10-07 14:14:04.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:04 np0005473739 nova_compute[259550]: 2025-10-07 14:14:04.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.450 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.451 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e70ee2e1-1faf-4846-95ef-91e6666e4f5c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.452 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:14:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:04.453 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'env', 'PROCESS_TAG=haproxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1d9f332-f920-4d6e-8e91-dd13ec334d51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:14:04 np0005473739 podman[314573]: 2025-10-07 14:14:04.829029233 +0000 UTC m=+0.060305476 container create 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:14:04 np0005473739 systemd[1]: Started libpod-conmon-61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327.scope.
Oct  7 10:14:04 np0005473739 podman[314573]: 2025-10-07 14:14:04.799470546 +0000 UTC m=+0.030746809 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:14:04 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:14:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/16461d24c3ac066e546c480d661cadac4a5387b7af213607c5bfba84e3d1e3d1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:04 np0005473739 podman[314573]: 2025-10-07 14:14:04.930023057 +0000 UTC m=+0.161299300 container init 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:14:04 np0005473739 podman[314573]: 2025-10-07 14:14:04.938127861 +0000 UTC m=+0.169404104 container start 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 10:14:04 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [NOTICE]   (314592) : New worker (314594) forked
Oct  7 10:14:04 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [NOTICE]   (314592) : Loading success.
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.111 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.113 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.129 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.210 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.211 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.222 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.223 2 INFO nova.compute.claims [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.331 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.651 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846445.6508718, eb8777f3-5daa-49c7-8994-687012f20453 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.652 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] VM Started (Lifecycle Event)
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.673 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.678 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846445.6510162, eb8777f3-5daa-49c7-8994-687012f20453 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.678 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] VM Paused (Lifecycle Event)
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.697 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.700 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:14:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1485: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.719 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:14:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:14:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4276450104' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.818 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.824 2 DEBUG nova.compute.provider_tree [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.841 2 DEBUG nova.scheduler.client.report [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.868 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.868 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:14:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:05.887 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.907 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.908 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.930 2 INFO nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:14:05 np0005473739 nova_compute[259550]: 2025-10-07 14:14:05.951 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:14:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.036 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.037 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.038 2 INFO nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Creating image(s)
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.059 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.084 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.105 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.109 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.200 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.201 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.202 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.202 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.242 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.246 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:14:06 np0005473739 nova_compute[259550]: 2025-10-07 14:14:06.593 2 DEBUG nova.policy [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39e4681256e44d92ac5928e4f8e0d348', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef9390a1dd804281beea149e0086b360', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:14:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1486: 305 pgs: 305 active+clean; 88 MiB data, 455 MiB used, 60 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  7 10:14:07 np0005473739 nova_compute[259550]: 2025-10-07 14:14:07.735 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:14:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:14:07 np0005473739 nova_compute[259550]: 2025-10-07 14:14:07.825 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] resizing rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:14:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:14:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:14:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.123 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.124 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.142 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.207 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.208 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.215 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.215 2 INFO nova.compute.claims [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.354 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.451 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Successfully created port: 432c69dd-fb1b-432b-b867-9fe29716430d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:14:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:14:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3429720248' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.837 2 DEBUG nova.objects.instance [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'migration_context' on Instance uuid fc163bed-856c-4ea5-9bf3-6989fb1027eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.853 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.854 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Ensure instance console log exists: /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.854 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.854 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.855 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.858 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.865 2 DEBUG nova.compute.provider_tree [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.883 2 DEBUG nova.scheduler.client.report [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:14:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:14:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.911 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.912 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.962 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.962 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:14:08 np0005473739 nova_compute[259550]: 2025-10-07 14:14:08.992 2 INFO nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.011 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.021 2 DEBUG nova.compute.manager [req-25a81aa7-db77-47e9-b766-1f673677afc2 req-e3c96ab4-dc9f-4fa6-8dc0-4dae7d07815e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.022 2 DEBUG oslo_concurrency.lockutils [req-25a81aa7-db77-47e9-b766-1f673677afc2 req-e3c96ab4-dc9f-4fa6-8dc0-4dae7d07815e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.022 2 DEBUG oslo_concurrency.lockutils [req-25a81aa7-db77-47e9-b766-1f673677afc2 req-e3c96ab4-dc9f-4fa6-8dc0-4dae7d07815e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.022 2 DEBUG oslo_concurrency.lockutils [req-25a81aa7-db77-47e9-b766-1f673677afc2 req-e3c96ab4-dc9f-4fa6-8dc0-4dae7d07815e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.023 2 DEBUG nova.compute.manager [req-25a81aa7-db77-47e9-b766-1f673677afc2 req-e3c96ab4-dc9f-4fa6-8dc0-4dae7d07815e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Processing event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.024 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.029 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.030 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846449.0294657, eb8777f3-5daa-49c7-8994-687012f20453 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.030 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] VM Resumed (Lifecycle Event)
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.035 2 INFO nova.virt.libvirt.driver [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Instance spawned successfully.#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.036 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.063 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.068 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.068 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.069 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.069 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.070 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.070 2 DEBUG nova.virt.libvirt.driver [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.078 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:14:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c69585fa-af40-416f-b06f-871498756628 does not exist
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:14:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b826e68f-987d-44dd-94e2-3b361c4c29b3 does not exist
Oct  7 10:14:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 175fcdcb-6510-4efa-96b2-8185098a148f does not exist
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.129 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.142 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.143 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.144 2 INFO nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Creating image(s)#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.167 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.207 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.248 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.257 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.302 2 DEBUG nova.policy [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fae6fe12a4234cd28439c010bdf3e497', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '304ddc5a55af455ba608d37c37f217aa', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.309 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Successfully updated port: 432c69dd-fb1b-432b-b867-9fe29716430d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.314 2 INFO nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Took 14.84 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.314 2 DEBUG nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.317 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.317 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.343 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.345 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.345 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.346 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.369 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.373 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f95ef1e3-b526-4516-a982-e1e415c5a657_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.412 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.413 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquired lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.413 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.414 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.425 2 INFO nova.compute.manager [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Took 15.88 seconds to build instance.#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.457 2 DEBUG oslo_concurrency.lockutils [None req-b8074e19-25c5-4a8e-ba9e-e28b1da56404 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.975s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.486 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.487 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.493 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.493 2 INFO nova.compute.claims [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.609 2 DEBUG nova.compute.manager [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-changed-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.609 2 DEBUG nova.compute.manager [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Refreshing instance network info cache due to event network-changed-432c69dd-fb1b-432b-b867-9fe29716430d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.609 2 DEBUG oslo_concurrency.lockutils [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.660 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.665 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1487: 305 pgs: 305 active+clean; 119 MiB data, 469 MiB used, 60 GiB / 60 GiB avail; 33 KiB/s rd, 2.8 MiB/s wr, 48 op/s
Oct  7 10:14:09 np0005473739 podman[315342]: 2025-10-07 14:14:09.800733537 +0000 UTC m=+0.068676757 container create 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:14:09 np0005473739 systemd[1]: Started libpod-conmon-78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418.scope.
Oct  7 10:14:09 np0005473739 podman[315342]: 2025-10-07 14:14:09.760600462 +0000 UTC m=+0.028543712 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.878 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f95ef1e3-b526-4516-a982-e1e415c5a657_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:14:09 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:14:09 np0005473739 podman[315342]: 2025-10-07 14:14:09.908032407 +0000 UTC m=+0.175975657 container init 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 10:14:09 np0005473739 podman[315342]: 2025-10-07 14:14:09.918132963 +0000 UTC m=+0.186076183 container start 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:14:09 np0005473739 podman[315342]: 2025-10-07 14:14:09.922305622 +0000 UTC m=+0.190248862 container attach 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:14:09 np0005473739 silly_perlman[315376]: 167 167
Oct  7 10:14:09 np0005473739 systemd[1]: libpod-78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418.scope: Deactivated successfully.
Oct  7 10:14:09 np0005473739 podman[315342]: 2025-10-07 14:14:09.927053898 +0000 UTC m=+0.194997128 container died 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:14:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c76436bcab89d0c24a4362bb4b313e4194d8edd260dde816852289fa66db7bca-merged.mount: Deactivated successfully.
Oct  7 10:14:09 np0005473739 podman[315342]: 2025-10-07 14:14:09.971572158 +0000 UTC m=+0.239515378 container remove 78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_perlman, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:14:09 np0005473739 systemd[1]: libpod-conmon-78193461f0ffbac3b872ec85c1f8e2e47182f28108f5accb9e7c0fb01a58e418.scope: Deactivated successfully.
Oct  7 10:14:09 np0005473739 nova_compute[259550]: 2025-10-07 14:14:09.988 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] resizing rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.098 2 DEBUG nova.objects.instance [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'migration_context' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.115 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.116 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Ensure instance console log exists: /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.116 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.116 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.117 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.160 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Successfully created port: fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:14:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:14:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2429707016' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:14:10 np0005473739 podman[315471]: 2025-10-07 14:14:10.19308661 +0000 UTC m=+0.047847658 container create 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.201 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.219 2 DEBUG nova.compute.provider_tree [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:14:10 np0005473739 systemd[1]: Started libpod-conmon-37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2.scope.
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.245 2 DEBUG nova.scheduler.client.report [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:14:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:14:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.269 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.270 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:14:10 np0005473739 podman[315471]: 2025-10-07 14:14:10.173124705 +0000 UTC m=+0.027885773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:14:10 np0005473739 podman[315471]: 2025-10-07 14:14:10.288730174 +0000 UTC m=+0.143491252 container init 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:14:10 np0005473739 podman[315471]: 2025-10-07 14:14:10.297678299 +0000 UTC m=+0.152439337 container start 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:14:10 np0005473739 podman[315471]: 2025-10-07 14:14:10.301218172 +0000 UTC m=+0.155979240 container attach 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.316 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.317 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.335 2 INFO nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:14:10 np0005473739 podman[315488]: 2025-10-07 14:14:10.337709972 +0000 UTC m=+0.094881956 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.355 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:14:10 np0005473739 podman[315491]: 2025-10-07 14:14:10.377620451 +0000 UTC m=+0.134283861 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.459 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.460 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.461 2 INFO nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Creating image(s)#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.487 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.513 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.541 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.546 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.591 2 DEBUG nova.network.neutron [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updating instance_info_cache with network_info: [{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.594 2 DEBUG nova.policy [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ed6b624fb80d4d4d9b897c788b614297', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'daaa10f40e014711ba0819e5a5b251c7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.617 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Releasing lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.617 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Instance network_info: |[{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.618 2 DEBUG oslo_concurrency.lockutils [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.618 2 DEBUG nova.network.neutron [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Refreshing network info cache for port 432c69dd-fb1b-432b-b867-9fe29716430d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.623 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Start _get_guest_xml network_info=[{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.640 2 WARNING nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.643 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.644 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.645 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.645 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.674 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.679 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6be00afd-ab65-48db-a575-23a285419e60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.735 2 DEBUG nova.virt.libvirt.host [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.736 2 DEBUG nova.virt.libvirt.host [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.741 2 DEBUG nova.virt.libvirt.host [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.742 2 DEBUG nova.virt.libvirt.host [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.743 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.744 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.744 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.744 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.745 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.745 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.745 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.745 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.746 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.746 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.746 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.746 2 DEBUG nova.virt.hardware [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:14:10 np0005473739 nova_compute[259550]: 2025-10-07 14:14:10.750 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.150 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6be00afd-ab65-48db-a575-23a285419e60_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.224 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] resizing rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:14:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:14:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1111425089' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.264 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.285 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.290 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.381 2 DEBUG nova.objects.instance [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lazy-loading 'migration_context' on Instance uuid 6be00afd-ab65-48db-a575-23a285419e60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.400 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.401 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Ensure instance console log exists: /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.401 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.402 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.403 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:11 np0005473739 fervent_tharp[315492]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:14:11 np0005473739 fervent_tharp[315492]: --> relative data size: 1.0
Oct  7 10:14:11 np0005473739 fervent_tharp[315492]: --> All data devices are unavailable
Oct  7 10:14:11 np0005473739 systemd[1]: libpod-37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2.scope: Deactivated successfully.
Oct  7 10:14:11 np0005473739 systemd[1]: libpod-37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2.scope: Consumed 1.066s CPU time.
Oct  7 10:14:11 np0005473739 conmon[315492]: conmon 37d3c3491ccd6295b1e8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2.scope/container/memory.events
Oct  7 10:14:11 np0005473739 podman[315471]: 2025-10-07 14:14:11.48582089 +0000 UTC m=+1.340581938 container died 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:14:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-20bf481602154f55f09507a4dadd5d5d6e5b81c48d5b328006ef755f05734261-merged.mount: Deactivated successfully.
Oct  7 10:14:11 np0005473739 podman[315471]: 2025-10-07 14:14:11.553117109 +0000 UTC m=+1.407878157 container remove 37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_tharp, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 10:14:11 np0005473739 systemd[1]: libpod-conmon-37d3c3491ccd6295b1e8a3017d741f5db72fc9d3d415258a06bce5b5c196f5c2.scope: Deactivated successfully.
Oct  7 10:14:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1488: 305 pgs: 305 active+clean; 157 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 776 KiB/s rd, 3.4 MiB/s wr, 92 op/s
Oct  7 10:14:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:14:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3250958552' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.806 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.807 2 DEBUG nova.virt.libvirt.vif [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-307445853',display_name='tempest-ServerActionsTestOtherA-server-307445853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-307445853',id=50,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+9Z65pTClTOFGfwBQwoBDEk0wDdHVeNmjMfU680t6jhHHvju/LmHnN+5TGqyxhWrME7/S2SBjWiIYsOdkRBZmw+292d2qkOy0bnGNB53h//Xfe51NNgLX77Oc4GTlk5Q==',key_name='tempest-keypair-1536550939',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-n9ekxyu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=fc163bed-856c-4ea5-9bf3-6989fb1027eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.808 2 DEBUG nova.network.os_vif_util [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.809 2 DEBUG nova.network.os_vif_util [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.810 2 DEBUG nova.objects.instance [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'pci_devices' on Instance uuid fc163bed-856c-4ea5-9bf3-6989fb1027eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.824 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  <uuid>fc163bed-856c-4ea5-9bf3-6989fb1027eb</uuid>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  <name>instance-00000032</name>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerActionsTestOtherA-server-307445853</nova:name>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:14:10</nova:creationTime>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <nova:user uuid="39e4681256e44d92ac5928e4f8e0d348">tempest-ServerActionsTestOtherA-508284156-project-member</nova:user>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <nova:project uuid="ef9390a1dd804281beea149e0086b360">tempest-ServerActionsTestOtherA-508284156</nova:project>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <nova:port uuid="432c69dd-fb1b-432b-b867-9fe29716430d">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <entry name="serial">fc163bed-856c-4ea5-9bf3-6989fb1027eb</entry>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <entry name="uuid">fc163bed-856c-4ea5-9bf3-6989fb1027eb</entry>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:a6:54:7c"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <target dev="tap432c69dd-fb"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/console.log" append="off"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:14:11 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:14:11 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:14:11 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:14:11 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.825 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Preparing to wait for external event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.825 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.826 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.826 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
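Editor's note: the prepare/acquire/release sequence above is nova's external-event machinery. Before plugging the VIF, the compute manager registers (under a per-instance `-events` lock) that it expects `network-vif-plugged` from Neutron, then waits on that registration after the plug. A toy model of the pattern, not nova's actual classes (nova uses oslo.concurrency `lockutils` and eventlet primitives; the names here are illustrative):

```python
import threading

class InstanceEvents:
    """Toy model of nova.compute.manager.InstanceEvents: register an
    expected external event, then let the event notifier complete it."""

    def __init__(self):
        self._lock = threading.Lock()   # stands in for the "<uuid>-events" lock in the log
        self._events = {}               # (instance_uuid, event_name) -> threading.Event

    def prepare_for_instance_event(self, instance_uuid, event_name):
        # "Preparing to wait for external event ..." in the log
        with self._lock:
            key = (instance_uuid, event_name)
            ev = self._events.setdefault(key, threading.Event())
        return ev                       # caller waits on this after plugging the VIF

    def pop_instance_event(self, instance_uuid, event_name):
        # Called when Neutron's event arrives; returns False when nothing
        # was registered -> "No waiting events found dispatching ..."
        with self._lock:
            ev = self._events.pop((instance_uuid, event_name), None)
        if ev is None:
            return False
        ev.set()                        # wakes the spawning thread
        return True
```

Registering *before* plugging closes the race where Neutron delivers the event while the plug call is still in flight.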
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.827 2 DEBUG nova.virt.libvirt.vif [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-307445853',display_name='tempest-ServerActionsTestOtherA-server-307445853',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-307445853',id=50,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+9Z65pTClTOFGfwBQwoBDEk0wDdHVeNmjMfU680t6jhHHvju/LmHnN+5TGqyxhWrME7/S2SBjWiIYsOdkRBZmw+292d2qkOy0bnGNB53h//Xfe51NNgLX77Oc4GTlk5Q==',key_name='tempest-keypair-1536550939',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-n9ekxyu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:05Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=fc163bed-856c-4ea5-9bf3-6989fb1027eb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.828 2 DEBUG nova.network.os_vif_util [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.829 2 DEBUG nova.network.os_vif_util [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.830 2 DEBUG os_vif [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.831 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.838 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap432c69dd-fb, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.839 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap432c69dd-fb, col_values=(('external_ids', {'iface-id': '432c69dd-fb1b-432b-b867-9fe29716430d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a6:54:7c', 'vm-uuid': 'fc163bed-856c-4ea5-9bf3-6989fb1027eb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
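Editor's note: the two-command transaction above (AddPortCommand then DbSetCommand) is os-vif attaching the tap device to `br-int` and stamping the Interface row's `external_ids`; OVN's controller matches on `iface-id` to bind the logical port. A hedged sketch of just the `external_ids` payload shown in the DbSetCommand (the helper name is hypothetical; the real calls go through ovsdbapp's Open vSwitch API):

```python
def build_external_ids(port_id, mac, instance_uuid):
    """Assemble the Interface external_ids that os-vif writes in the
    DbSetCommand above; ovn-controller matches iface-id against the
    Neutron port UUID to bind the port."""
    return {
        "iface-id": port_id,        # Neutron port UUID
        "iface-status": "active",
        "attached-mac": mac,        # guest MAC from the port
        "vm-uuid": instance_uuid,   # nova instance UUID
    }
```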
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:11 np0005473739 NetworkManager[44949]: <info>  [1759846451.8431] manager: (tap432c69dd-fb): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/185)
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:11 np0005473739 nova_compute[259550]: 2025-10-07 14:14:11.851 2 INFO os_vif [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb')#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.027 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.028 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.028 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No VIF found with MAC fa:16:3e:a6:54:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.029 2 INFO nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Using config drive#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.054 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.062 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Successfully updated port: fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.087 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.087 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquired lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.088 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.091 2 DEBUG nova.compute.manager [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.091 2 DEBUG oslo_concurrency.lockutils [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.092 2 DEBUG oslo_concurrency.lockutils [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.092 2 DEBUG oslo_concurrency.lockutils [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.092 2 DEBUG nova.compute.manager [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.092 2 WARNING nova.compute.manager [req-e5739303-3846-44c6-a490-88a30f1dfc1b req-767cb2e8-f0fe-41d4-8145-e972f036085c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf for instance with vm_state active and task_state None.#033[00m
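Editor's note: the WARNING above is benign here. Instance `eb8777f3` is already `active` with no task in flight, so nothing had registered a waiter for its `network-vif-plugged` event and the dispatch found no one to wake. The event tag itself encodes the port UUID after the event name; an illustrative parse (not nova's implementation, which carries name and tag separately):

```python
def parse_external_event(tag):
    """Split an external-event tag such as
    'network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d'
    into (event_name, port_uuid). UUIDs are a fixed 36 characters,
    so split from the right rather than on hyphens."""
    name, port_uuid = tag[:-37], tag[-36:]
    return name, port_uuid
```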
Oct  7 10:14:12 np0005473739 podman[315964]: 2025-10-07 14:14:12.273851565 +0000 UTC m=+0.055639324 container create 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 10:14:12 np0005473739 podman[315964]: 2025-10-07 14:14:12.241601237 +0000 UTC m=+0.023389006 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:14:12 np0005473739 systemd[1]: Started libpod-conmon-6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff.scope.
Oct  7 10:14:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.397 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:14:12 np0005473739 podman[315964]: 2025-10-07 14:14:12.41863437 +0000 UTC m=+0.200422149 container init 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:14:12 np0005473739 podman[315964]: 2025-10-07 14:14:12.429450854 +0000 UTC m=+0.211238603 container start 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 10:14:12 np0005473739 podman[315964]: 2025-10-07 14:14:12.436477189 +0000 UTC m=+0.218264938 container attach 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 10:14:12 np0005473739 bold_lalande[315980]: 167 167
Oct  7 10:14:12 np0005473739 systemd[1]: libpod-6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff.scope: Deactivated successfully.
Oct  7 10:14:12 np0005473739 conmon[315980]: conmon 6daa10359846b8b0ba5e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff.scope/container/memory.events
Oct  7 10:14:12 np0005473739 podman[315964]: 2025-10-07 14:14:12.443577726 +0000 UTC m=+0.225365475 container died 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:14:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1728b64334aca8f854e9dceb515aece9ad7b80e898a332c41d48ac1660d10593-merged.mount: Deactivated successfully.
Oct  7 10:14:12 np0005473739 podman[315964]: 2025-10-07 14:14:12.487344876 +0000 UTC m=+0.269132625 container remove 6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_lalande, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 10:14:12 np0005473739 systemd[1]: libpod-conmon-6daa10359846b8b0ba5ed4f7142db98b4a0266905eeb4ada35a1fc63e3e451ff.scope: Deactivated successfully.
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.655 2 INFO nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Creating config drive at /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.661 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2bl1ps_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
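Editor's note: the subprocess line above is nova building the config drive: a Joliet + Rock Ridge ISO labelled `config-2`, the volume label cloud-init searches for inside the guest. A sketch reconstructing that argv (the helper name is hypothetical; the flags mirror the logged command):

```python
def configdrive_mkisofs_cmd(iso_path, staging_dir, publisher):
    """Rebuild the mkisofs invocation logged above: write the staged
    metadata tree into an ISO9660 image with Joliet (-J) and Rock
    Ridge (-r) extensions and the 'config-2' label cloud-init expects."""
    return [
        "/usr/bin/mkisofs",
        "-o", iso_path,                 # output image, e.g. .../disk.config
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,
        "-quiet", "-J", "-r",
        "-V", "config-2",               # volume label cloud-init looks up
        staging_dir,                    # tmpdir holding the metadata tree
    ]
```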
Oct  7 10:14:12 np0005473739 podman[316004]: 2025-10-07 14:14:12.68456776 +0000 UTC m=+0.059236998 container create a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:14:12 np0005473739 systemd[1]: Started libpod-conmon-a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754.scope.
Oct  7 10:14:12 np0005473739 podman[316004]: 2025-10-07 14:14:12.655897317 +0000 UTC m=+0.030566615 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.759 2 DEBUG nova.compute.manager [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-changed-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.759 2 DEBUG nova.compute.manager [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Refreshing instance network info cache due to event network-changed-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.760 2 DEBUG oslo_concurrency.lockutils [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:14:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4b814b092bf18a38c19c193c033b731a33f59ee0b6a1a313c1acb9777cdcb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4b814b092bf18a38c19c193c033b731a33f59ee0b6a1a313c1acb9777cdcb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4b814b092bf18a38c19c193c033b731a33f59ee0b6a1a313c1acb9777cdcb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff4b814b092bf18a38c19c193c033b731a33f59ee0b6a1a313c1acb9777cdcb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:12 np0005473739 podman[316004]: 2025-10-07 14:14:12.801900215 +0000 UTC m=+0.176569483 container init a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:14:12 np0005473739 podman[316004]: 2025-10-07 14:14:12.810459919 +0000 UTC m=+0.185129157 container start a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:14:12 np0005473739 podman[316004]: 2025-10-07 14:14:12.814207638 +0000 UTC m=+0.188876896 container attach a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.979 2 DEBUG nova.network.neutron [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updated VIF entry in instance network info cache for port 432c69dd-fb1b-432b-b867-9fe29716430d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.980 2 DEBUG nova.network.neutron [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updating instance_info_cache with network_info: [{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.984 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpg2bl1ps_" returned: 0 in 0.323s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:12 np0005473739 nova_compute[259550]: 2025-10-07 14:14:12.985 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Successfully created port: e4ca2e16-9053-460a-9aee-cd724306083e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.010 2 DEBUG nova.storage.rbd_utils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.013 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.057 2 DEBUG oslo_concurrency.lockutils [req-ff9c4cc8-37c4-4ec9-97e6-0046d6565106 req-74818ab4-0315-48ef-aab2-b5eafe6f3ef2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.189 2 DEBUG oslo_concurrency.processutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config fc163bed-856c-4ea5-9bf3-6989fb1027eb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:13 np0005473739 NetworkManager[44949]: <info>  [1759846453.1912] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/186)
Oct  7 10:14:13 np0005473739 NetworkManager[44949]: <info>  [1759846453.1927] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/187)
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.190 2 INFO nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Deleting local config drive /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb/disk.config because it was imported into RBD.#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:13 np0005473739 NetworkManager[44949]: <info>  [1759846453.2741] manager: (tap432c69dd-fb): new Tun device (/org/freedesktop/NetworkManager/Devices/188)
Oct  7 10:14:13 np0005473739 systemd-udevd[316075]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:14:13 np0005473739 systemd-machined[214580]: New machine qemu-57-instance-00000032.
Oct  7 10:14:13 np0005473739 systemd[1]: Started Virtual Machine qemu-57-instance-00000032.
Oct  7 10:14:13 np0005473739 kernel: tap432c69dd-fb: entered promiscuous mode
Oct  7 10:14:13 np0005473739 NetworkManager[44949]: <info>  [1759846453.4108] device (tap432c69dd-fb): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:14:13 np0005473739 NetworkManager[44949]: <info>  [1759846453.4134] device (tap432c69dd-fb): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:13Z|00389|binding|INFO|Claiming lport 432c69dd-fb1b-432b-b867-9fe29716430d for this chassis.
Oct  7 10:14:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:13Z|00390|binding|INFO|432c69dd-fb1b-432b-b867-9fe29716430d: Claiming fa:16:3e:a6:54:7c 10.100.0.7
Oct  7 10:14:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:13Z|00391|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.448 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:54:7c 10.100.0.7'], port_security=['fa:16:3e:a6:54:7c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fc163bed-856c-4ea5-9bf3-6989fb1027eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0a36afbb-3eb8-42d9-b597-25ad85279a69', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=432c69dd-fb1b-432b-b867-9fe29716430d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.449 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 432c69dd-fb1b-432b-b867-9fe29716430d in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb bound to our chassis#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.451 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb#033[00m
Oct  7 10:14:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:13Z|00392|binding|INFO|Setting lport 432c69dd-fb1b-432b-b867-9fe29716430d ovn-installed in OVS
Oct  7 10:14:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:13Z|00393|binding|INFO|Setting lport 432c69dd-fb1b-432b-b867-9fe29716430d up in Southbound
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.466 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66561416-181d-42d9-92ba-820c25f96c97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.467 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7ba9d553-b1 in ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.469 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7ba9d553-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.469 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[794e375d-b3d1-4499-9cf7-e2ddb9c63905]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f7ce8c8-4664-43ba-a02b-438bb7bb1b8b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.484 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6dccf0-6e44-4804-b70b-59437e86e6bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.512 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a867c225-c0f8-4d3d-bf74-171ef649ac19]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.547 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe047b6-44dc-494f-9ca1-8187db8ebf93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 NetworkManager[44949]: <info>  [1759846453.5558] manager: (tap7ba9d553-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/189)
Oct  7 10:14:13 np0005473739 systemd-udevd[316079]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.557 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[741a8992-d2d2-40b1-afa9-3c32acce5637]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.600 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[93602a8b-96f0-48b8-9858-a6660a6e6e5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.604 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[28b34068-e7de-46b5-94ac-a536af9b1b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 NetworkManager[44949]: <info>  [1759846453.6322] device (tap7ba9d553-b0): carrier: link connected
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.639 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d03ae228-d749-4dd8-9a69-8f30269021cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.659 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8dac0fb0-fbaf-4daa-b13b-b8eac91152d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316113, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb246f6-5841-4a36-9360-f870e73205ff]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee3:7b82'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703719, 'tstamp': 703719}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316116, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1489: 305 pgs: 305 active+clean; 157 MiB data, 489 MiB used, 60 GiB / 60 GiB avail; 768 KiB/s rd, 2.8 MiB/s wr, 83 op/s
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.710 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2939db-b707-4e05-a277-36dbc5df31bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316117, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 reverent_carson[316023]: {
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:    "0": [
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:        {
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "devices": [
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "/dev/loop3"
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            ],
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_name": "ceph_lv0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_size": "21470642176",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "name": "ceph_lv0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "tags": {
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.cluster_name": "ceph",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.crush_device_class": "",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.encrypted": "0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.osd_id": "0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.type": "block",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.vdo": "0"
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            },
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "type": "block",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "vg_name": "ceph_vg0"
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:        }
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:    ],
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:    "1": [
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:        {
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "devices": [
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "/dev/loop4"
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            ],
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_name": "ceph_lv1",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_size": "21470642176",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "name": "ceph_lv1",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "tags": {
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.cluster_name": "ceph",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.crush_device_class": "",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.encrypted": "0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.osd_id": "1",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.type": "block",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.vdo": "0"
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            },
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "type": "block",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "vg_name": "ceph_vg1"
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:        }
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:    ],
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:    "2": [
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:        {
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "devices": [
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "/dev/loop5"
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            ],
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_name": "ceph_lv2",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_size": "21470642176",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "name": "ceph_lv2",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "tags": {
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.cluster_name": "ceph",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.crush_device_class": "",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.encrypted": "0",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.osd_id": "2",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.type": "block",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:                "ceph.vdo": "0"
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            },
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "type": "block",
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:            "vg_name": "ceph_vg2"
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:        }
Oct  7 10:14:13 np0005473739 reverent_carson[316023]:    ]
Oct  7 10:14:13 np0005473739 reverent_carson[316023]: }
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.750 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d829443-8c8f-42e9-80ad-6b33ec351581]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 systemd[1]: libpod-a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754.scope: Deactivated successfully.
Oct  7 10:14:13 np0005473739 podman[316004]: 2025-10-07 14:14:13.760143522 +0000 UTC m=+1.134812770 container died a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:14:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ff4b814b092bf18a38c19c193c033b731a33f59ee0b6a1a313c1acb9777cdcb5-merged.mount: Deactivated successfully.
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.822 2 DEBUG nova.network.neutron [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Updating instance_info_cache with network_info: [{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:13 np0005473739 podman[316004]: 2025-10-07 14:14:13.837076384 +0000 UTC m=+1.211745622 container remove a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_carson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.846 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Releasing lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.847 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance network_info: |[{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.848 2 DEBUG oslo_concurrency.lockutils [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.849 2 DEBUG nova.network.neutron [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Refreshing network info cache for port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[14e72c58-6710-4e84-8759-e432eb46d1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.851 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.851 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.852 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.852 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start _get_guest_xml network_info=[{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:13 np0005473739 NetworkManager[44949]: <info>  [1759846453.8550] manager: (tap7ba9d553-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/190)
Oct  7 10:14:13 np0005473739 kernel: tap7ba9d553-b0: entered promiscuous mode
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:13 np0005473739 systemd[1]: libpod-conmon-a638b6f3b8290c90b68289030ef4886e5a1863f8dec2f5a3b180395aa9cd9754.scope: Deactivated successfully.
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.861 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:13Z|00394|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.890 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7ba9d553-bbaa-47f8-8281-6a74e53c37fb.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7ba9d553-bbaa-47f8-8281-6a74e53c37fb.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.891 2 WARNING nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.891 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3702b8b7-9f22-4b69-b254-94808446ba06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.892 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-7ba9d553-bbaa-47f8-8281-6a74e53c37fb
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/7ba9d553-bbaa-47f8-8281-6a74e53c37fb.pid.haproxy
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 7ba9d553-bbaa-47f8-8281-6a74e53c37fb
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:14:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:13.893 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'env', 'PROCESS_TAG=haproxy-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7ba9d553-bbaa-47f8-8281-6a74e53c37fb.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.898 2 DEBUG nova.virt.libvirt.host [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.900 2 DEBUG nova.virt.libvirt.host [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.903 2 DEBUG nova.virt.libvirt.host [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.904 2 DEBUG nova.virt.libvirt.host [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.905 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.905 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.906 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.906 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.906 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.907 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.907 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.907 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.908 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.908 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.908 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.908 2 DEBUG nova.virt.hardware [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:14:13 np0005473739 nova_compute[259550]: 2025-10-07 14:14:13.912 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.194 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-changed-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.196 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing instance network info cache due to event network-changed-fdcb59f4-9f89-4147-941b-a28bfa0621bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.197 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.197 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.197 2 DEBUG nova.network.neutron [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing network info cache for port fdcb59f4-9f89-4147-941b-a28bfa0621bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:14:14 np0005473739 podman[316322]: 2025-10-07 14:14:14.33985153 +0000 UTC m=+0.054068781 container create f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:14:14 np0005473739 systemd[1]: Started libpod-conmon-f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153.scope.
Oct  7 10:14:14 np0005473739 podman[316322]: 2025-10-07 14:14:14.308391524 +0000 UTC m=+0.022608785 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:14:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:14:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a901171075059a0193ddbd3fb0f9b59df344514fed4a53bc1eb915147968b7ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.433 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846454.433579, fc163bed-856c-4ea5-9bf3-6989fb1027eb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.434 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] VM Started (Lifecycle Event)#033[00m
Oct  7 10:14:14 np0005473739 podman[316322]: 2025-10-07 14:14:14.44710334 +0000 UTC m=+0.161320601 container init f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.453 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:14 np0005473739 podman[316322]: 2025-10-07 14:14:14.457981446 +0000 UTC m=+0.172198687 container start f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.458 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846454.4336905, fc163bed-856c-4ea5-9bf3-6989fb1027eb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.459 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:14:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:14:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3164028786' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:14:14 np0005473739 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [NOTICE]   (316364) : New worker (316368) forked
Oct  7 10:14:14 np0005473739 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [NOTICE]   (316364) : Loading success.
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.486 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.492 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.503 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.529 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.536 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.590 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Successfully updated port: e4ca2e16-9053-460a-9aee-cd724306083e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.592 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.616 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.617 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquired lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.617 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:14:14 np0005473739 podman[316408]: 2025-10-07 14:14:14.631310752 +0000 UTC m=+0.049412990 container create d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:14:14 np0005473739 systemd[1]: Started libpod-conmon-d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce.scope.
Oct  7 10:14:14 np0005473739 podman[316408]: 2025-10-07 14:14:14.616150084 +0000 UTC m=+0.034252342 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:14:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:14:14 np0005473739 podman[316408]: 2025-10-07 14:14:14.758206767 +0000 UTC m=+0.176309025 container init d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:14:14 np0005473739 podman[316408]: 2025-10-07 14:14:14.768603571 +0000 UTC m=+0.186705809 container start d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 10:14:14 np0005473739 podman[316408]: 2025-10-07 14:14:14.772650037 +0000 UTC m=+0.190752295 container attach d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:14:14 np0005473739 quizzical_leakey[316434]: 167 167
Oct  7 10:14:14 np0005473739 systemd[1]: libpod-d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce.scope: Deactivated successfully.
Oct  7 10:14:14 np0005473739 podman[316408]: 2025-10-07 14:14:14.778210313 +0000 UTC m=+0.196312551 container died d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:14:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9f16f68329b92d251204b4cd4926c0be14aa313a60ceab31ae681ced3c63c5cc-merged.mount: Deactivated successfully.
Oct  7 10:14:14 np0005473739 podman[316408]: 2025-10-07 14:14:14.819368875 +0000 UTC m=+0.237471113 container remove d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_leakey, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 10:14:14 np0005473739 systemd[1]: libpod-conmon-d1353874151ef3a0b4e6bb063d26b194d32dfb5f16c99958cef8287156d4a0ce.scope: Deactivated successfully.
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.832 2 DEBUG nova.compute.manager [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-changed-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.835 2 DEBUG nova.compute.manager [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Refreshing instance network info cache due to event network-changed-e4ca2e16-9053-460a-9aee-cd724306083e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.835 2 DEBUG oslo_concurrency.lockutils [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:14 np0005473739 nova_compute[259550]: 2025-10-07 14:14:14.884 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:14:15 np0005473739 podman[316470]: 2025-10-07 14:14:15.028903843 +0000 UTC m=+0.057555635 container create a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 10:14:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:14:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2195307386' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:14:15 np0005473739 systemd[1]: Started libpod-conmon-a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d.scope.
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.073 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.074 2 DEBUG nova.virt.libvirt.vif [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:09Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.075 2 DEBUG nova.network.os_vif_util [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.075 2 DEBUG nova.network.os_vif_util [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.077 2 DEBUG nova.objects.instance [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'pci_devices' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:15 np0005473739 podman[316470]: 2025-10-07 14:14:15.006024131 +0000 UTC m=+0.034675973 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.094 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  <uuid>f95ef1e3-b526-4516-a982-e1e415c5a657</uuid>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  <name>instance-00000033</name>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <nova:name>tempest-InstanceActionsTestJSON-server-1758004192</nova:name>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:14:13</nova:creationTime>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <nova:user uuid="fae6fe12a4234cd28439c010bdf3e497">tempest-InstanceActionsTestJSON-550730255-project-member</nova:user>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <nova:project uuid="304ddc5a55af455ba608d37c37f217aa">tempest-InstanceActionsTestJSON-550730255</nova:project>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <nova:port uuid="fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <entry name="serial">f95ef1e3-b526-4516-a982-e1e415c5a657</entry>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <entry name="uuid">f95ef1e3-b526-4516-a982-e1e415c5a657</entry>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f95ef1e3-b526-4516-a982-e1e415c5a657_disk">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:ed:21:7c"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <target dev="tapfa2c6ead-0b"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/console.log" append="off"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:14:15 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:14:15 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:14:15 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:14:15 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.095 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Preparing to wait for external event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.095 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.096 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.096 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.097 2 DEBUG nova.virt.libvirt.vif [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:09Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.097 2 DEBUG nova.network.os_vif_util [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.098 2 DEBUG nova.network.os_vif_util [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.098 2 DEBUG os_vif [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.099 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.100 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:14:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000db361e285a685b1729a8c273bbb6c1bcce20b402b6ff675d0f4cf74bd2399/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000db361e285a685b1729a8c273bbb6c1bcce20b402b6ff675d0f4cf74bd2399/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000db361e285a685b1729a8c273bbb6c1bcce20b402b6ff675d0f4cf74bd2399/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.106 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa2c6ead-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.107 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa2c6ead-0b, col_values=(('external_ids', {'iface-id': 'fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:21:7c', 'vm-uuid': 'f95ef1e3-b526-4516-a982-e1e415c5a657'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:15 np0005473739 NetworkManager[44949]: <info>  [1759846455.1106] manager: (tapfa2c6ead-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/191)
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:14:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/000db361e285a685b1729a8c273bbb6c1bcce20b402b6ff675d0f4cf74bd2399/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.118 2 INFO os_vif [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b')#033[00m
Oct  7 10:14:15 np0005473739 podman[316470]: 2025-10-07 14:14:15.125607255 +0000 UTC m=+0.154259067 container init a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:14:15 np0005473739 podman[316470]: 2025-10-07 14:14:15.133691417 +0000 UTC m=+0.162343209 container start a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:14:15 np0005473739 podman[316470]: 2025-10-07 14:14:15.138277858 +0000 UTC m=+0.166929670 container attach a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.190 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.191 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.191 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] No VIF found with MAC fa:16:3e:ed:21:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.192 2 INFO nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Using config drive#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.216 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.464 2 DEBUG nova.network.neutron [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Updated VIF entry in instance network info cache for port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.465 2 DEBUG nova.network.neutron [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Updating instance_info_cache with network_info: [{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:15 np0005473739 nova_compute[259550]: 2025-10-07 14:14:15.487 2 DEBUG oslo_concurrency.lockutils [req-6cadcc50-7841-4e83-b592-61a4111753ec req-d815d8be-38ee-43ac-8bec-9f7e74ea4ed9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1490: 305 pgs: 305 active+clean; 216 MiB data, 516 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.1 MiB/s wr, 163 op/s
Oct  7 10:14:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.118 2 INFO nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Creating config drive at /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.124 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbi7aimn4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]: {
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "osd_id": 2,
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "type": "bluestore"
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:    },
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "osd_id": 1,
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "type": "bluestore"
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:    },
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "osd_id": 0,
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:        "type": "bluestore"
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]:    }
Oct  7 10:14:16 np0005473739 admiring_aryabhata[316488]: }
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.169 2 DEBUG nova.network.neutron [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updated VIF entry in instance network info cache for port fdcb59f4-9f89-4147-941b-a28bfa0621bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.170 2 DEBUG nova.network.neutron [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.199 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.200 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.200 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.201 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.201 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.201 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Processing event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.201 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.202 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.202 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.202 2 DEBUG oslo_concurrency.lockutils [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.202 2 DEBUG nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] No waiting events found dispatching network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.203 2 WARNING nova.compute.manager [req-b6b84ae2-a477-47b4-b1c0-91b2e0f150f9 req-fb3d8912-bc0b-4ddc-9a55-7562167dad7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received unexpected event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.204 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:14:16 np0005473739 systemd[1]: libpod-a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d.scope: Deactivated successfully.
Oct  7 10:14:16 np0005473739 systemd[1]: libpod-a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d.scope: Consumed 1.079s CPU time.
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.212 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846456.2114837, fc163bed-856c-4ea5-9bf3-6989fb1027eb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.212 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.215 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.222 2 INFO nova.virt.libvirt.driver [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Instance spawned successfully.#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.222 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.231 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.236 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.247 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.247 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.248 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.248 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.249 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.249 2 DEBUG nova.virt.libvirt.driver [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:16 np0005473739 podman[316545]: 2025-10-07 14:14:16.261277117 +0000 UTC m=+0.024884236 container died a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.277 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpbi7aimn4" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.278 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:14:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-000db361e285a685b1729a8c273bbb6c1bcce20b402b6ff675d0f4cf74bd2399-merged.mount: Deactivated successfully.
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.314 2 DEBUG nova.storage.rbd_utils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] rbd image f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.320 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:16 np0005473739 podman[316545]: 2025-10-07 14:14:16.339348028 +0000 UTC m=+0.102955137 container remove a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_aryabhata, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:14:16 np0005473739 systemd[1]: libpod-conmon-a725f98ea7ec9cda9b7271d9cae917e6d7da8b74d641a6e70a06c72f7373f48d.scope: Deactivated successfully.
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.375 2 INFO nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Took 10.34 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.376 2 DEBUG nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:14:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:14:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:14:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:14:16 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8a96fe33-d2d1-4378-8db5-f7acdc8ffef9 does not exist
Oct  7 10:14:16 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d7175329-fdd1-4e38-b4ba-0b1884d7209e does not exist
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.449 2 INFO nova.compute.manager [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Took 11.27 seconds to build instance.#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.493 2 DEBUG oslo_concurrency.lockutils [None req-a2ecd811-6c70-4bf6-9d3a-5d389f5e4043 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.381s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.525 2 DEBUG oslo_concurrency.processutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.526 2 INFO nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Deleting local config drive /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/disk.config because it was imported into RBD.#033[00m
Oct  7 10:14:16 np0005473739 kernel: tapfa2c6ead-0b: entered promiscuous mode
Oct  7 10:14:16 np0005473739 NetworkManager[44949]: <info>  [1759846456.6035] manager: (tapfa2c6ead-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/192)
Oct  7 10:14:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:16Z|00395|binding|INFO|Claiming lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for this chassis.
Oct  7 10:14:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:16Z|00396|binding|INFO|fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598: Claiming fa:16:3e:ed:21:7c 10.100.0.10
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.622 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:21:7c 10.100.0.10'], port_security=['fa:16:3e:ed:21:7c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f95ef1e3-b526-4516-a982-e1e415c5a657', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '304ddc5a55af455ba608d37c37f217aa', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5e1b758c-b736-4703-bebe-5a0d70f8524d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d731c88f-69f4-4c5d-9133-965e19cd35d1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.624 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 in datapath 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b bound to our chassis#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.625 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b#033[00m
Oct  7 10:14:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:16Z|00397|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 ovn-installed in OVS
Oct  7 10:14:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:16Z|00398|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 up in Southbound
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.641 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfaa19f6-5c49-45de-bafe-ab75ca980ca5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.642 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ce7b2cd-51 in ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.645 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ce7b2cd-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.645 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d91ea4-c6ea-429a-89cd-63f8c64ccd27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f40782e1-7a97-4e9f-b944-00714268ff76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 systemd-udevd[316660]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:14:16 np0005473739 systemd-machined[214580]: New machine qemu-58-instance-00000033.
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.663 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e32de7-7a29-4b04-84b3-54bb9b0bc4bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 NetworkManager[44949]: <info>  [1759846456.6702] device (tapfa2c6ead-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:14:16 np0005473739 NetworkManager[44949]: <info>  [1759846456.6715] device (tapfa2c6ead-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:14:16 np0005473739 systemd[1]: Started Virtual Machine qemu-58-instance-00000033.
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.699 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de4343c7-e3f2-41a7-84c7-057679f1a0b9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.738 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cd714b86-a254-413e-ac36-3e0796ad795a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 NetworkManager[44949]: <info>  [1759846456.7461] manager: (tap4ce7b2cd-50): new Veth device (/org/freedesktop/NetworkManager/Devices/193)
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.744 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7a7aa7-c355-4e00-a9eb-1ec6a345f1c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.787 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5bd1e366-72d6-49bb-a108-584dfef26faa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.792 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[63686403-f11d-4364-9edd-f49bd2c0890d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 NetworkManager[44949]: <info>  [1759846456.8229] device (tap4ce7b2cd-50): carrier: link connected
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.836 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b8645dfb-761d-4388-8fcc-763c4ade81be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.858 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[923d1d48-2fa3-4518-8011-869e76391405]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ce7b2cd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:bc:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704038, 'reachable_time': 43082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316693, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.880 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67d5bcd1-c980-4f82-8703-bc3bf93f9a6f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:bcce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704038, 'tstamp': 704038}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316694, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.903 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ed25dd5c-c018-4d01-aef8-61f906156fd9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ce7b2cd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:bc:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 125], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704038, 'reachable_time': 43082, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316695, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.904 2 DEBUG nova.network.neutron [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updating instance_info_cache with network_info: [{"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:16.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a6f900a2-2207-4525-a6b2-8629d8ca00ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.940 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Releasing lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.940 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Instance network_info: |[{"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.940 2 DEBUG oslo_concurrency.lockutils [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.941 2 DEBUG nova.network.neutron [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Refreshing network info cache for port e4ca2e16-9053-460a-9aee-cd724306083e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.943 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Start _get_guest_xml network_info=[{"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.949 2 WARNING nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.954 2 DEBUG nova.virt.libvirt.host [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.955 2 DEBUG nova.virt.libvirt.host [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.960 2 DEBUG nova.virt.libvirt.host [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.961 2 DEBUG nova.virt.libvirt.host [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.961 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.961 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.962 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.962 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.962 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.963 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.963 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.963 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.963 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.964 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.964 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.964 2 DEBUG nova.virt.hardware [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:14:16 np0005473739 nova_compute[259550]: 2025-10-07 14:14:16.967 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.031 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17078243-955f-4352-89b9-b824637038fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.033 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ce7b2cd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.033 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.034 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ce7b2cd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:17 np0005473739 kernel: tap4ce7b2cd-50: entered promiscuous mode
Oct  7 10:14:17 np0005473739 NetworkManager[44949]: <info>  [1759846457.0369] manager: (tap4ce7b2cd-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/194)
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.040 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ce7b2cd-50, col_values=(('external_ids', {'iface-id': '1909b8a8-f403-4e60-bcbd-3a0491b233a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.041 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:17Z|00399|binding|INFO|Releasing lport 1909b8a8-f403-4e60-bcbd-3a0491b233a9 from this chassis (sb_readonly=0)
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.060 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.062 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d4012384-79ce-402b-9d4f-f7a79614ab07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.063 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:14:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:17.065 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'env', 'PROCESS_TAG=haproxy-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:14:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:14:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:14:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:14:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3567838038' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.516 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.549s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:17 np0005473739 podman[316786]: 2025-10-07 14:14:17.518795001 +0000 UTC m=+0.053699623 container create 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.546 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.556 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:17 np0005473739 systemd[1]: Started libpod-conmon-77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03.scope.
Oct  7 10:14:17 np0005473739 podman[316786]: 2025-10-07 14:14:17.489168163 +0000 UTC m=+0.024072825 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:14:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:14:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b633774cda3ee70e43501224cdb7411d4938ebe0f67121f0bf818e12aae2f79e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:17 np0005473739 podman[316786]: 2025-10-07 14:14:17.605908811 +0000 UTC m=+0.140813443 container init 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct  7 10:14:17 np0005473739 podman[316786]: 2025-10-07 14:14:17.612169216 +0000 UTC m=+0.147073848 container start 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:14:17 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [NOTICE]   (316825) : New worker (316827) forked
Oct  7 10:14:17 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [NOTICE]   (316825) : Loading success.
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.699 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846457.6980968, f95ef1e3-b526-4516-a982-e1e415c5a657 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.700 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Started (Lifecycle Event)#033[00m
Oct  7 10:14:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1491: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.3 MiB/s wr, 161 op/s
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.737 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.748 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846457.7000487, f95ef1e3-b526-4516-a982-e1e415c5a657 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.749 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.811 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.815 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:14:17 np0005473739 nova_compute[259550]: 2025-10-07 14:14:17.832 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:14:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:14:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/397282575' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.040 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.043 2 DEBUG nova.virt.libvirt.vif [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-749882943',display_name='tempest-ServersTestManualDisk-server-749882943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-749882943',id=52,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCrm9pvHnuJnXcyCVc0QhU5t7gBB0n4d3wZnsQ891irt3BRxIDuply0Hxb2qV0GnCSguPcTHy9UPa8LT2jy6C6OH3oPbsOkrpY1oYyFQDRv9Qn+7DlyBAQdOnjMOuLm/AQ==',key_name='tempest-keypair-1254505334',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daaa10f40e014711ba0819e5a5b251c7',ramdisk_id='',reservation_id='r-807vnzq7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-456247214',owner_user_name='tempest-ServersTestManualDisk-456247214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ed6b624fb80d4d4d9b897c788b614297',uuid=6be00afd-ab65-48db-a575-23a285419e60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.045 2 DEBUG nova.network.os_vif_util [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converting VIF {"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.047 2 DEBUG nova.network.os_vif_util [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.050 2 DEBUG nova.objects.instance [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 6be00afd-ab65-48db-a575-23a285419e60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.059 2 DEBUG nova.compute.manager [req-13113647-5444-43aa-8803-30684f7f5297 req-1732e308-5e24-44b8-ac1c-27877c6e6baf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.060 2 DEBUG oslo_concurrency.lockutils [req-13113647-5444-43aa-8803-30684f7f5297 req-1732e308-5e24-44b8-ac1c-27877c6e6baf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.061 2 DEBUG oslo_concurrency.lockutils [req-13113647-5444-43aa-8803-30684f7f5297 req-1732e308-5e24-44b8-ac1c-27877c6e6baf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.061 2 DEBUG oslo_concurrency.lockutils [req-13113647-5444-43aa-8803-30684f7f5297 req-1732e308-5e24-44b8-ac1c-27877c6e6baf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.062 2 DEBUG nova.compute.manager [req-13113647-5444-43aa-8803-30684f7f5297 req-1732e308-5e24-44b8-ac1c-27877c6e6baf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Processing event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.064 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.078 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846458.0767114, f95ef1e3-b526-4516-a982-e1e415c5a657 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.078 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.082 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  <uuid>6be00afd-ab65-48db-a575-23a285419e60</uuid>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  <name>instance-00000034</name>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersTestManualDisk-server-749882943</nova:name>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:14:16</nova:creationTime>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <nova:user uuid="ed6b624fb80d4d4d9b897c788b614297">tempest-ServersTestManualDisk-456247214-project-member</nova:user>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <nova:project uuid="daaa10f40e014711ba0819e5a5b251c7">tempest-ServersTestManualDisk-456247214</nova:project>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <nova:port uuid="e4ca2e16-9053-460a-9aee-cd724306083e">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <entry name="serial">6be00afd-ab65-48db-a575-23a285419e60</entry>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <entry name="uuid">6be00afd-ab65-48db-a575-23a285419e60</entry>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/6be00afd-ab65-48db-a575-23a285419e60_disk">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/6be00afd-ab65-48db-a575-23a285419e60_disk.config">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:17:78:e6"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <target dev="tape4ca2e16-90"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/console.log" append="off"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:14:18 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:14:18 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:14:18 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:14:18 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.082 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Preparing to wait for external event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.082 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.083 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.083 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.083 2 DEBUG nova.virt.libvirt.vif [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-749882943',display_name='tempest-ServersTestManualDisk-server-749882943',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-749882943',id=52,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCrm9pvHnuJnXcyCVc0QhU5t7gBB0n4d3wZnsQ891irt3BRxIDuply0Hxb2qV0GnCSguPcTHy9UPa8LT2jy6C6OH3oPbsOkrpY1oYyFQDRv9Qn+7DlyBAQdOnjMOuLm/AQ==',key_name='tempest-keypair-1254505334',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='daaa10f40e014711ba0819e5a5b251c7',ramdisk_id='',reservation_id='r-807vnzq7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-456247214',owner_user_name='tempest-ServersTestManualDisk-456247214-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:14:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ed6b624fb80d4d4d9b897c788b614297',uuid=6be00afd-ab65-48db-a575-23a285419e60,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.084 2 DEBUG nova.network.os_vif_util [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converting VIF {"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.084 2 DEBUG nova.network.os_vif_util [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.085 2 DEBUG os_vif [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.087 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.087 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape4ca2e16-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.091 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape4ca2e16-90, col_values=(('external_ids', {'iface-id': 'e4ca2e16-9053-460a-9aee-cd724306083e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:78:e6', 'vm-uuid': '6be00afd-ab65-48db-a575-23a285419e60'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:18 np0005473739 NetworkManager[44949]: <info>  [1759846458.0934] manager: (tape4ca2e16-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/195)
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.096 2 INFO nova.virt.libvirt.driver [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance spawned successfully.#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.096 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.101 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.104 2 INFO os_vif [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90')#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.107 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.133 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.138 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.138 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.138 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.139 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.139 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.139 2 DEBUG nova.virt.libvirt.driver [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.170 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.170 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.171 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] No VIF found with MAC fa:16:3e:17:78:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.171 2 INFO nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Using config drive#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.195 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.206 2 INFO nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Took 9.06 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.206 2 DEBUG nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.266 2 INFO nova.compute.manager [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Took 10.08 seconds to build instance.#033[00m
Oct  7 10:14:18 np0005473739 nova_compute[259550]: 2025-10-07 14:14:18.284 2 DEBUG oslo_concurrency.lockutils [None req-f32b244f-5b79-4d07-92da-448fb9e2813a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.080 2 INFO nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Creating config drive at /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.087 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53luq7d5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.248 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53luq7d5" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.280 2 DEBUG nova.storage.rbd_utils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] rbd image 6be00afd-ab65-48db-a575-23a285419e60_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.285 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config 6be00afd-ab65-48db-a575-23a285419e60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.488 2 DEBUG oslo_concurrency.processutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config 6be00afd-ab65-48db-a575-23a285419e60_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.203s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.489 2 INFO nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Deleting local config drive /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60/disk.config because it was imported into RBD.#033[00m
Oct  7 10:14:19 np0005473739 kernel: tape4ca2e16-90: entered promiscuous mode
Oct  7 10:14:19 np0005473739 NetworkManager[44949]: <info>  [1759846459.5395] manager: (tape4ca2e16-90): new Tun device (/org/freedesktop/NetworkManager/Devices/196)
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:19Z|00400|binding|INFO|Claiming lport e4ca2e16-9053-460a-9aee-cd724306083e for this chassis.
Oct  7 10:14:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:19Z|00401|binding|INFO|e4ca2e16-9053-460a-9aee-cd724306083e: Claiming fa:16:3e:17:78:e6 10.100.0.11
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.552 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:78:e6 10.100.0.11'], port_security=['fa:16:3e:17:78:e6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6be00afd-ab65-48db-a575-23a285419e60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daaa10f40e014711ba0819e5a5b251c7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad649100-f9d9-4b25-b07d-f2fec1bfad18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18c2539a-e2f1-4e57-8c40-f76b6e8c17e8, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e4ca2e16-9053-460a-9aee-cd724306083e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.554 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e4ca2e16-9053-460a-9aee-cd724306083e in datapath d1e4e3b1-c955-4533-8a61-49b547446a5a bound to our chassis#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.559 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d1e4e3b1-c955-4533-8a61-49b547446a5a#033[00m
Oct  7 10:14:19 np0005473739 NetworkManager[44949]: <info>  [1759846459.5650] device (tape4ca2e16-90): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:14:19 np0005473739 NetworkManager[44949]: <info>  [1759846459.5663] device (tape4ca2e16-90): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:14:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:19Z|00402|binding|INFO|Setting lport e4ca2e16-9053-460a-9aee-cd724306083e ovn-installed in OVS
Oct  7 10:14:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:19Z|00403|binding|INFO|Setting lport e4ca2e16-9053-460a-9aee-cd724306083e up in Southbound
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.579 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c8584687-418f-4daf-8600-f420029030a9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.581 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd1e4e3b1-c1 in ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.583 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd1e4e3b1-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.583 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de65ed5b-2657-48b6-8e28-6e73e0de54ef]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1554862-a6a9-4bab-ac05-9bc33c4670ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.602 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c19cc9b2-f436-42cc-aaa9-b17f2f628ba5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 systemd-machined[214580]: New machine qemu-59-instance-00000034.
Oct  7 10:14:19 np0005473739 systemd[1]: Started Virtual Machine qemu-59-instance-00000034.
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[72c75a41-34c4-4522-973a-bbe560eb4807]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.677 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8e22c610-1d51-4d27-a626-08baf7e5e36c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 NetworkManager[44949]: <info>  [1759846459.6853] manager: (tapd1e4e3b1-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/197)
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ec423368-e8cc-41c5-9a6d-b3bddc42d5da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1492: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 5.3 MiB/s wr, 223 op/s
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.711 2 DEBUG nova.network.neutron [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updated VIF entry in instance network info cache for port e4ca2e16-9053-460a-9aee-cd724306083e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.712 2 DEBUG nova.network.neutron [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updating instance_info_cache with network_info: [{"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:19 np0005473739 systemd-udevd[316945]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.729 2 DEBUG oslo_concurrency.lockutils [req-4bd6a682-8e33-4143-a05c-3ff448af5dff req-0060a0bc-53a6-4f12-8a45-c1a04ff78169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.744 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ba54c48d-b02d-43b8-a9b2-5b624ef69f50]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.764 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b440a99e-9320-4453-9246-bd5cecbb32dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 NetworkManager[44949]: <info>  [1759846459.7935] device (tapd1e4e3b1-c0): carrier: link connected
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.795 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.796 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.796 2 INFO nova.compute.manager [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Rebooting instance#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.800 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[db91fdf8-1470-48a9-b63a-9fc364796c2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.821 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[38880ae6-4ce2-4104-aa9f-cedc4444d145]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd1e4e3b1-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:64:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704335, 'reachable_time': 19780, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 316964, 'error': None, 'target': 'ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.840 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.841 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquired lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.841 2 DEBUG nova.network.neutron [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.843 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9ecee968-37f5-4c34-ac18-4d35bd5984f5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb6:64c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704335, 'tstamp': 704335}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 316965, 'error': None, 'target': 'ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.866 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[62aad6da-9ef8-4d2c-a218-5a88e55d6577]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd1e4e3b1-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b6:64:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 127], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704335, 'reachable_time': 19780, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 316966, 'error': None, 'target': 'ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.904 2 DEBUG nova.compute.manager [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-changed-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.904 2 DEBUG nova.compute.manager [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Refreshing instance network info cache due to event network-changed-432c69dd-fb1b-432b-b867-9fe29716430d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.905 2 DEBUG oslo_concurrency.lockutils [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.905 2 DEBUG oslo_concurrency.lockutils [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.905 2 DEBUG nova.network.neutron [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Refreshing network info cache for port 432c69dd-fb1b-432b-b867-9fe29716430d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.905 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eeecfe0c-cd86-46f4-aaa6-e87aad7987aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.987 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2bdf6f44-3146-43ab-890d-ff51d0ab470e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.991 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1e4e3b1-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.991 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:19.992 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd1e4e3b1-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:19 np0005473739 kernel: tapd1e4e3b1-c0: entered promiscuous mode
Oct  7 10:14:19 np0005473739 nova_compute[259550]: 2025-10-07 14:14:19.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:19 np0005473739 NetworkManager[44949]: <info>  [1759846459.9980] manager: (tapd1e4e3b1-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/198)
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:20.002 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd1e4e3b1-c0, col_values=(('external_ids', {'iface-id': 'b45f6f84-7ca0-4e1b-b2cb-172425235762'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:20Z|00404|binding|INFO|Releasing lport b45f6f84-7ca0-4e1b-b2cb-172425235762 from this chassis (sb_readonly=0)
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:20.007 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d1e4e3b1-c955-4533-8a61-49b547446a5a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d1e4e3b1-c955-4533-8a61-49b547446a5a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:20.008 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c4afcff-d694-426f-a0ca-18bfb286161f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:20.010 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-d1e4e3b1-c955-4533-8a61-49b547446a5a
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/d1e4e3b1-c955-4533-8a61-49b547446a5a.pid.haproxy
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID d1e4e3b1-c955-4533-8a61-49b547446a5a
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:14:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:20.012 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'env', 'PROCESS_TAG=haproxy-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d1e4e3b1-c955-4533-8a61-49b547446a5a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.146 2 DEBUG nova.compute.manager [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.146 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.147 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.147 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.147 2 DEBUG nova.compute.manager [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.147 2 WARNING nova.compute.manager [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state active and task_state rebooting_hard.#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.149 2 DEBUG nova.compute.manager [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.149 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.150 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.151 2 DEBUG oslo_concurrency.lockutils [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.151 2 DEBUG nova.compute.manager [req-4b75f17b-1f9f-4a8f-835a-cf8a25c7c7c6 req-ec3c6add-c964-4b7e-ba24-04d915eaf0a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Processing event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:14:20 np0005473739 podman[317037]: 2025-10-07 14:14:20.508490445 +0000 UTC m=+0.070239576 container create e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  7 10:14:20 np0005473739 systemd[1]: Started libpod-conmon-e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c.scope.
Oct  7 10:14:20 np0005473739 podman[317037]: 2025-10-07 14:14:20.464586375 +0000 UTC m=+0.026335316 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:14:20 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:14:20 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ac2cf9d455103e8b5a6ebb536445b2ec9cea0901a75ddb51351064b3aa652499/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.582 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.584 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846460.582082, 6be00afd-ab65-48db-a575-23a285419e60 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.584 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] VM Started (Lifecycle Event)#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.588 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.593 2 INFO nova.virt.libvirt.driver [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Instance spawned successfully.#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.593 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:14:20 np0005473739 podman[317037]: 2025-10-07 14:14:20.6006612 +0000 UTC m=+0.162410141 container init e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:14:20 np0005473739 podman[317037]: 2025-10-07 14:14:20.608779944 +0000 UTC m=+0.170528855 container start e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.610 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.619 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.626 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.627 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.628 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.628 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.629 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.629 2 DEBUG nova.virt.libvirt.driver [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:14:20 np0005473739 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [NOTICE]   (317055) : New worker (317057) forked
Oct  7 10:14:20 np0005473739 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [NOTICE]   (317055) : Loading success.
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.651 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.652 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846460.5823793, 6be00afd-ab65-48db-a575-23a285419e60 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.652 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.676 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.680 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846460.586493, 6be00afd-ab65-48db-a575-23a285419e60 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.680 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.707 2 INFO nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Took 10.25 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.707 2 DEBUG nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.709 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.715 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.748 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.773 2 INFO nova.compute.manager [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Took 11.30 seconds to build instance.#033[00m
Oct  7 10:14:20 np0005473739 nova_compute[259550]: 2025-10-07 14:14:20.794 2 DEBUG oslo_concurrency.lockutils [None req-cb8f3ee1-c926-47c8-b2b2-07ca859d5204 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.283 2 DEBUG nova.network.neutron [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Updating instance_info_cache with network_info: [{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.303 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Releasing lock "refresh_cache-f95ef1e3-b526-4516-a982-e1e415c5a657" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.306 2 DEBUG nova.compute.manager [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:21 np0005473739 kernel: tapfa2c6ead-0b (unregistering): left promiscuous mode
Oct  7 10:14:21 np0005473739 NetworkManager[44949]: <info>  [1759846461.4696] device (tapfa2c6ead-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:14:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:21Z|00405|binding|INFO|Releasing lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 from this chassis (sb_readonly=0)
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:21Z|00406|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 down in Southbound
Oct  7 10:14:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:21Z|00407|binding|INFO|Removing iface tapfa2c6ead-0b ovn-installed in OVS
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.491 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:21:7c 10.100.0.10'], port_security=['fa:16:3e:ed:21:7c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f95ef1e3-b526-4516-a982-e1e415c5a657', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '304ddc5a55af455ba608d37c37f217aa', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5e1b758c-b736-4703-bebe-5a0d70f8524d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d731c88f-69f4-4c5d-9133-965e19cd35d1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.493 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 in datapath 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b unbound from our chassis#033[00m
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.494 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.496 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78ccb436-dfb7-4714-803d-f61cab7ac6f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.498 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b namespace which is not needed anymore#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:21 np0005473739 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct  7 10:14:21 np0005473739 systemd[1]: machine-qemu\x2d58\x2dinstance\x2d00000033.scope: Consumed 4.182s CPU time.
Oct  7 10:14:21 np0005473739 systemd-machined[214580]: Machine qemu-58-instance-00000033 terminated.
Oct  7 10:14:21 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [NOTICE]   (316825) : haproxy version is 2.8.14-c23fe91
Oct  7 10:14:21 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [NOTICE]   (316825) : path to executable is /usr/sbin/haproxy
Oct  7 10:14:21 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [WARNING]  (316825) : Exiting Master process...
Oct  7 10:14:21 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [WARNING]  (316825) : Exiting Master process...
Oct  7 10:14:21 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [ALERT]    (316825) : Current worker (316827) exited with code 143 (Terminated)
Oct  7 10:14:21 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[316820]: [WARNING]  (316825) : All workers exited. Exiting... (0)
Oct  7 10:14:21 np0005473739 systemd[1]: libpod-77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03.scope: Deactivated successfully.
Oct  7 10:14:21 np0005473739 conmon[316820]: conmon 77156c139be9147324e9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03.scope/container/memory.events
Oct  7 10:14:21 np0005473739 podman[317086]: 2025-10-07 14:14:21.658884685 +0000 UTC m=+0.053912866 container died 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.658 2 INFO nova.virt.libvirt.driver [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance destroyed successfully.#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.659 2 DEBUG nova.objects.instance [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'resources' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.673 2 DEBUG nova.virt.libvirt.vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:21Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.674 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.675 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.676 2 DEBUG os_vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa2c6ead-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:14:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03-userdata-shm.mount: Deactivated successfully.
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.695 2 INFO os_vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b')#033[00m
Oct  7 10:14:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b633774cda3ee70e43501224cdb7411d4938ebe0f67121f0bf818e12aae2f79e-merged.mount: Deactivated successfully.
Oct  7 10:14:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1493: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.2 MiB/s wr, 265 op/s
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.714 2 DEBUG nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start _get_guest_xml network_info=[{"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:14:21 np0005473739 podman[317086]: 2025-10-07 14:14:21.720559894 +0000 UTC m=+0.115588075 container cleanup 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.721 2 WARNING nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.731 2 DEBUG nova.virt.libvirt.host [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.732 2 DEBUG nova.virt.libvirt.host [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.735 2 DEBUG nova.virt.libvirt.host [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.736 2 DEBUG nova.virt.libvirt.host [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.737 2 DEBUG nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.737 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.737 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.738 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.738 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.738 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.738 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.739 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.739 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.739 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.740 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.740 2 DEBUG nova.virt.hardware [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.740 2 DEBUG nova.objects.instance [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'vcpu_model' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:21 np0005473739 systemd[1]: libpod-conmon-77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03.scope: Deactivated successfully.
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.756 2 DEBUG oslo_concurrency.processutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:21 np0005473739 podman[317124]: 2025-10-07 14:14:21.805909858 +0000 UTC m=+0.059020780 container remove 77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.813 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7b6e81ce-9e60-4c04-8ca9-ead4a48e7934]: (4, ('Tue Oct  7 02:14:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b (77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03)\n77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03\nTue Oct  7 02:14:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b (77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03)\n77156c139be9147324e953b9aac99f7456347f0f0167f184f9dd39686d493b03\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.814 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[89f4f8ac-fd49-49ee-ae64-40eea671ef89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.815 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ce7b2cd-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:21 np0005473739 kernel: tap4ce7b2cd-50: left promiscuous mode
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.822 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f2c86af8-ed23-4f77-b94a-601f332b2257]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:21 np0005473739 nova_compute[259550]: 2025-10-07 14:14:21.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c59d42c9-e23b-4de1-89b0-df36d7ecd33b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.850 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd478458-f614-4194-aeef-df23d2abbbf0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.868 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[778aef4f-f182-40da-9b1c-8cbfa6aa3b4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704029, 'reachable_time': 34026, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317139, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:21 np0005473739 systemd[1]: run-netns-ovnmeta\x2d4ce7b2cd\x2d57c6\x2d43a1\x2db302\x2da9e47c2a613b.mount: Deactivated successfully.
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.874 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:14:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:21.874 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[ed7e419e-f35a-46f9-854a-2c7685e2bd27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.015 2 DEBUG nova.network.neutron [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updated VIF entry in instance network info cache for port 432c69dd-fb1b-432b-b867-9fe29716430d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.016 2 DEBUG nova.network.neutron [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updating instance_info_cache with network_info: [{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.033 2 DEBUG oslo_concurrency.lockutils [req-45a4fd40-6368-4dca-95ef-83b010eadfca req-6efdd194-784c-4ebf-8dbd-857225d9a0de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.223 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.224 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.226 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.226 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.227 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] No waiting events found dispatching network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.227 2 WARNING nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received unexpected event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e for instance with vm_state active and task_state None.#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.227 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.227 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.228 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.228 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.228 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.228 2 WARNING nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.229 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.229 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.229 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.230 2 DEBUG oslo_concurrency.lockutils [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.230 2 DEBUG nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.230 2 WARNING nova.compute.manager [req-b61acb0a-d37e-490b-9afb-5caae82baad9 req-9235d1e5-43e6-4d36-bade-b2538d5f2c9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  7 10:14:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:14:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/551195547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.276 2 DEBUG oslo_concurrency.processutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.309 2 DEBUG oslo_concurrency.processutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:22Z|00052|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:65:98:6b 10.100.0.13
Oct  7 10:14:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:22Z|00053|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:65:98:6b 10.100.0.13
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:14:22
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'images', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.meta', 'volumes', 'backups', 'default.rgw.log']
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:14:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:14:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/978359750' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.869 2 DEBUG oslo_concurrency.processutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.873 2 DEBUG nova.virt.libvirt.vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:21Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.874 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.876 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.879 2 DEBUG nova.objects.instance [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'pci_devices' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:14:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.907 2 DEBUG nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  <uuid>f95ef1e3-b526-4516-a982-e1e415c5a657</uuid>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  <name>instance-00000033</name>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <nova:name>tempest-InstanceActionsTestJSON-server-1758004192</nova:name>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:14:21</nova:creationTime>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <nova:user uuid="fae6fe12a4234cd28439c010bdf3e497">tempest-InstanceActionsTestJSON-550730255-project-member</nova:user>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <nova:project uuid="304ddc5a55af455ba608d37c37f217aa">tempest-InstanceActionsTestJSON-550730255</nova:project>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <nova:port uuid="fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <entry name="serial">f95ef1e3-b526-4516-a982-e1e415c5a657</entry>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <entry name="uuid">f95ef1e3-b526-4516-a982-e1e415c5a657</entry>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f95ef1e3-b526-4516-a982-e1e415c5a657_disk">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f95ef1e3-b526-4516-a982-e1e415c5a657_disk.config">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:ed:21:7c"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <target dev="tapfa2c6ead-0b"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657/console.log" append="off"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <input type="keyboard" bus="usb"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:14:22 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:14:22 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:14:22 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:14:22 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.921 2 DEBUG nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.924 2 DEBUG nova.virt.libvirt.driver [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] skipping disk for instance-00000033 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.937 2 DEBUG nova.virt.libvirt.vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='vir
tio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:21Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.941 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.948 2 DEBUG nova.network.os_vif_util [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.954 2 DEBUG os_vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.956 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.957 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.960 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa2c6ead-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.961 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfa2c6ead-0b, col_values=(('external_ids', {'iface-id': 'fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ed:21:7c', 'vm-uuid': 'f95ef1e3-b526-4516-a982-e1e415c5a657'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:22 np0005473739 NetworkManager[44949]: <info>  [1759846462.9642] manager: (tapfa2c6ead-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/199)
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.968 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:22 np0005473739 nova_compute[259550]: 2025-10-07 14:14:22.969 2 INFO os_vif [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b')#033[00m
Oct  7 10:14:23 np0005473739 kernel: tapfa2c6ead-0b: entered promiscuous mode
Oct  7 10:14:23 np0005473739 NetworkManager[44949]: <info>  [1759846463.0764] manager: (tapfa2c6ead-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/200)
Oct  7 10:14:23 np0005473739 nova_compute[259550]: 2025-10-07 14:14:23.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:23Z|00408|binding|INFO|Claiming lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for this chassis.
Oct  7 10:14:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:23Z|00409|binding|INFO|fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598: Claiming fa:16:3e:ed:21:7c 10.100.0.10
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.085 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:21:7c 10.100.0.10'], port_security=['fa:16:3e:ed:21:7c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f95ef1e3-b526-4516-a982-e1e415c5a657', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '304ddc5a55af455ba608d37c37f217aa', 'neutron:revision_number': '5', 'neutron:security_group_ids': '5e1b758c-b736-4703-bebe-5a0d70f8524d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d731c88f-69f4-4c5d-9133-965e19cd35d1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.086 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 in datapath 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b bound to our chassis#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.087 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.102 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[073aef83-106a-4710-97ba-5972b97170bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.103 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4ce7b2cd-51 in ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.104 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4ce7b2cd-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.104 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2e7fee45-dc7e-49f3-bcda-6894def9e6fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.106 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67114146-18ad-47c8-8991-4bb1c7581750]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:23Z|00410|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 ovn-installed in OVS
Oct  7 10:14:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:23Z|00411|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 up in Southbound
Oct  7 10:14:23 np0005473739 nova_compute[259550]: 2025-10-07 14:14:23.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:23 np0005473739 nova_compute[259550]: 2025-10-07 14:14:23.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.122 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f2595379-44a8-4259-b073-70103b10c37b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 systemd-machined[214580]: New machine qemu-60-instance-00000033.
Oct  7 10:14:23 np0005473739 systemd[1]: Started Virtual Machine qemu-60-instance-00000033.
Oct  7 10:14:23 np0005473739 systemd-udevd[317219]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.147 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e445436b-6b9d-4d2e-a6ac-58c463d2bf67]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 NetworkManager[44949]: <info>  [1759846463.1591] device (tapfa2c6ead-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:14:23 np0005473739 NetworkManager[44949]: <info>  [1759846463.1600] device (tapfa2c6ead-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.196 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5217ba52-08b5-4222-822e-7a5884c20155]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 systemd-udevd[317222]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.206 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[18e72750-8bbf-41a8-95fd-e36abd230e87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 NetworkManager[44949]: <info>  [1759846463.2080] manager: (tap4ce7b2cd-50): new Veth device (/org/freedesktop/NetworkManager/Devices/201)
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.254 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aaaf166d-f21b-4e8e-b921-cde6bf947e76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.257 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[03208380-1f1f-486c-a220-a75c6acaf1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 NetworkManager[44949]: <info>  [1759846463.2801] device (tap4ce7b2cd-50): carrier: link connected
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.285 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ed38bcec-99fb-4365-becc-5067929b4d23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.310 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5408251f-265b-427d-baf7-333d2f4ac875]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ce7b2cd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:bc:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704684, 'reachable_time': 44276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317249, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.329 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[613eaadb-86b2-4af2-b9ea-c9688417e5b8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec8:bcce'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 704684, 'tstamp': 704684}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317250, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.346 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0bc9c30c-8598-4b9f-9697-cb2ba8596869]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4ce7b2cd-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c8:bc:ce'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 130], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704684, 'reachable_time': 44276, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 317251, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[210dc2c0-689a-464d-ad49-1ca725dd8299]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.443 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c998a8ea-af2d-4d02-b09c-35c30418aa64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.445 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ce7b2cd-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.445 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.446 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4ce7b2cd-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:23 np0005473739 NetworkManager[44949]: <info>  [1759846463.4484] manager: (tap4ce7b2cd-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/202)
Oct  7 10:14:23 np0005473739 kernel: tap4ce7b2cd-50: entered promiscuous mode
Oct  7 10:14:23 np0005473739 nova_compute[259550]: 2025-10-07 14:14:23.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.451 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4ce7b2cd-50, col_values=(('external_ids', {'iface-id': '1909b8a8-f403-4e60-bcbd-3a0491b233a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:23 np0005473739 nova_compute[259550]: 2025-10-07 14:14:23.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:23Z|00412|binding|INFO|Releasing lport 1909b8a8-f403-4e60-bcbd-3a0491b233a9 from this chassis (sb_readonly=0)
Oct  7 10:14:23 np0005473739 nova_compute[259550]: 2025-10-07 14:14:23.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.473 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.474 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4e7fc1-3ab0-4161-bec5-a1546207d94c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.475 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.pid.haproxy
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:14:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:23.477 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'env', 'PROCESS_TAG=haproxy-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4ce7b2cd-57c6-43a1-b302-a9e47c2a613b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:14:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1494: 305 pgs: 305 active+clean; 227 MiB data, 520 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 2.5 MiB/s wr, 207 op/s
Oct  7 10:14:23 np0005473739 podman[317282]: 2025-10-07 14:14:23.877356399 +0000 UTC m=+0.059582764 container create 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:14:23 np0005473739 systemd[1]: Started libpod-conmon-5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9.scope.
Oct  7 10:14:23 np0005473739 podman[317282]: 2025-10-07 14:14:23.843851164 +0000 UTC m=+0.026077579 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:14:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:14:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/04179669fb3fb9cd6332de7447c41e0390e38c28e1c7b71bc0b58cfa47c41ae7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:14:23 np0005473739 podman[317282]: 2025-10-07 14:14:23.975396039 +0000 UTC m=+0.157622444 container init 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  7 10:14:23 np0005473739 podman[317282]: 2025-10-07 14:14:23.981893411 +0000 UTC m=+0.164119806 container start 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:14:24 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [NOTICE]   (317300) : New worker (317302) forked
Oct  7 10:14:24 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [NOTICE]   (317300) : Loading success.
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.058 2 DEBUG nova.compute.manager [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-changed-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.059 2 DEBUG nova.compute.manager [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Refreshing instance network info cache due to event network-changed-e4ca2e16-9053-460a-9aee-cd724306083e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.059 2 DEBUG oslo_concurrency.lockutils [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.060 2 DEBUG oslo_concurrency.lockutils [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.060 2 DEBUG nova.network.neutron [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Refreshing network info cache for port e4ca2e16-9053-460a-9aee-cd724306083e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.291 2 DEBUG nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.292 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.293 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.293 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.293 2 DEBUG nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.294 2 WARNING nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.294 2 DEBUG nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.294 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.295 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.295 2 DEBUG oslo_concurrency.lockutils [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.296 2 DEBUG nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.296 2 WARNING nova.compute.manager [req-a99113d8-0d82-4641-8bb2-50fc220c0ea4 req-9591bdc8-c363-4d8b-9731-1bc4595664b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state active and task_state reboot_started_hard.#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.966 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for f95ef1e3-b526-4516-a982-e1e415c5a657 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.968 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846464.9658492, f95ef1e3-b526-4516-a982-e1e415c5a657 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.969 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.971 2 DEBUG nova.compute.manager [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.975 2 INFO nova.virt.libvirt.driver [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance rebooted successfully.#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.976 2 DEBUG nova.compute.manager [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.990 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:24 np0005473739 nova_compute[259550]: 2025-10-07 14:14:24.994 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:14:25 np0005473739 nova_compute[259550]: 2025-10-07 14:14:25.025 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.#033[00m
Oct  7 10:14:25 np0005473739 nova_compute[259550]: 2025-10-07 14:14:25.026 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846464.9670968, f95ef1e3-b526-4516-a982-e1e415c5a657 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:25 np0005473739 nova_compute[259550]: 2025-10-07 14:14:25.026 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Started (Lifecycle Event)#033[00m
Oct  7 10:14:25 np0005473739 nova_compute[259550]: 2025-10-07 14:14:25.048 2 DEBUG oslo_concurrency.lockutils [None req-816940bb-4ed1-4b55-b76b-b10c23dcb0a1 fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 5.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:25 np0005473739 nova_compute[259550]: 2025-10-07 14:14:25.051 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:25 np0005473739 nova_compute[259550]: 2025-10-07 14:14:25.055 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:14:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1495: 305 pgs: 305 active+clean; 249 MiB data, 533 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 3.7 MiB/s wr, 330 op/s
Oct  7 10:14:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:26 np0005473739 nova_compute[259550]: 2025-10-07 14:14:26.069 2 DEBUG nova.network.neutron [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updated VIF entry in instance network info cache for port e4ca2e16-9053-460a-9aee-cd724306083e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:14:26 np0005473739 nova_compute[259550]: 2025-10-07 14:14:26.070 2 DEBUG nova.network.neutron [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updating instance_info_cache with network_info: [{"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:26 np0005473739 nova_compute[259550]: 2025-10-07 14:14:26.089 2 DEBUG oslo_concurrency.lockutils [req-739d19ca-204b-49b6-aad1-c731da077e18 req-7e4ae25a-b036-46cd-9fa5-6d297f8ddac4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6be00afd-ab65-48db-a575-23a285419e60" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:26 np0005473739 nova_compute[259550]: 2025-10-07 14:14:26.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.059 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.061 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.062 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.062 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.063 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.066 2 INFO nova.compute.manager [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Terminating instance#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.068 2 DEBUG nova.compute.manager [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:14:27 np0005473739 kernel: tapfa2c6ead-0b (unregistering): left promiscuous mode
Oct  7 10:14:27 np0005473739 NetworkManager[44949]: <info>  [1759846467.1264] device (tapfa2c6ead-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:14:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:27Z|00413|binding|INFO|Releasing lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 from this chassis (sb_readonly=0)
Oct  7 10:14:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:27Z|00414|binding|INFO|Setting lport fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 down in Southbound
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:27Z|00415|binding|INFO|Removing iface tapfa2c6ead-0b ovn-installed in OVS
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:27 np0005473739 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000033.scope: Deactivated successfully.
Oct  7 10:14:27 np0005473739 systemd[1]: machine-qemu\x2d60\x2dinstance\x2d00000033.scope: Consumed 3.830s CPU time.
Oct  7 10:14:27 np0005473739 systemd-machined[214580]: Machine qemu-60-instance-00000033 terminated.
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.238 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ed:21:7c 10.100.0.10'], port_security=['fa:16:3e:ed:21:7c 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f95ef1e3-b526-4516-a982-e1e415c5a657', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '304ddc5a55af455ba608d37c37f217aa', 'neutron:revision_number': '6', 'neutron:security_group_ids': '5e1b758c-b736-4703-bebe-5a0d70f8524d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d731c88f-69f4-4c5d-9133-965e19cd35d1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.240 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 in datapath 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b unbound from our chassis#033[00m
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.242 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.244 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa3ae83f-8b1e-493d-8e5a-06dfcc8339ff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.245 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b namespace which is not needed anymore#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.310 2 INFO nova.virt.libvirt.driver [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Instance destroyed successfully.#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.310 2 DEBUG nova.objects.instance [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lazy-loading 'resources' on Instance uuid f95ef1e3-b526-4516-a982-e1e415c5a657 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.326 2 DEBUG nova.virt.libvirt.vif [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsTestJSON-server-1758004192',display_name='tempest-InstanceActionsTestJSON-server-1758004192',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionstestjson-server-1758004192',id=51,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='304ddc5a55af455ba608d37c37f217aa',ramdisk_id='',reservation_id='r-79xwgd02',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsTestJSON-550730255',owner_user_name='tempest-InstanceActionsTestJSON-550730255-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:25Z,user_data=None,user_id='fae6fe12a4234cd28439c010bdf3e497',uuid=f95ef1e3-b526-4516-a982-e1e415c5a657,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.326 2 DEBUG nova.network.os_vif_util [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converting VIF {"id": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "address": "fa:16:3e:ed:21:7c", "network": {"id": "4ce7b2cd-57c6-43a1-b302-a9e47c2a613b", "bridge": "br-int", "label": "tempest-InstanceActionsTestJSON-51513385-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "304ddc5a55af455ba608d37c37f217aa", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfa2c6ead-0b", "ovs_interfaceid": "fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.327 2 DEBUG nova.network.os_vif_util [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.327 2 DEBUG os_vif [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.333 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa2c6ead-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.340 2 INFO os_vif [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:ed:21:7c,bridge_name='br-int',has_traffic_filtering=True,id=fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598,network=Network(4ce7b2cd-57c6-43a1-b302-a9e47c2a613b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfa2c6ead-0b')#033[00m
Oct  7 10:14:27 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [NOTICE]   (317300) : haproxy version is 2.8.14-c23fe91
Oct  7 10:14:27 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [NOTICE]   (317300) : path to executable is /usr/sbin/haproxy
Oct  7 10:14:27 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [WARNING]  (317300) : Exiting Master process...
Oct  7 10:14:27 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [ALERT]    (317300) : Current worker (317302) exited with code 143 (Terminated)
Oct  7 10:14:27 np0005473739 neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b[317296]: [WARNING]  (317300) : All workers exited. Exiting... (0)
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.450 2 DEBUG nova.compute.manager [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.450 2 DEBUG oslo_concurrency.lockutils [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.450 2 DEBUG oslo_concurrency.lockutils [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.451 2 DEBUG oslo_concurrency.lockutils [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.452 2 DEBUG nova.compute.manager [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:27 np0005473739 systemd[1]: libpod-5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9.scope: Deactivated successfully.
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.452 2 DEBUG nova.compute.manager [req-4348f83c-03b6-4895-9e5c-fd052671fd29 req-068e69f5-2713-44ab-88c7-4744534baca9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-unplugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:14:27 np0005473739 podman[317400]: 2025-10-07 14:14:27.457627288 +0000 UTC m=+0.070058881 container died 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  7 10:14:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9-userdata-shm.mount: Deactivated successfully.
Oct  7 10:14:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-04179669fb3fb9cd6332de7447c41e0390e38c28e1c7b71bc0b58cfa47c41ae7-merged.mount: Deactivated successfully.
Oct  7 10:14:27 np0005473739 podman[317400]: 2025-10-07 14:14:27.590965681 +0000 UTC m=+0.203397274 container cleanup 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:14:27 np0005473739 systemd[1]: libpod-conmon-5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9.scope: Deactivated successfully.
Oct  7 10:14:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1496: 305 pgs: 305 active+clean; 260 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 2.4 MiB/s wr, 285 op/s
Oct  7 10:14:27 np0005473739 podman[317433]: 2025-10-07 14:14:27.712393899 +0000 UTC m=+0.089372773 container remove 5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.726 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c736fc3-0bae-4aa2-9dc0-2a037c6b288d]: (4, ('Tue Oct  7 02:14:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b (5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9)\n5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9\nTue Oct  7 02:14:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b (5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9)\n5350f960e05702da1b352653cde51c4fe1d2d0bc1368de896d5b6796c415cde9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.729 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[659b73fb-c468-4894-89e1-030b173d2404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.730 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4ce7b2cd-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:27 np0005473739 kernel: tap4ce7b2cd-50: left promiscuous mode
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.787 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7847f353-ea22-44e2-8ed7-c7f0acceb660]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.822 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b7caafb6-bad1-45fc-af11-1c6c0d2eadc7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.824 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[39d42572-3e14-45c2-8baf-0219dfc2ec09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.846 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ace66bc4-43a5-48ac-88c2-e0f133b7a046]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704674, 'reachable_time': 25137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317448, 'error': None, 'target': 'ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:27 np0005473739 systemd[1]: run-netns-ovnmeta\x2d4ce7b2cd\x2d57c6\x2d43a1\x2db302\x2da9e47c2a613b.mount: Deactivated successfully.
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.852 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4ce7b2cd-57c6-43a1-b302-a9e47c2a613b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:14:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:27.853 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a38774e4-172a-459b-8b4a-b885742c34ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:14:27 np0005473739 nova_compute[259550]: 2025-10-07 14:14:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.008 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.009 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.009 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.010 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.011 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.174 2 INFO nova.virt.libvirt.driver [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Deleting instance files /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657_del#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.175 2 INFO nova.virt.libvirt.driver [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Deletion of /var/lib/nova/instances/f95ef1e3-b526-4516-a982-e1e415c5a657_del complete#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.259 2 INFO nova.compute.manager [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Took 1.19 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.260 2 DEBUG oslo.service.loopingcall [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.260 2 DEBUG nova.compute.manager [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.261 2 DEBUG nova.network.neutron [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:14:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:14:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3447623833' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.503 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:28 np0005473739 podman[317473]: 2025-10-07 14:14:28.63497135 +0000 UTC m=+0.069797615 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 10:14:28 np0005473739 podman[317472]: 2025-10-07 14:14:28.646919916 +0000 UTC m=+0.085096449 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.768 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.768 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000034 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.771 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.772 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.775 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:14:28 np0005473739 nova_compute[259550]: 2025-10-07 14:14:28.775 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000031 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.005 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.006 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3686MB free_disk=59.88011932373047GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.224 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance eb8777f3-5daa-49c7-8994-687012f20453 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.225 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance fc163bed-856c-4ea5-9bf3-6989fb1027eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.225 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance f95ef1e3-b526-4516-a982-e1e415c5a657 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.225 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 6be00afd-ab65-48db-a575-23a285419e60 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.225 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.226 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.344 2 DEBUG nova.network.neutron [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.370 2 INFO nova.compute.manager [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Took 1.11 seconds to deallocate network for instance.#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.382 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.438 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.440 2 DEBUG nova.compute.manager [req-e733b6c7-5640-4761-815a-905bad7f9b1f req-8499bdd9-cddd-4600-a5d6-b6fdfba21874 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-deleted-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.518 2 DEBUG nova.compute.manager [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.519 2 DEBUG oslo_concurrency.lockutils [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.519 2 DEBUG oslo_concurrency.lockutils [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.520 2 DEBUG oslo_concurrency.lockutils [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.520 2 DEBUG nova.compute.manager [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] No waiting events found dispatching network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.520 2 WARNING nova.compute.manager [req-42e75314-f7ab-4cc2-ac5f-d77bd7463b13 req-2910127e-30fb-4ea6-839d-c2b8ad3017c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Received unexpected event network-vif-plugged-fa2c6ead-0bff-47b9-9cce-bbcbe6a4a598 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:14:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1497: 305 pgs: 305 active+clean; 250 MiB data, 536 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 3.7 MiB/s wr, 359 op/s
Oct  7 10:14:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:14:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2799586118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.885 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.892 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.913 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.935 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.936 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.929s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:29 np0005473739 nova_compute[259550]: 2025-10-07 14:14:29.937 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.499s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.018 2 DEBUG oslo_concurrency.processutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.185 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.186 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.187 2 DEBUG nova.objects.instance [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:30Z|00054|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a6:54:7c 10.100.0.7
Oct  7 10:14:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:30Z|00055|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a6:54:7c 10.100.0.7
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.221 2 DEBUG nova.objects.instance [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.326 2 DEBUG nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:14:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:14:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4020859017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.511 2 DEBUG oslo_concurrency.processutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.518 2 DEBUG nova.compute.provider_tree [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.539 2 DEBUG nova.scheduler.client.report [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.562 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.601 2 DEBUG nova.policy [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.605 2 INFO nova.scheduler.client.report [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Deleted allocations for instance f95ef1e3-b526-4516-a982-e1e415c5a657#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.668 2 DEBUG oslo_concurrency.lockutils [None req-a63f5df2-75a3-486d-b17d-b8e01f69377a fae6fe12a4234cd28439c010bdf3e497 304ddc5a55af455ba608d37c37f217aa - - default default] Lock "f95ef1e3-b526-4516-a982-e1e415c5a657" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.933 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.933 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.934 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:14:30 np0005473739 nova_compute[259550]: 2025-10-07 14:14:30.934 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:14:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:31 np0005473739 nova_compute[259550]: 2025-10-07 14:14:31.182 2 DEBUG nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully created port: 9a533309-4d4d-4458-9a27-3fe85361ab15 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:14:31 np0005473739 nova_compute[259550]: 2025-10-07 14:14:31.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1498: 305 pgs: 305 active+clean; 241 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 6.5 MiB/s rd, 4.2 MiB/s wr, 355 op/s
Oct  7 10:14:31 np0005473739 nova_compute[259550]: 2025-10-07 14:14:31.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:14:32 np0005473739 nova_compute[259550]: 2025-10-07 14:14:32.108 2 DEBUG nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully updated port: 9a533309-4d4d-4458-9a27-3fe85361ab15 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:14:32 np0005473739 nova_compute[259550]: 2025-10-07 14:14:32.127 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:32 np0005473739 nova_compute[259550]: 2025-10-07 14:14:32.127 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:32 np0005473739 nova_compute[259550]: 2025-10-07 14:14:32.127 2 DEBUG nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:14:32 np0005473739 nova_compute[259550]: 2025-10-07 14:14:32.182 2 DEBUG nova.compute.manager [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-changed-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:32 np0005473739 nova_compute[259550]: 2025-10-07 14:14:32.182 2 DEBUG nova.compute.manager [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing instance network info cache due to event network-changed-9a533309-4d4d-4458-9a27-3fe85361ab15. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:14:32 np0005473739 nova_compute[259550]: 2025-10-07 14:14:32.183 2 DEBUG oslo_concurrency.lockutils [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:32 np0005473739 nova_compute[259550]: 2025-10-07 14:14:32.294 2 WARNING nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it#033[00m
Oct  7 10:14:32 np0005473739 nova_compute[259550]: 2025-10-07 14:14:32.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0018551317192485512 of space, bias 1.0, pg target 0.5565395157745654 quantized to 32 (current 32)
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:14:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:14:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:14:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1271116766' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:14:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:14:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1271116766' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:14:32 np0005473739 nova_compute[259550]: 2025-10-07 14:14:32.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:14:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1499: 305 pgs: 305 active+clean; 241 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 4.8 MiB/s rd, 4.2 MiB/s wr, 293 op/s
Oct  7 10:14:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:34Z|00416|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct  7 10:14:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:34Z|00417|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:14:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:34Z|00418|binding|INFO|Releasing lport b45f6f84-7ca0-4e1b-b2cb-172425235762 from this chassis (sb_readonly=0)
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.207 2 DEBUG nova.network.neutron [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.228 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.229 2 DEBUG oslo_concurrency.lockutils [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.229 2 DEBUG nova.network.neutron [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing network info cache for port 9a533309-4d4d-4458-9a27-3fe85361ab15 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.233 2 DEBUG nova.virt.libvirt.vif [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.233 2 DEBUG nova.network.os_vif_util [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.234 2 DEBUG nova.network.os_vif_util [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.234 2 DEBUG os_vif [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.240 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9a533309-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9a533309-4d, col_values=(('external_ids', {'iface-id': '9a533309-4d4d-4458-9a27-3fe85361ab15', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2b:f6:d8', 'vm-uuid': 'eb8777f3-5daa-49c7-8994-687012f20453'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:34 np0005473739 NetworkManager[44949]: <info>  [1759846474.2437] manager: (tap9a533309-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/203)
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.252 2 INFO os_vif [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d')#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.253 2 DEBUG nova.virt.libvirt.vif [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.253 2 DEBUG nova.network.os_vif_util [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.254 2 DEBUG nova.network.os_vif_util [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.256 2 DEBUG nova.virt.libvirt.guest [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:2b:f6:d8"/>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <target dev="tap9a533309-4d"/>
Oct  7 10:14:34 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:14:34 np0005473739 nova_compute[259550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  7 10:14:34 np0005473739 kernel: tap9a533309-4d: entered promiscuous mode
Oct  7 10:14:34 np0005473739 NetworkManager[44949]: <info>  [1759846474.2705] manager: (tap9a533309-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/204)
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:34Z|00419|binding|INFO|Claiming lport 9a533309-4d4d-4458-9a27-3fe85361ab15 for this chassis.
Oct  7 10:14:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:34Z|00420|binding|INFO|9a533309-4d4d-4458-9a27-3fe85361ab15: Claiming fa:16:3e:2b:f6:d8 10.100.0.8
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.279 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:f6:d8 10.100.0.8'], port_security=['fa:16:3e:2b:f6:d8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9a533309-4d4d-4458-9a27-3fe85361ab15) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.280 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9a533309-4d4d-4458-9a27-3fe85361ab15 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.281 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:34Z|00421|binding|INFO|Setting lport 9a533309-4d4d-4458-9a27-3fe85361ab15 ovn-installed in OVS
Oct  7 10:14:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:34Z|00422|binding|INFO|Setting lport 9a533309-4d4d-4458-9a27-3fe85361ab15 up in Southbound
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.301 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8cff8175-8382-40f3-8993-232567652005]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:34 np0005473739 systemd-udevd[317564]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:14:34 np0005473739 NetworkManager[44949]: <info>  [1759846474.3338] device (tap9a533309-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:14:34 np0005473739 NetworkManager[44949]: <info>  [1759846474.3346] device (tap9a533309-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.355 2 DEBUG nova.virt.libvirt.driver [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.357 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eeddfa2a-5c03-4ecb-976e-7e5a14c20a10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.356 2 DEBUG nova.virt.libvirt.driver [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.357 2 DEBUG nova.virt.libvirt.driver [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:65:98:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.357 2 DEBUG nova.virt.libvirt.driver [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:2b:f6:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.362 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a7f2c37d-e8e1-46ff-8146-dcae0c66e133]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.388 2 DEBUG nova.virt.libvirt.guest [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:14:34</nova:creationTime>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:14:34 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:    <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct  7 10:14:34 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:    <nova:port uuid="9a533309-4d4d-4458-9a27-3fe85361ab15">
Oct  7 10:14:34 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:14:34 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:14:34 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:14:34 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.399 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[27472cda-29a2-4c5f-9375-921a938f3153]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.417 2 DEBUG oslo_concurrency.lockutils [None req-03fc00b2-e2b0-4d8d-ba12-36b2a8eb3827 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 4.231s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.421 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1024c68d-5442-4fe3-9f9d-c778d9442bdd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317571, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.441 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92ce3109-53d0-44b5-8134-555f1af42f93]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317572, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317572, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.443 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.447 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.447 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.447 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:34.448 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.603 2 DEBUG nova.compute.manager [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.603 2 DEBUG oslo_concurrency.lockutils [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.604 2 DEBUG oslo_concurrency.lockutils [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.604 2 DEBUG oslo_concurrency.lockutils [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.604 2 DEBUG nova.compute.manager [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:34 np0005473739 nova_compute[259550]: 2025-10-07 14:14:34.604 2 WARNING nova.compute.manager [req-e6e2cbba-9248-4bb1-8c08-3cb93c653878 req-560935ee-b64a-4a6c-8592-997ada56cb31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:14:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:35Z|00056|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:78:e6 10.100.0.11
Oct  7 10:14:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:35Z|00057|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:78:e6 10.100.0.11
Oct  7 10:14:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:35Z|00058|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:2b:f6:d8 10.100.0.8
Oct  7 10:14:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:35Z|00059|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:2b:f6:d8 10.100.0.8
Oct  7 10:14:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1500: 305 pgs: 305 active+clean; 258 MiB data, 562 MiB used, 59 GiB / 60 GiB avail; 5.2 MiB/s rd, 5.5 MiB/s wr, 349 op/s
Oct  7 10:14:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.343 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.344 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.344 2 DEBUG nova.objects.instance [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.748 2 DEBUG nova.compute.manager [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.748 2 DEBUG oslo_concurrency.lockutils [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.748 2 DEBUG oslo_concurrency.lockutils [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.749 2 DEBUG oslo_concurrency.lockutils [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.749 2 DEBUG nova.compute.manager [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.749 2 WARNING nova.compute.manager [req-3ce900a3-5a72-4e22-8e45-f29adc9efe5a req-e8a3f335-8eaa-428a-885b-b6355e656395 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.784 2 DEBUG nova.network.neutron [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updated VIF entry in instance network info cache for port 9a533309-4d4d-4458-9a27-3fe85361ab15. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.784 2 DEBUG nova.network.neutron [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:36 np0005473739 nova_compute[259550]: 2025-10-07 14:14:36.810 2 DEBUG oslo_concurrency.lockutils [req-d60f7624-875f-4275-a6d9-533b3f4894ca req-1761421a-f50a-4635-b585-bd25bf76aa49 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1501: 305 pgs: 305 active+clean; 275 MiB data, 573 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 5.2 MiB/s wr, 241 op/s
Oct  7 10:14:37 np0005473739 nova_compute[259550]: 2025-10-07 14:14:37.870 2 DEBUG nova.objects.instance [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:37 np0005473739 nova_compute[259550]: 2025-10-07 14:14:37.924 2 DEBUG nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:14:37 np0005473739 nova_compute[259550]: 2025-10-07 14:14:37.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:14:37 np0005473739 nova_compute[259550]: 2025-10-07 14:14:37.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:14:38 np0005473739 nova_compute[259550]: 2025-10-07 14:14:38.021 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:14:38 np0005473739 nova_compute[259550]: 2025-10-07 14:14:38.872 2 DEBUG nova.policy [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:14:38 np0005473739 nova_compute[259550]: 2025-10-07 14:14:38.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:14:39 np0005473739 nova_compute[259550]: 2025-10-07 14:14:39.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:39 np0005473739 nova_compute[259550]: 2025-10-07 14:14:39.397 2 DEBUG nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully created port: 3e6d7010-f744-42e8-b831-8a1955357b14 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:14:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1502: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 4.3 MiB/s wr, 217 op/s
Oct  7 10:14:40 np0005473739 nova_compute[259550]: 2025-10-07 14:14:40.542 2 DEBUG nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully updated port: 3e6d7010-f744-42e8-b831-8a1955357b14 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:14:40 np0005473739 nova_compute[259550]: 2025-10-07 14:14:40.640 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:40 np0005473739 nova_compute[259550]: 2025-10-07 14:14:40.640 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:40 np0005473739 nova_compute[259550]: 2025-10-07 14:14:40.641 2 DEBUG nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:14:40 np0005473739 nova_compute[259550]: 2025-10-07 14:14:40.812 2 DEBUG nova.compute.manager [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-changed-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:40 np0005473739 nova_compute[259550]: 2025-10-07 14:14:40.812 2 DEBUG nova.compute.manager [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing instance network info cache due to event network-changed-3e6d7010-f744-42e8-b831-8a1955357b14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:14:40 np0005473739 nova_compute[259550]: 2025-10-07 14:14:40.813 2 DEBUG oslo_concurrency.lockutils [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:40 np0005473739 nova_compute[259550]: 2025-10-07 14:14:40.916 2 WARNING nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it#033[00m
Oct  7 10:14:40 np0005473739 nova_compute[259550]: 2025-10-07 14:14:40.917 2 WARNING nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it#033[00m
Oct  7 10:14:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:41 np0005473739 podman[317574]: 2025-10-07 14:14:41.093159007 +0000 UTC m=+0.070438742 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:14:41 np0005473739 podman[317575]: 2025-10-07 14:14:41.136413419 +0000 UTC m=+0.114423304 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  7 10:14:41 np0005473739 nova_compute[259550]: 2025-10-07 14:14:41.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1503: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.7 MiB/s wr, 142 op/s
Oct  7 10:14:42 np0005473739 nova_compute[259550]: 2025-10-07 14:14:42.308 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846467.307876, f95ef1e3-b526-4516-a982-e1e415c5a657 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:42 np0005473739 nova_compute[259550]: 2025-10-07 14:14:42.309 2 INFO nova.compute.manager [-] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:14:42 np0005473739 nova_compute[259550]: 2025-10-07 14:14:42.465 2 DEBUG nova.compute.manager [None req-ad368fdd-13e3-440c-816a-029123bfb55d - - - - - -] [instance: f95ef1e3-b526-4516-a982-e1e415c5a657] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1504: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 460 KiB/s rd, 2.2 MiB/s wr, 82 op/s
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.567 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.567 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.568 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.568 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.568 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.570 2 INFO nova.compute.manager [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Terminating instance#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.571 2 DEBUG nova.compute.manager [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:14:44 np0005473739 kernel: tape4ca2e16-90 (unregistering): left promiscuous mode
Oct  7 10:14:44 np0005473739 NetworkManager[44949]: <info>  [1759846484.6392] device (tape4ca2e16-90): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:14:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:44Z|00423|binding|INFO|Releasing lport e4ca2e16-9053-460a-9aee-cd724306083e from this chassis (sb_readonly=0)
Oct  7 10:14:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:44Z|00424|binding|INFO|Setting lport e4ca2e16-9053-460a-9aee-cd724306083e down in Southbound
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:44Z|00425|binding|INFO|Removing iface tape4ca2e16-90 ovn-installed in OVS
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:44 np0005473739 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Deactivated successfully.
Oct  7 10:14:44 np0005473739 systemd[1]: machine-qemu\x2d59\x2dinstance\x2d00000034.scope: Consumed 13.993s CPU time.
Oct  7 10:14:44 np0005473739 systemd-machined[214580]: Machine qemu-59-instance-00000034 terminated.
Oct  7 10:14:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:44Z|00426|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct  7 10:14:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:44Z|00427|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:14:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:44Z|00428|binding|INFO|Releasing lport b45f6f84-7ca0-4e1b-b2cb-172425235762 from this chassis (sb_readonly=0)
Oct  7 10:14:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:44.745 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:78:e6 10.100.0.11'], port_security=['fa:16:3e:17:78:e6 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '6be00afd-ab65-48db-a575-23a285419e60', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'daaa10f40e014711ba0819e5a5b251c7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad649100-f9d9-4b25-b07d-f2fec1bfad18', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.185'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=18c2539a-e2f1-4e57-8c40-f76b6e8c17e8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e4ca2e16-9053-460a-9aee-cd724306083e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:14:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:44.747 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e4ca2e16-9053-460a-9aee-cd724306083e in datapath d1e4e3b1-c955-4533-8a61-49b547446a5a unbound from our chassis#033[00m
Oct  7 10:14:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:44.749 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d1e4e3b1-c955-4533-8a61-49b547446a5a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:14:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:44.750 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[40c337fb-bdb6-46dd-b298-0de2bfb994c9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:44.751 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a namespace which is not needed anymore#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.814 2 INFO nova.virt.libvirt.driver [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Instance destroyed successfully.#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.815 2 DEBUG nova.objects.instance [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lazy-loading 'resources' on Instance uuid 6be00afd-ab65-48db-a575-23a285419e60 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.846 2 DEBUG nova.virt.libvirt.vif [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-749882943',display_name='tempest-ServersTestManualDisk-server-749882943',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-749882943',id=52,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCrm9pvHnuJnXcyCVc0QhU5t7gBB0n4d3wZnsQ891irt3BRxIDuply0Hxb2qV0GnCSguPcTHy9UPa8LT2jy6C6OH3oPbsOkrpY1oYyFQDRv9Qn+7DlyBAQdOnjMOuLm/AQ==',key_name='tempest-keypair-1254505334',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='daaa10f40e014711ba0819e5a5b251c7',ramdisk_id='',reservation_id='r-807vnzq7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-456247214',owner_user_name='tempest-ServersTestManualDisk-456247214-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:20Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='ed6b624fb80d4d4d9b897c788b614297',uuid=6be00afd-ab65-48db-a575-23a285419e60,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.847 2 DEBUG nova.network.os_vif_util [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converting VIF {"id": "e4ca2e16-9053-460a-9aee-cd724306083e", "address": "fa:16:3e:17:78:e6", "network": {"id": "d1e4e3b1-c955-4533-8a61-49b547446a5a", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-1437143641-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "daaa10f40e014711ba0819e5a5b251c7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape4ca2e16-90", "ovs_interfaceid": "e4ca2e16-9053-460a-9aee-cd724306083e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.847 2 DEBUG nova.network.os_vif_util [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.848 2 DEBUG os_vif [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.852 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape4ca2e16-90, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:44 np0005473739 nova_compute[259550]: 2025-10-07 14:14:44.859 2 INFO os_vif [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:17:78:e6,bridge_name='br-int',has_traffic_filtering=True,id=e4ca2e16-9053-460a-9aee-cd724306083e,network=Network(d1e4e3b1-c955-4533-8a61-49b547446a5a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape4ca2e16-90')#033[00m
Oct  7 10:14:44 np0005473739 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [NOTICE]   (317055) : haproxy version is 2.8.14-c23fe91
Oct  7 10:14:44 np0005473739 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [NOTICE]   (317055) : path to executable is /usr/sbin/haproxy
Oct  7 10:14:44 np0005473739 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [WARNING]  (317055) : Exiting Master process...
Oct  7 10:14:44 np0005473739 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [WARNING]  (317055) : Exiting Master process...
Oct  7 10:14:44 np0005473739 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [ALERT]    (317055) : Current worker (317057) exited with code 143 (Terminated)
Oct  7 10:14:44 np0005473739 neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a[317051]: [WARNING]  (317055) : All workers exited. Exiting... (0)
Oct  7 10:14:44 np0005473739 systemd[1]: libpod-e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c.scope: Deactivated successfully.
Oct  7 10:14:44 np0005473739 podman[317652]: 2025-10-07 14:14:44.92945076 +0000 UTC m=+0.055763355 container died e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:14:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c-userdata-shm.mount: Deactivated successfully.
Oct  7 10:14:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ac2cf9d455103e8b5a6ebb536445b2ec9cea0901a75ddb51351064b3aa652499-merged.mount: Deactivated successfully.
Oct  7 10:14:44 np0005473739 podman[317652]: 2025-10-07 14:14:44.980445236 +0000 UTC m=+0.106757831 container cleanup e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 10:14:44 np0005473739 systemd[1]: libpod-conmon-e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c.scope: Deactivated successfully.
Oct  7 10:14:45 np0005473739 podman[317699]: 2025-10-07 14:14:45.060783199 +0000 UTC m=+0.054626764 container remove e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  7 10:14:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.067 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30d35523-36b0-43ce-96c5-8a008f3bd7ef]: (4, ('Tue Oct  7 02:14:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a (e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c)\ne8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c\nTue Oct  7 02:14:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a (e8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c)\ne8b48fee7788931173dd19faa4a5fd132ef60e41b3fbd539d427a35ab7b81c3c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.068 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2e5ed4bf-6aaa-4263-80e8-85b1a0e2050f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.069 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1e4e3b1-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:45 np0005473739 kernel: tapd1e4e3b1-c0: left promiscuous mode
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.080 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fe8b36c8-dcea-4744-9742-72798750fa42]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.105 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a7a36b6d-5b47-4e89-ab96-58c1ed036408]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d88520c8-8a2d-498f-88aa-99fd57cd2609]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.129 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2a3c472-cbd8-468d-9848-92ae47ac0432]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 704323, 'reachable_time': 20075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317717, 'error': None, 'target': 'ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:45 np0005473739 systemd[1]: run-netns-ovnmeta\x2dd1e4e3b1\x2dc955\x2d4533\x2d8a61\x2d49b547446a5a.mount: Deactivated successfully.
Oct  7 10:14:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.136 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d1e4e3b1-c955-4533-8a61-49b547446a5a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:14:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:45.137 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[34e37f86-9dfe-4ba9-ad24-d28179a15d13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.221 2 DEBUG nova.compute.manager [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-unplugged-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.221 2 DEBUG oslo_concurrency.lockutils [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.222 2 DEBUG oslo_concurrency.lockutils [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.222 2 DEBUG oslo_concurrency.lockutils [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.222 2 DEBUG nova.compute.manager [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] No waiting events found dispatching network-vif-unplugged-e4ca2e16-9053-460a-9aee-cd724306083e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.222 2 DEBUG nova.compute.manager [req-62846d75-2894-4592-a27e-200bc66fceb3 req-5549939e-d2f4-4d7c-8ea4-352d43e7e6e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-unplugged-e4ca2e16-9053-460a-9aee-cd724306083e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.285 2 INFO nova.virt.libvirt.driver [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Deleting instance files /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60_del#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.286 2 INFO nova.virt.libvirt.driver [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Deletion of /var/lib/nova/instances/6be00afd-ab65-48db-a575-23a285419e60_del complete#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.332 2 INFO nova.compute.manager [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.332 2 DEBUG oslo.service.loopingcall [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.332 2 DEBUG nova.compute.manager [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:14:45 np0005473739 nova_compute[259550]: 2025-10-07 14:14:45.333 2 DEBUG nova.network.neutron [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:14:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1505: 305 pgs: 305 active+clean; 279 MiB data, 574 MiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 2.2 MiB/s wr, 83 op/s
Oct  7 10:14:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.188 2 DEBUG nova.network.neutron [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.211 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.213 2 DEBUG oslo_concurrency.lockutils [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.213 2 DEBUG nova.network.neutron [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing network info cache for port 3e6d7010-f744-42e8-b831-8a1955357b14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.216 2 DEBUG nova.virt.libvirt.vif [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.217 2 DEBUG nova.network.os_vif_util [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.217 2 DEBUG nova.network.os_vif_util [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.218 2 DEBUG os_vif [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.218 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.219 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3e6d7010-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3e6d7010-f7, col_values=(('external_ids', {'iface-id': '3e6d7010-f744-42e8-b831-8a1955357b14', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:02:a9', 'vm-uuid': 'eb8777f3-5daa-49c7-8994-687012f20453'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:46 np0005473739 NetworkManager[44949]: <info>  [1759846486.2241] manager: (tap3e6d7010-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/205)
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.230 2 INFO os_vif [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7')#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.231 2 DEBUG nova.virt.libvirt.vif [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.231 2 DEBUG nova.network.os_vif_util [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.232 2 DEBUG nova.network.os_vif_util [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.238 2 DEBUG nova.virt.libvirt.guest [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:04:02:a9"/>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <target dev="tap3e6d7010-f7"/>
Oct  7 10:14:46 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:14:46 np0005473739 nova_compute[259550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  7 10:14:46 np0005473739 systemd-udevd[317621]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:14:46 np0005473739 kernel: tap3e6d7010-f7: entered promiscuous mode
Oct  7 10:14:46 np0005473739 NetworkManager[44949]: <info>  [1759846486.2541] manager: (tap3e6d7010-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/206)
Oct  7 10:14:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:46Z|00429|binding|INFO|Claiming lport 3e6d7010-f744-42e8-b831-8a1955357b14 for this chassis.
Oct  7 10:14:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:46Z|00430|binding|INFO|3e6d7010-f744-42e8-b831-8a1955357b14: Claiming fa:16:3e:04:02:a9 10.100.0.6
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.264 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:02:a9 10.100.0.6'], port_security=['fa:16:3e:04:02:a9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3e6d7010-f744-42e8-b831-8a1955357b14) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.266 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3e6d7010-f744-42e8-b831-8a1955357b14 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis#033[00m
Oct  7 10:14:46 np0005473739 NetworkManager[44949]: <info>  [1759846486.2682] device (tap3e6d7010-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:14:46 np0005473739 NetworkManager[44949]: <info>  [1759846486.2699] device (tap3e6d7010-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.270 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:14:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:46Z|00431|binding|INFO|Setting lport 3e6d7010-f744-42e8-b831-8a1955357b14 ovn-installed in OVS
Oct  7 10:14:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:46Z|00432|binding|INFO|Setting lport 3e6d7010-f744-42e8-b831-8a1955357b14 up in Southbound
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.288 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7c629d33-8a7b-4baa-9412-94735b463a3e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.329 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5501ebf7-7867-4be7-a262-361262e4c430]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.333 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e03844cd-7ee0-47be-a4bc-fe5d548c22cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.340 2 DEBUG nova.virt.libvirt.driver [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.340 2 DEBUG nova.virt.libvirt.driver [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.340 2 DEBUG nova.virt.libvirt.driver [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:65:98:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.340 2 DEBUG nova.virt.libvirt.driver [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:2b:f6:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.341 2 DEBUG nova.virt.libvirt.driver [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:04:02:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.363 2 DEBUG nova.virt.libvirt.guest [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:14:46</nova:creationTime>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct  7 10:14:46 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    <nova:port uuid="9a533309-4d4d-4458-9a27-3fe85361ab15">
Oct  7 10:14:46 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct  7 10:14:46 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:14:46 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:14:46 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:14:46 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.374 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[518aea15-4ba6-456f-92ae-39d2e453da09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.394 2 DEBUG oslo_concurrency.lockutils [None req-4347dca0-ee93-4f5a-bac2-095a1fc769c2 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 10.050s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.395 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3ddafa99-9439-41b1-a77d-23e13cdc407f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317731, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d9163ab3-2b69-4c0f-a16d-1aa4ac9cffd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317732, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317732, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.417 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:46 np0005473739 nova_compute[259550]: 2025-10-07 14:14:46.419 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.420 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.420 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.420 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:14:46.421 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.102 2 DEBUG nova.network.neutron [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.123 2 INFO nova.compute.manager [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Took 1.79 seconds to deallocate network for instance.#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.168 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.169 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.198 2 DEBUG nova.compute.manager [req-78e4b816-98a1-4bde-a6f2-86fb97423fc0 req-b5ba95c4-9b4f-4579-916e-8b701d329f06 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-deleted-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.295 2 DEBUG oslo_concurrency.processutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.343 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.344 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6be00afd-ab65-48db-a575-23a285419e60-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.344 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.344 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.344 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] No waiting events found dispatching network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.344 2 WARNING nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Received unexpected event network-vif-plugged-e4ca2e16-9053-460a-9aee-cd724306083e for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.345 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.345 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.345 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.345 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.346 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.346 2 WARNING nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.346 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.346 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.347 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.347 2 DEBUG oslo_concurrency.lockutils [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.347 2 DEBUG nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.347 2 WARNING nova.compute.manager [req-a68ca1b8-58a1-4a79-b048-1e14a8e55de4 req-2e9db56c-6871-476c-bfb1-d74920675262 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:14:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1506: 305 pgs: 305 active+clean; 251 MiB data, 561 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 993 KiB/s wr, 33 op/s
Oct  7 10:14:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:14:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857963059' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.753 2 DEBUG oslo_concurrency.processutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.761 2 DEBUG nova.compute.provider_tree [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.778 2 DEBUG nova.scheduler.client.report [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.803 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.634s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.834 2 INFO nova.scheduler.client.report [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Deleted allocations for instance 6be00afd-ab65-48db-a575-23a285419e60#033[00m
Oct  7 10:14:47 np0005473739 nova_compute[259550]: 2025-10-07 14:14:47.906 2 DEBUG oslo_concurrency.lockutils [None req-52a21c6c-f5ff-494d-bd71-cb745724b4a6 ed6b624fb80d4d4d9b897c788b614297 daaa10f40e014711ba0819e5a5b251c7 - - default default] Lock "6be00afd-ab65-48db-a575-23a285419e60" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:14:48 np0005473739 nova_compute[259550]: 2025-10-07 14:14:48.035 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-be1f37db-0265-418f-bc5a-36bd71615d14" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:14:48 np0005473739 nova_compute[259550]: 2025-10-07 14:14:48.036 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-be1f37db-0265-418f-bc5a-36bd71615d14" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:14:48 np0005473739 nova_compute[259550]: 2025-10-07 14:14:48.036 2 DEBUG nova.objects.instance [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:48 np0005473739 nova_compute[259550]: 2025-10-07 14:14:48.095 2 DEBUG nova.network.neutron [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updated VIF entry in instance network info cache for port 3e6d7010-f744-42e8-b831-8a1955357b14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:14:48 np0005473739 nova_compute[259550]: 2025-10-07 14:14:48.095 2 DEBUG nova.network.neutron [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:48 np0005473739 nova_compute[259550]: 2025-10-07 14:14:48.115 2 DEBUG oslo_concurrency.lockutils [req-a0b61e6e-d613-4d04-bf66-9cab2729ba6a req-67a98496-d929-480b-9b20-b02d8391be7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:48 np0005473739 nova_compute[259550]: 2025-10-07 14:14:48.664 2 DEBUG nova.objects.instance [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:14:48 np0005473739 nova_compute[259550]: 2025-10-07 14:14:48.677 2 DEBUG nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:14:48 np0005473739 nova_compute[259550]: 2025-10-07 14:14:48.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:49Z|00060|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:02:a9 10.100.0.6
Oct  7 10:14:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:49Z|00061|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:02:a9 10.100.0.6
Oct  7 10:14:49 np0005473739 nova_compute[259550]: 2025-10-07 14:14:49.316 2 DEBUG nova.policy [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:14:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1507: 305 pgs: 305 active+clean; 200 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 120 KiB/s wr, 40 op/s
Oct  7 10:14:50 np0005473739 nova_compute[259550]: 2025-10-07 14:14:50.666 2 DEBUG nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Successfully updated port: be1f37db-0265-418f-bc5a-36bd71615d14 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:14:50 np0005473739 nova_compute[259550]: 2025-10-07 14:14:50.741 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:50 np0005473739 nova_compute[259550]: 2025-10-07 14:14:50.741 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:50 np0005473739 nova_compute[259550]: 2025-10-07 14:14:50.742 2 DEBUG nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:14:50 np0005473739 nova_compute[259550]: 2025-10-07 14:14:50.858 2 DEBUG nova.compute.manager [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-changed-be1f37db-0265-418f-bc5a-36bd71615d14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:14:50 np0005473739 nova_compute[259550]: 2025-10-07 14:14:50.858 2 DEBUG nova.compute.manager [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing instance network info cache due to event network-changed-be1f37db-0265-418f-bc5a-36bd71615d14. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:14:50 np0005473739 nova_compute[259550]: 2025-10-07 14:14:50.859 2 DEBUG oslo_concurrency.lockutils [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:14:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:51 np0005473739 nova_compute[259550]: 2025-10-07 14:14:51.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:51 np0005473739 nova_compute[259550]: 2025-10-07 14:14:51.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1508: 305 pgs: 305 active+clean; 200 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 19 KiB/s wr, 29 op/s
Oct  7 10:14:51 np0005473739 nova_compute[259550]: 2025-10-07 14:14:51.877 2 WARNING nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it#033[00m
Oct  7 10:14:51 np0005473739 nova_compute[259550]: 2025-10-07 14:14:51.878 2 WARNING nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it#033[00m
Oct  7 10:14:51 np0005473739 nova_compute[259550]: 2025-10-07 14:14:51.878 2 WARNING nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it#033[00m
Oct  7 10:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:14:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1509: 305 pgs: 305 active+clean; 200 MiB data, 549 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 19 KiB/s wr, 29 op/s
Oct  7 10:14:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:55Z|00433|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct  7 10:14:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:55Z|00434|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:14:55 np0005473739 nova_compute[259550]: 2025-10-07 14:14:55.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1510: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 21 KiB/s wr, 29 op/s
Oct  7 10:14:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:14:56 np0005473739 nova_compute[259550]: 2025-10-07 14:14:56.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:56 np0005473739 nova_compute[259550]: 2025-10-07 14:14:56.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:57 np0005473739 nova_compute[259550]: 2025-10-07 14:14:57.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1511: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 29 op/s
Oct  7 10:14:59 np0005473739 podman[317758]: 2025-10-07 14:14:59.092953383 +0000 UTC m=+0.068357576 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:14:59 np0005473739 podman[317757]: 2025-10-07 14:14:59.092968624 +0000 UTC m=+0.073276937 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.638 2 DEBUG nova.network.neutron [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, 
"floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:14:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1512: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 11 KiB/s wr, 24 op/s
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.810 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846484.8086007, 6be00afd-ab65-48db-a575-23a285419e60 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.811 2 INFO nova.compute.manager [-] [instance: 6be00afd-ab65-48db-a575-23a285419e60] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.920 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.922 2 DEBUG oslo_concurrency.lockutils [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.923 2 DEBUG nova.network.neutron [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Refreshing network info cache for port be1f37db-0265-418f-bc5a-36bd71615d14 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.926 2 DEBUG nova.compute.manager [None req-7832283a-8156-40ef-bf5c-892d37de73a7 - - - - - -] [instance: 6be00afd-ab65-48db-a575-23a285419e60] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.930 2 DEBUG nova.virt.libvirt.vif [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.931 2 DEBUG nova.network.os_vif_util [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.932 2 DEBUG nova.network.os_vif_util [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.933 2 DEBUG os_vif [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.935 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.936 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbe1f37db-02, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.941 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbe1f37db-02, col_values=(('external_ids', {'iface-id': 'be1f37db-0265-418f-bc5a-36bd71615d14', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:b3:7c', 'vm-uuid': 'eb8777f3-5daa-49c7-8994-687012f20453'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:59 np0005473739 NetworkManager[44949]: <info>  [1759846499.9436] manager: (tapbe1f37db-02): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/207)
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.950 2 INFO os_vif [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02')#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.951 2 DEBUG nova.virt.libvirt.vif [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.951 2 DEBUG nova.network.os_vif_util [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.952 2 DEBUG nova.network.os_vif_util [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.956 2 DEBUG nova.virt.libvirt.guest [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct  7 10:14:59 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:4d:b3:7c"/>
Oct  7 10:14:59 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:14:59 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:14:59 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:14:59 np0005473739 nova_compute[259550]:  <target dev="tapbe1f37db-02"/>
Oct  7 10:14:59 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:14:59 np0005473739 nova_compute[259550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  7 10:14:59 np0005473739 kernel: tapbe1f37db-02: entered promiscuous mode
Oct  7 10:14:59 np0005473739 NetworkManager[44949]: <info>  [1759846499.9692] manager: (tapbe1f37db-02): new Tun device (/org/freedesktop/NetworkManager/Devices/208)
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:14:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:59Z|00435|binding|INFO|Claiming lport be1f37db-0265-418f-bc5a-36bd71615d14 for this chassis.
Oct  7 10:14:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:59Z|00436|binding|INFO|be1f37db-0265-418f-bc5a-36bd71615d14: Claiming fa:16:3e:4d:b3:7c 10.100.0.4
Oct  7 10:14:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:14:59Z|00437|binding|INFO|Setting lport be1f37db-0265-418f-bc5a-36bd71615d14 ovn-installed in OVS
Oct  7 10:14:59 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:14:59.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:00 np0005473739 systemd-udevd[317803]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:15:00 np0005473739 NetworkManager[44949]: <info>  [1759846500.0214] device (tapbe1f37db-02): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:00 np0005473739 NetworkManager[44949]: <info>  [1759846500.0224] device (tapbe1f37db-02): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.046 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.047 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.049 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:00Z|00438|binding|INFO|Setting lport be1f37db-0265-418f-bc5a-36bd71615d14 up in Southbound
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.108 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:b3:7c 10.100.0.4'], port_security=['fa:16:3e:4d:b3:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-166023136', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-166023136', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=be1f37db-0265-418f-bc5a-36bd71615d14) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.109 161536 INFO neutron.agent.ovn.metadata.agent [-] Port be1f37db-0265-418f-bc5a-36bd71615d14 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.111 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.131 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b21653df-3878-45d8-85c8-4f529b7e3552]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.171 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9ba95d-528c-4ca4-a352-92f94df57298]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.175 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[62949bf7-9edf-4a45-8fae-f34a88be3e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.206 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2f55a99b-7484-495c-8a5a-73be8203df6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.226 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f95f8600-13c4-4313-942e-e8cfc637e920]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 9, 'rx_bytes': 1084, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 317811, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.246 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a111d49-8ae2-4dbd-996d-a5c2c018650e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317812, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 317812, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.248 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.251 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.252 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.252 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:00.253 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.379 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.380 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.380 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:65:98:6b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.380 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:2b:f6:d8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.380 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:04:02:a9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.380 2 DEBUG nova.virt.libvirt.driver [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:4d:b3:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.518 2 DEBUG nova.virt.libvirt.guest [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:00 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:15:00</nova:creationTime>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct  7 10:15:00 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    <nova:port uuid="9a533309-4d4d-4458-9a27-3fe85361ab15">
Oct  7 10:15:00 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct  7 10:15:00 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct  7 10:15:00 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:00 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:15:00 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:15:00 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.562 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.563 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.566 2 DEBUG oslo_concurrency.lockutils [None req-edd33495-9642-4ce3-a64e-1878960be6af eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-be1f37db-0265-418f-bc5a-36bd71615d14" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 12.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.609 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.737 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.738 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.749 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:15:00 np0005473739 nova_compute[259550]: 2025-10-07 14:15:00.750 2 INFO nova.compute.claims [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:15:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.039 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:15:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/22000158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.511 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.518 2 DEBUG nova.compute.provider_tree [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:15:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:01Z|00062|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:b3:7c 10.100.0.4
Oct  7 10:15:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:01Z|00063|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:b3:7c 10.100.0.4
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.597 2 DEBUG nova.scheduler.client.report [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:15:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1513: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 7.4 KiB/s wr, 1 op/s
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.759 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.760 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.913 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.913 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.943 2 DEBUG nova.compute.manager [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.944 2 DEBUG oslo_concurrency.lockutils [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.944 2 DEBUG oslo_concurrency.lockutils [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.945 2 DEBUG oslo_concurrency.lockutils [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.945 2 DEBUG nova.compute.manager [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.945 2 WARNING nova.compute.manager [req-46d517b5-7aef-46ba-bd99-693810295a4a req-12091840-23f4-4a66-b3f4-000fb486ffb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 for instance with vm_state active and task_state None.
Oct  7 10:15:01 np0005473739 nova_compute[259550]: 2025-10-07 14:15:01.969 2 INFO nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.006 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.169 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.171 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.171 2 INFO nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating image(s)
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.195 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.221 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.247 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.251 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.334 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.335 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.336 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.336 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.358 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.363 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.486 2 DEBUG nova.policy [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39e4681256e44d92ac5928e4f8e0d348', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef9390a1dd804281beea149e0086b360', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.669 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.752 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] resizing rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.854 2 DEBUG nova.objects.instance [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'migration_context' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.890 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.891 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Ensure instance console log exists: /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.891 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.892 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:15:02 np0005473739 nova_compute[259550]: 2025-10-07 14:15:02.892 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.052 2 DEBUG nova.network.neutron [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updated VIF entry in instance network info cache for port be1f37db-0265-418f-bc5a-36bd71615d14. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.052 2 DEBUG nova.network.neutron [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.071 2 DEBUG oslo_concurrency.lockutils [req-f53b9425-0eef-439e-9225-d92c94b72e90 req-852728f3-17cc-409c-b08f-67127c50d118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.167 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-9a533309-4d4d-4458-9a27-3fe85361ab15" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.168 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-9a533309-4d4d-4458-9a27-3fe85361ab15" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.191 2 DEBUG nova.objects.instance [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.222 2 DEBUG nova.virt.libvirt.vif [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.223 2 DEBUG nova.network.os_vif_util [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.224 2 DEBUG nova.network.os_vif_util [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.228 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.230 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.232 2 DEBUG nova.virt.libvirt.driver [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Attempting to detach device tap9a533309-4d from instance eb8777f3-5daa-49c7-8994-687012f20453 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.233 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:2b:f6:d8"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <target dev="tap9a533309-4d"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.238 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.242 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface>not found in domain: <domain type='kvm' id='56'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <name>instance-00000031</name>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <uuid>eb8777f3-5daa-49c7-8994-687012f20453</uuid>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:15:00</nova:creationTime>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:port uuid="9a533309-4d4d-4458-9a27-3fe85361ab15">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='serial'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='uuid'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk' index='2'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk.config' index='1'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:65:98:6b'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target dev='tapfdcb59f4-9f'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:2b:f6:d8'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target dev='tap9a533309-4d'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='net1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:04:02:a9'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target dev='tap3e6d7010-f7'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='net2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:4d:b3:7c'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target dev='tapbe1f37db-02'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='net3'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/0'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c351,c644</label>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c351,c644</imagelabel>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.243 2 INFO nova.virt.libvirt.driver [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tap9a533309-4d from instance eb8777f3-5daa-49c7-8994-687012f20453 from the persistent domain config.#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.243 2 DEBUG nova.virt.libvirt.driver [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] (1/8): Attempting to detach device tap9a533309-4d with device alias net1 from instance eb8777f3-5daa-49c7-8994-687012f20453 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.243 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:2b:f6:d8"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <target dev="tap9a533309-4d"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.320 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.321 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:03 np0005473739 kernel: tap9a533309-4d (unregistering): left promiscuous mode
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.346 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:15:03 np0005473739 NetworkManager[44949]: <info>  [1759846503.3538] device (tap9a533309-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:15:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:03Z|00439|binding|INFO|Releasing lport 9a533309-4d4d-4458-9a27-3fe85361ab15 from this chassis (sb_readonly=0)
Oct  7 10:15:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:03Z|00440|binding|INFO|Setting lport 9a533309-4d4d-4458-9a27-3fe85361ab15 down in Southbound
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:03Z|00441|binding|INFO|Removing iface tap9a533309-4d ovn-installed in OVS
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.369 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759846503.3692205, eb8777f3-5daa-49c7-8994-687012f20453 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.371 2 DEBUG nova.virt.libvirt.driver [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Start waiting for the detach event from libvirt for device tap9a533309-4d with device alias net1 for instance eb8777f3-5daa-49c7-8994-687012f20453 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.371 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.373 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2b:f6:d8 10.100.0.8'], port_security=['fa:16:3e:2b:f6:d8 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9a533309-4d4d-4458-9a27-3fe85361ab15) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.374 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9a533309-4d4d-4458-9a27-3fe85361ab15 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.375 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface>not found in domain: <domain type='kvm' id='56'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <name>instance-00000031</name>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <uuid>eb8777f3-5daa-49c7-8994-687012f20453</uuid>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:15:00</nova:creationTime>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.375 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:port uuid="9a533309-4d4d-4458-9a27-3fe85361ab15">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='serial'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='uuid'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk' index='2'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk.config' index='1'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:65:98:6b'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target dev='tapfdcb59f4-9f'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:04:02:a9'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target dev='tap3e6d7010-f7'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='net2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:4d:b3:7c'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target dev='tapbe1f37db-02'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='net3'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/0'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c351,c644</label>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c351,c644</imagelabel>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.375 2 INFO nova.virt.libvirt.driver [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tap9a533309-4d from instance eb8777f3-5daa-49c7-8994-687012f20453 from the live domain config.#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.376 2 DEBUG nova.virt.libvirt.vif [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.376 2 DEBUG nova.network.os_vif_util [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.377 2 DEBUG nova.network.os_vif_util [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.377 2 DEBUG os_vif [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.380 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a533309-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.389 2 INFO os_vif [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d')#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.390 2 DEBUG nova.virt.libvirt.guest [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:15:03</nova:creationTime>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct  7 10:15:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:03 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:15:03 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.394 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f9af96df-ee32-48ee-b1eb-28d1c446127f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.423 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0b58c35e-6fb3-4574-99db-f87c6f62d046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.426 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6ffb493f-7f67-4574-9a71-5990ab780ee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.433 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.434 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.441 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.442 2 INFO nova.compute.claims [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.451 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ef746da5-e1eb-4c4b-a9aa-b9ce20cd86d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b78f8a4d-69cd-4aab-a324-2dfa1f6b3949]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 14, 'tx_packets': 11, 'rx_bytes': 1084, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318012, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.488 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20daee53-81d1-40b6-83a5-7f302c88c79d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318013, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318013, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.490 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:03.494 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.515 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Successfully created port: 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.612 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1514: 305 pgs: 305 active+clean; 200 MiB data, 545 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 7.1 KiB/s wr, 1 op/s
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.936 2 DEBUG nova.compute.manager [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-unplugged-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.936 2 DEBUG oslo_concurrency.lockutils [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.937 2 DEBUG oslo_concurrency.lockutils [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.937 2 DEBUG oslo_concurrency.lockutils [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.937 2 DEBUG nova.compute.manager [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-unplugged-9a533309-4d4d-4458-9a27-3fe85361ab15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:03 np0005473739 nova_compute[259550]: 2025-10-07 14:15:03.938 2 WARNING nova.compute.manager [req-340a8fea-4717-4a40-9317-e90260fbf697 req-0fb7657d-64ca-4722-8966-bea6ef92aef4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-unplugged-9a533309-4d4d-4458-9a27-3fe85361ab15 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.028 2 DEBUG nova.compute.manager [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.029 2 DEBUG oslo_concurrency.lockutils [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.029 2 DEBUG oslo_concurrency.lockutils [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.029 2 DEBUG oslo_concurrency.lockutils [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.030 2 DEBUG nova.compute.manager [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.030 2 WARNING nova.compute.manager [req-88b046ff-a7ad-42c9-a105-76cb9e67c29d req-ca5e711a-0393-4b9d-986d-98d7d7af7d04 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-be1f37db-0265-418f-bc5a-36bd71615d14 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:15:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3132550295' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.092 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.099 2 DEBUG nova.compute.provider_tree [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.116 2 DEBUG nova.scheduler.client.report [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.144 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.145 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.189 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.189 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.207 2 INFO nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.233 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.311 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.312 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.312 2 INFO nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Creating image(s)#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.331 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.355 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.377 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.380 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.454 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.456 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.457 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.458 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.490 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.495 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.804 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.309s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.878 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] resizing rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:15:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:04.954 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.954 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:04.955 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:15:04 np0005473739 nova_compute[259550]: 2025-10-07 14:15:04.986 2 DEBUG nova.objects.instance [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lazy-loading 'migration_context' on Instance uuid 52aed8a1-32e4-4242-881e-1b40f79f09e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:05 np0005473739 nova_compute[259550]: 2025-10-07 14:15:05.004 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:15:05 np0005473739 nova_compute[259550]: 2025-10-07 14:15:05.004 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Ensure instance console log exists: /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:15:05 np0005473739 nova_compute[259550]: 2025-10-07 14:15:05.005 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:05 np0005473739 nova_compute[259550]: 2025-10-07 14:15:05.005 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:05 np0005473739 nova_compute[259550]: 2025-10-07 14:15:05.005 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:05 np0005473739 nova_compute[259550]: 2025-10-07 14:15:05.089 2 DEBUG nova.policy [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b202876689054b5ebeef4c4648b455bf', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0a497c44829943d787416adb835d66e5', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:15:05 np0005473739 nova_compute[259550]: 2025-10-07 14:15:05.698 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:05 np0005473739 nova_compute[259550]: 2025-10-07 14:15:05.699 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:05 np0005473739 nova_compute[259550]: 2025-10-07 14:15:05.700 2 DEBUG nova.network.neutron [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:15:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1515: 305 pgs: 305 active+clean; 230 MiB data, 556 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 928 KiB/s wr, 15 op/s
Oct  7 10:15:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.255 2 DEBUG nova.compute.manager [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.255 2 DEBUG oslo_concurrency.lockutils [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.255 2 DEBUG oslo_concurrency.lockutils [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.256 2 DEBUG oslo_concurrency.lockutils [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.256 2 DEBUG nova.compute.manager [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.256 2 WARNING nova.compute.manager [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-9a533309-4d4d-4458-9a27-3fe85361ab15 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.256 2 DEBUG nova.compute.manager [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-deleted-9a533309-4d4d-4458-9a27-3fe85361ab15 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.256 2 INFO nova.compute.manager [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Neutron deleted interface 9a533309-4d4d-4458-9a27-3fe85361ab15; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.257 2 DEBUG nova.network.neutron [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.276 2 DEBUG nova.objects.instance [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lazy-loading 'system_metadata' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.298 2 DEBUG nova.objects.instance [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lazy-loading 'flavor' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.317 2 DEBUG nova.virt.libvirt.vif [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.318 2 DEBUG nova.network.os_vif_util [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.319 2 DEBUG nova.network.os_vif_util [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.322 2 DEBUG nova.virt.libvirt.guest [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.325 2 DEBUG nova.virt.libvirt.guest [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface>not found in domain: <domain type='kvm' id='56'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <name>instance-00000031</name>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <uuid>eb8777f3-5daa-49c7-8994-687012f20453</uuid>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:15:03</nova:creationTime>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:15:06 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='serial'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='uuid'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk' index='2'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk.config' index='1'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:65:98:6b'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target dev='tapfdcb59f4-9f'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:04:02:a9'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target dev='tap3e6d7010-f7'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='net2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:4d:b3:7c'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target dev='tapbe1f37db-02'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='net3'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/0'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c351,c644</label>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c351,c644</imagelabel>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:15:06 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:06 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.327 2 DEBUG nova.virt.libvirt.guest [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.330 2 DEBUG nova.virt.libvirt.guest [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:2b:f6:d8"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap9a533309-4d"/></interface>not found in domain: <domain type='kvm' id='56'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <name>instance-00000031</name>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <uuid>eb8777f3-5daa-49c7-8994-687012f20453</uuid>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:15:03</nova:creationTime>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:15:06 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='serial'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='uuid'>eb8777f3-5daa-49c7-8994-687012f20453</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk' index='2'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/eb8777f3-5daa-49c7-8994-687012f20453_disk.config' index='1'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:65:98:6b'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target dev='tapfdcb59f4-9f'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:04:02:a9'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target dev='tap3e6d7010-f7'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='net2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:4d:b3:7c'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target dev='tapbe1f37db-02'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='net3'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x08' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/0'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453/console.log' append='off'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c351,c644</label>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c351,c644</imagelabel>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:15:06 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:06 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.330 2 WARNING nova.virt.libvirt.driver [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Detaching interface fa:16:3e:2b:f6:d8 failed because the device is no longer found on the guest.: nova.exception.DeviceNotFound: Device 'tap9a533309-4d' not found.
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.331 2 DEBUG nova.virt.libvirt.vif [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata=<?>,migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.331 2 DEBUG nova.network.os_vif_util [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converting VIF {"id": "9a533309-4d4d-4458-9a27-3fe85361ab15", "address": "fa:16:3e:2b:f6:d8", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9a533309-4d", "ovs_interfaceid": "9a533309-4d4d-4458-9a27-3fe85361ab15", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.332 2 DEBUG nova.network.os_vif_util [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.333 2 DEBUG os_vif [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9a533309-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.336 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.339 2 INFO os_vif [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:2b:f6:d8,bridge_name='br-int',has_traffic_filtering=True,id=9a533309-4d4d-4458-9a27-3fe85361ab15,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9a533309-4d')#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.339 2 DEBUG nova.virt.libvirt.guest [req-215ef7a1-fcd3-4b0f-990e-dc6408738e5b req-9f9801c2-b85a-474f-aff1-5f49ad574c2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:name>tempest-AttachInterfacesTestJSON-server-151822004</nova:name>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:15:06</nova:creationTime>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:port uuid="fdcb59f4-9f89-4147-941b-a28bfa0621bf">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:port uuid="3e6d7010-f744-42e8-b831-8a1955357b14">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    <nova:port uuid="be1f37db-0265-418f-bc5a-36bd71615d14">
Oct  7 10:15:06 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:15:06 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:15:06 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:15:06 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.643 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Successfully updated port: 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.687 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.687 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquired lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.687 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.836 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Successfully created port: 9999835e-253e-4f1f-82c7-59a30f3e1537 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:15:06 np0005473739 nova_compute[259550]: 2025-10-07 14:15:06.974 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.399 161536 WARNING neutron.agent.ovn.metadata.agent [-] Removing non-external type port 4cdb8222-4ae2-4185-90c6-f5e1374079e9 with type ""#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.400 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched DELETE: PortBindingDeletedEvent(events=('delete',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:b3:7c 10.100.0.4'], port_security=['fa:16:3e:4d:b3:7c 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[True], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-166023136', 'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-166023136', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=be1f37db-0265-418f-bc5a-36bd71615d14) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.402 161536 INFO neutron.agent.ovn.metadata.agent [-] Port be1f37db-0265-418f-bc5a-36bd71615d14 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.404 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:15:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:07Z|00442|binding|INFO|Removing iface tapbe1f37db-02 ovn-installed in OVS
Oct  7 10:15:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:07Z|00443|binding|INFO|Removing lport be1f37db-0265-418f-bc5a-36bd71615d14 ovn-installed in OVS
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.424 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68814c89-5f15-4f65-afee-44cd6e2d0964]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.457 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea3db4e-c2fd-47c6-85f8-31b84bcce89b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.460 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8b544677-04e0-4736-8ded-81e1329ae6d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.489 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d5dfb984-60ec-4b51-a108-95b46c252a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.507 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[97cb26e8-ba24-4d84-a31b-b3269b634613]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 13, 'rx_bytes': 1126, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 13, 'rx_bytes': 1126, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318207, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.525 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[97a58195-3289-4c5d-b26e-fe066b34d8f5]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318208, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318208, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.527 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.530 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.530 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.531 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.531 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.624 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.625 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.625 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.625 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.625 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.626 2 INFO nova.compute.manager [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Terminating instance#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.627 2 DEBUG nova.compute.manager [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:15:07 np0005473739 kernel: tapfdcb59f4-9f (unregistering): left promiscuous mode
Oct  7 10:15:07 np0005473739 NetworkManager[44949]: <info>  [1759846507.7324] device (tapfdcb59f4-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:15:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1516: 305 pgs: 305 active+clean; 271 MiB data, 578 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.7 MiB/s wr, 31 op/s
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:07Z|00444|binding|INFO|Releasing lport fdcb59f4-9f89-4147-941b-a28bfa0621bf from this chassis (sb_readonly=0)
Oct  7 10:15:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:07Z|00445|binding|INFO|Setting lport fdcb59f4-9f89-4147-941b-a28bfa0621bf down in Southbound
Oct  7 10:15:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:07Z|00446|binding|INFO|Removing iface tapfdcb59f4-9f ovn-installed in OVS
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.747 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:65:98:6b 10.100.0.13'], port_security=['fa:16:3e:65:98:6b 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '206b777f-07ac-463f-aac7-dcc9bac5a7aa', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.180'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fdcb59f4-9f89-4147-941b-a28bfa0621bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.748 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fdcb59f4-9f89-4147-941b-a28bfa0621bf in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis#033[00m
Oct  7 10:15:07 np0005473739 kernel: tap3e6d7010-f7 (unregistering): left promiscuous mode
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.752 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:15:07 np0005473739 NetworkManager[44949]: <info>  [1759846507.7576] device (tap3e6d7010-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:07Z|00447|binding|INFO|Releasing lport 3e6d7010-f744-42e8-b831-8a1955357b14 from this chassis (sb_readonly=0)
Oct  7 10:15:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:07Z|00448|binding|INFO|Setting lport 3e6d7010-f744-42e8-b831-8a1955357b14 down in Southbound
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:07Z|00449|binding|INFO|Removing iface tap3e6d7010-f7 ovn-installed in OVS
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.775 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:02:a9 10.100.0.6'], port_security=['fa:16:3e:04:02:a9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'eb8777f3-5daa-49c7-8994-687012f20453', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3e6d7010-f744-42e8-b831-8a1955357b14) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.774 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dc76dced-f39f-495a-88ed-76f5eaca257f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 kernel: tapbe1f37db-02 (unregistering): left promiscuous mode
Oct  7 10:15:07 np0005473739 NetworkManager[44949]: <info>  [1759846507.8036] device (tapbe1f37db-02): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.813 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7bb6c84b-fd0e-4518-84d2-9acb45ac9e07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.817 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a32588-a538-4a74-927b-062459bf1c3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.853 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aa31e42c-1974-4107-b23d-84973e04cc43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000031.scope: Deactivated successfully.
Oct  7 10:15:07 np0005473739 systemd[1]: machine-qemu\x2d56\x2dinstance\x2d00000031.scope: Consumed 16.730s CPU time.
Oct  7 10:15:07 np0005473739 systemd-machined[214580]: Machine qemu-56-instance-00000031 terminated.
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.878 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e951296e-c83c-4ad3-9d4e-5b6e25764699]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 15, 'rx_bytes': 1126, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 15, 'rx_bytes': 1126, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 121], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702782, 'reachable_time': 23108, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318231, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.898 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b2b8863-3e99-4771-b925-1c2833773e73]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702795, 'tstamp': 702795}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318232, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 702798, 'tstamp': 702798}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318232, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.900 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 nova_compute[259550]: 2025-10-07 14:15:07.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.914 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.914 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.915 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.916 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.916 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3e6d7010-f744-42e8-b831-8a1955357b14 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.918 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1d9f332-f920-4d6e-8e91-dd13ec334d51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.919 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6bea6e48-b4ac-44cc-b81b-282249535bfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:07.920 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 namespace which is not needed anymore#033[00m
Oct  7 10:15:08 np0005473739 NetworkManager[44949]: <info>  [1759846508.0690] manager: (tap3e6d7010-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/209)
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:08 np0005473739 NetworkManager[44949]: <info>  [1759846508.0839] manager: (tapbe1f37db-02): new Tun device (/org/freedesktop/NetworkManager/Devices/210)
Oct  7 10:15:08 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [NOTICE]   (314592) : haproxy version is 2.8.14-c23fe91
Oct  7 10:15:08 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [NOTICE]   (314592) : path to executable is /usr/sbin/haproxy
Oct  7 10:15:08 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [WARNING]  (314592) : Exiting Master process...
Oct  7 10:15:08 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [WARNING]  (314592) : Exiting Master process...
Oct  7 10:15:08 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [ALERT]    (314592) : Current worker (314594) exited with code 143 (Terminated)
Oct  7 10:15:08 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[314588]: [WARNING]  (314592) : All workers exited. Exiting... (0)
Oct  7 10:15:08 np0005473739 systemd[1]: libpod-61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327.scope: Deactivated successfully.
Oct  7 10:15:08 np0005473739 podman[318253]: 2025-10-07 14:15:08.098471622 +0000 UTC m=+0.062844081 container died 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.112 2 INFO nova.network.neutron [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Port 9a533309-4d4d-4458-9a27-3fe85361ab15 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.119 2 INFO nova.virt.libvirt.driver [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Instance destroyed successfully.#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.119 2 DEBUG nova.objects.instance [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'resources' on Instance uuid eb8777f3-5daa-49c7-8994-687012f20453 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327-userdata-shm.mount: Deactivated successfully.
Oct  7 10:15:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-16461d24c3ac066e546c480d661cadac4a5387b7af213607c5bfba84e3d1e3d1-merged.mount: Deactivated successfully.
Oct  7 10:15:08 np0005473739 podman[318253]: 2025-10-07 14:15:08.15214574 +0000 UTC m=+0.116518199 container cleanup 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  7 10:15:08 np0005473739 systemd[1]: libpod-conmon-61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327.scope: Deactivated successfully.
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.176 2 DEBUG nova.virt.libvirt.vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.177 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.178 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.178 2 DEBUG os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.180 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfdcb59f4-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.192 2 INFO os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:65:98:6b,bridge_name='br-int',has_traffic_filtering=True,id=fdcb59f4-9f89-4147-941b-a28bfa0621bf,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfdcb59f4-9f')#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.193 2 DEBUG nova.virt.libvirt.vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.193 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.194 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.195 2 DEBUG os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.197 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3e6d7010-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.207 2 INFO os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:04:02:a9,bridge_name='br-int',has_traffic_filtering=True,id=3e6d7010-f744-42e8-b831-8a1955357b14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3e6d7010-f7')#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.208 2 DEBUG nova.virt.libvirt.vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:13:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesTestJSON-server-151822004',display_name='tempest-AttachInterfacesTestJSON-server-151822004',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacestestjson-server-151822004',id=49,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK5JAiV9BVuCq239aB2e/KW/fZYFwYjAFX3YBwcl9/+jD+zdeGdM1XJC4allLyQ1QOT+Qp/XsEXeBu6RFt42XwFnXECOLZx/5gxeUFutVniZGFrQKSFf/y3ycdWnkr75bQ==',key_name='tempest-keypair-1967512478',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-ffwv7mgh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:09Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=eb8777f3-5daa-49c7-8994-687012f20453,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.208 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.209 2 DEBUG nova.network.os_vif_util [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.209 2 DEBUG os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.211 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbe1f37db-02, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:08 np0005473739 podman[318320]: 2025-10-07 14:15:08.217132937 +0000 UTC m=+0.042392891 container remove 61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.217 2 INFO os_vif [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:b3:7c,bridge_name='br-int',has_traffic_filtering=True,id=be1f37db-0265-418f-bc5a-36bd71615d14,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapbe1f37db-02')#033[00m
Oct  7 10:15:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.223 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9cbf0b14-0edd-41f3-a3ea-56d4486406af]: (4, ('Tue Oct  7 02:15:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 (61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327)\n61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327\nTue Oct  7 02:15:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 (61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327)\n61801b5a551057e50a25f49695a016d692d5b55e910f30101cc74a133bdbe327\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.225 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f3fdd56-7def-4456-9c6e-595b90284874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.226 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:08 np0005473739 kernel: tapb1d9f332-f0: left promiscuous mode
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.247 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[01b85d8d-be29-4e18-b6f2-a99ae05501bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.274 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c5117107-9489-431f-8eae-b29942fca9b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.276 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f9cfb307-8deb-42ae-8eed-8344e16af5ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.296 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[58d73921-61b2-4115-be4c-06b7c6c6c1d4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 702775, 'reachable_time': 39736, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318355, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:08 np0005473739 systemd[1]: run-netns-ovnmeta\x2db1d9f332\x2df920\x2d4d6e\x2d8e91\x2ddd13ec334d51.mount: Deactivated successfully.
Oct  7 10:15:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.302 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:15:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:08.302 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9eca11db-2981-4c22-9b7c-986c9e5b59c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.336 2 DEBUG nova.compute.manager [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-unplugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.336 2 DEBUG oslo_concurrency.lockutils [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.337 2 DEBUG oslo_concurrency.lockutils [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.337 2 DEBUG oslo_concurrency.lockutils [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.338 2 DEBUG nova.compute.manager [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-unplugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.338 2 DEBUG nova.compute.manager [req-0a7e8061-1a5c-49b3-a056-39366a061013 req-a6ed056d-e283-4a73-86e9-8a8494c3ce75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-unplugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.639 2 DEBUG nova.compute.manager [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-changed-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.640 2 DEBUG nova.compute.manager [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Refreshing instance network info cache due to event network-changed-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.642 2 DEBUG oslo_concurrency.lockutils [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.672 2 DEBUG nova.network.neutron [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Updating instance_info_cache with network_info: [{"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.686 2 INFO nova.virt.libvirt.driver [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Deleting instance files /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453_del#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.687 2 INFO nova.virt.libvirt.driver [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Deletion of /var/lib/nova/instances/eb8777f3-5daa-49c7-8994-687012f20453_del complete#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.694 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Releasing lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.694 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance network_info: |[{"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.695 2 DEBUG oslo_concurrency.lockutils [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.695 2 DEBUG nova.network.neutron [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Refreshing network info cache for port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.698 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start _get_guest_xml network_info=[{"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.703 2 WARNING nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.708 2 DEBUG nova.virt.libvirt.host [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.709 2 DEBUG nova.virt.libvirt.host [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.717 2 DEBUG nova.virt.libvirt.host [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.718 2 DEBUG nova.virt.libvirt.host [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.718 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.718 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.719 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.720 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.720 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.720 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.720 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.721 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.721 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.722 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.722 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.722 2 DEBUG nova.virt.hardware [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.726 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.771 2 INFO nova.compute.manager [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Took 1.14 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.772 2 DEBUG oslo.service.loopingcall [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.773 2 DEBUG nova.compute.manager [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:15:08 np0005473739 nova_compute[259550]: 2025-10-07 14:15:08.774 2 DEBUG nova.network.neutron [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:15:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1211987338' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.201 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.228 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.233 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.347 2 DEBUG neutronclient.v2_0.client [-] Error message: {"NeutronError": {"type": "PortNotFound", "message": "Port be1f37db-0265-418f-bc5a-36bd71615d14 could not be found.", "detail": ""}} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.349 2 DEBUG nova.network.neutron [-] Unable to show port be1f37db-0265-418f-bc5a-36bd71615d14 as it no longer exists. _unbind_ports /usr/lib/python3.9/site-packages/nova/network/neutron.py:666#033[00m
Oct  7 10:15:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1158229080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.677 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.679 2 DEBUG nova.virt.libvirt.vif [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:02Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.679 2 DEBUG nova.network.os_vif_util [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.680 2 DEBUG nova.network.os_vif_util [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.682 2 DEBUG nova.objects.instance [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.695 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  <uuid>a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</uuid>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  <name>instance-00000035</name>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <nova:name>tempest-tempest.common.compute-instance-2003953244</nova:name>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:15:08</nova:creationTime>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <nova:user uuid="39e4681256e44d92ac5928e4f8e0d348">tempest-ServerActionsTestOtherA-508284156-project-member</nova:user>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <nova:project uuid="ef9390a1dd804281beea149e0086b360">tempest-ServerActionsTestOtherA-508284156</nova:project>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <nova:port uuid="2781ab1e-ba6c-4689-8da2-ddcf85b31ca8">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <entry name="serial">a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</entry>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <entry name="uuid">a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</entry>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:d4:48:b2"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <target dev="tap2781ab1e-ba"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/console.log" append="off"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:15:09 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:09 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:09 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:09 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.695 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Preparing to wait for external event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.696 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.696 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.696 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.697 2 DEBUG nova.virt.libvirt.vif [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:02Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.697 2 DEBUG nova.network.os_vif_util [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.697 2 DEBUG nova.network.os_vif_util [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.698 2 DEBUG os_vif [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2781ab1e-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2781ab1e-ba, col_values=(('external_ids', {'iface-id': '2781ab1e-ba6c-4689-8da2-ddcf85b31ca8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:48:b2', 'vm-uuid': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:09 np0005473739 NetworkManager[44949]: <info>  [1759846509.7048] manager: (tap2781ab1e-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/211)
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.711 2 INFO os_vif [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba')#033[00m
Oct  7 10:15:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1517: 305 pgs: 305 active+clean; 245 MiB data, 571 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.6 MiB/s wr, 73 op/s
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.760 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.760 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.760 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No VIF found with MAC fa:16:3e:d4:48:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.761 2 INFO nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Using config drive#033[00m
Oct  7 10:15:09 np0005473739 nova_compute[259550]: 2025-10-07 14:15:09.782 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.395 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Successfully updated port: 9999835e-253e-4f1f-82c7-59a30f3e1537 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.408 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.408 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquired lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.408 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.790 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.887 2 INFO nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating config drive at /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.893 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0v_2snf2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.955 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.956 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.956 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.956 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.956 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.956 2 WARNING nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-fdcb59f4-9f89-4147-941b-a28bfa0621bf for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-unplugged-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-unplugged-3e6d7010-f744-42e8-b831-8a1955357b14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.957 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-unplugged-3e6d7010-f744-42e8-b831-8a1955357b14 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.958 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.958 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "eb8777f3-5daa-49c7-8994-687012f20453-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.958 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.958 2 DEBUG oslo_concurrency.lockutils [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.958 2 DEBUG nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] No waiting events found dispatching network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:10 np0005473739 nova_compute[259550]: 2025-10-07 14:15:10.958 2 WARNING nova.compute.manager [req-17abce1b-6675-4f08-9899-a55f95aaf6fe req-0d443439-d380-4801-b240-9160cab55b19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received unexpected event network-vif-plugged-3e6d7010-f744-42e8-b831-8a1955357b14 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:15:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.022 2 DEBUG nova.compute.manager [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Received event network-changed-9999835e-253e-4f1f-82c7-59a30f3e1537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.023 2 DEBUG nova.compute.manager [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Refreshing instance network info cache due to event network-changed-9999835e-253e-4f1f-82c7-59a30f3e1537. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.023 2 DEBUG oslo_concurrency.lockutils [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.051 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0v_2snf2" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.076 2 DEBUG nova.storage.rbd_utils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.080 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.275 2 DEBUG oslo_concurrency.processutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.276 2 INFO nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deleting local config drive /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config because it was imported into RBD.#033[00m
Oct  7 10:15:11 np0005473739 kernel: tap2781ab1e-ba: entered promiscuous mode
Oct  7 10:15:11 np0005473739 NetworkManager[44949]: <info>  [1759846511.3416] manager: (tap2781ab1e-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/212)
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:11Z|00450|binding|INFO|Claiming lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for this chassis.
Oct  7 10:15:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:11Z|00451|binding|INFO|2781ab1e-ba6c-4689-8da2-ddcf85b31ca8: Claiming fa:16:3e:d4:48:b2 10.100.0.5
Oct  7 10:15:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:11Z|00452|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 ovn-installed in OVS
Oct  7 10:15:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:11Z|00453|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 up in Southbound
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.362 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:48:b2 10.100.0.5'], port_security=['fa:16:3e:d4:48:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd609a9ff-183f-496e-83cc-641ffdd2b1f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.366 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb bound to our chassis#033[00m
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.368 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.391 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fda0ac18-6e4a-40af-b653-87e76b2b3f14]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:11 np0005473739 systemd-machined[214580]: New machine qemu-61-instance-00000035.
Oct  7 10:15:11 np0005473739 systemd[1]: Started Virtual Machine qemu-61-instance-00000035.
Oct  7 10:15:11 np0005473739 systemd-udevd[318517]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.446 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[80f6c820-e464-4dee-acea-02a2fbbf1435]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.451 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[85d3987b-cc48-4024-b38a-8b640b4ab05c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:11 np0005473739 NetworkManager[44949]: <info>  [1759846511.4529] device (tap2781ab1e-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:11 np0005473739 NetworkManager[44949]: <info>  [1759846511.4545] device (tap2781ab1e-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:11 np0005473739 podman[318490]: 2025-10-07 14:15:11.494062143 +0000 UTC m=+0.108707613 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.494 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fd63be78-7a8d-4aa5-a68d-f4a8e53f6703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.512 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b817283-2ffe-4f09-904f-724f4100dae6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318545, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.534 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b5f103a4-bc27-4b9d-8283-55f4e18a3695]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318546, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318546, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.536 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:11 np0005473739 podman[318491]: 2025-10-07 14:15:11.539117453 +0000 UTC m=+0.151580705 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.540 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.545 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:11.545 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.633 2 DEBUG nova.network.neutron [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Updated VIF entry in instance network info cache for port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.634 2 DEBUG nova.network.neutron [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Updating instance_info_cache with network_info: [{"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.649 2 DEBUG oslo_concurrency.lockutils [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.649 2 DEBUG nova.compute.manager [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-deleted-be1f37db-0265-418f-bc5a-36bd71615d14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.649 2 INFO nova.compute.manager [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Neutron deleted interface be1f37db-0265-418f-bc5a-36bd71615d14; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.650 2 DEBUG nova.network.neutron [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:11 np0005473739 nova_compute[259550]: 2025-10-07 14:15:11.668 2 DEBUG nova.compute.manager [req-87abe4f1-0346-4979-acd7-e6ade5bda0d1 req-d6da6c80-1cfc-4e30-9240-097c857d8c03 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Detach interface failed, port_id=be1f37db-0265-418f-bc5a-36bd71615d14, reason: Instance eb8777f3-5daa-49c7-8994-687012f20453 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:15:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1518: 305 pgs: 305 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.446 2 DEBUG nova.network.neutron [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "address": "fa:16:3e:65:98:6b", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.180", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfdcb59f4-9f", "ovs_interfaceid": "fdcb59f4-9f89-4147-941b-a28bfa0621bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.472 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-eb8777f3-5daa-49c7-8994-687012f20453" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.487 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846512.486061, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.487 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Started (Lifecycle Event)#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.507 2 DEBUG oslo_concurrency.lockutils [None req-ca83cce2-8220-452d-ad1e-0239519dae4a eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-eb8777f3-5daa-49c7-8994-687012f20453-9a533309-4d4d-4458-9a27-3fe85361ab15" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 9.339s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.510 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.515 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846512.4861999, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.516 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.532 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.538 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.546 2 DEBUG nova.network.neutron [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Updating instance_info_cache with network_info: [{"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.560 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.566 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Releasing lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.567 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Instance network_info: |[{"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.568 2 DEBUG oslo_concurrency.lockutils [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.569 2 DEBUG nova.network.neutron [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Refreshing network info cache for port 9999835e-253e-4f1f-82c7-59a30f3e1537 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.575 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Start _get_guest_xml network_info=[{"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.582 2 WARNING nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.591 2 DEBUG nova.virt.libvirt.host [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.592 2 DEBUG nova.virt.libvirt.host [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.596 2 DEBUG nova.virt.libvirt.host [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.596 2 DEBUG nova.virt.libvirt.host [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.597 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.597 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.597 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.598 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.598 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.598 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.598 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.599 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.599 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.599 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.599 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.599 2 DEBUG nova.virt.hardware [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:15:12 np0005473739 nova_compute[259550]: 2025-10-07 14:15:12.604 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1848191264' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.092 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.117 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.122 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.320 2 DEBUG nova.compute.manager [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.321 2 DEBUG oslo_concurrency.lockutils [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.321 2 DEBUG oslo_concurrency.lockutils [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.321 2 DEBUG oslo_concurrency.lockutils [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.322 2 DEBUG nova.compute.manager [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Processing event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.322 2 DEBUG nova.compute.manager [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-deleted-fdcb59f4-9f89-4147-941b-a28bfa0621bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.322 2 INFO nova.compute.manager [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Neutron deleted interface fdcb59f4-9f89-4147-941b-a28bfa0621bf; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.322 2 DEBUG nova.network.neutron [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [{"id": "3e6d7010-f744-42e8-b831-8a1955357b14", "address": "fa:16:3e:04:02:a9", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3e6d7010-f7", "ovs_interfaceid": "3e6d7010-f744-42e8-b831-8a1955357b14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "be1f37db-0265-418f-bc5a-36bd71615d14", "address": "fa:16:3e:4d:b3:7c", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbe1f37db-02", "ovs_interfaceid": "be1f37db-0265-418f-bc5a-36bd71615d14", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.328 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.349 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846513.348338, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.349 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.354 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.358 2 DEBUG nova.compute.manager [req-cc8da2e3-71c4-472d-bf5a-f5f8d072a150 req-37b48678-812c-4c00-8123-70a84b77296b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Detach interface failed, port_id=fdcb59f4-9f89-4147-941b-a28bfa0621bf, reason: Instance eb8777f3-5daa-49c7-8994-687012f20453 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.361 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance spawned successfully.#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.362 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.371 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.375 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.394 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.394 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.395 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.395 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.395 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.396 2 DEBUG nova.virt.libvirt.driver [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.406 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.468 2 INFO nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Took 11.30 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.469 2 DEBUG nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.542 2 INFO nova.compute.manager [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Took 12.83 seconds to build instance.#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.565 2 DEBUG oslo_concurrency.lockutils [None req-15cf6191-a4db-4fa7-9425-0ed373873ac1 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1163290645' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.610 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.612 2 DEBUG nova.virt.libvirt.vif [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1063149174',display_name='tempest-InstanceActionsNegativeTestJSON-server-1063149174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1063149174',id=54,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a497c44829943d787416adb835d66e5',ramdisk_id='',reservation_id='r-p0tay0if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-243150513',owner_user
_name='tempest-InstanceActionsNegativeTestJSON-243150513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:04Z,user_data=None,user_id='b202876689054b5ebeef4c4648b455bf',uuid=52aed8a1-32e4-4242-881e-1b40f79f09e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.613 2 DEBUG nova.network.os_vif_util [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converting VIF {"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.614 2 DEBUG nova.network.os_vif_util [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.616 2 DEBUG nova.objects.instance [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lazy-loading 'pci_devices' on Instance uuid 52aed8a1-32e4-4242-881e-1b40f79f09e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.638 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  <uuid>52aed8a1-32e4-4242-881e-1b40f79f09e1</uuid>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  <name>instance-00000036</name>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <nova:name>tempest-InstanceActionsNegativeTestJSON-server-1063149174</nova:name>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:15:12</nova:creationTime>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <nova:user uuid="b202876689054b5ebeef4c4648b455bf">tempest-InstanceActionsNegativeTestJSON-243150513-project-member</nova:user>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <nova:project uuid="0a497c44829943d787416adb835d66e5">tempest-InstanceActionsNegativeTestJSON-243150513</nova:project>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <nova:port uuid="9999835e-253e-4f1f-82c7-59a30f3e1537">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <entry name="serial">52aed8a1-32e4-4242-881e-1b40f79f09e1</entry>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <entry name="uuid">52aed8a1-32e4-4242-881e-1b40f79f09e1</entry>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/52aed8a1-32e4-4242-881e-1b40f79f09e1_disk">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:fb:32:36"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <target dev="tap9999835e-25"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/console.log" append="off"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:15:13 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:13 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:13 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:13 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.639 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Preparing to wait for external event network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.639 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.639 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.640 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.640 2 DEBUG nova.virt.libvirt.vif [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1063149174',display_name='tempest-InstanceActionsNegativeTestJSON-server-1063149174',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1063149174',id=54,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0a497c44829943d787416adb835d66e5',ramdisk_id='',reservation_id='r-p0tay0if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-InstanceActionsNegativeTestJSON-243150513',
owner_user_name='tempest-InstanceActionsNegativeTestJSON-243150513-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:04Z,user_data=None,user_id='b202876689054b5ebeef4c4648b455bf',uuid=52aed8a1-32e4-4242-881e-1b40f79f09e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.641 2 DEBUG nova.network.os_vif_util [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converting VIF {"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.641 2 DEBUG nova.network.os_vif_util [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.642 2 DEBUG os_vif [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.643 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9999835e-25, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.646 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9999835e-25, col_values=(('external_ids', {'iface-id': '9999835e-253e-4f1f-82c7-59a30f3e1537', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fb:32:36', 'vm-uuid': '52aed8a1-32e4-4242-881e-1b40f79f09e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:13 np0005473739 NetworkManager[44949]: <info>  [1759846513.6486] manager: (tap9999835e-25): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/213)
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.655 2 INFO os_vif [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25')#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.673 2 DEBUG nova.network.neutron [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.696 2 INFO nova.compute.manager [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Took 4.92 seconds to deallocate network for instance.#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1519: 305 pgs: 305 active+clean; 213 MiB data, 550 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 82 op/s
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.751 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.751 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.751 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] No VIF found with MAC fa:16:3e:fb:32:36, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.751 2 INFO nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Using config drive#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.771 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.808 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.808 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:13 np0005473739 nova_compute[259550]: 2025-10-07 14:15:13.934 2 DEBUG oslo_concurrency.processutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:13.957 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.405 2 INFO nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Creating config drive at /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.412 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ffkaf9_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3048651713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.466 2 DEBUG oslo_concurrency.processutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.473 2 DEBUG nova.compute.provider_tree [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.567 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_ffkaf9_" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.612 2 DEBUG nova.storage.rbd_utils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] rbd image 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.617 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.662 2 DEBUG nova.scheduler.client.report [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.720 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.796 2 INFO nova.scheduler.client.report [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Deleted allocations for instance eb8777f3-5daa-49c7-8994-687012f20453#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.797 2 DEBUG oslo_concurrency.processutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config 52aed8a1-32e4-4242-881e-1b40f79f09e1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.798 2 INFO nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Deleting local config drive /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1/disk.config because it was imported into RBD.#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.858 2 DEBUG nova.network.neutron [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Updated VIF entry in instance network info cache for port 9999835e-253e-4f1f-82c7-59a30f3e1537. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.859 2 DEBUG nova.network.neutron [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Updating instance_info_cache with network_info: [{"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:14 np0005473739 NetworkManager[44949]: <info>  [1759846514.8685] manager: (tap9999835e-25): new Tun device (/org/freedesktop/NetworkManager/Devices/214)
Oct  7 10:15:14 np0005473739 kernel: tap9999835e-25: entered promiscuous mode
Oct  7 10:15:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:14Z|00454|binding|INFO|Claiming lport 9999835e-253e-4f1f-82c7-59a30f3e1537 for this chassis.
Oct  7 10:15:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:14Z|00455|binding|INFO|9999835e-253e-4f1f-82c7-59a30f3e1537: Claiming fa:16:3e:fb:32:36 10.100.0.12
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:14 np0005473739 systemd-machined[214580]: New machine qemu-62-instance-00000036.
Oct  7 10:15:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:14Z|00456|binding|INFO|Setting lport 9999835e-253e-4f1f-82c7-59a30f3e1537 ovn-installed in OVS
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:14 np0005473739 nova_compute[259550]: 2025-10-07 14:15:14.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:14 np0005473739 systemd[1]: Started Virtual Machine qemu-62-instance-00000036.
Oct  7 10:15:14 np0005473739 systemd-udevd[318751]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:15:14 np0005473739 NetworkManager[44949]: <info>  [1759846514.9525] device (tap9999835e-25): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:14 np0005473739 NetworkManager[44949]: <info>  [1759846514.9536] device (tap9999835e-25): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:15Z|00457|binding|INFO|Setting lport 9999835e-253e-4f1f-82c7-59a30f3e1537 up in Southbound
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.075 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:32:36 10.100.0.12'], port_security=['fa:16:3e:fb:32:36 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '52aed8a1-32e4-4242-881e-1b40f79f09e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a497c44829943d787416adb835d66e5', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ad3edd26-6a78-4069-a120-e0c484f5035e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d008834-8f4c-4a69-8755-e7ba45e692b8, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9999835e-253e-4f1f-82c7-59a30f3e1537) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.078 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9999835e-253e-4f1f-82c7-59a30f3e1537 in datapath eff78bf7-14f6-4b50-9495-e8815b9b3aa7 bound to our chassis#033[00m
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.079 2 DEBUG oslo_concurrency.lockutils [req-1347914c-91b0-4399-adcd-7c2107d6959d req-c85987aa-6719-4ab9-9a6e-2d9bbf5ef55a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-52aed8a1-32e4-4242-881e-1b40f79f09e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.082 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network eff78bf7-14f6-4b50-9495-e8815b9b3aa7#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.101 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a62490f-aa24-4cb1-89a1-82f43eab984e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.102 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapeff78bf7-11 in ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.105 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapeff78bf7-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.106 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[180c98ec-f84b-40c9-9c41-b791edfd02a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.107 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cb79b7b4-cb9c-4525-be10-58cc2273ada3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.125 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fd7ac55b-6efa-4bc6-8828-a435aad17c62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.144 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[15234c32-5885-4deb-b575-b7b57b340fbe]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.184 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1e92cb7a-ca98-453f-a336-84d870357c73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 NetworkManager[44949]: <info>  [1759846515.1979] manager: (tapeff78bf7-10): new Veth device (/org/freedesktop/NetworkManager/Devices/215)
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.196 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4ebc0ee-7ea6-4bb1-8848-1819e7135275]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 systemd-udevd[318754]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.214 2 DEBUG oslo_concurrency.lockutils [None req-d159a7e6-7ad2-4545-be4c-094a0fff0cb0 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "eb8777f3-5daa-49c7-8994-687012f20453" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.242 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[83f4324d-df5b-47de-acf1-8069fd4102a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.245 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[52f7273e-a095-4f2b-8787-108cf274f5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 NetworkManager[44949]: <info>  [1759846515.2748] device (tapeff78bf7-10): carrier: link connected
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.282 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1960ce76-cfe7-4cdd-9cec-21512e0a5311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.314 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[295a8cc7-9d89-437d-bf55-26ee3bbbb754]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeff78bf7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:73:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709883, 'reachable_time': 38864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 318786, 'error': None, 'target': 'ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.337 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5aacd3d9-7a9e-4072-8ddd-4c6b0a076b01]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec0:738a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 709883, 'tstamp': 709883}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 318787, 'error': None, 'target': 'ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.376 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a3b29f4d-eaca-4a3a-871d-84f2c8652d58]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapeff78bf7-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c0:73:8a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 141], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709883, 'reachable_time': 38864, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 318788, 'error': None, 'target': 'ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.423 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5124ac-4cf5-4675-9551-141a898986dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.526 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b793aa8-fa1f-44ab-a4e7-2af5cd1fcb9f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeff78bf7-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapeff78bf7-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:15 np0005473739 NetworkManager[44949]: <info>  [1759846515.5325] manager: (tapeff78bf7-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/216)
Oct  7 10:15:15 np0005473739 kernel: tapeff78bf7-10: entered promiscuous mode
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapeff78bf7-10, col_values=(('external_ids', {'iface-id': 'cc7c289d-e80e-4365-a774-6c85c9835f56'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:15Z|00458|binding|INFO|Releasing lport cc7c289d-e80e-4365-a774-6c85c9835f56 from this chassis (sb_readonly=0)
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.561 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/eff78bf7-14f6-4b50-9495-e8815b9b3aa7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/eff78bf7-14f6-4b50-9495-e8815b9b3aa7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.562 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[80f717ea-bdb4-48fe-8db0-0f760a0e1a4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.562 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-eff78bf7-14f6-4b50-9495-e8815b9b3aa7
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/eff78bf7-14f6-4b50-9495-e8815b9b3aa7.pid.haproxy
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID eff78bf7-14f6-4b50-9495-e8815b9b3aa7
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:15:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:15.563 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'env', 'PROCESS_TAG=haproxy-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/eff78bf7-14f6-4b50-9495-e8815b9b3aa7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.570 2 DEBUG nova.compute.manager [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.571 2 DEBUG oslo_concurrency.lockutils [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.571 2 DEBUG oslo_concurrency.lockutils [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.571 2 DEBUG oslo_concurrency.lockutils [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.572 2 DEBUG nova.compute.manager [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.572 2 WARNING nova.compute.manager [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:15:15 np0005473739 nova_compute[259550]: 2025-10-07 14:15:15.572 2 DEBUG nova.compute.manager [req-de110d3d-4895-451d-9d2d-7ff800da459c req-0e9d693f-604d-4682-96db-289b734c16ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Received event network-vif-deleted-3e6d7010-f744-42e8-b831-8a1955357b14 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1520: 305 pgs: 305 active+clean; 213 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Oct  7 10:15:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:15 np0005473739 podman[318862]: 2025-10-07 14:15:15.986911199 +0000 UTC m=+0.065083280 container create 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  7 10:15:16 np0005473739 podman[318862]: 2025-10-07 14:15:15.948071224 +0000 UTC m=+0.026243315 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:15:16 np0005473739 systemd[1]: Started libpod-conmon-1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3.scope.
Oct  7 10:15:16 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:15:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f008917257b7e03664568af339d089295b0525495a36406dab66623e6accadcf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.093 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846516.091174, 52aed8a1-32e4-4242-881e-1b40f79f09e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.094 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] VM Started (Lifecycle Event)#033[00m
Oct  7 10:15:16 np0005473739 podman[318862]: 2025-10-07 14:15:16.104678211 +0000 UTC m=+0.182850282 container init 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 10:15:16 np0005473739 podman[318862]: 2025-10-07 14:15:16.110831233 +0000 UTC m=+0.189003304 container start 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 10:15:16 np0005473739 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [NOTICE]   (318881) : New worker (318883) forked
Oct  7 10:15:16 np0005473739 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [NOTICE]   (318881) : Loading success.
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.150 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.155 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846516.0919826, 52aed8a1-32e4-4242-881e-1b40f79f09e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.155 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.176 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.180 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.216 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.810 2 DEBUG oslo_concurrency.lockutils [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.811 2 DEBUG oslo_concurrency.lockutils [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.811 2 DEBUG nova.compute.manager [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.815 2 DEBUG nova.compute.manager [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.816 2 DEBUG nova.objects.instance [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'flavor' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:16 np0005473739 nova_compute[259550]: 2025-10-07 14:15:16.848 2 DEBUG nova.virt.libvirt.driver [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:17 np0005473739 podman[319064]: 2025-10-07 14:15:17.420629614 +0000 UTC m=+0.077725594 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct  7 10:15:17 np0005473739 podman[319064]: 2025-10-07 14:15:17.534237936 +0000 UTC m=+0.191333946 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.724 2 DEBUG nova.compute.manager [req-dbb01178-22bd-4ddc-99d0-18223b0262fb req-16087506-0b11-458d-bfdb-26f862fec630 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Received event network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.724 2 DEBUG oslo_concurrency.lockutils [req-dbb01178-22bd-4ddc-99d0-18223b0262fb req-16087506-0b11-458d-bfdb-26f862fec630 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.724 2 DEBUG oslo_concurrency.lockutils [req-dbb01178-22bd-4ddc-99d0-18223b0262fb req-16087506-0b11-458d-bfdb-26f862fec630 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.725 2 DEBUG oslo_concurrency.lockutils [req-dbb01178-22bd-4ddc-99d0-18223b0262fb req-16087506-0b11-458d-bfdb-26f862fec630 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.725 2 DEBUG nova.compute.manager [req-dbb01178-22bd-4ddc-99d0-18223b0262fb req-16087506-0b11-458d-bfdb-26f862fec630 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Processing event network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.726 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.730 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846517.7297144, 52aed8a1-32e4-4242-881e-1b40f79f09e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.730 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.732 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.736 2 INFO nova.virt.libvirt.driver [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Instance spawned successfully.#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.736 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:15:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1521: 305 pgs: 305 active+clean; 213 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 2.7 MiB/s wr, 123 op/s
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.916 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.924 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.924 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.925 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.925 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.926 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.926 2 DEBUG nova.virt.libvirt.driver [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:17 np0005473739 nova_compute[259550]: 2025-10-07 14:15:17.932 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:18 np0005473739 nova_compute[259550]: 2025-10-07 14:15:18.182 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:15:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:15:18 np0005473739 nova_compute[259550]: 2025-10-07 14:15:18.343 2 INFO nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Took 14.03 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:15:18 np0005473739 nova_compute[259550]: 2025-10-07 14:15:18.343 2 DEBUG nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:15:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:15:18 np0005473739 nova_compute[259550]: 2025-10-07 14:15:18.505 2 INFO nova.compute.manager [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Took 15.10 seconds to build instance.#033[00m
Oct  7 10:15:18 np0005473739 nova_compute[259550]: 2025-10-07 14:15:18.627 2 DEBUG oslo_concurrency.lockutils [None req-8e1aa273-3ae6-4d84-b569-8c5400b268c7 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.306s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:18 np0005473739 nova_compute[259550]: 2025-10-07 14:15:18.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:15:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:15:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f76aaa61-c543-4163-9875-6a2685f3023a does not exist
Oct  7 10:15:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3c12dd02-6d54-4e96-9583-532d2faf7a76 does not exist
Oct  7 10:15:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e436ff3-ec4c-4f08-a05a-718131bbe410 does not exist
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:15:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1522: 305 pgs: 305 active+clean; 214 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 908 KiB/s wr, 148 op/s
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:15:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:15:19 np0005473739 podman[319490]: 2025-10-07 14:15:19.872144905 +0000 UTC m=+0.042531444 container create 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:15:19 np0005473739 systemd[1]: Started libpod-conmon-941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7.scope.
Oct  7 10:15:19 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:15:19 np0005473739 podman[319490]: 2025-10-07 14:15:19.852683681 +0000 UTC m=+0.023070240 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:15:19 np0005473739 podman[319490]: 2025-10-07 14:15:19.962385639 +0000 UTC m=+0.132772208 container init 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:15:19 np0005473739 podman[319490]: 2025-10-07 14:15:19.974744586 +0000 UTC m=+0.145131125 container start 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 10:15:19 np0005473739 podman[319490]: 2025-10-07 14:15:19.979181153 +0000 UTC m=+0.149567742 container attach 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:15:19 np0005473739 gallant_cannon[319506]: 167 167
Oct  7 10:15:19 np0005473739 systemd[1]: libpod-941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7.scope: Deactivated successfully.
Oct  7 10:15:19 np0005473739 conmon[319506]: conmon 941401266a500f45164a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7.scope/container/memory.events
Oct  7 10:15:19 np0005473739 podman[319490]: 2025-10-07 14:15:19.985079659 +0000 UTC m=+0.155466198 container died 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 10:15:20 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7cab6bb67954298f054bf1df89fbe4713ba2a1385c24c37e9c1bc6a3f7048888-merged.mount: Deactivated successfully.
Oct  7 10:15:20 np0005473739 podman[319490]: 2025-10-07 14:15:20.028865435 +0000 UTC m=+0.199251974 container remove 941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_cannon, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 10:15:20 np0005473739 systemd[1]: libpod-conmon-941401266a500f45164a86b892b6501a36e9ec257779ba8e0a8900fd0def4dc7.scope: Deactivated successfully.
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.096 2 DEBUG nova.compute.manager [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Received event network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.098 2 DEBUG oslo_concurrency.lockutils [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.098 2 DEBUG oslo_concurrency.lockutils [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.098 2 DEBUG oslo_concurrency.lockutils [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.098 2 DEBUG nova.compute.manager [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] No waiting events found dispatching network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.098 2 WARNING nova.compute.manager [req-3561270f-16a4-4e60-bf86-97915450e551 req-63127477-3283-40ab-95c8-143cf092e3dd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Received unexpected event network-vif-plugged-9999835e-253e-4f1f-82c7-59a30f3e1537 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:15:20 np0005473739 podman[319530]: 2025-10-07 14:15:20.218200957 +0000 UTC m=+0.051578903 container create 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 10:15:20 np0005473739 systemd[1]: Started libpod-conmon-95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672.scope.
Oct  7 10:15:20 np0005473739 podman[319530]: 2025-10-07 14:15:20.19633432 +0000 UTC m=+0.029712256 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:15:20 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:15:20 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:20 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:20 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:20 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:20 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:20 np0005473739 podman[319530]: 2025-10-07 14:15:20.346712752 +0000 UTC m=+0.180090688 container init 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:15:20 np0005473739 podman[319530]: 2025-10-07 14:15:20.359175521 +0000 UTC m=+0.192553437 container start 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:15:20 np0005473739 podman[319530]: 2025-10-07 14:15:20.362847598 +0000 UTC m=+0.196225534 container attach 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.442 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.444 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.445 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.445 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.446 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.447 2 INFO nova.compute.manager [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Terminating instance#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.448 2 DEBUG nova.compute.manager [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:15:20 np0005473739 kernel: tap9999835e-25 (unregistering): left promiscuous mode
Oct  7 10:15:20 np0005473739 NetworkManager[44949]: <info>  [1759846520.4970] device (tap9999835e-25): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:15:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:20Z|00459|binding|INFO|Releasing lport 9999835e-253e-4f1f-82c7-59a30f3e1537 from this chassis (sb_readonly=0)
Oct  7 10:15:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:20Z|00460|binding|INFO|Setting lport 9999835e-253e-4f1f-82c7-59a30f3e1537 down in Southbound
Oct  7 10:15:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:20Z|00461|binding|INFO|Removing iface tap9999835e-25 ovn-installed in OVS
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:20 np0005473739 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Deactivated successfully.
Oct  7 10:15:20 np0005473739 systemd[1]: machine-qemu\x2d62\x2dinstance\x2d00000036.scope: Consumed 3.779s CPU time.
Oct  7 10:15:20 np0005473739 systemd-machined[214580]: Machine qemu-62-instance-00000036 terminated.
Oct  7 10:15:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:20.668 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fb:32:36 10.100.0.12'], port_security=['fa:16:3e:fb:32:36 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '52aed8a1-32e4-4242-881e-1b40f79f09e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0a497c44829943d787416adb835d66e5', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ad3edd26-6a78-4069-a120-e0c484f5035e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4d008834-8f4c-4a69-8755-e7ba45e692b8, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9999835e-253e-4f1f-82c7-59a30f3e1537) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:20.670 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9999835e-253e-4f1f-82c7-59a30f3e1537 in datapath eff78bf7-14f6-4b50-9495-e8815b9b3aa7 unbound from our chassis#033[00m
Oct  7 10:15:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:20.672 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network eff78bf7-14f6-4b50-9495-e8815b9b3aa7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:15:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:20.674 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ceedcbdf-ecb2-4214-a5d9-ebc998dfcf13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:20.674 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7 namespace which is not needed anymore#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.703 2 INFO nova.virt.libvirt.driver [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Instance destroyed successfully.#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.705 2 DEBUG nova.objects.instance [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lazy-loading 'resources' on Instance uuid 52aed8a1-32e4-4242-881e-1b40f79f09e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.830 2 DEBUG nova.virt.libvirt.vif [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-InstanceActionsNegativeTestJSON-server-1063149174',display_name='tempest-InstanceActionsNegativeTestJSON-server-1063149174',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-instanceactionsnegativetestjson-server-1063149174',id=54,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0a497c44829943d787416adb835d66e5',ramdisk_id='',reservation_id='r-p0tay0if',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_
vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-InstanceActionsNegativeTestJSON-243150513',owner_user_name='tempest-InstanceActionsNegativeTestJSON-243150513-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:18Z,user_data=None,user_id='b202876689054b5ebeef4c4648b455bf',uuid=52aed8a1-32e4-4242-881e-1b40f79f09e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.832 2 DEBUG nova.network.os_vif_util [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converting VIF {"id": "9999835e-253e-4f1f-82c7-59a30f3e1537", "address": "fa:16:3e:fb:32:36", "network": {"id": "eff78bf7-14f6-4b50-9495-e8815b9b3aa7", "bridge": "br-int", "label": "tempest-InstanceActionsNegativeTestJSON-1865655748-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0a497c44829943d787416adb835d66e5", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9999835e-25", "ovs_interfaceid": "9999835e-253e-4f1f-82c7-59a30f3e1537", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.833 2 DEBUG nova.network.os_vif_util [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.833 2 DEBUG os_vif [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:20 np0005473739 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [NOTICE]   (318881) : haproxy version is 2.8.14-c23fe91
Oct  7 10:15:20 np0005473739 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [NOTICE]   (318881) : path to executable is /usr/sbin/haproxy
Oct  7 10:15:20 np0005473739 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [WARNING]  (318881) : Exiting Master process...
Oct  7 10:15:20 np0005473739 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [WARNING]  (318881) : Exiting Master process...
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.835 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9999835e-25, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:20 np0005473739 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [ALERT]    (318881) : Current worker (318883) exited with code 143 (Terminated)
Oct  7 10:15:20 np0005473739 neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7[318877]: [WARNING]  (318881) : All workers exited. Exiting... (0)
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:20 np0005473739 systemd[1]: libpod-1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3.scope: Deactivated successfully.
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:20 np0005473739 nova_compute[259550]: 2025-10-07 14:15:20.843 2 INFO os_vif [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:32:36,bridge_name='br-int',has_traffic_filtering=True,id=9999835e-253e-4f1f-82c7-59a30f3e1537,network=Network(eff78bf7-14f6-4b50-9495-e8815b9b3aa7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9999835e-25')#033[00m
Oct  7 10:15:20 np0005473739 podman[319584]: 2025-10-07 14:15:20.849122114 +0000 UTC m=+0.053131275 container died 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:15:20 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3-userdata-shm.mount: Deactivated successfully.
Oct  7 10:15:20 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f008917257b7e03664568af339d089295b0525495a36406dab66623e6accadcf-merged.mount: Deactivated successfully.
Oct  7 10:15:20 np0005473739 podman[319584]: 2025-10-07 14:15:20.928805799 +0000 UTC m=+0.132814970 container cleanup 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 10:15:20 np0005473739 systemd[1]: libpod-conmon-1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3.scope: Deactivated successfully.
Oct  7 10:15:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:21 np0005473739 podman[319628]: 2025-10-07 14:15:21.022379211 +0000 UTC m=+0.062059820 container remove 1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 10:15:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.029 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b655a60a-dde8-487a-9c21-6853d91511a6]: (4, ('Tue Oct  7 02:15:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7 (1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3)\n1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3\nTue Oct  7 02:15:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7 (1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3)\n1cebc42406fe04dee9a0e46b2c6553a63e0f0b47f192ab711d82d9f83d1f7ca3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.032 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2781824-262e-42b3-bd45-debb7e0d477e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.033 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapeff78bf7-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:21 np0005473739 nova_compute[259550]: 2025-10-07 14:15:21.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:21 np0005473739 kernel: tapeff78bf7-10: left promiscuous mode
Oct  7 10:15:21 np0005473739 nova_compute[259550]: 2025-10-07 14:15:21.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.059 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a44b814f-6c54-43ae-9a66-a7d07f21ad15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.091 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f28a5e2-a67d-4f08-81c7-3e2ce270ab3e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.093 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a5254de-aa45-48cd-b17f-cf805e8493df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.120 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5751d15a-a85b-4090-ae44-776d06fad0be]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 709873, 'reachable_time': 19112, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 319643, 'error': None, 'target': 'ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:21 np0005473739 systemd[1]: run-netns-ovnmeta\x2deff78bf7\x2d14f6\x2d4b50\x2d9495\x2de8815b9b3aa7.mount: Deactivated successfully.
Oct  7 10:15:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.124 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-eff78bf7-14f6-4b50-9495-e8815b9b3aa7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:15:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:21.124 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[bb24bc13-6568-4845-b313-48ecfad16940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:21 np0005473739 nova_compute[259550]: 2025-10-07 14:15:21.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:21 np0005473739 nova_compute[259550]: 2025-10-07 14:15:21.326 2 INFO nova.virt.libvirt.driver [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Deleting instance files /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1_del#033[00m
Oct  7 10:15:21 np0005473739 nova_compute[259550]: 2025-10-07 14:15:21.328 2 INFO nova.virt.libvirt.driver [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Deletion of /var/lib/nova/instances/52aed8a1-32e4-4242-881e-1b40f79f09e1_del complete#033[00m
Oct  7 10:15:21 np0005473739 focused_mcnulty[319546]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:15:21 np0005473739 focused_mcnulty[319546]: --> relative data size: 1.0
Oct  7 10:15:21 np0005473739 focused_mcnulty[319546]: --> All data devices are unavailable
Oct  7 10:15:21 np0005473739 systemd[1]: libpod-95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672.scope: Deactivated successfully.
Oct  7 10:15:21 np0005473739 systemd[1]: libpod-95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672.scope: Consumed 1.087s CPU time.
Oct  7 10:15:21 np0005473739 podman[319530]: 2025-10-07 14:15:21.56440927 +0000 UTC m=+1.397787186 container died 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:15:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-dc1f300a7802b9347d61ae95cb2a299ac6328759152a468dae7b629e09b0a4ac-merged.mount: Deactivated successfully.
Oct  7 10:15:21 np0005473739 podman[319530]: 2025-10-07 14:15:21.639863893 +0000 UTC m=+1.473241829 container remove 95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_mcnulty, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:15:21 np0005473739 systemd[1]: libpod-conmon-95287c1459df3342c7438ed5422df9772742e8c57f863c15242de0cdc1f9f672.scope: Deactivated successfully.
Oct  7 10:15:21 np0005473739 nova_compute[259550]: 2025-10-07 14:15:21.682 2 INFO nova.compute.manager [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Took 1.23 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:15:21 np0005473739 nova_compute[259550]: 2025-10-07 14:15:21.684 2 DEBUG oslo.service.loopingcall [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:15:21 np0005473739 nova_compute[259550]: 2025-10-07 14:15:21.685 2 DEBUG nova.compute.manager [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:15:21 np0005473739 nova_compute[259550]: 2025-10-07 14:15:21.686 2 DEBUG nova.network.neutron [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:15:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1523: 305 pgs: 305 active+clean; 214 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 34 KiB/s wr, 138 op/s
Oct  7 10:15:22 np0005473739 podman[319820]: 2025-10-07 14:15:22.335558531 +0000 UTC m=+0.050758911 container create a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 10:15:22 np0005473739 systemd[1]: Started libpod-conmon-a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c.scope.
Oct  7 10:15:22 np0005473739 podman[319820]: 2025-10-07 14:15:22.314031642 +0000 UTC m=+0.029232032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:15:22 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:15:22 np0005473739 podman[319820]: 2025-10-07 14:15:22.427704376 +0000 UTC m=+0.142904776 container init a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 10:15:22 np0005473739 podman[319820]: 2025-10-07 14:15:22.443372949 +0000 UTC m=+0.158573329 container start a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 10:15:22 np0005473739 podman[319820]: 2025-10-07 14:15:22.447332124 +0000 UTC m=+0.162532524 container attach a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:15:22 np0005473739 pedantic_mccarthy[319836]: 167 167
Oct  7 10:15:22 np0005473739 systemd[1]: libpod-a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c.scope: Deactivated successfully.
Oct  7 10:15:22 np0005473739 podman[319820]: 2025-10-07 14:15:22.452266784 +0000 UTC m=+0.167467164 container died a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 10:15:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-663394db2c2f07de4bf0e2f63ea5d949b92c7e78e8040ed449a88ad69e71013e-merged.mount: Deactivated successfully.
Oct  7 10:15:22 np0005473739 podman[319820]: 2025-10-07 14:15:22.495590888 +0000 UTC m=+0.210791268 container remove a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_mccarthy, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:15:22 np0005473739 systemd[1]: libpod-conmon-a46836d9d94fa6c86737618f3ffff4678a971fde418a3d5f52a4116c35ca684c.scope: Deactivated successfully.
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:15:22
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'default.rgw.log', 'vms', 'images', 'volumes', 'default.rgw.meta', '.mgr', 'backups', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'default.rgw.control']
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:15:22 np0005473739 podman[319860]: 2025-10-07 14:15:22.691311289 +0000 UTC m=+0.053889555 container create b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:15:22 np0005473739 systemd[1]: Started libpod-conmon-b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9.scope.
Oct  7 10:15:22 np0005473739 podman[319860]: 2025-10-07 14:15:22.666048192 +0000 UTC m=+0.028626468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:15:22 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:15:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1d732419bc1b80779c3b856588668093e9e4d9fc4329bc7e124e01cf5efac2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1d732419bc1b80779c3b856588668093e9e4d9fc4329bc7e124e01cf5efac2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1d732419bc1b80779c3b856588668093e9e4d9fc4329bc7e124e01cf5efac2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d1d732419bc1b80779c3b856588668093e9e4d9fc4329bc7e124e01cf5efac2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:22 np0005473739 podman[319860]: 2025-10-07 14:15:22.784398898 +0000 UTC m=+0.146977174 container init b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 10:15:22 np0005473739 podman[319860]: 2025-10-07 14:15:22.792587174 +0000 UTC m=+0.155165430 container start b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 10:15:22 np0005473739 podman[319860]: 2025-10-07 14:15:22.796145489 +0000 UTC m=+0.158723765 container attach b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:15:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:15:23 np0005473739 nova_compute[259550]: 2025-10-07 14:15:23.099 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846508.098485, eb8777f3-5daa-49c7-8994-687012f20453 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:23 np0005473739 nova_compute[259550]: 2025-10-07 14:15:23.102 2 INFO nova.compute.manager [-] [instance: eb8777f3-5daa-49c7-8994-687012f20453] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:15:23 np0005473739 nova_compute[259550]: 2025-10-07 14:15:23.225 2 DEBUG nova.compute.manager [None req-1243766d-f832-439d-b8f3-ef56befec3e2 - - - - - -] [instance: eb8777f3-5daa-49c7-8994-687012f20453] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]: {
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:    "0": [
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:        {
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "devices": [
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "/dev/loop3"
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            ],
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_name": "ceph_lv0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_size": "21470642176",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "name": "ceph_lv0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "tags": {
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.cluster_name": "ceph",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.crush_device_class": "",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.encrypted": "0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.osd_id": "0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.type": "block",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.vdo": "0"
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            },
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "type": "block",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "vg_name": "ceph_vg0"
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:        }
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:    ],
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:    "1": [
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:        {
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "devices": [
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "/dev/loop4"
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            ],
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_name": "ceph_lv1",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_size": "21470642176",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "name": "ceph_lv1",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "tags": {
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.cluster_name": "ceph",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.crush_device_class": "",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.encrypted": "0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.osd_id": "1",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.type": "block",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.vdo": "0"
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            },
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "type": "block",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "vg_name": "ceph_vg1"
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:        }
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:    ],
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:    "2": [
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:        {
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "devices": [
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "/dev/loop5"
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            ],
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_name": "ceph_lv2",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_size": "21470642176",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "name": "ceph_lv2",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "tags": {
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.cluster_name": "ceph",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.crush_device_class": "",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.encrypted": "0",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.osd_id": "2",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.type": "block",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:                "ceph.vdo": "0"
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            },
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "type": "block",
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:            "vg_name": "ceph_vg2"
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:        }
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]:    ]
Oct  7 10:15:23 np0005473739 nifty_satoshi[319877]: }
Oct  7 10:15:23 np0005473739 systemd[1]: libpod-b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9.scope: Deactivated successfully.
Oct  7 10:15:23 np0005473739 podman[319860]: 2025-10-07 14:15:23.696540774 +0000 UTC m=+1.059119030 container died b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:15:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6d1d732419bc1b80779c3b856588668093e9e4d9fc4329bc7e124e01cf5efac2-merged.mount: Deactivated successfully.
Oct  7 10:15:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1524: 305 pgs: 305 active+clean; 214 MiB data, 541 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 34 KiB/s wr, 128 op/s
Oct  7 10:15:23 np0005473739 podman[319860]: 2025-10-07 14:15:23.762098086 +0000 UTC m=+1.124676342 container remove b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_satoshi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:15:23 np0005473739 systemd[1]: libpod-conmon-b0b0c1a7212fb9f13aa559fd0d3217af7a98aba880794cf70d024f873d1090f9.scope: Deactivated successfully.
Oct  7 10:15:23 np0005473739 nova_compute[259550]: 2025-10-07 14:15:23.946 2 DEBUG nova.network.neutron [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:23 np0005473739 nova_compute[259550]: 2025-10-07 14:15:23.979 2 INFO nova.compute.manager [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Took 2.29 seconds to deallocate network for instance.#033[00m
Oct  7 10:15:24 np0005473739 nova_compute[259550]: 2025-10-07 14:15:24.029 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:24 np0005473739 nova_compute[259550]: 2025-10-07 14:15:24.030 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:24 np0005473739 nova_compute[259550]: 2025-10-07 14:15:24.070 2 DEBUG nova.compute.manager [req-75bd97a6-a110-40ee-88e7-1711f54d4afa req-01e097a1-92c1-432b-abb5-c22d4d7b1e8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Received event network-vif-deleted-9999835e-253e-4f1f-82c7-59a30f3e1537 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:24 np0005473739 nova_compute[259550]: 2025-10-07 14:15:24.147 2 DEBUG oslo_concurrency.processutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:24 np0005473739 podman[320055]: 2025-10-07 14:15:24.424159425 +0000 UTC m=+0.051618064 container create b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:15:24 np0005473739 systemd[1]: Started libpod-conmon-b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7.scope.
Oct  7 10:15:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:15:24 np0005473739 podman[320055]: 2025-10-07 14:15:24.397668876 +0000 UTC m=+0.025127535 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:15:24 np0005473739 podman[320055]: 2025-10-07 14:15:24.508433712 +0000 UTC m=+0.135892391 container init b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:15:24 np0005473739 podman[320055]: 2025-10-07 14:15:24.517826689 +0000 UTC m=+0.145285338 container start b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:15:24 np0005473739 podman[320055]: 2025-10-07 14:15:24.521784294 +0000 UTC m=+0.149242953 container attach b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct  7 10:15:24 np0005473739 sharp_dijkstra[320071]: 167 167
Oct  7 10:15:24 np0005473739 systemd[1]: libpod-b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7.scope: Deactivated successfully.
Oct  7 10:15:24 np0005473739 podman[320055]: 2025-10-07 14:15:24.526989162 +0000 UTC m=+0.154447851 container died b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:15:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-84f2e1f79e348795de54c1945eca0eea2e3e1a0c39320ad6f2bcf3c19f888b49-merged.mount: Deactivated successfully.
Oct  7 10:15:24 np0005473739 podman[320055]: 2025-10-07 14:15:24.572300038 +0000 UTC m=+0.199758677 container remove b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_dijkstra, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 10:15:24 np0005473739 systemd[1]: libpod-conmon-b118fda24da28d0eda4429e59b6dc211fa1a5ac36cbd3ce92116f7af611da3f7.scope: Deactivated successfully.
Oct  7 10:15:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1874586540' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:24 np0005473739 nova_compute[259550]: 2025-10-07 14:15:24.644 2 DEBUG oslo_concurrency.processutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:24 np0005473739 nova_compute[259550]: 2025-10-07 14:15:24.661 2 DEBUG nova.compute.provider_tree [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:15:24 np0005473739 nova_compute[259550]: 2025-10-07 14:15:24.679 2 DEBUG nova.scheduler.client.report [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:15:24 np0005473739 nova_compute[259550]: 2025-10-07 14:15:24.706 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:24 np0005473739 nova_compute[259550]: 2025-10-07 14:15:24.746 2 INFO nova.scheduler.client.report [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Deleted allocations for instance 52aed8a1-32e4-4242-881e-1b40f79f09e1#033[00m
Oct  7 10:15:24 np0005473739 podman[320098]: 2025-10-07 14:15:24.782300206 +0000 UTC m=+0.052225460 container create 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 10:15:24 np0005473739 systemd[1]: Started libpod-conmon-89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967.scope.
Oct  7 10:15:24 np0005473739 nova_compute[259550]: 2025-10-07 14:15:24.829 2 DEBUG oslo_concurrency.lockutils [None req-217d7e1b-166d-4ac9-a3ed-9eb5420523f4 b202876689054b5ebeef4c4648b455bf 0a497c44829943d787416adb835d66e5 - - default default] Lock "52aed8a1-32e4-4242-881e-1b40f79f09e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:24 np0005473739 podman[320098]: 2025-10-07 14:15:24.758689892 +0000 UTC m=+0.028615176 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:15:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:15:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b4bcf72977ecf7270b99e80d8b9eef545aafd1f2174128218dfae760c7204d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b4bcf72977ecf7270b99e80d8b9eef545aafd1f2174128218dfae760c7204d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b4bcf72977ecf7270b99e80d8b9eef545aafd1f2174128218dfae760c7204d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e6b4bcf72977ecf7270b99e80d8b9eef545aafd1f2174128218dfae760c7204d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:24 np0005473739 podman[320098]: 2025-10-07 14:15:24.890628418 +0000 UTC m=+0.160553692 container init 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:15:24 np0005473739 podman[320098]: 2025-10-07 14:15:24.89941131 +0000 UTC m=+0.169336564 container start 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:15:24 np0005473739 podman[320098]: 2025-10-07 14:15:24.910503493 +0000 UTC m=+0.180428957 container attach 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:15:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1525: 305 pgs: 305 active+clean; 191 MiB data, 531 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 35 KiB/s wr, 165 op/s
Oct  7 10:15:25 np0005473739 nova_compute[259550]: 2025-10-07 14:15:25.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:25 np0005473739 eager_hellman[320114]: {
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "osd_id": 2,
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "type": "bluestore"
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:    },
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "osd_id": 1,
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "type": "bluestore"
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:    },
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "osd_id": 0,
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:        "type": "bluestore"
Oct  7 10:15:25 np0005473739 eager_hellman[320114]:    }
Oct  7 10:15:25 np0005473739 eager_hellman[320114]: }
Oct  7 10:15:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:25 np0005473739 systemd[1]: libpod-89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967.scope: Deactivated successfully.
Oct  7 10:15:25 np0005473739 systemd[1]: libpod-89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967.scope: Consumed 1.063s CPU time.
Oct  7 10:15:25 np0005473739 conmon[320114]: conmon 89d49500d5170b20fae3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967.scope/container/memory.events
Oct  7 10:15:25 np0005473739 podman[320098]: 2025-10-07 14:15:25.984947396 +0000 UTC m=+1.254872650 container died 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:15:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e6b4bcf72977ecf7270b99e80d8b9eef545aafd1f2174128218dfae760c7204d-merged.mount: Deactivated successfully.
Oct  7 10:15:26 np0005473739 podman[320098]: 2025-10-07 14:15:26.045682551 +0000 UTC m=+1.315607805 container remove 89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:15:26 np0005473739 systemd[1]: libpod-conmon-89d49500d5170b20fae31789fce49c04f9b98a8988cac9dbf829bc1bcd2c0967.scope: Deactivated successfully.
Oct  7 10:15:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:15:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:15:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:15:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:15:26 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0d85fd2a-0794-42e0-a1a9-f4988b8a16c1 does not exist
Oct  7 10:15:26 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 6487c7c9-3441-4128-b178-4aab68a343f4 does not exist
Oct  7 10:15:26 np0005473739 nova_compute[259550]: 2025-10-07 14:15:26.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:26Z|00064|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d4:48:b2 10.100.0.5
Oct  7 10:15:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:26Z|00065|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d4:48:b2 10.100.0.5
Oct  7 10:15:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:15:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:15:26 np0005473739 nova_compute[259550]: 2025-10-07 14:15:26.912 2 DEBUG nova.virt.libvirt.driver [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:15:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1526: 305 pgs: 305 active+clean; 178 MiB data, 529 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 838 KiB/s wr, 164 op/s
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.220 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.221 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.237 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.305 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.305 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.313 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.314 2 INFO nova.compute.claims [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.458 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081888355' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.938 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.946 2 DEBUG nova.compute.provider_tree [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.966 2 DEBUG nova.scheduler.client.report [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:15:28 np0005473739 nova_compute[259550]: 2025-10-07 14:15:28.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.002 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.696s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.002 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.007 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.008 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.048 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.049 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.068 2 INFO nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.082 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:15:29 np0005473739 kernel: tap2781ab1e-ba (unregistering): left promiscuous mode
Oct  7 10:15:29 np0005473739 NetworkManager[44949]: <info>  [1759846529.1930] device (tap2781ab1e-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:15:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:29Z|00462|binding|INFO|Releasing lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 from this chassis (sb_readonly=0)
Oct  7 10:15:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:29Z|00463|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 down in Southbound
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:29Z|00464|binding|INFO|Removing iface tap2781ab1e-ba ovn-installed in OVS
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.235 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.236 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.237 2 INFO nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Creating image(s)#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.240 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:48:b2 10.100.0.5'], port_security=['fa:16:3e:d4:48:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd609a9ff-183f-496e-83cc-641ffdd2b1f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.241 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb unbound from our chassis#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.243 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.263 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[65f4c121-31fa-4316-884f-2b7611b8c0a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:29 np0005473739 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct  7 10:15:29 np0005473739 systemd[1]: machine-qemu\x2d61\x2dinstance\x2d00000035.scope: Consumed 13.953s CPU time.
Oct  7 10:15:29 np0005473739 systemd-machined[214580]: Machine qemu-61-instance-00000035 terminated.
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.303 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.302 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eb90d0e0-bcbd-4826-ba7a-b84eab67efaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.306 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0ed21cf3-1b2d-439b-8541-a3d9ae77de55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:29 np0005473739 podman[320251]: 2025-10-07 14:15:29.327443473 +0000 UTC m=+0.089748281 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  7 10:15:29 np0005473739 podman[320254]: 2025-10-07 14:15:29.328675586 +0000 UTC m=+0.093556082 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.341 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[02eebe2e-9f57-4624-b693-b8d2bc761a64]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.350 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.363 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[627a26cb-c87f-40bb-804d-0990818a0fa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 320332, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.376 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4931be7a-34b6-4be1-af4a-478c469be848]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320350, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 320350, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.382 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.383 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.390 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.390 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.390 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:29.391 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.415 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.461 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.462 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.463 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.463 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3552535257' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.488 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.491 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.521 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.621 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.621 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000035 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.628 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.628 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000032 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:15:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1527: 305 pgs: 305 active+clean; 195 MiB data, 563 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.2 MiB/s wr, 177 op/s
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.791 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.848 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] resizing rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.885 2 DEBUG nova.policy [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.956 2 DEBUG nova.objects.instance [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'migration_context' on Instance uuid 188af2a5-ff92-4f42-8bdc-5dec2f24d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.958 2 INFO nova.virt.libvirt.driver [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance shutdown successfully after 13 seconds.#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.965 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance destroyed successfully.#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.965 2 DEBUG nova.objects.instance [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'numa_topology' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.978 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.978 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Ensure instance console log exists: /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.978 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.979 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.979 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:29 np0005473739 nova_compute[259550]: 2025-10-07 14:15:29.980 2 DEBUG nova.compute.manager [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.003 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.004 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3992MB free_disk=59.91245651245117GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.005 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.033 2 DEBUG oslo_concurrency.lockutils [None req-71e58581-3bae-4f18-847c-073dca7fac41 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.070 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance fc163bed-856c-4ea5-9bf3-6989fb1027eb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.070 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.070 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.070 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.071 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.133 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.525 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Successfully created port: 62690261-dde3-43ca-929a-e6b75a76bafb _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:15:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279895366' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.632 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.639 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.656 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.694 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.695 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.863 2 DEBUG nova.compute.manager [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.863 2 DEBUG oslo_concurrency.lockutils [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.864 2 DEBUG oslo_concurrency.lockutils [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.864 2 DEBUG oslo_concurrency.lockutils [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.864 2 DEBUG nova.compute.manager [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:30 np0005473739 nova_compute[259550]: 2025-10-07 14:15:30.864 2 WARNING nova.compute.manager [req-d7d19666-93e7-414d-98bb-148c8a4ca51a req-49782467-37d0-4092-908f-a4d7cdd495f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:15:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:31Z|00465|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.185 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Successfully updated port: 62690261-dde3-43ca-929a-e6b75a76bafb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.206 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.206 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.206 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.296 2 DEBUG nova.compute.manager [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.297 2 DEBUG nova.compute.manager [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.297 2 DEBUG oslo_concurrency.lockutils [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.393 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.695 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.696 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:15:31 np0005473739 nova_compute[259550]: 2025-10-07 14:15:31.696 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:15:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1528: 305 pgs: 305 active+clean; 221 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.2 MiB/s wr, 145 op/s
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.116 2 DEBUG nova.network.neutron [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.134 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.135 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Instance network_info: |[{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.135 2 DEBUG oslo_concurrency.lockutils [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.136 2 DEBUG nova.network.neutron [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.142 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Start _get_guest_xml network_info=[{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.150 2 WARNING nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.163 2 DEBUG nova.virt.libvirt.host [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.164 2 DEBUG nova.virt.libvirt.host [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.170 2 DEBUG nova.virt.libvirt.host [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.171 2 DEBUG nova.virt.libvirt.host [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.172 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.172 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.174 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.174 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.175 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.176 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.176 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.177 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.177 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.178 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.179 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.179 2 DEBUG nova.virt.hardware [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.185 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.234 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.234 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.256 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.324 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.325 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.334 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.334 2 INFO nova.compute.claims [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0017198106930064465 of space, bias 1.0, pg target 0.5159432079019339 quantized to 32 (current 32)
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:15:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.498 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163787589' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.661 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:15:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2108271123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:15:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:15:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2108271123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.711 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.717 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:15:32 np0005473739 nova_compute[259550]: 2025-10-07 14:15:32.989 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:15:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/355181987' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.009 2 INFO nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Rebuilding instance#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.026 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.032 2 DEBUG nova.compute.provider_tree [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.036 2 DEBUG nova.compute.manager [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.036 2 DEBUG oslo_concurrency.lockutils [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.037 2 DEBUG oslo_concurrency.lockutils [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.037 2 DEBUG oslo_concurrency.lockutils [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.037 2 DEBUG nova.compute.manager [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.037 2 WARNING nova.compute.manager [req-e1f4319e-1855-4694-8cd0-0296dd5f3554 req-cd70ae82-fc97-4155-a60b-a7969f419978 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state stopped and task_state rebuilding.#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.045 2 DEBUG nova.scheduler.client.report [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.082 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.757s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.083 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.131 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.132 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:15:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/419391028' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.155 2 INFO nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.161 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.162 2 DEBUG nova.virt.libvirt.vif [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.163 2 DEBUG nova.network.os_vif_util [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.164 2 DEBUG nova.network.os_vif_util [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:aa:77,bridge_name='br-int',has_traffic_filtering=True,id=62690261-dde3-43ca-929a-e6b75a76bafb,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62690261-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.165 2 DEBUG nova.objects.instance [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_devices' on Instance uuid 188af2a5-ff92-4f42-8bdc-5dec2f24d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.171 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.176 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  <uuid>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</uuid>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  <name>instance-00000037</name>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <nova:name>tempest-tempest.common.compute-instance-229960760</nova:name>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:15:32</nova:creationTime>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <nova:port uuid="62690261-dde3-43ca-929a-e6b75a76bafb">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <entry name="serial">188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <entry name="uuid">188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:a5:aa:77"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <target dev="tap62690261-dd"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log" append="off"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:15:33 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:33 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:33 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:33 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.178 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Preparing to wait for external event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.179 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.179 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.179 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.180 2 DEBUG nova.virt.libvirt.vif [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.180 2 DEBUG nova.network.os_vif_util [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.181 2 DEBUG nova.network.os_vif_util [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a5:aa:77,bridge_name='br-int',has_traffic_filtering=True,id=62690261-dde3-43ca-929a-e6b75a76bafb,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62690261-dd') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.182 2 DEBUG os_vif [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:aa:77,bridge_name='br-int',has_traffic_filtering=True,id=62690261-dde3-43ca-929a-e6b75a76bafb,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62690261-dd') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap62690261-dd, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap62690261-dd, col_values=(('external_ids', {'iface-id': '62690261-dde3-43ca-929a-e6b75a76bafb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a5:aa:77', 'vm-uuid': '188af2a5-ff92-4f42-8bdc-5dec2f24d46a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:33 np0005473739 NetworkManager[44949]: <info>  [1759846533.1900] manager: (tap62690261-dd): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/217)
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.198 2 INFO os_vif [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a5:aa:77,bridge_name='br-int',has_traffic_filtering=True,id=62690261-dde3-43ca-929a-e6b75a76bafb,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap62690261-dd')#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.263 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.264 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.265 2 INFO nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Creating image(s)#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.290 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.317 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.343 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.348 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.400 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.401 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.402 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:a5:aa:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.403 2 INFO nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Using config drive#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.429 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.439 2 DEBUG nova.policy [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ff37c390826e43079eff2a1423ccc2b8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.443 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.444 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.445 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.445 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.474 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.479 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 83645517-a08a-46d7-b715-15b5d7f078ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.519 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'trusted_certs' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.561 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.603 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'pci_requests' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.615 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.628 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'resources' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.640 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'migration_context' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.651 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.657 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance already shutdown.
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.664 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance destroyed successfully.
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.673 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance destroyed successfully.
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.675 2 DEBUG nova.virt.libvirt.vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'
},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:32Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.676 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.677 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.678 2 DEBUG os_vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.682 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2781ab1e-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.694 2 INFO os_vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba')
Oct  7 10:15:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1529: 305 pgs: 305 active+clean; 221 MiB data, 575 MiB used, 59 GiB / 60 GiB avail; 993 KiB/s rd, 3.2 MiB/s wr, 113 op/s
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.897 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 83645517-a08a-46d7-b715-15b5d7f078ff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:15:33 np0005473739 nova_compute[259550]: 2025-10-07 14:15:33.981 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] resizing rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.036 2 DEBUG nova.network.neutron [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.037 2 DEBUG nova.network.neutron [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.055 2 DEBUG oslo_concurrency.lockutils [req-e0971caf-482d-4446-ae31-1963494fbf29 req-bff07ec3-4c3f-4085-bbe3-c321d37ffd6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.152 2 DEBUG nova.objects.instance [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lazy-loading 'migration_context' on Instance uuid 83645517-a08a-46d7-b715-15b5d7f078ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.169 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.169 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Ensure instance console log exists: /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.170 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.170 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.170 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.371 2 INFO nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Creating config drive at /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.379 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe4duz0s0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.464 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deleting instance files /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_del
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.466 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deletion of /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_del complete
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.529 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpe4duz0s0" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.558 2 DEBUG nova.storage.rbd_utils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.562 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.663 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.664 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating image(s)
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.694 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.722 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.750 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.755 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.846 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.847 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.848 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.849 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.875 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:15:34 np0005473739 nova_compute[259550]: 2025-10-07 14:15:34.879 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.013 2 DEBUG oslo_concurrency.processutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config 188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.014 2 INFO nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Deleting local config drive /var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/disk.config because it was imported into RBD.
Oct  7 10:15:35 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct  7 10:15:35 np0005473739 kernel: tap62690261-dd: entered promiscuous mode
Oct  7 10:15:35 np0005473739 NetworkManager[44949]: <info>  [1759846535.0813] manager: (tap62690261-dd): new Tun device (/org/freedesktop/NetworkManager/Devices/218)
Oct  7 10:15:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:35Z|00466|binding|INFO|Claiming lport 62690261-dde3-43ca-929a-e6b75a76bafb for this chassis.
Oct  7 10:15:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:35Z|00467|binding|INFO|62690261-dde3-43ca-929a-e6b75a76bafb: Claiming fa:16:3e:a5:aa:77 10.100.0.3
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:15:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:35Z|00468|binding|INFO|Setting lport 62690261-dde3-43ca-929a-e6b75a76bafb ovn-installed in OVS
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:15:35 np0005473739 systemd-udevd[320940]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:15:35 np0005473739 systemd-machined[214580]: New machine qemu-63-instance-00000037.
Oct  7 10:15:35 np0005473739 systemd[1]: Started Virtual Machine qemu-63-instance-00000037.
Oct  7 10:15:35 np0005473739 NetworkManager[44949]: <info>  [1759846535.1838] device (tap62690261-dd): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:35 np0005473739 NetworkManager[44949]: <info>  [1759846535.1845] device (tap62690261-dd): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.242 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a5:aa:77 10.100.0.3'], port_security=['fa:16:3e:a5:aa:77 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '188af2a5-ff92-4f42-8bdc-5dec2f24d46a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67397a02-5eae-462d-b5c7-e258b23b19a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=62690261-dde3-43ca-929a-e6b75a76bafb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:15:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:35Z|00469|binding|INFO|Setting lport 62690261-dde3-43ca-929a-e6b75a76bafb up in Southbound
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.243 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 62690261-dde3-43ca-929a-e6b75a76bafb in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.244 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.260 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ceb91f5-3fd7-43a3-a565-e5bb4a5d10ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.262 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb1d9f332-f1 in ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.264 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb1d9f332-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.264 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b0ab4060-4d34-4517-8873-2b8a93459e5c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.264 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[10a71023-b845-4e47-8af5-abcc0e49622f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.285 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[77745322-a51c-42d8-9a5e-44d6b247735c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.286 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully created port: 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.314 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.318 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[faba8609-3fb9-4b8a-8649-f77388739e24]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.345 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d3507b3e-a9c1-43da-9b20-ad0519908760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:15:35 np0005473739 NetworkManager[44949]: <info>  [1759846535.3565] manager: (tapb1d9f332-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/219)
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.355 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45c520e6-1985-41f3-9b5a-8b83c927181a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:35 np0005473739 systemd-udevd[320942]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.392 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cfafe03a-2788-453c-8475-08784ec2b645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.397 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a23d3234-ba27-42a0-adcd-c7c544b2a00f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.422 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] resizing rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:15:35 np0005473739 NetworkManager[44949]: <info>  [1759846535.4238] device (tapb1d9f332-f0): carrier: link connected
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.428 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2f0d6f-413a-4f24-99dd-269dc85bfd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.446 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df2ff0e4-84cf-4b7c-aaef-a5e9341cea92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321012, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.473 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4313708-4bfb-4ed4-a097-a1944c6a98ec]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe19:be96'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711898, 'tstamp': 711898}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321030, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.495 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e986c676-5a4e-42c6-b78a-da84f7a78bf2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 321031, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.513 2 DEBUG nova.compute.manager [req-3870bad4-81bd-43cf-b74c-c76f6e9f61bb req-4c18e91d-61c1-45d6-9a8f-0ed2d513a812 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.513 2 DEBUG oslo_concurrency.lockutils [req-3870bad4-81bd-43cf-b74c-c76f6e9f61bb req-4c18e91d-61c1-45d6-9a8f-0ed2d513a812 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.514 2 DEBUG oslo_concurrency.lockutils [req-3870bad4-81bd-43cf-b74c-c76f6e9f61bb req-4c18e91d-61c1-45d6-9a8f-0ed2d513a812 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.514 2 DEBUG oslo_concurrency.lockutils [req-3870bad4-81bd-43cf-b74c-c76f6e9f61bb req-4c18e91d-61c1-45d6-9a8f-0ed2d513a812 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.514 2 DEBUG nova.compute.manager [req-3870bad4-81bd-43cf-b74c-c76f6e9f61bb req-4c18e91d-61c1-45d6-9a8f-0ed2d513a812 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Processing event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.534 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[738c5bc2-ee47-446c-82d6-11f7bc1d888e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.558 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.559 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Ensure instance console log exists: /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.560 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.560 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.561 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.564 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start _get_guest_xml network_info=[{"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.577 2 WARNING nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.583 2 DEBUG nova.virt.libvirt.host [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.584 2 DEBUG nova.virt.libvirt.host [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.588 2 DEBUG nova.virt.libvirt.host [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.589 2 DEBUG nova.virt.libvirt.host [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.589 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.589 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.590 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.590 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.590 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.591 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.591 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.591 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.591 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.592 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.592 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.592 2 DEBUG nova.virt.hardware [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.593 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'vcpu_model' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.617 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.620 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70c290dd-1518-4102-ab55-3a9a0584559a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.622 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.622 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.622 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:35 np0005473739 NetworkManager[44949]: <info>  [1759846535.6254] manager: (tapb1d9f332-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/220)
Oct  7 10:15:35 np0005473739 kernel: tapb1d9f332-f0: entered promiscuous mode
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.631 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:35Z|00470|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.634 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.636 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[223e22bc-ada2-4a1c-b3a1-c4c68cd236cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.637 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/b1d9f332-f920-4d6e-8e91-dd13ec334d51.pid.haproxy
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID b1d9f332-f920-4d6e-8e91-dd13ec334d51
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:15:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:35.638 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'env', 'PROCESS_TAG=haproxy-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b1d9f332-f920-4d6e-8e91-dd13ec334d51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.698 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846520.695742, 52aed8a1-32e4-4242-881e-1b40f79f09e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.699 2 INFO nova.compute.manager [-] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:15:35 np0005473739 nova_compute[259550]: 2025-10-07 14:15:35.718 2 DEBUG nova.compute.manager [None req-88aa6437-bfbd-4482-8257-ba401c4bec9d - - - - - -] [instance: 52aed8a1-32e4-4242-881e-1b40f79f09e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1530: 305 pgs: 305 active+clean; 253 MiB data, 594 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 5.0 MiB/s wr, 172 op/s
Oct  7 10:15:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:36 np0005473739 podman[321143]: 2025-10-07 14:15:36.069308923 +0000 UTC m=+0.076932563 container create 1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  7 10:15:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/949957800' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.104 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:36 np0005473739 podman[321143]: 2025-10-07 14:15:36.021412027 +0000 UTC m=+0.029035757 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:15:36 np0005473739 systemd[1]: Started libpod-conmon-1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4.scope.
Oct  7 10:15:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.141 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4be1fcb889aa3b4e4dd790a95fe0086d7925bd0a3e445f3c7ef44dfbecbd0e8a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.146 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:36 np0005473739 podman[321143]: 2025-10-07 14:15:36.165826933 +0000 UTC m=+0.173450593 container init 1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:15:36 np0005473739 podman[321143]: 2025-10-07 14:15:36.173031433 +0000 UTC m=+0.180655073 container start 1da7f753f22f1fd1eb5a83c355a3bf63718fd9b868b9ba7adf51be31837203c4 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.187 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846536.1392632, 188af2a5-ff92-4f42-8bdc-5dec2f24d46a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.188 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] VM Started (Lifecycle Event)#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.193 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully created port: 83e99e50-2115-4dee-9274-a2a6528a8a8f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.197 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:15:36 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[321166]: [NOTICE]   (321183) : New worker (321185) forked
Oct  7 10:15:36 np0005473739 neutron-haproxy-ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51[321166]: [NOTICE]   (321183) : Loading success.
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.209 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.216 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.219 2 INFO nova.virt.libvirt.driver [-] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Instance spawned successfully.#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.220 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.222 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.243 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.243 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846536.139543, 188af2a5-ff92-4f42-8bdc-5dec2f24d46a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.243 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.249 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.249 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.250 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.250 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.250 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.251 2 DEBUG nova.virt.libvirt.driver [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.260 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.263 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846536.2090194, 188af2a5-ff92-4f42-8bdc-5dec2f24d46a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.263 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.280 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.283 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.309 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.316 2 INFO nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Took 7.08 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.316 2 DEBUG nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.427 2 INFO nova.compute.manager [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Took 8.14 seconds to build instance.#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.446 2 DEBUG oslo_concurrency.lockutils [None req-7ea6ab8b-6795-4e72-aae8-5baf9dc0e957 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/516785795' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.690 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.692 2 DEBUG nova.virt.libvirt.vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:34Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.692 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.693 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.695 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  <uuid>a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</uuid>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  <name>instance-00000035</name>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <nova:name>tempest-tempest.common.compute-instance-2003953244</nova:name>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:15:35</nova:creationTime>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <nova:user uuid="39e4681256e44d92ac5928e4f8e0d348">tempest-ServerActionsTestOtherA-508284156-project-member</nova:user>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <nova:project uuid="ef9390a1dd804281beea149e0086b360">tempest-ServerActionsTestOtherA-508284156</nova:project>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <nova:port uuid="2781ab1e-ba6c-4689-8da2-ddcf85b31ca8">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <entry name="serial">a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</entry>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <entry name="uuid">a8585c64-eb21-491a-9a4c-b9ac6e8e4a30</entry>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:d4:48:b2"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <target dev="tap2781ab1e-ba"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/console.log" append="off"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:15:36 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:36 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:36 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:36 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.696 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Preparing to wait for external event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.696 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.696 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.696 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.697 2 DEBUG nova.virt.libvirt.vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-Ser
verActionsTestOtherA-508284156-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:34Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.697 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.698 2 DEBUG nova.network.os_vif_util [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.698 2 DEBUG os_vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.699 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2781ab1e-ba, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2781ab1e-ba, col_values=(('external_ids', {'iface-id': '2781ab1e-ba6c-4689-8da2-ddcf85b31ca8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d4:48:b2', 'vm-uuid': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:36 np0005473739 NetworkManager[44949]: <info>  [1759846536.7072] manager: (tap2781ab1e-ba): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/221)
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.717 2 INFO os_vif [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba')#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.776 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.777 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.777 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No VIF found with MAC fa:16:3e:d4:48:b2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.777 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Using config drive#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.796 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.803 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully created port: c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.824 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'ec2_ids' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:36 np0005473739 nova_compute[259550]: 2025-10-07 14:15:36.853 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'keypairs' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.198 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Creating config drive at /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.205 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuphkpol5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.360 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpuphkpol5" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.387 2 DEBUG nova.storage.rbd_utils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.391 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.579 2 DEBUG oslo_concurrency.processutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.581 2 INFO nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deleting local config drive /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30/disk.config because it was imported into RBD.#033[00m
Oct  7 10:15:37 np0005473739 kernel: tap2781ab1e-ba: entered promiscuous mode
Oct  7 10:15:37 np0005473739 NetworkManager[44949]: <info>  [1759846537.6340] manager: (tap2781ab1e-ba): new Tun device (/org/freedesktop/NetworkManager/Devices/222)
Oct  7 10:15:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:37Z|00471|binding|INFO|Claiming lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for this chassis.
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:37Z|00472|binding|INFO|2781ab1e-ba6c-4689-8da2-ddcf85b31ca8: Claiming fa:16:3e:d4:48:b2 10.100.0.5
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.644 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:48:b2 10.100.0.5'], port_security=['fa:16:3e:d4:48:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'd609a9ff-183f-496e-83cc-641ffdd2b1f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.646 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb bound to our chassis#033[00m
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.647 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb#033[00m
Oct  7 10:15:37 np0005473739 NetworkManager[44949]: <info>  [1759846537.6591] device (tap2781ab1e-ba): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:37 np0005473739 NetworkManager[44949]: <info>  [1759846537.6603] device (tap2781ab1e-ba): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:37Z|00473|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 ovn-installed in OVS
Oct  7 10:15:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:37Z|00474|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 up in Southbound
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.669 2 DEBUG nova.compute.manager [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.670 2 DEBUG oslo_concurrency.lockutils [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.670 2 DEBUG oslo_concurrency.lockutils [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.670 2 DEBUG oslo_concurrency.lockutils [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.671 2 DEBUG nova.compute.manager [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.671 2 WARNING nova.compute.manager [req-46d400c7-0d7d-4a64-bfa6-5e1b2dc044d0 req-85c7dd56-4a00-4848-834d-22dc1c9c47a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received unexpected event network-vif-plugged-62690261-dde3-43ca-929a-e6b75a76bafb for instance with vm_state active and task_state None.#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.671 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1cc9f89-3107-441d-a1c0-ba6f35d55351]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:37 np0005473739 systemd-machined[214580]: New machine qemu-64-instance-00000035.
Oct  7 10:15:37 np0005473739 systemd[1]: Started Virtual Machine qemu-64-instance-00000035.
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.716 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[114667b4-e893-4710-9943-73dcc12fdbcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.720 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0f2796b6-1b21-4978-99af-15c227954500]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1531: 305 pgs: 305 active+clean; 250 MiB data, 584 MiB used, 59 GiB / 60 GiB avail; 387 KiB/s rd, 6.0 MiB/s wr, 158 op/s
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.756 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8b91c9d1-0fe0-41dc-80d6-cc2ba64452d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.776 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[32afd1c3-a129-4dc8-8f5f-fd1b9b2934d0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321299, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.798 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1d448861-4ed4-426a-aaee-dc3194022d6b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321301, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321301, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.800 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.805 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.805 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.806 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:37.806 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:37 np0005473739 nova_compute[259550]: 2025-10-07 14:15:37.989 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.330 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully updated port: 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.619 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.620 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846538.618657, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.620 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Started (Lifecycle Event)#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.636 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.640 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846538.619887, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.641 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.659 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.663 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.682 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:15:38 np0005473739 nova_compute[259550]: 2025-10-07 14:15:38.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.000 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.322 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.322 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.322 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.323 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid fc163bed-856c-4ea5-9bf3-6989fb1027eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1532: 305 pgs: 305 active+clean; 260 MiB data, 596 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 6.7 MiB/s wr, 211 op/s
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.761 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.761 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.761 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Processing event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-changed-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing instance network info cache due to event network-changed-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.762 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.763 2 DEBUG nova.network.neutron [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing network info cache for port 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.764 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.773 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846539.773405, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.774 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.777 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.788 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance spawned successfully.#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.788 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.805 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.812 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: rebuild_spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.817 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.818 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.818 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.818 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.819 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.819 2 DEBUG nova.virt.libvirt.driver [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.853 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.909 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:39 np0005473739 nova_compute[259550]: 2025-10-07 14:15:39.971 2 INFO nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] bringing vm to original state: 'stopped'#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.042 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.043 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.043 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.047 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.056 2 DEBUG nova.network.neutron [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:15:40 np0005473739 kernel: tap2781ab1e-ba (unregistering): left promiscuous mode
Oct  7 10:15:40 np0005473739 NetworkManager[44949]: <info>  [1759846540.0888] device (tap2781ab1e-ba): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:40Z|00475|binding|INFO|Releasing lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 from this chassis (sb_readonly=0)
Oct  7 10:15:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:40Z|00476|binding|INFO|Setting lport 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 down in Southbound
Oct  7 10:15:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:40Z|00477|binding|INFO|Removing iface tap2781ab1e-ba ovn-installed in OVS
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.114 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d4:48:b2 10.100.0.5'], port_security=['fa:16:3e:d4:48:b2 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'a8585c64-eb21-491a-9a4c-b9ac6e8e4a30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd609a9ff-183f-496e-83cc-641ffdd2b1f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.115 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb unbound from our chassis#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.116 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.135 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f4a6ae7-b5c4-49d0-a6d7-82d9d0cef61e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:40 np0005473739 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000035.scope: Deactivated successfully.
Oct  7 10:15:40 np0005473739 systemd[1]: machine-qemu\x2d64\x2dinstance\x2d00000035.scope: Consumed 1.044s CPU time.
Oct  7 10:15:40 np0005473739 systemd-machined[214580]: Machine qemu-64-instance-00000035 terminated.
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.175 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[698d6264-9ff1-43f1-b023-ddf98a3d3f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.182 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9ba94b-88fd-4a27-bf92-833ae1b7a2f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.214 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c9f1c4bf-36a9-44a9-8ad7-a5a371fe7302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.234 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5e3524e2-938c-4238-9546-812e22a12920]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 11, 'rx_bytes': 916, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 321355, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.254 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[549dd7b7-c0e7-41ae-af3c-a09137b75dd1]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321356, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 321356, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.258 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.266 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.267 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.268 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:40.268 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.289 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance destroyed successfully.#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.290 2 DEBUG nova.compute.manager [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.347 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 0.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.376 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.376 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.377 2 DEBUG nova.objects.instance [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.453 2 DEBUG oslo_concurrency.lockutils [None req-7b1eddfd-457d-49c6-8af0-ec9fa9c0b279 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.077s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.538 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.539 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.541 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully updated port: 83e99e50-2115-4dee-9274-a2a6528a8a8f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.543 2 DEBUG nova.network.neutron [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.568 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.570 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.571 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.571 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.571 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.571 2 DEBUG oslo_concurrency.lockutils [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.572 2 DEBUG nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.572 2 WARNING nova.compute.manager [req-99e4dac1-c837-42f4-ab3b-a191c472bc7f req-a2304ab9-45df-4497-8738-36c1c4aefcfa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state stopped and task_state rebuild_spawning.#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.668 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.668 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.682 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.683 2 INFO nova.compute.claims [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:15:40 np0005473739 nova_compute[259550]: 2025-10-07 14:15:40.850 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2412246010' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.308 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.317 2 DEBUG nova.compute.provider_tree [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.336 2 DEBUG nova.scheduler.client.report [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.373 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.374 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.441 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Successfully updated port: c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.470 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.473 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.485 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.485 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquired lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.485 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.494 2 INFO nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.514 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.598 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.600 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.600 2 INFO nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Creating image(s)#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.627 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.650 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.677 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.681 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.727 2 DEBUG nova.policy [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8faa7636d634de587c1631c3452264e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '972aa9372a81406990460fb46cf827e0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.731 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.736 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updating instance_info_cache with network_info: [{"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1533: 305 pgs: 305 active+clean; 260 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 5.4 MiB/s wr, 199 op/s
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.770 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.772 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.773 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.773 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.798 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.802 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cfd30417-ee01-41d3-8a93-e49cd960d338_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.842 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-fc163bed-856c-4ea5-9bf3-6989fb1027eb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.842 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:15:41 np0005473739 nova_compute[259550]: 2025-10-07 14:15:41.843 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:15:42 np0005473739 podman[321485]: 2025-10-07 14:15:42.094444676 +0000 UTC m=+0.070415211 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.137 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 cfd30417-ee01-41d3-8a93-e49cd960d338_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.335s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:42 np0005473739 podman[321486]: 2025-10-07 14:15:42.146937672 +0000 UTC m=+0.117536456 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.219 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] resizing rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.331 2 DEBUG nova.objects.instance [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'migration_context' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.344 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.344 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Ensure instance console log exists: /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.345 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.345 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.346 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.802 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.803 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.824 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.886 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.888 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.895 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:15:42 np0005473739 nova_compute[259550]: 2025-10-07 14:15:42.895 2 INFO nova.compute.claims [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.025 2 DEBUG nova.compute.manager [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-changed-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.025 2 DEBUG nova.compute.manager [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing instance network info cache due to event network-changed-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.026 2 DEBUG oslo_concurrency.lockutils [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.104 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.105 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.105 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.105 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.106 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.124 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.182 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Successfully created port: 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:15:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776255077' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.599 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.606 2 DEBUG nova.compute.provider_tree [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.622 2 DEBUG nova.scheduler.client.report [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:15:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1534: 305 pgs: 305 active+clean; 260 MiB data, 598 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 191 op/s
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.844 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.956s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.846 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.900 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.901 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.917 2 INFO nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.935 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.978 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:43 np0005473739 nova_compute[259550]: 2025-10-07 14:15:43.978 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.007 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.067 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.068 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.068 2 INFO nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Creating image(s)#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.091 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.116 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.143 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.147 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.199 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Successfully updated port: 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.212 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.212 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquired lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.213 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.221 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.222 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.224 2 DEBUG nova.policy [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.226 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.228 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.229 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.229 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.255 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.261 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.311 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.312 2 INFO nova.compute.claims [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.532 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.574 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.622 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.361s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.704 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] resizing rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.809 2 DEBUG nova.objects.instance [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'migration_context' on Instance uuid 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.838 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.839 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Ensure instance console log exists: /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.840 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.840 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.841 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.858 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:15:44 np0005473739 nova_compute[259550]: 2025-10-07 14:15:44.859 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1955181159' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.053 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.059 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.059 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-changed-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.059 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing instance network info cache due to event network-changed-83e99e50-2115-4dee-9274-a2a6528a8a8f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.060 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.063 2 DEBUG nova.compute.provider_tree [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.132 2 DEBUG nova.scheduler.client.report [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.170 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.948s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.172 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.217 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.218 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.239 2 INFO nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.257 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.335 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.337 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.337 2 INFO nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Creating image(s)#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.366 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.393 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.419 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.424 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.495 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.496 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.497 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.498 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.521 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.525 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1535: 305 pgs: 305 active+clean; 302 MiB data, 619 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.1 MiB/s wr, 205 op/s
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.865 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.946 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] resizing rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:15:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:45 np0005473739 nova_compute[259550]: 2025-10-07 14:15:45.979 2 DEBUG nova.policy [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8faa7636d634de587c1631c3452264e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '972aa9372a81406990460fb46cf827e0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.038 2 DEBUG nova.objects.instance [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'migration_context' on Instance uuid b3d2cd05-012d-4189-bc6c-c40fc1f72c0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.076 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.077 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Ensure instance console log exists: /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.078 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.078 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.078 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.268 2 DEBUG nova.network.neutron [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Updating instance_info_cache with network_info: [{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.354 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.381 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Releasing lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.382 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance network_info: |[{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.387 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Start _get_guest_xml network_info=[{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.395 2 WARNING nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.407 2 DEBUG nova.virt.libvirt.host [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.408 2 DEBUG nova.virt.libvirt.host [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.413 2 DEBUG nova.virt.libvirt.host [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.414 2 DEBUG nova.virt.libvirt.host [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.414 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.415 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.415 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.416 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.416 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.416 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.417 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.417 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.417 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.418 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.418 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.418 2 DEBUG nova.virt.hardware [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.421 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.778 2 DEBUG nova.compute.manager [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-changed-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.779 2 DEBUG nova.compute.manager [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Refreshing instance network info cache due to event network-changed-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.780 2 DEBUG oslo_concurrency.lockutils [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.780 2 DEBUG oslo_concurrency.lockutils [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.781 2 DEBUG nova.network.neutron [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Refreshing network info cache for port 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:15:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3893354059' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.872 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.901 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:46 np0005473739 nova_compute[259550]: 2025-10-07 14:15:46.905 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.067 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Successfully created port: 8718eef8-8e7a-42ab-8df9-b469e81779d9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.201 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.202 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.229 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.311 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.313 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.319 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.319 2 INFO nova.compute.claims [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.323 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.323 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.324 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.324 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.324 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.326 2 INFO nova.compute.manager [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Terminating instance#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.327 2 DEBUG nova.compute.manager [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.329 2 DEBUG nova.network.neutron [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updating instance_info_cache with network_info: [{"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.337 2 INFO nova.virt.libvirt.driver [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Instance destroyed successfully.#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.337 2 DEBUG nova.objects.instance [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'resources' on Instance uuid a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.353 2 DEBUG nova.virt.libvirt.vif [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:14:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-2003953244',display_name='tempest-tempest.common.compute-instance-2003953244',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-2003953244',id=53,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-u5vbcnoi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:40Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=a8585c64-eb21-491a-9a4c-b9ac6e8e4a30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.354 2 DEBUG nova.network.os_vif_util [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "address": "fa:16:3e:d4:48:b2", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2781ab1e-ba", "ovs_interfaceid": "2781ab1e-ba6c-4689-8da2-ddcf85b31ca8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.355 2 DEBUG nova.network.os_vif_util [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.356 2 DEBUG os_vif [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.358 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2781ab1e-ba, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.361 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Releasing lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.361 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance network_info: |[{"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.363 2 DEBUG oslo_concurrency.lockutils [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.363 2 DEBUG nova.network.neutron [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing network info cache for port c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.367 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Start _get_guest_xml network_info=[{"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.370 2 INFO os_vif [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d4:48:b2,bridge_name='br-int',has_traffic_filtering=True,id=2781ab1e-ba6c-4689-8da2-ddcf85b31ca8,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2781ab1e-ba')#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.410 2 WARNING nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:15:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1362211224' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.424 2 DEBUG nova.virt.libvirt.host [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.425 2 DEBUG nova.virt.libvirt.host [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.429 2 DEBUG nova.virt.libvirt.host [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.430 2 DEBUG nova.virt.libvirt.host [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.431 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.431 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.432 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.432 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.432 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.432 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.432 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.433 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.433 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.433 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.433 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.433 2 DEBUG nova.virt.hardware [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.437 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.488 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.583s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.499 2 DEBUG nova.virt.libvirt.vif [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-3008329',display_name='tempest-ListServerFiltersTestJSON-instance-3008329',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-3008329',id=57,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-hm3q3ej4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:41Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=cfd30417-ee01-41d3-8a93-e49cd960d338,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.500 2 DEBUG nova.network.os_vif_util [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.501 2 DEBUG nova.network.os_vif_util [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.502 2 DEBUG nova.objects.instance [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:47 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #45. Immutable memtables: 2.
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.525 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  <uuid>cfd30417-ee01-41d3-8a93-e49cd960d338</uuid>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  <name>instance-00000039</name>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-3008329</nova:name>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:15:46</nova:creationTime>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <nova:user uuid="d8faa7636d634de587c1631c3452264e">tempest-ListServerFiltersTestJSON-937453277-project-member</nova:user>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <nova:project uuid="972aa9372a81406990460fb46cf827e0">tempest-ListServerFiltersTestJSON-937453277</nova:project>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <nova:port uuid="0b66f2d4-e098-4b4c-902f-2a9a2a9764cc">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <entry name="serial">cfd30417-ee01-41d3-8a93-e49cd960d338</entry>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <entry name="uuid">cfd30417-ee01-41d3-8a93-e49cd960d338</entry>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/cfd30417-ee01-41d3-8a93-e49cd960d338_disk">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:1e:6d:07"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <target dev="tap0b66f2d4-e0"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/console.log" append="off"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:15:47 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:47 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:47 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:47 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.527 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Preparing to wait for external event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.528 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.528 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.528 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.529 2 DEBUG nova.virt.libvirt.vif [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-3008329',display_name='tempest-ListServerFiltersTestJSON-instance-3008329',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-3008329',id=57,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-hm3q3ej4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:41Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=cfd30417-ee01-41d3-8a93-e49cd960d338,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.530 2 DEBUG nova.network.os_vif_util [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.531 2 DEBUG nova.network.os_vif_util [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.532 2 DEBUG os_vif [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.533 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.549 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b66f2d4-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.550 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b66f2d4-e0, col_values=(('external_ids', {'iface-id': '0b66f2d4-e098-4b4c-902f-2a9a2a9764cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:6d:07', 'vm-uuid': 'cfd30417-ee01-41d3-8a93-e49cd960d338'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:47 np0005473739 NetworkManager[44949]: <info>  [1759846547.5843] manager: (tap0b66f2d4-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/223)
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.603 2 INFO os_vif [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0')#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.612 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Successfully created port: 41eef051-1c52-4c3c-9854-2ee923b4ab0e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.686 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.686 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.687 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No VIF found with MAC fa:16:3e:1e:6d:07, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.688 2 INFO nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Using config drive#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.720 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:47 np0005473739 nova_compute[259550]: 2025-10-07 14:15:47.726 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1536: 305 pgs: 305 active+clean; 350 MiB data, 631 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 6.5 MiB/s wr, 172 op/s
Oct  7 10:15:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/239535329' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.038 2 INFO nova.virt.libvirt.driver [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deleting instance files /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_del#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.039 2 INFO nova.virt.libvirt.driver [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deletion of /var/lib/nova/instances/a8585c64-eb21-491a-9a4c-b9ac6e8e4a30_del complete#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.041 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.064 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.070 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.149 2 INFO nova.compute.manager [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.150 2 DEBUG oslo.service.loopingcall [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.151 2 DEBUG nova.compute.manager [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.151 2 DEBUG nova.network.neutron [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:15:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2962877139' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.279 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.553s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.287 2 DEBUG nova.compute.provider_tree [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.309 2 DEBUG nova.scheduler.client.report [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.352 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.352 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.414 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.414 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.440 2 INFO nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.461 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.496 2 INFO nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Creating config drive at /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.501 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjar80ncy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2627039618' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.562 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.565 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.565 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.566 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.568 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.568 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.569 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.569 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.570 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.570 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.571 2 DEBUG nova.objects.instance [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lazy-loading 'pci_devices' on Instance uuid 83645517-a08a-46d7-b715-15b5d7f078ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.593 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  <uuid>83645517-a08a-46d7-b715-15b5d7f078ff</uuid>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  <name>instance-00000038</name>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersTestMultiNic-server-1819774511</nova:name>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:15:47</nova:creationTime>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <nova:user uuid="ff37c390826e43079eff2a1423ccc2b8">tempest-ServersTestMultiNic-1400500697-project-member</nova:user>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <nova:project uuid="1a99ac1945604cf5a5a5bd917ea52280">tempest-ServersTestMultiNic-1400500697</nova:project>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <nova:port uuid="2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.159" ipVersion="4"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <nova:port uuid="83e99e50-2115-4dee-9274-a2a6528a8a8f">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.1.194" ipVersion="4"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <nova:port uuid="c1ccd58c-dbf6-4d2c-9a75-1effb73b5105">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.60" ipVersion="4"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <entry name="serial">83645517-a08a-46d7-b715-15b5d7f078ff</entry>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <entry name="uuid">83645517-a08a-46d7-b715-15b5d7f078ff</entry>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/83645517-a08a-46d7-b715-15b5d7f078ff_disk">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/83645517-a08a-46d7-b715-15b5d7f078ff_disk.config">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:7f:3a:0b"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <target dev="tap2c09dd65-3e"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:e7:89:37"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <target dev="tap83e99e50-21"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:34:8f:80"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <target dev="tapc1ccd58c-db"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/console.log" append="off"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:15:48 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:48 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:48 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:48 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.594 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Preparing to wait for external event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.594 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.594 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.594 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.595 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Preparing to wait for external event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.596 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.596 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.597 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.597 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Preparing to wait for external event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.597 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.597 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.597 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.598 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.598 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.599 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.599 2 DEBUG os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.601 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.604 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c09dd65-3e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.604 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c09dd65-3e, col_values=(('external_ids', {'iface-id': '2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:3a:0b', 'vm-uuid': '83645517-a08a-46d7-b715-15b5d7f078ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 NetworkManager[44949]: <info>  [1759846548.6078] manager: (tap2c09dd65-3e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/224)
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.611 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.612 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.613 2 INFO nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Creating image(s)#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.635 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.661 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.686 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.691 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.740 2 DEBUG nova.policy [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd8faa7636d634de587c1631c3452264e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '972aa9372a81406990460fb46cf827e0', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.744 2 DEBUG nova.network.neutron [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updated VIF entry in instance network info cache for port c1ccd58c-dbf6-4d2c-9a75-1effb73b5105. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.745 2 DEBUG nova.network.neutron [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updating instance_info_cache with network_info: [{"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.748 2 DEBUG nova.network.neutron [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Updated VIF entry in instance network info cache for port 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.749 2 DEBUG nova.network.neutron [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Updating instance_info_cache with network_info: [{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.750 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjar80ncy" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.752 2 INFO os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e')#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.753 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.753 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.754 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.754 2 DEBUG os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.784 2 DEBUG nova.storage.rbd_utils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.788 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.829 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.832 2 DEBUG oslo_concurrency.lockutils [req-aac5b3ed-4f9f-46ca-82e3-527c2c098b2f req-48226dc6-2644-4acb-bcd3-55fecacb45ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.834 2 DEBUG oslo_concurrency.lockutils [req-d1557687-e8d3-4d41-aa25-dadfccaa4c0c req-087ebd70-b4ad-4abb-94e8-7fc701754777 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.834 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.835 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.835 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Refreshing network info cache for port 83e99e50-2115-4dee-9274-a2a6528a8a8f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.837 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.838 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.839 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.865 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.870 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a23d6956-f85a-40b1-9e54-1b32d2af191e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.917 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap83e99e50-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap83e99e50-21, col_values=(('external_ids', {'iface-id': '83e99e50-2115-4dee-9274-a2a6528a8a8f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e7:89:37', 'vm-uuid': '83645517-a08a-46d7-b715-15b5d7f078ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 NetworkManager[44949]: <info>  [1759846548.9214] manager: (tap83e99e50-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/225)
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.936 2 INFO os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21')#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.938 2 DEBUG nova.virt.libvirt.vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-14
00500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:33Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.938 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.939 2 DEBUG nova.network.os_vif_util [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.939 2 DEBUG os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.943 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1ccd58c-db, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.944 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc1ccd58c-db, col_values=(('external_ids', {'iface-id': 'c1ccd58c-dbf6-4d2c-9a75-1effb73b5105', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:34:8f:80', 'vm-uuid': '83645517-a08a-46d7-b715-15b5d7f078ff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 NetworkManager[44949]: <info>  [1759846548.9464] manager: (tapc1ccd58c-db): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/226)
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:48 np0005473739 nova_compute[259550]: 2025-10-07 14:15:48.961 2 INFO os_vif [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db')#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.028 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.028 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.029 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No VIF found with MAC fa:16:3e:7f:3a:0b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.029 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No VIF found with MAC fa:16:3e:e7:89:37, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.029 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No VIF found with MAC fa:16:3e:34:8f:80, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.030 2 INFO nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Using config drive#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.055 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.184 2 DEBUG oslo_concurrency.processutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.396s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.185 2 INFO nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Deleting local config drive /var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/disk.config because it was imported into RBD.#033[00m
Oct  7 10:15:49 np0005473739 NetworkManager[44949]: <info>  [1759846549.2598] manager: (tap0b66f2d4-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/227)
Oct  7 10:15:49 np0005473739 kernel: tap0b66f2d4-e0: entered promiscuous mode
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:49Z|00478|binding|INFO|Claiming lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for this chassis.
Oct  7 10:15:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:49Z|00479|binding|INFO|0b66f2d4-e098-4b4c-902f-2a9a2a9764cc: Claiming fa:16:3e:1e:6d:07 10.100.0.11
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.292 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Successfully updated port: 8718eef8-8e7a-42ab-8df9-b469e81779d9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.295 2 DEBUG nova.network.neutron [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.292 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:6d:07 10.100.0.11'], port_security=['fa:16:3e:1e:6d:07 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cfd30417-ee01-41d3-8a93-e49cd960d338', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.293 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 bound to our chassis#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.295 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.308 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a23d6956-f85a-40b1-9e54-1b32d2af191e_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:49 np0005473739 systemd-udevd[322342]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.309 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e5d0f4c8-3a15-4de1-83ff-bbe5072627fc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.310 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4fd643de-a1 in ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.312 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4fd643de-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.312 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1c5a967-0611-438e-a4d6-98edc77fad1a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.313 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f8e61ff0-c507-4201-a4be-b49a78119d6c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 systemd-machined[214580]: New machine qemu-65-instance-00000039.
Oct  7 10:15:49 np0005473739 NetworkManager[44949]: <info>  [1759846549.3288] device (tap0b66f2d4-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.327 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[955906e3-4916-4fa5-b11b-0f2ca244cb4e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 systemd[1]: Started Virtual Machine qemu-65-instance-00000039.
Oct  7 10:15:49 np0005473739 NetworkManager[44949]: <info>  [1759846549.3346] device (tap0b66f2d4-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:49Z|00480|binding|INFO|Setting lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc ovn-installed in OVS
Oct  7 10:15:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:49Z|00481|binding|INFO|Setting lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc up in Southbound
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.349 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[09e4b181-a6a0-4a2f-a688-1e122ac533d1]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.366 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.367 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.367 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.370 2 INFO nova.compute.manager [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Took 1.22 seconds to deallocate network for instance.#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.388 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e895022d-189e-4d89-a2c8-2966ce3385a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 systemd-udevd[322353]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:15:49 np0005473739 NetworkManager[44949]: <info>  [1759846549.3951] manager: (tap4fd643de-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/228)
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.394 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[940ec0d6-fd83-41de-ba98-9cc719983adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.437 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[11fa80b3-f829-4549-b1cf-db03efdf1ac5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.441 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b12d9399-43d5-4b01-8107-98dc9e842703]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.450 2 DEBUG nova.compute.manager [req-82a263af-dd33-4870-bde6-2db29037b276 req-a9d77ad3-c0b5-4396-9fa0-84045a3361dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-deleted-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.453 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.453 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.463 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] resizing rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:15:49 np0005473739 NetworkManager[44949]: <info>  [1759846549.4682] device (tap4fd643de-a0): carrier: link connected
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.475 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b9f4cb31-0f43-42f5-afdd-e68f4ce9a51f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.500 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4b872e74-3aed-451b-9cfc-77c3104122e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322438, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.520 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20749739-7057-4dfe-8742-7b0a38890779]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:808e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713303, 'tstamp': 713303}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322442, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17f2ce83-2257-4d53-bb6b-24c19962bd9b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322443, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.580 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c223f3-0d27-49bb-8e18-a07c316fbb11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.601 2 DEBUG nova.objects.instance [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'migration_context' on Instance uuid a23d6956-f85a-40b1-9e54-1b32d2af191e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:49Z|00066|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a5:aa:77 10.100.0.3
Oct  7 10:15:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:49Z|00067|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a5:aa:77 10.100.0.3
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.621 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.622 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Ensure instance console log exists: /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.622 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.623 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.623 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.634 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.644 2 INFO nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Creating config drive at /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.650 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp64izqdxs execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.655 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d256280f-56a8-4f1f-b549-5c52c6deac7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.657 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.657 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.657 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:49 np0005473739 NetworkManager[44949]: <info>  [1759846549.6602] manager: (tap4fd643de-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/229)
Oct  7 10:15:49 np0005473739 kernel: tap4fd643de-a0: entered promiscuous mode
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.672 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:49Z|00482|binding|INFO|Releasing lport 879f54f7-e219-4616-9199-264d02fdd4cf from this chassis (sb_readonly=0)
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.687 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Successfully updated port: 41eef051-1c52-4c3c-9854-2ee923b4ab0e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.710 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4fd643de-a9bb-4c41-8437-fb901dfd8879.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4fd643de-a9bb-4c41-8437-fb901dfd8879.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.711 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[819930d1-871e-49cd-9c9a-6037b625cfc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.711 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-4fd643de-a9bb-4c41-8437-fb901dfd8879
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/4fd643de-a9bb-4c41-8437-fb901dfd8879.pid.haproxy
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 4fd643de-a9bb-4c41-8437-fb901dfd8879
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:15:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:49.712 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'env', 'PROCESS_TAG=haproxy-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4fd643de-a9bb-4c41-8437-fb901dfd8879.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:15:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1537: 305 pgs: 305 active+clean; 375 MiB data, 661 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 7.9 MiB/s wr, 245 op/s
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.757 2 DEBUG oslo_concurrency.processutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.794 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp64izqdxs" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.822 2 DEBUG nova.storage.rbd_utils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 83645517-a08a-46d7-b715-15b5d7f078ff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.829 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config 83645517-a08a-46d7-b715-15b5d7f078ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.876 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.877 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquired lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.878 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:15:49 np0005473739 nova_compute[259550]: 2025-10-07 14:15:49.888 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Successfully created port: ae1b9c2d-384d-4134-8799-babeadd70605 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.034 2 DEBUG nova.compute.manager [req-0406cf40-1238-4ef3-a409-64bd150cd0f9 req-86e6fd57-b04d-410d-9272-ebda637e1f27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.035 2 DEBUG oslo_concurrency.lockutils [req-0406cf40-1238-4ef3-a409-64bd150cd0f9 req-86e6fd57-b04d-410d-9272-ebda637e1f27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.035 2 DEBUG oslo_concurrency.lockutils [req-0406cf40-1238-4ef3-a409-64bd150cd0f9 req-86e6fd57-b04d-410d-9272-ebda637e1f27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.035 2 DEBUG oslo_concurrency.lockutils [req-0406cf40-1238-4ef3-a409-64bd150cd0f9 req-86e6fd57-b04d-410d-9272-ebda637e1f27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.035 2 DEBUG nova.compute.manager [req-0406cf40-1238-4ef3-a409-64bd150cd0f9 req-86e6fd57-b04d-410d-9272-ebda637e1f27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Processing event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.036 2 DEBUG oslo_concurrency.processutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config 83645517-a08a-46d7-b715-15b5d7f078ff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.207s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.037 2 INFO nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Deleting local config drive /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff/disk.config because it was imported into RBD.#033[00m
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.0954] manager: (tap2c09dd65-3e): new Tun device (/org/freedesktop/NetworkManager/Devices/230)
Oct  7 10:15:50 np0005473739 systemd-udevd[322401]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:15:50 np0005473739 kernel: tap2c09dd65-3e: entered promiscuous mode
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00483|binding|INFO|Claiming lport 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b for this chassis.
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00484|binding|INFO|2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b: Claiming fa:16:3e:7f:3a:0b 10.100.0.159
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.1138] manager: (tap83e99e50-21): new Tun device (/org/freedesktop/NetworkManager/Devices/231)
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.1166] device (tap2c09dd65-3e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.1175] device (tap2c09dd65-3e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.1370] manager: (tapc1ccd58c-db): new Tun device (/org/freedesktop/NetworkManager/Devices/232)
Oct  7 10:15:50 np0005473739 kernel: tapc1ccd58c-db: entered promiscuous mode
Oct  7 10:15:50 np0005473739 kernel: tap83e99e50-21: entered promiscuous mode
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.1658] device (tapc1ccd58c-db): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:50 np0005473739 podman[322614]: 2025-10-07 14:15:50.166278368 +0000 UTC m=+0.066410585 container create 5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.1667] device (tap83e99e50-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.1678] device (tapc1ccd58c-db): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.1682] device (tap83e99e50-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.169 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00485|if_status|INFO|Not updating pb chassis for c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 now as sb is readonly
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.178 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:50 np0005473739 systemd-machined[214580]: New machine qemu-66-instance-00000038.
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00486|binding|INFO|Claiming lport c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 for this chassis.
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00487|binding|INFO|c1ccd58c-dbf6-4d2c-9a75-1effb73b5105: Claiming fa:16:3e:34:8f:80 10.100.0.60
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00488|binding|INFO|Claiming lport 83e99e50-2115-4dee-9274-a2a6528a8a8f for this chassis.
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00489|binding|INFO|83e99e50-2115-4dee-9274-a2a6528a8a8f: Claiming fa:16:3e:e7:89:37 10.100.1.194
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00490|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00491|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00492|binding|INFO|Releasing lport 879f54f7-e219-4616-9199-264d02fdd4cf from this chassis (sb_readonly=0)
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.197 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:3a:0b 10.100.0.159'], port_security=['fa:16:3e:7f:3a:0b 10.100.0.159'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.159/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9366dbfb-d976-4858-b6b3-90aea7266ca1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:50 np0005473739 systemd[1]: Started Virtual Machine qemu-66-instance-00000038.
Oct  7 10:15:50 np0005473739 systemd[1]: Started libpod-conmon-5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f.scope.
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00493|binding|INFO|Setting lport 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b ovn-installed in OVS
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00494|binding|INFO|Setting lport 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b up in Southbound
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:15:50 np0005473739 podman[322614]: 2025-10-07 14:15:50.136284085 +0000 UTC m=+0.036416322 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:15:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/716b321e2ea49d08584ea3087661278ff343536565c536d17e2c3ecf1e952674/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:50 np0005473739 podman[322614]: 2025-10-07 14:15:50.249746963 +0000 UTC m=+0.149879190 container init 5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  7 10:15:50 np0005473739 podman[322614]: 2025-10-07 14:15:50.257815036 +0000 UTC m=+0.157947253 container start 5270920f909fdea441dfb7ca72e61dcfd92f65902596cac2bd7577d2203e7c7f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00495|binding|INFO|Setting lport 83e99e50-2115-4dee-9274-a2a6528a8a8f ovn-installed in OVS
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00496|binding|INFO|Setting lport c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 ovn-installed in OVS
Oct  7 10:15:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:15:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3125380320' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.270 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:89:37 10.100.1.194'], port_security=['fa:16:3e:e7:89:37 10.100.1.194'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.194/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80d44241-e806-45e5-b77b-78848bbeea79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=303ccb5e-5aaa-463a-b70f-452ecb37838d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=83e99e50-2115-4dee-9274-a2a6528a8a8f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00497|binding|INFO|Setting lport 83e99e50-2115-4dee-9274-a2a6528a8a8f up in Southbound
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00498|binding|INFO|Setting lport c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 up in Southbound
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.271 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:8f:80 10.100.0.60'], port_security=['fa:16:3e:34:8f:80 10.100.0.60'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.60/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9366dbfb-d976-4858-b6b3-90aea7266ca1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:50 np0005473739 neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879[322645]: [NOTICE]   (322654) : New worker (322659) forked
Oct  7 10:15:50 np0005473739 neutron-haproxy-ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879[322645]: [NOTICE]   (322654) : Loading success.
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.296 2 DEBUG oslo_concurrency.processutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.539s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.302 2 DEBUG nova.compute.provider_tree [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.348 2 DEBUG nova.scheduler.client.report [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.368 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b in datapath 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 unbound from our chassis#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.370 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.383 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ccfb6f93-11fa-4791-b01b-b6c9f44b5c78]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.383 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7dfb1828-21 in ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.386 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7dfb1828-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.386 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0bb7714c-2c09-47cd-b863-dda6b15982de]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.387 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a10bcde7-8879-45ce-bb94-70d7d884fe4a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.408 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1e7dea-3816-4bb5-ae8d-8ad6d3d0eb03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.414 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.427 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[477f8e7f-a465-43ca-a6de-dae3fb6aa3b7]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.469 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[664a44df-47ac-4176-ae29-7aa013d74cb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.4798] manager: (tap7dfb1828-20): new Veth device (/org/freedesktop/NetworkManager/Devices/233)
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.480 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd7cceae-7042-4b47-8099-e0ecd8d05b76]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.519 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cfbfd050-f425-4807-a5f5-a9e648ddfe97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.524 2 INFO nova.scheduler.client.report [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Deleted allocations for instance a8585c64-eb21-491a-9a4c-b9ac6e8e4a30#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.523 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b5c15e9f-22bd-4706-9c8f-418f651a8da3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.5519] device (tap7dfb1828-20): carrier: link connected
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.557 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d4352cd3-c485-406f-9e64-bcd174f63bab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.568 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846550.5682902, cfd30417-ee01-41d3-8a93-e49cd960d338 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.569 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] VM Started (Lifecycle Event)#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.571 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.574 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.581 2 INFO nova.virt.libvirt.driver [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance spawned successfully.#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.581 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[682732be-1de9-4427-809f-61379b203f01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7dfb1828-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:e5:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713411, 'reachable_time': 35189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322719, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.600 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee503ee-1995-40af-8ce8-4fa528b87d4f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe50:e5c1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713411, 'tstamp': 713411}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322722, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.611 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.611 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.612 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.612 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.613 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.613 2 DEBUG nova.virt.libvirt.driver [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb890d4-dcb7-4b56-914e-de2537b1bf21]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7dfb1828-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:e5:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713411, 'reachable_time': 35189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322723, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.656 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ddc9767a-96b2-48e1-ab17-877a5511a9e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.669 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.673 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.730 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.730 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846550.5691764, cfd30417-ee01-41d3-8a93-e49cd960d338 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.731 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1286281f-569c-40b3-850e-d996d36e46f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.736 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7dfb1828-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.736 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.736 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7dfb1828-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:50 np0005473739 NetworkManager[44949]: <info>  [1759846550.7394] manager: (tap7dfb1828-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/234)
Oct  7 10:15:50 np0005473739 kernel: tap7dfb1828-20: entered promiscuous mode
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.743 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7dfb1828-20, col_values=(('external_ids', {'iface-id': '8933a3d5-743b-489b-a9ca-89380da9bbe0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:50Z|00499|binding|INFO|Releasing lport 8933a3d5-743b-489b-a9ca-89380da9bbe0 from this chassis (sb_readonly=0)
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.765 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7dfb1828-2cb7-4626-9426-ecd9cd6a2b51.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7dfb1828-2cb7-4626-9426-ecd9cd6a2b51.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.768 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1fddc3-e2be-4e60-a0b2-bcd860f6e14d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.769 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/7dfb1828-2cb7-4626-9426-ecd9cd6a2b51.pid.haproxy
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:15:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:50.770 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'env', 'PROCESS_TAG=haproxy-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7dfb1828-2cb7-4626-9426-ecd9cd6a2b51.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.782 2 DEBUG oslo_concurrency.lockutils [None req-3f07337d-7713-433f-9599-8a0dc6dc0f0e 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.459s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.814 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.842 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.847 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846550.5781906, cfd30417-ee01-41d3-8a93-e49cd960d338 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.847 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.872 2 INFO nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Took 9.27 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.873 2 DEBUG nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.873 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.879 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.909 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.930 2 INFO nova.compute.manager [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Took 10.29 seconds to build instance.#033[00m
Oct  7 10:15:50 np0005473739 nova_compute[259550]: 2025-10-07 14:15:50.948 2 DEBUG oslo_concurrency.lockutils [None req-c3520fe9-c2c8-49ff-9dd2-754196a52b82 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.410s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.165 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846551.165249, 83645517-a08a-46d7-b715-15b5d7f078ff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.166 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] VM Started (Lifecycle Event)#033[00m
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.184 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.188 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846551.1664104, 83645517-a08a-46d7-b715-15b5d7f078ff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.188 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.213 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.217 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:51 np0005473739 podman[322754]: 2025-10-07 14:15:51.209805015 +0000 UTC m=+0.053252418 container create 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.237 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:51 np0005473739 systemd[1]: Started libpod-conmon-7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263.scope.
Oct  7 10:15:51 np0005473739 podman[322754]: 2025-10-07 14:15:51.185566255 +0000 UTC m=+0.029013688 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:15:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:15:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/785017db9e4b610117d840e5812177c07ef3c9b381f8f952e3b666c54f1281e4/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:51 np0005473739 podman[322754]: 2025-10-07 14:15:51.3212742 +0000 UTC m=+0.164721643 container init 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:15:51 np0005473739 podman[322754]: 2025-10-07 14:15:51.327987537 +0000 UTC m=+0.171434950 container start 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 10:15:51 np0005473739 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [NOTICE]   (322773) : New worker (322775) forked
Oct  7 10:15:51 np0005473739 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [NOTICE]   (322773) : Loading success.
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.398 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 83e99e50-2115-4dee-9274-a2a6528a8a8f in datapath 80d44241-e806-45e5-b77b-78848bbeea79 unbound from our chassis#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.400 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 80d44241-e806-45e5-b77b-78848bbeea79#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[921dc001-de02-446c-ab5e-1d0eb565a3b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.417 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap80d44241-e1 in ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.419 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap80d44241-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.420 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1814eb37-2b59-4df5-a151-5935c40b3f61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.421 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c92640f-53c9-4aa3-806b-b37c7283bfbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.433 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[61c3a1be-0d13-4b6b-8946-9db15872fcf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.453 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f79561c-4153-41e8-991e-9b9ad7c237b4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.494 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bd044880-4959-41f9-adba-28bb3ca2d9d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 NetworkManager[44949]: <info>  [1759846551.5024] manager: (tap80d44241-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/235)
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.503 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6cfb6579-4666-470a-8d8b-9358ce3337aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.548 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bff7e7-daaf-4628-bbda-2f0fd11d6ebd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.553 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b730746c-9c8d-4176-8de8-d60df6e00840]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 NetworkManager[44949]: <info>  [1759846551.5835] device (tap80d44241-e0): carrier: link connected
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.588 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b303d409-410f-453f-96d9-78690705dffd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.611 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b88d986-3318-44d3-be7d-91617c4ab367]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80d44241-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:97:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713514, 'reachable_time': 41142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322794, 'error': None, 'target': 'ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.631 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92abc7be-1e94-42fb-ade0-0909e83de93d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe16:97a7'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713514, 'tstamp': 713514}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322795, 'error': None, 'target': 'ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.654 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d5d282c8-9411-48f0-8947-323b0466cd47]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap80d44241-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:16:97:a7'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 154], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713514, 'reachable_time': 41142, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 322796, 'error': None, 'target': 'ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.694 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fa19c4a5-6092-4202-9236-e127d01495f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1538: 305 pgs: 305 active+clean; 399 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 8.1 MiB/s wr, 219 op/s
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.772 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f0c96d17-efdf-43a4-9991-eeaaf83b63dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.774 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80d44241-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.775 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.775 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap80d44241-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:51 np0005473739 NetworkManager[44949]: <info>  [1759846551.7784] manager: (tap80d44241-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/236)
Oct  7 10:15:51 np0005473739 kernel: tap80d44241-e0: entered promiscuous mode
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.780 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap80d44241-e0, col_values=(('external_ids', {'iface-id': '5d22e327-7d41-463a-8ed7-53ae88715a72'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:51Z|00500|binding|INFO|Releasing lport 5d22e327-7d41-463a-8ed7-53ae88715a72 from this chassis (sb_readonly=0)
Oct  7 10:15:51 np0005473739 nova_compute[259550]: 2025-10-07 14:15:51.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.807 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/80d44241-e806-45e5-b77b-78848bbeea79.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/80d44241-e806-45e5-b77b-78848bbeea79.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.808 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7bd782d2-0c1c-4eef-b118-69398ae8652b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.809 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-80d44241-e806-45e5-b77b-78848bbeea79
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/80d44241-e806-45e5-b77b-78848bbeea79.pid.haproxy
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 80d44241-e806-45e5-b77b-78848bbeea79
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:15:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:51.811 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79', 'env', 'PROCESS_TAG=haproxy-80d44241-e806-45e5-b77b-78848bbeea79', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/80d44241-e806-45e5-b77b-78848bbeea79.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.061 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updated VIF entry in instance network info cache for port 83e99e50-2115-4dee-9274-a2a6528a8a8f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.062 2 DEBUG nova.network.neutron [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updating instance_info_cache with network_info: [{"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.077 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-83645517-a08a-46d7-b715-15b5d7f078ff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.078 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.078 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.079 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.079 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.079 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.079 2 WARNING nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-unplugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.080 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.080 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.080 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.080 2 DEBUG oslo_concurrency.lockutils [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a8585c64-eb21-491a-9a4c-b9ac6e8e4a30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.081 2 DEBUG nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] No waiting events found dispatching network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.081 2 WARNING nova.compute.manager [req-f52de0fd-73d4-48bb-a81e-9b1994af2739 req-b8e798a5-3780-4370-9e5c-a6002b20a820 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Received unexpected event network-vif-plugged-2781ab1e-ba6c-4689-8da2-ddcf85b31ca8 for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.168 2 DEBUG nova.compute.manager [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.169 2 DEBUG nova.compute.manager [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing instance network info cache due to event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.169 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:52 np0005473739 podman[322829]: 2025-10-07 14:15:52.221405399 +0000 UTC m=+0.098043731 container create 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 10:15:52 np0005473739 systemd[1]: Started libpod-conmon-61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14.scope.
Oct  7 10:15:52 np0005473739 podman[322829]: 2025-10-07 14:15:52.180526928 +0000 UTC m=+0.057165290 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:15:52 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:15:52 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0beb1c026ee7bdf433c8b1469d1aea543db15d951ca29d849193a9e68ff92284/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:15:52 np0005473739 podman[322829]: 2025-10-07 14:15:52.297136109 +0000 UTC m=+0.173774471 container init 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:15:52 np0005473739 podman[322829]: 2025-10-07 14:15:52.308835468 +0000 UTC m=+0.185473800 container start 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:15:52 np0005473739 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [NOTICE]   (322849) : New worker (322851) forked
Oct  7 10:15:52 np0005473739 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [NOTICE]   (322849) : Loading success.
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.410 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 in datapath 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 unbound from our chassis#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.418 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.440 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[47e25c95-35e6-48c9-9645-cc89719e8e93]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.470 2 DEBUG nova.network.neutron [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.476 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[94b07a04-619d-4e1e-b3e2-857629cc1e4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.482 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[827d5a3a-f7ee-471d-9c8a-45a24c4ff827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.517 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9c99a6a6-f24f-44dc-9268-1f41e330d3bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.540 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[094ebb99-e0c1-40f2-a03b-0a907def4ab5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7dfb1828-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:e5:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713411, 'reachable_time': 35189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 322865, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.559 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Successfully updated port: ae1b9c2d-384d-4134-8799-babeadd70605 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.564 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f6af2dcb-67ca-4e78-bcfb-7e94fbfdfaf8]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7dfb1828-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713425, 'tstamp': 713425}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322866, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap7dfb1828-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713428, 'tstamp': 713428}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 322866, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.566 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7dfb1828-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.572 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7dfb1828-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.573 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.573 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7dfb1828-20, col_values=(('external_ids', {'iface-id': '8933a3d5-743b-489b-a9ca-89380da9bbe0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:52.574 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.676 2 DEBUG nova.compute.manager [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.677 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.678 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.678 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.679 2 DEBUG nova.compute.manager [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.679 2 WARNING nova.compute.manager [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received unexpected event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with vm_state active and task_state None.#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.679 2 DEBUG nova.compute.manager [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.680 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.681 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.683 2 DEBUG oslo_concurrency.lockutils [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.683 2 DEBUG nova.compute.manager [req-48b530f2-64ab-4862-b314-132840ff6197 req-aecf92be-215b-4ba2-9301-9a0c72b9476b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Processing event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.686 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.686 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Instance network_info: |[{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.687 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.687 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquired lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.688 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.689 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.690 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.695 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Start _get_guest_xml network_info=[{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.704 2 WARNING nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.716 2 DEBUG nova.virt.libvirt.host [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.717 2 DEBUG nova.virt.libvirt.host [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.722 2 DEBUG nova.virt.libvirt.host [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.724 2 DEBUG nova.virt.libvirt.host [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.725 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.725 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.726 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.727 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.727 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.728 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.728 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.729 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.729 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.730 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.730 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.731 2 DEBUG nova.virt.hardware [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.736 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.780 2 DEBUG nova.network.neutron [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Updating instance_info_cache with network_info: [{"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.881 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.917 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Releasing lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.918 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Instance network_info: |[{"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.921 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Start _get_guest_xml network_info=[{"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': 'd37bdf89-ce37-478a-af4d-2b9cd0435b79'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.927 2 WARNING nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.931 2 DEBUG nova.virt.libvirt.host [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.932 2 DEBUG nova.virt.libvirt.host [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.936 2 DEBUG nova.virt.libvirt.host [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.936 2 DEBUG nova.virt.libvirt.host [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.937 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.937 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.938 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.938 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.939 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.939 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.939 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.940 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.940 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.940 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.941 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.941 2 DEBUG nova.virt.hardware [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:15:52 np0005473739 nova_compute[259550]: 2025-10-07 14:15:52.945 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1144808172' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.224 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.247 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.252 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3069102547' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.439 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.462 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.466 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1539: 305 pgs: 305 active+clean; 399 MiB data, 708 MiB used, 59 GiB / 60 GiB avail; 443 KiB/s rd, 8.1 MiB/s wr, 181 op/s
Oct  7 10:15:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2305087272' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.794 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.797 2 DEBUG nova.virt.libvirt.vif [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.797 2 DEBUG nova.network.os_vif_util [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.799 2 DEBUG nova.network.os_vif_util [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8c:cc,bridge_name='br-int',has_traffic_filtering=True,id=8718eef8-8e7a-42ab-8df9-b469e81779d9,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8718eef8-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.800 2 DEBUG nova.objects.instance [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_devices' on Instance uuid 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.836 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  <uuid>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</uuid>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  <name>instance-0000003a</name>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <nova:name>tempest-tempest.common.compute-instance-985989312</nova:name>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:15:52</nova:creationTime>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <nova:port uuid="8718eef8-8e7a-42ab-8df9-b469e81779d9">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <entry name="serial">8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</entry>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <entry name="uuid">8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</entry>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:04:8c:cc"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <target dev="tap8718eef8-8e"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/console.log" append="off"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:15:53 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:53 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:53 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:53 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.837 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Preparing to wait for external event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.842 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.843 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.843 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.844 2 DEBUG nova.virt.libvirt.vif [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:43Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.845 2 DEBUG nova.network.os_vif_util [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.846 2 DEBUG nova.network.os_vif_util [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:04:8c:cc,bridge_name='br-int',has_traffic_filtering=True,id=8718eef8-8e7a-42ab-8df9-b469e81779d9,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8718eef8-8e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.847 2 DEBUG os_vif [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8c:cc,bridge_name='br-int',has_traffic_filtering=True,id=8718eef8-8e7a-42ab-8df9-b469e81779d9,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8718eef8-8e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.848 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.849 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8718eef8-8e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.853 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8718eef8-8e, col_values=(('external_ids', {'iface-id': '8718eef8-8e7a-42ab-8df9-b469e81779d9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:04:8c:cc', 'vm-uuid': '8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:53 np0005473739 NetworkManager[44949]: <info>  [1759846553.8569] manager: (tap8718eef8-8e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/237)
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.868 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.869 2 INFO os_vif [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:04:8c:cc,bridge_name='br-int',has_traffic_filtering=True,id=8718eef8-8e7a-42ab-8df9-b469e81779d9,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8718eef8-8e')#033[00m
Oct  7 10:15:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3623727302' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.985 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.986 2 DEBUG nova.virt.libvirt.vif [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-346182642',display_name='tempest-ListServerFiltersTestJSON-instance-346182642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-346182642',id=59,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-bm2choqm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:45Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=b3d2cd05-012d-4189-bc6c-c40fc1f72c0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.987 2 DEBUG nova.network.os_vif_util [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.988 2 DEBUG nova.network.os_vif_util [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:11:48,bridge_name='br-int',has_traffic_filtering=True,id=41eef051-1c52-4c3c-9854-2ee923b4ab0e,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41eef051-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:53 np0005473739 nova_compute[259550]: 2025-10-07 14:15:53.989 2 DEBUG nova.objects.instance [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid b3d2cd05-012d-4189-bc6c-c40fc1f72c0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.074 2 DEBUG nova.network.neutron [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Updating instance_info_cache with network_info: [{"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.131 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  <uuid>b3d2cd05-012d-4189-bc6c-c40fc1f72c0f</uuid>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  <name>instance-0000003b</name>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-346182642</nova:name>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:15:52</nova:creationTime>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <nova:user uuid="d8faa7636d634de587c1631c3452264e">tempest-ListServerFiltersTestJSON-937453277-project-member</nova:user>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <nova:project uuid="972aa9372a81406990460fb46cf827e0">tempest-ListServerFiltersTestJSON-937453277</nova:project>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <nova:port uuid="41eef051-1c52-4c3c-9854-2ee923b4ab0e">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <entry name="serial">b3d2cd05-012d-4189-bc6c-c40fc1f72c0f</entry>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <entry name="uuid">b3d2cd05-012d-4189-bc6c-c40fc1f72c0f</entry>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:08:11:48"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <target dev="tap41eef051-1c"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/console.log" append="off"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:15:54 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:54 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:54 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:54 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.139 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Preparing to wait for external event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.140 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.141 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.141 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.143 2 DEBUG nova.virt.libvirt.vif [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-346182642',display_name='tempest-ListServerFiltersTestJSON-instance-346182642',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-346182642',id=59,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-bm2choqm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:45Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=b3d2cd05-012d-4189-bc6c-c40fc1f72c0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.144 2 DEBUG nova.network.os_vif_util [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.145 2 DEBUG nova.network.os_vif_util [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:08:11:48,bridge_name='br-int',has_traffic_filtering=True,id=41eef051-1c52-4c3c-9854-2ee923b4ab0e,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41eef051-1c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.146 2 DEBUG os_vif [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:11:48,bridge_name='br-int',has_traffic_filtering=True,id=41eef051-1c52-4c3c-9854-2ee923b4ab0e,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41eef051-1c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.147 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.148 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.150 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Releasing lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.151 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Instance network_info: |[{"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.155 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Start _get_guest_xml network_info=[{"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.156 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap41eef051-1c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.157 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap41eef051-1c, col_values=(('external_ids', {'iface-id': '41eef051-1c52-4c3c-9854-2ee923b4ab0e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:08:11:48', 'vm-uuid': 'b3d2cd05-012d-4189-bc6c-c40fc1f72c0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:54 np0005473739 NetworkManager[44949]: <info>  [1759846554.1597] manager: (tap41eef051-1c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/238)
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.166 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.166 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.167 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:04:8c:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.167 2 INFO nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Using config drive#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.189 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.198 2 INFO os_vif [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:08:11:48,bridge_name='br-int',has_traffic_filtering=True,id=41eef051-1c52-4c3c-9854-2ee923b4ab0e,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap41eef051-1c')#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.209 2 WARNING nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.222 2 DEBUG nova.virt.libvirt.host [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.223 2 DEBUG nova.virt.libvirt.host [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.227 2 DEBUG nova.virt.libvirt.host [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.227 2 DEBUG nova.virt.libvirt.host [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.227 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.228 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ccc66c07-b66e-4392-aa22-965659a0d015',id=5,is_public=True,memory_mb=192,name='m1.micro',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.228 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.228 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.229 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.229 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.229 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.230 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.230 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.230 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.230 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.231 2 DEBUG nova.virt.hardware [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.234 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.303 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.304 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.304 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No VIF found with MAC fa:16:3e:08:11:48, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.305 2 INFO nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Using config drive#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.329 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2198154260' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.719 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.750 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.755 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.990 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updated VIF entry in instance network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:15:54 np0005473739 nova_compute[259550]: 2025-10-07 14:15:54.991 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.013 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.014 2 DEBUG nova.compute.manager [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received event network-changed-41eef051-1c52-4c3c-9854-2ee923b4ab0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.014 2 DEBUG nova.compute.manager [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Refreshing instance network info cache due to event network-changed-41eef051-1c52-4c3c-9854-2ee923b4ab0e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.015 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.015 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.015 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Refreshing network info cache for port 41eef051-1c52-4c3c-9854-2ee923b4ab0e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.080 2 DEBUG nova.compute.manager [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.081 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.081 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.081 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.081 2 DEBUG nova.compute.manager [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Processing event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.082 2 DEBUG nova.compute.manager [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.082 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.082 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.083 2 DEBUG oslo_concurrency.lockutils [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.083 2 DEBUG nova.compute.manager [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No event matching network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 in dict_keys([('network-vif-plugged', '83e99e50-2115-4dee-9274-a2a6528a8a8f')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.083 2 WARNING nova.compute.manager [req-9f574d12-ac65-4e42-bee7-bdf7d8362226 req-c9d7a349-f94b-4cd0-aff2-5f4a762110b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.130 2 INFO nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Creating config drive at /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.137 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgcef8k4i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.191 2 INFO nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Creating config drive at /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.198 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcbua35t5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:15:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1778173598' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.251 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.251 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.251 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.252 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.252 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No event matching network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b in dict_keys([('network-vif-plugged', '83e99e50-2115-4dee-9274-a2a6528a8a8f')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.252 2 WARNING nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.252 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.252 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.253 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.253 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.253 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Processing event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.253 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received event network-changed-ae1b9c2d-384d-4134-8799-babeadd70605 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.253 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Refreshing instance network info cache due to event network-changed-ae1b9c2d-384d-4134-8799-babeadd70605. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.254 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.254 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.254 2 DEBUG nova.network.neutron [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Refreshing network info cache for port ae1b9c2d-384d-4134-8799-babeadd70605 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.259 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.260 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance event wait completed in 4 seconds for network-vif-plugged,network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.262 2 DEBUG nova.virt.libvirt.vif [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-289403182',display_name='tempest-ListServerFiltersTestJSON-instance-289403182',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-289403182',id=60,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-d58h1k0w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:48Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=a23d6956-f85a-40b1-9e54-1b32d2af191e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.263 2 DEBUG nova.network.os_vif_util [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.263 2 DEBUG nova.network.os_vif_util [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.265 2 DEBUG nova.objects.instance [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid a23d6956-f85a-40b1-9e54-1b32d2af191e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.266 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846555.2655506, 83645517-a08a-46d7-b715-15b5d7f078ff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.267 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.269 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.274 2 INFO nova.virt.libvirt.driver [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance spawned successfully.#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.275 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.289 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846540.2850916, a8585c64-eb21-491a-9a4c-b9ac6e8e4a30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.289 2 INFO nova.compute.manager [-] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.292 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  <uuid>a23d6956-f85a-40b1-9e54-1b32d2af191e</uuid>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  <name>instance-0000003c</name>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  <memory>196608</memory>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-289403182</nova:name>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:15:54</nova:creationTime>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.micro">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <nova:memory>192</nova:memory>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <nova:user uuid="d8faa7636d634de587c1631c3452264e">tempest-ListServerFiltersTestJSON-937453277-project-member</nova:user>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <nova:project uuid="972aa9372a81406990460fb46cf827e0">tempest-ListServerFiltersTestJSON-937453277</nova:project>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <nova:port uuid="ae1b9c2d-384d-4134-8799-babeadd70605">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <entry name="serial">a23d6956-f85a-40b1-9e54-1b32d2af191e</entry>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <entry name="uuid">a23d6956-f85a-40b1-9e54-1b32d2af191e</entry>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a23d6956-f85a-40b1-9e54-1b32d2af191e_disk">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:09:1b:e2"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <target dev="tapae1b9c2d-38"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/console.log" append="off"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:15:55 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:15:55 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:15:55 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:15:55 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.292 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Preparing to wait for external event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.293 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.293 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.293 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.294 2 DEBUG nova.virt.libvirt.vif [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-289403182',display_name='tempest-ListServerFiltersTestJSON-instance-289403182',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-289403182',id=60,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-d58h1k0w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:15:48Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=a23d6956-f85a-40b1-9e54-1b32d2af191e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.294 2 DEBUG nova.network.os_vif_util [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.294 2 DEBUG nova.network.os_vif_util [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.295 2 DEBUG os_vif [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.296 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.297 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.298 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgcef8k4i" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.330 2 DEBUG nova.storage.rbd_utils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.337 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.380 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcbua35t5" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.415 2 DEBUG nova.storage.rbd_utils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] rbd image 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.421 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.473 2 DEBUG nova.compute.manager [None req-d5792eef-1d55-49bd-8c08-5b8ee2c23537 - - - - - -] [instance: a8585c64-eb21-491a-9a4c-b9ac6e8e4a30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.475 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapae1b9c2d-38, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapae1b9c2d-38, col_values=(('external_ids', {'iface-id': 'ae1b9c2d-384d-4134-8799-babeadd70605', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:1b:e2', 'vm-uuid': 'a23d6956-f85a-40b1-9e54-1b32d2af191e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:55 np0005473739 NetworkManager[44949]: <info>  [1759846555.4791] manager: (tapae1b9c2d-38): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/239)
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.485 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.493 2 INFO os_vif [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38')#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.498 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.498 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.499 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.500 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.500 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.500 2 DEBUG nova.virt.libvirt.driver [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.527 2 DEBUG oslo_concurrency.processutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config b3d2cd05-012d-4189-bc6c-c40fc1f72c0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.528 2 INFO nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Deleting local config drive /var/lib/nova/instances/b3d2cd05-012d-4189-bc6c-c40fc1f72c0f/disk.config because it was imported into RBD.#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.539 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.577 2 INFO nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Took 22.31 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.578 2 DEBUG nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.597 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.598 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.598 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] No VIF found with MAC fa:16:3e:09:1b:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.598 2 INFO nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Using config drive#033[00m
Oct  7 10:15:55 np0005473739 NetworkManager[44949]: <info>  [1759846555.5994] manager: (tap41eef051-1c): new Tun device (/org/freedesktop/NetworkManager/Devices/240)
Oct  7 10:15:55 np0005473739 kernel: tap41eef051-1c: entered promiscuous mode
Oct  7 10:15:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:55Z|00501|binding|INFO|Claiming lport 41eef051-1c52-4c3c-9854-2ee923b4ab0e for this chassis.
Oct  7 10:15:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:55Z|00502|binding|INFO|41eef051-1c52-4c3c-9854-2ee923b4ab0e: Claiming fa:16:3e:08:11:48 10.100.0.8
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.627 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:08:11:48 10.100.0.8'], port_security=['fa:16:3e:08:11:48 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'b3d2cd05-012d-4189-bc6c-c40fc1f72c0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=41eef051-1c52-4c3c-9854-2ee923b4ab0e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.629 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 41eef051-1c52-4c3c-9854-2ee923b4ab0e in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 bound to our chassis#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.630 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879#033[00m
Oct  7 10:15:55 np0005473739 systemd-udevd[323204]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:15:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:55Z|00503|binding|INFO|Setting lport 41eef051-1c52-4c3c-9854-2ee923b4ab0e ovn-installed in OVS
Oct  7 10:15:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:55Z|00504|binding|INFO|Setting lport 41eef051-1c52-4c3c-9854-2ee923b4ab0e up in Southbound
Oct  7 10:15:55 np0005473739 NetworkManager[44949]: <info>  [1759846555.6551] device (tap41eef051-1c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:55 np0005473739 NetworkManager[44949]: <info>  [1759846555.6557] device (tap41eef051-1c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.654 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d0d1058e-8b2d-476e-ae80-d70ae546a5d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.652 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:55 np0005473739 systemd-machined[214580]: New machine qemu-67-instance-0000003b.
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:55 np0005473739 systemd[1]: Started Virtual Machine qemu-67-instance-0000003b.
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.695 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[101dc38b-f377-4f4f-b35c-2a32021b9d6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.699 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[61b9b183-6deb-4cd7-b2b6-0e924dd80ab6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.719 2 DEBUG oslo_concurrency.processutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.298s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.720 2 INFO nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Deleting local config drive /var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/disk.config because it was imported into RBD.#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.735 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1233b24e-e418-4817-9968-f8b482b530cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1540: 305 pgs: 305 active+clean; 432 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 9.2 MiB/s wr, 280 op/s
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.764 2 INFO nova.compute.manager [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Took 23.46 seconds to build instance.#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.763 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2d4df6ef-70d9-477b-bdb1-9644cdfef7f2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 832, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323227, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.785 2 DEBUG oslo_concurrency.lockutils [None req-a827ced0-b08a-4370-a57b-9e4e84dfbdde ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.787 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0157e45c-1782-44dd-84b0-c4b9c1e8f9e4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713317, 'tstamp': 713317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323231, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713321, 'tstamp': 713321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323231, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.789 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.805 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.806 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:55 np0005473739 NetworkManager[44949]: <info>  [1759846555.8103] manager: (tap8718eef8-8e): new Tun device (/org/freedesktop/NetworkManager/Devices/241)
Oct  7 10:15:55 np0005473739 kernel: tap8718eef8-8e: entered promiscuous mode
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:55Z|00505|binding|INFO|Claiming lport 8718eef8-8e7a-42ab-8df9-b469e81779d9 for this chassis.
Oct  7 10:15:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:55Z|00506|binding|INFO|8718eef8-8e7a-42ab-8df9-b469e81779d9: Claiming fa:16:3e:04:8c:cc 10.100.0.13
Oct  7 10:15:55 np0005473739 NetworkManager[44949]: <info>  [1759846555.8285] device (tap8718eef8-8e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:55 np0005473739 NetworkManager[44949]: <info>  [1759846555.8294] device (tap8718eef8-8e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.833 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:04:8c:cc 10.100.0.13'], port_security=['fa:16:3e:04:8c:cc 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '67397a02-5eae-462d-b5c7-e258b23b19a8', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8718eef8-8e7a-42ab-8df9-b469e81779d9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.834 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8718eef8-8e7a-42ab-8df9-b469e81779d9 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.836 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:15:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:55Z|00507|binding|INFO|Setting lport 8718eef8-8e7a-42ab-8df9-b469e81779d9 ovn-installed in OVS
Oct  7 10:15:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:55Z|00508|binding|INFO|Setting lport 8718eef8-8e7a-42ab-8df9-b469e81779d9 up in Southbound
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.854 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:55 np0005473739 nova_compute[259550]: 2025-10-07 14:15:55.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:55 np0005473739 systemd-machined[214580]: New machine qemu-68-instance-0000003a.
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.861 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67312a5e-6981-4e71-a1d1-1b8527b4bcad]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:55 np0005473739 systemd[1]: Started Virtual Machine qemu-68-instance-0000003a.
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.902 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eec4375e-1a82-4e86-a398-ca3e7675f258]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.905 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[56dced3b-94cb-4d06-b803-cc116133102b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.944 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[61ad6104-4994-498a-9aee-def5864402d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:15:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:55.979 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b414c6f6-0d33-42c5-bc5e-9af2573134a6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323258, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.001 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4891d30-a519-4354-b387-24b241c54a9a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711913, 'tstamp': 711913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323259, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711917, 'tstamp': 711917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323259, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.003 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.010 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.010 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.011 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.012 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.272 2 INFO nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Creating config drive at /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.278 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4x6znez execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.425 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpz4x6znez" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.460 2 DEBUG nova.storage.rbd_utils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] rbd image a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.469 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.690 2 DEBUG oslo_concurrency.processutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config a23d6956-f85a-40b1-9e54-1b32d2af191e_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.221s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.691 2 INFO nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Deleting local config drive /var/lib/nova/instances/a23d6956-f85a-40b1-9e54-1b32d2af191e/disk.config because it was imported into RBD.#033[00m
Oct  7 10:15:56 np0005473739 kernel: tapae1b9c2d-38: entered promiscuous mode
Oct  7 10:15:56 np0005473739 NetworkManager[44949]: <info>  [1759846556.7641] manager: (tapae1b9c2d-38): new Tun device (/org/freedesktop/NetworkManager/Devices/242)
Oct  7 10:15:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:56Z|00509|binding|INFO|Claiming lport ae1b9c2d-384d-4134-8799-babeadd70605 for this chassis.
Oct  7 10:15:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:56Z|00510|binding|INFO|ae1b9c2d-384d-4134-8799-babeadd70605: Claiming fa:16:3e:09:1b:e2 10.100.0.12
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:56 np0005473739 NetworkManager[44949]: <info>  [1759846556.7820] device (tapae1b9c2d-38): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:15:56 np0005473739 NetworkManager[44949]: <info>  [1759846556.7828] device (tapae1b9c2d-38): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:56Z|00511|binding|INFO|Setting lport ae1b9c2d-384d-4134-8799-babeadd70605 ovn-installed in OVS
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:15:56Z|00512|binding|INFO|Setting lport ae1b9c2d-384d-4134-8799-babeadd70605 up in Southbound
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.869 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:1b:e2 10.100.0.12'], port_security=['fa:16:3e:09:1b:e2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a23d6956-f85a-40b1-9e54-1b32d2af191e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '2', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ae1b9c2d-384d-4134-8799-babeadd70605) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.871 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ae1b9c2d-384d-4134-8799-babeadd70605 in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 bound to our chassis#033[00m
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.873 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879#033[00m
Oct  7 10:15:56 np0005473739 systemd-machined[214580]: New machine qemu-69-instance-0000003c.
Oct  7 10:15:56 np0005473739 systemd[1]: Started Virtual Machine qemu-69-instance-0000003c.
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.900 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846556.8991299, 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.900 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] VM Started (Lifecycle Event)#033[00m
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.904 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[40eb2261-006f-4ffa-809f-1fd44553adf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.951 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.956 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ec715a3b-b0ad-488a-88c8-56b383f2288b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:56.963 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a7748904-beb8-41b9-9155-6f15c1ce16ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.963 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846556.8996463, 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.964 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.988 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:56 np0005473739 nova_compute[259550]: 2025-10-07 14:15:56.993 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.004 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d4be01af-f53a-44f4-ba40-eccb77f576bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.029 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3f926b2c-e042-4ff0-84ff-9e114030d3cf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323414, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.068 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7dcf99c4-270a-42bb-bdc8-fac3b6108f0f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713317, 'tstamp': 713317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323415, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713321, 'tstamp': 713321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323415, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:15:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.070 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:15:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.073 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.074 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.074 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:15:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:15:57.075 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.130 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.187 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846557.1872563, b3d2cd05-012d-4189-bc6c-c40fc1f72c0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.188 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] VM Started (Lifecycle Event)#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.209 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.214 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846557.1875792, b3d2cd05-012d-4189-bc6c-c40fc1f72c0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.215 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.240 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.244 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.270 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.740 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846557.7395525, a23d6956-f85a-40b1-9e54-1b32d2af191e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.741 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] VM Started (Lifecycle Event)#033[00m
Oct  7 10:15:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1541: 305 pgs: 305 active+clean; 432 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 7.5 MiB/s wr, 274 op/s
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.762 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.767 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846557.7397404, a23d6956-f85a-40b1-9e54-1b32d2af191e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.767 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.786 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.790 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:57 np0005473739 nova_compute[259550]: 2025-10-07 14:15:57.821 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.103 2 DEBUG nova.compute.manager [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.103 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.104 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.104 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.104 2 DEBUG nova.compute.manager [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Processing event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.104 2 DEBUG nova.compute.manager [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.104 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.105 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.105 2 DEBUG oslo_concurrency.lockutils [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.105 2 DEBUG nova.compute.manager [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] No waiting events found dispatching network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.105 2 WARNING nova.compute.manager [req-f31883a7-0927-44c3-a4ab-2998eb67fabe req-cc37bf6d-de23-4f74-ab57-e7156617fb6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Received unexpected event network-vif-plugged-41eef051-1c52-4c3c-9854-2ee923b4ab0e for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.106 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.109 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846558.109567, b3d2cd05-012d-4189-bc6c-c40fc1f72c0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.110 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.113 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.117 2 INFO nova.virt.libvirt.driver [-] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Instance spawned successfully.#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.117 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.141 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.156 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.161 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.161 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.162 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.162 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.162 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.163 2 DEBUG nova.virt.libvirt.driver [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.189 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.197 2 DEBUG nova.compute.manager [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.198 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.198 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.198 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.198 2 DEBUG nova.compute.manager [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Processing event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.198 2 DEBUG nova.compute.manager [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.199 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.199 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.199 2 DEBUG oslo_concurrency.lockutils [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.199 2 DEBUG nova.compute.manager [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] No waiting events found dispatching network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.199 2 WARNING nova.compute.manager [req-fa3e1bbf-bc95-448a-a514-0199b6449036 req-92471bbb-dcb7-4830-9524-a953e1e78e2f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received unexpected event network-vif-plugged-8718eef8-8e7a-42ab-8df9-b469e81779d9 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.200 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.218 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846558.2131026, 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.218 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.221 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.232 2 INFO nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Took 12.90 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.233 2 DEBUG nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.237 2 INFO nova.virt.libvirt.driver [-] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Instance spawned successfully.#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.237 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.245 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.248 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.287 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.292 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.293 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.293 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.293 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.294 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.294 2 DEBUG nova.virt.libvirt.driver [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.333 2 INFO nova.compute.manager [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Took 14.14 seconds to build instance.#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.355 2 DEBUG oslo_concurrency.lockutils [None req-ec1d3974-77b8-4a7b-8bd9-74c0843ff262 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.377s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.361 2 INFO nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Took 14.29 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.361 2 DEBUG nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.433 2 INFO nova.compute.manager [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Took 15.56 seconds to build instance.#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.449 2 DEBUG oslo_concurrency.lockutils [None req-302e17a3-bf9c-42b6-93fc-e1a86c1409a1 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 15.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.948 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Updated VIF entry in instance network info cache for port 41eef051-1c52-4c3c-9854-2ee923b4ab0e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.948 2 DEBUG nova.network.neutron [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b3d2cd05-012d-4189-bc6c-c40fc1f72c0f] Updating instance_info_cache with network_info: [{"id": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "address": "fa:16:3e:08:11:48", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap41eef051-1c", "ovs_interfaceid": "41eef051-1c52-4c3c-9854-2ee923b4ab0e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:58 np0005473739 nova_compute[259550]: 2025-10-07 14:15:58.966 2 DEBUG oslo_concurrency.lockutils [req-0a6f44d9-d4d8-457f-ad7a-176aa2c7f053 req-ee81ac00-72de-4f3b-bfcc-cd1b12b7d154 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b3d2cd05-012d-4189-bc6c-c40fc1f72c0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:59 np0005473739 nova_compute[259550]: 2025-10-07 14:15:59.046 2 DEBUG nova.network.neutron [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Updated VIF entry in instance network info cache for port ae1b9c2d-384d-4134-8799-babeadd70605. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:15:59 np0005473739 nova_compute[259550]: 2025-10-07 14:15:59.047 2 DEBUG nova.network.neutron [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Updating instance_info_cache with network_info: [{"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:15:59 np0005473739 nova_compute[259550]: 2025-10-07 14:15:59.063 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a23d6956-f85a-40b1-9e54-1b32d2af191e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:15:59 np0005473739 nova_compute[259550]: 2025-10-07 14:15:59.063 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:15:59 np0005473739 nova_compute[259550]: 2025-10-07 14:15:59.064 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:15:59 np0005473739 nova_compute[259550]: 2025-10-07 14:15:59.064 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:15:59 np0005473739 nova_compute[259550]: 2025-10-07 14:15:59.064 2 DEBUG oslo_concurrency.lockutils [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:15:59 np0005473739 nova_compute[259550]: 2025-10-07 14:15:59.064 2 DEBUG nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:15:59 np0005473739 nova_compute[259550]: 2025-10-07 14:15:59.065 2 WARNING nova.compute.manager [req-1912c1ee-8f49-4273-a296-95e66f869b63 req-0b3f3385-5c51-4de3-83d3-a1a6e4579739 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:15:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1542: 305 pgs: 305 active+clean; 432 MiB data, 722 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 5.3 MiB/s wr, 317 op/s
Oct  7 10:16:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:00.047 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:00.048 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:00.050 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:00 np0005473739 podman[323459]: 2025-10-07 14:16:00.195512029 +0000 UTC m=+0.130665213 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:16:00 np0005473739 podman[323458]: 2025-10-07 14:16:00.20124779 +0000 UTC m=+0.129276616 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.859 2 DEBUG nova.compute.manager [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.860 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.861 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.866 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.869 2 DEBUG nova.compute.manager [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Processing event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.873 2 DEBUG nova.compute.manager [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.875 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.877 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.880 2 DEBUG oslo_concurrency.lockutils [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.882 2 DEBUG nova.compute.manager [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] No waiting events found dispatching network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.883 2 WARNING nova.compute.manager [req-2ebb76fc-6146-49f5-be8a-846519872d8a req-cde56a73-b945-41f5-a520-73fe958b1d95 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Received unexpected event network-vif-plugged-ae1b9c2d-384d-4134-8799-babeadd70605 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.891 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.908 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846560.8999472, a23d6956-f85a-40b1-9e54-1b32d2af191e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.908 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.911 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.917 2 INFO nova.virt.libvirt.driver [-] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Instance spawned successfully.#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.918 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.944 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.954 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.955 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.955 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.955 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.956 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.957 2 INFO nova.compute.manager [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Terminating instance#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.958 2 DEBUG nova.compute.manager [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.962 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.967 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.967 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.968 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.968 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.969 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:00 np0005473739 nova_compute[259550]: 2025-10-07 14:16:00.969 2 DEBUG nova.virt.libvirt.driver [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.002 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:16:01 np0005473739 kernel: tap2c09dd65-3e (unregistering): left promiscuous mode
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.032 2 INFO nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Took 12.42 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.033 2 DEBUG nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:01 np0005473739 NetworkManager[44949]: <info>  [1759846561.0372] device (tap2c09dd65-3e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:16:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:01Z|00513|binding|INFO|Releasing lport 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b from this chassis (sb_readonly=0)
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:01Z|00514|binding|INFO|Setting lport 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b down in Southbound
Oct  7 10:16:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:01Z|00515|binding|INFO|Removing iface tap2c09dd65-3e ovn-installed in OVS
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.067 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:3a:0b 10.100.0.159'], port_security=['fa:16:3e:7f:3a:0b 10.100.0.159'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.159/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9366dbfb-d976-4858-b6b3-90aea7266ca1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.068 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b in datapath 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 unbound from our chassis#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.073 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51#033[00m
Oct  7 10:16:01 np0005473739 kernel: tap83e99e50-21 (unregistering): left promiscuous mode
Oct  7 10:16:01 np0005473739 NetworkManager[44949]: <info>  [1759846561.0814] device (tap83e99e50-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:01Z|00516|binding|INFO|Releasing lport 83e99e50-2115-4dee-9274-a2a6528a8a8f from this chassis (sb_readonly=0)
Oct  7 10:16:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:01Z|00517|binding|INFO|Setting lport 83e99e50-2115-4dee-9274-a2a6528a8a8f down in Southbound
Oct  7 10:16:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:01Z|00518|binding|INFO|Removing iface tap83e99e50-21 ovn-installed in OVS
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.103 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99f9f42e-3a8b-4876-b8e7-23be12f86a6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.109 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e7:89:37 10.100.1.194'], port_security=['fa:16:3e:e7:89:37 10.100.1.194'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.194/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-80d44241-e806-45e5-b77b-78848bbeea79', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=303ccb5e-5aaa-463a-b70f-452ecb37838d, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=83e99e50-2115-4dee-9274-a2a6528a8a8f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:01 np0005473739 kernel: tapc1ccd58c-db (unregistering): left promiscuous mode
Oct  7 10:16:01 np0005473739 NetworkManager[44949]: <info>  [1759846561.1251] device (tapc1ccd58c-db): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.126 2 INFO nova.compute.manager [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Took 13.84 seconds to build instance.#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:01Z|00519|binding|INFO|Releasing lport c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 from this chassis (sb_readonly=0)
Oct  7 10:16:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:01Z|00520|binding|INFO|Setting lport c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 down in Southbound
Oct  7 10:16:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:01Z|00521|binding|INFO|Removing iface tapc1ccd58c-db ovn-installed in OVS
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.156 2 DEBUG oslo_concurrency.lockutils [None req-cf928720-22c4-4b23-ba27-760ca9621ecc d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.161 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:34:8f:80 10.100.0.60'], port_security=['fa:16:3e:34:8f:80 10.100.0.60'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.60/24', 'neutron:device_id': '83645517-a08a-46d7-b715-15b5d7f078ff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9366dbfb-d976-4858-b6b3-90aea7266ca1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.176 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[726fa838-4fda-4e15-aee7-113108e03664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.183 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[da9189d8-d447-4c29-9ddb-aa700e952981]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000038.scope: Deactivated successfully.
Oct  7 10:16:01 np0005473739 systemd[1]: machine-qemu\x2d66\x2dinstance\x2d00000038.scope: Consumed 6.462s CPU time.
Oct  7 10:16:01 np0005473739 systemd-machined[214580]: Machine qemu-66-instance-00000038 terminated.
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.239 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dbeb0cd1-f276-411f-87d9-f74ed2f6a84c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.266 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1af95b-9d3b-4213-a0ea-5203fe3a292d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7dfb1828-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:50:e5:c1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 832, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 153], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713411, 'reachable_time': 35189, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323515, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.290 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5047ff30-1aea-4bf3-aafd-2cf8a6abe863]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7dfb1828-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713425, 'tstamp': 713425}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323516, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.255'], ['IFA_LABEL', 'tap7dfb1828-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713428, 'tstamp': 713428}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 323516, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.292 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7dfb1828-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.308 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7dfb1828-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.309 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.309 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7dfb1828-20, col_values=(('external_ids', {'iface-id': '8933a3d5-743b-489b-a9ca-89380da9bbe0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.310 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.311 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 83e99e50-2115-4dee-9274-a2a6528a8a8f in datapath 80d44241-e806-45e5-b77b-78848bbeea79 unbound from our chassis#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.313 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 80d44241-e806-45e5-b77b-78848bbeea79, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.314 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fb0fda1-3c1d-4927-b792-a9a3f054ed7c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.315 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79 namespace which is not needed anymore#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 NetworkManager[44949]: <info>  [1759846561.4078] manager: (tap83e99e50-21): new Tun device (/org/freedesktop/NetworkManager/Devices/243)
Oct  7 10:16:01 np0005473739 NetworkManager[44949]: <info>  [1759846561.4192] manager: (tapc1ccd58c-db): new Tun device (/org/freedesktop/NetworkManager/Devices/244)
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.443 2 INFO nova.virt.libvirt.driver [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Instance destroyed successfully.#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.444 2 DEBUG nova.objects.instance [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lazy-loading 'resources' on Instance uuid 83645517-a08a-46d7-b715-15b5d7f078ff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.458 2 DEBUG nova.virt.libvirt.vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:55Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.460 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "address": "fa:16:3e:7f:3a:0b", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.159", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c09dd65-3e", "ovs_interfaceid": "2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.461 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.461 2 DEBUG os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c09dd65-3e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.483 2 INFO os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:3a:0b,bridge_name='br-int',has_traffic_filtering=True,id=2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c09dd65-3e')#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.485 2 DEBUG nova.virt.libvirt.vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:55Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.485 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "address": "fa:16:3e:e7:89:37", "network": {"id": "80d44241-e806-45e5-b77b-78848bbeea79", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-770253262", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.194", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap83e99e50-21", "ovs_interfaceid": "83e99e50-2115-4dee-9274-a2a6528a8a8f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.486 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.487 2 DEBUG os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.489 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.489 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap83e99e50-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.491 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.500 2 INFO os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e7:89:37,bridge_name='br-int',has_traffic_filtering=True,id=83e99e50-2115-4dee-9274-a2a6528a8a8f,network=Network(80d44241-e806-45e5-b77b-78848bbeea79),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap83e99e50-21')#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.501 2 DEBUG nova.virt.libvirt.vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1819774511',display_name='tempest-ServersTestMultiNic-server-1819774511',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1819774511',id=56,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-svs0lr8c',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1
',image_min_ram='0',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:55Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=83645517-a08a-46d7-b715-15b5d7f078ff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.501 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "address": "fa:16:3e:34:8f:80", "network": {"id": "7dfb1828-2cb7-4626-9426-ecd9cd6a2b51", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-10478537", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc1ccd58c-db", "ovs_interfaceid": "c1ccd58c-dbf6-4d2c-9a75-1effb73b5105", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.502 2 DEBUG nova.network.os_vif_util [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.502 2 DEBUG os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.504 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1ccd58c-db, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.505 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.507 2 INFO os_vif [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:34:8f:80,bridge_name='br-int',has_traffic_filtering=True,id=c1ccd58c-dbf6-4d2c-9a75-1effb73b5105,network=Network(7dfb1828-2cb7-4626-9426-ecd9cd6a2b51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc1ccd58c-db')#033[00m
Oct  7 10:16:01 np0005473739 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [NOTICE]   (322849) : haproxy version is 2.8.14-c23fe91
Oct  7 10:16:01 np0005473739 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [NOTICE]   (322849) : path to executable is /usr/sbin/haproxy
Oct  7 10:16:01 np0005473739 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [WARNING]  (322849) : Exiting Master process...
Oct  7 10:16:01 np0005473739 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [WARNING]  (322849) : Exiting Master process...
Oct  7 10:16:01 np0005473739 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [ALERT]    (322849) : Current worker (322851) exited with code 143 (Terminated)
Oct  7 10:16:01 np0005473739 neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79[322845]: [WARNING]  (322849) : All workers exited. Exiting... (0)
Oct  7 10:16:01 np0005473739 systemd[1]: libpod-61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14.scope: Deactivated successfully.
Oct  7 10:16:01 np0005473739 podman[323566]: 2025-10-07 14:16:01.56248613 +0000 UTC m=+0.063285013 container died 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:16:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14-userdata-shm.mount: Deactivated successfully.
Oct  7 10:16:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0beb1c026ee7bdf433c8b1469d1aea543db15d951ca29d849193a9e68ff92284-merged.mount: Deactivated successfully.
Oct  7 10:16:01 np0005473739 podman[323566]: 2025-10-07 14:16:01.644485336 +0000 UTC m=+0.145284179 container cleanup 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  7 10:16:01 np0005473739 systemd[1]: libpod-conmon-61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14.scope: Deactivated successfully.
Oct  7 10:16:01 np0005473739 podman[323611]: 2025-10-07 14:16:01.745902075 +0000 UTC m=+0.067470513 container remove 61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:16:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1543: 305 pgs: 305 active+clean; 432 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 2.9 MiB/s wr, 327 op/s
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.761 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[602b881e-c047-4d83-9fab-737b2b31951c]: (4, ('Tue Oct  7 02:16:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79 (61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14)\n61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14\nTue Oct  7 02:16:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79 (61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14)\n61fcfee2df3e3026a8a5f902d5a6cc83346cdc0722e812192c8dee835cf4ee14\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.764 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8b6b2b-8a65-428e-a968-368e8c0503d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.767 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap80d44241-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:01 np0005473739 kernel: tap80d44241-e0: left promiscuous mode
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.774 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.790 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66179e6e-710f-4d1a-9b4a-698df8bd302a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 nova_compute[259550]: 2025-10-07 14:16:01.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.813 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f92cf045-a484-4413-9b57-5338be3245c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.815 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f0059ec0-6e2f-449d-93e2-206627fdd4ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.837 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[91e8c480-4718-4410-b50b-37f7f0af9aa7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713505, 'reachable_time': 26007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323627, 'error': None, 'target': 'ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.844 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-80d44241-e806-45e5-b77b-78848bbeea79 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:16:01 np0005473739 systemd[1]: run-netns-ovnmeta\x2d80d44241\x2de806\x2d45e5\x2db77b\x2d78848bbeea79.mount: Deactivated successfully.
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.844 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[2054a9d3-004e-4a1f-998f-318ae9b27028]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.845 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 in datapath 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 unbound from our chassis#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.847 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5616f83e-f6f1-4002-b2c6-918304c70ceb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:01.850 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 namespace which is not needed anymore#033[00m
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.027 2 INFO nova.virt.libvirt.driver [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Deleting instance files /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff_del#033[00m
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.028 2 INFO nova.virt.libvirt.driver [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Deletion of /var/lib/nova/instances/83645517-a08a-46d7-b715-15b5d7f078ff_del complete#033[00m
Oct  7 10:16:02 np0005473739 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [NOTICE]   (322773) : haproxy version is 2.8.14-c23fe91
Oct  7 10:16:02 np0005473739 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [NOTICE]   (322773) : path to executable is /usr/sbin/haproxy
Oct  7 10:16:02 np0005473739 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [WARNING]  (322773) : Exiting Master process...
Oct  7 10:16:02 np0005473739 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [ALERT]    (322773) : Current worker (322775) exited with code 143 (Terminated)
Oct  7 10:16:02 np0005473739 neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51[322769]: [WARNING]  (322773) : All workers exited. Exiting... (0)
Oct  7 10:16:02 np0005473739 systemd[1]: libpod-7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263.scope: Deactivated successfully.
Oct  7 10:16:02 np0005473739 podman[323644]: 2025-10-07 14:16:02.058534373 +0000 UTC m=+0.068760707 container died 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.078 2 DEBUG nova.compute.manager [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.079 2 DEBUG oslo_concurrency.lockutils [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.079 2 DEBUG oslo_concurrency.lockutils [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.080 2 DEBUG oslo_concurrency.lockutils [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.080 2 DEBUG nova.compute.manager [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-unplugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.080 2 DEBUG nova.compute.manager [req-6bd94138-9a41-4daf-9389-0fdcf79a8ac4 req-c49f21c3-6235-4cbe-b972-42a081071df5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:16:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263-userdata-shm.mount: Deactivated successfully.
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.094 2 INFO nova.compute.manager [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Took 1.14 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.095 2 DEBUG oslo.service.loopingcall [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.095 2 DEBUG nova.compute.manager [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:16:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-785017db9e4b610117d840e5812177c07ef3c9b381f8f952e3b666c54f1281e4-merged.mount: Deactivated successfully.
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.095 2 DEBUG nova.network.neutron [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:16:02 np0005473739 podman[323644]: 2025-10-07 14:16:02.107453866 +0000 UTC m=+0.117680200 container cleanup 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true)
Oct  7 10:16:02 np0005473739 systemd[1]: libpod-conmon-7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263.scope: Deactivated successfully.
Oct  7 10:16:02 np0005473739 podman[323671]: 2025-10-07 14:16:02.202015754 +0000 UTC m=+0.060177900 container remove 7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  7 10:16:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.209 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a8f079f-d756-44de-80c8-dcf8e47c78ac]: (4, ('Tue Oct  7 02:16:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 (7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263)\n7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263\nTue Oct  7 02:16:02 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 (7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263)\n7720fbdb0ef3d707c4aa02e42bf952fc49e087b0c4853fccfabf7857f7ecb263\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.211 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4737c261-2d86-4e24-9543-a4f13e4382df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.212 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7dfb1828-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:02 np0005473739 kernel: tap7dfb1828-20: left promiscuous mode
Oct  7 10:16:02 np0005473739 nova_compute[259550]: 2025-10-07 14:16:02.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.246 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f7ba97e7-f04e-4565-9f9e-235f98ed35f7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b320e4fb-f99a-4954-be09-7a797b4369ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.270 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f7eaf56e-340d-485c-ad81-357a68563734]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.290 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[209a31bb-14a6-45bb-84ed-29c6a949e684]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713402, 'reachable_time': 15069, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 323684, 'error': None, 'target': 'ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.293 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7dfb1828-2cb7-4626-9426-ecd9cd6a2b51 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:16:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:02.294 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[af94bda6-8dea-4370-a23b-07b32ef37223]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:02 np0005473739 systemd[1]: run-netns-ovnmeta\x2d7dfb1828\x2d2cb7\x2d4626\x2d9426\x2decd9cd6a2b51.mount: Deactivated successfully.
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.106 2 DEBUG nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.106 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.107 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.107 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.107 2 DEBUG nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-unplugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.107 2 DEBUG nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.107 2 DEBUG nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.108 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.108 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.108 2 DEBUG oslo_concurrency.lockutils [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.108 2 DEBUG nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:03 np0005473739 nova_compute[259550]: 2025-10-07 14:16:03.108 2 WARNING nova.compute.manager [req-b8dad071-a4df-457d-a25a-6a936095a1f4 req-47c310ca-4e07-46b3-877a-565649ce3d7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:16:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1544: 305 pgs: 305 active+clean; 432 MiB data, 723 MiB used, 59 GiB / 60 GiB avail; 6.4 MiB/s rd, 1.2 MiB/s wr, 282 op/s
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.373 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.373 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.374 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.374 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.375 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.375 2 WARNING nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.376 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.377 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.377 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.378 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.379 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-unplugged-83e99e50-2115-4dee-9274-a2a6528a8a8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.379 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-unplugged-83e99e50-2115-4dee-9274-a2a6528a8a8f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.380 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.381 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.381 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.382 2 DEBUG oslo_concurrency.lockutils [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.382 2 DEBUG nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] No waiting events found dispatching network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:04 np0005473739 nova_compute[259550]: 2025-10-07 14:16:04.383 2 WARNING nova.compute.manager [req-e1f5f95a-4b4e-4372-97ae-90dd0e708fa3 req-bf5c33fd-84a1-4adf-b867-a946616606e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received unexpected event network-vif-plugged-83e99e50-2115-4dee-9274-a2a6528a8a8f for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:16:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:05.584 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:05.586 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:16:05 np0005473739 nova_compute[259550]: 2025-10-07 14:16:05.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:05 np0005473739 nova_compute[259550]: 2025-10-07 14:16:05.621 2 DEBUG nova.network.neutron [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:05 np0005473739 nova_compute[259550]: 2025-10-07 14:16:05.643 2 INFO nova.compute.manager [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Took 3.55 seconds to deallocate network for instance.#033[00m
Oct  7 10:16:05 np0005473739 nova_compute[259550]: 2025-10-07 14:16:05.700 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:05 np0005473739 nova_compute[259550]: 2025-10-07 14:16:05.700 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1545: 305 pgs: 305 active+clean; 407 MiB data, 710 MiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 1.2 MiB/s wr, 370 op/s
Oct  7 10:16:05 np0005473739 nova_compute[259550]: 2025-10-07 14:16:05.851 2 DEBUG nova.compute.manager [req-4ba8ac50-150c-4637-a6d2-c621395491c4 req-966faf70-c734-4376-bd1e-233774ec7775 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-deleted-83e99e50-2115-4dee-9274-a2a6528a8a8f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:05 np0005473739 nova_compute[259550]: 2025-10-07 14:16:05.851 2 DEBUG nova.compute.manager [req-4ba8ac50-150c-4637-a6d2-c621395491c4 req-966faf70-c734-4376-bd1e-233774ec7775 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-deleted-2c09dd65-3e2c-4ea4-b89a-4f4ed19a0a6b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:05 np0005473739 nova_compute[259550]: 2025-10-07 14:16:05.916 2 DEBUG oslo_concurrency.processutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:16:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1332087720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.459 2 DEBUG oslo_concurrency.processutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.543s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.467 2 DEBUG nova.compute.provider_tree [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.495 2 DEBUG nova.scheduler.client.report [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.520 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.556 2 INFO nova.scheduler.client.report [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Deleted allocations for instance 83645517-a08a-46d7-b715-15b5d7f078ff
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.633 2 DEBUG oslo_concurrency.lockutils [None req-42c288c1-3546-4cde-a899-092780b56f07 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "83645517-a08a-46d7-b715-15b5d7f078ff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.758 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.759 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.780 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.991 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:16:06 np0005473739 nova_compute[259550]: 2025-10-07 14:16:06.992 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.000 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.001 2 INFO nova.compute.claims [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.224 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:16:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:16:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3815651720' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.751 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.757 2 DEBUG nova.compute.provider_tree [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:16:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1546: 305 pgs: 305 active+clean; 386 MiB data, 701 MiB used, 59 GiB / 60 GiB avail; 7.7 MiB/s rd, 59 KiB/s wr, 316 op/s
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.774 2 DEBUG nova.scheduler.client.report [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.796 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.805s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.798 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.840 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.840 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.864 2 INFO nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.880 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.960 2 DEBUG nova.compute.manager [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.961 2 DEBUG nova.compute.manager [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.961 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.961 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.961 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.990 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.991 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:16:07 np0005473739 nova_compute[259550]: 2025-10-07 14:16:07.992 2 INFO nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Creating image(s)
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.017 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.042 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.089 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.093 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.133 2 DEBUG nova.policy [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '39e4681256e44d92ac5928e4f8e0d348', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ef9390a1dd804281beea149e0086b360', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.178 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.179 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.179 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.180 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:16:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:08Z|00068|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:6d:07 10.100.0.11
Oct  7 10:16:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:08Z|00069|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:6d:07 10.100.0.11
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.205 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.210 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.509 2 DEBUG nova.compute.manager [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.510 2 DEBUG nova.compute.manager [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing instance network info cache due to event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.510 2 DEBUG oslo_concurrency.lockutils [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.510 2 DEBUG oslo_concurrency.lockutils [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.511 2 DEBUG nova.network.neutron [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.530 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.614 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] resizing rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:16:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:08Z|00522|binding|INFO|Releasing lport c1da8c7c-1812-4ab6-94d3-da2a23226328 from this chassis (sb_readonly=0)
Oct  7 10:16:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:08Z|00523|binding|INFO|Releasing lport 39e8b537-b932-40c7-bb18-5e90a537af13 from this chassis (sb_readonly=0)
Oct  7 10:16:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:08Z|00524|binding|INFO|Releasing lport 879f54f7-e219-4616-9199-264d02fdd4cf from this chassis (sb_readonly=0)
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.762 2 DEBUG nova.objects.instance [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'migration_context' on Instance uuid c14b06ec-ce54-4081-8d72-b22529c3b0b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.790 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.791 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Ensure instance console log exists: /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.792 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.792 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:16:08 np0005473739 nova_compute[259550]: 2025-10-07 14:16:08.792 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.103 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Successfully created port: ac811e84-843c-4265-b536-c653f6135295 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.154 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.155 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.155 2 DEBUG nova.objects.instance [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid 188af2a5-ff92-4f42-8bdc-5dec2f24d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:16:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:09.590 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.608 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.610 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.628 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.628 2 DEBUG nova.compute.manager [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.628 2 DEBUG nova.compute.manager [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing instance network info cache due to event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.629 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:16:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1547: 305 pgs: 305 active+clean; 428 MiB data, 721 MiB used, 59 GiB / 60 GiB avail; 8.0 MiB/s rd, 2.5 MiB/s wr, 372 op/s
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.804 2 DEBUG nova.objects.instance [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid 188af2a5-ff92-4f42-8bdc-5dec2f24d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:09 np0005473739 nova_compute[259550]: 2025-10-07 14:16:09.818 2 DEBUG nova.network.neutron [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.144 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Successfully updated port: ac811e84-843c-4265-b536-c653f6135295 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.154 2 DEBUG nova.network.neutron [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updated VIF entry in instance network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.154 2 DEBUG nova.network.neutron [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.160 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.160 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquired lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.161 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.181 2 DEBUG oslo_concurrency.lockutils [req-b1eb01f0-7c03-4a21-a490-e0bffc97f1a4 req-c6b8cb42-ed30-4d6d-832a-e9da5d0501bc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.187 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.187 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.223 2 DEBUG nova.compute.manager [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-changed-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.223 2 DEBUG nova.compute.manager [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Refreshing instance network info cache due to event network-changed-ac811e84-843c-4265-b536-c653f6135295. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.224 2 DEBUG oslo_concurrency.lockutils [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.307 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.342 2 DEBUG oslo_concurrency.lockutils [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.342 2 DEBUG oslo_concurrency.lockutils [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.343 2 DEBUG nova.compute.manager [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.351 2 DEBUG nova.compute.manager [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.352 2 DEBUG nova.objects.instance [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'flavor' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.382 2 DEBUG nova.virt.libvirt.driver [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.449 2 DEBUG nova.policy [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.930 2 DEBUG nova.compute.manager [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.930 2 DEBUG nova.compute.manager [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.930 2 DEBUG oslo_concurrency.lockutils [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.930 2 DEBUG oslo_concurrency.lockutils [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:10 np0005473739 nova_compute[259550]: 2025-10-07 14:16:10.930 2 DEBUG nova.network.neutron [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:16:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.127 2 DEBUG nova.network.neutron [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updating instance_info_cache with network_info: [{"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.156 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Releasing lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.156 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Instance network_info: |[{"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.156 2 DEBUG oslo_concurrency.lockutils [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.157 2 DEBUG nova.network.neutron [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Refreshing network info cache for port ac811e84-843c-4265-b536-c653f6135295 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.159 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Start _get_guest_xml network_info=[{"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.167 2 WARNING nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.172 2 DEBUG nova.virt.libvirt.host [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.172 2 DEBUG nova.virt.libvirt.host [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.182 2 DEBUG nova.virt.libvirt.host [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.183 2 DEBUG nova.virt.libvirt.host [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.183 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.183 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.184 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.184 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.184 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.184 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.185 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.185 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.185 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.185 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.186 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.186 2 DEBUG nova.virt.hardware [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.188 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.531 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updated VIF entry in instance network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.532 2 DEBUG nova.network.neutron [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.548 2 DEBUG oslo_concurrency.lockutils [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.549 2 DEBUG nova.compute.manager [req-2456d483-1df3-4352-b4ad-d481ccfcabfe req-8655ddfd-b880-4c3b-bc3e-1409753f7f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Received event network-vif-deleted-c1ccd58c-dbf6-4d2c-9a75-1effb73b5105 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:16:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3701021763' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.701 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.724 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.728 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1548: 305 pgs: 305 active+clean; 465 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 6.6 MiB/s rd, 3.9 MiB/s wr, 331 op/s
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.817 2 DEBUG nova.network.neutron [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Successfully updated port: 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:16:11 np0005473739 nova_compute[259550]: 2025-10-07 14:16:11.832 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:16:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3969290928' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.218 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.220 2 DEBUG nova.virt.libvirt.vif [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-203431907',display_name='tempest-ServerActionsTestOtherA-server-203431907',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-203431907',id=61,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-m8v55e0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOth
erA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:07Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=c14b06ec-ce54-4081-8d72-b22529c3b0b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.220 2 DEBUG nova.network.os_vif_util [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.226 2 DEBUG nova.network.os_vif_util [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.228 2 DEBUG nova.objects.instance [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'pci_devices' on Instance uuid c14b06ec-ce54-4081-8d72-b22529c3b0b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.246 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  <uuid>c14b06ec-ce54-4081-8d72-b22529c3b0b7</uuid>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  <name>instance-0000003d</name>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerActionsTestOtherA-server-203431907</nova:name>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:16:11</nova:creationTime>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <nova:user uuid="39e4681256e44d92ac5928e4f8e0d348">tempest-ServerActionsTestOtherA-508284156-project-member</nova:user>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <nova:project uuid="ef9390a1dd804281beea149e0086b360">tempest-ServerActionsTestOtherA-508284156</nova:project>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <nova:port uuid="ac811e84-843c-4265-b536-c653f6135295">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <entry name="serial">c14b06ec-ce54-4081-8d72-b22529c3b0b7</entry>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <entry name="uuid">c14b06ec-ce54-4081-8d72-b22529c3b0b7</entry>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:66:97:c9"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <target dev="tapac811e84-84"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/console.log" append="off"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:16:12 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:16:12 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:16:12 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:16:12 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.253 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Preparing to wait for external event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.254 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.255 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.255 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.256 2 DEBUG nova.virt.libvirt.vif [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-203431907',display_name='tempest-ServerActionsTestOtherA-server-203431907',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-203431907',id=61,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-m8v55e0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActi
onsTestOtherA-508284156-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:07Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=c14b06ec-ce54-4081-8d72-b22529c3b0b7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.257 2 DEBUG nova.network.os_vif_util [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.258 2 DEBUG nova.network.os_vif_util [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.259 2 DEBUG os_vif [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.261 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac811e84-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.266 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapac811e84-84, col_values=(('external_ids', {'iface-id': 'ac811e84-843c-4265-b536-c653f6135295', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:66:97:c9', 'vm-uuid': 'c14b06ec-ce54-4081-8d72-b22529c3b0b7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:12 np0005473739 NetworkManager[44949]: <info>  [1759846572.2704] manager: (tapac811e84-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/245)
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.278 2 INFO os_vif [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84')#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.313 2 DEBUG nova.network.neutron [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.314 2 DEBUG nova.network.neutron [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.328 2 DEBUG oslo_concurrency.lockutils [req-13fbabd3-6078-4072-91bc-31000ce21c76 req-4b14bbc3-95b0-4900-89ed-cdca3310fb4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.329 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.330 2 DEBUG nova.network.neutron [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.355 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.355 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.355 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] No VIF found with MAC fa:16:3e:66:97:c9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.356 2 INFO nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Using config drive#033[00m
Oct  7 10:16:12 np0005473739 podman[323960]: 2025-10-07 14:16:12.402768166 +0000 UTC m=+0.085053848 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.407 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:16:12 np0005473739 podman[323961]: 2025-10-07 14:16:12.470144916 +0000 UTC m=+0.150995690 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.475 2 DEBUG nova.network.neutron [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updated VIF entry in instance network info cache for port ac811e84-843c-4265-b536-c653f6135295. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.476 2 DEBUG nova.network.neutron [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updating instance_info_cache with network_info: [{"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.492 2 DEBUG oslo_concurrency.lockutils [req-8e9d058d-845e-4fab-b159-33f9e1c23a60 req-77c462a4-066a-45ce-bfa5-6760038fdf08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.501 2 WARNING nova.network.neutron [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it#033[00m
Oct  7 10:16:12 np0005473739 kernel: tap0b66f2d4-e0 (unregistering): left promiscuous mode
Oct  7 10:16:12 np0005473739 NetworkManager[44949]: <info>  [1759846572.7148] device (tap0b66f2d4-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:16:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:12Z|00525|binding|INFO|Releasing lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc from this chassis (sb_readonly=0)
Oct  7 10:16:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:12Z|00526|binding|INFO|Setting lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc down in Southbound
Oct  7 10:16:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:12Z|00527|binding|INFO|Removing iface tap0b66f2d4-e0 ovn-installed in OVS
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.743 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:6d:07 10.100.0.11'], port_security=['fa:16:3e:1e:6d:07 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cfd30417-ee01-41d3-8a93-e49cd960d338', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.744 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 unbound from our chassis#033[00m
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.746 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.771 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a4153ad-f381-4f17-b4b7-0cdf831cc7eb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:12 np0005473739 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Deactivated successfully.
Oct  7 10:16:12 np0005473739 systemd[1]: machine-qemu\x2d65\x2dinstance\x2d00000039.scope: Consumed 17.283s CPU time.
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.809 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4a454797-f62f-4aee-9e94-652aba1cea5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:12 np0005473739 systemd-machined[214580]: Machine qemu-65-instance-00000039 terminated.
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.815 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ad6ebb83-c427-4631-b293-80c232062570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.842 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9224cce6-a9d0-4f77-bb56-e50f7d72c801]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.862 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2073594-0ae4-48b3-bda7-fcca755e08b4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 832, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324034, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.880 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78498cbc-833f-4ad7-b38d-422b4feea01c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713317, 'tstamp': 713317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324035, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713321, 'tstamp': 713321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324035, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.882 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.893 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.894 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.894 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:12.895 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.907 2 INFO nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Creating config drive at /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.920 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_gwamto execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:12 np0005473739 nova_compute[259550]: 2025-10-07 14:16:12.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.069 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi_gwamto" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.109 2 DEBUG nova.storage.rbd_utils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] rbd image c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.113 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.164 2 DEBUG nova.compute.manager [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.165 2 DEBUG nova.compute.manager [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-1b8e1852-2a5e-4c50-9ab0-110dfb492a49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.165 2 DEBUG oslo_concurrency.lockutils [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.310 2 DEBUG oslo_concurrency.processutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config c14b06ec-ce54-4081-8d72-b22529c3b0b7_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.311 2 INFO nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Deleting local config drive /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7/disk.config because it was imported into RBD.#033[00m
Oct  7 10:16:13 np0005473739 NetworkManager[44949]: <info>  [1759846573.3809] manager: (tapac811e84-84): new Tun device (/org/freedesktop/NetworkManager/Devices/246)
Oct  7 10:16:13 np0005473739 systemd-udevd[324025]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:16:13 np0005473739 kernel: tapac811e84-84: entered promiscuous mode
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:13Z|00528|binding|INFO|Claiming lport ac811e84-843c-4265-b536-c653f6135295 for this chassis.
Oct  7 10:16:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:13Z|00529|binding|INFO|ac811e84-843c-4265-b536-c653f6135295: Claiming fa:16:3e:66:97:c9 10.100.0.6
Oct  7 10:16:13 np0005473739 NetworkManager[44949]: <info>  [1759846573.4027] device (tapac811e84-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:16:13 np0005473739 NetworkManager[44949]: <info>  [1759846573.4079] device (tapac811e84-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.410 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:97:c9 10.100.0.6'], port_security=['fa:16:3e:66:97:c9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c14b06ec-ce54-4081-8d72-b22529c3b0b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd609a9ff-183f-496e-83cc-641ffdd2b1f9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ac811e84-843c-4265-b536-c653f6135295) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.411 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ac811e84-843c-4265-b536-c653f6135295 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb bound to our chassis#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.414 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:13Z|00530|binding|INFO|Setting lport ac811e84-843c-4265-b536-c653f6135295 ovn-installed in OVS
Oct  7 10:16:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:13Z|00531|binding|INFO|Setting lport ac811e84-843c-4265-b536-c653f6135295 up in Southbound
Oct  7 10:16:13 np0005473739 systemd-machined[214580]: New machine qemu-70-instance-0000003d.
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.436 2 INFO nova.virt.libvirt.driver [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance shutdown successfully after 3 seconds.#033[00m
Oct  7 10:16:13 np0005473739 systemd[1]: Started Virtual Machine qemu-70-instance-0000003d.
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.443 2 INFO nova.virt.libvirt.driver [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance destroyed successfully.#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.443 2 DEBUG nova.objects.instance [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'numa_topology' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.443 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8df42042-2036-4ab0-b452-207d66bae9ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.460 2 DEBUG nova.compute.manager [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.503 2 DEBUG oslo_concurrency.lockutils [None req-aea1b537-300a-48ae-b7b4-7ecae23e4aac d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.508 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e9c53b23-9e5e-47b7-9913-c61e29f91cb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.513 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f5758e32-d28e-4dd9-b996-1bb1273d67d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.549 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[248e213f-dd23-481f-9e2a-526dda93297b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.572 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[53704c9d-dbe5-4764-99f0-060f999294fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 13, 'rx_bytes': 916, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324117, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.594 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[066740e7-624e-4965-b4e9-f8f3c165b121]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324118, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324118, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:13 np0005473739 nova_compute[259550]: 2025-10-07 14:16:13.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.605 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.606 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.606 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:13.607 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1549: 305 pgs: 305 active+clean; 465 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 224 op/s
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.462 2 DEBUG nova.network.neutron [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.524 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.525 2 DEBUG oslo_concurrency.lockutils [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.526 2 DEBUG nova.network.neutron [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.529 2 DEBUG nova.virt.libvirt.vif [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.530 2 DEBUG nova.network.os_vif_util [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.530 2 DEBUG nova.network.os_vif_util [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.531 2 DEBUG os_vif [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.532 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.532 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b8e1852-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.535 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b8e1852-2a, col_values=(('external_ids', {'iface-id': '1b8e1852-2a5e-4c50-9ab0-110dfb492a49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:ab:6d', 'vm-uuid': '188af2a5-ff92-4f42-8bdc-5dec2f24d46a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:14 np0005473739 NetworkManager[44949]: <info>  [1759846574.5388] manager: (tap1b8e1852-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/247)
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.552 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846574.5500708, c14b06ec-ce54-4081-8d72-b22529c3b0b7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.552 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] VM Started (Lifecycle Event)#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.556 2 INFO os_vif [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a')#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.557 2 DEBUG nova.virt.libvirt.vif [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.557 2 DEBUG nova.network.os_vif_util [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.558 2 DEBUG nova.network.os_vif_util [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.561 2 DEBUG nova.virt.libvirt.guest [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:1e:ab:6d"/>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <target dev="tap1b8e1852-2a"/>
Oct  7 10:16:14 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:16:14 np0005473739 nova_compute[259550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  7 10:16:14 np0005473739 NetworkManager[44949]: <info>  [1759846574.5715] manager: (tap1b8e1852-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/248)
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.572 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:14 np0005473739 kernel: tap1b8e1852-2a: entered promiscuous mode
Oct  7 10:16:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:14Z|00532|binding|INFO|Claiming lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for this chassis.
Oct  7 10:16:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:14Z|00533|binding|INFO|1b8e1852-2a5e-4c50-9ab0-110dfb492a49: Claiming fa:16:3e:1e:ab:6d 10.100.0.9
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:14 np0005473739 NetworkManager[44949]: <info>  [1759846574.5897] device (tap1b8e1852-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:16:14 np0005473739 NetworkManager[44949]: <info>  [1759846574.5904] device (tap1b8e1852-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.588 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ab:6d 10.100.0.9'], port_security=['fa:16:3e:1e:ab:6d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '188af2a5-ff92-4f42-8bdc-5dec2f24d46a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '2', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1b8e1852-2a5e-4c50-9ab0-110dfb492a49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.590 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.591 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.593 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846574.5524998, c14b06ec-ce54-4081-8d72-b22529c3b0b7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.594 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:16:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:14Z|00534|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 ovn-installed in OVS
Oct  7 10:16:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:14Z|00535|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 up in Southbound
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.607 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0af29e5c-e2a9-4164-9661-4ff426e2ddb8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.612 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.622 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.643 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.652 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e3f3e11b-8fe8-4544-a864-5ffe68ca6996]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.656 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bf622f28-f4de-4be8-980d-12cfcbb9b3c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.661 2 DEBUG nova.virt.libvirt.driver [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.661 2 DEBUG nova.virt.libvirt.driver [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.662 2 DEBUG nova.virt.libvirt.driver [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:a5:aa:77, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.662 2 DEBUG nova.virt.libvirt.driver [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:1e:ab:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.681 2 DEBUG nova.virt.libvirt.guest [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <nova:name>tempest-tempest.common.compute-instance-229960760</nova:name>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:16:14</nova:creationTime>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:16:14 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:    <nova:port uuid="62690261-dde3-43ca-929a-e6b75a76bafb">
Oct  7 10:16:14 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:    <nova:port uuid="1b8e1852-2a5e-4c50-9ab0-110dfb492a49">
Oct  7 10:16:14 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:16:14 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:16:14 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:16:14 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.704 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f53bba93-51ad-490a-bbe5-92807fc126b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.708 2 DEBUG oslo_concurrency.lockutils [None req-14206d4b-e3c9-4ba3-8371-7cd3c53a0d8c eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 5.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.725 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac1bf657-d9d6-4336-80c6-2c9c0ea9caa8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 7, 'rx_bytes': 916, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324185, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.745 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[44080cb5-ad2e-484c-8a10-00f6d0175c41]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711913, 'tstamp': 711913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324186, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711917, 'tstamp': 711917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324186, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.747 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:14 np0005473739 nova_compute[259550]: 2025-10-07 14:16:14.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.753 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.754 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.754 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:14.754 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:15 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #46. Immutable memtables: 3.
Oct  7 10:16:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1550: 305 pgs: 305 active+clean; 465 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 3.9 MiB/s wr, 231 op/s
Oct  7 10:16:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.047 2 DEBUG nova.compute.manager [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.047 2 DEBUG oslo_concurrency.lockutils [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.048 2 DEBUG oslo_concurrency.lockutils [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.048 2 DEBUG oslo_concurrency.lockutils [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.048 2 DEBUG nova.compute.manager [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.048 2 WARNING nova.compute.manager [req-a1fe0b13-3f27-4797-ab34-c34b0f8f7155 req-4468b9fe-0d67-4e2a-a8ca-e905aa13480f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received unexpected event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.126 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-unplugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.126 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.127 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.127 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.127 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-unplugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.127 2 WARNING nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received unexpected event network-vif-unplugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.127 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.128 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.128 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.128 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.128 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.129 2 WARNING nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received unexpected event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.129 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.129 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.129 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.129 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.130 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Processing event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.130 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.131 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.131 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.131 2 DEBUG oslo_concurrency.lockutils [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.131 2 DEBUG nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] No waiting events found dispatching network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.132 2 WARNING nova.compute.manager [req-f4415f47-198c-4b3a-a156-f301c7512b88 req-d52c533e-0ed3-425b-9884-13f37609f17b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received unexpected event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.133 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.146 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846576.1448803, c14b06ec-ce54-4081-8d72-b22529c3b0b7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.147 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.152 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.159 2 INFO nova.virt.libvirt.driver [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Instance spawned successfully.#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.160 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.171 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.176 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.187 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.188 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.188 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.189 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.189 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.189 2 DEBUG nova.virt.libvirt.driver [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.209 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.298 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.299 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.327 2 INFO nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Took 8.34 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.328 2 DEBUG nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.337 2 DEBUG nova.objects.instance [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid 188af2a5-ff92-4f42-8bdc-5dec2f24d46a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.371 2 DEBUG nova.virt.libvirt.vif [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.372 2 DEBUG nova.network.os_vif_util [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.373 2 DEBUG nova.network.os_vif_util [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.379 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.382 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.385 2 DEBUG nova.virt.libvirt.driver [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Attempting to detach device tap1b8e1852-2a from instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.386 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:1e:ab:6d"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <target dev="tap1b8e1852-2a"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.394 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.400 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface>not found in domain: <domain type='kvm' id='63'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <name>instance-00000037</name>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <uuid>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</uuid>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:name>tempest-tempest.common.compute-instance-229960760</nova:name>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:16:14</nova:creationTime>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:port uuid="62690261-dde3-43ca-929a-e6b75a76bafb">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:port uuid="1b8e1852-2a5e-4c50-9ab0-110dfb492a49">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='serial'>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='uuid'>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk' index='2'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config' index='1'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:a5:aa:77'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target dev='tap62690261-dd'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:1e:ab:6d'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target dev='tap1b8e1852-2a'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='net1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log' append='off'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/0'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log' append='off'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c542,c990</label>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c542,c990</imagelabel>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.401 2 INFO nova.virt.libvirt.driver [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tap1b8e1852-2a from instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a from the persistent domain config.
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.401 2 DEBUG nova.virt.libvirt.driver [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] (1/8): Attempting to detach device tap1b8e1852-2a with device alias net1 from instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.402 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:1e:ab:6d"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <target dev="tap1b8e1852-2a"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.416 2 INFO nova.compute.manager [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Took 9.57 seconds to build instance.
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.435 2 DEBUG nova.network.neutron [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.436 2 DEBUG nova.network.neutron [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.437 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846561.4340413, 83645517-a08a-46d7-b715-15b5d7f078ff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.438 2 INFO nova.compute.manager [-] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] VM Stopped (Lifecycle Event)
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.447 2 DEBUG oslo_concurrency.lockutils [None req-201a9ecb-7e61-4e42-b14c-e22ba1586ef5 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.459 2 DEBUG oslo_concurrency.lockutils [req-ea0d7e9c-aaef-42f4-a560-4c5b4d161a7a req-41c8df02-12cf-4d21-b9ed-9d9105b5847f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.473 2 DEBUG nova.compute.manager [None req-ddf38e79-546a-4003-83fb-7a0f529aec3e - - - - - -] [instance: 83645517-a08a-46d7-b715-15b5d7f078ff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:16:16 np0005473739 kernel: tap1b8e1852-2a (unregistering): left promiscuous mode
Oct  7 10:16:16 np0005473739 NetworkManager[44949]: <info>  [1759846576.4857] device (tap1b8e1852-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:16:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:16Z|00536|binding|INFO|Releasing lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 from this chassis (sb_readonly=0)
Oct  7 10:16:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:16Z|00537|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 down in Southbound
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:16Z|00538|binding|INFO|Removing iface tap1b8e1852-2a ovn-installed in OVS
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.508 2 DEBUG nova.virt.libvirt.driver [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Start waiting for the detach event from libvirt for device tap1b8e1852-2a with device alias net1 for instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.513 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759846576.5135515, 188af2a5-ff92-4f42-8bdc-5dec2f24d46a => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.514 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.523 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ab:6d 10.100.0.9'], port_security=['fa:16:3e:1e:ab:6d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '188af2a5-ff92-4f42-8bdc-5dec2f24d46a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '4', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1b8e1852-2a5e-4c50-9ab0-110dfb492a49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.523 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface>not found in domain: <domain type='kvm' id='63'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <name>instance-00000037</name>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <uuid>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</uuid>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:name>tempest-tempest.common.compute-instance-229960760</nova:name>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:16:14</nova:creationTime>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:port uuid="62690261-dde3-43ca-929a-e6b75a76bafb">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:port uuid="1b8e1852-2a5e-4c50-9ab0-110dfb492a49">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='serial'>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='uuid'>188af2a5-ff92-4f42-8bdc-5dec2f24d46a</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.524 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 unbound from our chassis#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.526 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk' index='2'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/188af2a5-ff92-4f42-8bdc-5dec2f24d46a_disk.config' index='1'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:a5:aa:77'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target dev='tap62690261-dd'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log' append='off'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/0'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/188af2a5-ff92-4f42-8bdc-5dec2f24d46a/console.log' append='off'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c542,c990</label>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c542,c990</imagelabel>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.523 2 INFO nova.virt.libvirt.driver [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tap1b8e1852-2a from instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a from the live domain config.#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.524 2 DEBUG nova.virt.libvirt.vif [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-229960760',display_name='tempest-tempest.common.compute-instance-229960760',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-229960760',id=55,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:36Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-c0jd8utk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=188af2a5-ff92-4f42-8bdc-5dec2f24d46a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.525 2 DEBUG nova.network.os_vif_util [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.526 2 DEBUG nova.network.os_vif_util [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.526 2 DEBUG os_vif [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.528 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1b8e1852-2a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.550 2 INFO os_vif [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a')#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.552 2 DEBUG nova.virt.libvirt.guest [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:name>tempest-tempest.common.compute-instance-229960760</nova:name>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:16:16</nova:creationTime>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    <nova:port uuid="62690261-dde3-43ca-929a-e6b75a76bafb">
Oct  7 10:16:16 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:16:16 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:16:16 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.564 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b361a39-8000-4704-a60c-42b432466fe0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.623 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c3061a57-eb48-463a-b3cb-2a3e200326ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.626 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[161b1f34-f532-4234-b74d-48f8ec9611c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.657 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8effd6f0-a80c-4be1-a963-2848f54d7064]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.680 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a67296e-968b-40ec-ab5d-d1462a930206]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 916, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324204, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.706 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4486dc30-9d89-44c3-95b5-024f2d4a45ab]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711913, 'tstamp': 711913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324205, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711917, 'tstamp': 711917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324205, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:16 np0005473739 nova_compute[259550]: 2025-10-07 14:16:16.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.715 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.716 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.716 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:16.718 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:16Z|00070|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:08:11:48 10.100.0.8
Oct  7 10:16:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:16Z|00071|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:08:11:48 10.100.0.8
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.170 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'flavor' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.195 2 DEBUG oslo_concurrency.lockutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.196 2 DEBUG oslo_concurrency.lockutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquired lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.197 2 DEBUG nova.network.neutron [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.197 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'info_cache' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.467 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.468 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.488 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.559 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.559 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1551: 305 pgs: 305 active+clean; 481 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 5.0 MiB/s wr, 168 op/s
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.803 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:16:17 np0005473739 nova_compute[259550]: 2025-10-07 14:16:17.804 2 INFO nova.compute.claims [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:16:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:17Z|00072|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:04:8c:cc 10.100.0.13
Oct  7 10:16:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:17Z|00073|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:04:8c:cc 10.100.0.13
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.096 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.227 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.228 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.228 2 DEBUG nova.network.neutron [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.285 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.285 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.286 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.286 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.286 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.286 2 WARNING nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received unexpected event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.286 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-unplugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.287 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.287 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.287 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.287 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-unplugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.287 2 WARNING nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received unexpected event network-vif-unplugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.288 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.288 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.288 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.288 2 DEBUG oslo_concurrency.lockutils [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "188af2a5-ff92-4f42-8bdc-5dec2f24d46a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.289 2 DEBUG nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] No waiting events found dispatching network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.289 2 WARNING nova.compute.manager [req-1e4f5762-45b2-4eeb-85b0-2859bee75199 req-3494c4fd-900e-4c38-b387-1c96d3d99f4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received unexpected event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:16:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:16:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1147183102' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.685 2 DEBUG nova.network.neutron [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Updating instance_info_cache with network_info: [{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.688 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.695 2 DEBUG nova.compute.provider_tree [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.715 2 DEBUG oslo_concurrency.lockutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Releasing lock "refresh_cache-cfd30417-ee01-41d3-8a93-e49cd960d338" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.721 2 DEBUG nova.scheduler.client.report [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.745 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.186s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.747 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.765 2 INFO nova.virt.libvirt.driver [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance destroyed successfully.#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.765 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'numa_topology' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.780 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'resources' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.799 2 DEBUG nova.virt.libvirt.vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-3008329',display_name='tempest-ListServerFiltersTestJSON-instance-3008329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-3008329',id=57,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-hm3q3ej4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:13Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=cfd30417-ee01-41d3-8a93-e49cd960d338,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.800 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.801 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.801 2 DEBUG os_vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.805 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0b66f2d4-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.810 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.811 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.817 2 INFO os_vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0')#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.825 2 DEBUG nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Start _get_guest_xml network_info=[{"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None 
block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.831 2 INFO nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.846 2 WARNING nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.854 2 DEBUG nova.virt.libvirt.host [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.855 2 DEBUG nova.virt.libvirt.host [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.857 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.863 2 DEBUG nova.virt.libvirt.host [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.864 2 DEBUG nova.virt.libvirt.host [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.864 2 DEBUG nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.865 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.865 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.865 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.865 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.866 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.866 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.866 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.867 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.867 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.867 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.867 2 DEBUG nova.virt.hardware [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.867 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'vcpu_model' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:18 np0005473739 nova_compute[259550]: 2025-10-07 14:16:18.901 2 DEBUG oslo_concurrency.processutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.021 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.023 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.023 2 INFO nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Creating image(s)#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.055 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.085 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.135 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.143 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.205 2 DEBUG nova.policy [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ff37c390826e43079eff2a1423ccc2b8', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.256 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.257 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.258 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.258 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.287 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.293 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:16:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/326782982' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.513 2 DEBUG oslo_concurrency.processutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.611s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.564 2 DEBUG oslo_concurrency.processutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.649 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.356s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.749 2 INFO nova.network.neutron [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.749 2 DEBUG nova.network.neutron [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1552: 305 pgs: 305 active+clean; 532 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 9.1 MiB/s wr, 250 op/s
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.776 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] resizing rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.824 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.855 2 DEBUG oslo_concurrency.lockutils [None req-d2a74fdc-469f-4659-b87a-3c2d8e30b227 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-188af2a5-ff92-4f42-8bdc-5dec2f24d46a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.872 2 DEBUG nova.compute.manager [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-changed-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.873 2 DEBUG nova.compute.manager [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Refreshing instance network info cache due to event network-changed-ac811e84-843c-4265-b536-c653f6135295. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.873 2 DEBUG oslo_concurrency.lockutils [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.873 2 DEBUG oslo_concurrency.lockutils [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.873 2 DEBUG nova.network.neutron [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Refreshing network info cache for port ac811e84-843c-4265-b536-c653f6135295 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.933 2 DEBUG nova.objects.instance [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lazy-loading 'migration_context' on Instance uuid 4aa20e30-d71a-4765-9b3e-a72a156d2c88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.948 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.949 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Ensure instance console log exists: /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.949 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.950 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:19 np0005473739 nova_compute[259550]: 2025-10-07 14:16:19.950 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:19Z|00074|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:1b:e2 10.100.0.12
Oct  7 10:16:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:19Z|00075|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:1b:e2 10.100.0.12
Oct  7 10:16:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:16:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2810400117' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.189 2 DEBUG oslo_concurrency.processutils [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.625s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.191 2 DEBUG nova.virt.libvirt.vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-3008329',display_name='tempest-ListServerFiltersTestJSON-instance-3008329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-3008329',id=57,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-hm3q3ej4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:13Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=cfd30417-ee01-41d3-8a93-e49cd960d338,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.191 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.193 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.194 2 DEBUG nova.objects.instance [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'pci_devices' on Instance uuid cfd30417-ee01-41d3-8a93-e49cd960d338 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.226 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Successfully created port: 68a7ca31-4ee2-4e32-9574-3113f63090cf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.231 2 DEBUG nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  <uuid>cfd30417-ee01-41d3-8a93-e49cd960d338</uuid>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  <name>instance-00000039</name>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <nova:name>tempest-ListServerFiltersTestJSON-instance-3008329</nova:name>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:16:18</nova:creationTime>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <nova:user uuid="d8faa7636d634de587c1631c3452264e">tempest-ListServerFiltersTestJSON-937453277-project-member</nova:user>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <nova:project uuid="972aa9372a81406990460fb46cf827e0">tempest-ListServerFiltersTestJSON-937453277</nova:project>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <nova:port uuid="0b66f2d4-e098-4b4c-902f-2a9a2a9764cc">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <entry name="serial">cfd30417-ee01-41d3-8a93-e49cd960d338</entry>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <entry name="uuid">cfd30417-ee01-41d3-8a93-e49cd960d338</entry>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/cfd30417-ee01-41d3-8a93-e49cd960d338_disk">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/cfd30417-ee01-41d3-8a93-e49cd960d338_disk.config">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:1e:6d:07"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <target dev="tap0b66f2d4-e0"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/cfd30417-ee01-41d3-8a93-e49cd960d338/console.log" append="off"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <input type="keyboard" bus="usb"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:16:20 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:16:20 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:16:20 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:16:20 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.237 2 DEBUG nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.237 2 DEBUG nova.virt.libvirt.driver [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.238 2 DEBUG nova.virt.libvirt.vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:39Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-3008329',display_name='tempest-ListServerFiltersTestJSON-instance-3008329',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-3008329',id=57,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-hm3q3ej4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:13Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=cfd30417-ee01-41d3-8a93-e49cd960d338,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.238 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "address": "fa:16:3e:1e:6d:07", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0b66f2d4-e0", "ovs_interfaceid": "0b66f2d4-e098-4b4c-902f-2a9a2a9764cc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.239 2 DEBUG nova.network.os_vif_util [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.239 2 DEBUG os_vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.241 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.245 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0b66f2d4-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.246 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0b66f2d4-e0, col_values=(('external_ids', {'iface-id': '0b66f2d4-e098-4b4c-902f-2a9a2a9764cc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:6d:07', 'vm-uuid': 'cfd30417-ee01-41d3-8a93-e49cd960d338'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:20 np0005473739 NetworkManager[44949]: <info>  [1759846580.2486] manager: (tap0b66f2d4-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/249)
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.255 2 INFO os_vif [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:6d:07,bridge_name='br-int',has_traffic_filtering=True,id=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0b66f2d4-e0')#033[00m
Oct  7 10:16:20 np0005473739 kernel: tap0b66f2d4-e0: entered promiscuous mode
Oct  7 10:16:20 np0005473739 NetworkManager[44949]: <info>  [1759846580.3314] manager: (tap0b66f2d4-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/250)
Oct  7 10:16:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:20Z|00539|binding|INFO|Claiming lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for this chassis.
Oct  7 10:16:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:20Z|00540|binding|INFO|0b66f2d4-e098-4b4c-902f-2a9a2a9764cc: Claiming fa:16:3e:1e:6d:07 10.100.0.11
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:20Z|00541|binding|INFO|Setting lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc ovn-installed in OVS
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:20 np0005473739 systemd-udevd[324471]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:16:20 np0005473739 systemd-machined[214580]: New machine qemu-71-instance-00000039.
Oct  7 10:16:20 np0005473739 systemd[1]: Started Virtual Machine qemu-71-instance-00000039.
Oct  7 10:16:20 np0005473739 NetworkManager[44949]: <info>  [1759846580.3961] device (tap0b66f2d4-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:16:20 np0005473739 NetworkManager[44949]: <info>  [1759846580.3974] device (tap0b66f2d4-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.399 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:6d:07 10.100.0.11'], port_security=['fa:16:3e:1e:6d:07 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'cfd30417-ee01-41d3-8a93-e49cd960d338', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '5', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0b66f2d4-e098-4b4c-902f-2a9a2a9764cc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.402 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 bound to our chassis#033[00m
Oct  7 10:16:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:20Z|00542|binding|INFO|Setting lport 0b66f2d4-e098-4b4c-902f-2a9a2a9764cc up in Southbound
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.405 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.424 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4b999519-8c00-4ea0-a0d9-2aec6c519be6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.463 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2975baee-5ecf-4303-a167-23f813494e30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.467 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0a20044b-144b-4829-85e5-032bcadb28fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.511 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[62557919-fff3-48bb-91fc-b73a053bcf0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.534 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b627c2ee-0481-4f0e-8d2b-1a4227b46980]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 11, 'rx_bytes': 832, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324486, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.554 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd8c7a37-d3f8-446c-a5c8-f02032463c8a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713317, 'tstamp': 713317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324487, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713321, 'tstamp': 713321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324487, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.557 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.560 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.561 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.561 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:20.561 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.937 2 DEBUG nova.compute.manager [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Received event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.938 2 DEBUG nova.compute.manager [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing instance network info cache due to event network-changed-62690261-dde3-43ca-929a-e6b75a76bafb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.938 2 DEBUG oslo_concurrency.lockutils [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.938 2 DEBUG oslo_concurrency.lockutils [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:20 np0005473739 nova_compute[259550]: 2025-10-07 14:16:20.939 2 DEBUG nova.network.neutron [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Refreshing network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:16:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.213 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for cfd30417-ee01-41d3-8a93-e49cd960d338 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.214 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846581.212521, cfd30417-ee01-41d3-8a93-e49cd960d338 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.214 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.224 2 DEBUG nova.compute.manager [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.230 2 INFO nova.virt.libvirt.driver [-] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Instance rebooted successfully.#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.230 2 DEBUG nova.compute.manager [None req-8249e89a-f290-4927-8ad7-50a37e4e8b76 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.245 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.251 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.279 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.279 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.279 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.280 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.280 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.281 2 INFO nova.compute.manager [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Terminating instance#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.282 2 DEBUG nova.compute.manager [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.284 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] During sync_power_state the instance has a pending task (powering-on). Skip.#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.284 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846581.2126734, cfd30417-ee01-41d3-8a93-e49cd960d338 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.284 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] VM Started (Lifecycle Event)#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.308 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.315 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:16:21 np0005473739 kernel: tapac811e84-84 (unregistering): left promiscuous mode
Oct  7 10:16:21 np0005473739 NetworkManager[44949]: <info>  [1759846581.3301] device (tapac811e84-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:21Z|00543|binding|INFO|Releasing lport ac811e84-843c-4265-b536-c653f6135295 from this chassis (sb_readonly=0)
Oct  7 10:16:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:21Z|00544|binding|INFO|Setting lport ac811e84-843c-4265-b536-c653f6135295 down in Southbound
Oct  7 10:16:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:21Z|00545|binding|INFO|Removing iface tapac811e84-84 ovn-installed in OVS
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.347 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.351 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:66:97:c9 10.100.0.6'], port_security=['fa:16:3e:66:97:c9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'c14b06ec-ce54-4081-8d72-b22529c3b0b7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ac811e84-843c-4265-b536-c653f6135295) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.352 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ac811e84-843c-4265-b536-c653f6135295 in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb unbound from our chassis#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.354 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.373 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d0b5369e-e95f-4375-9d3d-5e054dd3cf09]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:21 np0005473739 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Deactivated successfully.
Oct  7 10:16:21 np0005473739 systemd[1]: machine-qemu\x2d70\x2dinstance\x2d0000003d.scope: Consumed 5.985s CPU time.
Oct  7 10:16:21 np0005473739 systemd-machined[214580]: Machine qemu-70-instance-0000003d terminated.
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.410 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d80ca6-088d-42ab-b760-cea73efc3b20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.414 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[31107181-faa5-4709-88be-c020e5c183f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.446 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6a125fdf-571d-4fe7-91fc-84bc78f378e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.468 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95cdeac9-0e85-42a6-b7a9-70d34a3cf2fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7ba9d553-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e3:7b:82'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 15, 'rx_bytes': 916, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 123], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703719, 'reachable_time': 17176, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324539, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.498 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cd31408a-2dc5-4cd0-b9eb-a718851b8810]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703735, 'tstamp': 703735}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324540, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7ba9d553-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 703740, 'tstamp': 703740}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 324540, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.500 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.510 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7ba9d553-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.510 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.511 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7ba9d553-b0, col_values=(('external_ids', {'iface-id': 'c1da8c7c-1812-4ab6-94d3-da2a23226328'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:21.512 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.524 2 INFO nova.virt.libvirt.driver [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Instance destroyed successfully.#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.525 2 DEBUG nova.objects.instance [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'resources' on Instance uuid c14b06ec-ce54-4081-8d72-b22529c3b0b7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.542 2 DEBUG nova.virt.libvirt.vif [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:16:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-203431907',display_name='tempest-ServerActionsTestOtherA-server-203431907',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-203431907',id=61,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:16:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-m8v55e0v',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_mi
n_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:16Z,user_data=None,user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=c14b06ec-ce54-4081-8d72-b22529c3b0b7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.543 2 DEBUG nova.network.os_vif_util [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.543 2 DEBUG nova.network.os_vif_util [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.544 2 DEBUG os_vif [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.546 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac811e84-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.555 2 INFO os_vif [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:66:97:c9,bridge_name='br-int',has_traffic_filtering=True,id=ac811e84-843c-4265-b536-c653f6135295,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapac811e84-84')
Oct  7 10:16:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1553: 305 pgs: 305 active+clean; 572 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 8.2 MiB/s wr, 272 op/s
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.859 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Successfully created port: ec2bde76-e053-498c-9d73-ef340b6cfe82 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.864 2 DEBUG nova.network.neutron [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updated VIF entry in instance network info cache for port ac811e84-843c-4265-b536-c653f6135295. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.864 2 DEBUG nova.network.neutron [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updating instance_info_cache with network_info: [{"id": "ac811e84-843c-4265-b536-c653f6135295", "address": "fa:16:3e:66:97:c9", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapac811e84-84", "ovs_interfaceid": "ac811e84-843c-4265-b536-c653f6135295", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.904 2 DEBUG oslo_concurrency.lockutils [req-99290488-f883-4ebf-a0fa-63fbfb38c23c req-fb94d3c8-33f9-4c24-8a65-c6f784264303 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c14b06ec-ce54-4081-8d72-b22529c3b0b7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.971 2 DEBUG nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.972 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.972 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.973 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.973 2 DEBUG nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.973 2 WARNING nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received unexpected event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with vm_state active and task_state None.
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.974 2 DEBUG nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.974 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.974 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.975 2 DEBUG oslo_concurrency.lockutils [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "cfd30417-ee01-41d3-8a93-e49cd960d338-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.975 2 DEBUG nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] No waiting events found dispatching network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.975 2 WARNING nova.compute.manager [req-51e5c843-93bd-42e4-a6bc-c4cf049d0ab8 req-d6fdf961-a4bb-4f60-b93b-28781b801b70 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: cfd30417-ee01-41d3-8a93-e49cd960d338] Received unexpected event network-vif-plugged-0b66f2d4-e098-4b4c-902f-2a9a2a9764cc for instance with vm_state active and task_state None.
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.982 2 INFO nova.virt.libvirt.driver [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Deleting instance files /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7_del
Oct  7 10:16:21 np0005473739 nova_compute[259550]: 2025-10-07 14:16:21.983 2 INFO nova.virt.libvirt.driver [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Deletion of /var/lib/nova/instances/c14b06ec-ce54-4081-8d72-b22529c3b0b7_del complete
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.052 2 DEBUG nova.network.neutron [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updated VIF entry in instance network info cache for port 62690261-dde3-43ca-929a-e6b75a76bafb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.052 2 DEBUG nova.network.neutron [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 188af2a5-ff92-4f42-8bdc-5dec2f24d46a] Updating instance_info_cache with network_info: [{"id": "62690261-dde3-43ca-929a-e6b75a76bafb", "address": "fa:16:3e:a5:aa:77", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap62690261-dd", "ovs_interfaceid": "62690261-dde3-43ca-929a-e6b75a76bafb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.060 2 INFO nova.compute.manager [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Took 0.78 seconds to destroy the instance on the hypervisor.
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.061 2 DEBUG oslo.service.loopingcall [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.061 2 DEBUG nova.compute.manager [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.062 2 DEBUG nova.network.neutron [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.152 2 DEBUG oslo_concurrency.lockutils [req-ab612754-72e0-48c3-85bc-7c53a3ef95e4 req-a4b01765-93d4-48ea-b973-32aee32b9942 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-188af2a5-ff92-4f42-8bdc-5dec2f24d46a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:16:22
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.meta', 'default.rgw.control', '.mgr', 'vms', 'volumes', '.rgw.root', 'backups', 'images', 'default.rgw.log']
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.715 2 DEBUG nova.network.neutron [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.735 2 INFO nova.compute.manager [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Took 0.67 seconds to deallocate network for instance.
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.772 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Successfully updated port: 68a7ca31-4ee2-4e32-9574-3113f63090cf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.775 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.775 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:16:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:16:22 np0005473739 nova_compute[259550]: 2025-10-07 14:16:22.933 2 DEBUG oslo_concurrency.processutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.075 2 DEBUG nova.compute.manager [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.075 2 DEBUG nova.compute.manager [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing instance network info cache due to event network-changed-8718eef8-8e7a-42ab-8df9-b469e81779d9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.076 2 DEBUG oslo_concurrency.lockutils [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.076 2 DEBUG oslo_concurrency.lockutils [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.076 2 DEBUG nova.network.neutron [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:16:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:16:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2765835727' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.450 2 DEBUG oslo_concurrency.processutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.466 2 DEBUG nova.compute.provider_tree [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.496 2 DEBUG nova.scheduler.client.report [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:16:23 np0005473739 ceph-mgr[74587]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3626055412
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.540 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.765s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.569 2 INFO nova.scheduler.client.report [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Deleted allocations for instance c14b06ec-ce54-4081-8d72-b22529c3b0b7
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.632 2 DEBUG oslo_concurrency.lockutils [None req-ccf3cfd0-2e45-4a05-8cfa-2527d108e8f2 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.352s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.647 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Successfully updated port: ec2bde76-e053-498c-9d73-ef340b6cfe82 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.663 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.663 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquired lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.663 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:16:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1554: 305 pgs: 305 active+clean; 572 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 6.7 MiB/s wr, 244 op/s
Oct  7 10:16:23 np0005473739 nova_compute[259550]: 2025-10-07 14:16:23.807 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.308 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.308 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.308 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.309 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.309 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.310 2 INFO nova.compute.manager [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Terminating instance
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.311 2 DEBUG nova.compute.manager [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.324 2 DEBUG nova.network.neutron [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updated VIF entry in instance network info cache for port 8718eef8-8e7a-42ab-8df9-b469e81779d9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.325 2 DEBUG nova.network.neutron [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.358 2 DEBUG oslo_concurrency.lockutils [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.359 2 DEBUG nova.compute.manager [req-93358a9b-7c60-48ea-af2b-d396b15a330d req-3a3f3577-2966-42de-857c-5809a7d165cc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-vif-deleted-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:24 np0005473739 kernel: tap432c69dd-fb (unregistering): left promiscuous mode
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.371 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-vif-unplugged-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.371 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.372 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.372 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.372 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] No waiting events found dispatching network-vif-unplugged-ac811e84-843c-4265-b536-c653f6135295 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.372 2 WARNING nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received unexpected event network-vif-unplugged-ac811e84-843c-4265-b536-c653f6135295 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.372 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.373 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.373 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.373 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c14b06ec-ce54-4081-8d72-b22529c3b0b7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.373 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] No waiting events found dispatching network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:24 np0005473739 NetworkManager[44949]: <info>  [1759846584.3760] device (tap432c69dd-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.373 2 WARNING nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Received unexpected event network-vif-plugged-ac811e84-843c-4265-b536-c653f6135295 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.381 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-changed-68a7ca31-4ee2-4e32-9574-3113f63090cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.381 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Refreshing instance network info cache due to event network-changed-68a7ca31-4ee2-4e32-9574-3113f63090cf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.382 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:24Z|00546|binding|INFO|Releasing lport 432c69dd-fb1b-432b-b867-9fe29716430d from this chassis (sb_readonly=0)
Oct  7 10:16:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:24Z|00547|binding|INFO|Setting lport 432c69dd-fb1b-432b-b867-9fe29716430d down in Southbound
Oct  7 10:16:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:24Z|00548|binding|INFO|Removing iface tap432c69dd-fb ovn-installed in OVS
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.415 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a6:54:7c 10.100.0.7'], port_security=['fa:16:3e:a6:54:7c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'fc163bed-856c-4ea5-9bf3-6989fb1027eb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ef9390a1dd804281beea149e0086b360', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0a36afbb-3eb8-42d9-b597-25ad85279a69', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=80c61b76-cba3-471b-9dc7-bab9d6303f6a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=432c69dd-fb1b-432b-b867-9fe29716430d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.416 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 432c69dd-fb1b-432b-b867-9fe29716430d in datapath 7ba9d553-bbaa-47f8-8281-6a74e53c37fb unbound from our chassis#033[00m
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.418 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7ba9d553-bbaa-47f8-8281-6a74e53c37fb, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.421 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66bbf6fb-d745-4a2c-bac2-ec0c15ae093d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.421 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb namespace which is not needed anymore#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.434 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.434 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.435 2 DEBUG nova.objects.instance [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:24 np0005473739 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000032.scope: Deactivated successfully.
Oct  7 10:16:24 np0005473739 systemd[1]: machine-qemu\x2d57\x2dinstance\x2d00000032.scope: Consumed 18.499s CPU time.
Oct  7 10:16:24 np0005473739 systemd-machined[214580]: Machine qemu-57-instance-00000032 terminated.
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.554 2 INFO nova.virt.libvirt.driver [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Instance destroyed successfully.#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.555 2 DEBUG nova.objects.instance [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lazy-loading 'resources' on Instance uuid fc163bed-856c-4ea5-9bf3-6989fb1027eb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.570 2 DEBUG nova.virt.libvirt.vif [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:14:04Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherA-server-307445853',display_name='tempest-ServerActionsTestOtherA-server-307445853',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestothera-server-307445853',id=50,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBG+9Z65pTClTOFGfwBQwoBDEk0wDdHVeNmjMfU680t6jhHHvju/LmHnN+5TGqyxhWrME7/S2SBjWiIYsOdkRBZmw+292d2qkOy0bnGNB53h//Xfe51NNgLX77Oc4GTlk5Q==',key_name='tempest-keypair-1536550939',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:14:16Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ef9390a1dd804281beea149e0086b360',ramdisk_id='',reservation_id='r-n9ekxyu9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherA-508284156',owner_user_name='tempest-ServerActionsTestOtherA-508284156-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:14:16Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='39e4681256e44d92ac5928e4f8e0d348',uuid=fc163bed-856c-4ea5-9bf3-6989fb1027eb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.570 2 DEBUG nova.network.os_vif_util [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converting VIF {"id": "432c69dd-fb1b-432b-b867-9fe29716430d", "address": "fa:16:3e:a6:54:7c", "network": {"id": "7ba9d553-bbaa-47f8-8281-6a74e53c37fb", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherA-570899770-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ef9390a1dd804281beea149e0086b360", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap432c69dd-fb", "ovs_interfaceid": "432c69dd-fb1b-432b-b867-9fe29716430d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.571 2 DEBUG nova.network.os_vif_util [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.571 2 DEBUG os_vif [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.574 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap432c69dd-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.580 2 INFO os_vif [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a6:54:7c,bridge_name='br-int',has_traffic_filtering=True,id=432c69dd-fb1b-432b-b867-9fe29716430d,network=Network(7ba9d553-bbaa-47f8-8281-6a74e53c37fb),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap432c69dd-fb')#033[00m
Oct  7 10:16:24 np0005473739 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [NOTICE]   (316364) : haproxy version is 2.8.14-c23fe91
Oct  7 10:16:24 np0005473739 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [NOTICE]   (316364) : path to executable is /usr/sbin/haproxy
Oct  7 10:16:24 np0005473739 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [WARNING]  (316364) : Exiting Master process...
Oct  7 10:16:24 np0005473739 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [WARNING]  (316364) : Exiting Master process...
Oct  7 10:16:24 np0005473739 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [ALERT]    (316364) : Current worker (316368) exited with code 143 (Terminated)
Oct  7 10:16:24 np0005473739 neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb[316346]: [WARNING]  (316364) : All workers exited. Exiting... (0)
Oct  7 10:16:24 np0005473739 systemd[1]: libpod-f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153.scope: Deactivated successfully.
Oct  7 10:16:24 np0005473739 podman[324623]: 2025-10-07 14:16:24.631874078 +0000 UTC m=+0.065567123 container died f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:16:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153-userdata-shm.mount: Deactivated successfully.
Oct  7 10:16:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a901171075059a0193ddbd3fb0f9b59df344514fed4a53bc1eb915147968b7ce-merged.mount: Deactivated successfully.
Oct  7 10:16:24 np0005473739 podman[324623]: 2025-10-07 14:16:24.705246686 +0000 UTC m=+0.138939731 container cleanup f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  7 10:16:24 np0005473739 systemd[1]: libpod-conmon-f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153.scope: Deactivated successfully.
Oct  7 10:16:24 np0005473739 podman[324671]: 2025-10-07 14:16:24.781321215 +0000 UTC m=+0.047733342 container remove f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.791 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70851d26-4540-456a-860a-2aaf639716a3]: (4, ('Tue Oct  7 02:16:24 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb (f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153)\nf4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153\nTue Oct  7 02:16:24 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb (f4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153)\nf4a066fa5bec19a3b007a10a434d2597913ac606ec3205542c3841f191a47153\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.795 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f904101-7ba7-47f5-b37f-1b240e894f7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.797 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7ba9d553-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:24 np0005473739 kernel: tap7ba9d553-b0: left promiscuous mode
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.805 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[521f66b1-8af8-4db2-8cd9-b53ef80705b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.827 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c3d08b07-9286-44b1-a5e2-a3a05c017c87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.829 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a7b866e3-3065-4f57-8a0e-8686ea0316aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c8e7ece3-9bef-4cf4-ab5f-35bca2a148b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 703710, 'reachable_time': 28689, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 324688, 'error': None, 'target': 'ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:24 np0005473739 systemd[1]: run-netns-ovnmeta\x2d7ba9d553\x2dbbaa\x2d47f8\x2d8281\x2d6a74e53c37fb.mount: Deactivated successfully.
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.860 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7ba9d553-bbaa-47f8-8281-6a74e53c37fb deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:16:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:24.861 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[694f659f-50e8-4c78-8615-2039555e1cda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.894 2 DEBUG nova.objects.instance [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'pci_requests' on Instance uuid 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:24 np0005473739 nova_compute[259550]: 2025-10-07 14:16:24.950 2 DEBUG nova.network.neutron [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:16:25 np0005473739 nova_compute[259550]: 2025-10-07 14:16:25.069 2 INFO nova.virt.libvirt.driver [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Deleting instance files /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb_del#033[00m
Oct  7 10:16:25 np0005473739 nova_compute[259550]: 2025-10-07 14:16:25.070 2 INFO nova.virt.libvirt.driver [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Deletion of /var/lib/nova/instances/fc163bed-856c-4ea5-9bf3-6989fb1027eb_del complete#033[00m
Oct  7 10:16:25 np0005473739 nova_compute[259550]: 2025-10-07 14:16:25.230 2 INFO nova.compute.manager [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Took 0.92 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:16:25 np0005473739 nova_compute[259550]: 2025-10-07 14:16:25.230 2 DEBUG oslo.service.loopingcall [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:16:25 np0005473739 nova_compute[259550]: 2025-10-07 14:16:25.231 2 DEBUG nova.compute.manager [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:16:25 np0005473739 nova_compute[259550]: 2025-10-07 14:16:25.231 2 DEBUG nova.network.neutron [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:16:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1555: 305 pgs: 305 active+clean; 587 MiB data, 855 MiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 8.2 MiB/s wr, 367 op/s
Oct  7 10:16:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:16:26 np0005473739 nova_compute[259550]: 2025-10-07 14:16:26.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:26 np0005473739 nova_compute[259550]: 2025-10-07 14:16:26.980 2 DEBUG nova.policy [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'eb31457d04de49c28158a546d1b30b77', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'a12799b2087644358b2597f825ff94da', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:16:26 np0005473739 nova_compute[259550]: 2025-10-07 14:16:26.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:16:26 np0005473739 nova_compute[259550]: 2025-10-07 14:16:26.985 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 10:16:27 np0005473739 nova_compute[259550]: 2025-10-07 14:16:27.024 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:16:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8d4ab9af-9cb6-4697-81bc-0550a38a30c9 does not exist
Oct  7 10:16:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 335cad32-8f5e-495c-abbc-f7b476e10ab1 does not exist
Oct  7 10:16:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 85879097-8a40-400e-87d2-a0a5c12a5aee does not exist
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:16:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:16:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1556: 305 pgs: 305 active+clean; 534 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 8.2 MiB/s wr, 382 op/s
Oct  7 10:16:27 np0005473739 podman[324960]: 2025-10-07 14:16:27.964429673 +0000 UTC m=+0.074187611 container create 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 10:16:28 np0005473739 podman[324960]: 2025-10-07 14:16:27.916202889 +0000 UTC m=+0.025960817 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:16:28 np0005473739 systemd[1]: Started libpod-conmon-10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc.scope.
Oct  7 10:16:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:16:28 np0005473739 podman[324960]: 2025-10-07 14:16:28.091715106 +0000 UTC m=+0.201473054 container init 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:16:28 np0005473739 podman[324960]: 2025-10-07 14:16:28.099913552 +0000 UTC m=+0.209671470 container start 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:16:28 np0005473739 stupefied_villani[324977]: 167 167
Oct  7 10:16:28 np0005473739 systemd[1]: libpod-10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc.scope: Deactivated successfully.
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.107 2 DEBUG nova.network.neutron [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updating instance_info_cache with network_info: [{"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, 
"tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:28 np0005473739 podman[324960]: 2025-10-07 14:16:28.119181431 +0000 UTC m=+0.228939389 container attach 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:16:28 np0005473739 podman[324960]: 2025-10-07 14:16:28.120746922 +0000 UTC m=+0.230504820 container died 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.140 2 DEBUG nova.network.neutron [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0d74f88123681738f50f4da98dc23b128f2ef609295ad4fb42ae87860887d220-merged.mount: Deactivated successfully.
Oct  7 10:16:28 np0005473739 podman[324960]: 2025-10-07 14:16:28.266485982 +0000 UTC m=+0.376243880 container remove 10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_villani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 10:16:28 np0005473739 systemd[1]: libpod-conmon-10012d6dcebcd3a65297e81d5c0485230231adb228f4b3ed965e411064cf4fbc.scope: Deactivated successfully.
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.441 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Releasing lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.441 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Instance network_info: |[{"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.442 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.442 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Refreshing network info cache for port 68a7ca31-4ee2-4e32-9574-3113f63090cf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.445 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Start _get_guest_xml network_info=[{"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": 
true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.451 2 WARNING nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.457 2 DEBUG nova.virt.libvirt.host [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.458 2 DEBUG nova.virt.libvirt.host [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.462 2 DEBUG nova.virt.libvirt.host [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.462 2 DEBUG nova.virt.libvirt.host [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.463 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.463 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.463 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.464 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.464 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.464 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.464 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.464 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.465 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.480 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.481 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.481 2 DEBUG nova.virt.hardware [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.486 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.529 2 INFO nova.compute.manager [-] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Took 3.30 seconds to deallocate network for instance.#033[00m
Oct  7 10:16:28 np0005473739 podman[325002]: 2025-10-07 14:16:28.541184309 +0000 UTC m=+0.094372304 container create 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:16:28 np0005473739 podman[325002]: 2025-10-07 14:16:28.491744883 +0000 UTC m=+0.044932898 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:16:28 np0005473739 systemd[1]: Started libpod-conmon-3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32.scope.
Oct  7 10:16:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:16:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:28 np0005473739 podman[325002]: 2025-10-07 14:16:28.700030785 +0000 UTC m=+0.253218790 container init 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:16:28 np0005473739 podman[325002]: 2025-10-07 14:16:28.711533309 +0000 UTC m=+0.264721304 container start 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:16:28 np0005473739 podman[325002]: 2025-10-07 14:16:28.747708265 +0000 UTC m=+0.300896310 container attach 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.833 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.835 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:16:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3214005704' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:16:28 np0005473739 nova_compute[259550]: 2025-10-07 14:16:28.983 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.005 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.010 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.043 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.044 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.205 2 DEBUG oslo_concurrency.processutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:16:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1799583381' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.500 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.502 2 DEBUG nova.virt.libvirt.vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1632304559',display_name='tempest-ServersTestMultiNic-server-1632304559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1632304559',id=62,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-c2aa0bw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:18Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=4aa20e30-d71a-4765-9b3e-a72a156d2c88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.502 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.503 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:be:e9,bridge_name='br-int',has_traffic_filtering=True,id=68a7ca31-4ee2-4e32-9574-3113f63090cf,network=Network(b044fd19-c1bd-4478-b5ed-6feb8831fea0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68a7ca31-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.505 2 DEBUG nova.virt.libvirt.vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1632304559',display_name='tempest-ServersTestMultiNic-server-1632304559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1632304559',id=62,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-c2aa0bw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:18Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=4aa20e30-d71a-4765-9b3e-a72a156d2c88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.506 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.506 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=ec2bde76-e053-498c-9d73-ef340b6cfe82,network=Network(8ca58653-b705-43e9-827e-3bbf7240a807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bde76-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.516 2 DEBUG nova.objects.instance [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4aa20e30-d71a-4765-9b3e-a72a156d2c88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.611 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  <uuid>4aa20e30-d71a-4765-9b3e-a72a156d2c88</uuid>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  <name>instance-0000003e</name>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersTestMultiNic-server-1632304559</nova:name>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:16:28</nova:creationTime>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <nova:user uuid="ff37c390826e43079eff2a1423ccc2b8">tempest-ServersTestMultiNic-1400500697-project-member</nova:user>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <nova:project uuid="1a99ac1945604cf5a5a5bd917ea52280">tempest-ServersTestMultiNic-1400500697</nova:project>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <nova:port uuid="68a7ca31-4ee2-4e32-9574-3113f63090cf">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <nova:port uuid="ec2bde76-e053-498c-9d73-ef340b6cfe82">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.1.60" ipVersion="4"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <entry name="serial">4aa20e30-d71a-4765-9b3e-a72a156d2c88</entry>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <entry name="uuid">4aa20e30-d71a-4765-9b3e-a72a156d2c88</entry>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:6f:be:e9"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <target dev="tap68a7ca31-4e"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:09:34:a2"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <target dev="tapec2bde76-e0"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/console.log" append="off"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:16:29 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:16:29 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:16:29 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:16:29 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.612 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Preparing to wait for external event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.612 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.612 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.612 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.613 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Preparing to wait for external event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.613 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.613 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.613 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.614 2 DEBUG nova.virt.libvirt.vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1632304559',display_name='tempest-ServersTestMultiNic-server-1632304559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1632304559',id=62,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-c2aa0bw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:18Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=4aa20e30-d71a-4765-9b3e-a72a156d2c88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.614 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.615 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6f:be:e9,bridge_name='br-int',has_traffic_filtering=True,id=68a7ca31-4ee2-4e32-9574-3113f63090cf,network=Network(b044fd19-c1bd-4478-b5ed-6feb8831fea0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68a7ca31-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.615 2 DEBUG os_vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:be:e9,bridge_name='br-int',has_traffic_filtering=True,id=68a7ca31-4ee2-4e32-9574-3113f63090cf,network=Network(b044fd19-c1bd-4478-b5ed-6feb8831fea0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68a7ca31-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68a7ca31-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68a7ca31-4e, col_values=(('external_ids', {'iface-id': '68a7ca31-4ee2-4e32-9574-3113f63090cf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6f:be:e9', 'vm-uuid': '4aa20e30-d71a-4765-9b3e-a72a156d2c88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:29 np0005473739 NetworkManager[44949]: <info>  [1759846589.6241] manager: (tap68a7ca31-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/251)
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.632 2 INFO os_vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6f:be:e9,bridge_name='br-int',has_traffic_filtering=True,id=68a7ca31-4ee2-4e32-9574-3113f63090cf,network=Network(b044fd19-c1bd-4478-b5ed-6feb8831fea0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68a7ca31-4e')#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.633 2 DEBUG nova.virt.libvirt.vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:16:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestMultiNic-server-1632304559',display_name='tempest-ServersTestMultiNic-server-1632304559',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmultinic-server-1632304559',id=62,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='1a99ac1945604cf5a5a5bd917ea52280',ramdisk_id='',reservation_id='r-c2aa0bw8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestMultiNic-1400500697',owner_user_name='tempest-ServersTestMultiNic-1400500697-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:16:18Z,user_data=None,user_id='ff37c390826e43079eff2a1423ccc2b8',uuid=4aa20e30-d71a-4765-9b3e-a72a156d2c88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.633 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converting VIF {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.635 2 DEBUG nova.network.os_vif_util [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=ec2bde76-e053-498c-9d73-ef340b6cfe82,network=Network(8ca58653-b705-43e9-827e-3bbf7240a807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bde76-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.635 2 DEBUG os_vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=ec2bde76-e053-498c-9d73-ef340b6cfe82,network=Network(8ca58653-b705-43e9-827e-3bbf7240a807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bde76-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.636 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapec2bde76-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.639 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapec2bde76-e0, col_values=(('external_ids', {'iface-id': 'ec2bde76-e053-498c-9d73-ef340b6cfe82', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:34:a2', 'vm-uuid': '4aa20e30-d71a-4765-9b3e-a72a156d2c88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:29 np0005473739 NetworkManager[44949]: <info>  [1759846589.6412] manager: (tapec2bde76-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/252)
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.650 2 INFO os_vif [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:34:a2,bridge_name='br-int',has_traffic_filtering=True,id=ec2bde76-e053-498c-9d73-ef340b6cfe82,network=Network(8ca58653-b705-43e9-827e-3bbf7240a807),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapec2bde76-e0')#033[00m
Oct  7 10:16:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:16:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4139568954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.720 2 DEBUG oslo_concurrency.processutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.728 2 DEBUG nova.compute.provider_tree [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:16:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1557: 305 pgs: 305 active+clean; 484 MiB data, 802 MiB used, 59 GiB / 60 GiB avail; 4.5 MiB/s rd, 7.1 MiB/s wr, 380 op/s
Oct  7 10:16:29 np0005473739 dreamy_cohen[325021]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:16:29 np0005473739 dreamy_cohen[325021]: --> relative data size: 1.0
Oct  7 10:16:29 np0005473739 dreamy_cohen[325021]: --> All data devices are unavailable
Oct  7 10:16:29 np0005473739 systemd[1]: libpod-3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32.scope: Deactivated successfully.
Oct  7 10:16:29 np0005473739 systemd[1]: libpod-3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32.scope: Consumed 1.025s CPU time.
Oct  7 10:16:29 np0005473739 podman[325002]: 2025-10-07 14:16:29.817741401 +0000 UTC m=+1.370929396 container died 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:16:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-173110fb48c71b72589668bb9b4f369e5c95ab0b6b4623e98d0f6e94508f8e5e-merged.mount: Deactivated successfully.
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.896 2 DEBUG nova.scheduler.client.report [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:16:29 np0005473739 podman[325002]: 2025-10-07 14:16:29.897304793 +0000 UTC m=+1.450492798 container remove 3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_cohen, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.905 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.907 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.907 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No VIF found with MAC fa:16:3e:6f:be:e9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.907 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] No VIF found with MAC fa:16:3e:09:34:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.908 2 INFO nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Using config drive#033[00m
Oct  7 10:16:29 np0005473739 systemd[1]: libpod-conmon-3f972c6b336a25cfb1be9c95fb421bf96af36c4098f419bb96d02723492b4d32.scope: Deactivated successfully.
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.930 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.959 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.995 2 INFO nova.scheduler.client.report [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Deleted allocations for instance fc163bed-856c-4ea5-9bf3-6989fb1027eb#033[00m
Oct  7 10:16:29 np0005473739 nova_compute[259550]: 2025-10-07 14:16:29.997 2 DEBUG nova.network.neutron [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Successfully updated port: 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.105 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.105 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.105 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.105 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.106 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.150 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.159 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.159 2 DEBUG nova.network.neutron [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.164 2 DEBUG nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-vif-unplugged-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.164 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.164 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.164 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.165 2 DEBUG nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] No waiting events found dispatching network-vif-unplugged-432c69dd-fb1b-432b-b867-9fe29716430d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.165 2 WARNING nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received unexpected event network-vif-unplugged-432c69dd-fb1b-432b-b867-9fe29716430d for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.165 2 DEBUG nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.165 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.165 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.165 2 DEBUG oslo_concurrency.lockutils [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.166 2 DEBUG nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] No waiting events found dispatching network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.166 2 WARNING nova.compute.manager [req-14228750-3d2c-4424-9b3c-9718c43956ea req-68e4e782-636e-46a6-a4a1-a0bcf84def8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received unexpected event network-vif-plugged-432c69dd-fb1b-432b-b867-9fe29716430d for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.242 2 DEBUG oslo_concurrency.lockutils [None req-f3ada68a-2aa6-4a2f-ad24-d9eaa9467809 39e4681256e44d92ac5928e4f8e0d348 ef9390a1dd804281beea149e0086b360 - - default default] Lock "fc163bed-856c-4ea5-9bf3-6989fb1027eb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.934s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:30 np0005473739 podman[325268]: 2025-10-07 14:16:30.308422434 +0000 UTC m=+0.068945982 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:16:30 np0005473739 podman[325269]: 2025-10-07 14:16:30.308673491 +0000 UTC m=+0.063313004 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:16:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:16:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1014860140' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.595 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:30 np0005473739 podman[325368]: 2025-10-07 14:16:30.618818524 +0000 UTC m=+0.060604202 container create 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:16:30 np0005473739 systemd[1]: Started libpod-conmon-9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5.scope.
Oct  7 10:16:30 np0005473739 podman[325368]: 2025-10-07 14:16:30.585907014 +0000 UTC m=+0.027692702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:16:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:16:30 np0005473739 podman[325368]: 2025-10-07 14:16:30.696689171 +0000 UTC m=+0.138474879 container init 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:16:30 np0005473739 podman[325368]: 2025-10-07 14:16:30.704838486 +0000 UTC m=+0.146624204 container start 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:16:30 np0005473739 podman[325368]: 2025-10-07 14:16:30.709468128 +0000 UTC m=+0.151253826 container attach 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 10:16:30 np0005473739 laughing_williamson[325385]: 167 167
Oct  7 10:16:30 np0005473739 systemd[1]: libpod-9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5.scope: Deactivated successfully.
Oct  7 10:16:30 np0005473739 podman[325368]: 2025-10-07 14:16:30.713596267 +0000 UTC m=+0.155381955 container died 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 10:16:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e491b4bdaaa625084a57e1c0bac85f63b38b550f3e64af51647f290012f22852-merged.mount: Deactivated successfully.
Oct  7 10:16:30 np0005473739 podman[325368]: 2025-10-07 14:16:30.764241615 +0000 UTC m=+0.206027313 container remove 9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=laughing_williamson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:16:30 np0005473739 systemd[1]: libpod-conmon-9d977bb49934add5953920a40f21c49d3bcfc36ca7e33cb82d6a3430224c85c5.scope: Deactivated successfully.
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.810 2 WARNING nova.network.neutron [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] b1d9f332-f920-4d6e-8e91-dd13ec334d51 already exists in list: networks containing: ['b1d9f332-f920-4d6e-8e91-dd13ec334d51']. ignoring it#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.822 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.822 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000039 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.828 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.828 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000037 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.833 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.834 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.838 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.838 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.843 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.843 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.848 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.848 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000003e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.978 2 INFO nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Creating config drive at /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config#033[00m
Oct  7 10:16:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:16:30 np0005473739 podman[325408]: 2025-10-07 14:16:30.982399718 +0000 UTC m=+0.055801735 container create b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:16:30 np0005473739 nova_compute[259550]: 2025-10-07 14:16:30.988 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprc1u6kdl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:31 np0005473739 systemd[1]: Started libpod-conmon-b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a.scope.
Oct  7 10:16:31 np0005473739 podman[325408]: 2025-10-07 14:16:30.960741116 +0000 UTC m=+0.034143153 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:16:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:16:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab0535584fe63a8f32fd786e140daa1868d09b07451bd136f6a6f44f0ec137/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab0535584fe63a8f32fd786e140daa1868d09b07451bd136f6a6f44f0ec137/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab0535584fe63a8f32fd786e140daa1868d09b07451bd136f6a6f44f0ec137/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7fab0535584fe63a8f32fd786e140daa1868d09b07451bd136f6a6f44f0ec137/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:31 np0005473739 podman[325408]: 2025-10-07 14:16:31.089257012 +0000 UTC m=+0.162659059 container init b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 10:16:31 np0005473739 podman[325408]: 2025-10-07 14:16:31.099592904 +0000 UTC m=+0.172994921 container start b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:16:31 np0005473739 podman[325408]: 2025-10-07 14:16:31.103776925 +0000 UTC m=+0.177178942 container attach b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.136 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprc1u6kdl" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.163 2 DEBUG nova.storage.rbd_utils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] rbd image 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.168 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.215 2 DEBUG nova.compute.manager [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-changed-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.215 2 DEBUG nova.compute.manager [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing instance network info cache due to event network-changed-1b8e1852-2a5e-4c50-9ab0-110dfb492a49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.216 2 DEBUG oslo_concurrency.lockutils [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.266 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.267 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3184MB free_disk=59.73991012573242GB free_vcpus=3 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.267 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.267 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.337 2 DEBUG oslo_concurrency.processutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config 4aa20e30-d71a-4765-9b3e-a72a156d2c88_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.170s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.338 2 INFO nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Deleting local config drive /var/lib/nova/instances/4aa20e30-d71a-4765-9b3e-a72a156d2c88/disk.config because it was imported into RBD.#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:31 np0005473739 NetworkManager[44949]: <info>  [1759846591.4201] manager: (tap68a7ca31-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/253)
Oct  7 10:16:31 np0005473739 kernel: tap68a7ca31-4e: entered promiscuous mode
Oct  7 10:16:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:31Z|00549|binding|INFO|Claiming lport 68a7ca31-4ee2-4e32-9574-3113f63090cf for this chassis.
Oct  7 10:16:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:31Z|00550|binding|INFO|68a7ca31-4ee2-4e32-9574-3113f63090cf: Claiming fa:16:3e:6f:be:e9 10.100.0.6
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:31 np0005473739 NetworkManager[44949]: <info>  [1759846591.4455] manager: (tapec2bde76-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/254)
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.457 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:be:e9 10.100.0.6'], port_security=['fa:16:3e:6f:be:e9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/24', 'neutron:device_id': '4aa20e30-d71a-4765-9b3e-a72a156d2c88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21c6dcc3-8dd4-4e2a-af6b-6893629fd93b, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=68a7ca31-4ee2-4e32-9574-3113f63090cf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.459 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 68a7ca31-4ee2-4e32-9574-3113f63090cf in datapath b044fd19-c1bd-4478-b5ed-6feb8831fea0 bound to our chassis#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.460 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b044fd19-c1bd-4478-b5ed-6feb8831fea0#033[00m
Oct  7 10:16:31 np0005473739 systemd-udevd[325487]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:16:31 np0005473739 systemd-udevd[325485]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.481 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a296e3c8-8fe3-45bf-ae8d-4287ffb3506f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.483 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb044fd19-c1 in ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:16:31 np0005473739 kernel: tapec2bde76-e0: entered promiscuous mode
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.487 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb044fd19-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:16:31 np0005473739 systemd-machined[214580]: New machine qemu-72-instance-0000003e.
Oct  7 10:16:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:31Z|00551|binding|INFO|Claiming lport ec2bde76-e053-498c-9d73-ef340b6cfe82 for this chassis.
Oct  7 10:16:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:31Z|00552|binding|INFO|ec2bde76-e053-498c-9d73-ef340b6cfe82: Claiming fa:16:3e:09:34:a2 10.100.1.60
Oct  7 10:16:31 np0005473739 NetworkManager[44949]: <info>  [1759846591.4958] device (tapec2bde76-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:16:31 np0005473739 NetworkManager[44949]: <info>  [1759846591.4968] device (tap68a7ca31-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:16:31 np0005473739 NetworkManager[44949]: <info>  [1759846591.4975] device (tapec2bde76-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.488 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b076e94e-f212-4f8a-b960-e8d5f6547adc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.494 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[11998fd6-0f4a-436e-b35c-1af79bc7782c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 systemd[1]: Started Virtual Machine qemu-72-instance-0000003e.
Oct  7 10:16:31 np0005473739 NetworkManager[44949]: <info>  [1759846591.5023] device (tap68a7ca31-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:16:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:31Z|00553|binding|INFO|Setting lport 68a7ca31-4ee2-4e32-9574-3113f63090cf ovn-installed in OVS
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.517 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 188af2a5-ff92-4f42-8bdc-5dec2f24d46a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.517 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance cfd30417-ee01-41d3-8a93-e49cd960d338 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.517 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.517 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b3d2cd05-012d-4189-bc6c-c40fc1f72c0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.517 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a23d6956-f85a-40b1-9e54-1b32d2af191e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 192, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.518 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4aa20e30-d71a-4765-9b3e-a72a156d2c88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.518 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 6 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.518 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1344MB phys_disk=59GB used_disk=6GB total_vcpus=8 used_vcpus=6 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.524 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a4c9d5c4-55f7-44a9-b375-bdf48bad544e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:31Z|00554|binding|INFO|Setting lport ec2bde76-e053-498c-9d73-ef340b6cfe82 ovn-installed in OVS
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.553 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[defb5ce8-1b72-4770-8424-1d5e15afd0fd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.595 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d84de245-1632-48e4-a23a-ee522bd1d107]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 NetworkManager[44949]: <info>  [1759846591.6024] manager: (tapb044fd19-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/255)
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.603 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7d7a0b9-3eda-4bb8-b989-c9e2b1567756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.610 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:34:a2 10.100.1.60'], port_security=['fa:16:3e:09:34:a2 10.100.1.60'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.60/24', 'neutron:device_id': '4aa20e30-d71a-4765-9b3e-a72a156d2c88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ca58653-b705-43e9-827e-3bbf7240a807', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '2', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a397202a-2ca0-49d4-bf1d-28612c8843c1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ec2bde76-e053-498c-9d73-ef340b6cfe82) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:31Z|00555|binding|INFO|Setting lport ec2bde76-e053-498c-9d73-ef340b6cfe82 up in Southbound
Oct  7 10:16:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:31Z|00556|binding|INFO|Setting lport 68a7ca31-4ee2-4e32-9574-3113f63090cf up in Southbound
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.638 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.660 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a42d1476-8d3c-4a34-8c12-f8c1914047da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.670 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[02a64aad-714d-4e8e-8bea-fb8b7a91bb94]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 NetworkManager[44949]: <info>  [1759846591.7027] device (tapb044fd19-c0): carrier: link connected
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.711 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c3a2d041-f7b1-4ee0-9de6-0685cb092b11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.737 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7957135-1c85-4e95-b5bb-879b889afd60]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb044fd19-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:a4:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717526, 'reachable_time': 32623, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325522, 'error': None, 'target': 'ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.759 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c07d00e8-5a02-4345-bc73-9fffaf96f9b1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feeb:a44a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 717526, 'tstamp': 717526}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325523, 'error': None, 'target': 'ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1558: 305 pgs: 305 active+clean; 484 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 3.0 MiB/s wr, 253 op/s
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.793 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1499f181-23a6-46f6-8e74-2168f572300d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb044fd19-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:eb:a4:4a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 169], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717526, 'reachable_time': 32623, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325524, 'error': None, 'target': 'ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.847 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[24335326-5171-4574-8f7c-42e0e7a5950e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.922 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c867552a-30d1-468f-bbf1-512e5d5b7ae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.924 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb044fd19-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.924 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.925 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb044fd19-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:31 np0005473739 kernel: tapb044fd19-c0: entered promiscuous mode
Oct  7 10:16:31 np0005473739 NetworkManager[44949]: <info>  [1759846591.9278] manager: (tapb044fd19-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/256)
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.930 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb044fd19-c0, col_values=(('external_ids', {'iface-id': '4fe1f261-bc24-44ec-9b31-81ad525446fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:31Z|00557|binding|INFO|Releasing lport 4fe1f261-bc24-44ec-9b31-81ad525446fb from this chassis (sb_readonly=0)
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]: {
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:    "0": [
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:        {
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "devices": [
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "/dev/loop3"
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            ],
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_name": "ceph_lv0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_size": "21470642176",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "name": "ceph_lv0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "tags": {
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.cluster_name": "ceph",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.crush_device_class": "",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.encrypted": "0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.osd_id": "0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.type": "block",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.vdo": "0"
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            },
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "type": "block",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "vg_name": "ceph_vg0"
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:        }
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:    ],
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:    "1": [
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:        {
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "devices": [
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "/dev/loop4"
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            ],
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_name": "ceph_lv1",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_size": "21470642176",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "name": "ceph_lv1",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "tags": {
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.cluster_name": "ceph",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.crush_device_class": "",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.encrypted": "0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.osd_id": "1",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.type": "block",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.vdo": "0"
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            },
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "type": "block",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "vg_name": "ceph_vg1"
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:        }
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:    ],
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:    "2": [
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:        {
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "devices": [
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "/dev/loop5"
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            ],
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_name": "ceph_lv2",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_size": "21470642176",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "name": "ceph_lv2",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "tags": {
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.cluster_name": "ceph",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.crush_device_class": "",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.encrypted": "0",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.osd_id": "2",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.type": "block",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:                "ceph.vdo": "0"
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            },
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "type": "block",
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:            "vg_name": "ceph_vg2"
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:        }
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]:    ]
Oct  7 10:16:31 np0005473739 elegant_stonebraker[325427]: }
Oct  7 10:16:31 np0005473739 nova_compute[259550]: 2025-10-07 14:16:31.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.962 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b044fd19-c1bd-4478-b5ed-6feb8831fea0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b044fd19-c1bd-4478-b5ed-6feb8831fea0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.963 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[760af0a5-0923-4162-9be7-57928b9b5871]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.964 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-b044fd19-c1bd-4478-b5ed-6feb8831fea0
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/b044fd19-c1bd-4478-b5ed-6feb8831fea0.pid.haproxy
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID b044fd19-c1bd-4478-b5ed-6feb8831fea0
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:16:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:31.965 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'env', 'PROCESS_TAG=haproxy-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b044fd19-c1bd-4478-b5ed-6feb8831fea0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:16:31 np0005473739 systemd[1]: libpod-b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a.scope: Deactivated successfully.
Oct  7 10:16:31 np0005473739 podman[325408]: 2025-10-07 14:16:31.987443918 +0000 UTC m=+1.060845955 container died b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:16:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7fab0535584fe63a8f32fd786e140daa1868d09b07451bd136f6a6f44f0ec137-merged.mount: Deactivated successfully.
Oct  7 10:16:32 np0005473739 podman[325408]: 2025-10-07 14:16:32.077858717 +0000 UTC m=+1.151260734 container remove b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_stonebraker, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:16:32 np0005473739 systemd[1]: libpod-conmon-b83e6098c8b58391081fbfd5aa599999c6525bda74613622bbea0e36306b988a.scope: Deactivated successfully.
Oct  7 10:16:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:16:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3127327563' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.234 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.595s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.245 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.307 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0041403273954874 of space, bias 1.0, pg target 1.24209821864622 quantized to 32 (current 32)
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.1991676866616201 quantized to 32 (current 32)
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006084358924269063 quantized to 16 (current 16)
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.605448655336329e-05 quantized to 32 (current 32)
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006464631357035879 quantized to 32 (current 32)
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:16:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015210897310672657 quantized to 32 (current 32)
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.424 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.424 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.425 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.425 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 10:16:32 np0005473739 podman[325710]: 2025-10-07 14:16:32.431826377 +0000 UTC m=+0.071197241 container create 179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.431 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updated VIF entry in instance network info cache for port 68a7ca31-4ee2-4e32-9574-3113f63090cf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.432 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updating instance_info_cache with network_info: [{"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:32 np0005473739 systemd[1]: Started libpod-conmon-179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c.scope.
Oct  7 10:16:32 np0005473739 podman[325710]: 2025-10-07 14:16:32.396278149 +0000 UTC m=+0.035649043 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:16:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:16:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7783d4fd75c24f0a72d4cfed4e7ddb762e4c7fbdb995022c9958d288ab1b9453/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.527 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.528 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-changed-ec2bde76-e053-498c-9d73-ef340b6cfe82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.528 2 DEBUG nova.compute.manager [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Refreshing instance network info cache due to event network-changed-ec2bde76-e053-498c-9d73-ef340b6cfe82. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.528 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.528 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.529 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Refreshing network info cache for port ec2bde76-e053-498c-9d73-ef340b6cfe82 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:16:32 np0005473739 podman[325710]: 2025-10-07 14:16:32.530798932 +0000 UTC m=+0.170169836 container init 179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:16:32 np0005473739 podman[325710]: 2025-10-07 14:16:32.539018069 +0000 UTC m=+0.178388953 container start 179b031c8dc198d24aba4a9eabc4d52a9bc31d2cefb1f827c82b8f8efdff2a8c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:16:32 np0005473739 neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0[325748]: [NOTICE]   (325758) : New worker (325765) forked
Oct  7 10:16:32 np0005473739 neutron-haproxy-ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0[325748]: [NOTICE]   (325758) : Loading success.
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.607 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ec2bde76-e053-498c-9d73-ef340b6cfe82 in datapath 8ca58653-b705-43e9-827e-3bbf7240a807 unbound from our chassis#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.609 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8ca58653-b705-43e9-827e-3bbf7240a807#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.624 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4a500c-cbff-47f4-9ee8-6f22533598ab]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.625 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8ca58653-b1 in ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.627 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8ca58653-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.627 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[312ff651-7ffd-48a6-88d1-ab4d08351b17]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a74d376b-46c9-4dc4-ae29-26600ed781fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.642 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[ff90daba-64fe-40b3-9d59-b4547eec8ec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:16:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3024468296' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:16:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:16:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3024468296' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5ceb4c1d-4fe5-4009-b4d9-e8f6b4775470]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.703 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846592.7027683, 4aa20e30-d71a-4765-9b3e-a72a156d2c88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.703 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] VM Started (Lifecycle Event)#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.718 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b078c6d5-614d-4214-af5d-c4f09ad70482]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 NetworkManager[44949]: <info>  [1759846592.7257] manager: (tap8ca58653-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/257)
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.723 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[936a1b3e-2d25-43cd-a6d5-1741e8fb383f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 systemd-udevd[325516]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.748 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.755 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846592.703018, 4aa20e30-d71a-4765-9b3e-a72a156d2c88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.755 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.765 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d169dffa-f8fd-4e42-966e-22350b24c789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.768 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbe4b6a-a236-40ab-b57f-cfbe51425309]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 podman[325807]: 2025-10-07 14:16:32.782134012 +0000 UTC m=+0.051489341 container create ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 10:16:32 np0005473739 NetworkManager[44949]: <info>  [1759846592.7973] device (tap8ca58653-b0): carrier: link connected
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.804 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[71417853-1274-4e4f-9838-e1f3890af24f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.827 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[24f2ef52-e1e8-4220-8b92-0830266f838e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8ca58653-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:fa:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717636, 'reachable_time': 18852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 325826, 'error': None, 'target': 'ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 systemd[1]: Started libpod-conmon-ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a.scope.
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.849 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cda5c29e-642f-4b0c-8344-c9597b7eaba1]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe8e:fa9a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 717636, 'tstamp': 717636}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 325829, 'error': None, 'target': 'ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 podman[325807]: 2025-10-07 14:16:32.765254866 +0000 UTC m=+0.034610225 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:16:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.875 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0bccad0-51a1-4f24-bd77-40589a25a34e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8ca58653-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:8e:fa:9a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 170], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 717636, 'reachable_time': 18852, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 325833, 'error': None, 'target': 'ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 podman[325807]: 2025-10-07 14:16:32.887529356 +0000 UTC m=+0.156884705 container init ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:16:32 np0005473739 podman[325807]: 2025-10-07 14:16:32.896683737 +0000 UTC m=+0.166039066 container start ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:16:32 np0005473739 podman[325807]: 2025-10-07 14:16:32.9005427 +0000 UTC m=+0.169898059 container attach ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 10:16:32 np0005473739 systemd[1]: libpod-ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a.scope: Deactivated successfully.
Oct  7 10:16:32 np0005473739 compassionate_mendeleev[325830]: 167 167
Oct  7 10:16:32 np0005473739 conmon[325830]: conmon ba3244946807c252c44d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a.scope/container/memory.events
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.914 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23462fda-bf81-4dd1-baad-cbfaff14bac4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 podman[325838]: 2025-10-07 14:16:32.955691397 +0000 UTC m=+0.032809758 container died ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.984 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1920694f-ae8c-435a-bf38-d491eb52d969]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.986 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8ca58653-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.986 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.987 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8ca58653-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.988 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:32 np0005473739 NetworkManager[44949]: <info>  [1759846592.9898] manager: (tap8ca58653-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/258)
Oct  7 10:16:32 np0005473739 kernel: tap8ca58653-b0: entered promiscuous mode
Oct  7 10:16:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:32.993 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8ca58653-b0, col_values=(('external_ids', {'iface-id': 'f1bc0e82-dad6-41a5-b620-24a2adc845a9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:32 np0005473739 nova_compute[259550]: 2025-10-07 14:16:32.988 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:32Z|00558|binding|INFO|Releasing lport f1bc0e82-dad6-41a5-b620-24a2adc845a9 from this chassis (sb_readonly=0)
Oct  7 10:16:33 np0005473739 nova_compute[259550]: 2025-10-07 14:16:33.001 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:16:33 np0005473739 podman[325838]: 2025-10-07 14:16:33.007344861 +0000 UTC m=+0.084463202 container remove ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:16:33 np0005473739 nova_compute[259550]: 2025-10-07 14:16:33.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:33.011 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8ca58653-b705-43e9-827e-3bbf7240a807.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8ca58653-b705-43e9-827e-3bbf7240a807.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:33.012 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[60634390-b973-4e24-99ad-0e65870a3604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:33.013 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-8ca58653-b705-43e9-827e-3bbf7240a807
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/8ca58653-b705-43e9-827e-3bbf7240a807.pid.haproxy
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 8ca58653-b705-43e9-827e-3bbf7240a807
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:16:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:33.014 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807', 'env', 'PROCESS_TAG=haproxy-8ca58653-b705-43e9-827e-3bbf7240a807', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8ca58653-b705-43e9-827e-3bbf7240a807.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:16:33 np0005473739 systemd[1]: libpod-conmon-ba3244946807c252c44db9f5e97d019fa35768a122a94ffb56ed4e86c5a5c62a.scope: Deactivated successfully.
Oct  7 10:16:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-939088120ee116cceb0c3ade37b0269a883a25d9c9a1a5c68d4bc1eee4c58bb4-merged.mount: Deactivated successfully.
Oct  7 10:16:33 np0005473739 nova_compute[259550]: 2025-10-07 14:16:33.128 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:16:33 np0005473739 podman[325865]: 2025-10-07 14:16:33.232424807 +0000 UTC m=+0.045226236 container create e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 10:16:33 np0005473739 systemd[1]: Started libpod-conmon-e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec.scope.
Oct  7 10:16:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:16:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717c5c98f00ec96e267e62aa8f24ad374bc25e3b59593d083d1578a260701cd0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717c5c98f00ec96e267e62aa8f24ad374bc25e3b59593d083d1578a260701cd0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717c5c98f00ec96e267e62aa8f24ad374bc25e3b59593d083d1578a260701cd0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/717c5c98f00ec96e267e62aa8f24ad374bc25e3b59593d083d1578a260701cd0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:33 np0005473739 podman[325865]: 2025-10-07 14:16:33.210877038 +0000 UTC m=+0.023678477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:16:33 np0005473739 podman[325865]: 2025-10-07 14:16:33.323353349 +0000 UTC m=+0.136154778 container init e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:16:33 np0005473739 podman[325865]: 2025-10-07 14:16:33.332286805 +0000 UTC m=+0.145088224 container start e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:16:33 np0005473739 podman[325865]: 2025-10-07 14:16:33.33741801 +0000 UTC m=+0.150219459 container attach e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:16:33 np0005473739 podman[325908]: 2025-10-07 14:16:33.424637214 +0000 UTC m=+0.052466676 container create 05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:16:33 np0005473739 systemd[1]: Started libpod-conmon-05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753.scope.
Oct  7 10:16:33 np0005473739 nova_compute[259550]: 2025-10-07 14:16:33.481 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:16:33 np0005473739 nova_compute[259550]: 2025-10-07 14:16:33.482 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:16:33 np0005473739 nova_compute[259550]: 2025-10-07 14:16:33.482 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:16:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:16:33 np0005473739 podman[325908]: 2025-10-07 14:16:33.395938167 +0000 UTC m=+0.023767659 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:16:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b7b1cb4503c16a2a98e5b8ed9040d0c7346f943fe58ca2f33977494e163ea458/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:16:33 np0005473739 podman[325908]: 2025-10-07 14:16:33.516910522 +0000 UTC m=+0.144740014 container init 05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  7 10:16:33 np0005473739 podman[325908]: 2025-10-07 14:16:33.525397706 +0000 UTC m=+0.153227168 container start 05cf35aca424565d0626631f889b3225bbbaac461a5badf791563efd529b3753 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:16:33 np0005473739 neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807[325923]: [NOTICE]   (325927) : New worker (325929) forked
Oct  7 10:16:33 np0005473739 neutron-haproxy-ovnmeta-8ca58653-b705-43e9-827e-3bbf7240a807[325923]: [NOTICE]   (325927) : Loading success.
Oct  7 10:16:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1559: 305 pgs: 305 active+clean; 484 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.5 MiB/s wr, 168 op/s
Oct  7 10:16:33 np0005473739 nova_compute[259550]: 2025-10-07 14:16:33.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:16:33 np0005473739 nova_compute[259550]: 2025-10-07 14:16:33.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:16:34 np0005473739 nova_compute[259550]: 2025-10-07 14:16:34.372 2 DEBUG nova.compute.manager [req-aaf35bb0-7400-4e00-898b-25d5be6a11c8 req-f7d8c007-4ffc-49cb-a927-945b4fd32105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: fc163bed-856c-4ea5-9bf3-6989fb1027eb] Received event network-vif-deleted-432c69dd-fb1b-432b-b867-9fe29716430d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:34 np0005473739 focused_solomon[325886]: {
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "osd_id": 2,
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "type": "bluestore"
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:    },
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "osd_id": 1,
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "type": "bluestore"
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:    },
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "osd_id": 0,
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:        "type": "bluestore"
Oct  7 10:16:34 np0005473739 focused_solomon[325886]:    }
Oct  7 10:16:34 np0005473739 focused_solomon[325886]: }
Oct  7 10:16:34 np0005473739 systemd[1]: libpod-e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec.scope: Deactivated successfully.
Oct  7 10:16:34 np0005473739 systemd[1]: libpod-e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec.scope: Consumed 1.050s CPU time.
Oct  7 10:16:34 np0005473739 podman[325966]: 2025-10-07 14:16:34.446853369 +0000 UTC m=+0.028436913 container died e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:16:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-717c5c98f00ec96e267e62aa8f24ad374bc25e3b59593d083d1578a260701cd0-merged.mount: Deactivated successfully.
Oct  7 10:16:34 np0005473739 podman[325966]: 2025-10-07 14:16:34.508584989 +0000 UTC m=+0.090168533 container remove e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_solomon, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:16:34 np0005473739 systemd[1]: libpod-conmon-e45deaabb1ad6a3d5eecd45436e7a9f563b5a011ea1fe50f52063d856c35e2ec.scope: Deactivated successfully.
Oct  7 10:16:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:16:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:16:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:16:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:16:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 59215874-037b-4cbe-900c-990d51b957e2 does not exist
Oct  7 10:16:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 80aa4256-efa3-4386-8166-c69b3d8a5a4b does not exist
Oct  7 10:16:34 np0005473739 nova_compute[259550]: 2025-10-07 14:16:34.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:34Z|00076|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:6d:07 10.100.0.11
Oct  7 10:16:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:34Z|00077|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:6d:07 10.100.0.11
Oct  7 10:16:34 np0005473739 nova_compute[259550]: 2025-10-07 14:16:34.874 2 DEBUG nova.network.neutron [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.044 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.046 2 DEBUG oslo_concurrency.lockutils [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.046 2 DEBUG nova.network.neutron [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Refreshing network info cache for port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.049 2 DEBUG nova.virt.libvirt.vif [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.049 2 DEBUG nova.network.os_vif_util [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.050 2 DEBUG nova.network.os_vif_util [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.051 2 DEBUG os_vif [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.056 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1b8e1852-2a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.057 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1b8e1852-2a, col_values=(('external_ids', {'iface-id': '1b8e1852-2a5e-4c50-9ab0-110dfb492a49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:1e:ab:6d', 'vm-uuid': '8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:35 np0005473739 NetworkManager[44949]: <info>  [1759846595.0603] manager: (tap1b8e1852-2a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/259)
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.061 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.070 2 INFO os_vif [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a')#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.071 2 DEBUG nova.virt.libvirt.vif [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.072 2 DEBUG nova.network.os_vif_util [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.072 2 DEBUG nova.network.os_vif_util [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.088 2 DEBUG nova.virt.libvirt.guest [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] attach device xml: <interface type="ethernet">
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:1e:ab:6d"/>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <target dev="tap1b8e1852-2a"/>
Oct  7 10:16:35 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:16:35 np0005473739 nova_compute[259550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  7 10:16:35 np0005473739 kernel: tap1b8e1852-2a: entered promiscuous mode
Oct  7 10:16:35 np0005473739 NetworkManager[44949]: <info>  [1759846595.1042] manager: (tap1b8e1852-2a): new Tun device (/org/freedesktop/NetworkManager/Devices/260)
Oct  7 10:16:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:35Z|00559|binding|INFO|Claiming lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for this chassis.
Oct  7 10:16:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:35Z|00560|binding|INFO|1b8e1852-2a5e-4c50-9ab0-110dfb492a49: Claiming fa:16:3e:1e:ab:6d 10.100.0.9
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:35Z|00561|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 ovn-installed in OVS
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.140 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ab:6d 10.100.0.9'], port_security=['fa:16:3e:1e:ab:6d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '7', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1b8e1852-2a5e-4c50-9ab0-110dfb492a49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:35Z|00562|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 up in Southbound
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.142 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 in datapath b1d9f332-f920-4d6e-8e91-dd13ec334d51 bound to our chassis#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.144 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b1d9f332-f920-4d6e-8e91-dd13ec334d51#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:35 np0005473739 systemd-udevd[326038]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.152 2 DEBUG nova.compute.manager [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.153 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.153 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.153 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.154 2 DEBUG nova.compute.manager [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Processing event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.154 2 DEBUG nova.compute.manager [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.154 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.154 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.154 2 DEBUG oslo_concurrency.lockutils [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.155 2 DEBUG nova.compute.manager [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] No event matching network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 in dict_keys([('network-vif-plugged', '68a7ca31-4ee2-4e32-9574-3113f63090cf')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.155 2 WARNING nova.compute.manager [req-3cca5a28-093b-4c5d-b922-cf0c3a256806 req-e5789542-a27a-466e-9025-d2a8c460fada 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received unexpected event network-vif-plugged-ec2bde76-e053-498c-9d73-ef340b6cfe82 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:16:35 np0005473739 NetworkManager[44949]: <info>  [1759846595.1654] device (tap1b8e1852-2a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:16:35 np0005473739 NetworkManager[44949]: <info>  [1759846595.1694] device (tap1b8e1852-2a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.169 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6760f675-852e-4ace-b04f-3a3d9c1bfc05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.185 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updated VIF entry in instance network info cache for port ec2bde76-e053-498c-9d73-ef340b6cfe82. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.186 2 DEBUG nova.network.neutron [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Updating instance_info_cache with network_info: [{"id": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "address": "fa:16:3e:6f:be:e9", "network": {"id": "b044fd19-c1bd-4478-b5ed-6feb8831fea0", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1731875233", "subnets": [{"cidr": "10.100.0.0/24", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68a7ca31-4e", "ovs_interfaceid": "68a7ca31-4ee2-4e32-9574-3113f63090cf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "address": "fa:16:3e:09:34:a2", "network": {"id": "8ca58653-b705-43e9-827e-3bbf7240a807", "bridge": "br-int", "label": "tempest-ServersTestMultiNic-1758201769", "subnets": [{"cidr": "10.100.1.0/24", "dns": [], "gateway": {"address": "10.100.1.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.1.60", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "1a99ac1945604cf5a5a5bd917ea52280", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapec2bde76-e0", "ovs_interfaceid": "ec2bde76-e053-498c-9d73-ef340b6cfe82", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.203 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[11c8642d-cb4c-4692-82a1-9c99fb5def66]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.207 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7257dd9d-4337-4d6d-bbe9-cc8d4bdfc7cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.244 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[667545a8-8178-4135-95f8-09f830dc10ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.269 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd710a74-7b39-4656-8ae9-ff631807e7db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb1d9f332-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:19:be:96'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 11, 'rx_bytes': 1000, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 145], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 711898, 'reachable_time': 29007, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326046, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.293 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e8aec241-f037-49f3-9585-a5d685a304f0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711913, 'tstamp': 711913}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326047, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb1d9f332-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 711917, 'tstamp': 711917}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326047, 'error': None, 'target': 'ovnmeta-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.294 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb1d9f332-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.299 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb1d9f332-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.299 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.299 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb1d9f332-f0, col_values=(('external_ids', {'iface-id': '39e8b537-b932-40c7-bb18-5e90a537af13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:35.300 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.413 2 DEBUG oslo_concurrency.lockutils [req-d68140bd-5f44-45a7-90c2-5e671f0fdaa8 req-ddabdc56-a841-426e-819e-c2fa50d42b56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4aa20e30-d71a-4765-9b3e-a72a156d2c88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.420 2 DEBUG nova.virt.libvirt.driver [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.420 2 DEBUG nova.virt.libvirt.driver [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.421 2 DEBUG nova.virt.libvirt.driver [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:04:8c:cc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.421 2 DEBUG nova.virt.libvirt.driver [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] No VIF found with MAC fa:16:3e:1e:ab:6d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.537 2 DEBUG nova.virt.libvirt.guest [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <nova:name>tempest-tempest.common.compute-instance-985989312</nova:name>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:16:35</nova:creationTime>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:16:35 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:    <nova:port uuid="8718eef8-8e7a-42ab-8df9-b469e81779d9">
Oct  7 10:16:35 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:    <nova:port uuid="1b8e1852-2a5e-4c50-9ab0-110dfb492a49">
Oct  7 10:16:35 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:16:35 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:16:35 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:16:35 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:16:35 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:16:35 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:16:35 np0005473739 nova_compute[259550]: 2025-10-07 14:16:35.600 2 DEBUG oslo_concurrency.lockutils [None req-b1fedf57-80d3-46c6-ab3f-432701d2897f eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 11.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1560: 305 pgs: 305 active+clean; 484 MiB data, 798 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.5 MiB/s wr, 213 op/s
Oct  7 10:16:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e213 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:36Z|00078|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:1e:ab:6d 10.100.0.9
Oct  7 10:16:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:36Z|00079|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:1e:ab:6d 10.100.0.9
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.523 2 DEBUG nova.compute.manager [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.524 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.524 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.525 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.525 2 DEBUG nova.compute.manager [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Processing event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.525 2 DEBUG nova.compute.manager [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.525 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.525 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.526 2 DEBUG oslo_concurrency.lockutils [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.526 2 DEBUG nova.compute.manager [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] No waiting events found dispatching network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.526 2 WARNING nova.compute.manager [req-7b74a8eb-7ed1-4b0b-b7e5-a43334897ed9 req-f88b02b9-aa66-44dc-961b-f85a462bdb93 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Received unexpected event network-vif-plugged-68a7ca31-4ee2-4e32-9574-3113f63090cf for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.526 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846581.5224988, c14b06ec-ce54-4081-8d72-b22529c3b0b7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.526 2 INFO nova.compute.manager [-] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.528 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Instance event wait completed in 3 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.532 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846596.5318658, 4aa20e30-d71a-4765-9b3e-a72a156d2c88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.532 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.535 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.540 2 INFO nova.virt.libvirt.driver [-] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Instance spawned successfully.#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.541 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.576 2 DEBUG nova.compute.manager [None req-52f76325-e81f-4d0c-86d2-b5abaf04d7cf - - - - - -] [instance: c14b06ec-ce54-4081-8d72-b22529c3b0b7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.612 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.616 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.617 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.617 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.617 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.618 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.618 2 DEBUG nova.virt.libvirt.driver [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.622 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.706 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.764 2 INFO nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Took 17.74 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.764 2 DEBUG nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.870 2 INFO nova.compute.manager [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Took 19.34 seconds to build instance.#033[00m
Oct  7 10:16:36 np0005473739 nova_compute[259550]: 2025-10-07 14:16:36.934 2 DEBUG oslo_concurrency.lockutils [None req-ce5522ba-ef99-4914-9004-90213e104469 ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.466s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.384 2 DEBUG nova.network.neutron [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updated VIF entry in instance network info cache for port 1b8e1852-2a5e-4c50-9ab0-110dfb492a49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.385 2 DEBUG nova.network.neutron [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Updating instance_info_cache with network_info: [{"id": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "address": "fa:16:3e:04:8c:cc", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8718eef8-8e", "ovs_interfaceid": "8718eef8-8e7a-42ab-8df9-b469e81779d9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.419 2 DEBUG nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.419 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.420 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.420 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.420 2 DEBUG nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] No waiting events found dispatching network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.420 2 WARNING nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received unexpected event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.420 2 DEBUG nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.421 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.421 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.421 2 DEBUG oslo_concurrency.lockutils [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.421 2 DEBUG nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] No waiting events found dispatching network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.422 2 WARNING nova.compute.manager [req-d5cadc2b-4c4a-48cf-8369-7a07105c606e req-e607f0ad-5af0-405c-9bdf-76f5fb52ef82 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a] Received unexpected event network-vif-plugged-1b8e1852-2a5e-4c50-9ab0-110dfb492a49 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.424 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.425 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.425 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Acquiring lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.425 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.425 2 DEBUG oslo_concurrency.lockutils [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lock "a23d6956-f85a-40b1-9e54-1b32d2af191e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.427 2 INFO nova.compute.manager [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Terminating instance#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.427 2 DEBUG nova.compute.manager [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.467 2 DEBUG oslo_concurrency.lockutils [req-79fe507a-8ae7-45ac-a7a8-5462d79c66bd req-9b18cba2-4893-4847-abb8-6e6c18696bac 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:16:37 np0005473739 kernel: tapae1b9c2d-38 (unregistering): left promiscuous mode
Oct  7 10:16:37 np0005473739 NetworkManager[44949]: <info>  [1759846597.4822] device (tapae1b9c2d-38): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00563|binding|INFO|Releasing lport ae1b9c2d-384d-4134-8799-babeadd70605 from this chassis (sb_readonly=0)
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00564|binding|INFO|Setting lport ae1b9c2d-384d-4134-8799-babeadd70605 down in Southbound
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00565|binding|INFO|Removing iface tapae1b9c2d-38 ovn-installed in OVS
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.506 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:1b:e2 10.100.0.12'], port_security=['fa:16:3e:09:1b:e2 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': 'a23d6956-f85a-40b1-9e54-1b32d2af191e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '972aa9372a81406990460fb46cf827e0', 'neutron:revision_number': '4', 'neutron:security_group_ids': '21573a58-df26-46b3-96bc-30ac8d7d5432', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8439febc-2ab3-4376-877e-4af159445d58, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ae1b9c2d-384d-4134-8799-babeadd70605) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.507 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ae1b9c2d-384d-4134-8799-babeadd70605 in datapath 4fd643de-a9bb-4c41-8437-fb901dfd8879 unbound from our chassis#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.508 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4fd643de-a9bb-4c41-8437-fb901dfd8879#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.518 2 DEBUG oslo_concurrency.lockutils [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Acquiring lock "interface-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.519 2 DEBUG oslo_concurrency.lockutils [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lock "interface-8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a-1b8e1852-2a5e-4c50-9ab0-110dfb492a49" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.528 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[886139a8-31d2-4a00-9349-596df3c6f159]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.546 2 DEBUG nova.objects.instance [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Lazy-loading 'flavor' on Instance uuid 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.566 2 DEBUG nova.virt.libvirt.vif [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:41Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-tempest.common.compute-instance-985989312',display_name='tempest-tempest.common.compute-instance-985989312',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-tempest-common-compute-instance-985989312',id=58,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBMkfSkQ93Qtzd5IzdmUwhapwTZlk6XmzqauVYMwawYEg7PS5Qu+K2TkaUA05QLzSGqVi+tAqLl7Z1F1ye3YCecbLZ5Ci1FXr7K1Vx56G5xesPmyz1iflwCI9+ENs+SvalA==',key_name='tempest-keypair-1366576589',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:15:58Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a12799b2087644358b2597f825ff94da',ramdisk_id='',reservation_id='r-hnk7jfbi',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesTestJSON-1744123112',owner_user_name='tempest-AttachInterfacesTestJSON-1744123112-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:15:58Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='eb31457d04de49c28158a546d1b30b77',uuid=8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.567 2 DEBUG nova.network.os_vif_util [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converting VIF {"id": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "address": "fa:16:3e:1e:ab:6d", "network": {"id": "b1d9f332-f920-4d6e-8e91-dd13ec334d51", "bridge": "br-int", "label": "tempest-AttachInterfacesTestJSON-23134797-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a12799b2087644358b2597f825ff94da", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1b8e1852-2a", "ovs_interfaceid": "1b8e1852-2a5e-4c50-9ab0-110dfb492a49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.568 2 DEBUG nova.network.os_vif_util [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:1e:ab:6d,bridge_name='br-int',has_traffic_filtering=True,id=1b8e1852-2a5e-4c50-9ab0-110dfb492a49,network=Network(b1d9f332-f920-4d6e-8e91-dd13ec334d51),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap1b8e1852-2a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.572 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.576 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.578 2 DEBUG nova.virt.libvirt.driver [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Attempting to detach device tap1b8e1852-2a from instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.579 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:1e:ab:6d"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <target dev="tap1b8e1852-2a"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:16:37 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  7 10:16:37 np0005473739 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Deactivated successfully.
Oct  7 10:16:37 np0005473739 systemd[1]: machine-qemu\x2d69\x2dinstance\x2d0000003c.scope: Consumed 19.608s CPU time.
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.583 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8018e650-01fe-4548-897f-2282e78f5b53]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:37 np0005473739 systemd-machined[214580]: Machine qemu-69-instance-0000003c terminated.
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.594 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.597 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[41aea983-aa6f-4bc7-8b60-90381cdf0f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.604 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <name>instance-0000003a</name>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <uuid>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</uuid>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <nova:name>tempest-tempest.common.compute-instance-985989312</nova:name>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:16:35</nova:creationTime>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <nova:user uuid="eb31457d04de49c28158a546d1b30b77">tempest-AttachInterfacesTestJSON-1744123112-project-member</nova:user>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <nova:project uuid="a12799b2087644358b2597f825ff94da">tempest-AttachInterfacesTestJSON-1744123112</nova:project>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <nova:port uuid="8718eef8-8e7a-42ab-8df9-b469e81779d9">
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <nova:port uuid="1b8e1852-2a5e-4c50-9ab0-110dfb492a49">
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:16:37 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <entry name='serial'>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</entry>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <entry name='uuid'>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</entry>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk' index='2'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a_disk.config' index='1'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:04:8c:cc'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target dev='tap8718eef8-8e'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:1e:ab:6d'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target dev='tap1b8e1852-2a'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='net1'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <source path='/dev/pts/4'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/console.log' append='off'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/4'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <source path='/dev/pts/4'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a/console.log' append='off'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5905' autoport='yes' listen='::0'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c111,c966</label>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c111,c966</imagelabel>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:16:37 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:16:37 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.604 2 INFO nova.virt.libvirt.driver [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Successfully detached device tap1b8e1852-2a from instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a from the persistent domain config.#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.605 2 DEBUG nova.virt.libvirt.driver [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] (1/8): Attempting to detach device tap1b8e1852-2a with device alias net1 from instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.605 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:1e:ab:6d"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <target dev="tap1b8e1852-2a"/>
Oct  7 10:16:37 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:16:37 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.614 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.615 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.615 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Acquiring lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.615 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.616 2 DEBUG oslo_concurrency.lockutils [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] Lock "4aa20e30-d71a-4765-9b3e-a72a156d2c88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.617 2 INFO nova.compute.manager [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Terminating instance#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.618 2 DEBUG nova.compute.manager [None req-9df2c7f6-8d8a-4fb5-998e-0bf7d85a56fa ff37c390826e43079eff2a1423ccc2b8 1a99ac1945604cf5a5a5bd917ea52280 - - default default] [instance: 4aa20e30-d71a-4765-9b3e-a72a156d2c88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.635 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[adc029ad-69ec-4f33-92b0-494a74f109e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.662 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23739bad-f541-4fb9-939b-1f430c24b546]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4fd643de-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:80:8e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 149], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 713303, 'reachable_time': 32732, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 326059, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.671 2 INFO nova.virt.libvirt.driver [-] [instance: a23d6956-f85a-40b1-9e54-1b32d2af191e] Instance destroyed successfully.#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.673 2 DEBUG nova.objects.instance [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Lazy-loading 'resources' on Instance uuid a23d6956-f85a-40b1-9e54-1b32d2af191e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.684 2 DEBUG nova.virt.libvirt.vif [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:15:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ListServerFiltersTestJSON-instance-289403182',display_name='tempest-ListServerFiltersTestJSON-instance-289403182',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(5),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-listserverfilterstestjson-instance-289403182',id=60,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=5,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:16:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=192,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='972aa9372a81406990460fb46cf827e0',ramdisk_id='',reservation_id='r-d58h1k0w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ListServerFiltersTestJSON-937453277',owner_user_name='tempest-ListServerFiltersTestJSON-937453277-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:16:01Z,user_data=None,user_id='d8faa7636d634de587c1631c3452264e',uuid=a23d6956-f85a-40b1-9e54-1b32d2af191e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.684 2 DEBUG nova.network.os_vif_util [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converting VIF {"id": "ae1b9c2d-384d-4134-8799-babeadd70605", "address": "fa:16:3e:09:1b:e2", "network": {"id": "4fd643de-a9bb-4c41-8437-fb901dfd8879", "bridge": "br-int", "label": "tempest-ListServerFiltersTestJSON-2019304827-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "972aa9372a81406990460fb46cf827e0", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapae1b9c2d-38", "ovs_interfaceid": "ae1b9c2d-384d-4134-8799-babeadd70605", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.685 2 DEBUG nova.network.os_vif_util [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.685 2 DEBUG os_vif [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.686 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7d4121-3d9e-49d7-a03b-0e05378e7605]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713317, 'tstamp': 713317}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326069, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4fd643de-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 713321, 'tstamp': 713321}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 326069, 'error': None, 'target': 'ovnmeta-4fd643de-a9bb-4c41-8437-fb901dfd8879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapae1b9c2d-38, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.688 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4fd643de-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:16:37 np0005473739 kernel: tap68a7ca31-4e (unregistering): left promiscuous mode
Oct  7 10:16:37 np0005473739 NetworkManager[44949]: <info>  [1759846597.6971] device (tap68a7ca31-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00566|binding|INFO|Releasing lport 68a7ca31-4ee2-4e32-9574-3113f63090cf from this chassis (sb_readonly=0)
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00567|binding|INFO|Setting lport 68a7ca31-4ee2-4e32-9574-3113f63090cf down in Southbound
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00568|binding|INFO|Removing iface tap68a7ca31-4e ovn-installed in OVS
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.720 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4fd643de-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.720 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.721 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4fd643de-a0, col_values=(('external_ids', {'iface-id': '879f54f7-e219-4616-9199-264d02fdd4cf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.721 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:37 np0005473739 kernel: tap1b8e1852-2a (unregistering): left promiscuous mode
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.729 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6f:be:e9 10.100.0.6'], port_security=['fa:16:3e:6f:be:e9 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/24', 'neutron:device_id': '4aa20e30-d71a-4765-9b3e-a72a156d2c88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b044fd19-c1bd-4478-b5ed-6feb8831fea0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=21c6dcc3-8dd4-4e2a-af6b-6893629fd93b, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=68a7ca31-4ee2-4e32-9574-3113f63090cf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.731 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 68a7ca31-4ee2-4e32-9574-3113f63090cf in datapath b044fd19-c1bd-4478-b5ed-6feb8831fea0 unbound from our chassis#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.729 2 INFO os_vif [None req-b4ba2e0c-01da-431d-9cd5-a419d8311db0 d8faa7636d634de587c1631c3452264e 972aa9372a81406990460fb46cf827e0 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:1b:e2,bridge_name='br-int',has_traffic_filtering=True,id=ae1b9c2d-384d-4134-8799-babeadd70605,network=Network(4fd643de-a9bb-4c41-8437-fb901dfd8879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapae1b9c2d-38')#033[00m
Oct  7 10:16:37 np0005473739 kernel: tapec2bde76-e0 (unregistering): left promiscuous mode
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.733 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b044fd19-c1bd-4478-b5ed-6feb8831fea0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ab98bc5-891b-47a6-bb95-fa2a00660631]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.734 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b044fd19-c1bd-4478-b5ed-6feb8831fea0 namespace which is not needed anymore#033[00m
Oct  7 10:16:37 np0005473739 NetworkManager[44949]: <info>  [1759846597.7360] device (tap1b8e1852-2a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:16:37 np0005473739 NetworkManager[44949]: <info>  [1759846597.7372] device (tapec2bde76-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.768 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759846597.7391524, 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00569|binding|INFO|Releasing lport ec2bde76-e053-498c-9d73-ef340b6cfe82 from this chassis (sb_readonly=0)
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00570|binding|INFO|Setting lport ec2bde76-e053-498c-9d73-ef340b6cfe82 down in Southbound
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00571|binding|INFO|Releasing lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 from this chassis (sb_readonly=0)
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00572|binding|INFO|Setting lport 1b8e1852-2a5e-4c50-9ab0-110dfb492a49 down in Southbound
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00573|binding|INFO|Removing iface tapec2bde76-e0 ovn-installed in OVS
Oct  7 10:16:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:16:37Z|00574|binding|INFO|Removing iface tap1b8e1852-2a ovn-installed in OVS
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.769 2 DEBUG nova.virt.libvirt.driver [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] Start waiting for the detach event from libvirt for device tap1b8e1852-2a with device alias net1 for instance 8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.774 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:16:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1561: 305 pgs: 305 active+clean; 484 MiB data, 799 MiB used, 59 GiB / 60 GiB avail; 636 KiB/s rd, 56 KiB/s wr, 99 op/s
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.782 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:1e:ab:6d 10.100.0.9'], port_security=['fa:16:3e:1e:ab:6d 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1d9f332-f920-4d6e-8e91-dd13ec334d51', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-AttachInterfacesTestJSON-997458546', 'neutron:project_id': 'a12799b2087644358b2597f825ff94da', 'neutron:revision_number': '9', 'neutron:security_group_ids': '66746743-039f-411c-bc2d-66e123229fb6', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=07be2d9a-2580-4f49-84bb-cee931c4f6d6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1b8e1852-2a5e-4c50-9ab0-110dfb492a49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:37 np0005473739 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003e.scope: Deactivated successfully.
Oct  7 10:16:37 np0005473739 systemd[1]: machine-qemu\x2d72\x2dinstance\x2d0000003e.scope: Consumed 2.020s CPU time.
Oct  7 10:16:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:16:37.784 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:34:a2 10.100.1.60'], port_security=['fa:16:3e:09:34:a2 10.100.1.60'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.1.60/24', 'neutron:device_id': '4aa20e30-d71a-4765-9b3e-a72a156d2c88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8ca58653-b705-43e9-827e-3bbf7240a807', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '1a99ac1945604cf5a5a5bd917ea52280', 'neutron:revision_number': '4', 'neutron:security_group_ids': '40fc16f5-a52d-41ed-a9c0-651d80df54b4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a397202a-2ca0-49d4-bf1d-28612c8843c1, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ec2bde76-e053-498c-9d73-ef340b6cfe82) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:16:37 np0005473739 systemd-machined[214580]: Machine qemu-72-instance-0000003e terminated.
Oct  7 10:16:37 np0005473739 nova_compute[259550]: 2025-10-07 14:16:37.788 2 DEBUG nova.virt.libvirt.guest [None req-41d40662-72cc-41e4-b833-6c5f50c26243 eb31457d04de49c28158a546d1b30b77 a12799b2087644358b2597f825ff94da - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:1e:ab:6d"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap1b8e1852-2a"/></interface>not found in domain: <domain type='kvm' id='68'>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <name>instance-0000003a</name>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <uuid>8eb2e7f8-3d9c-4afb-840b-5a70b0f8446a</uuid>
Oct  7 10:16:37 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:22:01 np0005473739 podman[348203]: 2025-10-07 14:22:01.755417338 +0000 UTC m=+0.088040066 container cleanup 8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:22:01 np0005473739 systemd[1]: libpod-conmon-8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee.scope: Deactivated successfully.
Oct  7 10:22:01 np0005473739 podman[348248]: 2025-10-07 14:22:01.813741649 +0000 UTC m=+0.036590777 container remove 8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 10:22:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.819 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dde34601-90bb-416b-8b2f-f4585ecd2096]: (4, ('Tue Oct  7 02:22:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70 (8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee)\n8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee\nTue Oct  7 02:22:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70 (8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee)\n8c4351ba767e7d11a71761e7c81e4779aaaafba053ed65d4da447bc8606973ee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.820 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2f58bda8-c763-4695-bdb2-b2518761fc62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.821 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07bd0a98-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:01 np0005473739 nova_compute[259550]: 2025-10-07 14:22:01.823 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:01 np0005473739 kernel: tap07bd0a98-40: left promiscuous mode
Oct  7 10:22:01 np0005473739 nova_compute[259550]: 2025-10-07 14:22:01.842 2 DEBUG nova.virt.libvirt.vif [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:21:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerMetadataNegativeTestJSON-server-502915421',display_name='tempest-ServerMetadataNegativeTestJSON-server-502915421',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servermetadatanegativetestjson-server-502915421',id=87,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:21:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0675ef8ab0b84423b35c16687980a886',ramdisk_id='',reservation_id='r-a57ir3jh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerMetadataNegativeTestJSON-1962853244',owner_user_name='tempest-ServerMetadataNegativeTestJSON-1962853244-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:21:56Z,user_data=None,user_id='f3a5238c8d6b406aa83ab9cfd1b31cf1',uuid=ad8af3d5-66d4-4db1-bd40-42d766f2fde7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:22:01 np0005473739 nova_compute[259550]: 2025-10-07 14:22:01.843 2 DEBUG nova.network.os_vif_util [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Converting VIF {"id": "f13bcd51-d70f-4e6d-9161-9020447812fc", "address": "fa:16:3e:cd:66:73", "network": {"id": "07bd0a98-4ed2-404c-b943-5ab56d6fbe70", "bridge": "br-int", "label": "tempest-ServerMetadataNegativeTestJSON-842528261-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0675ef8ab0b84423b35c16687980a886", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf13bcd51-d7", "ovs_interfaceid": "f13bcd51-d70f-4e6d-9161-9020447812fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:01 np0005473739 nova_compute[259550]: 2025-10-07 14:22:01.844 2 DEBUG nova.network.os_vif_util [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cd:66:73,bridge_name='br-int',has_traffic_filtering=True,id=f13bcd51-d70f-4e6d-9161-9020447812fc,network=Network(07bd0a98-4ed2-404c-b943-5ab56d6fbe70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf13bcd51-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:01 np0005473739 nova_compute[259550]: 2025-10-07 14:22:01.844 2 DEBUG os_vif [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:66:73,bridge_name='br-int',has_traffic_filtering=True,id=f13bcd51-d70f-4e6d-9161-9020447812fc,network=Network(07bd0a98-4ed2-404c-b943-5ab56d6fbe70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf13bcd51-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:22:01 np0005473739 nova_compute[259550]: 2025-10-07 14:22:01.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:01 np0005473739 nova_compute[259550]: 2025-10-07 14:22:01.846 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf13bcd51-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:01 np0005473739 nova_compute[259550]: 2025-10-07 14:22:01.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:22:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.852 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8583bcff-c4ec-4eae-87a0-b602fe45c843]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:01 np0005473739 nova_compute[259550]: 2025-10-07 14:22:01.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:01 np0005473739 nova_compute[259550]: 2025-10-07 14:22:01.867 2 INFO os_vif [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cd:66:73,bridge_name='br-int',has_traffic_filtering=True,id=f13bcd51-d70f-4e6d-9161-9020447812fc,network=Network(07bd0a98-4ed2-404c-b943-5ab56d6fbe70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf13bcd51-d7')#033[00m
Oct  7 10:22:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.870 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f412c636-f322-4f93-ae1b-53808d39c47d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.875 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7f4344-7373-41cc-8c88-19d1fd907203]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.894 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1a3f75a2-5fea-4c32-934f-661acc5e6bd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749849, 'reachable_time': 20850, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 348276, 'error': None, 'target': 'ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.897 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-07bd0a98-4ed2-404c-b943-5ab56d6fbe70 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:22:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:01.897 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9db2a649-4c26-417f-a94d-7ac26ebbde46]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:01 np0005473739 systemd[1]: run-netns-ovnmeta\x2d07bd0a98\x2d4ed2\x2d404c\x2db943\x2d5ab56d6fbe70.mount: Deactivated successfully.
Oct  7 10:22:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1744: 305 pgs: 305 active+clean; 339 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 55 KiB/s wr, 223 op/s
Oct  7 10:22:02 np0005473739 rsyslogd[1004]: imjournal: 18258 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  7 10:22:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e234 do_prune osdmap full prune enabled
Oct  7 10:22:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e235 e235: 3 total, 3 up, 3 in
Oct  7 10:22:02 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e235: 3 total, 3 up, 3 in
Oct  7 10:22:02 np0005473739 nova_compute[259550]: 2025-10-07 14:22:02.324 2 DEBUG nova.storage.rbd_utils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] cloning vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk@97b2f88dc1be43ee906215f7a98ce3b8 to images/4a1936df-f472-466a-a569-3e6ba7a787d4 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:22:02 np0005473739 nova_compute[259550]: 2025-10-07 14:22:02.433 2 INFO nova.virt.libvirt.driver [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Deleting instance files /var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7_del#033[00m
Oct  7 10:22:02 np0005473739 nova_compute[259550]: 2025-10-07 14:22:02.434 2 INFO nova.virt.libvirt.driver [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Deletion of /var/lib/nova/instances/ad8af3d5-66d4-4db1-bd40-42d766f2fde7_del complete#033[00m
Oct  7 10:22:02 np0005473739 nova_compute[259550]: 2025-10-07 14:22:02.449 2 DEBUG nova.storage.rbd_utils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] flattening images/4a1936df-f472-466a-a569-3e6ba7a787d4 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:22:02 np0005473739 nova_compute[259550]: 2025-10-07 14:22:02.514 2 INFO nova.compute.manager [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Took 1.02 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:22:02 np0005473739 nova_compute[259550]: 2025-10-07 14:22:02.514 2 DEBUG oslo.service.loopingcall [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:22:02 np0005473739 nova_compute[259550]: 2025-10-07 14:22:02.515 2 DEBUG nova.compute.manager [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:22:02 np0005473739 nova_compute[259550]: 2025-10-07 14:22:02.515 2 DEBUG nova.network.neutron [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:22:02 np0005473739 nova_compute[259550]: 2025-10-07 14:22:02.936 2 DEBUG nova.storage.rbd_utils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] removing snapshot(97b2f88dc1be43ee906215f7a98ce3b8) on rbd image(d932a7ab-839c-48b9-804f-90cc8634e93b_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.160 2 DEBUG nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received event network-vif-unplugged-f13bcd51-d70f-4e6d-9161-9020447812fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.160 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.161 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.161 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.161 2 DEBUG nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] No waiting events found dispatching network-vif-unplugged-f13bcd51-d70f-4e6d-9161-9020447812fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.161 2 DEBUG nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received event network-vif-unplugged-f13bcd51-d70f-4e6d-9161-9020447812fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.161 2 DEBUG nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received event network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.162 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.162 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.162 2 DEBUG oslo_concurrency.lockutils [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.162 2 DEBUG nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] No waiting events found dispatching network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.162 2 WARNING nova.compute.manager [req-da1b0891-962b-4ee7-8147-819c29c834f1 req-cf612407-0460-4438-b8f3-b18c737748ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received unexpected event network-vif-plugged-f13bcd51-d70f-4e6d-9161-9020447812fc for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.202 2 DEBUG nova.network.neutron [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.222 2 INFO nova.compute.manager [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Took 0.71 seconds to deallocate network for instance.#033[00m
Oct  7 10:22:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e235 do_prune osdmap full prune enabled
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.259 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.260 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e236 e236: 3 total, 3 up, 3 in
Oct  7 10:22:03 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e236: 3 total, 3 up, 3 in
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.318 2 DEBUG nova.storage.rbd_utils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] creating snapshot(snap) on rbd image(4a1936df-f472-466a-a569-3e6ba7a787d4) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.355 2 DEBUG nova.compute.manager [req-d54c52f2-b80f-41af-adc6-92f4cbb6da85 req-9ddc8412-f384-4a33-8955-0ef99826bccd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Received event network-vif-deleted-f13bcd51-d70f-4e6d-9161-9020447812fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.404 2 DEBUG oslo_concurrency.processutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3537870051' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.850 2 DEBUG oslo_concurrency.processutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.856 2 DEBUG nova.compute.provider_tree [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:03 np0005473739 nova_compute[259550]: 2025-10-07 14:22:03.921 2 DEBUG nova.scheduler.client.report [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1747: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 386 MiB data, 838 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 5.3 MiB/s wr, 414 op/s
Oct  7 10:22:04 np0005473739 nova_compute[259550]: 2025-10-07 14:22:04.030 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.770s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:04 np0005473739 nova_compute[259550]: 2025-10-07 14:22:04.085 2 INFO nova.scheduler.client.report [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Deleted allocations for instance ad8af3d5-66d4-4db1-bd40-42d766f2fde7#033[00m
Oct  7 10:22:04 np0005473739 nova_compute[259550]: 2025-10-07 14:22:04.233 2 DEBUG oslo_concurrency.lockutils [None req-0773ff51-b35f-4e10-8bc1-31a814c114a1 f3a5238c8d6b406aa83ab9cfd1b31cf1 0675ef8ab0b84423b35c16687980a886 - - default default] Lock "ad8af3d5-66d4-4db1-bd40-42d766f2fde7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.745s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e236 do_prune osdmap full prune enabled
Oct  7 10:22:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e237 e237: 3 total, 3 up, 3 in
Oct  7 10:22:04 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e237: 3 total, 3 up, 3 in
Oct  7 10:22:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:05Z|00919|binding|INFO|Releasing lport e0f4a07d-63f3-4c49-8cad-69cdf20a2608 from this chassis (sb_readonly=0)
Oct  7 10:22:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:05Z|00920|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.799 2 INFO nova.virt.libvirt.driver [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Snapshot image upload complete#033[00m
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.799 2 DEBUG nova.compute.manager [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.870 2 INFO nova.compute.manager [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Shelve offloading#033[00m
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.877 2 INFO nova.virt.libvirt.driver [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance destroyed successfully.#033[00m
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.877 2 DEBUG nova.compute.manager [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.879 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.879 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.879 2 DEBUG nova.network.neutron [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.894 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.895 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:05 np0005473739 nova_compute[259550]: 2025-10-07 14:22:05.932 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:22:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1749: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 372 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 16 MiB/s rd, 7.8 MiB/s wr, 481 op/s
Oct  7 10:22:06 np0005473739 nova_compute[259550]: 2025-10-07 14:22:06.141 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:06 np0005473739 nova_compute[259550]: 2025-10-07 14:22:06.144 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:06 np0005473739 nova_compute[259550]: 2025-10-07 14:22:06.152 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:22:06 np0005473739 nova_compute[259550]: 2025-10-07 14:22:06.152 2 INFO nova.compute.claims [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:22:06 np0005473739 nova_compute[259550]: 2025-10-07 14:22:06.361 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:22:06 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d1a88338-6437-484e-8c42-0319df240c32 does not exist
Oct  7 10:22:06 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bf38e8d2-bb48-4150-8017-7c3da1158835 does not exist
Oct  7 10:22:06 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1287e576-192e-4c3b-bbce-2faa29287201 does not exist
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:22:06 np0005473739 podman[348559]: 2025-10-07 14:22:06.535956415 +0000 UTC m=+0.064460263 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:22:06 np0005473739 podman[348558]: 2025-10-07 14:22:06.563496552 +0000 UTC m=+0.092060992 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 10:22:06 np0005473739 nova_compute[259550]: 2025-10-07 14:22:06.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1197930786' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:06 np0005473739 nova_compute[259550]: 2025-10-07 14:22:06.842 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.481s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:06 np0005473739 nova_compute[259550]: 2025-10-07 14:22:06.848 2 DEBUG nova.compute.provider_tree [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:06 np0005473739 nova_compute[259550]: 2025-10-07 14:22:06.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:06 np0005473739 nova_compute[259550]: 2025-10-07 14:22:06.906 2 DEBUG nova.scheduler.client.report [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:06 np0005473739 podman[348733]: 2025-10-07 14:22:06.967730561 +0000 UTC m=+0.047942477 container create 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 10:22:07 np0005473739 systemd[1]: Started libpod-conmon-95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1.scope.
Oct  7 10:22:07 np0005473739 podman[348733]: 2025-10-07 14:22:06.940485472 +0000 UTC m=+0.020697388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:22:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:22:07 np0005473739 nova_compute[259550]: 2025-10-07 14:22:07.053 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:07 np0005473739 nova_compute[259550]: 2025-10-07 14:22:07.054 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:22:07 np0005473739 podman[348733]: 2025-10-07 14:22:07.101744541 +0000 UTC m=+0.181956457 container init 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:22:07 np0005473739 podman[348733]: 2025-10-07 14:22:07.109890107 +0000 UTC m=+0.190102023 container start 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:22:07 np0005473739 objective_banzai[348749]: 167 167
Oct  7 10:22:07 np0005473739 systemd[1]: libpod-95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1.scope: Deactivated successfully.
Oct  7 10:22:07 np0005473739 podman[348733]: 2025-10-07 14:22:07.175744166 +0000 UTC m=+0.255956092 container attach 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:22:07 np0005473739 podman[348733]: 2025-10-07 14:22:07.176252869 +0000 UTC m=+0.256464795 container died 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:22:07 np0005473739 nova_compute[259550]: 2025-10-07 14:22:07.213 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:22:07 np0005473739 nova_compute[259550]: 2025-10-07 14:22:07.214 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:22:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay-483dce97899cb92b2e7c1f4f9fc91d7dcbb0cb254e905219494fff7e6ddba55e-merged.mount: Deactivated successfully.
Oct  7 10:22:07 np0005473739 podman[348733]: 2025-10-07 14:22:07.259052286 +0000 UTC m=+0.339264212 container remove 95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_banzai, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 10:22:07 np0005473739 systemd[1]: libpod-conmon-95acac42c385337fb149810c131208059c4fceb745f3c12a131b931f3a481fc1.scope: Deactivated successfully.
Oct  7 10:22:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:22:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:22:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:22:07 np0005473739 nova_compute[259550]: 2025-10-07 14:22:07.440 2 INFO nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:22:07 np0005473739 podman[348775]: 2025-10-07 14:22:07.44512211 +0000 UTC m=+0.039914904 container create 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:22:07 np0005473739 systemd[1]: Started libpod-conmon-76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9.scope.
Oct  7 10:22:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:22:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:07 np0005473739 podman[348775]: 2025-10-07 14:22:07.428582503 +0000 UTC m=+0.023375307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:22:07 np0005473739 podman[348775]: 2025-10-07 14:22:07.524745364 +0000 UTC m=+0.119538178 container init 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:22:07 np0005473739 podman[348775]: 2025-10-07 14:22:07.537757777 +0000 UTC m=+0.132550571 container start 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 10:22:07 np0005473739 podman[348775]: 2025-10-07 14:22:07.540835798 +0000 UTC m=+0.135628592 container attach 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:22:07 np0005473739 nova_compute[259550]: 2025-10-07 14:22:07.582 2 DEBUG nova.policy [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:22:07 np0005473739 nova_compute[259550]: 2025-10-07 14:22:07.779 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:22:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1750: 305 pgs: 1 active+clean+snaptrim, 304 active+clean; 372 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 12 MiB/s rd, 7.8 MiB/s wr, 326 op/s
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.325 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.326 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.326 2 INFO nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Creating image(s)#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.354 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.398 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.427 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.433 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.478 2 DEBUG nova.network.neutron [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.521 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.522 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.523 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.523 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.549 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.554 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:08 np0005473739 great_poitras[348793]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:22:08 np0005473739 great_poitras[348793]: --> relative data size: 1.0
Oct  7 10:22:08 np0005473739 great_poitras[348793]: --> All data devices are unavailable
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.592 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:22:08 np0005473739 systemd[1]: libpod-76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9.scope: Deactivated successfully.
Oct  7 10:22:08 np0005473739 systemd[1]: libpod-76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9.scope: Consumed 1.016s CPU time.
Oct  7 10:22:08 np0005473739 podman[348775]: 2025-10-07 14:22:08.617627114 +0000 UTC m=+1.212419918 container died 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 10:22:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-32bc02db5c1cdf59de1a652a2137d090bacee84fad3befe3a9877dcd37c58124-merged.mount: Deactivated successfully.
Oct  7 10:22:08 np0005473739 podman[348775]: 2025-10-07 14:22:08.679770677 +0000 UTC m=+1.274563471 container remove 76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 10:22:08 np0005473739 systemd[1]: libpod-conmon-76d05159f8d3740f0c330ae55079e5375b62ff686f84085080814552cd650cb9.scope: Deactivated successfully.
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.890 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.942 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Successfully created port: 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:22:08 np0005473739 nova_compute[259550]: 2025-10-07 14:22:08.981 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:22:09 np0005473739 nova_compute[259550]: 2025-10-07 14:22:09.081 2 DEBUG nova.objects.instance [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid e1499379-f6d6-4edd-8af0-2ccf7c6e6683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:09 np0005473739 nova_compute[259550]: 2025-10-07 14:22:09.101 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:22:09 np0005473739 nova_compute[259550]: 2025-10-07 14:22:09.101 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Ensure instance console log exists: /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:22:09 np0005473739 nova_compute[259550]: 2025-10-07 14:22:09.102 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:09 np0005473739 nova_compute[259550]: 2025-10-07 14:22:09.102 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:09 np0005473739 nova_compute[259550]: 2025-10-07 14:22:09.102 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:09 np0005473739 podman[349139]: 2025-10-07 14:22:09.305177877 +0000 UTC m=+0.034849761 container create 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 10:22:09 np0005473739 systemd[1]: Started libpod-conmon-9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042.scope.
Oct  7 10:22:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:22:09 np0005473739 podman[349139]: 2025-10-07 14:22:09.289910254 +0000 UTC m=+0.019582158 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:22:09 np0005473739 podman[349139]: 2025-10-07 14:22:09.388781275 +0000 UTC m=+0.118453179 container init 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:22:09 np0005473739 podman[349139]: 2025-10-07 14:22:09.396198482 +0000 UTC m=+0.125870366 container start 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:22:09 np0005473739 podman[349139]: 2025-10-07 14:22:09.399830697 +0000 UTC m=+0.129502611 container attach 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:22:09 np0005473739 jovial_dewdney[349155]: 167 167
Oct  7 10:22:09 np0005473739 systemd[1]: libpod-9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042.scope: Deactivated successfully.
Oct  7 10:22:09 np0005473739 podman[349139]: 2025-10-07 14:22:09.405160598 +0000 UTC m=+0.134832482 container died 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:22:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-66a2f4bf1e6fdec8b2278831b4227c54424b6bc7a37267c46084cd436904fb2f-merged.mount: Deactivated successfully.
Oct  7 10:22:09 np0005473739 podman[349139]: 2025-10-07 14:22:09.453361741 +0000 UTC m=+0.183033625 container remove 9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_dewdney, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:22:09 np0005473739 systemd[1]: libpod-conmon-9d3cdf63e625591ef7e7ef1f0f8a04223de1d6fcbb448c8c130d716072713042.scope: Deactivated successfully.
Oct  7 10:22:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:09Z|00921|binding|INFO|Releasing lport e0f4a07d-63f3-4c49-8cad-69cdf20a2608 from this chassis (sb_readonly=0)
Oct  7 10:22:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:09Z|00922|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct  7 10:22:09 np0005473739 nova_compute[259550]: 2025-10-07 14:22:09.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:09 np0005473739 podman[349178]: 2025-10-07 14:22:09.639764515 +0000 UTC m=+0.050724851 container create ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:22:09 np0005473739 systemd[1]: Started libpod-conmon-ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b.scope.
Oct  7 10:22:09 np0005473739 podman[349178]: 2025-10-07 14:22:09.612646399 +0000 UTC m=+0.023606785 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:22:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:22:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ab8101037099106c51b2a4c672256dd16e6c9c6403ed190611229cd0b57785/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ab8101037099106c51b2a4c672256dd16e6c9c6403ed190611229cd0b57785/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ab8101037099106c51b2a4c672256dd16e6c9c6403ed190611229cd0b57785/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6ab8101037099106c51b2a4c672256dd16e6c9c6403ed190611229cd0b57785/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:09 np0005473739 podman[349178]: 2025-10-07 14:22:09.740404584 +0000 UTC m=+0.151364950 container init ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:22:09 np0005473739 podman[349178]: 2025-10-07 14:22:09.746266179 +0000 UTC m=+0.157226515 container start ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 10:22:09 np0005473739 podman[349178]: 2025-10-07 14:22:09.7497096 +0000 UTC m=+0.160669956 container attach ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:22:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1751: 305 pgs: 305 active+clean; 372 MiB data, 829 MiB used, 59 GiB / 60 GiB avail; 9.1 MiB/s rd, 6.1 MiB/s wr, 276 op/s
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.236 2 INFO nova.virt.libvirt.driver [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance destroyed successfully.#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.237 2 DEBUG nova.objects.instance [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'resources' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.250 2 DEBUG nova.virt.libvirt.vif [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:20:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-32969053',display_name='tempest-ServerActionsTestOtherB-server-32969053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-32969053',id=78,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqlzBNyTrZ3ukqlO6TVbreu9ViSyA0zBmuJZOrzxut70yuUvNen/46QBxy28dcNWssGIhIyWpj4pHS1UxVS0zSbAfGPV97emvFP5GyJtAaeCUYpqJsMV8Zmvgjpt+/vMQ==',key_name='tempest-keypair-1118909469',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-r374dg9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member',shelved_at='2025-10-07T14:22:05.799435',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4a1936df-f472-466a-a569-3e6ba7a787d4'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=d932a7ab-839c-48b9-804f-90cc8634e93b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": 
"ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.251 2 DEBUG nova.network.os_vif_util [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.252 2 DEBUG nova.network.os_vif_util [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.252 2 DEBUG os_vif [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.255 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e1eba9e-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.260 2 INFO os_vif [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1')#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.328 2 DEBUG nova.compute.manager [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.329 2 DEBUG nova.compute.manager [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing instance network info cache due to event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.329 2 DEBUG oslo_concurrency.lockutils [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.330 2 DEBUG oslo_concurrency.lockutils [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.330 2 DEBUG nova.network.neutron [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:22:10 np0005473739 modest_swartz[349195]: {
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:    "0": [
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:        {
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "devices": [
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "/dev/loop3"
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            ],
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_name": "ceph_lv0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_size": "21470642176",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "name": "ceph_lv0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "tags": {
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.cluster_name": "ceph",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.crush_device_class": "",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.encrypted": "0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.osd_id": "0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.type": "block",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.vdo": "0"
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            },
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "type": "block",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "vg_name": "ceph_vg0"
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:        }
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:    ],
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:    "1": [
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:        {
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "devices": [
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "/dev/loop4"
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            ],
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_name": "ceph_lv1",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_size": "21470642176",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "name": "ceph_lv1",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "tags": {
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.cluster_name": "ceph",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.crush_device_class": "",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.encrypted": "0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.osd_id": "1",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.type": "block",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.vdo": "0"
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            },
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "type": "block",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "vg_name": "ceph_vg1"
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:        }
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:    ],
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:    "2": [
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:        {
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "devices": [
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "/dev/loop5"
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            ],
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_name": "ceph_lv2",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_size": "21470642176",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "name": "ceph_lv2",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "tags": {
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.cluster_name": "ceph",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.crush_device_class": "",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.encrypted": "0",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.osd_id": "2",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.type": "block",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:                "ceph.vdo": "0"
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            },
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "type": "block",
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:            "vg_name": "ceph_vg2"
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:        }
Oct  7 10:22:10 np0005473739 modest_swartz[349195]:    ]
Oct  7 10:22:10 np0005473739 modest_swartz[349195]: }
Oct  7 10:22:10 np0005473739 systemd[1]: libpod-ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b.scope: Deactivated successfully.
Oct  7 10:22:10 np0005473739 podman[349178]: 2025-10-07 14:22:10.564844193 +0000 UTC m=+0.975804539 container died ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.574 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Successfully updated port: 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:22:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b6ab8101037099106c51b2a4c672256dd16e6c9c6403ed190611229cd0b57785-merged.mount: Deactivated successfully.
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.615 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.616 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.616 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:22:10 np0005473739 podman[349178]: 2025-10-07 14:22:10.658263811 +0000 UTC m=+1.069224147 container remove ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_swartz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:22:10 np0005473739 systemd[1]: libpod-conmon-ccb6a53f7d62756e9d1dfe01bcab2f34e30087be859f3b5ecc493730dc818b6b.scope: Deactivated successfully.
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.736 2 INFO nova.virt.libvirt.driver [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deleting instance files /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b_del#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.737 2 INFO nova.virt.libvirt.driver [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deletion of /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b_del complete#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.833 2 INFO nova.scheduler.client.report [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Deleted allocations for instance d932a7ab-839c-48b9-804f-90cc8634e93b#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.889 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.890 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:10 np0005473739 nova_compute[259550]: 2025-10-07 14:22:10.993 2 DEBUG oslo_concurrency.processutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:11 np0005473739 nova_compute[259550]: 2025-10-07 14:22:11.029 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:22:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e237 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:22:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e237 do_prune osdmap full prune enabled
Oct  7 10:22:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e238 e238: 3 total, 3 up, 3 in
Oct  7 10:22:11 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e238: 3 total, 3 up, 3 in
Oct  7 10:22:11 np0005473739 podman[349392]: 2025-10-07 14:22:11.304716199 +0000 UTC m=+0.040110641 container create e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:22:11 np0005473739 systemd[1]: Started libpod-conmon-e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e.scope.
Oct  7 10:22:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:22:11 np0005473739 podman[349392]: 2025-10-07 14:22:11.288794198 +0000 UTC m=+0.024188660 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:22:11 np0005473739 podman[349392]: 2025-10-07 14:22:11.399104922 +0000 UTC m=+0.134499384 container init e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:22:11 np0005473739 podman[349392]: 2025-10-07 14:22:11.405686446 +0000 UTC m=+0.141080898 container start e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:22:11 np0005473739 xenodochial_lamport[349406]: 167 167
Oct  7 10:22:11 np0005473739 podman[349392]: 2025-10-07 14:22:11.410557564 +0000 UTC m=+0.145952026 container attach e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:22:11 np0005473739 systemd[1]: libpod-e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e.scope: Deactivated successfully.
Oct  7 10:22:11 np0005473739 conmon[349406]: conmon e8db30969d725b4407aa <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e.scope/container/memory.events
Oct  7 10:22:11 np0005473739 podman[349392]: 2025-10-07 14:22:11.412398753 +0000 UTC m=+0.147793195 container died e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 10:22:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-462ba15222914317d75ae07f3d51a76a62d2d063584a7637d8c71609333b1291-merged.mount: Deactivated successfully.
Oct  7 10:22:11 np0005473739 podman[349392]: 2025-10-07 14:22:11.447367427 +0000 UTC m=+0.182761859 container remove e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_lamport, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:22:11 np0005473739 systemd[1]: libpod-conmon-e8db30969d725b4407aaba82d9b47b6e0632e71de3257bd03b41e4dd0dddd82e.scope: Deactivated successfully.
Oct  7 10:22:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3690602146' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:11 np0005473739 nova_compute[259550]: 2025-10-07 14:22:11.507 2 DEBUG oslo_concurrency.processutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:11 np0005473739 nova_compute[259550]: 2025-10-07 14:22:11.520 2 DEBUG nova.compute.provider_tree [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:11 np0005473739 nova_compute[259550]: 2025-10-07 14:22:11.542 2 DEBUG nova.scheduler.client.report [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:11 np0005473739 nova_compute[259550]: 2025-10-07 14:22:11.564 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.675s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:11 np0005473739 nova_compute[259550]: 2025-10-07 14:22:11.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:11 np0005473739 nova_compute[259550]: 2025-10-07 14:22:11.620 2 DEBUG oslo_concurrency.lockutils [None req-23f855aa-81c7-4fa1-a4b0-7560dfc2227d b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:11 np0005473739 podman[349432]: 2025-10-07 14:22:11.646455006 +0000 UTC m=+0.049427987 container create 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:22:11 np0005473739 systemd[1]: Started libpod-conmon-436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59.scope.
Oct  7 10:22:11 np0005473739 podman[349432]: 2025-10-07 14:22:11.62541312 +0000 UTC m=+0.028386121 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:22:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:22:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b83e6ea8f714521e98c7db094e960749086fb60113503fc5e562457758632ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b83e6ea8f714521e98c7db094e960749086fb60113503fc5e562457758632ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b83e6ea8f714521e98c7db094e960749086fb60113503fc5e562457758632ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4b83e6ea8f714521e98c7db094e960749086fb60113503fc5e562457758632ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:11 np0005473739 podman[349432]: 2025-10-07 14:22:11.748280675 +0000 UTC m=+0.151253676 container init 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct  7 10:22:11 np0005473739 podman[349432]: 2025-10-07 14:22:11.756057901 +0000 UTC m=+0.159030872 container start 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 10:22:11 np0005473739 podman[349432]: 2025-10-07 14:22:11.760905109 +0000 UTC m=+0.163878100 container attach 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:22:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1753: 305 pgs: 305 active+clean; 373 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 2.1 MiB/s wr, 140 op/s
Oct  7 10:22:12 np0005473739 nova_compute[259550]: 2025-10-07 14:22:12.088 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846917.0868595, 73126140-a6e8-4630-a01d-3738d29c02b8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:12 np0005473739 nova_compute[259550]: 2025-10-07 14:22:12.089 2 INFO nova.compute.manager [-] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:22:12 np0005473739 nova_compute[259550]: 2025-10-07 14:22:12.110 2 DEBUG nova.compute.manager [None req-24a2d0b3-dd5e-4790-86de-e98263ecd9db - - - - - -] [instance: 73126140-a6e8-4630-a01d-3738d29c02b8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:12 np0005473739 elated_rubin[349449]: {
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "osd_id": 2,
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "type": "bluestore"
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:    },
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "osd_id": 1,
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "type": "bluestore"
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:    },
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "osd_id": 0,
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:        "type": "bluestore"
Oct  7 10:22:12 np0005473739 elated_rubin[349449]:    }
Oct  7 10:22:12 np0005473739 elated_rubin[349449]: }
Oct  7 10:22:12 np0005473739 systemd[1]: libpod-436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59.scope: Deactivated successfully.
Oct  7 10:22:12 np0005473739 systemd[1]: libpod-436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59.scope: Consumed 1.013s CPU time.
Oct  7 10:22:12 np0005473739 podman[349432]: 2025-10-07 14:22:12.811147084 +0000 UTC m=+1.214120075 container died 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:22:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4b83e6ea8f714521e98c7db094e960749086fb60113503fc5e562457758632ac-merged.mount: Deactivated successfully.
Oct  7 10:22:12 np0005473739 podman[349432]: 2025-10-07 14:22:12.891451075 +0000 UTC m=+1.294424036 container remove 436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_rubin, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:22:12 np0005473739 systemd[1]: libpod-conmon-436d752c1417394d20f94623f5c2af27c72669879c65c0bd6cce7cc040401e59.scope: Deactivated successfully.
Oct  7 10:22:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:22:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:22:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:22:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:22:12 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9cd0e662-f3c7-48d3-898d-1cf1607a8a03 does not exist
Oct  7 10:22:12 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 77cbe83d-c178-42eb-a7a7-c844f3661db2 does not exist
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.212 2 DEBUG nova.compute.manager [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-changed-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.213 2 DEBUG nova.compute.manager [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Refreshing instance network info cache due to event network-changed-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.213 2 DEBUG oslo_concurrency.lockutils [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:22:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:13.282 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=22, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=21) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:13.287 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.289 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "fdd84566-c63e-469a-9173-55b845d32171" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.291 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.311 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.315 2 DEBUG nova.network.neutron [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updated VIF entry in instance network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.315 2 DEBUG nova.network.neutron [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": null, "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.346 2 DEBUG oslo_concurrency.lockutils [req-3c0e76e1-441d-4178-9fa3-3123f69768e5 req-3152159e-cd89-4376-b458-703c8b92f46c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.395 2 DEBUG nova.network.neutron [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Updating instance_info_cache with network_info: [{"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.404 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.405 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.412 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.413 2 INFO nova.compute.claims [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.422 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.423 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Instance network_info: |[{"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.423 2 DEBUG oslo_concurrency.lockutils [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.423 2 DEBUG nova.network.neutron [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Refreshing network info cache for port 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.427 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Start _get_guest_xml network_info=[{"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.435 2 WARNING nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.440 2 DEBUG nova.virt.libvirt.host [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.441 2 DEBUG nova.virt.libvirt.host [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.445 2 DEBUG nova.virt.libvirt.host [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.446 2 DEBUG nova.virt.libvirt.host [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.446 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.446 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.447 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.447 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.447 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.448 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.448 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.448 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.449 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.449 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.449 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.449 2 DEBUG nova.virt.hardware [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.454 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.617 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:22:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.0 total, 600.0 interval#012Cumulative writes: 26K writes, 107K keys, 26K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.04 MB/s#012Cumulative WAL: 26K writes, 8981 syncs, 2.96 writes per sync, written: 0.10 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 10K writes, 41K keys, 10K commit groups, 1.0 writes per commit group, ingest: 41.53 MB, 0.07 MB/s#012Interval WAL: 10K writes, 3999 syncs, 2.57 writes per sync, written: 0.04 GB, 0.07 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:22:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3943841341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.938 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:13 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:22:13 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:22:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1754: 305 pgs: 305 active+clean; 359 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.7 MiB/s wr, 183 op/s
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.965 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:13 np0005473739 nova_compute[259550]: 2025-10-07 14:22:13.969 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2239424145' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.102 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.109 2 DEBUG nova.compute.provider_tree [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.200 2 DEBUG nova.scheduler.client.report [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.313 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.314 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:22:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2763622679' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.439 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.441 2 DEBUG nova.virt.libvirt.vif [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1150053982',display_name='tempest-ServersTestJSON-server-1150053982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1150053982',id=89,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-44wx00ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'
},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:08Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=e1499379-f6d6-4edd-8af0-2ccf7c6e6683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.441 2 DEBUG nova.network.os_vif_util [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.442 2 DEBUG nova.network.os_vif_util [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.443 2 DEBUG nova.objects.instance [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid e1499379-f6d6-4edd-8af0-2ccf7c6e6683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.589 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  <uuid>e1499379-f6d6-4edd-8af0-2ccf7c6e6683</uuid>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  <name>instance-00000059</name>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersTestJSON-server-1150053982</nova:name>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:22:13</nova:creationTime>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <nova:port uuid="0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <entry name="serial">e1499379-f6d6-4edd-8af0-2ccf7c6e6683</entry>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <entry name="uuid">e1499379-f6d6-4edd-8af0-2ccf7c6e6683</entry>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:98:63:8c"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <target dev="tap0a1ae764-a8"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/console.log" append="off"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:22:14 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:22:14 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:22:14 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:22:14 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.590 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Preparing to wait for external event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.590 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.591 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.591 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.591 2 DEBUG nova.virt.libvirt.vif [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1150053982',display_name='tempest-ServersTestJSON-server-1150053982',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1150053982',id=89,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-44wx00ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-proje
ct-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:08Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=e1499379-f6d6-4edd-8af0-2ccf7c6e6683,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.592 2 DEBUG nova.network.os_vif_util [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.592 2 DEBUG nova.network.os_vif_util [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.593 2 DEBUG os_vif [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.594 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0a1ae764-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.597 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0a1ae764-a8, col_values=(('external_ids', {'iface-id': '0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:63:8c', 'vm-uuid': 'e1499379-f6d6-4edd-8af0-2ccf7c6e6683'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:14 np0005473739 NetworkManager[44949]: <info>  [1759846934.6004] manager: (tap0a1ae764-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/383)
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.606 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.608 2 INFO os_vif [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8')#033[00m
Oct  7 10:22:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:14Z|00101|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:9a:2b 10.100.0.4
Oct  7 10:22:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:14Z|00102|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:9a:2b 10.100.0.4
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.717 2 INFO nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.781 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.782 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.782 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:98:63:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.783 2 INFO nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Using config drive#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.817 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:14 np0005473739 nova_compute[259550]: 2025-10-07 14:22:14.823 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.061 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.062 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.063 2 INFO nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Creating image(s)#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.087 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.122 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.149 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.154 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.207 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846920.0854871, d932a7ab-839c-48b9-804f-90cc8634e93b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.207 2 INFO nova.compute.manager [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.219 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.220 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.259 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.260 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.261 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.261 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.283 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.289 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 fdd84566-c63e-469a-9173-55b845d32171_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.341 2 DEBUG nova.compute.manager [None req-863a0d83-3370-41ec-8b59-b5b071dddd24 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.385 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.439 2 INFO nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Creating config drive at /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.448 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyazyw5px execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.613 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyazyw5px" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.642 2 DEBUG nova.storage.rbd_utils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.648 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.693 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 fdd84566-c63e-469a-9173-55b845d32171_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.404s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.771 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] resizing rbd image fdd84566-c63e-469a-9173-55b845d32171_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.796 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.797 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.803 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.804 2 INFO nova.compute.claims [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.825 2 DEBUG oslo_concurrency.processutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config e1499379-f6d6-4edd-8af0-2ccf7c6e6683_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.825 2 INFO nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Deleting local config drive /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683/disk.config because it was imported into RBD.#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.869 2 DEBUG nova.objects.instance [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'migration_context' on Instance uuid fdd84566-c63e-469a-9173-55b845d32171 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:15 np0005473739 kernel: tap0a1ae764-a8: entered promiscuous mode
Oct  7 10:22:15 np0005473739 NetworkManager[44949]: <info>  [1759846935.8830] manager: (tap0a1ae764-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/384)
Oct  7 10:22:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:15Z|00923|binding|INFO|Claiming lport 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 for this chassis.
Oct  7 10:22:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:15Z|00924|binding|INFO|0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1: Claiming fa:16:3e:98:63:8c 10.100.0.7
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:15Z|00925|binding|INFO|Setting lport 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 ovn-installed in OVS
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.913 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:15 np0005473739 systemd-udevd[349870]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:22:15 np0005473739 systemd-machined[214580]: New machine qemu-110-instance-00000059.
Oct  7 10:22:15 np0005473739 NetworkManager[44949]: <info>  [1759846935.9298] device (tap0a1ae764-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:22:15 np0005473739 systemd[1]: Started Virtual Machine qemu-110-instance-00000059.
Oct  7 10:22:15 np0005473739 NetworkManager[44949]: <info>  [1759846935.9315] device (tap0a1ae764-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:22:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1755: 305 pgs: 305 active+clean; 370 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 412 KiB/s rd, 4.6 MiB/s wr, 148 op/s
Oct  7 10:22:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:15.988 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:63:8c 10.100.0.7'], port_security=['fa:16:3e:98:63:8c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1499379-f6d6-4edd-8af0-2ccf7c6e6683', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:22:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:15Z|00926|binding|INFO|Setting lport 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 up in Southbound
Oct  7 10:22:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:15.989 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis#033[00m
Oct  7 10:22:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:15.991 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.996 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.996 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Ensure instance console log exists: /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.997 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.998 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:15 np0005473739 nova_compute[259550]: 2025-10-07 14:22:15.998 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.000 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.006 2 WARNING nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.012 2 DEBUG nova.virt.libvirt.host [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.013 2 DEBUG nova.virt.libvirt.host [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.016 2 DEBUG nova.virt.libvirt.host [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.017 2 DEBUG nova.virt.libvirt.host [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.017 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.017 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.018 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.018 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.018 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.019 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.019 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.019 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.019 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.017 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[28716ae1-7eef-4dd2-bab6-8d5d972916e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.020 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.020 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.020 2 DEBUG nova.virt.hardware [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.024 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.058 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bac890-6edd-44a9-8ac7-6db11fdadf16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.062 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[be82e7de-0023-4598-a3ad-28445f64fb60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.097 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[002d510a-cbd5-4e49-9a2e-e68323a75760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.121 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[384bbcd2-b662-405c-9a47-63cc9357415e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 349885, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.144 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd7952c-4481-4a3b-a64d-55e662f67dd0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349886, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 349886, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.145 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.147 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.150 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.151 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.151 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:16.152 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:22:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4279221662' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.480 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.508 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.513 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.547 2 DEBUG nova.network.neutron [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Updated VIF entry in instance network info cache for port 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.548 2 DEBUG nova.network.neutron [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Updating instance_info_cache with network_info: [{"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.677 2 DEBUG oslo_concurrency.lockutils [req-1c0e7a86-b06e-4a77-a4b4-6d08cacf81f0 req-d047eeb6-da05-43f0-afd4-279c5c1e356c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e1499379-f6d6-4edd-8af0-2ccf7c6e6683" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.683 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.730 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846921.718816, ad8af3d5-66d4-4db1-bd40-42d766f2fde7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.730 2 INFO nova.compute.manager [-] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.937 2 DEBUG nova.compute.manager [None req-9a3140d1-4a2b-4127-b880-1025c224b666 - - - - - -] [instance: ad8af3d5-66d4-4db1-bd40-42d766f2fde7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3080529391' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.972 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:16 np0005473739 nova_compute[259550]: 2025-10-07 14:22:16.974 2 DEBUG nova.objects.instance [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'pci_devices' on Instance uuid fdd84566-c63e-469a-9173-55b845d32171 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.003 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  <uuid>fdd84566-c63e-469a-9173-55b845d32171</uuid>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  <name>instance-0000005a</name>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerShowV247Test-server-1393680851</nova:name>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:22:16</nova:creationTime>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:        <nova:user uuid="680153f599e540e2ad16e791561c02e2">tempest-ServerShowV247Test-1983417179-project-member</nova:user>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:        <nova:project uuid="a580b052291544cca604ec1cdb73a416">tempest-ServerShowV247Test-1983417179</nova:project>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <entry name="serial">fdd84566-c63e-469a-9173-55b845d32171</entry>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <entry name="uuid">fdd84566-c63e-469a-9173-55b845d32171</entry>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/fdd84566-c63e-469a-9173-55b845d32171_disk">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/fdd84566-c63e-469a-9173-55b845d32171_disk.config">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/console.log" append="off"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:22:17 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:22:17 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:22:17 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:22:17 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:22:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4166670939' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.168 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.173 2 DEBUG nova.compute.provider_tree [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.226 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846937.2255259, e1499379-f6d6-4edd-8af0-2ccf7c6e6683 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.226 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] VM Started (Lifecycle Event)#033[00m
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.274 2 DEBUG nova.scheduler.client.report [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.279 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.285 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846937.2266014, e1499379-f6d6-4edd-8af0-2ccf7c6e6683 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.285 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.416 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.420 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.420 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.420 2 INFO nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Using config drive
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.439 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.445 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.566 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.567 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.685 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.689 2 DEBUG nova.compute.manager [req-be041000-22b3-4446-b2ff-60766679b310 req-f4c16992-050f-4267-8812-f9ca55987b4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.690 2 DEBUG oslo_concurrency.lockutils [req-be041000-22b3-4446-b2ff-60766679b310 req-f4c16992-050f-4267-8812-f9ca55987b4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.690 2 DEBUG oslo_concurrency.lockutils [req-be041000-22b3-4446-b2ff-60766679b310 req-f4c16992-050f-4267-8812-f9ca55987b4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.690 2 DEBUG oslo_concurrency.lockutils [req-be041000-22b3-4446-b2ff-60766679b310 req-f4c16992-050f-4267-8812-f9ca55987b4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.691 2 DEBUG nova.compute.manager [req-be041000-22b3-4446-b2ff-60766679b310 req-f4c16992-050f-4267-8812-f9ca55987b4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Processing event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.692 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.697 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846937.6964352, e1499379-f6d6-4edd-8af0-2ccf7c6e6683 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.697 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] VM Resumed (Lifecycle Event)
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.699 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.702 2 INFO nova.virt.libvirt.driver [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Instance spawned successfully.
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.702 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.726 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.729 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.740 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.741 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.741 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.741 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.742 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.742 2 DEBUG nova.virt.libvirt.driver [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.747 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.778 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.893 2 INFO nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.951 2 INFO nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Took 9.63 seconds to spawn the instance on the hypervisor.
Oct  7 10:22:17 np0005473739 nova_compute[259550]: 2025-10-07 14:22:17.951 2 DEBUG nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:22:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1756: 305 pgs: 305 active+clean; 370 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 413 KiB/s rd, 4.6 MiB/s wr, 148 op/s
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.044 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.167 2 INFO nova.compute.manager [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Took 12.04 seconds to build instance.
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.183 2 INFO nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Creating config drive at /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.188 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2vrtjreb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:22:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111]
    ** DB Stats **
    Uptime(secs): 3000.1 total, 600.0 interval
    Cumulative writes: 30K writes, 120K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.11 GB, 0.04 MB/s
    Cumulative WAL: 30K writes, 10K syncs, 2.87 writes per sync, written: 0.11 GB, 0.04 MB/s
    Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
    Interval writes: 12K writes, 45K keys, 12K commit groups, 1.0 writes per commit group, ingest: 47.24 MB, 0.08 MB/s
    Interval WAL: 12K writes, 4782 syncs, 2.52 writes per sync, written: 0.05 GB, 0.08 MB/s
    Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.238 2 DEBUG oslo_concurrency.lockutils [None req-eb74d268-19fb-4831-a66d-f9726a109ed3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.343s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.332 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2vrtjreb" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.356 2 DEBUG nova.storage.rbd_utils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image fdd84566-c63e-469a-9173-55b845d32171_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.360 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config fdd84566-c63e-469a-9173-55b845d32171_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.422 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.425 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.427 2 INFO nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating image(s)
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.456 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.484 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.510 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.514 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.591 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.592 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.593 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.594 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.619 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:18 np0005473739 nova_compute[259550]: 2025-10-07 14:22:18.624 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.110 2 DEBUG oslo_concurrency.processutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config fdd84566-c63e-469a-9173-55b845d32171_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.750s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.111 2 INFO nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Deleting local config drive /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171/disk.config because it was imported into RBD.
Oct  7 10:22:19 np0005473739 systemd-machined[214580]: New machine qemu-111-instance-0000005a.
Oct  7 10:22:19 np0005473739 systemd[1]: Started Virtual Machine qemu-111-instance-0000005a.
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.419 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.420 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.421 2 INFO nova.compute.manager [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Unshelving
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.541 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.542 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.550 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'pci_requests' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.592 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'numa_topology' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.852 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.917 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.917 2 INFO nova.compute.claims [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.948 2 DEBUG nova.compute.manager [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.948 2 DEBUG oslo_concurrency.lockutils [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.948 2 DEBUG oslo_concurrency.lockutils [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.948 2 DEBUG oslo_concurrency.lockutils [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.949 2 DEBUG nova.compute.manager [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] No waiting events found dispatching network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.949 2 WARNING nova.compute.manager [req-34364224-9a1a-4398-b10f-569f580ab712 req-09fafbf0-e0c5-4a6a-af1a-2c4aa9482c8a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received unexpected event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 for instance with vm_state active and task_state deleting.
Oct  7 10:22:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1757: 305 pgs: 305 active+clean; 407 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 6.5 MiB/s wr, 194 op/s
Oct  7 10:22:19 np0005473739 nova_compute[259550]: 2025-10-07 14:22:19.957 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] resizing rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.095 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.095 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.096 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.096 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.096 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.097 2 INFO nova.compute.manager [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Terminating instance
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.098 2 DEBUG nova.compute.manager [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.283 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:20 np0005473739 kernel: tap0a1ae764-a8 (unregistering): left promiscuous mode
Oct  7 10:22:20 np0005473739 NetworkManager[44949]: <info>  [1759846940.3256] device (tap0a1ae764-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:20Z|00927|binding|INFO|Releasing lport 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 from this chassis (sb_readonly=0)
Oct  7 10:22:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:20Z|00928|binding|INFO|Setting lport 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 down in Southbound
Oct  7 10:22:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:20Z|00929|binding|INFO|Removing iface tap0a1ae764-a8 ovn-installed in OVS
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.372 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846940.37246, fdd84566-c63e-469a-9173-55b845d32171 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.373 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:22:20 np0005473739 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Deactivated successfully.
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.376 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:22:20 np0005473739 systemd[1]: machine-qemu\x2d110\x2dinstance\x2d00000059.scope: Consumed 3.738s CPU time.
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.379 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:22:20 np0005473739 systemd-machined[214580]: Machine qemu-110-instance-00000059 terminated.
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.383 2 INFO nova.virt.libvirt.driver [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Instance spawned successfully.#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.383 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.480 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:63:8c 10.100.0.7'], port_security=['fa:16:3e:98:63:8c 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'e1499379-f6d6-4edd-8af0-2ccf7c6e6683', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.482 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.483 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.496 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.502 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.501 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67835ee6-ec07-4898-8d18-b9ba43645b84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.528 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.529 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.529 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.529 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.530 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.530 2 DEBUG nova.virt.libvirt.driver [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.540 2 INFO nova.virt.libvirt.driver [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Instance destroyed successfully.#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.540 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[be5482df-20cd-4ce9-89ca-5c30183380d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.540 2 DEBUG nova.objects.instance [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid e1499379-f6d6-4edd-8af0-2ccf7c6e6683 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.552 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3255692e-bbf5-4cdf-8c29-21249e655076]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.585 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fcd19856-c3ef-4e85-b932-9f63638f97bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.609 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[433f84eb-e607-462c-8978-9318011942c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 874, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 874, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350336, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.631 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6abf3529-d1f5-41ef-be75-6baff4e6d3cc]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350342, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350342, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.633 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.640 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.640 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.641 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:20.641 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.666 2 DEBUG nova.virt.libvirt.vif [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=2001:2001::3,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:22:05Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1150053982',display_name='tempest-ServersTestJSON-server-1150053982',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1150053982',id=89,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:17Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-44wx00ob',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',im
age_min_ram='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:18Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=e1499379-f6d6-4edd-8af0-2ccf7c6e6683,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.666 2 DEBUG nova.network.os_vif_util [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "address": "fa:16:3e:98:63:8c", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0a1ae764-a8", "ovs_interfaceid": "0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.667 2 DEBUG nova.network.os_vif_util [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.668 2 DEBUG os_vif [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:22:20 np0005473739 podman[350315]: 2025-10-07 14:22:20.6696638 +0000 UTC m=+0.094809096 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.672 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.673 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846940.3758037, fdd84566-c63e-469a-9173-55b845d32171 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.673 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] VM Started (Lifecycle Event)#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0a1ae764-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.681 2 INFO os_vif [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:63:8c,bridge_name='br-int',has_traffic_filtering=True,id=0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0a1ae764-a8')#033[00m
Oct  7 10:22:20 np0005473739 podman[350317]: 2025-10-07 14:22:20.683225538 +0000 UTC m=+0.108370854 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.707 2 DEBUG nova.objects.instance [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'migration_context' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.740 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.740 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Ensure instance console log exists: /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.741 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.741 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.742 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.743 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.748 2 WARNING nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.753 2 DEBUG nova.virt.libvirt.host [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.754 2 DEBUG nova.virt.libvirt.host [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.758 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.758 2 DEBUG nova.virt.libvirt.host [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.759 2 DEBUG nova.virt.libvirt.host [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.759 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.759 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.760 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.760 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.761 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.761 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.761 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.762 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.762 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.762 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.763 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.763 2 DEBUG nova.virt.hardware [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.766 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1480256532' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.804 2 INFO nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Took 5.74 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.804 2 DEBUG nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.805 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.522s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.810 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.817 2 DEBUG nova.compute.provider_tree [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.856 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.953 2 DEBUG nova.scheduler.client.report [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:20 np0005473739 nova_compute[259550]: 2025-10-07 14:22:20.986 2 INFO nova.compute.manager [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Took 7.61 seconds to build instance.#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.055 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.067 2 DEBUG oslo_concurrency.lockutils [None req-ea1582ae-af43-41b5-83d6-246594c76024 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.182 2 INFO nova.virt.libvirt.driver [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Deleting instance files /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683_del#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.183 2 INFO nova.virt.libvirt.driver [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Deletion of /var/lib/nova/instances/e1499379-f6d6-4edd-8af0-2ccf7c6e6683_del complete#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.210 2 INFO nova.network.neutron [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  7 10:22:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:22:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2002588659' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.266 2 INFO nova.compute.manager [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Took 1.17 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.268 2 DEBUG oslo.service.loopingcall [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.268 2 DEBUG nova.compute.manager [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.268 2 DEBUG nova.network.neutron [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.279 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.303 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.307 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981774959' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.757 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.760 2 DEBUG nova.objects.instance [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.781 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  <uuid>c8c2d410-01f0-4ef2-9ce3-232347c32e46</uuid>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  <name>instance-0000005b</name>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerShowV247Test-server-1207130222</nova:name>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:22:20</nova:creationTime>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:        <nova:user uuid="680153f599e540e2ad16e791561c02e2">tempest-ServerShowV247Test-1983417179-project-member</nova:user>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:        <nova:project uuid="a580b052291544cca604ec1cdb73a416">tempest-ServerShowV247Test-1983417179</nova:project>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <entry name="serial">c8c2d410-01f0-4ef2-9ce3-232347c32e46</entry>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <entry name="uuid">c8c2d410-01f0-4ef2-9ce3-232347c32e46</entry>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/console.log" append="off"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:22:21 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:22:21 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:22:21 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:22:21 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.839 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.839 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.840 2 INFO nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Using config drive#033[00m
Oct  7 10:22:21 np0005473739 nova_compute[259550]: 2025-10-07 14:22:21.865 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1758: 305 pgs: 305 active+clean; 440 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 6.4 MiB/s wr, 198 op/s
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.068 2 INFO nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating config drive at /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.073 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoa7vc73i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.213 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpoa7vc73i" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.251 2 DEBUG nova.storage.rbd_utils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.255 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:22.289 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '22'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.368 2 DEBUG nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-unplugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.369 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.370 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.370 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.370 2 DEBUG nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] No waiting events found dispatching network-vif-unplugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.371 2 DEBUG nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-unplugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.371 2 DEBUG nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.371 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.372 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.372 2 DEBUG oslo_concurrency.lockutils [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.372 2 DEBUG nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] No waiting events found dispatching network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.373 2 WARNING nova.compute.manager [req-e01fb43f-6812-4fbf-af5e-231e339d8658 req-de342a6b-fd1c-4700-9c2d-91c28b77c959 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received unexpected event network-vif-plugged-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:22:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:22:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3000.1 total, 600.0 interval#012Cumulative writes: 22K writes, 88K keys, 22K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.03 MB/s#012Cumulative WAL: 22K writes, 7417 syncs, 2.99 writes per sync, written: 0.09 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 7697 writes, 31K keys, 7697 commit groups, 1.0 writes per commit group, ingest: 36.55 MB, 0.06 MB/s#012Interval WAL: 7697 writes, 3027 syncs, 2.54 writes per sync, written: 0.04 GB, 0.06 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.472 2 DEBUG oslo_concurrency.processutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.473 2 INFO nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deleting local config drive /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config because it was imported into RBD.#033[00m
Oct  7 10:22:22 np0005473739 systemd-machined[214580]: New machine qemu-112-instance-0000005b.
Oct  7 10:22:22 np0005473739 systemd[1]: Started Virtual Machine qemu-112-instance-0000005b.
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.648 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.649 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.649 2 DEBUG nova.network.neutron [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:22:22
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.log', 'cephfs.cephfs.data', '.rgw.root', '.mgr', 'images', 'default.rgw.meta', 'vms', 'backups', 'volumes']
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.685 2 DEBUG nova.network.neutron [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.789 2 INFO nova.compute.manager [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Took 1.52 seconds to deallocate network for instance.#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.844 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.845 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:22:22 np0005473739 nova_compute[259550]: 2025-10-07 14:22:22.961 2 DEBUG oslo_concurrency.processutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.437 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846943.436468, c8c2d410-01f0-4ef2-9ce3-232347c32e46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.438 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.441 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.441 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.449 2 INFO nova.virt.libvirt.driver [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance spawned successfully.#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.449 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:22:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2908031063' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.460 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.464 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.478 2 DEBUG oslo_concurrency.processutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.517s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.480 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.481 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.481 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.481 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.481 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.482 2 DEBUG nova.virt.libvirt.driver [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.488 2 DEBUG nova.compute.provider_tree [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.491 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.491 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846943.4411333, c8c2d410-01f0-4ef2-9ce3-232347c32e46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.491 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] VM Started (Lifecycle Event)#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.521 2 DEBUG nova.scheduler.client.report [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.527 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.530 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.561 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.568 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.723s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.574 2 INFO nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Took 5.15 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.574 2 DEBUG nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.594 2 INFO nova.scheduler.client.report [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance e1499379-f6d6-4edd-8af0-2ccf7c6e6683#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.645 2 INFO nova.compute.manager [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Took 7.88 seconds to build instance.#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.671 2 DEBUG oslo_concurrency.lockutils [None req-14419037-6307-4a3d-807e-4e93bd789a5b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "e1499379-f6d6-4edd-8af0-2ccf7c6e6683" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.576s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.673 2 DEBUG oslo_concurrency.lockutils [None req-b2019b31-0859-48f5-8e16-7806e418b206 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.453s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:23 np0005473739 nova_compute[259550]: 2025-10-07 14:22:23.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1759: 305 pgs: 305 active+clean; 441 MiB data, 865 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 6.5 MiB/s wr, 294 op/s
Oct  7 10:22:24 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.302 2 DEBUG nova.network.neutron [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.323 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.324 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.325 2 INFO nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Creating image(s)#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.349 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.353 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.385 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.414 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.419 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "b50efc6a4419348da95302d442f7696307d8e85c" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.420 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "b50efc6a4419348da95302d442f7696307d8e85c" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.471 2 DEBUG nova.compute.manager [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Received event network-vif-deleted-0a1ae764-a89b-4cb1-874a-9b01e5a9c3e1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.471 2 DEBUG nova.compute.manager [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.471 2 DEBUG nova.compute.manager [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing instance network info cache due to event network-changed-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.472 2 DEBUG oslo_concurrency.lockutils [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.472 2 DEBUG oslo_concurrency.lockutils [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.472 2 DEBUG nova.network.neutron [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Refreshing network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.571 2 INFO nova.compute.manager [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Rebuilding instance#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.606 2 DEBUG nova.virt.libvirt.imagebackend [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/4a1936df-f472-466a-a569-3e6ba7a787d4/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/4a1936df-f472-466a-a569-3e6ba7a787d4/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.655 2 DEBUG nova.virt.libvirt.imagebackend [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/4a1936df-f472-466a-a569-3e6ba7a787d4/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.656 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] cloning images/4a1936df-f472-466a-a569-3e6ba7a787d4@snap to None/d932a7ab-839c-48b9-804f-90cc8634e93b_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.769 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "b50efc6a4419348da95302d442f7696307d8e85c" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.865 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'trusted_certs' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.918 2 DEBUG nova.compute.manager [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.930 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'migration_context' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.984 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'pci_requests' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:24 np0005473739 nova_compute[259550]: 2025-10-07 14:22:24.994 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] flattening vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.038 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'pci_devices' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.051 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'resources' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.080 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'migration_context' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.092 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.104 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.461 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Image rbd:vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.462 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.463 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Ensure instance console log exists: /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.463 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.464 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.464 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.467 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Start _get_guest_xml network_info=[{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:21:57Z,direct_url=<?>,disk_format='raw',id=4a1936df-f472-466a-a569-3e6ba7a787d4,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-32969053-shelved',owner='a266c4b5f8164bceb621e0e23116c515',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:22:05Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.472 2 WARNING nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.476 2 DEBUG nova.virt.libvirt.host [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.477 2 DEBUG nova.virt.libvirt.host [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.479 2 DEBUG nova.virt.libvirt.host [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.479 2 DEBUG nova.virt.libvirt.host [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.480 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.480 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:21:57Z,direct_url=<?>,disk_format='raw',id=4a1936df-f472-466a-a569-3e6ba7a787d4,min_disk=1,min_ram=0,name='tempest-ServerActionsTestOtherB-server-32969053-shelved',owner='a266c4b5f8164bceb621e0e23116c515',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:22:05Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.480 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.481 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.481 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.481 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.481 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.481 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.482 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.482 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.482 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.482 2 DEBUG nova.virt.hardware [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.483 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.509 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:25 np0005473739 nova_compute[259550]: 2025-10-07 14:22:25.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1760: 305 pgs: 305 active+clean; 434 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 5.4 MiB/s rd, 5.0 MiB/s wr, 320 op/s
Oct  7 10:22:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1192574666' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.069 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.560s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.096 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.101 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e238 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.377 2 DEBUG nova.network.neutron [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updated VIF entry in instance network info cache for port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.379 2 DEBUG nova.network.neutron [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [{"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.403 2 DEBUG oslo_concurrency.lockutils [req-1f2ef5bd-a523-465b-8fb7-416968aa207f req-095565e8-1117-4e0e-9904-4958375e8f46 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d932a7ab-839c-48b9-804f-90cc8634e93b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:22:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1677105785' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.588 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.593 2 DEBUG nova.virt.libvirt.vif [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:20:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-32969053',display_name='tempest-ServerActionsTestOtherB-server-32969053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-32969053',id=78,image_ref='4a1936df-f472-466a-a569-3e6ba7a787d4',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1118909469',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-r374dg9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_v
ideo_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member',shelved_at='2025-10-07T14:22:05.799435',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4a1936df-f472-466a-a569-3e6ba7a787d4'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=d932a7ab-839c-48b9-804f-90cc8634e93b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.594 2 DEBUG nova.network.os_vif_util [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.595 2 DEBUG nova.network.os_vif_util [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.597 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'pci_devices' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.617 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  <uuid>d932a7ab-839c-48b9-804f-90cc8634e93b</uuid>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  <name>instance-0000004e</name>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerActionsTestOtherB-server-32969053</nova:name>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:22:25</nova:creationTime>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <nova:user uuid="b99e8c19767d42aa96c7d646cacc3772">tempest-ServerActionsTestOtherB-1033607333-project-member</nova:user>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <nova:project uuid="a266c4b5f8164bceb621e0e23116c515">tempest-ServerActionsTestOtherB-1033607333</nova:project>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="4a1936df-f472-466a-a569-3e6ba7a787d4"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <nova:port uuid="6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <entry name="serial">d932a7ab-839c-48b9-804f-90cc8634e93b</entry>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <entry name="uuid">d932a7ab-839c-48b9-804f-90cc8634e93b</entry>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:71:12:7c"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <target dev="tap6e1eba9e-f1"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/console.log" append="off"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <input type="keyboard" bus="usb"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:22:26 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:22:26 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:22:26 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:22:26 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.623 2 DEBUG nova.compute.manager [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Preparing to wait for external event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.624 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.624 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.624 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.625 2 DEBUG nova.virt.libvirt.vif [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:20:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-32969053',display_name='tempest-ServerActionsTestOtherB-server-32969053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-32969053',id=78,image_ref='4a1936df-f472-466a-a569-3e6ba7a787d4',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-keypair-1118909469',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:20:15Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-r374dg9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member',shelved_at='2025-10-07T14:22:05.799435',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='4a1936df-f472-466a-a569-3e6ba7a787d4'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=d932a7ab-839c-48b9-804f-90cc8634e93b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.626 2 DEBUG nova.network.os_vif_util [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.626 2 DEBUG nova.network.os_vif_util [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.627 2 DEBUG os_vif [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e1eba9e-f1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e1eba9e-f1, col_values=(('external_ids', {'iface-id': '6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:12:7c', 'vm-uuid': 'd932a7ab-839c-48b9-804f-90cc8634e93b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:26 np0005473739 NetworkManager[44949]: <info>  [1759846946.6366] manager: (tap6e1eba9e-f1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/385)
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.645 2 INFO os_vif [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1')#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.711 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.712 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.712 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] No VIF found with MAC fa:16:3e:71:12:7c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.713 2 INFO nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Using config drive#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.740 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.776 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:26 np0005473739 nova_compute[259550]: 2025-10-07 14:22:26.823 2 DEBUG nova.objects.instance [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'keypairs' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.128 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.129 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.156 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.237 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.238 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.249 2 INFO nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Creating config drive at /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.262 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj9u2bb1p execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.325 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.326 2 INFO nova.compute.claims [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.427 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj9u2bb1p" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.456 2 DEBUG nova.storage.rbd_utils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] rbd image d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.460 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.582 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.640 2 DEBUG oslo_concurrency.processutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config d932a7ab-839c-48b9-804f-90cc8634e93b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.646 2 INFO nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deleting local config drive /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b/disk.config because it was imported into RBD.#033[00m
Oct  7 10:22:27 np0005473739 NetworkManager[44949]: <info>  [1759846947.7206] manager: (tap6e1eba9e-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/386)
Oct  7 10:22:27 np0005473739 kernel: tap6e1eba9e-f1: entered promiscuous mode
Oct  7 10:22:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:27Z|00930|binding|INFO|Claiming lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for this chassis.
Oct  7 10:22:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:27Z|00931|binding|INFO|6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8: Claiming fa:16:3e:71:12:7c 10.100.0.3
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.735 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:12:7c 10.100.0.3'], port_security=['fa:16:3e:71:12:7c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd932a7ab-839c-48b9-804f-90cc8634e93b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a266c4b5f8164bceb621e0e23116c515', 'neutron:revision_number': '7', 'neutron:security_group_ids': '2167ecc9-baa9-4f24-bd7c-9baa94d55bea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061d6c9e-c728-4632-b92d-e6b85ba42658, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.738 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 in datapath ebea7a9d-f576-4b9e-8316-859c29b06dc2 bound to our chassis#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.740 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebea7a9d-f576-4b9e-8316-859c29b06dc2#033[00m
Oct  7 10:22:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:27Z|00932|binding|INFO|Setting lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 ovn-installed in OVS
Oct  7 10:22:27 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:27Z|00933|binding|INFO|Setting lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 up in Southbound
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.764 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.768 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b6aca46d-5bf9-460e-99e4-8ff435af85aa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:27 np0005473739 systemd-machined[214580]: New machine qemu-113-instance-0000004e.
Oct  7 10:22:27 np0005473739 systemd[1]: Started Virtual Machine qemu-113-instance-0000004e.
Oct  7 10:22:27 np0005473739 systemd-udevd[350974]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:22:27 np0005473739 NetworkManager[44949]: <info>  [1759846947.8192] device (tap6e1eba9e-f1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:22:27 np0005473739 NetworkManager[44949]: <info>  [1759846947.8204] device (tap6e1eba9e-f1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.818 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7ccb59e4-32a9-4d75-bc03-298a3365dc10]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.829 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[774a67ad-cffd-4920-97cd-ef97b2b4a802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.870 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[90b9d3f6-4198-4b13-b262-14ff9183929a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.888 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad190ab-3b11-41d1-ade0-5ca7d952f30d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebea7a9d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:58:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 13, 'rx_bytes': 1000, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739733, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 350985, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.904 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[799cf8ef-ca05-42d6-9c82-da1dc022dd7e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739748, 'tstamp': 739748}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350986, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739751, 'tstamp': 739751}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 350986, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.905 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebea7a9d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.908 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.909 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebea7a9d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.910 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.910 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebea7a9d-f0, col_values=(('external_ids', {'iface-id': 'e0f4a07d-63f3-4c49-8cad-69cdf20a2608'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:27.911 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1761: 305 pgs: 305 active+clean; 434 MiB data, 862 MiB used, 59 GiB / 60 GiB avail; 5.3 MiB/s rd, 4.6 MiB/s wr, 298 op/s
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.980 2 DEBUG nova.compute.manager [req-29c41a9e-3fa5-473d-8712-0a21956de7bf req-6c405f37-18b9-4157-94af-42f84442a4ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.980 2 DEBUG oslo_concurrency.lockutils [req-29c41a9e-3fa5-473d-8712-0a21956de7bf req-6c405f37-18b9-4157-94af-42f84442a4ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.981 2 DEBUG oslo_concurrency.lockutils [req-29c41a9e-3fa5-473d-8712-0a21956de7bf req-6c405f37-18b9-4157-94af-42f84442a4ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.981 2 DEBUG oslo_concurrency.lockutils [req-29c41a9e-3fa5-473d-8712-0a21956de7bf req-6c405f37-18b9-4157-94af-42f84442a4ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:27 np0005473739 nova_compute[259550]: 2025-10-07 14:22:27.981 2 DEBUG nova.compute.manager [req-29c41a9e-3fa5-473d-8712-0a21956de7bf req-6c405f37-18b9-4157-94af-42f84442a4ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Processing event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:22:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3131344438' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.141 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.148 2 DEBUG nova.compute.provider_tree [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.209 2 DEBUG nova.scheduler.client.report [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.285 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.286 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.364 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.365 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.432 2 INFO nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.498 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.510 2 DEBUG nova.policy [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.667 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.668 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.669 2 INFO nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Creating image(s)
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.698 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.728 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.762 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.769 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.819 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846948.76521, d932a7ab-839c-48b9-804f-90cc8634e93b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.820 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Started (Lifecycle Event)
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.826 2 DEBUG nova.compute.manager [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.830 2 DEBUG nova.virt.libvirt.driver [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.836 2 INFO nova.virt.libvirt.driver [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance spawned successfully.
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.889 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.890 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.891 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.892 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.921 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.926 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a888e66f-9992-460a-ab15-79ac0261c4e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.982 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:22:28 np0005473739 nova_compute[259550]: 2025-10-07 14:22:28.987 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.092 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.092 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846948.7653127, d932a7ab-839c-48b9-804f-90cc8634e93b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.092 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Paused (Lifecycle Event)
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.116 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:22:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e238 do_prune osdmap full prune enabled
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.127 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846948.8298697, d932a7ab-839c-48b9-804f-90cc8634e93b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.127 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Resumed (Lifecycle Event)
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.196 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.208 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.232 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:22:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e239 e239: 3 total, 3 up, 3 in
Oct  7 10:22:29 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e239: 3 total, 3 up, 3 in
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.334 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a888e66f-9992-460a-ab15-79ac0261c4e2_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.408s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.414 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.549 2 DEBUG nova.objects.instance [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid a888e66f-9992-460a-ab15-79ac0261c4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.636 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.637 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Ensure instance console log exists: /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.638 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.639 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.639 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.721 2 DEBUG nova.compute.manager [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.843 2 DEBUG oslo_concurrency.lockutils [None req-2c094d3a-388a-4545-9062-c11e513e365f b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:29 np0005473739 nova_compute[259550]: 2025-10-07 14:22:29.913 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Successfully created port: c2c79fd1-616c-4d60-86ec-7f0535cd0015 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:22:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1763: 305 pgs: 305 active+clean; 498 MiB data, 894 MiB used, 59 GiB / 60 GiB avail; 11 MiB/s rd, 7.2 MiB/s wr, 405 op/s
Oct  7 10:22:30 np0005473739 nova_compute[259550]: 2025-10-07 14:22:30.521 2 DEBUG nova.compute.manager [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:22:30 np0005473739 nova_compute[259550]: 2025-10-07 14:22:30.523 2 DEBUG oslo_concurrency.lockutils [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:30 np0005473739 nova_compute[259550]: 2025-10-07 14:22:30.524 2 DEBUG oslo_concurrency.lockutils [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:30 np0005473739 nova_compute[259550]: 2025-10-07 14:22:30.524 2 DEBUG oslo_concurrency.lockutils [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:30 np0005473739 nova_compute[259550]: 2025-10-07 14:22:30.524 2 DEBUG nova.compute.manager [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] No waiting events found dispatching network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:22:30 np0005473739 nova_compute[259550]: 2025-10-07 14:22:30.525 2 WARNING nova.compute.manager [req-15afd940-6329-446b-bdaf-7680b0aac6a5 req-fc34f445-71bb-4420-9fd0-9429d6182bd7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received unexpected event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for instance with vm_state active and task_state None.
Oct  7 10:22:30 np0005473739 nova_compute[259550]: 2025-10-07 14:22:30.956 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Successfully updated port: c2c79fd1-616c-4d60-86ec-7f0535cd0015 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:22:30 np0005473739 nova_compute[259550]: 2025-10-07 14:22:30.977 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:22:30 np0005473739 nova_compute[259550]: 2025-10-07 14:22:30.978 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:22:30 np0005473739 nova_compute[259550]: 2025-10-07 14:22:30.978 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.198 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:22:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e239 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.259 2 DEBUG nova.compute.manager [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-changed-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.260 2 DEBUG nova.compute.manager [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Refreshing instance network info cache due to event network-changed-c2c79fd1-616c-4d60-86ec-7f0535cd0015. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.261 2 DEBUG oslo_concurrency.lockutils [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.656 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.657 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e239 do_prune osdmap full prune enabled
Oct  7 10:22:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e240 e240: 3 total, 3 up, 3 in
Oct  7 10:22:31 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e240: 3 total, 3 up, 3 in
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.735 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.956 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1765: 305 pgs: 305 active+clean; 489 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 6.4 MiB/s wr, 336 op/s
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.960 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.964 2 DEBUG nova.network.neutron [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Updating instance_info_cache with network_info: [{"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.982 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.983 2 INFO nova.compute.claims [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.997 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.997 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Instance network_info: |[{"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.998 2 DEBUG oslo_concurrency.lockutils [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:22:31 np0005473739 nova_compute[259550]: 2025-10-07 14:22:31.999 2 DEBUG nova.network.neutron [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Refreshing network info cache for port c2c79fd1-616c-4d60-86ec-7f0535cd0015 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.005 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Start _get_guest_xml network_info=[{"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.010 2 WARNING nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.015 2 DEBUG nova.virt.libvirt.host [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.016 2 DEBUG nova.virt.libvirt.host [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.020 2 DEBUG nova.virt.libvirt.host [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.021 2 DEBUG nova.virt.libvirt.host [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.022 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.023 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.023 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.024 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.024 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.025 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.025 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.025 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.026 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.026 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.026 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.027 2 DEBUG nova.virt.hardware [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.031 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.258 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.003039508652406069 of space, bias 1.0, pg target 0.9118525957218208 quantized to 32 (current 32)
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.001691131283779384 of space, bias 1.0, pg target 0.5073393851338152 quantized to 32 (current 32)
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:22:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:22:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1403994943' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.589 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.611 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.616 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:22:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655297123' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:22:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:22:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3655297123' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:22:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3866371752' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.837 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.579s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.849 2 DEBUG nova.compute.provider_tree [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.870 2 DEBUG nova.scheduler.client.report [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.896 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.898 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.952 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.952 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.974 2 INFO nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:22:32 np0005473739 nova_compute[259550]: 2025-10-07 14:22:32.997 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.040 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.041 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.041 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.049 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.050 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.051 2 INFO nova.compute.manager [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Terminating instance#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.052 2 DEBUG nova.compute.manager [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.095 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:22:33 np0005473739 kernel: tapf4f9776d-fb (unregistering): left promiscuous mode
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.102 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.102 2 INFO nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Creating image(s)#033[00m
Oct  7 10:22:33 np0005473739 NetworkManager[44949]: <info>  [1759846953.1104] device (tapf4f9776d-fb): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:22:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:33Z|00934|binding|INFO|Releasing lport f4f9776d-fbc4-4c99-a076-d6391636ed53 from this chassis (sb_readonly=0)
Oct  7 10:22:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:33Z|00935|binding|INFO|Setting lport f4f9776d-fbc4-4c99-a076-d6391636ed53 down in Southbound
Oct  7 10:22:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:33Z|00936|binding|INFO|Removing iface tapf4f9776d-fb ovn-installed in OVS
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.128 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5d:02:42 10.100.0.7'], port_security=['fa:16:3e:5d:02:42 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '152070e7-9c74-429d-b9d8-c09cbcba121e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a266c4b5f8164bceb621e0e23116c515', 'neutron:revision_number': '4', 'neutron:security_group_ids': '38933899-ccd3-4eef-a7a6-a99092e32db4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061d6c9e-c728-4632-b92d-e6b85ba42658, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=f4f9776d-fbc4-4c99-a076-d6391636ed53) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.130 161536 INFO neutron.agent.ovn.metadata.agent [-] Port f4f9776d-fbc4-4c99-a076-d6391636ed53 in datapath ebea7a9d-f576-4b9e-8316-859c29b06dc2 unbound from our chassis#033[00m
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.131 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ebea7a9d-f576-4b9e-8316-859c29b06dc2#033[00m
Oct  7 10:22:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2514063770' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.149 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2de252c9-afec-47d1-8d73-2d43892c8e25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:33 np0005473739 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000053.scope: Deactivated successfully.
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.163 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:33 np0005473739 systemd[1]: machine-qemu\x2d104\x2dinstance\x2d00000053.scope: Consumed 16.905s CPU time.
Oct  7 10:22:33 np0005473739 systemd-machined[214580]: Machine qemu-104-instance-00000053 terminated.
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.188 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[71c3bec2-340a-42dc-961f-7c7a455b6ba4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.191 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a0739a99-0154-4156-a172-49abb74cd8c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.202 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.220 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[39c70fe2-41f1-4a52-ba68-27aa6e827672]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.240 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.240 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7a7cfee8-faa9-4650-a0c3-6ca96447f78f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapebea7a9d-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bc:58:fe'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 15, 'rx_bytes': 1000, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 234], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739733, 'reachable_time': 16237, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351344, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.247 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.256 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f597ede-b785-4dad-b24b-04430bf33de0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739748, 'tstamp': 739748}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351348, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapebea7a9d-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 739751, 'tstamp': 739751}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351348, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.258 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebea7a9d-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.265 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapebea7a9d-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.265 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.266 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapebea7a9d-f0, col_values=(('external_ids', {'iface-id': 'e0f4a07d-63f3-4c49-8cad-69cdf20a2608'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:33.266 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.290 2 DEBUG nova.policy [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '505542e64c504f158e0c4a26a0a79480', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f9c0c1088f744e55acc586a4d180b728', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.295 2 DEBUG nova.network.neutron [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Updated VIF entry in instance network info cache for port c2c79fd1-616c-4d60-86ec-7f0535cd0015. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.296 2 DEBUG nova.network.neutron [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Updating instance_info_cache with network_info: [{"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.303 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.687s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.305 2 DEBUG nova.virt.libvirt.vif [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1845921390',display_name='tempest-ServersTestJSON-server-1845921390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1845921390',id=92,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDvisbnvR+SBDTBpkRZYoHfuGCO3MwR4MB48Zh5eLs/OIEMqZaVECbFrh+wCEnmCSXpmeyo6hZaqLKqY9Su9eL7k6N2nZ41Rvo5bx+Cu1oQT9zbA59f1WkYryJwF3IsUxA==',key_name='tempest-key-2011271221',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-13zl71li',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:28Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=a888e66f-9992-460a-ab15-79ac0261c4e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.306 2 DEBUG nova.network.os_vif_util [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.307 2 DEBUG nova.network.os_vif_util [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.313 2 DEBUG nova.objects.instance [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid a888e66f-9992-460a-ab15-79ac0261c4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.317 2 DEBUG oslo_concurrency.lockutils [req-06b2f8a7-ea9b-442d-899d-2184ef3d2f91 req-8bd56dfb-214a-4ba9-aff2-a155848bbcca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a888e66f-9992-460a-ab15-79ac0261c4e2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.323 2 INFO nova.virt.libvirt.driver [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Instance destroyed successfully.#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.324 2 DEBUG nova.objects.instance [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'resources' on Instance uuid 152070e7-9c74-429d-b9d8-c09cbcba121e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.331 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.331 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.332 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.332 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.354 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.365 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 059cdf38-dead-4636-8397-0037c0c4ced3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.405 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  <uuid>a888e66f-9992-460a-ab15-79ac0261c4e2</uuid>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  <name>instance-0000005c</name>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersTestJSON-server-1845921390</nova:name>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:22:32</nova:creationTime>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <nova:port uuid="c2c79fd1-616c-4d60-86ec-7f0535cd0015">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <entry name="serial">a888e66f-9992-460a-ab15-79ac0261c4e2</entry>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <entry name="uuid">a888e66f-9992-460a-ab15-79ac0261c4e2</entry>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a888e66f-9992-460a-ab15-79ac0261c4e2_disk">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:8c:6c:90"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <target dev="tapc2c79fd1-61"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/console.log" append="off"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:22:33 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:22:33 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:22:33 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:22:33 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.406 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Preparing to wait for external event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.406 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.407 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.407 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.408 2 DEBUG nova.virt.libvirt.vif [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1845921390',display_name='tempest-ServersTestJSON-server-1845921390',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1845921390',id=92,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDvisbnvR+SBDTBpkRZYoHfuGCO3MwR4MB48Zh5eLs/OIEMqZaVECbFrh+wCEnmCSXpmeyo6hZaqLKqY9Su9eL7k6N2nZ41Rvo5bx+Cu1oQT9zbA59f1WkYryJwF3IsUxA==',key_name='tempest-key-2011271221',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-13zl71li',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:28Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=a888e66f-9992-460a-ab15-79ac0261c4e2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.408 2 DEBUG nova.network.os_vif_util [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.409 2 DEBUG nova.network.os_vif_util [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.409 2 DEBUG os_vif [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.410 2 DEBUG nova.virt.libvirt.vif [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:21:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-1872291949',display_name='tempest-ServerActionsTestOtherB-server-1872291949',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-1872291949',id=83,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:21:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-3sm7005j',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:21:20Z,user_data=None,user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=152070e7-9c74-429d-b9d8-c09cbcba121e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.410 2 DEBUG nova.network.os_vif_util [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "address": "fa:16:3e:5d:02:42", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf4f9776d-fb", "ovs_interfaceid": "f4f9776d-fbc4-4c99-a076-d6391636ed53", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.411 2 DEBUG nova.network.os_vif_util [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5d:02:42,bridge_name='br-int',has_traffic_filtering=True,id=f4f9776d-fbc4-4c99-a076-d6391636ed53,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f9776d-fb') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.411 2 DEBUG os_vif [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:02:42,bridge_name='br-int',has_traffic_filtering=True,id=f4f9776d-fbc4-4c99-a076-d6391636ed53,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f9776d-fb') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.414 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.415 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.417 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.418 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf4f9776d-fb, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.423 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc2c79fd1-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc2c79fd1-61, col_values=(('external_ids', {'iface-id': 'c2c79fd1-616c-4d60-86ec-7f0535cd0015', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:6c:90', 'vm-uuid': 'a888e66f-9992-460a-ab15-79ac0261c4e2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:33 np0005473739 NetworkManager[44949]: <info>  [1759846953.4277] manager: (tapc2c79fd1-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/387)
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.435 2 INFO os_vif [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5d:02:42,bridge_name='br-int',has_traffic_filtering=True,id=f4f9776d-fbc4-4c99-a076-d6391636ed53,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf4f9776d-fb')#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.458 2 INFO os_vif [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61')#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.519 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.519 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.520 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:8c:6c:90, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.521 2 INFO nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Using config drive#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.554 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.742 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 059cdf38-dead-4636-8397-0037c0c4ced3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.804 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] resizing rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.880 2 INFO nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Creating config drive at /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.885 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jsrov4l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.923 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Successfully created port: a0352a7f-7725-4e54-abe0-3bcd200e802d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:22:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1766: 305 pgs: 305 active+clean; 470 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 9.9 MiB/s rd, 9.6 MiB/s wr, 410 op/s
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.970 2 DEBUG nova.objects.instance [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lazy-loading 'migration_context' on Instance uuid 059cdf38-dead-4636-8397-0037c0c4ced3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.987 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.987 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Ensure instance console log exists: /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.988 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.989 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:33 np0005473739 nova_compute[259550]: 2025-10-07 14:22:33.989 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.028 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8jsrov4l" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.048 2 DEBUG nova.storage.rbd_utils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.052 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.094 2 INFO nova.virt.libvirt.driver [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Deleting instance files /var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e_del#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.096 2 INFO nova.virt.libvirt.driver [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Deletion of /var/lib/nova/instances/152070e7-9c74-429d-b9d8-c09cbcba121e_del complete#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.147 2 INFO nova.compute.manager [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Took 1.09 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.148 2 DEBUG oslo.service.loopingcall [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.148 2 DEBUG nova.compute.manager [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.149 2 DEBUG nova.network.neutron [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.216 2 DEBUG oslo_concurrency.processutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config a888e66f-9992-460a-ab15-79ac0261c4e2_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.217 2 INFO nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Deleting local config drive /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2/disk.config because it was imported into RBD.#033[00m
Oct  7 10:22:34 np0005473739 kernel: tapc2c79fd1-61: entered promiscuous mode
Oct  7 10:22:34 np0005473739 systemd-udevd[351299]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:22:34 np0005473739 NetworkManager[44949]: <info>  [1759846954.2659] manager: (tapc2c79fd1-61): new Tun device (/org/freedesktop/NetworkManager/Devices/388)
Oct  7 10:22:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:34Z|00937|binding|INFO|Claiming lport c2c79fd1-616c-4d60-86ec-7f0535cd0015 for this chassis.
Oct  7 10:22:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:34Z|00938|binding|INFO|c2c79fd1-616c-4d60-86ec-7f0535cd0015: Claiming fa:16:3e:8c:6c:90 10.100.0.14
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:34 np0005473739 NetworkManager[44949]: <info>  [1759846954.2746] device (tapc2c79fd1-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:22:34 np0005473739 NetworkManager[44949]: <info>  [1759846954.2778] device (tapc2c79fd1-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.280 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:6c:90 10.100.0.14'], port_security=['fa:16:3e:8c:6c:90 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a888e66f-9992-460a-ab15-79ac0261c4e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c2c79fd1-616c-4d60-86ec-7f0535cd0015) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.282 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c2c79fd1-616c-4d60-86ec-7f0535cd0015 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis#033[00m
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.284 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:22:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:34Z|00939|binding|INFO|Setting lport c2c79fd1-616c-4d60-86ec-7f0535cd0015 ovn-installed in OVS
Oct  7 10:22:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:34Z|00940|binding|INFO|Setting lport c2c79fd1-616c-4d60-86ec-7f0535cd0015 up in Southbound
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:34 np0005473739 systemd-machined[214580]: New machine qemu-114-instance-0000005c.
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.313 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[18aef3d8-8889-45f8-9f75-0fdcaf7616fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:34 np0005473739 systemd[1]: Started Virtual Machine qemu-114-instance-0000005c.
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.353 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[77a5dd4d-5887-42bb-94ab-941d268d0e5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.360 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ebf8a2c2-ee4b-41bf-9c87-6c359a2d5fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.393 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e02a63de-65a6-4a95-8733-c96f7244c1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[73ef6cba-d307-4a1a-9c82-c3717d3897bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 10, 'rx_bytes': 916, 'tx_bytes': 612, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351575, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.428 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8bb4bc62-59c8-46d9-805b-9b5296d74b34]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351577, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351577, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.429 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.432 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.433 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.433 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:34.433 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:22:34 np0005473739 nova_compute[259550]: 2025-10-07 14:22:34.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.143 2 DEBUG nova.compute.manager [req-a8d989d3-2e9c-4951-be3c-4d0190a64b03 req-0739daea-c5e5-4dcc-bd3d-adfdda7d07f6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.144 2 DEBUG oslo_concurrency.lockutils [req-a8d989d3-2e9c-4951-be3c-4d0190a64b03 req-0739daea-c5e5-4dcc-bd3d-adfdda7d07f6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.144 2 DEBUG oslo_concurrency.lockutils [req-a8d989d3-2e9c-4951-be3c-4d0190a64b03 req-0739daea-c5e5-4dcc-bd3d-adfdda7d07f6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.144 2 DEBUG oslo_concurrency.lockutils [req-a8d989d3-2e9c-4951-be3c-4d0190a64b03 req-0739daea-c5e5-4dcc-bd3d-adfdda7d07f6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.145 2 DEBUG nova.compute.manager [req-a8d989d3-2e9c-4951-be3c-4d0190a64b03 req-0739daea-c5e5-4dcc-bd3d-adfdda7d07f6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Processing event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.217 2 DEBUG nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received event network-vif-unplugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.217 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.218 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.218 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.218 2 DEBUG nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] No waiting events found dispatching network-vif-unplugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.218 2 DEBUG nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received event network-vif-unplugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.218 2 DEBUG nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received event network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.219 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.219 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.219 2 DEBUG oslo_concurrency.lockutils [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.219 2 DEBUG nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] No waiting events found dispatching network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.219 2 WARNING nova.compute.manager [req-b1ac2bab-fd6b-4bc7-a241-f92294c92821 req-b5d73a15-5be6-41ff-973e-c965a9700c54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received unexpected event network-vif-plugged-f4f9776d-fbc4-4c99-a076-d6391636ed53 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.229 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846955.228769, a888e66f-9992-460a-ab15-79ac0261c4e2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.229 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] VM Started (Lifecycle Event)#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.231 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.234 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.237 2 INFO nova.virt.libvirt.driver [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Instance spawned successfully.#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.237 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.246 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Successfully updated port: a0352a7f-7725-4e54-abe0-3bcd200e802d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.248 2 DEBUG nova.network.neutron [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.330 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.506 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.508 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.508 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquired lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.509 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.512 2 INFO nova.compute.manager [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Took 1.36 seconds to deallocate network for instance.#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.521 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.522 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.523 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.523 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.523 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.524 2 DEBUG nova.virt.libvirt.driver [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.533 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.576 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.576 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846955.2309647, a888e66f-9992-460a-ab15-79ac0261c4e2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.576 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.618 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.618 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.619 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.623 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846955.2333872, a888e66f-9992-460a-ab15-79ac0261c4e2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.624 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.639 2 INFO nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Took 6.97 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.640 2 DEBUG nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.650 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.658 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.684 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.717 2 INFO nova.compute.manager [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Took 8.51 seconds to build instance.#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.736 2 DEBUG oslo_concurrency.lockutils [None req-359e9893-e92f-40f8-9d6c-b7b8d0f9f7c3 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.607s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.774 2 DEBUG oslo_concurrency.processutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.858 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846940.5337255, e1499379-f6d6-4edd-8af0-2ccf7c6e6683 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.859 2 INFO nova.compute.manager [-] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.880 2 DEBUG nova.compute.manager [None req-5ab4f7a0-8032-43f2-a1c9-d961680a4b2b - - - - - -] [instance: e1499379-f6d6-4edd-8af0-2ccf7c6e6683] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1767: 305 pgs: 305 active+clean; 452 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 10 MiB/s rd, 12 MiB/s wr, 499 op/s
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:22:35 np0005473739 nova_compute[259550]: 2025-10-07 14:22:35.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:22:36 np0005473739 nova_compute[259550]: 2025-10-07 14:22:36.087 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:22:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e240 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:22:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e240 do_prune osdmap full prune enabled
Oct  7 10:22:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 e241: 3 total, 3 up, 3 in
Oct  7 10:22:36 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e241: 3 total, 3 up, 3 in
Oct  7 10:22:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2322852226' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:36 np0005473739 nova_compute[259550]: 2025-10-07 14:22:36.309 2 DEBUG oslo_concurrency.processutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:36 np0005473739 nova_compute[259550]: 2025-10-07 14:22:36.314 2 DEBUG nova.compute.provider_tree [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:36 np0005473739 nova_compute[259550]: 2025-10-07 14:22:36.330 2 DEBUG nova.scheduler.client.report [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:36 np0005473739 nova_compute[259550]: 2025-10-07 14:22:36.357 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:36 np0005473739 nova_compute[259550]: 2025-10-07 14:22:36.380 2 INFO nova.scheduler.client.report [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Deleted allocations for instance 152070e7-9c74-429d-b9d8-c09cbcba121e#033[00m
Oct  7 10:22:36 np0005473739 nova_compute[259550]: 2025-10-07 14:22:36.467 2 DEBUG oslo_concurrency.lockutils [None req-ed42565c-5bd0-44f9-b839-9bcc7d112435 b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "152070e7-9c74-429d-b9d8-c09cbcba121e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:36 np0005473739 nova_compute[259550]: 2025-10-07 14:22:36.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:37 np0005473739 podman[351643]: 2025-10-07 14:22:37.083082731 +0000 UTC m=+0.072042425 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:22:37 np0005473739 podman[351642]: 2025-10-07 14:22:37.10768673 +0000 UTC m=+0.097374793 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.344 2 DEBUG nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Received event network-vif-deleted-f4f9776d-fbc4-4c99-a076-d6391636ed53 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.345 2 DEBUG nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.346 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.346 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.346 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.346 2 DEBUG nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] No waiting events found dispatching network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.347 2 WARNING nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received unexpected event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.347 2 DEBUG nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-changed-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.347 2 DEBUG nova.compute.manager [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Refreshing instance network info cache due to event network-changed-a0352a7f-7725-4e54-abe0-3bcd200e802d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.347 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.524 2 DEBUG nova.network.neutron [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Updating instance_info_cache with network_info: [{"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.544 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Releasing lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.545 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Instance network_info: |[{"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.547 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.549 2 DEBUG nova.network.neutron [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Refreshing network info cache for port a0352a7f-7725-4e54-abe0-3bcd200e802d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.553 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Start _get_guest_xml network_info=[{"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.563 2 WARNING nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.568 2 DEBUG nova.virt.libvirt.host [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.569 2 DEBUG nova.virt.libvirt.host [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.582 2 DEBUG nova.virt.libvirt.host [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.583 2 DEBUG nova.virt.libvirt.host [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.584 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.585 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.586 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.587 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.587 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.588 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.588 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.589 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.589 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.590 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.590 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.591 2 DEBUG nova.virt.hardware [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.596 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.639 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.641 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.641 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.642 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.643 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.646 2 INFO nova.compute.manager [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Terminating instance#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.648 2 DEBUG nova.compute.manager [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:22:37 np0005473739 kernel: tap6e1eba9e-f1 (unregistering): left promiscuous mode
Oct  7 10:22:37 np0005473739 NetworkManager[44949]: <info>  [1759846957.6990] device (tap6e1eba9e-f1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:22:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:37Z|00941|binding|INFO|Releasing lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 from this chassis (sb_readonly=0)
Oct  7 10:22:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:37Z|00942|binding|INFO|Setting lport 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 down in Southbound
Oct  7 10:22:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:37Z|00943|binding|INFO|Removing iface tap6e1eba9e-f1 ovn-installed in OVS
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.714 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:12:7c 10.100.0.3'], port_security=['fa:16:3e:71:12:7c 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': 'd932a7ab-839c-48b9-804f-90cc8634e93b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'a266c4b5f8164bceb621e0e23116c515', 'neutron:revision_number': '9', 'neutron:security_group_ids': '2167ecc9-baa9-4f24-bd7c-9baa94d55bea', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=061d6c9e-c728-4632-b92d-e6b85ba42658, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:22:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.715 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 in datapath ebea7a9d-f576-4b9e-8316-859c29b06dc2 unbound from our chassis#033[00m
Oct  7 10:22:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.716 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ebea7a9d-f576-4b9e-8316-859c29b06dc2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:22:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.717 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[97606e87-1708-4a38-9a63-fdf7734975bb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.717 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2 namespace which is not needed anymore#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:37 np0005473739 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000004e.scope: Deactivated successfully.
Oct  7 10:22:37 np0005473739 systemd[1]: machine-qemu\x2d113\x2dinstance\x2d0000004e.scope: Consumed 9.835s CPU time.
Oct  7 10:22:37 np0005473739 systemd-machined[214580]: Machine qemu-113-instance-0000004e terminated.
Oct  7 10:22:37 np0005473739 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [NOTICE]   (339916) : haproxy version is 2.8.14-c23fe91
Oct  7 10:22:37 np0005473739 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [NOTICE]   (339916) : path to executable is /usr/sbin/haproxy
Oct  7 10:22:37 np0005473739 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [WARNING]  (339916) : Exiting Master process...
Oct  7 10:22:37 np0005473739 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [WARNING]  (339916) : Exiting Master process...
Oct  7 10:22:37 np0005473739 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [ALERT]    (339916) : Current worker (339918) exited with code 143 (Terminated)
Oct  7 10:22:37 np0005473739 neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2[339909]: [WARNING]  (339916) : All workers exited. Exiting... (0)
Oct  7 10:22:37 np0005473739 systemd[1]: libpod-d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5.scope: Deactivated successfully.
Oct  7 10:22:37 np0005473739 podman[351719]: 2025-10-07 14:22:37.863819425 +0000 UTC m=+0.047792364 container died d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:22:37 np0005473739 NetworkManager[44949]: <info>  [1759846957.8672] manager: (tap6e1eba9e-f1): new Tun device (/org/freedesktop/NetworkManager/Devices/389)
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.893 2 INFO nova.virt.libvirt.driver [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Instance destroyed successfully.#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.893 2 DEBUG nova.objects.instance [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lazy-loading 'resources' on Instance uuid d932a7ab-839c-48b9-804f-90cc8634e93b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5-userdata-shm.mount: Deactivated successfully.
Oct  7 10:22:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6dcdee683237114ce949886d137839ba727ebc7082b9a6e07603460b0f5ad933-merged.mount: Deactivated successfully.
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.906 2 DEBUG nova.virt.libvirt.vif [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:20:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestOtherB-server-32969053',display_name='tempest-ServerActionsTestOtherB-server-32969053',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestotherb-server-32969053',id=78,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOqlzBNyTrZ3ukqlO6TVbreu9ViSyA0zBmuJZOrzxut70yuUvNen/46QBxy28dcNWssGIhIyWpj4pHS1UxVS0zSbAfGPV97emvFP5GyJtAaeCUYpqJsMV8Zmvgjpt+/vMQ==',key_name='tempest-keypair-1118909469',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:29Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='a266c4b5f8164bceb621e0e23116c515',ramdisk_id='',reservation_id='r-r374dg9m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestOtherB-1033607333',owner_user_name='tempest-ServerActionsTestOtherB-1033607333-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b99e8c19767d42aa96c7d646cacc3772',uuid=d932a7ab-839c-48b9-804f-90cc8634e93b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.907 2 DEBUG nova.network.os_vif_util [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converting VIF {"id": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "address": "fa:16:3e:71:12:7c", "network": {"id": "ebea7a9d-f576-4b9e-8316-859c29b06dc2", "bridge": "br-int", "label": "tempest-ServerActionsTestOtherB-408609024-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "a266c4b5f8164bceb621e0e23116c515", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e1eba9e-f1", "ovs_interfaceid": "6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.908 2 DEBUG nova.network.os_vif_util [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.909 2 DEBUG os_vif [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:37 np0005473739 podman[351719]: 2025-10-07 14:22:37.911885805 +0000 UTC m=+0.095858744 container cleanup d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.911 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e1eba9e-f1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.917 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:22:37 np0005473739 nova_compute[259550]: 2025-10-07 14:22:37.920 2 INFO os_vif [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:12:7c,bridge_name='br-int',has_traffic_filtering=True,id=6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8,network=Network(ebea7a9d-f576-4b9e-8316-859c29b06dc2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e1eba9e-f1')#033[00m
Oct  7 10:22:37 np0005473739 systemd[1]: libpod-conmon-d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5.scope: Deactivated successfully.
Oct  7 10:22:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1769: 305 pgs: 305 active+clean; 452 MiB data, 873 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 7.1 MiB/s wr, 362 op/s
Oct  7 10:22:37 np0005473739 podman[351755]: 2025-10-07 14:22:37.994373444 +0000 UTC m=+0.053591177 container remove d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:22:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:37.999 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfa5defb-596a-4fc2-934a-39d148ff0b92]: (4, ('Tue Oct  7 02:22:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2 (d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5)\nd6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5\nTue Oct  7 02:22:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2 (d6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5)\nd6eeb7070490f86537d6dfa9dc361c74bd3f37601c2ee1ba6fb16cc849e4c6b5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.001 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66e30561-f818-4fb2-8c53-e38a9d5c13b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.005 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapebea7a9d-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:38 np0005473739 kernel: tapebea7a9d-f0: left promiscuous mode
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.031 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[328d1fd0-ed66-4ba7-981d-a23d575741bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.059 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[012189ba-0461-4794-acfc-e38ff6a547dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.060 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5378c28a-3b93-4921-a71e-c52eebe52480]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.079 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ef222116-fff3-4cdd-a2ae-e3eeb511ea28]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 739726, 'reachable_time': 27240, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351781, 'error': None, 'target': 'ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.082 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ebea7a9d-f576-4b9e-8316-859c29b06dc2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:22:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:38.082 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[34383eb1-4033-44a0-a103-90be42f03f97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:38 np0005473739 systemd[1]: run-netns-ovnmeta\x2debea7a9d\x2df576\x2d4b9e\x2d8316\x2d859c29b06dc2.mount: Deactivated successfully.
Oct  7 10:22:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/652490974' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.110 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.129 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.132 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.391 2 INFO nova.virt.libvirt.driver [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deleting instance files /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b_del#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.392 2 INFO nova.virt.libvirt.driver [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deletion of /var/lib/nova/instances/d932a7ab-839c-48b9-804f-90cc8634e93b_del complete#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.450 2 INFO nova.compute.manager [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.451 2 DEBUG oslo.service.loopingcall [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.451 2 DEBUG nova.compute.manager [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.451 2 DEBUG nova.network.neutron [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:22:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193519337' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.599 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.601 2 DEBUG nova.virt.libvirt.vif [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-348220581',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-348220581',id=93,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9c0c1088f744e55acc586a4d180b728',ramdisk_id='',reservation_id='r-ezwd203p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-2004432481',owner_user_name='tempest-ServerTagsTestJSON-2004432481-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:33Z,user_data=None,user_id='505542e64c504f158e0c4a26a0a79480',uuid=059cdf38-dead-4636-8397-0037c0c4ced3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.602 2 DEBUG nova.network.os_vif_util [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converting VIF {"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.603 2 DEBUG nova.network.os_vif_util [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.605 2 DEBUG nova.objects.instance [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lazy-loading 'pci_devices' on Instance uuid 059cdf38-dead-4636-8397-0037c0c4ced3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.620 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  <uuid>059cdf38-dead-4636-8397-0037c0c4ced3</uuid>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  <name>instance-0000005d</name>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerTagsTestJSON-server-348220581</nova:name>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:22:37</nova:creationTime>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <nova:user uuid="505542e64c504f158e0c4a26a0a79480">tempest-ServerTagsTestJSON-2004432481-project-member</nova:user>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <nova:project uuid="f9c0c1088f744e55acc586a4d180b728">tempest-ServerTagsTestJSON-2004432481</nova:project>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <nova:port uuid="a0352a7f-7725-4e54-abe0-3bcd200e802d">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <entry name="serial">059cdf38-dead-4636-8397-0037c0c4ced3</entry>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <entry name="uuid">059cdf38-dead-4636-8397-0037c0c4ced3</entry>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/059cdf38-dead-4636-8397-0037c0c4ced3_disk">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/059cdf38-dead-4636-8397-0037c0c4ced3_disk.config">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:c6:b3:e4"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <target dev="tapa0352a7f-77"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/console.log" append="off"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:22:38 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:22:38 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:22:38 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:22:38 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.622 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Preparing to wait for external event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.622 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.623 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.623 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.624 2 DEBUG nova.virt.libvirt.vif [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-348220581',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-348220581',id=93,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f9c0c1088f744e55acc586a4d180b728',ramdisk_id='',reservation_id='r-ezwd203p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerTagsTestJSON-2004432481',owner_user_name='tempest-ServerTagsTestJSON-2004432481-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:33Z,user_data=None,user_id='505542e64c504f158e0c4a26a0a79480',uuid=059cdf38-dead-4636-8397-0037c0c4ced3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.625 2 DEBUG nova.network.os_vif_util [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converting VIF {"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.626 2 DEBUG nova.network.os_vif_util [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.626 2 DEBUG os_vif [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.628 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.629 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.632 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa0352a7f-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.633 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa0352a7f-77, col_values=(('external_ids', {'iface-id': 'a0352a7f-7725-4e54-abe0-3bcd200e802d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c6:b3:e4', 'vm-uuid': '059cdf38-dead-4636-8397-0037c0c4ced3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:38 np0005473739 NetworkManager[44949]: <info>  [1759846958.6354] manager: (tapa0352a7f-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/390)
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.640 2 INFO os_vif [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77')#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.691 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.691 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.691 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] No VIF found with MAC fa:16:3e:c6:b3:e4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.692 2 INFO nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Using config drive
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.715 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:38 np0005473739 nova_compute[259550]: 2025-10-07 14:22:38.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.003 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.004 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.005 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.296 2 DEBUG nova.compute.manager [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-unplugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.297 2 DEBUG oslo_concurrency.lockutils [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.298 2 DEBUG oslo_concurrency.lockutils [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.298 2 DEBUG oslo_concurrency.lockutils [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.298 2 DEBUG nova.compute.manager [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] No waiting events found dispatching network-vif-unplugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.298 2 DEBUG nova.compute.manager [req-2f2b7096-04b1-405c-9161-b678031706d4 req-8ff631e3-7f00-439d-b474-1fa0d19de503 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-unplugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.397 2 INFO nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Creating config drive at /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.402 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprrw7f2id execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3564616810' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.452 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.544 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprrw7f2id" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.567 2 DEBUG nova.storage.rbd_utils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] rbd image 059cdf38-dead-4636-8397-0037c0c4ced3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.572 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config 059cdf38-dead-4636-8397-0037c0c4ced3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.612 2 DEBUG nova.network.neutron [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Updated VIF entry in instance network info cache for port a0352a7f-7725-4e54-abe0-3bcd200e802d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.613 2 DEBUG nova.network.neutron [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Updating instance_info_cache with network_info: [{"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.621 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.622 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.632 2 DEBUG nova.network.neutron [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.634 2 DEBUG oslo_concurrency.lockutils [req-f8e2516a-fd6c-484d-8850-3d461c62f985 req-b665477d-3392-47d5-8eda-a5b35e39e562 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-059cdf38-dead-4636-8397-0037c0c4ced3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.644 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.644 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.651 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.652 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.654 2 INFO nova.compute.manager [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Took 1.20 seconds to deallocate network for instance.
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.664 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.665 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.665 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.665 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.665 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.667 2 INFO nova.compute.manager [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Terminating instance
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.668 2 DEBUG nova.compute.manager [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.671 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.671 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.675 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.675 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000005d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.701 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.701 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:39 np0005473739 kernel: tapc2c79fd1-61 (unregistering): left promiscuous mode
Oct  7 10:22:39 np0005473739 NetworkManager[44949]: <info>  [1759846959.7197] device (tapc2c79fd1-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.727 2 DEBUG nova.compute.manager [req-6bba1dd7-a7f0-4c44-95e8-42569aaea07c req-76a07e91-9c9a-46a5-854f-bffaa8f117fa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-deleted-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:22:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:39Z|00944|binding|INFO|Releasing lport c2c79fd1-616c-4d60-86ec-7f0535cd0015 from this chassis (sb_readonly=0)
Oct  7 10:22:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:39Z|00945|binding|INFO|Setting lport c2c79fd1-616c-4d60-86ec-7f0535cd0015 down in Southbound
Oct  7 10:22:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:39Z|00946|binding|INFO|Removing iface tapc2c79fd1-61 ovn-installed in OVS
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.737 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:6c:90 10.100.0.14'], port_security=['fa:16:3e:8c:6c:90 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a888e66f-9992-460a-ab15-79ac0261c4e2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c2c79fd1-616c-4d60-86ec-7f0535cd0015) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.739 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c2c79fd1-616c-4d60-86ec-7f0535cd0015 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.741 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.761 2 DEBUG oslo_concurrency.processutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config 059cdf38-dead-4636-8397-0037c0c4ced3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.190s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.762 2 INFO nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Deleting local config drive /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3/disk.config because it was imported into RBD.
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.764 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[568087b7-c094-4391-966e-9c731e07aa92]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:22:39 np0005473739 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005c.scope: Deactivated successfully.
Oct  7 10:22:39 np0005473739 systemd[1]: machine-qemu\x2d114\x2dinstance\x2d0000005c.scope: Consumed 5.144s CPU time.
Oct  7 10:22:39 np0005473739 systemd-machined[214580]: Machine qemu-114-instance-0000005c terminated.
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.803 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a67bf6-6856-4682-b6bd-5a04a5b1a664]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.807 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8bc160d2-87bd-4643-852a-557ab25a0857]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.820 2 DEBUG oslo_concurrency.processutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:39 np0005473739 kernel: tapa0352a7f-77: entered promiscuous mode
Oct  7 10:22:39 np0005473739 NetworkManager[44949]: <info>  [1759846959.8383] manager: (tapa0352a7f-77): new Tun device (/org/freedesktop/NetworkManager/Devices/391)
Oct  7 10:22:39 np0005473739 systemd-udevd[351680]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:22:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:39Z|00947|binding|INFO|Claiming lport a0352a7f-7725-4e54-abe0-3bcd200e802d for this chassis.
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.843 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d9f5ba6d-7ce4-48f6-96b8-b8606bf81898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:22:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:39Z|00948|binding|INFO|a0352a7f-7725-4e54-abe0-3bcd200e802d: Claiming fa:16:3e:c6:b3:e4 10.100.0.10
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.852 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:b3:e4 10.100.0.10'], port_security=['fa:16:3e:c6:b3:e4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '059cdf38-dead-4636-8397-0037c0c4ced3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9c0c1088f744e55acc586a4d180b728', 'neutron:revision_number': '2', 'neutron:security_group_ids': '64310848-1d82-4b02-993c-52bf19515ed4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d26f8f35-607a-45a2-b669-669744490ffb, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=a0352a7f-7725-4e54-abe0-3bcd200e802d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:22:39 np0005473739 NetworkManager[44949]: <info>  [1759846959.8596] device (tapa0352a7f-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:22:39 np0005473739 NetworkManager[44949]: <info>  [1759846959.8603] device (tapa0352a7f-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.869 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e82a43c0-87ce-4845-ae40-888b76b1631a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 12, 'rx_bytes': 916, 'tx_bytes': 696, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 351931, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:39Z|00949|binding|INFO|Setting lport a0352a7f-7725-4e54-abe0-3bcd200e802d ovn-installed in OVS
Oct  7 10:22:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:39Z|00950|binding|INFO|Setting lport a0352a7f-7725-4e54-abe0-3bcd200e802d up in Southbound
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.888 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[82f724a4-920e-4252-aea8-adb64c590517]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351936, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 351936, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:39 np0005473739 systemd-machined[214580]: New machine qemu-115-instance-0000005d.
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.892 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:39 np0005473739 systemd[1]: Started Virtual Machine qemu-115-instance-0000005d.
Oct  7 10:22:39 np0005473739 NetworkManager[44949]: <info>  [1759846959.9102] manager: (tapc2c79fd1-61): new Tun device (/org/freedesktop/NetworkManager/Devices/392)
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.920 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.920 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.921 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.921 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.923 161536 INFO neutron.agent.ovn.metadata.agent [-] Port a0352a7f-7725-4e54-abe0-3bcd200e802d in datapath c1cdabcd-9e01-4a29-bf25-66798546aca6 unbound from our chassis#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.924 2 INFO nova.virt.libvirt.driver [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Instance destroyed successfully.#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.924 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c1cdabcd-9e01-4a29-bf25-66798546aca6#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.924 2 DEBUG nova.objects.instance [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid a888e66f-9992-460a-ab15-79ac0261c4e2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.936 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d56a4cd8-3dd1-4691-a13d-744e391d1909]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.938 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc1cdabcd-91 in ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.941 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc1cdabcd-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3561cf-19ce-450b-ac6b-7867a95fc835]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.942 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f28f3156-501b-47a1-a6e4-3fc810f2a46b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.958 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6e58799d-0a6e-4eb6-bad5-ed424b10d8e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1770: 305 pgs: 305 active+clean; 371 MiB data, 868 MiB used, 59 GiB / 60 GiB avail; 4.9 MiB/s rd, 9.4 MiB/s wr, 481 op/s
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.983 2 DEBUG nova.virt.libvirt.vif [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:22:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1845921390',display_name='tempest-ServersTestJSON-server-1845921390',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1845921390',id=92,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDvisbnvR+SBDTBpkRZYoHfuGCO3MwR4MB48Zh5eLs/OIEMqZaVECbFrh+wCEnmCSXpmeyo6hZaqLKqY9Su9eL7k6N2nZ41Rvo5bx+Cu1oQT9zbA59f1WkYryJwF3IsUxA==',key_name='tempest-key-2011271221',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:35Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-13zl71li',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:35Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=a888e66f-9992-460a-ab15-79ac0261c4e2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.984 2 DEBUG nova.network.os_vif_util [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "address": "fa:16:3e:8c:6c:90", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc2c79fd1-61", "ovs_interfaceid": "c2c79fd1-616c-4d60-86ec-7f0535cd0015", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.985 2 DEBUG nova.network.os_vif_util [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.986 2 DEBUG os_vif [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:22:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:39.989 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4431661d-4ced-4c58-8560-60dc47a7ff0c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.992 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc2c79fd1-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:39 np0005473739 nova_compute[259550]: 2025-10-07 14:22:39.997 2 INFO os_vif [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:6c:90,bridge_name='br-int',has_traffic_filtering=True,id=c2c79fd1-616c-4d60-86ec-7f0535cd0015,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc2c79fd1-61')#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.022 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[23c1e284-ae9d-456a-9153-4fa76c76d334]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.028 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[15f88588-a27a-4e79-be41-fd536d507814]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:40 np0005473739 NetworkManager[44949]: <info>  [1759846960.0297] manager: (tapc1cdabcd-90): new Veth device (/org/freedesktop/NetworkManager/Devices/393)
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.065 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c7eaf2d8-91b8-4679-9242-01e96463d05a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.069 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a9d9d24d-9810-4ddf-b006-fc154f9e5e97]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:40 np0005473739 NetworkManager[44949]: <info>  [1759846960.1020] device (tapc1cdabcd-90): carrier: link connected
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.109 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[401a8d05-b48c-4221-ba64-417f944fdc62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.136 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b534447-ab04-48b2-b97d-55d562d6dc86]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1cdabcd-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:c6:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754366, 'reachable_time': 23769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352016, 'error': None, 'target': 'ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.159 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f4e0270f-da23-4fea-b371-8968fc855848]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feff:c658'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 754366, 'tstamp': 754366}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352017, 'error': None, 'target': 'ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.181 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc399664-7ded-45d6-8aa9-252c72141b54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc1cdabcd-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ff:c6:58'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 278], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754366, 'reachable_time': 23769, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 352018, 'error': None, 'target': 'ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.207 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.209 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3014MB free_disk=59.766944885253906GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.209 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.221 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[968b1a4e-8cdb-47e2-86d3-7b2b6481d626]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712830154' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.308 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[47d90804-f6c8-42c2-b71f-36e6d89d01a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.310 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1cdabcd-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.311 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.311 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc1cdabcd-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.311 2 DEBUG oslo_concurrency.processutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:40 np0005473739 NetworkManager[44949]: <info>  [1759846960.3145] manager: (tapc1cdabcd-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/394)
Oct  7 10:22:40 np0005473739 kernel: tapc1cdabcd-90: entered promiscuous mode
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.318 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc1cdabcd-90, col_values=(('external_ids', {'iface-id': '9159867a-1370-4ad3-9916-8e3a37e797ca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:40Z|00951|binding|INFO|Releasing lport 9159867a-1370-4ad3-9916-8e3a37e797ca from this chassis (sb_readonly=0)
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.322 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c1cdabcd-9e01-4a29-bf25-66798546aca6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c1cdabcd-9e01-4a29-bf25-66798546aca6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.323 2 DEBUG nova.compute.provider_tree [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.327 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[709ed8a6-7481-48be-864d-35569d755cfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.328 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-c1cdabcd-9e01-4a29-bf25-66798546aca6
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/c1cdabcd-9e01-4a29-bf25-66798546aca6.pid.haproxy
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID c1cdabcd-9e01-4a29-bf25-66798546aca6
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:22:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:40.329 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'env', 'PROCESS_TAG=haproxy-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c1cdabcd-9e01-4a29-bf25-66798546aca6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.488 2 INFO nova.virt.libvirt.driver [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Deleting instance files /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2_del#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.489 2 INFO nova.virt.libvirt.driver [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Deletion of /var/lib/nova/instances/a888e66f-9992-460a-ab15-79ac0261c4e2_del complete#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.572 2 DEBUG nova.scheduler.client.report [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.622 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.921s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.624 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.415s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.660 2 INFO nova.scheduler.client.report [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Deleted allocations for instance d932a7ab-839c-48b9-804f-90cc8634e93b#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.669 2 INFO nova.compute.manager [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.670 2 DEBUG oslo.service.loopingcall [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.671 2 DEBUG nova.compute.manager [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.671 2 DEBUG nova.network.neutron [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:22:40 np0005473739 podman[352095]: 2025-10-07 14:22:40.744101464 +0000 UTC m=+0.044789554 container create 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.764 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 2f0de516-cf33-49b6-b036-aee8c2f72943 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.764 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance fdd84566-c63e-469a-9173-55b845d32171 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.764 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance c8c2d410-01f0-4ef2-9ce3-232347c32e46 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a888e66f-9992-460a-ab15-79ac0261c4e2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 059cdf38-dead-4636-8397-0037c0c4ced3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 5 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1152MB phys_disk=59GB used_disk=5GB total_vcpus=8 used_vcpus=5 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:22:40 np0005473739 systemd[1]: Started libpod-conmon-87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c.scope.
Oct  7 10:22:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:22:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/98b51d9d24f5f6a74d05b637a8d047f3a70db0e177c0c965ac9cfc29d076662a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:22:40 np0005473739 podman[352095]: 2025-10-07 14:22:40.720660814 +0000 UTC m=+0.021348924 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.819 2 DEBUG oslo_concurrency.lockutils [None req-c5e2f42f-76f8-4e59-bf28-2a5882eef76a b99e8c19767d42aa96c7d646cacc3772 a266c4b5f8164bceb621e0e23116c515 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:40 np0005473739 podman[352095]: 2025-10-07 14:22:40.830120056 +0000 UTC m=+0.130808166 container init 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  7 10:22:40 np0005473739 podman[352095]: 2025-10-07 14:22:40.836508515 +0000 UTC m=+0.137196605 container start 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.857 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:40 np0005473739 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [NOTICE]   (352114) : New worker (352116) forked
Oct  7 10:22:40 np0005473739 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [NOTICE]   (352114) : Loading success.
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.967 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846960.967343, 059cdf38-dead-4636-8397-0037c0c4ced3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.968 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] VM Started (Lifecycle Event)#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.989 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.995 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846960.9674647, 059cdf38-dead-4636-8397-0037c0c4ced3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:40 np0005473739 nova_compute[259550]: 2025-10-07 14:22:40.995 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.013 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.016 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.032 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:22:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.266 2 DEBUG nova.network.neutron [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.283 2 INFO nova.compute.manager [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Took 0.61 seconds to deallocate network for instance.#033[00m
Oct  7 10:22:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3714819094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.334 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.345 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.349 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.366 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.389 2 DEBUG nova.compute.manager [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.389 2 DEBUG oslo_concurrency.lockutils [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.390 2 DEBUG oslo_concurrency.lockutils [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.390 2 DEBUG oslo_concurrency.lockutils [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d932a7ab-839c-48b9-804f-90cc8634e93b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.390 2 DEBUG nova.compute.manager [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] No waiting events found dispatching network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.390 2 WARNING nova.compute.manager [req-11adf930-fe8e-4e1b-954a-ec77c6fab3e0 req-53721dee-e9b2-483e-aeb2-414bf54edde3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Received unexpected event network-vif-plugged-6e1eba9e-f1c1-4ad7-9c7f-013ab376eea8 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.392 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.392 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.392 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.058s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.494 2 DEBUG oslo_concurrency.processutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.803 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-vif-unplugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.806 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.806 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.806 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.806 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] No waiting events found dispatching network-vif-unplugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.807 2 WARNING nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received unexpected event network-vif-unplugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.807 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.807 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.807 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.807 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.808 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] No waiting events found dispatching network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.808 2 WARNING nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received unexpected event network-vif-plugged-c2c79fd1-616c-4d60-86ec-7f0535cd0015 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.808 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.809 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.809 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.809 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.809 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Processing event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.810 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.810 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.810 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.810 2 DEBUG oslo_concurrency.lockutils [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.810 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] No waiting events found dispatching network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.811 2 WARNING nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received unexpected event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.811 2 DEBUG nova.compute.manager [req-f39c5bfc-1fb4-47e4-ba8f-956883d898bd req-5208823e-3536-4f9b-a318-542790451778 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Received event network-vif-deleted-c2c79fd1-616c-4d60-86ec-7f0535cd0015 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.812 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.840 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846961.821746, 059cdf38-dead-4636-8397-0037c0c4ced3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.840 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.843 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.846 2 INFO nova.virt.libvirt.driver [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Instance spawned successfully.#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.847 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.861 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.871 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.875 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.875 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.876 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.876 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.877 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.877 2 DEBUG nova.virt.libvirt.driver [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.900 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.951 2 INFO nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Took 8.86 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.952 2 DEBUG nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1771: 305 pgs: 305 active+clean; 368 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 9.0 MiB/s wr, 465 op/s
Oct  7 10:22:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2553049972' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.987 2 DEBUG oslo_concurrency.processutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:41 np0005473739 nova_compute[259550]: 2025-10-07 14:22:41.992 2 DEBUG nova.compute.provider_tree [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:42 np0005473739 nova_compute[259550]: 2025-10-07 14:22:42.008 2 DEBUG nova.scheduler.client.report [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:42 np0005473739 nova_compute[259550]: 2025-10-07 14:22:42.012 2 INFO nova.compute.manager [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Took 10.08 seconds to build instance.#033[00m
Oct  7 10:22:42 np0005473739 nova_compute[259550]: 2025-10-07 14:22:42.059 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:42 np0005473739 nova_compute[259550]: 2025-10-07 14:22:42.063 2 DEBUG oslo_concurrency.lockutils [None req-394e5066-0628-44bc-b370-8c056a3444a0 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.406s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:42 np0005473739 nova_compute[259550]: 2025-10-07 14:22:42.100 2 INFO nova.scheduler.client.report [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance a888e66f-9992-460a-ab15-79ac0261c4e2#033[00m
Oct  7 10:22:42 np0005473739 nova_compute[259550]: 2025-10-07 14:22:42.172 2 DEBUG oslo_concurrency.lockutils [None req-a510a035-4213-464b-910b-9a8abfb6ba3b 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "a888e66f-9992-460a-ab15-79ac0261c4e2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.508s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1772: 305 pgs: 305 active+clean; 326 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.3 MiB/s wr, 389 op/s
Oct  7 10:22:44 np0005473739 nova_compute[259550]: 2025-10-07 14:22:44.392 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:22:44 np0005473739 nova_compute[259550]: 2025-10-07 14:22:44.392 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:22:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:44Z|00952|binding|INFO|Releasing lport 9159867a-1370-4ad3-9916-8e3a37e797ca from this chassis (sb_readonly=0)
Oct  7 10:22:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:44Z|00953|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct  7 10:22:44 np0005473739 nova_compute[259550]: 2025-10-07 14:22:44.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:44Z|00954|binding|INFO|Releasing lport 9159867a-1370-4ad3-9916-8e3a37e797ca from this chassis (sb_readonly=0)
Oct  7 10:22:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:44Z|00955|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct  7 10:22:44 np0005473739 nova_compute[259550]: 2025-10-07 14:22:44.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:44 np0005473739 nova_compute[259550]: 2025-10-07 14:22:44.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:45 np0005473739 nova_compute[259550]: 2025-10-07 14:22:45.472 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:45 np0005473739 nova_compute[259550]: 2025-10-07 14:22:45.472 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:45 np0005473739 nova_compute[259550]: 2025-10-07 14:22:45.495 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:22:45 np0005473739 nova_compute[259550]: 2025-10-07 14:22:45.558 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:45 np0005473739 nova_compute[259550]: 2025-10-07 14:22:45.559 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:45 np0005473739 nova_compute[259550]: 2025-10-07 14:22:45.565 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:22:45 np0005473739 nova_compute[259550]: 2025-10-07 14:22:45.565 2 INFO nova.compute.claims [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:22:45 np0005473739 nova_compute[259550]: 2025-10-07 14:22:45.767 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1773: 305 pgs: 305 active+clean; 326 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 5.0 MiB/s rd, 3.7 MiB/s wr, 338 op/s
Oct  7 10:22:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2270741428' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.194 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.200 2 DEBUG nova.compute.provider_tree [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.230 2 DEBUG nova.scheduler.client.report [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.257 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.258 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.311 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.312 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.335 2 INFO nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.360 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.455 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.456 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.457 2 INFO nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Creating image(s)#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.476 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.503 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.525 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.531 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.573 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance in state 1 after 21 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.610 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.610 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.611 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.611 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.639 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.644 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.692 2 DEBUG nova.policy [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.991 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.992 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.993 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.993 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.993 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.995 2 INFO nova.compute.manager [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Terminating instance#033[00m
Oct  7 10:22:46 np0005473739 nova_compute[259550]: 2025-10-07 14:22:46.997 2 DEBUG nova.compute.manager [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:22:47 np0005473739 kernel: tapa0352a7f-77 (unregistering): left promiscuous mode
Oct  7 10:22:47 np0005473739 NetworkManager[44949]: <info>  [1759846967.0973] device (tapa0352a7f-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:22:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:47Z|00956|binding|INFO|Releasing lport a0352a7f-7725-4e54-abe0-3bcd200e802d from this chassis (sb_readonly=0)
Oct  7 10:22:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:47Z|00957|binding|INFO|Setting lport a0352a7f-7725-4e54-abe0-3bcd200e802d down in Southbound
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:47Z|00958|binding|INFO|Removing iface tapa0352a7f-77 ovn-installed in OVS
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:47.116 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:b3:e4 10.100.0.10'], port_security=['fa:16:3e:c6:b3:e4 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '059cdf38-dead-4636-8397-0037c0c4ced3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f9c0c1088f744e55acc586a4d180b728', 'neutron:revision_number': '4', 'neutron:security_group_ids': '64310848-1d82-4b02-993c-52bf19515ed4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d26f8f35-607a-45a2-b669-669744490ffb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=a0352a7f-7725-4e54-abe0-3bcd200e802d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:22:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:47.118 161536 INFO neutron.agent.ovn.metadata.agent [-] Port a0352a7f-7725-4e54-abe0-3bcd200e802d in datapath c1cdabcd-9e01-4a29-bf25-66798546aca6 unbound from our chassis#033[00m
Oct  7 10:22:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:47.119 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c1cdabcd-9e01-4a29-bf25-66798546aca6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:22:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:47.120 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6d3d530b-581f-4334-a231-7b98942b65d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:47.120 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6 namespace which is not needed anymore#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:47 np0005473739 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005d.scope: Deactivated successfully.
Oct  7 10:22:47 np0005473739 systemd[1]: machine-qemu\x2d115\x2dinstance\x2d0000005d.scope: Consumed 6.204s CPU time.
Oct  7 10:22:47 np0005473739 systemd-machined[214580]: Machine qemu-115-instance-0000005d terminated.
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.239 2 INFO nova.virt.libvirt.driver [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Instance destroyed successfully.#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.240 2 DEBUG nova.objects.instance [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lazy-loading 'resources' on Instance uuid 059cdf38-dead-4636-8397-0037c0c4ced3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:47 np0005473739 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [NOTICE]   (352114) : haproxy version is 2.8.14-c23fe91
Oct  7 10:22:47 np0005473739 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [NOTICE]   (352114) : path to executable is /usr/sbin/haproxy
Oct  7 10:22:47 np0005473739 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [WARNING]  (352114) : Exiting Master process...
Oct  7 10:22:47 np0005473739 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [WARNING]  (352114) : Exiting Master process...
Oct  7 10:22:47 np0005473739 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [ALERT]    (352114) : Current worker (352116) exited with code 143 (Terminated)
Oct  7 10:22:47 np0005473739 neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6[352110]: [WARNING]  (352114) : All workers exited. Exiting... (0)
Oct  7 10:22:47 np0005473739 systemd[1]: libpod-87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c.scope: Deactivated successfully.
Oct  7 10:22:47 np0005473739 podman[352311]: 2025-10-07 14:22:47.296883176 +0000 UTC m=+0.064320471 container died 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.369 2 DEBUG nova.virt.libvirt.vif [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:22:30Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='tempest-ServerTagsTestJSON-server-348220581',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-servertagstestjson-server-348220581',id=93,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f9c0c1088f744e55acc586a4d180b728',ramdisk_id='',reservation_id='r-ezwd203p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest
-ServerTagsTestJSON-2004432481',owner_user_name='tempest-ServerTagsTestJSON-2004432481-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:41Z,user_data=None,user_id='505542e64c504f158e0c4a26a0a79480',uuid=059cdf38-dead-4636-8397-0037c0c4ced3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.370 2 DEBUG nova.network.os_vif_util [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converting VIF {"id": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "address": "fa:16:3e:c6:b3:e4", "network": {"id": "c1cdabcd-9e01-4a29-bf25-66798546aca6", "bridge": "br-int", "label": "tempest-ServerTagsTestJSON-2067174547-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f9c0c1088f744e55acc586a4d180b728", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa0352a7f-77", "ovs_interfaceid": "a0352a7f-7725-4e54-abe0-3bcd200e802d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.371 2 DEBUG nova.network.os_vif_util [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.371 2 DEBUG os_vif [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.374 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa0352a7f-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.379 2 INFO os_vif [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c6:b3:e4,bridge_name='br-int',has_traffic_filtering=True,id=a0352a7f-7725-4e54-abe0-3bcd200e802d,network=Network(c1cdabcd-9e01-4a29-bf25-66798546aca6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa0352a7f-77')#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.435 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Successfully created port: 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.482 2 DEBUG nova.compute.manager [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-unplugged-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.483 2 DEBUG oslo_concurrency.lockutils [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.483 2 DEBUG oslo_concurrency.lockutils [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.483 2 DEBUG oslo_concurrency.lockutils [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.484 2 DEBUG nova.compute.manager [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] No waiting events found dispatching network-vif-unplugged-a0352a7f-7725-4e54-abe0-3bcd200e802d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.484 2 DEBUG nova.compute.manager [req-3b687e5b-c209-4fa3-8f43-4ded788d41c3 req-0582e57b-7674-4c3b-9f91-6a4b7a5c55b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-unplugged-a0352a7f-7725-4e54-abe0-3bcd200e802d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:22:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c-userdata-shm.mount: Deactivated successfully.
Oct  7 10:22:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-98b51d9d24f5f6a74d05b637a8d047f3a70db0e177c0c965ac9cfc29d076662a-merged.mount: Deactivated successfully.
Oct  7 10:22:47 np0005473739 podman[352311]: 2025-10-07 14:22:47.859535639 +0000 UTC m=+0.626972924 container cleanup 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:22:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1774: 305 pgs: 305 active+clean; 326 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.3 MiB/s rd, 3.2 MiB/s wr, 289 op/s
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:22:47 np0005473739 nova_compute[259550]: 2025-10-07 14:22:47.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.025 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:22:48 np0005473739 podman[352364]: 2025-10-07 14:22:48.20556973 +0000 UTC m=+0.314415557 container remove 87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  7 10:22:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.214 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8dfd7629-9967-49c5-96f0-ef726beb8547]: (4, ('Tue Oct  7 02:22:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6 (87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c)\n87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c\nTue Oct  7 02:22:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6 (87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c)\n87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.215 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.571s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.215 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d173e469-002c-4894-831a-e010b992a74e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.216 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc1cdabcd-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:48 np0005473739 kernel: tapc1cdabcd-90: left promiscuous mode
Oct  7 10:22:48 np0005473739 systemd[1]: libpod-conmon-87a030d1cfa4e1ce729c2ded659f8650ce6de318d49f4f5547eae75189cc8c7c.scope: Deactivated successfully.
Oct  7 10:22:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.240 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[57f0102d-de10-41cd-bc43-8d8d95575b2c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.280 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[80ed9070-1780-404c-aa7b-b2e3b6b90a83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.281 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c9d3994-5aa0-4409-83ea-03f434ecaa47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.301 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5c2f96a1-a3ce-4d33-9558-6085f3e86d78]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 754358, 'reachable_time': 27467, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352386, 'error': None, 'target': 'ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:48 np0005473739 systemd[1]: run-netns-ovnmeta\x2dc1cdabcd\x2d9e01\x2d4a29\x2dbf25\x2d66798546aca6.mount: Deactivated successfully.
Oct  7 10:22:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.306 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c1cdabcd-9e01-4a29-bf25-66798546aca6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:22:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:48.307 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e570ea9c-87a3-478f-b5ac-53ffd3471e4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.314 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Successfully updated port: 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.318 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846953.2941175, 152070e7-9c74-429d-b9d8-c09cbcba121e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.318 2 INFO nova.compute.manager [-] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.350 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.350 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.351 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.352 2 DEBUG nova.compute.manager [None req-5e3a8669-68b4-46b1-b800-6853889bda6c - - - - - -] [instance: 152070e7-9c74-429d-b9d8-c09cbcba121e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.359 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.657 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:22:48 np0005473739 nova_compute[259550]: 2025-10-07 14:22:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.179 2 DEBUG nova.objects.instance [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid c53d76d2-4525-4798-bcf4-ad2e6b18071a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.198 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.198 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Ensure instance console log exists: /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.199 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.199 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.199 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:49 np0005473739 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct  7 10:22:49 np0005473739 systemd[1]: machine-qemu\x2d112\x2dinstance\x2d0000005b.scope: Consumed 14.905s CPU time.
Oct  7 10:22:49 np0005473739 systemd-machined[214580]: Machine qemu-112-instance-0000005b terminated.
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.599 2 DEBUG nova.network.neutron [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Updating instance_info_cache with network_info: [{"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.621 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.622 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Instance network_info: |[{"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.624 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Start _get_guest_xml network_info=[{"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.630 2 WARNING nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.635 2 DEBUG nova.virt.libvirt.host [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.635 2 DEBUG nova.virt.libvirt.host [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.639 2 DEBUG nova.virt.libvirt.host [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.639 2 DEBUG nova.virt.libvirt.host [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.640 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.641 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.641 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.641 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.642 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.643 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.643 2 DEBUG nova.virt.hardware [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.645 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.685 2 DEBUG nova.compute.manager [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.685 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.686 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.686 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.686 2 DEBUG nova.compute.manager [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] No waiting events found dispatching network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.687 2 WARNING nova.compute.manager [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received unexpected event network-vif-plugged-a0352a7f-7725-4e54-abe0-3bcd200e802d for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.687 2 DEBUG nova.compute.manager [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-changed-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.687 2 DEBUG nova.compute.manager [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Refreshing instance network info cache due to event network-changed-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.687 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.687 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.687 2 DEBUG nova.network.neutron [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Refreshing network info cache for port 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.694 2 INFO nova.virt.libvirt.driver [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Deleting instance files /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3_del#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.695 2 INFO nova.virt.libvirt.driver [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Deletion of /var/lib/nova/instances/059cdf38-dead-4636-8397-0037c0c4ced3_del complete#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.699 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance shutdown successfully after 24 seconds.#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.705 2 INFO nova.virt.libvirt.driver [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance destroyed successfully.#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.710 2 INFO nova.virt.libvirt.driver [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance destroyed successfully.#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.769 2 INFO nova.compute.manager [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Took 2.77 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.770 2 DEBUG oslo.service.loopingcall [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.770 2 DEBUG nova.compute.manager [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:22:49 np0005473739 nova_compute[259550]: 2025-10-07 14:22:49.770 2 DEBUG nova.network.neutron [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:22:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1775: 305 pgs: 305 active+clean; 351 MiB data, 823 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 4.0 MiB/s wr, 303 op/s
Oct  7 10:22:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3966010511' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.135 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deleting instance files /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46_del#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.136 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deletion of /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46_del complete#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.139 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.158 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.161 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.284 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.285 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating image(s)#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.310 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.333 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.353 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.357 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.426 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.427 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.428 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.428 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.447 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.450 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/561040190' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.645 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.646 2 DEBUG nova.virt.libvirt.vif [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=94,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-3nuzvc5m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:46Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=c53d76d2-4525-4798-bcf4-ad2e6b18071a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.647 2 DEBUG nova.network.os_vif_util [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.647 2 DEBUG nova.network.os_vif_util [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.648 2 DEBUG nova.objects.instance [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid c53d76d2-4525-4798-bcf4-ad2e6b18071a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.680 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  <uuid>c53d76d2-4525-4798-bcf4-ad2e6b18071a</uuid>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  <name>instance-0000005e</name>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersTestJSON-server-316574553</nova:name>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:22:49</nova:creationTime>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <nova:port uuid="132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <entry name="serial">c53d76d2-4525-4798-bcf4-ad2e6b18071a</entry>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <entry name="uuid">c53d76d2-4525-4798-bcf4-ad2e6b18071a</entry>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:06:7c:47"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <target dev="tap132f4c57-4e"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/console.log" append="off"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:22:50 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:22:50 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:22:50 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:22:50 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.682 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Preparing to wait for external event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.682 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.683 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.683 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.683 2 DEBUG nova.virt.libvirt.vif [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=94,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-3nuzvc5m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:46Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=c53d76d2-4525-4798-bcf4-ad2e6b18071a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.684 2 DEBUG nova.network.os_vif_util [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.685 2 DEBUG nova.network.os_vif_util [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.685 2 DEBUG os_vif [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.686 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.691 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap132f4c57-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.692 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap132f4c57-4e, col_values=(('external_ids', {'iface-id': '132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:06:7c:47', 'vm-uuid': 'c53d76d2-4525-4798-bcf4-ad2e6b18071a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:50 np0005473739 NetworkManager[44949]: <info>  [1759846970.6956] manager: (tap132f4c57-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/395)
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.700 2 INFO os_vif [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e')#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.758 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:50 np0005473739 podman[352633]: 2025-10-07 14:22:50.817871938 +0000 UTC m=+0.082883250 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:22:50 np0005473739 podman[352634]: 2025-10-07 14:22:50.826518547 +0000 UTC m=+0.089658619 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.838 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] resizing rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.870 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.870 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.871 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:06:7c:47, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.871 2 INFO nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Using config drive#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.891 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.899 2 DEBUG nova.network.neutron [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.950 2 INFO nova.compute.manager [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Took 1.18 seconds to deallocate network for instance.#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.960 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.960 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Ensure instance console log exists: /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.961 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.961 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.961 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.962 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.966 2 WARNING nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.971 2 DEBUG nova.virt.libvirt.host [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.971 2 DEBUG nova.virt.libvirt.host [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.976 2 DEBUG nova.virt.libvirt.host [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.978 2 DEBUG nova.virt.libvirt.host [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.978 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.978 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.979 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.980 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.980 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.980 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.980 2 DEBUG nova.virt.hardware [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:22:50 np0005473739 nova_compute[259550]: 2025-10-07 14:22:50.980 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'vcpu_model' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.000 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.036 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.037 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.154 2 DEBUG oslo_concurrency.processutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.297 2 INFO nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Creating config drive at /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.302 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6cbyll5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.341 2 DEBUG nova.network.neutron [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Updated VIF entry in instance network info cache for port 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.342 2 DEBUG nova.network.neutron [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Updating instance_info_cache with network_info: [{"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.359 2 DEBUG oslo_concurrency.lockutils [req-d8d3dced-a86c-4e7a-b676-347650cc3a66 req-cd41e705-c8b5-458c-95b0-1cccf62f6ce5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c53d76d2-4525-4798-bcf4-ad2e6b18071a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:22:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1349800485' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.449 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj6cbyll5" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.473 2 DEBUG nova.storage.rbd_utils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.478 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.524 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.554 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.560 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2251524762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.644 2 DEBUG oslo_concurrency.processutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.651 2 DEBUG nova.compute.provider_tree [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.664 2 DEBUG oslo_concurrency.processutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config c53d76d2-4525-4798-bcf4-ad2e6b18071a_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.665 2 INFO nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Deleting local config drive /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a/disk.config because it was imported into RBD.#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.668 2 DEBUG nova.scheduler.client.report [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.715 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:51 np0005473739 NetworkManager[44949]: <info>  [1759846971.7216] manager: (tap132f4c57-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/396)
Oct  7 10:22:51 np0005473739 kernel: tap132f4c57-4e: entered promiscuous mode
Oct  7 10:22:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:51Z|00959|binding|INFO|Claiming lport 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d for this chassis.
Oct  7 10:22:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:51Z|00960|binding|INFO|132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d: Claiming fa:16:3e:06:7c:47 10.100.0.14
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.739 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:7c:47 10.100.0.14'], port_security=['fa:16:3e:06:7c:47 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c53d76d2-4525-4798-bcf4-ad2e6b18071a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.740 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.741 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:22:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:51Z|00961|binding|INFO|Setting lport 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d ovn-installed in OVS
Oct  7 10:22:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:51Z|00962|binding|INFO|Setting lport 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d up in Southbound
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.757 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a417c338-a782-4c80-952a-636d83bdb051]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:51 np0005473739 systemd-udevd[352901]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.763 2 INFO nova.scheduler.client.report [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Deleted allocations for instance 059cdf38-dead-4636-8397-0037c0c4ced3#033[00m
Oct  7 10:22:51 np0005473739 systemd-machined[214580]: New machine qemu-116-instance-0000005e.
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.766 2 DEBUG nova.compute.manager [req-b792fbff-c155-420e-a16e-316e0cdb6ea2 req-398f8e71-2887-4b70-bd15-28f62bc24bd4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Received event network-vif-deleted-a0352a7f-7725-4e54-abe0-3bcd200e802d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:51 np0005473739 systemd[1]: Started Virtual Machine qemu-116-instance-0000005e.
Oct  7 10:22:51 np0005473739 NetworkManager[44949]: <info>  [1759846971.7774] device (tap132f4c57-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:22:51 np0005473739 NetworkManager[44949]: <info>  [1759846971.7787] device (tap132f4c57-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.794 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2be6097c-0e03-4b23-8c19-9dae7eb87cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.798 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ee509d83-ec03-4911-b211-b7df10a1b077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.828 2 DEBUG oslo_concurrency.lockutils [None req-4251630a-8c2e-439d-82ee-31cfe9626b59 505542e64c504f158e0c4a26a0a79480 f9c0c1088f744e55acc586a4d180b728 - - default default] Lock "059cdf38-dead-4636-8397-0037c0c4ced3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.835s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.834 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2e11f64f-5bf9-4095-9495-c03b100a7562]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.860 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9799cbfa-3991-4caf-a0c7-3b68c43cb411]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 916, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 14, 'rx_bytes': 916, 'tx_bytes': 780, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 31052, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 352913, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.884 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5eccecd4-eb75-4e1e-b33e-c53f96962bf3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352915, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 352915, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.886 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:51 np0005473739 nova_compute[259550]: 2025-10-07 14:22:51.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.890 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.890 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.890 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:22:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:22:51.891 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:22:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1776: 305 pgs: 305 active+clean; 323 MiB data, 806 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.8 MiB/s wr, 190 op/s
Oct  7 10:22:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:22:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2875037884' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.060 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.063 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  <uuid>c8c2d410-01f0-4ef2-9ce3-232347c32e46</uuid>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  <name>instance-0000005b</name>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerShowV247Test-server-1207130222</nova:name>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:22:50</nova:creationTime>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:        <nova:user uuid="680153f599e540e2ad16e791561c02e2">tempest-ServerShowV247Test-1983417179-project-member</nova:user>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:        <nova:project uuid="a580b052291544cca604ec1cdb73a416">tempest-ServerShowV247Test-1983417179</nova:project>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <entry name="serial">c8c2d410-01f0-4ef2-9ce3-232347c32e46</entry>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <entry name="uuid">c8c2d410-01f0-4ef2-9ce3-232347c32e46</entry>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/console.log" append="off"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:22:52 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:22:52 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:22:52 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:22:52 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.116 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.116 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.117 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Using config drive#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.136 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.153 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'ec2_ids' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.183 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'keypairs' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.379 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Creating config drive at /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.384 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp15dq76l1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.526 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp15dq76l1" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.552 2 DEBUG nova.storage.rbd_utils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] rbd image c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.558 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.725 2 DEBUG oslo_concurrency.processutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config c8c2d410-01f0-4ef2-9ce3-232347c32e46_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.727 2 INFO nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deleting local config drive /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46/disk.config because it was imported into RBD.#033[00m
Oct  7 10:22:52 np0005473739 systemd-machined[214580]: New machine qemu-117-instance-0000005b.
Oct  7 10:22:52 np0005473739 systemd[1]: Started Virtual Machine qemu-117-instance-0000005b.
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.882 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846957.8796117, d932a7ab-839c-48b9-804f-90cc8634e93b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.882 2 INFO nova.compute.manager [-] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:22:52 np0005473739 nova_compute[259550]: 2025-10-07 14:22:52.902 2 DEBUG nova.compute.manager [None req-eac5e2c3-4f96-48ae-8731-97ff3a5fbf96 - - - - - -] [instance: d932a7ab-839c-48b9-804f-90cc8634e93b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.019 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846973.0187206, c53d76d2-4525-4798-bcf4-ad2e6b18071a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.020 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] VM Started (Lifecycle Event)#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.038 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.043 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846973.020317, c53d76d2-4525-4798-bcf4-ad2e6b18071a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.043 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.059 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.064 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.080 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:22:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1777: 305 pgs: 305 active+clean; 271 MiB data, 784 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 182 op/s
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.993 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for c8c2d410-01f0-4ef2-9ce3-232347c32e46 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.994 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846973.9932034, c8c2d410-01f0-4ef2-9ce3-232347c32e46 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.995 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.998 2 DEBUG nova.compute.manager [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:22:53 np0005473739 nova_compute[259550]: 2025-10-07 14:22:53.999 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.003 2 INFO nova.virt.libvirt.driver [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance spawned successfully.#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.004 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.025 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.036 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.043 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.044 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.045 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.045 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.046 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.046 2 DEBUG nova.virt.libvirt.driver [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.080 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.080 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846973.997373, c8c2d410-01f0-4ef2-9ce3-232347c32e46 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.080 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] VM Started (Lifecycle Event)#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.112 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.116 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.123 2 DEBUG nova.compute.manager [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.146 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.181 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.182 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.182 2 DEBUG nova.objects.instance [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.264 2 DEBUG oslo_concurrency.lockutils [None req-0ca1e730-eabb-4d1d-bedd-b0d9bc646e71 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.567 2 DEBUG nova.compute.manager [req-3bec4f2d-55ee-4701-ab0c-e3a93ebb48e6 req-40535015-8ed4-4d68-aa02-a7f6ed4d712c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.568 2 DEBUG oslo_concurrency.lockutils [req-3bec4f2d-55ee-4701-ab0c-e3a93ebb48e6 req-40535015-8ed4-4d68-aa02-a7f6ed4d712c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.568 2 DEBUG oslo_concurrency.lockutils [req-3bec4f2d-55ee-4701-ab0c-e3a93ebb48e6 req-40535015-8ed4-4d68-aa02-a7f6ed4d712c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.568 2 DEBUG oslo_concurrency.lockutils [req-3bec4f2d-55ee-4701-ab0c-e3a93ebb48e6 req-40535015-8ed4-4d68-aa02-a7f6ed4d712c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.568 2 DEBUG nova.compute.manager [req-3bec4f2d-55ee-4701-ab0c-e3a93ebb48e6 req-40535015-8ed4-4d68-aa02-a7f6ed4d712c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Processing event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.569 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.584 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846974.5724573, c53d76d2-4525-4798-bcf4-ad2e6b18071a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.586 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.591 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.595 2 INFO nova.virt.libvirt.driver [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Instance spawned successfully.#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.595 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.612 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.618 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.621 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.622 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.622 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.623 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.623 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.623 2 DEBUG nova.virt.libvirt.driver [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.647 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.672 2 INFO nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Took 8.22 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.673 2 DEBUG nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.730 2 INFO nova.compute.manager [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Took 9.19 seconds to build instance.#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.755 2 DEBUG oslo_concurrency.lockutils [None req-c2c7315b-37aa-4744-9a3b-a7a8994f93eb 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.282s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.922 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846959.9219847, a888e66f-9992-460a-ab15-79ac0261c4e2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.923 2 INFO nova.compute.manager [-] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:22:54 np0005473739 nova_compute[259550]: 2025-10-07 14:22:54.945 2 DEBUG nova.compute.manager [None req-46139455-f0f1-4bba-96d1-b0df361f91ac - - - - - -] [instance: a888e66f-9992-460a-ab15-79ac0261c4e2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:22:55 np0005473739 nova_compute[259550]: 2025-10-07 14:22:55.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:22:55Z|00963|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct  7 10:22:55 np0005473739 nova_compute[259550]: 2025-10-07 14:22:55.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:22:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1778: 305 pgs: 305 active+clean; 293 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.6 MiB/s wr, 178 op/s
Oct  7 10:22:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.335 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.336 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.336 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.336 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.337 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.338 2 INFO nova.compute.manager [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Terminating instance
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.338 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "refresh_cache-c8c2d410-01f0-4ef2-9ce3-232347c32e46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.338 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquired lock "refresh_cache-c8c2d410-01f0-4ef2-9ce3-232347c32e46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.339 2 DEBUG nova.network.neutron [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.500 2 DEBUG nova.network.neutron [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.681 2 DEBUG nova.compute.manager [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.681 2 DEBUG oslo_concurrency.lockutils [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.681 2 DEBUG oslo_concurrency.lockutils [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.681 2 DEBUG oslo_concurrency.lockutils [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.682 2 DEBUG nova.compute.manager [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] No waiting events found dispatching network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.682 2 WARNING nova.compute.manager [req-9bae44f2-5224-4c29-879a-6d21abc2548b req-ab95437a-c0f8-4641-9113-00166544cecd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received unexpected event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d for instance with vm_state active and task_state None.
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.833 2 DEBUG nova.network.neutron [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.859 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Releasing lock "refresh_cache-c8c2d410-01f0-4ef2-9ce3-232347c32e46" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:22:56 np0005473739 nova_compute[259550]: 2025-10-07 14:22:56.859 2 DEBUG nova.compute.manager [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:22:56 np0005473739 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005b.scope: Deactivated successfully.
Oct  7 10:22:56 np0005473739 systemd[1]: machine-qemu\x2d117\x2dinstance\x2d0000005b.scope: Consumed 3.990s CPU time.
Oct  7 10:22:56 np0005473739 systemd-machined[214580]: Machine qemu-117-instance-0000005b terminated.
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.077 2 INFO nova.virt.libvirt.driver [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance destroyed successfully.
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.078 2 DEBUG nova.objects.instance [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'resources' on Instance uuid c8c2d410-01f0-4ef2-9ce3-232347c32e46 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.493 2 INFO nova.virt.libvirt.driver [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deleting instance files /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46_del
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.493 2 INFO nova.virt.libvirt.driver [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deletion of /var/lib/nova/instances/c8c2d410-01f0-4ef2-9ce3-232347c32e46_del complete
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.582 2 INFO nova.compute.manager [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Took 0.72 seconds to destroy the instance on the hypervisor.
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.582 2 DEBUG oslo.service.loopingcall [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.583 2 DEBUG nova.compute.manager [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.583 2 DEBUG nova.network.neutron [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.723 2 DEBUG nova.network.neutron [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.737 2 DEBUG nova.network.neutron [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.751 2 INFO nova.compute.manager [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Took 0.17 seconds to deallocate network for instance.
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.792 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.792 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:57 np0005473739 nova_compute[259550]: 2025-10-07 14:22:57.906 2 DEBUG oslo_concurrency.processutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1779: 305 pgs: 305 active+clean; 293 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 982 KiB/s rd, 3.6 MiB/s wr, 161 op/s
Oct  7 10:22:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3745086220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.388 2 DEBUG oslo_concurrency.processutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.395 2 DEBUG nova.compute.provider_tree [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.409 2 DEBUG nova.scheduler.client.report [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.429 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.471 2 INFO nova.scheduler.client.report [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Deleted allocations for instance c8c2d410-01f0-4ef2-9ce3-232347c32e46
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.520 2 DEBUG oslo_concurrency.lockutils [None req-fc877677-0cc7-465f-ab1b-1c27d418adac 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "c8c2d410-01f0-4ef2-9ce3-232347c32e46" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.652 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.653 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.674 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.754 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.755 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.762 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.763 2 INFO nova.compute.claims [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:22:58 np0005473739 nova_compute[259550]: 2025-10-07 14:22:58.924 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:22:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3711068996' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.367 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.373 2 DEBUG nova.compute.provider_tree [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.396 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "fdd84566-c63e-469a-9173-55b845d32171" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.396 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.397 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "fdd84566-c63e-469a-9173-55b845d32171-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.397 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.397 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.399 2 INFO nova.compute.manager [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Terminating instance
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.400 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "refresh_cache-fdd84566-c63e-469a-9173-55b845d32171" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.400 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquired lock "refresh_cache-fdd84566-c63e-469a-9173-55b845d32171" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.401 2 DEBUG nova.network.neutron [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.403 2 DEBUG nova.scheduler.client.report [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.428 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.673s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.429 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.502 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.503 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.539 2 INFO nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.566 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.664 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.665 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.666 2 INFO nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Creating image(s)
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.690 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.717 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.741 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.746 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.789 2 DEBUG nova.network.neutron [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.831 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.832 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.833 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.833 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.855 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.859 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:22:59 np0005473739 nova_compute[259550]: 2025-10-07 14:22:59.905 2 DEBUG nova.policy [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:22:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1780: 305 pgs: 305 active+clean; 267 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.6 MiB/s wr, 247 op/s
Oct  7 10:23:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:00.054 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:00.054 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:00.055 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.098 2 DEBUG nova.network.neutron [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.113 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Releasing lock "refresh_cache-fdd84566-c63e-469a-9173-55b845d32171" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.113 2 DEBUG nova.compute.manager [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.183 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:00 np0005473739 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Deactivated successfully.
Oct  7 10:23:00 np0005473739 systemd[1]: machine-qemu\x2d111\x2dinstance\x2d0000005a.scope: Consumed 14.098s CPU time.
Oct  7 10:23:00 np0005473739 systemd-machined[214580]: Machine qemu-111-instance-0000005a terminated.
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.261 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.356 2 DEBUG nova.objects.instance [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid 569d484f-0c0f-4b2c-aca5-e88654f9501f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.358 2 INFO nova.virt.libvirt.driver [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Instance destroyed successfully.#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.358 2 DEBUG nova.objects.instance [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lazy-loading 'resources' on Instance uuid fdd84566-c63e-469a-9173-55b845d32171 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.375 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.376 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Ensure instance console log exists: /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.377 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.377 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.377 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.472 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Successfully created port: 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.796 2 INFO nova.virt.libvirt.driver [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Deleting instance files /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171_del#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.796 2 INFO nova.virt.libvirt.driver [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Deletion of /var/lib/nova/instances/fdd84566-c63e-469a-9173-55b845d32171_del complete#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.864 2 INFO nova.compute.manager [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] [instance: fdd84566-c63e-469a-9173-55b845d32171] Took 0.75 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.865 2 DEBUG oslo.service.loopingcall [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.866 2 DEBUG nova.compute.manager [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:23:00 np0005473739 nova_compute[259550]: 2025-10-07 14:23:00.866 2 DEBUG nova.network.neutron [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:23:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:01 np0005473739 nova_compute[259550]: 2025-10-07 14:23:01.437 2 DEBUG nova.network.neutron [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:23:01 np0005473739 nova_compute[259550]: 2025-10-07 14:23:01.522 2 DEBUG nova.network.neutron [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:01 np0005473739 nova_compute[259550]: 2025-10-07 14:23:01.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:01 np0005473739 nova_compute[259550]: 2025-10-07 14:23:01.655 2 INFO nova.compute.manager [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] Took 0.79 seconds to deallocate network for instance.#033[00m
Oct  7 10:23:01 np0005473739 nova_compute[259550]: 2025-10-07 14:23:01.840 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:01 np0005473739 nova_compute[259550]: 2025-10-07 14:23:01.841 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:01 np0005473739 nova_compute[259550]: 2025-10-07 14:23:01.950 2 DEBUG oslo_concurrency.processutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1781: 305 pgs: 305 active+clean; 263 MiB data, 772 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.3 MiB/s wr, 283 op/s
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.109 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Successfully updated port: 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.136 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.137 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.137 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.240 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846967.2391968, 059cdf38-dead-4636-8397-0037c0c4ced3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.241 2 INFO nova.compute.manager [-] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.267 2 DEBUG nova.compute.manager [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Received event network-changed-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.267 2 DEBUG nova.compute.manager [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Refreshing instance network info cache due to event network-changed-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.268 2 DEBUG oslo_concurrency.lockutils [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.315 2 DEBUG nova.compute.manager [None req-e8ca5208-0b03-45bb-b481-9828779556a4 - - - - - -] [instance: 059cdf38-dead-4636-8397-0037c0c4ced3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:23:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030206704' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.400 2 DEBUG oslo_concurrency.processutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.407 2 DEBUG nova.compute.provider_tree [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.537 2 DEBUG nova.scheduler.client.report [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.601 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.760s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.603 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.648 2 INFO nova.scheduler.client.report [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Deleted allocations for instance fdd84566-c63e-469a-9173-55b845d32171#033[00m
Oct  7 10:23:02 np0005473739 nova_compute[259550]: 2025-10-07 14:23:02.746 2 DEBUG oslo_concurrency.lockutils [None req-8efae4c3-5c8e-45f8-950d-005065d80333 680153f599e540e2ad16e791561c02e2 a580b052291544cca604ec1cdb73a416 - - default default] Lock "fdd84566-c63e-469a-9173-55b845d32171" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.349s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.689 2 DEBUG nova.network.neutron [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Updating instance_info_cache with network_info: [{"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.919 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.920 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Instance network_info: |[{"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.921 2 DEBUG oslo_concurrency.lockutils [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.921 2 DEBUG nova.network.neutron [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Refreshing network info cache for port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.925 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Start _get_guest_xml network_info=[{"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.931 2 WARNING nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.935 2 DEBUG nova.virt.libvirt.host [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.936 2 DEBUG nova.virt.libvirt.host [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.940 2 DEBUG nova.virt.libvirt.host [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.940 2 DEBUG nova.virt.libvirt.host [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.941 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.941 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.941 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.942 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.942 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.942 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.942 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.943 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.943 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.943 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.943 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.944 2 DEBUG nova.virt.hardware [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:23:03 np0005473739 nova_compute[259550]: 2025-10-07 14:23:03.947 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1782: 305 pgs: 305 active+clean; 241 MiB data, 757 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.6 MiB/s wr, 284 op/s
Oct  7 10:23:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:23:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/473774807' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.412 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.433 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.437 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:23:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687310136' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.898 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.900 2 DEBUG nova.virt.libvirt.vif [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=95,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-2uq04zx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:59Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=569d484f-0c0f-4b2c-aca5-e88654f9501f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.901 2 DEBUG nova.network.os_vif_util [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.902 2 DEBUG nova.network.os_vif_util [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.903 2 DEBUG nova.objects.instance [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid 569d484f-0c0f-4b2c-aca5-e88654f9501f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.925 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  <uuid>569d484f-0c0f-4b2c-aca5-e88654f9501f</uuid>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  <name>instance-0000005f</name>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersTestJSON-server-316574553</nova:name>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:23:03</nova:creationTime>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <nova:port uuid="03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <entry name="serial">569d484f-0c0f-4b2c-aca5-e88654f9501f</entry>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <entry name="uuid">569d484f-0c0f-4b2c-aca5-e88654f9501f</entry>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/569d484f-0c0f-4b2c-aca5-e88654f9501f_disk">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:de:12:c0"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <target dev="tap03cd7a1d-2b"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/console.log" append="off"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:23:04 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:23:04 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:23:04 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:23:04 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.927 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Preparing to wait for external event network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.928 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.928 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.929 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.930 2 DEBUG nova.virt.libvirt.vif [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=95,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-2uq04zx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:22:59Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=569d484f-0c0f-4b2c-aca5-e88654f9501f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.930 2 DEBUG nova.network.os_vif_util [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.931 2 DEBUG nova.network.os_vif_util [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.932 2 DEBUG os_vif [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.933 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.934 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.937 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03cd7a1d-2b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.938 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap03cd7a1d-2b, col_values=(('external_ids', {'iface-id': '03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:de:12:c0', 'vm-uuid': '569d484f-0c0f-4b2c-aca5-e88654f9501f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:04 np0005473739 NetworkManager[44949]: <info>  [1759846984.9411] manager: (tap03cd7a1d-2b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/397)
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:04 np0005473739 nova_compute[259550]: 2025-10-07 14:23:04.948 2 INFO os_vif [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b')#033[00m
Oct  7 10:23:05 np0005473739 nova_compute[259550]: 2025-10-07 14:23:05.019 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:23:05 np0005473739 nova_compute[259550]: 2025-10-07 14:23:05.020 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:23:05 np0005473739 nova_compute[259550]: 2025-10-07 14:23:05.020 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:de:12:c0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:23:05 np0005473739 nova_compute[259550]: 2025-10-07 14:23:05.020 2 INFO nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Using config drive#033[00m
Oct  7 10:23:05 np0005473739 nova_compute[259550]: 2025-10-07 14:23:05.041 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:05 np0005473739 nova_compute[259550]: 2025-10-07 14:23:05.663 2 INFO nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Creating config drive at /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config#033[00m
Oct  7 10:23:05 np0005473739 nova_compute[259550]: 2025-10-07 14:23:05.671 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdlbhn4y5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:05 np0005473739 nova_compute[259550]: 2025-10-07 14:23:05.837 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdlbhn4y5" returned: 0 in 0.166s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:05 np0005473739 nova_compute[259550]: 2025-10-07 14:23:05.862 2 DEBUG nova.storage.rbd_utils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:05 np0005473739 nova_compute[259550]: 2025-10-07 14:23:05.866 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1783: 305 pgs: 305 active+clean; 213 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 2.3 MiB/s wr, 242 op/s
Oct  7 10:23:06 np0005473739 nova_compute[259550]: 2025-10-07 14:23:06.109 2 DEBUG oslo_concurrency.processutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config 569d484f-0c0f-4b2c-aca5-e88654f9501f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.243s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:06 np0005473739 nova_compute[259550]: 2025-10-07 14:23:06.111 2 INFO nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Deleting local config drive /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f/disk.config because it was imported into RBD.#033[00m
Oct  7 10:23:06 np0005473739 kernel: tap03cd7a1d-2b: entered promiscuous mode
Oct  7 10:23:06 np0005473739 NetworkManager[44949]: <info>  [1759846986.1725] manager: (tap03cd7a1d-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/398)
Oct  7 10:23:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:06Z|00964|binding|INFO|Claiming lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd for this chassis.
Oct  7 10:23:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:06Z|00965|binding|INFO|03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd: Claiming fa:16:3e:de:12:c0 10.100.0.5
Oct  7 10:23:06 np0005473739 nova_compute[259550]: 2025-10-07 14:23:06.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.185 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:12:c0 10.100.0.5'], port_security=['fa:16:3e:de:12:c0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '569d484f-0c0f-4b2c-aca5-e88654f9501f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.187 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis#033[00m
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.188 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:23:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:06Z|00966|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd ovn-installed in OVS
Oct  7 10:23:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:06Z|00967|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd up in Southbound
Oct  7 10:23:06 np0005473739 nova_compute[259550]: 2025-10-07 14:23:06.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:06 np0005473739 nova_compute[259550]: 2025-10-07 14:23:06.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:06 np0005473739 systemd-udevd[353491]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.208 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4d324244-691b-4617-bf95-7ae5a85b7369]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:06 np0005473739 systemd-machined[214580]: New machine qemu-118-instance-0000005f.
Oct  7 10:23:06 np0005473739 nova_compute[259550]: 2025-10-07 14:23:06.216 2 DEBUG nova.network.neutron [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Updated VIF entry in instance network info cache for port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:23:06 np0005473739 nova_compute[259550]: 2025-10-07 14:23:06.216 2 DEBUG nova.network.neutron [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Updating instance_info_cache with network_info: [{"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:06 np0005473739 systemd[1]: Started Virtual Machine qemu-118-instance-0000005f.
Oct  7 10:23:06 np0005473739 NetworkManager[44949]: <info>  [1759846986.2234] device (tap03cd7a1d-2b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:23:06 np0005473739 NetworkManager[44949]: <info>  [1759846986.2243] device (tap03cd7a1d-2b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.238 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1afe6b-3392-417b-b365-68f133e050b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:06 np0005473739 nova_compute[259550]: 2025-10-07 14:23:06.240 2 DEBUG oslo_concurrency.lockutils [req-950f12e8-2090-41ef-a8ed-d7c56f9e386e req-6ad8747b-1362-4417-af64-40d4e049ff86 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-569d484f-0c0f-4b2c-aca5-e88654f9501f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.241 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5825eca8-aa8f-462b-a82e-bd391ddf5506]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.268 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e2275889-4a18-4e1c-881e-c0d5fd9a7c16]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.290 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a60a0ecd-f326-4f58-b9d3-089f34b56ee5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 916, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 16, 'rx_bytes': 916, 'tx_bytes': 864, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353503, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.308 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ea7a0b1a-606d-4dfc-81f1-43e39fd37fb4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353505, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353505, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.309 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:06 np0005473739 nova_compute[259550]: 2025-10-07 14:23:06.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:06 np0005473739 nova_compute[259550]: 2025-10-07 14:23:06.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.312 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.312 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.312 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:06.313 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:06 np0005473739 nova_compute[259550]: 2025-10-07 14:23:06.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.059 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846987.0593636, 569d484f-0c0f-4b2c-aca5-e88654f9501f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.060 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] VM Started (Lifecycle Event)#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.083 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.087 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846987.061973, 569d484f-0c0f-4b2c-aca5-e88654f9501f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.087 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.108 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.112 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.136 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.358 2 DEBUG nova.compute.manager [req-01a35197-2b7d-4443-9bde-f93be82b1ed1 req-8d863e89-959b-4451-aa8c-8fed394cf69a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Received event network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.359 2 DEBUG oslo_concurrency.lockutils [req-01a35197-2b7d-4443-9bde-f93be82b1ed1 req-8d863e89-959b-4451-aa8c-8fed394cf69a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.359 2 DEBUG oslo_concurrency.lockutils [req-01a35197-2b7d-4443-9bde-f93be82b1ed1 req-8d863e89-959b-4451-aa8c-8fed394cf69a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.359 2 DEBUG oslo_concurrency.lockutils [req-01a35197-2b7d-4443-9bde-f93be82b1ed1 req-8d863e89-959b-4451-aa8c-8fed394cf69a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.360 2 DEBUG nova.compute.manager [req-01a35197-2b7d-4443-9bde-f93be82b1ed1 req-8d863e89-959b-4451-aa8c-8fed394cf69a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Processing event network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.360 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.366 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.367 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759846987.366076, 569d484f-0c0f-4b2c-aca5-e88654f9501f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.367 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.373 2 INFO nova.virt.libvirt.driver [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Instance spawned successfully.#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.374 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:23:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:07Z|00103|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:06:7c:47 10.100.0.14
Oct  7 10:23:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:07Z|00104|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:06:7c:47 10.100.0.14
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.401 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.408 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.412 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.413 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.413 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.414 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.414 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.415 2 DEBUG nova.virt.libvirt.driver [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.442 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.476 2 INFO nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Took 7.81 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.476 2 DEBUG nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.572 2 INFO nova.compute.manager [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Took 8.84 seconds to build instance.#033[00m
Oct  7 10:23:07 np0005473739 nova_compute[259550]: 2025-10-07 14:23:07.595 2 DEBUG oslo_concurrency.lockutils [None req-5b450604-d2f1-483d-bef7-c910f6c1ff92 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.943s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1784: 305 pgs: 305 active+clean; 213 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 180 op/s
Oct  7 10:23:08 np0005473739 podman[353549]: 2025-10-07 14:23:08.090099134 +0000 UTC m=+0.070847734 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid)
Oct  7 10:23:08 np0005473739 podman[353548]: 2025-10-07 14:23:08.116172091 +0000 UTC m=+0.100739093 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 10:23:09 np0005473739 nova_compute[259550]: 2025-10-07 14:23:09.519 2 DEBUG nova.compute.manager [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Received event network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:09 np0005473739 nova_compute[259550]: 2025-10-07 14:23:09.519 2 DEBUG oslo_concurrency.lockutils [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:09 np0005473739 nova_compute[259550]: 2025-10-07 14:23:09.519 2 DEBUG oslo_concurrency.lockutils [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:09 np0005473739 nova_compute[259550]: 2025-10-07 14:23:09.520 2 DEBUG oslo_concurrency.lockutils [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:09 np0005473739 nova_compute[259550]: 2025-10-07 14:23:09.520 2 DEBUG nova.compute.manager [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] No waiting events found dispatching network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:23:09 np0005473739 nova_compute[259550]: 2025-10-07 14:23:09.520 2 WARNING nova.compute.manager [req-08a7cc0b-5848-4aa0-bc45-c1b25a8c8e16 req-1948d1eb-b6a7-4d74-be1a-d22e094e1547 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Received unexpected event network-vif-plugged-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd for instance with vm_state active and task_state None.#033[00m
Oct  7 10:23:09 np0005473739 nova_compute[259550]: 2025-10-07 14:23:09.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1785: 305 pgs: 305 active+clean; 235 MiB data, 773 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 3.1 MiB/s wr, 249 op/s
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.050 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.051 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.051 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.051 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.052 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.053 2 INFO nova.compute.manager [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Terminating instance#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.054 2 DEBUG nova.compute.manager [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:23:11 np0005473739 kernel: tap03cd7a1d-2b (unregistering): left promiscuous mode
Oct  7 10:23:11 np0005473739 NetworkManager[44949]: <info>  [1759846991.1080] device (tap03cd7a1d-2b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00968|binding|INFO|Releasing lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd from this chassis (sb_readonly=0)
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00969|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd down in Southbound
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00970|binding|INFO|Removing iface tap03cd7a1d-2b ovn-installed in OVS
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.126 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:12:c0 10.100.0.5'], port_security=['fa:16:3e:de:12:c0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '569d484f-0c0f-4b2c-aca5-e88654f9501f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.128 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.129 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.148 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[24271953-256d-4e90-bc16-30ac5f732c47]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Deactivated successfully.
Oct  7 10:23:11 np0005473739 systemd[1]: machine-qemu\x2d118\x2dinstance\x2d0000005f.scope: Consumed 4.503s CPU time.
Oct  7 10:23:11 np0005473739 systemd-machined[214580]: Machine qemu-118-instance-0000005f terminated.
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.178 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[111d226d-d38d-4d27-a2d5-b88ac449cdd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.183 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d4142d-1073-4cd6-a4cd-56883a59fa05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.212 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff24d67-0595-42e2-b890-6dc23dc56a21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.229 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[35a462bd-4f14-4da1-8914-5435ffffb37b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 916, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 18, 'rx_bytes': 916, 'tx_bytes': 948, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353597, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.243 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[087981bd-5aaf-46ab-96cf-762b960c0af4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353598, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353598, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.245 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.250 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.250 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.251 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.251 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:11 np0005473739 kernel: tap03cd7a1d-2b: entered promiscuous mode
Oct  7 10:23:11 np0005473739 NetworkManager[44949]: <info>  [1759846991.2697] manager: (tap03cd7a1d-2b): new Tun device (/org/freedesktop/NetworkManager/Devices/399)
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00971|binding|INFO|Claiming lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd for this chassis.
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00972|binding|INFO|03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd: Claiming fa:16:3e:de:12:c0 10.100.0.5
Oct  7 10:23:11 np0005473739 kernel: tap03cd7a1d-2b (unregistering): left promiscuous mode
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.279 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:12:c0 10.100.0.5'], port_security=['fa:16:3e:de:12:c0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '569d484f-0c0f-4b2c-aca5-e88654f9501f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.281 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.282 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.293 2 INFO nova.virt.libvirt.driver [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Instance destroyed successfully.#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.293 2 DEBUG nova.objects.instance [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid 569d484f-0c0f-4b2c-aca5-e88654f9501f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.298 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8644c197-d994-4f76-8790-d21ac89ec0a1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00973|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd ovn-installed in OVS
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00974|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd up in Southbound
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00975|binding|INFO|Releasing lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd from this chassis (sb_readonly=1)
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00976|if_status|INFO|Dropped 2 log messages in last 113 seconds (most recently, 113 seconds ago) due to excessive rate
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00977|if_status|INFO|Not setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd down as sb is readonly
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00978|binding|INFO|Removing iface tap03cd7a1d-2b ovn-installed in OVS
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00979|binding|INFO|Releasing lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd from this chassis (sb_readonly=0)
Oct  7 10:23:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:11Z|00980|binding|INFO|Setting lport 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd down in Southbound
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.312 2 DEBUG nova.virt.libvirt.vif [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:22:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=95,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:23:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-2uq04zx6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:23:07Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=569d484f-0c0f-4b2c-aca5-e88654f9501f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.313 2 DEBUG nova.network.os_vif_util [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "address": "fa:16:3e:de:12:c0", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap03cd7a1d-2b", "ovs_interfaceid": "03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.314 2 DEBUG nova.network.os_vif_util [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.314 2 DEBUG os_vif [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.316 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03cd7a1d-2b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.318 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:de:12:c0 10.100.0.5'], port_security=['fa:16:3e:de:12:c0 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': '569d484f-0c0f-4b2c-aca5-e88654f9501f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.324 2 INFO os_vif [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:de:12:c0,bridge_name='br-int',has_traffic_filtering=True,id=03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap03cd7a1d-2b')#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.328 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e905ad09-fe0c-493d-868f-8cd64cb6523d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.330 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aab12013-1dae-4ca8-a43e-d317fe32b4f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.363 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[35752ebd-8bff-4f8d-93ed-711de3b5c17b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.383 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1bf855b3-73b5-4c66-b70f-3fb5eea9164c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 916, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 20, 'rx_bytes': 916, 'tx_bytes': 1032, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353631, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.401 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7a1b7f10-6674-42f3-8942-5cc4008e9581]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353632, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353632, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.403 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.407 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.408 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.409 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.424 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7f2c339-ebc5-4a91-8eb2-86ed63fa60d0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.454 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fb7320cb-dec9-425c-b7ae-ae1b0f4801ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.458 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e7343633-fe70-4294-a89c-a141b7d560b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.487 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5207d359-00e0-4fea-978f-55b8e4ba3a3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.507 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[572a62b4-e03e-41ee-93dd-59d1d025db8c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 22, 'rx_bytes': 916, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 22, 'rx_bytes': 916, 'tx_bytes': 1116, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 353638, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.523 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0009b2dd-acb9-44ac-98e1-3d91e97416b9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353639, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 353639, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.525 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.527 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.528 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.528 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:11.529 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.854 2 INFO nova.virt.libvirt.driver [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Deleting instance files /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f_del#033[00m
Oct  7 10:23:11 np0005473739 nova_compute[259550]: 2025-10-07 14:23:11.855 2 INFO nova.virt.libvirt.driver [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Deletion of /var/lib/nova/instances/569d484f-0c0f-4b2c-aca5-e88654f9501f_del complete#033[00m
Oct  7 10:23:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1786: 305 pgs: 305 active+clean; 246 MiB data, 783 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 3.9 MiB/s wr, 222 op/s
Oct  7 10:23:12 np0005473739 nova_compute[259550]: 2025-10-07 14:23:12.033 2 INFO nova.compute.manager [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:23:12 np0005473739 nova_compute[259550]: 2025-10-07 14:23:12.033 2 DEBUG oslo.service.loopingcall [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:23:12 np0005473739 nova_compute[259550]: 2025-10-07 14:23:12.033 2 DEBUG nova.compute.manager [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:23:12 np0005473739 nova_compute[259550]: 2025-10-07 14:23:12.034 2 DEBUG nova.network.neutron [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:23:12 np0005473739 nova_compute[259550]: 2025-10-07 14:23:12.076 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846977.0753396, c8c2d410-01f0-4ef2-9ce3-232347c32e46 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:12 np0005473739 nova_compute[259550]: 2025-10-07 14:23:12.077 2 INFO nova.compute.manager [-] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:23:12 np0005473739 nova_compute[259550]: 2025-10-07 14:23:12.097 2 DEBUG nova.compute.manager [None req-4ed58ada-2a2a-4ed1-bdb7-7f1ebc9cb3b0 - - - - - -] [instance: c8c2d410-01f0-4ef2-9ce3-232347c32e46] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.015 2 DEBUG nova.network.neutron [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.035 2 INFO nova.compute.manager [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Took 1.00 seconds to deallocate network for instance.#033[00m
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.076 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.077 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.169 2 DEBUG oslo_concurrency.processutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.383 2 DEBUG nova.compute.manager [req-177ece94-f21f-45fe-a939-7d94a8a008e4 req-2d64d94b-9a7d-4bc3-8b00-0723778dbab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Received event network-vif-deleted-03cd7a1d-2bef-4e45-8c09-1b0798c3fdfd external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3051669186' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.622 2 DEBUG oslo_concurrency.processutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.629 2 DEBUG nova.compute.provider_tree [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.650 2 DEBUG nova.scheduler.client.report [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.675 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.695 2 INFO nova.scheduler.client.report [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance 569d484f-0c0f-4b2c-aca5-e88654f9501f#033[00m
Oct  7 10:23:13 np0005473739 nova_compute[259550]: 2025-10-07 14:23:13.765 2 DEBUG oslo_concurrency.lockutils [None req-3f56346c-f88a-4094-82eb-8ba8c1032f60 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "569d484f-0c0f-4b2c-aca5-e88654f9501f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:23:13 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d7711c8f-cd6b-493b-860e-1cf8387973ac does not exist
Oct  7 10:23:13 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 89ca9e05-c5ec-4acf-b4e7-66c0801a5e52 does not exist
Oct  7 10:23:13 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 41c8a871-5009-41b3-bb44-f03176687eb5 does not exist
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:23:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:23:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1787: 305 pgs: 305 active+clean; 216 MiB data, 769 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 187 op/s
Oct  7 10:23:14 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:23:14 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:23:14 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.374 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.375 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.394 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.462 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.462 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.471 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.471 2 INFO nova.compute.claims [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:23:14 np0005473739 podman[353935]: 2025-10-07 14:23:14.516700485 +0000 UTC m=+0.042058295 container create ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:23:14 np0005473739 systemd[1]: Started libpod-conmon-ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce.scope.
Oct  7 10:23:14 np0005473739 podman[353935]: 2025-10-07 14:23:14.49705072 +0000 UTC m=+0.022408570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.595 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:23:14 np0005473739 podman[353935]: 2025-10-07 14:23:14.634164824 +0000 UTC m=+0.159522694 container init ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:23:14 np0005473739 podman[353935]: 2025-10-07 14:23:14.646946085 +0000 UTC m=+0.172303915 container start ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:23:14 np0005473739 podman[353935]: 2025-10-07 14:23:14.651158568 +0000 UTC m=+0.176516418 container attach ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 10:23:14 np0005473739 gifted_brown[353952]: 167 167
Oct  7 10:23:14 np0005473739 systemd[1]: libpod-ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce.scope: Deactivated successfully.
Oct  7 10:23:14 np0005473739 conmon[353952]: conmon ea424bb0b6efdfea74e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce.scope/container/memory.events
Oct  7 10:23:14 np0005473739 podman[353935]: 2025-10-07 14:23:14.655560996 +0000 UTC m=+0.180918836 container died ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 10:23:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9ca1273accc457cb54595a9b777d678e03e445dfc4b7536e5b3394e3caf3e40d-merged.mount: Deactivated successfully.
Oct  7 10:23:14 np0005473739 podman[353935]: 2025-10-07 14:23:14.709571799 +0000 UTC m=+0.234929599 container remove ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_brown, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:23:14 np0005473739 systemd[1]: libpod-conmon-ea424bb0b6efdfea74e614010d8eaed1b2056775246a10ace410198974d2ecce.scope: Deactivated successfully.
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.749 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.751 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.751 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.752 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.752 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.754 2 INFO nova.compute.manager [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Terminating instance#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.755 2 DEBUG nova.compute.manager [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:23:14 np0005473739 kernel: tap132f4c57-4e (unregistering): left promiscuous mode
Oct  7 10:23:14 np0005473739 NetworkManager[44949]: <info>  [1759846994.8334] device (tap132f4c57-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:14Z|00981|binding|INFO|Releasing lport 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d from this chassis (sb_readonly=0)
Oct  7 10:23:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:14Z|00982|binding|INFO|Setting lport 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d down in Southbound
Oct  7 10:23:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:14Z|00983|binding|INFO|Removing iface tap132f4c57-4e ovn-installed in OVS
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.854 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:06:7c:47 10.100.0.14'], port_security=['fa:16:3e:06:7c:47 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'c53d76d2-4525-4798-bcf4-ad2e6b18071a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:23:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.856 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis#033[00m
Oct  7 10:23:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.857 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:23:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.879 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc546f45-4c07-4ba8-8521-46e4169918c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:14 np0005473739 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Deactivated successfully.
Oct  7 10:23:14 np0005473739 systemd[1]: machine-qemu\x2d116\x2dinstance\x2d0000005e.scope: Consumed 13.519s CPU time.
Oct  7 10:23:14 np0005473739 systemd-machined[214580]: Machine qemu-116-instance-0000005e terminated.
Oct  7 10:23:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.928 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c315229a-ebf9-400e-ab00-0513f7832d14]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.931 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[22fa438b-d499-44dc-a1b5-d8a684e61f11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:14 np0005473739 podman[353999]: 2025-10-07 14:23:14.938755453 +0000 UTC m=+0.062084910 container create dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:23:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.963 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eaaab0a5-d199-45dc-84d2-25da55987880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:14.995 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c36c7bd7-5e4d-4824-8153-317c039a22f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 24, 'rx_bytes': 958, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 24, 'rx_bytes': 958, 'tx_bytes': 1200, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 354024, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:14 np0005473739 nova_compute[259550]: 2025-10-07 14:23:14.996 2 INFO nova.virt.libvirt.driver [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Instance destroyed successfully.#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.003 2 DEBUG nova.objects.instance [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid c53d76d2-4525-4798-bcf4-ad2e6b18071a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:15 np0005473739 systemd[1]: Started libpod-conmon-dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff.scope.
Oct  7 10:23:15 np0005473739 podman[353999]: 2025-10-07 14:23:14.917572057 +0000 UTC m=+0.040901564 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:23:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.019 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dc6244f8-5bd6-443b-84c1-258d5536e023]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354035, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 354035, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.019 2 DEBUG nova.virt.libvirt.vif [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:22:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-316574553',display_name='tempest-ServersTestJSON-server-316574553',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-316574553',id=94,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-3nuzvc5m',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram
='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:54Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=c53d76d2-4525-4798-bcf4-ad2e6b18071a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.020 2 DEBUG nova.network.os_vif_util [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "address": "fa:16:3e:06:7c:47", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap132f4c57-4e", "ovs_interfaceid": "132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.020 2 DEBUG nova.network.os_vif_util [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.021 2 DEBUG os_vif [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:23:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.021 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.022 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap132f4c57-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.026 2 INFO os_vif [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:06:7c:47,bridge_name='br-int',has_traffic_filtering=True,id=132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap132f4c57-4e')#033[00m
Oct  7 10:23:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.027 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.028 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.029 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:15.029 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:23:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:15 np0005473739 podman[353999]: 2025-10-07 14:23:15.074704776 +0000 UTC m=+0.198034243 container init dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:23:15 np0005473739 podman[353999]: 2025-10-07 14:23:15.082061002 +0000 UTC m=+0.205390469 container start dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:23:15 np0005473739 podman[353999]: 2025-10-07 14:23:15.085654498 +0000 UTC m=+0.208984005 container attach dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:23:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:23:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3756524447' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.131 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.138 2 DEBUG nova.compute.provider_tree [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.158 2 DEBUG nova.scheduler.client.report [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.182 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.720s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.183 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.227 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.227 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.245 2 INFO nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.261 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.353 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846980.3300617, fdd84566-c63e-469a-9173-55b845d32171 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.353 2 INFO nova.compute.manager [-] [instance: fdd84566-c63e-469a-9173-55b845d32171] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.373 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.374 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.375 2 INFO nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Creating image(s)#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.396 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.425 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.447 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.449 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.482 2 DEBUG nova.compute.manager [None req-a8dbbb45-bdee-466f-9d80-b494890cc6a6 - - - - - -] [instance: fdd84566-c63e-469a-9173-55b845d32171] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.489 2 INFO nova.virt.libvirt.driver [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Deleting instance files /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a_del#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.490 2 INFO nova.virt.libvirt.driver [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Deletion of /var/lib/nova/instances/c53d76d2-4525-4798-bcf4-ad2e6b18071a_del complete#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.520 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.521 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.522 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.522 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.538 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.541 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.581 2 INFO nova.compute.manager [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.581 2 DEBUG oslo.service.loopingcall [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.582 2 DEBUG nova.compute.manager [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.582 2 DEBUG nova.network.neutron [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.618 2 DEBUG nova.policy [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'ef4a6b36d606485ea7b0334d25dd23bd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '9fd5efb5e7c240a997805db53864ecfc', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.857 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.904 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] resizing rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:23:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1788: 305 pgs: 305 active+clean; 168 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 175 op/s
Oct  7 10:23:15 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.986 2 DEBUG nova.objects.instance [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lazy-loading 'migration_context' on Instance uuid ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:15.999 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.000 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Ensure instance console log exists: /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.000 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.000 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.000 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:16 np0005473739 serene_allen[354036]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:23:16 np0005473739 serene_allen[354036]: --> relative data size: 1.0
Oct  7 10:23:16 np0005473739 serene_allen[354036]: --> All data devices are unavailable
Oct  7 10:23:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:16 np0005473739 systemd[1]: libpod-dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff.scope: Deactivated successfully.
Oct  7 10:23:16 np0005473739 podman[353999]: 2025-10-07 14:23:16.270090047 +0000 UTC m=+1.393419544 container died dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:23:16 np0005473739 systemd[1]: libpod-dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff.scope: Consumed 1.108s CPU time.
Oct  7 10:23:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d1787db4ff472bfdcbded1cc706fbbe3e9341e27dff36f0f69d07dc0294ac91c-merged.mount: Deactivated successfully.
Oct  7 10:23:16 np0005473739 podman[353999]: 2025-10-07 14:23:16.336130692 +0000 UTC m=+1.459460149 container remove dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_allen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 10:23:16 np0005473739 systemd[1]: libpod-conmon-dbb4518d7e064b74a63d307bb5ae9a2867b9ee5ca0ec1e175f8cf4842dd14fff.scope: Deactivated successfully.
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.549 2 DEBUG nova.compute.manager [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-unplugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.550 2 DEBUG oslo_concurrency.lockutils [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.550 2 DEBUG oslo_concurrency.lockutils [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.550 2 DEBUG oslo_concurrency.lockutils [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.550 2 DEBUG nova.compute.manager [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] No waiting events found dispatching network-vif-unplugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.550 2 DEBUG nova.compute.manager [req-ac5d73d5-1f8e-4d03-a007-001eaa8ac6ec req-531aeb8b-7ba2-4715-95b7-cb53247857b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-unplugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:16.694 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=23, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=22) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:23:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:16.695 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.867 2 DEBUG nova.network.neutron [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.880 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Successfully created port: cef2a5c2-ea82-48c7-8da3-0028586c49aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.885 2 INFO nova.compute.manager [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Took 1.30 seconds to deallocate network for instance.#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.922 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.923 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:16 np0005473739 podman[354403]: 2025-10-07 14:23:16.958980274 +0000 UTC m=+0.039028384 container create bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 10:23:16 np0005473739 nova_compute[259550]: 2025-10-07 14:23:16.993 2 DEBUG oslo_concurrency.processutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:17 np0005473739 systemd[1]: Started libpod-conmon-bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83.scope.
Oct  7 10:23:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:23:17 np0005473739 podman[354403]: 2025-10-07 14:23:16.944479256 +0000 UTC m=+0.024527386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:23:17 np0005473739 podman[354403]: 2025-10-07 14:23:17.04753628 +0000 UTC m=+0.127584420 container init bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 10:23:17 np0005473739 podman[354403]: 2025-10-07 14:23:17.059058518 +0000 UTC m=+0.139106628 container start bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 10:23:17 np0005473739 youthful_sinoussi[354419]: 167 167
Oct  7 10:23:17 np0005473739 podman[354403]: 2025-10-07 14:23:17.063093666 +0000 UTC m=+0.143141776 container attach bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:23:17 np0005473739 systemd[1]: libpod-bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83.scope: Deactivated successfully.
Oct  7 10:23:17 np0005473739 conmon[354419]: conmon bc6dffe67d3b80a98747 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83.scope/container/memory.events
Oct  7 10:23:17 np0005473739 podman[354403]: 2025-10-07 14:23:17.064515334 +0000 UTC m=+0.144563444 container died bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 10:23:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7363ad5e3a7a612ea6181f51a63c844c9af041d3dd83c97fb3e0e21dd1f9dadd-merged.mount: Deactivated successfully.
Oct  7 10:23:17 np0005473739 podman[354403]: 2025-10-07 14:23:17.111615633 +0000 UTC m=+0.191663743 container remove bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_sinoussi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:23:17 np0005473739 systemd[1]: libpod-conmon-bc6dffe67d3b80a98747df01beccc22583bc9f096500d21b88ab7a92b8b4ee83.scope: Deactivated successfully.
Oct  7 10:23:17 np0005473739 podman[354463]: 2025-10-07 14:23:17.365401574 +0000 UTC m=+0.106220070 container create c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:23:17 np0005473739 podman[354463]: 2025-10-07 14:23:17.284454131 +0000 UTC m=+0.025272627 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:23:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:23:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1140728377' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:23:17 np0005473739 nova_compute[259550]: 2025-10-07 14:23:17.475 2 DEBUG oslo_concurrency.processutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:17 np0005473739 nova_compute[259550]: 2025-10-07 14:23:17.482 2 DEBUG nova.compute.provider_tree [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:23:17 np0005473739 nova_compute[259550]: 2025-10-07 14:23:17.498 2 DEBUG nova.scheduler.client.report [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:23:17 np0005473739 nova_compute[259550]: 2025-10-07 14:23:17.520 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.597s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:17 np0005473739 systemd[1]: Started libpod-conmon-c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c.scope.
Oct  7 10:23:17 np0005473739 nova_compute[259550]: 2025-10-07 14:23:17.548 2 INFO nova.scheduler.client.report [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance c53d76d2-4525-4798-bcf4-ad2e6b18071a#033[00m
Oct  7 10:23:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:23:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3bd96607fef57a8bab59e1701cdc3d67df4c25e642f8220f4c273039e93c4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3bd96607fef57a8bab59e1701cdc3d67df4c25e642f8220f4c273039e93c4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3bd96607fef57a8bab59e1701cdc3d67df4c25e642f8220f4c273039e93c4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1c3bd96607fef57a8bab59e1701cdc3d67df4c25e642f8220f4c273039e93c4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:17 np0005473739 podman[354463]: 2025-10-07 14:23:17.600517596 +0000 UTC m=+0.341336092 container init c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:23:17 np0005473739 nova_compute[259550]: 2025-10-07 14:23:17.602 2 DEBUG oslo_concurrency.lockutils [None req-c5368cbc-7eb9-409d-b018-3faa60705022 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:17 np0005473739 podman[354463]: 2025-10-07 14:23:17.606959249 +0000 UTC m=+0.347777735 container start c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:23:17 np0005473739 podman[354463]: 2025-10-07 14:23:17.646744222 +0000 UTC m=+0.387562728 container attach c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:23:17 np0005473739 nova_compute[259550]: 2025-10-07 14:23:17.794 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Successfully updated port: cef2a5c2-ea82-48c7-8da3-0028586c49aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:23:17 np0005473739 nova_compute[259550]: 2025-10-07 14:23:17.830 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:23:17 np0005473739 nova_compute[259550]: 2025-10-07 14:23:17.831 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquired lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:23:17 np0005473739 nova_compute[259550]: 2025-10-07 14:23:17.832 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:23:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1789: 305 pgs: 305 active+clean; 168 MiB data, 761 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 169 op/s
Oct  7 10:23:18 np0005473739 nova_compute[259550]: 2025-10-07 14:23:18.188 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]: {
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:    "0": [
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:        {
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "devices": [
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "/dev/loop3"
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            ],
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_name": "ceph_lv0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_size": "21470642176",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "name": "ceph_lv0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "tags": {
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.cluster_name": "ceph",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.crush_device_class": "",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.encrypted": "0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.osd_id": "0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.type": "block",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.vdo": "0"
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            },
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "type": "block",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "vg_name": "ceph_vg0"
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:        }
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:    ],
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:    "1": [
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:        {
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "devices": [
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "/dev/loop4"
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            ],
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_name": "ceph_lv1",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_size": "21470642176",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "name": "ceph_lv1",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "tags": {
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.cluster_name": "ceph",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.crush_device_class": "",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.encrypted": "0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.osd_id": "1",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.type": "block",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.vdo": "0"
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            },
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "type": "block",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "vg_name": "ceph_vg1"
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:        }
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:    ],
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:    "2": [
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:        {
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "devices": [
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "/dev/loop5"
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            ],
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_name": "ceph_lv2",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_size": "21470642176",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "name": "ceph_lv2",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "tags": {
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.cluster_name": "ceph",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.crush_device_class": "",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.encrypted": "0",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.osd_id": "2",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.type": "block",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:                "ceph.vdo": "0"
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            },
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "type": "block",
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:            "vg_name": "ceph_vg2"
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:        }
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]:    ]
Oct  7 10:23:18 np0005473739 crazy_stonebraker[354481]: }
Oct  7 10:23:18 np0005473739 systemd[1]: libpod-c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c.scope: Deactivated successfully.
Oct  7 10:23:18 np0005473739 podman[354463]: 2025-10-07 14:23:18.400304057 +0000 UTC m=+1.141122633 container died c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:23:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e1c3bd96607fef57a8bab59e1701cdc3d67df4c25e642f8220f4c273039e93c4-merged.mount: Deactivated successfully.
Oct  7 10:23:18 np0005473739 podman[354463]: 2025-10-07 14:23:18.511641732 +0000 UTC m=+1.252460218 container remove c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_stonebraker, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 10:23:18 np0005473739 systemd[1]: libpod-conmon-c4b1bbc01b110d0e3699ccd0e754b49a826e0dbc93bb3217a6c7ca8f5ab95d4c.scope: Deactivated successfully.
Oct  7 10:23:18 np0005473739 nova_compute[259550]: 2025-10-07 14:23:18.689 2 DEBUG nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:18 np0005473739 nova_compute[259550]: 2025-10-07 14:23:18.691 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:18 np0005473739 nova_compute[259550]: 2025-10-07 14:23:18.691 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:18 np0005473739 nova_compute[259550]: 2025-10-07 14:23:18.691 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c53d76d2-4525-4798-bcf4-ad2e6b18071a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:18 np0005473739 nova_compute[259550]: 2025-10-07 14:23:18.691 2 DEBUG nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] No waiting events found dispatching network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:23:18 np0005473739 nova_compute[259550]: 2025-10-07 14:23:18.692 2 WARNING nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received unexpected event network-vif-plugged-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:23:18 np0005473739 nova_compute[259550]: 2025-10-07 14:23:18.692 2 DEBUG nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Received event network-vif-deleted-132f4c57-4ef1-4fa2-b2b5-4738e9ca8b1d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:18 np0005473739 nova_compute[259550]: 2025-10-07 14:23:18.694 2 DEBUG nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-changed-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:18 np0005473739 nova_compute[259550]: 2025-10-07 14:23:18.694 2 DEBUG nova.compute.manager [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Refreshing instance network info cache due to event network-changed-cef2a5c2-ea82-48c7-8da3-0028586c49aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:23:18 np0005473739 nova_compute[259550]: 2025-10-07 14:23:18.694 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:23:19 np0005473739 podman[354642]: 2025-10-07 14:23:19.241909865 +0000 UTC m=+0.053764988 container create a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:23:19 np0005473739 systemd[1]: Started libpod-conmon-a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0.scope.
Oct  7 10:23:19 np0005473739 podman[354642]: 2025-10-07 14:23:19.211004779 +0000 UTC m=+0.022859892 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:23:19 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:23:19 np0005473739 podman[354642]: 2025-10-07 14:23:19.394488192 +0000 UTC m=+0.206343305 container init a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:23:19 np0005473739 podman[354642]: 2025-10-07 14:23:19.402817584 +0000 UTC m=+0.214672677 container start a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:23:19 np0005473739 pensive_mendel[354658]: 167 167
Oct  7 10:23:19 np0005473739 systemd[1]: libpod-a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0.scope: Deactivated successfully.
Oct  7 10:23:19 np0005473739 podman[354642]: 2025-10-07 14:23:19.43934101 +0000 UTC m=+0.251196163 container attach a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:23:19 np0005473739 podman[354642]: 2025-10-07 14:23:19.44005028 +0000 UTC m=+0.251905383 container died a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:23:19 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d1f57d7050a010d76742f039d9cc2f135ac9abb8e115dacca7d5d32d228d734c-merged.mount: Deactivated successfully.
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.581 2 DEBUG nova.network.neutron [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Updating instance_info_cache with network_info: [{"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.604 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Releasing lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.605 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Instance network_info: |[{"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.605 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.605 2 DEBUG nova.network.neutron [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Refreshing network info cache for port cef2a5c2-ea82-48c7-8da3-0028586c49aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:23:19 np0005473739 podman[354642]: 2025-10-07 14:23:19.608463759 +0000 UTC m=+0.420319042 container remove a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_mendel, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.609 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Start _get_guest_xml network_info=[{"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.616 2 WARNING nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:23:19 np0005473739 systemd[1]: libpod-conmon-a55515e8f695232e15b21b5ea4901aec29285f9aac6baebdee5be0fa25eec9b0.scope: Deactivated successfully.
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.625 2 DEBUG nova.virt.libvirt.host [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.626 2 DEBUG nova.virt.libvirt.host [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.632 2 DEBUG nova.virt.libvirt.host [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.632 2 DEBUG nova.virt.libvirt.host [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.633 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.633 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.634 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.634 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.634 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.635 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.635 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.635 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.636 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.636 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.636 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.637 2 DEBUG nova.virt.hardware [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:23:19 np0005473739 nova_compute[259550]: 2025-10-07 14:23:19.640 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:19 np0005473739 podman[354683]: 2025-10-07 14:23:19.764795387 +0000 UTC m=+0.023131830 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:23:19 np0005473739 podman[354683]: 2025-10-07 14:23:19.877349124 +0000 UTC m=+0.135685537 container create 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:23:19 np0005473739 systemd[1]: Started libpod-conmon-6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4.scope.
Oct  7 10:23:19 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:23:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50561b75e0bdd7ab3e1e266516a48b26c4502ad42767ba47edd4da70e13e6964/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50561b75e0bdd7ab3e1e266516a48b26c4502ad42767ba47edd4da70e13e6964/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50561b75e0bdd7ab3e1e266516a48b26c4502ad42767ba47edd4da70e13e6964/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/50561b75e0bdd7ab3e1e266516a48b26c4502ad42767ba47edd4da70e13e6964/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1790: 305 pgs: 305 active+clean; 141 MiB data, 746 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.8 MiB/s wr, 216 op/s
Oct  7 10:23:19 np0005473739 podman[354683]: 2025-10-07 14:23:19.980192722 +0000 UTC m=+0.238529155 container init 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:23:19 np0005473739 podman[354683]: 2025-10-07 14:23:19.988085363 +0000 UTC m=+0.246421786 container start 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 10:23:19 np0005473739 podman[354683]: 2025-10-07 14:23:19.999039016 +0000 UTC m=+0.257375709 container attach 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4940471' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.131 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.160 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.165 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #81. Immutable memtables: 0.
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.166545) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 45] Flushing memtable with next log file: 81
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000166662, "job": 45, "event": "flush_started", "num_memtables": 1, "num_entries": 1080, "num_deletes": 254, "total_data_size": 1355732, "memory_usage": 1378944, "flush_reason": "Manual Compaction"}
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 45] Level-0 flush table #82: started
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000214554, "cf_name": "default", "job": 45, "event": "table_file_creation", "file_number": 82, "file_size": 1340073, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 36657, "largest_seqno": 37736, "table_properties": {"data_size": 1334880, "index_size": 2589, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12272, "raw_average_key_size": 20, "raw_value_size": 1324070, "raw_average_value_size": 2206, "num_data_blocks": 114, "num_entries": 600, "num_filter_entries": 600, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759846922, "oldest_key_time": 1759846922, "file_creation_time": 1759847000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 82, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 45] Flush lasted 48012 microseconds, and 8112 cpu microseconds.
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.214594) [db/flush_job.cc:967] [default] [JOB 45] Level-0 flush table #82: 1340073 bytes OK
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.214616) [db/memtable_list.cc:519] [default] Level-0 commit table #82 started
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.216789) [db/memtable_list.cc:722] [default] Level-0 commit table #82: memtable #1 done
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.216803) EVENT_LOG_v1 {"time_micros": 1759847000216799, "job": 45, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.216820) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 45] Try to delete WAL files size 1350546, prev total WAL file size 1350546, number of live WAL files 2.
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000078.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.217455) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033323633' seq:72057594037927935, type:22 .. '7061786F730033353135' seq:0, type:0; will stop at (end)
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 46] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 45 Base level 0, inputs: [82(1308KB)], [80(8186KB)]
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000217483, "job": 46, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [82], "files_L6": [80], "score": -1, "input_data_size": 9723476, "oldest_snapshot_seqno": -1}
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 46] Generated table #83: 6244 keys, 8100393 bytes, temperature: kUnknown
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000277028, "cf_name": "default", "job": 46, "event": "table_file_creation", "file_number": 83, "file_size": 8100393, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8059860, "index_size": 23847, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15621, "raw_key_size": 157964, "raw_average_key_size": 25, "raw_value_size": 7949119, "raw_average_value_size": 1273, "num_data_blocks": 960, "num_entries": 6244, "num_filter_entries": 6244, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847000, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 83, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.277293) [db/compaction/compaction_job.cc:1663] [default] [JOB 46] Compacted 1@0 + 1@6 files to L6 => 8100393 bytes
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.278973) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 163.0 rd, 135.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 8.0 +0.0 blob) out(7.7 +0.0 blob), read-write-amplify(13.3) write-amplify(6.0) OK, records in: 6767, records dropped: 523 output_compression: NoCompression
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.279008) EVENT_LOG_v1 {"time_micros": 1759847000278995, "job": 46, "event": "compaction_finished", "compaction_time_micros": 59657, "compaction_time_cpu_micros": 17926, "output_level": 6, "num_output_files": 1, "total_output_size": 8100393, "num_input_records": 6767, "num_output_records": 6244, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000082.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000279591, "job": 46, "event": "table_file_deletion", "file_number": 82}
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000080.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847000280873, "job": 46, "event": "table_file_deletion", "file_number": 80}
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.217376) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.281005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.281013) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.281015) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.281017) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:20.281019) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:23:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3166003175' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.635 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.637 2 DEBUG nova.virt.libvirt.vif [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-426897571',display_name='tempest-ServerPasswordTestJSON-server-426897571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-426897571',id=96,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9fd5efb5e7c240a997805db53864ecfc',ramdisk_id='',reservation_id='r-8zuwxzb4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-452511673',owner_user_name='tempest-ServerPasswordTestJSON-4
52511673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:15Z,user_data=None,user_id='ef4a6b36d606485ea7b0334d25dd23bd',uuid=ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.637 2 DEBUG nova.network.os_vif_util [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converting VIF {"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.638 2 DEBUG nova.network.os_vif_util [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.640 2 DEBUG nova.objects.instance [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lazy-loading 'pci_devices' on Instance uuid ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.708 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  <uuid>ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1</uuid>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  <name>instance-00000060</name>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerPasswordTestJSON-server-426897571</nova:name>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:23:19</nova:creationTime>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <nova:user uuid="ef4a6b36d606485ea7b0334d25dd23bd">tempest-ServerPasswordTestJSON-452511673-project-member</nova:user>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <nova:project uuid="9fd5efb5e7c240a997805db53864ecfc">tempest-ServerPasswordTestJSON-452511673</nova:project>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <nova:port uuid="cef2a5c2-ea82-48c7-8da3-0028586c49aa">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <entry name="serial">ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1</entry>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <entry name="uuid">ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1</entry>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:53:69:ab"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <target dev="tapcef2a5c2-ea"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/console.log" append="off"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:23:20 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:23:20 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:23:20 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:23:20 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.709 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Preparing to wait for external event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.710 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.710 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.710 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.711 2 DEBUG nova.virt.libvirt.vif [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-426897571',display_name='tempest-ServerPasswordTestJSON-server-426897571',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-426897571',id=96,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='9fd5efb5e7c240a997805db53864ecfc',ramdisk_id='',reservation_id='r-8zuwxzb4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerPasswordTestJSON-452511673',owner_user_name='tempest-ServerPassword
TestJSON-452511673-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:15Z,user_data=None,user_id='ef4a6b36d606485ea7b0334d25dd23bd',uuid=ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.711 2 DEBUG nova.network.os_vif_util [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converting VIF {"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.712 2 DEBUG nova.network.os_vif_util [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.712 2 DEBUG os_vif [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.714 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.718 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcef2a5c2-ea, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.719 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcef2a5c2-ea, col_values=(('external_ids', {'iface-id': 'cef2a5c2-ea82-48c7-8da3-0028586c49aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:69:ab', 'vm-uuid': 'ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:20 np0005473739 NetworkManager[44949]: <info>  [1759847000.7218] manager: (tapcef2a5c2-ea): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/400)
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.732 2 INFO os_vif [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea')#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.810 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.810 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.811 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] No VIF found with MAC fa:16:3e:53:69:ab, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.811 2 INFO nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Using config drive#033[00m
Oct  7 10:23:20 np0005473739 nova_compute[259550]: 2025-10-07 14:23:20.829 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]: {
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "osd_id": 2,
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "type": "bluestore"
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:    },
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "osd_id": 1,
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "type": "bluestore"
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:    },
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "osd_id": 0,
Oct  7 10:23:20 np0005473739 adoring_franklin[354718]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:23:21 np0005473739 adoring_franklin[354718]:        "type": "bluestore"
Oct  7 10:23:21 np0005473739 adoring_franklin[354718]:    }
Oct  7 10:23:21 np0005473739 adoring_franklin[354718]: }
Oct  7 10:23:21 np0005473739 systemd[1]: libpod-6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4.scope: Deactivated successfully.
Oct  7 10:23:21 np0005473739 systemd[1]: libpod-6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4.scope: Consumed 1.031s CPU time.
Oct  7 10:23:21 np0005473739 podman[354826]: 2025-10-07 14:23:21.071172314 +0000 UTC m=+0.026785147 container died 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:23:21 np0005473739 podman[354815]: 2025-10-07 14:23:21.126409229 +0000 UTC m=+0.105632923 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:23:21 np0005473739 podman[354812]: 2025-10-07 14:23:21.16347819 +0000 UTC m=+0.143447964 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 10:23:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-50561b75e0bdd7ab3e1e266516a48b26c4502ad42767ba47edd4da70e13e6964-merged.mount: Deactivated successfully.
Oct  7 10:23:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:21 np0005473739 podman[354826]: 2025-10-07 14:23:21.352280074 +0000 UTC m=+0.307892877 container remove 6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_franklin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:23:21 np0005473739 systemd[1]: libpod-conmon-6474646cc538e5f102846ac4c45f718138fa91f04ec3bc524664ef3890c476f4.scope: Deactivated successfully.
Oct  7 10:23:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:23:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:23:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:23:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:23:21 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 5876a73c-09af-4e8b-9bea-dabb34e2214a does not exist
Oct  7 10:23:21 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev fa78878c-5e66-487f-b9d4-20af59f868ba does not exist
Oct  7 10:23:21 np0005473739 nova_compute[259550]: 2025-10-07 14:23:21.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:21 np0005473739 nova_compute[259550]: 2025-10-07 14:23:21.635 2 INFO nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Creating config drive at /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config#033[00m
Oct  7 10:23:21 np0005473739 nova_compute[259550]: 2025-10-07 14:23:21.639 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3evq2bn5 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:21.697 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '23'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:21 np0005473739 nova_compute[259550]: 2025-10-07 14:23:21.782 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp3evq2bn5" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:21 np0005473739 nova_compute[259550]: 2025-10-07 14:23:21.804 2 DEBUG nova.storage.rbd_utils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] rbd image ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:21 np0005473739 nova_compute[259550]: 2025-10-07 14:23:21.807 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:21 np0005473739 nova_compute[259550]: 2025-10-07 14:23:21.945 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:21 np0005473739 nova_compute[259550]: 2025-10-07 14:23:21.946 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:21 np0005473739 nova_compute[259550]: 2025-10-07 14:23:21.970 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:23:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1791: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.6 MiB/s wr, 150 op/s
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.067 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.068 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.075 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.075 2 INFO nova.compute.claims [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.207 2 DEBUG nova.network.neutron [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Updated VIF entry in instance network info cache for port cef2a5c2-ea82-48c7-8da3-0028586c49aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.208 2 DEBUG nova.network.neutron [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Updating instance_info_cache with network_info: [{"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.216 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.256 2 DEBUG oslo_concurrency.lockutils [req-9a57519e-8403-49d4-b313-1d483004b0c6 req-1b303c72-4ba0-440b-93d9-b0e42db9c66b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.258 2 DEBUG oslo_concurrency.processutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.259 2 INFO nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Deleting local config drive /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1/disk.config because it was imported into RBD.#033[00m
Oct  7 10:23:22 np0005473739 kernel: tapcef2a5c2-ea: entered promiscuous mode
Oct  7 10:23:22 np0005473739 NetworkManager[44949]: <info>  [1759847002.3192] manager: (tapcef2a5c2-ea): new Tun device (/org/freedesktop/NetworkManager/Devices/401)
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:22Z|00984|binding|INFO|Claiming lport cef2a5c2-ea82-48c7-8da3-0028586c49aa for this chassis.
Oct  7 10:23:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:22Z|00985|binding|INFO|cef2a5c2-ea82-48c7-8da3-0028586c49aa: Claiming fa:16:3e:53:69:ab 10.100.0.7
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.334 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:69:ab 10.100.0.7'], port_security=['fa:16:3e:53:69:ab 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9fd5efb5e7c240a997805db53864ecfc', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a65ebf45-161a-4778-bc19-6a690dbd2208', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efcc1312-35a2-444f-b47e-40e2e08c733d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=cef2a5c2-ea82-48c7-8da3-0028586c49aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.335 161536 INFO neutron.agent.ovn.metadata.agent [-] Port cef2a5c2-ea82-48c7-8da3-0028586c49aa in datapath ac82160b-5221-4bd6-bab0-b5d08c95e428 bound to our chassis#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.336 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network ac82160b-5221-4bd6-bab0-b5d08c95e428#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.351 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf0cde73-3841-48c2-b55d-5fd026b22465]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.353 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapac82160b-51 in ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.356 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapac82160b-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.356 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa6ebee-f4fd-4c28-b1f6-7fdb3ecc83e4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.357 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[675cf287-79d7-48e5-9a04-2135b2c3d6ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 systemd-udevd[354985]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:23:22 np0005473739 NetworkManager[44949]: <info>  [1759847002.3735] device (tapcef2a5c2-ea): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:23:22 np0005473739 NetworkManager[44949]: <info>  [1759847002.3750] device (tapcef2a5c2-ea): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:23:22 np0005473739 systemd-machined[214580]: New machine qemu-119-instance-00000060.
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.377 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cc040ba1-42bb-4d2a-81ff-0151b691089f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 systemd[1]: Started Virtual Machine qemu-119-instance-00000060.
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.406 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[97365a04-96a3-475c-a979-cc98b5bb00cf]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:22Z|00986|binding|INFO|Setting lport cef2a5c2-ea82-48c7-8da3-0028586c49aa ovn-installed in OVS
Oct  7 10:23:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:22Z|00987|binding|INFO|Setting lport cef2a5c2-ea82-48c7-8da3-0028586c49aa up in Southbound
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.439 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d3a80c30-fdc1-45bf-bddc-1ed590cbaad7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.445 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8480ae70-c328-433c-8368-82015b8587d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 NetworkManager[44949]: <info>  [1759847002.4467] manager: (tapac82160b-50): new Veth device (/org/freedesktop/NetworkManager/Devices/402)
Oct  7 10:23:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:23:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.490 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cb117214-e494-4174-a033-0dbb7a1c7645]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.493 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ca4c23c9-8ad6-499b-8bfc-e57b6e901a55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 NetworkManager[44949]: <info>  [1759847002.5150] device (tapac82160b-50): carrier: link connected
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.521 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7ab1e8db-06da-4a89-ba4e-fd4592e7483f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4b92c05-e420-4a27-bb81-35b0189360fe]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac82160b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:87:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758607, 'reachable_time': 19310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355026, 'error': None, 'target': 'ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.558 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42e8cb42-0885-4696-8c4f-2742eb3b14ac]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe44:87a0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 758607, 'tstamp': 758607}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355027, 'error': None, 'target': 'ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.577 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[05dae76e-7ff3-48be-b0cc-ee102cc3f918]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapac82160b-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:44:87:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 285], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758607, 'reachable_time': 19310, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 355028, 'error': None, 'target': 'ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.609 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d284d36e-4978-408b-8143-f646a195fe4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:23:22
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'default.rgw.control', 'cephfs.cephfs.meta', 'default.rgw.log', 'volumes', 'vms', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', '.rgw.root']
Oct  7 10:23:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:23:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3557800978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.673 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[360399e5-f941-4133-ae48-6ed02b142520]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.674 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac82160b-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.674 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.675 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapac82160b-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:23:22 np0005473739 kernel: tapac82160b-50: entered promiscuous mode
Oct  7 10:23:22 np0005473739 NetworkManager[44949]: <info>  [1759847002.6780] manager: (tapac82160b-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/403)
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.680 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapac82160b-50, col_values=(('external_ids', {'iface-id': '5eb8784d-43ef-482a-a6dc-a2613b333a0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:23:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:22Z|00988|binding|INFO|Releasing lport 5eb8784d-43ef-482a-a6dc-a2613b333a0f from this chassis (sb_readonly=0)
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.685 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.712 2 DEBUG nova.compute.provider_tree [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.713 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/ac82160b-5221-4bd6-bab0-b5d08c95e428.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/ac82160b-5221-4bd6-bab0-b5d08c95e428.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.714 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30e27fb9-5c94-4b05-a0d3-7dc3f7df661f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.715 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-ac82160b-5221-4bd6-bab0-b5d08c95e428
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/ac82160b-5221-4bd6-bab0-b5d08c95e428.pid.haproxy
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID ac82160b-5221-4bd6-bab0-b5d08c95e428
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  7 10:23:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:22.717 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'env', 'PROCESS_TAG=haproxy-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/ac82160b-5221-4bd6-bab0-b5d08c95e428.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.730 2 DEBUG nova.scheduler.client.report [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.751 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.683s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.752 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.821 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.822 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.846 2 INFO nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.867 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.899 2 DEBUG nova.compute.manager [req-43be2ca1-f66d-47e1-a464-0c4eeba5ee72 req-5386f489-a307-48fd-86f9-a7869bdb4869 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.900 2 DEBUG oslo_concurrency.lockutils [req-43be2ca1-f66d-47e1-a464-0c4eeba5ee72 req-5386f489-a307-48fd-86f9-a7869bdb4869 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.900 2 DEBUG oslo_concurrency.lockutils [req-43be2ca1-f66d-47e1-a464-0c4eeba5ee72 req-5386f489-a307-48fd-86f9-a7869bdb4869 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.900 2 DEBUG oslo_concurrency.lockutils [req-43be2ca1-f66d-47e1-a464-0c4eeba5ee72 req-5386f489-a307-48fd-86f9-a7869bdb4869 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.901 2 DEBUG nova.compute.manager [req-43be2ca1-f66d-47e1-a464-0c4eeba5ee72 req-5386f489-a307-48fd-86f9-a7869bdb4869 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Processing event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:23:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.981 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.983 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:23:22 np0005473739 nova_compute[259550]: 2025-10-07 14:23:22.983 2 INFO nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Creating image(s)
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.004 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.029 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.053 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.057 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:23:23 np0005473739 podman[355116]: 2025-10-07 14:23:23.110056373 +0000 UTC m=+0.063042065 container create 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.137 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.140 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.140 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.141 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.168 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:23:23 np0005473739 podman[355116]: 2025-10-07 14:23:23.075078929 +0000 UTC m=+0.028064681 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:23:23 np0005473739 systemd[1]: Started libpod-conmon-3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9.scope.
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.177 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 49805eb1-6f40-48f8-bcdc-de12de83733b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:23:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:23:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e45254348c58ccf37cde64c4f9e929821eb371c9ded4236404aedd3233f9a4d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:23:23 np0005473739 podman[355116]: 2025-10-07 14:23:23.212916921 +0000 UTC m=+0.165902633 container init 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:23:23 np0005473739 podman[355116]: 2025-10-07 14:23:23.21881571 +0000 UTC m=+0.171801402 container start 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:23:23 np0005473739 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [NOTICE]   (355198) : New worker (355207) forked
Oct  7 10:23:23 np0005473739 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [NOTICE]   (355198) : Loading success.
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.354 2 DEBUG nova.policy [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.485 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 49805eb1-6f40-48f8-bcdc-de12de83733b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.308s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.549 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.598 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847003.5980296, ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.599 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] VM Started (Lifecycle Event)
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.601 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.607 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.637 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.639 2 INFO nova.virt.libvirt.driver [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Instance spawned successfully.
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.639 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.644 2 DEBUG nova.objects.instance [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid 49805eb1-6f40-48f8-bcdc-de12de83733b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.647 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.667 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.668 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Ensure instance console log exists: /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.668 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.669 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.669 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.670 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.671 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847003.5981593, ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.671 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] VM Paused (Lifecycle Event)
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.675 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.675 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.676 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.676 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.677 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.677 2 DEBUG nova.virt.libvirt.driver [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.702 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.705 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847003.605789, ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.706 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.734 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.737 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.760 2 INFO nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Took 8.39 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.761 2 DEBUG nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.768 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.819 2 INFO nova.compute.manager [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Took 9.37 seconds to build instance.#033[00m
Oct  7 10:23:23 np0005473739 nova_compute[259550]: 2025-10-07 14:23:23.839 2 DEBUG oslo_concurrency.lockutils [None req-c618fc51-4172-44a6-a9e1-f788a4e0268e ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1792: 305 pgs: 305 active+clean; 187 MiB data, 737 MiB used, 59 GiB / 60 GiB avail; 336 KiB/s rd, 2.6 MiB/s wr, 112 op/s
Oct  7 10:23:24 np0005473739 nova_compute[259550]: 2025-10-07 14:23:24.035 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Successfully created port: 853f22d9-282d-4428-a55e-f35727720eb3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:23:24 np0005473739 nova_compute[259550]: 2025-10-07 14:23:24.961 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Successfully updated port: 853f22d9-282d-4428-a55e-f35727720eb3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:23:24 np0005473739 nova_compute[259550]: 2025-10-07 14:23:24.978 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:23:24 np0005473739 nova_compute[259550]: 2025-10-07 14:23:24.979 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:23:24 np0005473739 nova_compute[259550]: 2025-10-07 14:23:24.979 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:23:25 np0005473739 nova_compute[259550]: 2025-10-07 14:23:25.091 2 DEBUG nova.compute.manager [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:25 np0005473739 nova_compute[259550]: 2025-10-07 14:23:25.091 2 DEBUG oslo_concurrency.lockutils [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:25 np0005473739 nova_compute[259550]: 2025-10-07 14:23:25.092 2 DEBUG oslo_concurrency.lockutils [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:25 np0005473739 nova_compute[259550]: 2025-10-07 14:23:25.092 2 DEBUG oslo_concurrency.lockutils [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:25 np0005473739 nova_compute[259550]: 2025-10-07 14:23:25.092 2 DEBUG nova.compute.manager [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] No waiting events found dispatching network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:23:25 np0005473739 nova_compute[259550]: 2025-10-07 14:23:25.092 2 WARNING nova.compute.manager [req-12ee7846-fc37-464f-a78d-1ef6ad11cdbf req-c4557a90-8992-4172-85b3-6413580f1480 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received unexpected event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa for instance with vm_state active and task_state None.#033[00m
Oct  7 10:23:25 np0005473739 nova_compute[259550]: 2025-10-07 14:23:25.204 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:23:25 np0005473739 nova_compute[259550]: 2025-10-07 14:23:25.283 2 DEBUG nova.compute.manager [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-changed-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:25 np0005473739 nova_compute[259550]: 2025-10-07 14:23:25.284 2 DEBUG nova.compute.manager [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Refreshing instance network info cache due to event network-changed-853f22d9-282d-4428-a55e-f35727720eb3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:23:25 np0005473739 nova_compute[259550]: 2025-10-07 14:23:25.284 2 DEBUG oslo_concurrency.lockutils [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:23:25 np0005473739 nova_compute[259550]: 2025-10-07 14:23:25.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1793: 305 pgs: 305 active+clean; 213 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 145 op/s
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.292 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846991.290687, 569d484f-0c0f-4b2c-aca5-e88654f9501f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.292 2 INFO nova.compute.manager [-] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.318 2 DEBUG nova.compute.manager [None req-b74eccc1-ac91-4be5-8d21-4307a3edc69a - - - - - -] [instance: 569d484f-0c0f-4b2c-aca5-e88654f9501f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.797 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.798 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.798 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.798 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.799 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.800 2 INFO nova.compute.manager [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Terminating instance#033[00m
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.801 2 DEBUG nova.compute.manager [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:23:26 np0005473739 kernel: tapcef2a5c2-ea (unregistering): left promiscuous mode
Oct  7 10:23:26 np0005473739 NetworkManager[44949]: <info>  [1759847006.8567] device (tapcef2a5c2-ea): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:23:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:26Z|00989|binding|INFO|Releasing lport cef2a5c2-ea82-48c7-8da3-0028586c49aa from this chassis (sb_readonly=0)
Oct  7 10:23:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:26Z|00990|binding|INFO|Setting lport cef2a5c2-ea82-48c7-8da3-0028586c49aa down in Southbound
Oct  7 10:23:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:26Z|00991|binding|INFO|Removing iface tapcef2a5c2-ea ovn-installed in OVS
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:26.881 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:69:ab 10.100.0.7'], port_security=['fa:16:3e:53:69:ab 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9fd5efb5e7c240a997805db53864ecfc', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a65ebf45-161a-4778-bc19-6a690dbd2208', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=efcc1312-35a2-444f-b47e-40e2e08c733d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=cef2a5c2-ea82-48c7-8da3-0028586c49aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:23:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:26.882 161536 INFO neutron.agent.ovn.metadata.agent [-] Port cef2a5c2-ea82-48c7-8da3-0028586c49aa in datapath ac82160b-5221-4bd6-bab0-b5d08c95e428 unbound from our chassis#033[00m
Oct  7 10:23:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:26.884 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ac82160b-5221-4bd6-bab0-b5d08c95e428, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:23:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:26.885 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f227a45-4ea4-449d-932d-228dc96907b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:26.885 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428 namespace which is not needed anymore#033[00m
Oct  7 10:23:26 np0005473739 nova_compute[259550]: 2025-10-07 14:23:26.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:26 np0005473739 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Deactivated successfully.
Oct  7 10:23:26 np0005473739 systemd[1]: machine-qemu\x2d119\x2dinstance\x2d00000060.scope: Consumed 4.368s CPU time.
Oct  7 10:23:26 np0005473739 systemd-machined[214580]: Machine qemu-119-instance-00000060 terminated.
Oct  7 10:23:27 np0005473739 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [NOTICE]   (355198) : haproxy version is 2.8.14-c23fe91
Oct  7 10:23:27 np0005473739 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [NOTICE]   (355198) : path to executable is /usr/sbin/haproxy
Oct  7 10:23:27 np0005473739 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [WARNING]  (355198) : Exiting Master process...
Oct  7 10:23:27 np0005473739 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [ALERT]    (355198) : Current worker (355207) exited with code 143 (Terminated)
Oct  7 10:23:27 np0005473739 neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428[355193]: [WARNING]  (355198) : All workers exited. Exiting... (0)
Oct  7 10:23:27 np0005473739 systemd[1]: libpod-3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9.scope: Deactivated successfully.
Oct  7 10:23:27 np0005473739 conmon[355193]: conmon 3a30c5a21e14b7bd568c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9.scope/container/memory.events
Oct  7 10:23:27 np0005473739 podman[355320]: 2025-10-07 14:23:27.036637883 +0000 UTC m=+0.052126613 container died 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.043 2 INFO nova.virt.libvirt.driver [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Instance destroyed successfully.#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.044 2 DEBUG nova.objects.instance [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lazy-loading 'resources' on Instance uuid ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.058 2 DEBUG nova.virt.libvirt.vif [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:23:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerPasswordTestJSON-server-426897571',display_name='tempest-ServerPasswordTestJSON-server-426897571',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverpasswordtestjson-server-426897571',id=96,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:23:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='9fd5efb5e7c240a997805db53864ecfc',ramdisk_id='',reservation_id='r-8zuwxzb4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerPasswordTestJSON-452511673',owner_user_name='tempest-ServerPasswordTestJSON-452511673-project-member',password_0='',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:23:26Z,user_data=None,user_id='ef4a6b36d606485ea7b0334d25dd23bd',uuid=ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.059 2 DEBUG nova.network.os_vif_util [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converting VIF {"id": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "address": "fa:16:3e:53:69:ab", "network": {"id": "ac82160b-5221-4bd6-bab0-b5d08c95e428", "bridge": "br-int", "label": "tempest-ServerPasswordTestJSON-76056275-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "9fd5efb5e7c240a997805db53864ecfc", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcef2a5c2-ea", "ovs_interfaceid": "cef2a5c2-ea82-48c7-8da3-0028586c49aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.059 2 DEBUG nova.network.os_vif_util [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.060 2 DEBUG os_vif [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:23:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9-userdata-shm.mount: Deactivated successfully.
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.063 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcef2a5c2-ea, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e45254348c58ccf37cde64c4f9e929821eb371c9ded4236404aedd3233f9a4d2-merged.mount: Deactivated successfully.
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.072 2 INFO os_vif [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:69:ab,bridge_name='br-int',has_traffic_filtering=True,id=cef2a5c2-ea82-48c7-8da3-0028586c49aa,network=Network(ac82160b-5221-4bd6-bab0-b5d08c95e428),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcef2a5c2-ea')#033[00m
Oct  7 10:23:27 np0005473739 podman[355320]: 2025-10-07 14:23:27.079005006 +0000 UTC m=+0.094493736 container cleanup 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 10:23:27 np0005473739 systemd[1]: libpod-conmon-3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9.scope: Deactivated successfully.
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.114 2 DEBUG nova.network.neutron [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Updating instance_info_cache with network_info: [{"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:27 np0005473739 podman[355367]: 2025-10-07 14:23:27.150745863 +0000 UTC m=+0.048038915 container remove 3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:23:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.158 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6427d2-b5ab-4875-a67d-0e92a8e4f068]: (4, ('Tue Oct  7 02:23:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428 (3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9)\n3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9\nTue Oct  7 02:23:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428 (3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9)\n3a30c5a21e14b7bd568c5cdb8d8833aabfd5861384b4e5b0acb1593604499de9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.160 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eab685a4-c3f0-41c6-9576-2347b51628e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.161 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapac82160b-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:27 np0005473739 kernel: tapac82160b-50: left promiscuous mode
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.172 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.172 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Instance network_info: |[{"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.173 2 DEBUG oslo_concurrency.lockutils [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.173 2 DEBUG nova.network.neutron [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Refreshing network info cache for port 853f22d9-282d-4428-a55e-f35727720eb3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.175 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Start _get_guest_xml network_info=[{"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.181 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.185 2 WARNING nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:23:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.187 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[221873bc-1145-4738-9de6-bf663feb53cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.198 2 DEBUG nova.virt.libvirt.host [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.199 2 DEBUG nova.virt.libvirt.host [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.205 2 DEBUG nova.virt.libvirt.host [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.205 2 DEBUG nova.virt.libvirt.host [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.206 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.206 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.206 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.206 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.207 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.208 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.208 2 DEBUG nova.virt.hardware [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.212 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.218 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[28e4f4f2-c640-4d21-9c14-9edc06ea4ad1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.219 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1c972abf-d47f-4095-9b0b-e297b266b4a6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.238 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[24a64403-b589-4a57-b34f-2cbc791b539c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 758599, 'reachable_time': 24248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355393, 'error': None, 'target': 'ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.240 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-ac82160b-5221-4bd6-bab0-b5d08c95e428 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:23:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:27.240 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cdd204d0-2bea-4331-a521-259cd825ddad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:27 np0005473739 systemd[1]: run-netns-ovnmeta\x2dac82160b\x2d5221\x2d4bd6\x2dbab0\x2db5d08c95e428.mount: Deactivated successfully.
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.556 2 INFO nova.virt.libvirt.driver [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Deleting instance files /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_del#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.557 2 INFO nova.virt.libvirt.driver [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Deletion of /var/lib/nova/instances/ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1_del complete#033[00m
Oct  7 10:23:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:23:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/716028951' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.684 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.701 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.704 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.750 2 DEBUG nova.compute.manager [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-unplugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.750 2 DEBUG oslo_concurrency.lockutils [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.751 2 DEBUG oslo_concurrency.lockutils [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.751 2 DEBUG oslo_concurrency.lockutils [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.752 2 DEBUG nova.compute.manager [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] No waiting events found dispatching network-vif-unplugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.752 2 DEBUG nova.compute.manager [req-b65a6c51-c0c2-42cb-8e65-63901d7b5123 req-46dc3577-217c-44d7-a9d9-a393a0fb9b66 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-unplugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.753 2 INFO nova.compute.manager [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.753 2 DEBUG oslo.service.loopingcall [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.755 2 DEBUG nova.compute.manager [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:23:27 np0005473739 nova_compute[259550]: 2025-10-07 14:23:27.755 2 DEBUG nova.network.neutron [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:23:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1794: 305 pgs: 305 active+clean; 213 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 126 op/s
Oct  7 10:23:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:23:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2979835019' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.168 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.169 2 DEBUG nova.virt.libvirt.vif [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1173658225',display_name='tempest-ServersTestJSON-server-1173658225',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1173658225',id=97,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-h2rgpkvt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:22Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=49805eb1-6f40-48f8-bcdc-de12de83733b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.169 2 DEBUG nova.network.os_vif_util [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.170 2 DEBUG nova.network.os_vif_util [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.171 2 DEBUG nova.objects.instance [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid 49805eb1-6f40-48f8-bcdc-de12de83733b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.184 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  <uuid>49805eb1-6f40-48f8-bcdc-de12de83733b</uuid>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  <name>instance-00000061</name>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersTestJSON-server-1173658225</nova:name>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:23:27</nova:creationTime>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <nova:port uuid="853f22d9-282d-4428-a55e-f35727720eb3">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <entry name="serial">49805eb1-6f40-48f8-bcdc-de12de83733b</entry>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <entry name="uuid">49805eb1-6f40-48f8-bcdc-de12de83733b</entry>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/49805eb1-6f40-48f8-bcdc-de12de83733b_disk">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:fe:c7:be"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <target dev="tap853f22d9-28"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/console.log" append="off"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:23:28 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:23:28 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:23:28 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:23:28 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.184 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Preparing to wait for external event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.185 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.185 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.185 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.185 2 DEBUG nova.virt.libvirt.vif [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1173658225',display_name='tempest-ServersTestJSON-server-1173658225',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1173658225',id=97,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-h2rgpkvt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:22Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=49805eb1-6f40-48f8-bcdc-de12de83733b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.186 2 DEBUG nova.network.os_vif_util [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.186 2 DEBUG nova.network.os_vif_util [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.186 2 DEBUG os_vif [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.187 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.188 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.190 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap853f22d9-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.191 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap853f22d9-28, col_values=(('external_ids', {'iface-id': '853f22d9-282d-4428-a55e-f35727720eb3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:fe:c7:be', 'vm-uuid': '49805eb1-6f40-48f8-bcdc-de12de83733b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:28 np0005473739 NetworkManager[44949]: <info>  [1759847008.1945] manager: (tap853f22d9-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/404)
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.200 2 INFO os_vif [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28')#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.257 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.257 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.257 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:fe:c7:be, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.258 2 INFO nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Using config drive#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.276 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.453 2 DEBUG nova.network.neutron [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.477 2 INFO nova.compute.manager [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Took 0.72 seconds to deallocate network for instance.#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.541 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.542 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:28 np0005473739 nova_compute[259550]: 2025-10-07 14:23:28.638 2 DEBUG oslo_concurrency.processutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.008 2 INFO nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Creating config drive at /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.016 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfh96fy2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.059 2 DEBUG nova.network.neutron [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Updated VIF entry in instance network info cache for port 853f22d9-282d-4428-a55e-f35727720eb3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.060 2 DEBUG nova.network.neutron [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Updating instance_info_cache with network_info: [{"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.078 2 DEBUG oslo_concurrency.lockutils [req-0b146891-26fc-48f2-b268-22d180462ca0 req-ea09b24b-317e-4218-b484-5d419b88b271 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-49805eb1-6f40-48f8-bcdc-de12de83733b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:23:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:23:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/481142845' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.121 2 DEBUG oslo_concurrency.processutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.128 2 DEBUG nova.compute.provider_tree [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.145 2 DEBUG nova.scheduler.client.report [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.167 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.625s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.171 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjfh96fy2" returned: 0 in 0.155s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.201 2 DEBUG nova.storage.rbd_utils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.205 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config 49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.261 2 INFO nova.scheduler.client.report [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Deleted allocations for instance ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.324 2 DEBUG oslo_concurrency.lockutils [None req-c93a3d5a-b7d3-44ba-9e82-174994f0bc88 ef4a6b36d606485ea7b0334d25dd23bd 9fd5efb5e7c240a997805db53864ecfc - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.526s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.401 2 DEBUG oslo_concurrency.processutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config 49805eb1-6f40-48f8-bcdc-de12de83733b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.196s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.402 2 INFO nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Deleting local config drive /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b/disk.config because it was imported into RBD.#033[00m
Oct  7 10:23:29 np0005473739 kernel: tap853f22d9-28: entered promiscuous mode
Oct  7 10:23:29 np0005473739 NetworkManager[44949]: <info>  [1759847009.4497] manager: (tap853f22d9-28): new Tun device (/org/freedesktop/NetworkManager/Devices/405)
Oct  7 10:23:29 np0005473739 systemd-udevd[355301]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:29Z|00992|binding|INFO|Claiming lport 853f22d9-282d-4428-a55e-f35727720eb3 for this chassis.
Oct  7 10:23:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:29Z|00993|binding|INFO|853f22d9-282d-4428-a55e-f35727720eb3: Claiming fa:16:3e:fe:c7:be 10.100.0.14
Oct  7 10:23:29 np0005473739 NetworkManager[44949]: <info>  [1759847009.4615] device (tap853f22d9-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:23:29 np0005473739 NetworkManager[44949]: <info>  [1759847009.4635] device (tap853f22d9-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.461 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c7:be 10.100.0.14'], port_security=['fa:16:3e:fe:c7:be 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '49805eb1-6f40-48f8-bcdc-de12de83733b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=853f22d9-282d-4428-a55e-f35727720eb3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.465 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 853f22d9-282d-4428-a55e-f35727720eb3 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.467 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:23:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:29Z|00994|binding|INFO|Setting lport 853f22d9-282d-4428-a55e-f35727720eb3 ovn-installed in OVS
Oct  7 10:23:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:29Z|00995|binding|INFO|Setting lport 853f22d9-282d-4428-a55e-f35727720eb3 up in Southbound
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.491 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[231b6963-808e-4986-8e4d-c9145e4fc11f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:29 np0005473739 systemd-machined[214580]: New machine qemu-120-instance-00000061.
Oct  7 10:23:29 np0005473739 systemd[1]: Started Virtual Machine qemu-120-instance-00000061.
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.534 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1e2a6196-f744-4d70-b4d9-b7e496d9c7a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.539 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[850d45a5-b9ca-418c-a46e-4056b49d173a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.571 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b51e44e2-d413-4675-befe-7ced4286667b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae3698b8-cc4e-41fe-a6d6-43da711f7253]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 26, 'rx_bytes': 958, 'tx_bytes': 1284, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 26, 'rx_bytes': 958, 'tx_bytes': 1284, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355563, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.614 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[38582979-0192-431a-bd4c-b6c7ac44d02d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355564, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355564, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.617 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.620 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:29.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.728 2 DEBUG nova.compute.manager [req-08a6bdee-2dad-49f5-8305-8324796429e7 req-8d06a25c-e90d-49ef-a77d-a960c43bb6bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.728 2 DEBUG oslo_concurrency.lockutils [req-08a6bdee-2dad-49f5-8305-8324796429e7 req-8d06a25c-e90d-49ef-a77d-a960c43bb6bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.729 2 DEBUG oslo_concurrency.lockutils [req-08a6bdee-2dad-49f5-8305-8324796429e7 req-8d06a25c-e90d-49ef-a77d-a960c43bb6bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.729 2 DEBUG oslo_concurrency.lockutils [req-08a6bdee-2dad-49f5-8305-8324796429e7 req-8d06a25c-e90d-49ef-a77d-a960c43bb6bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.730 2 DEBUG nova.compute.manager [req-08a6bdee-2dad-49f5-8305-8324796429e7 req-8d06a25c-e90d-49ef-a77d-a960c43bb6bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Processing event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.830 2 DEBUG nova.compute.manager [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.831 2 DEBUG oslo_concurrency.lockutils [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.831 2 DEBUG oslo_concurrency.lockutils [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.831 2 DEBUG oslo_concurrency.lockutils [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.832 2 DEBUG nova.compute.manager [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] No waiting events found dispatching network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.832 2 WARNING nova.compute.manager [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received unexpected event network-vif-plugged-cef2a5c2-ea82-48c7-8da3-0028586c49aa for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.832 2 DEBUG nova.compute.manager [req-dd66ca86-1135-4c6c-a0cc-ecd61cfd53f0 req-9055b8f9-0186-46b1-b67c-4e96cc8a79c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Received event network-vif-deleted-cef2a5c2-ea82-48c7-8da3-0028586c49aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1795: 305 pgs: 305 active+clean; 193 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 181 op/s
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.990 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759846994.989657, c53d76d2-4525-4798-bcf4-ad2e6b18071a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:29 np0005473739 nova_compute[259550]: 2025-10-07 14:23:29.991 2 INFO nova.compute.manager [-] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.010 2 DEBUG nova.compute.manager [None req-2718f535-f24e-41e9-bdc6-f6041534af05 - - - - - -] [instance: c53d76d2-4525-4798-bcf4-ad2e6b18071a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.776 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847010.7754915, 49805eb1-6f40-48f8-bcdc-de12de83733b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.776 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] VM Started (Lifecycle Event)#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.779 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.783 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.788 2 INFO nova.virt.libvirt.driver [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Instance spawned successfully.#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.789 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.810 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.815 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.823 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.824 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.825 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.825 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.826 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.826 2 DEBUG nova.virt.libvirt.driver [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.850 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.851 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847010.7757099, 49805eb1-6f40-48f8-bcdc-de12de83733b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.851 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.880 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.885 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847010.7825274, 49805eb1-6f40-48f8-bcdc-de12de83733b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.886 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.889 2 INFO nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Took 7.91 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.890 2 DEBUG nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.923 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.928 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.952 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.962 2 INFO nova.compute.manager [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Took 8.92 seconds to build instance.#033[00m
Oct  7 10:23:30 np0005473739 nova_compute[259550]: 2025-10-07 14:23:30.979 2 DEBUG oslo_concurrency.lockutils [None req-e802d905-ef97-4b6b-8e47-5b48552b9d45 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:31 np0005473739 nova_compute[259550]: 2025-10-07 14:23:31.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:31 np0005473739 nova_compute[259550]: 2025-10-07 14:23:31.840 2 DEBUG nova.compute.manager [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:31 np0005473739 nova_compute[259550]: 2025-10-07 14:23:31.841 2 DEBUG oslo_concurrency.lockutils [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:31 np0005473739 nova_compute[259550]: 2025-10-07 14:23:31.841 2 DEBUG oslo_concurrency.lockutils [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:31 np0005473739 nova_compute[259550]: 2025-10-07 14:23:31.841 2 DEBUG oslo_concurrency.lockutils [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:31 np0005473739 nova_compute[259550]: 2025-10-07 14:23:31.841 2 DEBUG nova.compute.manager [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] No waiting events found dispatching network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:23:31 np0005473739 nova_compute[259550]: 2025-10-07 14:23:31.842 2 WARNING nova.compute.manager [req-d0933404-b72d-43ff-90c4-81b334052731 req-e711d199-d880-4e2b-8190-7ac519c9a900 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received unexpected event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:23:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1796: 305 pgs: 305 active+clean; 167 MiB data, 736 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.9 MiB/s wr, 163 op/s
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001106732678908641 of space, bias 1.0, pg target 0.33201980367259226 quantized to 32 (current 32)
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:23:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:23:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:32Z|00996|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct  7 10:23:32 np0005473739 nova_compute[259550]: 2025-10-07 14:23:32.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:23:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2166337259' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:23:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:23:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2166337259' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:23:33 np0005473739 nova_compute[259550]: 2025-10-07 14:23:33.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1797: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 230 op/s
Oct  7 10:23:34 np0005473739 nova_compute[259550]: 2025-10-07 14:23:34.968 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:34 np0005473739 nova_compute[259550]: 2025-10-07 14:23:34.968 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:34 np0005473739 nova_compute[259550]: 2025-10-07 14:23:34.968 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:34 np0005473739 nova_compute[259550]: 2025-10-07 14:23:34.968 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:34 np0005473739 nova_compute[259550]: 2025-10-07 14:23:34.969 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:34 np0005473739 nova_compute[259550]: 2025-10-07 14:23:34.970 2 INFO nova.compute.manager [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Terminating instance#033[00m
Oct  7 10:23:34 np0005473739 nova_compute[259550]: 2025-10-07 14:23:34.970 2 DEBUG nova.compute.manager [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:23:35 np0005473739 kernel: tap853f22d9-28 (unregistering): left promiscuous mode
Oct  7 10:23:35 np0005473739 NetworkManager[44949]: <info>  [1759847015.0044] device (tap853f22d9-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:23:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:35Z|00997|binding|INFO|Releasing lport 853f22d9-282d-4428-a55e-f35727720eb3 from this chassis (sb_readonly=0)
Oct  7 10:23:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:35Z|00998|binding|INFO|Setting lport 853f22d9-282d-4428-a55e-f35727720eb3 down in Southbound
Oct  7 10:23:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:35Z|00999|binding|INFO|Removing iface tap853f22d9-28 ovn-installed in OVS
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.025 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:fe:c7:be 10.100.0.14'], port_security=['fa:16:3e:fe:c7:be 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '49805eb1-6f40-48f8-bcdc-de12de83733b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=853f22d9-282d-4428-a55e-f35727720eb3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.026 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 853f22d9-282d-4428-a55e-f35727720eb3 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.027 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.050 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5a0db35d-7ba2-484e-9f55-a856fe0a16b6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:35 np0005473739 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000061.scope: Deactivated successfully.
Oct  7 10:23:35 np0005473739 systemd[1]: machine-qemu\x2d120\x2dinstance\x2d00000061.scope: Consumed 5.450s CPU time.
Oct  7 10:23:35 np0005473739 systemd-machined[214580]: Machine qemu-120-instance-00000061 terminated.
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.079 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[903e6d7c-e7e2-4036-9a23-593c6d6e2821]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.082 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[59192d7e-a169-4d18-b767-db5a1011e37f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.107 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[77ebead3-4484-4ddb-ac1f-ae4b07f6f84f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.123 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae081852-b359-4e43-be4f-77da3a03ffc8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 28, 'rx_bytes': 958, 'tx_bytes': 1368, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 28, 'rx_bytes': 958, 'tx_bytes': 1368, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 355620, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.140 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3ed48e92-a2d2-4030-8c5e-1be1e51338d7]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355621, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 355621, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.141 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.147 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.147 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.147 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:35.147 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.211 2 INFO nova.virt.libvirt.driver [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Instance destroyed successfully.#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.212 2 DEBUG nova.objects.instance [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid 49805eb1-6f40-48f8-bcdc-de12de83733b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.236 2 DEBUG nova.virt.libvirt.vif [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:202:202,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:23:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1173658225',display_name='tempest-ServersTestJSON-server-1173658225',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1173658225',id=97,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:23:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-h2rgpkvt',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='
1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:23:33Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=49805eb1-6f40-48f8-bcdc-de12de83733b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.237 2 DEBUG nova.network.os_vif_util [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "853f22d9-282d-4428-a55e-f35727720eb3", "address": "fa:16:3e:fe:c7:be", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap853f22d9-28", "ovs_interfaceid": "853f22d9-282d-4428-a55e-f35727720eb3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.237 2 DEBUG nova.network.os_vif_util [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.238 2 DEBUG os_vif [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap853f22d9-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.245 2 INFO os_vif [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fe:c7:be,bridge_name='br-int',has_traffic_filtering=True,id=853f22d9-282d-4428-a55e-f35727720eb3,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap853f22d9-28')#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.262 2 DEBUG nova.compute.manager [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-unplugged-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.263 2 DEBUG oslo_concurrency.lockutils [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.263 2 DEBUG oslo_concurrency.lockutils [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.263 2 DEBUG oslo_concurrency.lockutils [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.263 2 DEBUG nova.compute.manager [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] No waiting events found dispatching network-vif-unplugged-853f22d9-282d-4428-a55e-f35727720eb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.264 2 DEBUG nova.compute.manager [req-38c69068-d027-4d29-a66a-795446fa0212 req-e8f20bd5-bf26-42cd-8223-2f4347283463 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-unplugged-853f22d9-282d-4428-a55e-f35727720eb3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.641 2 INFO nova.virt.libvirt.driver [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Deleting instance files /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b_del#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.641 2 INFO nova.virt.libvirt.driver [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Deletion of /var/lib/nova/instances/49805eb1-6f40-48f8-bcdc-de12de83733b_del complete#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.690 2 INFO nova.compute.manager [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.691 2 DEBUG oslo.service.loopingcall [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.691 2 DEBUG nova.compute.manager [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.691 2 DEBUG nova.network.neutron [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:23:35 np0005473739 nova_compute[259550]: 2025-10-07 14:23:35.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:23:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1798: 305 pgs: 305 active+clean; 140 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 984 KiB/s wr, 248 op/s
Oct  7 10:23:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:36 np0005473739 nova_compute[259550]: 2025-10-07 14:23:36.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:36 np0005473739 nova_compute[259550]: 2025-10-07 14:23:36.767 2 DEBUG nova.network.neutron [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:36 np0005473739 nova_compute[259550]: 2025-10-07 14:23:36.788 2 INFO nova.compute.manager [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Took 1.10 seconds to deallocate network for instance.#033[00m
Oct  7 10:23:36 np0005473739 nova_compute[259550]: 2025-10-07 14:23:36.836 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:36 np0005473739 nova_compute[259550]: 2025-10-07 14:23:36.837 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:36 np0005473739 nova_compute[259550]: 2025-10-07 14:23:36.889 2 DEBUG nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 10:23:36 np0005473739 nova_compute[259550]: 2025-10-07 14:23:36.926 2 DEBUG nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 10:23:36 np0005473739 nova_compute[259550]: 2025-10-07 14:23:36.927 2 DEBUG nova.compute.provider_tree [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 10:23:36 np0005473739 nova_compute[259550]: 2025-10-07 14:23:36.947 2 DEBUG nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 10:23:36 np0005473739 nova_compute[259550]: 2025-10-07 14:23:36.968 2 DEBUG nova.compute.manager [req-170d5577-c9a5-47e6-8ae6-a66098fe630f req-9e15b1ff-3cc9-410e-ac6f-8a7a4baf5bb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-deleted-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.052 2 DEBUG nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.112 2 DEBUG oslo_concurrency.processutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.430 2 DEBUG nova.compute.manager [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.431 2 DEBUG oslo_concurrency.lockutils [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.432 2 DEBUG oslo_concurrency.lockutils [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.432 2 DEBUG oslo_concurrency.lockutils [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.433 2 DEBUG nova.compute.manager [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] No waiting events found dispatching network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.433 2 WARNING nova.compute.manager [req-af9dfb53-1529-44ec-aae8-df5ca58cbb97 req-ed47376b-0fe1-4503-9b5b-e5434c676473 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Received unexpected event network-vif-plugged-853f22d9-282d-4428-a55e-f35727720eb3 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:23:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:23:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2693103898' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.615 2 DEBUG oslo_concurrency.processutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.503s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.623 2 DEBUG nova.compute.provider_tree [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.774 2 DEBUG nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.798 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.961s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.837 2 INFO nova.scheduler.client.report [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance 49805eb1-6f40-48f8-bcdc-de12de83733b#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.896 2 DEBUG oslo_concurrency.lockutils [None req-71607091-4f21-49c1-81af-1dfa21fa6975 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "49805eb1-6f40-48f8-bcdc-de12de83733b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.928s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:23:37 np0005473739 nova_compute[259550]: 2025-10-07 14:23:37.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:23:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1799: 305 pgs: 305 active+clean; 140 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 14 KiB/s wr, 192 op/s
Oct  7 10:23:39 np0005473739 podman[355676]: 2025-10-07 14:23:39.084999861 +0000 UTC m=+0.067767191 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid)
Oct  7 10:23:39 np0005473739 podman[355675]: 2025-10-07 14:23:39.089716408 +0000 UTC m=+0.076946327 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:23:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1800: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 15 KiB/s wr, 209 op/s
Oct  7 10:23:40 np0005473739 nova_compute[259550]: 2025-10-07 14:23:40.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:40 np0005473739 nova_compute[259550]: 2025-10-07 14:23:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.016 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.016 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.016 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.017 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.017 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:23:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/334021485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.499 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.565 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.566 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000058 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.752 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.754 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3631MB free_disk=59.94264221191406GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.754 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.754 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.806 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 2f0de516-cf33-49b6-b036-aee8c2f72943 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.806 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.807 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:23:41 np0005473739 nova_compute[259550]: 2025-10-07 14:23:41.845 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1801: 305 pgs: 305 active+clean; 121 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 14 KiB/s wr, 155 op/s
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.027 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.028 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.040 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847007.039891, ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.041 2 INFO nova.compute.manager [-] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.048 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.071 2 DEBUG nova.compute.manager [None req-b74820bf-1d1c-480a-b4a0-adda141784c1 - - - - - -] [instance: ff22e8b2-2539-4f9f-b6bb-f2bd1cd4d6d1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.121 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:23:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4079182429' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.367 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.375 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.416 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.446 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.447 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.447 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.326s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.454 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.455 2 INFO nova.compute.claims [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:23:42 np0005473739 nova_compute[259550]: 2025-10-07 14:23:42.602 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:23:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1380457896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.066 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.071 2 DEBUG nova.compute.provider_tree [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.086 2 DEBUG nova.scheduler.client.report [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.106 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.107 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.150 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.150 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.167 2 INFO nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.188 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.283 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.284 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.285 2 INFO nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Creating image(s)#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.306 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.329 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.351 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.354 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.430 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.432 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.432 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.433 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.453 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.457 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 2ec4d149-b57c-4821-9852-5485f6279472_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.494 2 DEBUG nova.policy [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '2606252961124ad2a15c7f7529b28488', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.498 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.499 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.732 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 2ec4d149-b57c-4821-9852-5485f6279472_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.276s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.794 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] resizing rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.894 2 DEBUG nova.objects.instance [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'migration_context' on Instance uuid 2ec4d149-b57c-4821-9852-5485f6279472 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.909 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.909 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Ensure instance console log exists: /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.910 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.910 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:43 np0005473739 nova_compute[259550]: 2025-10-07 14:23:43.910 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1802: 305 pgs: 305 active+clean; 133 MiB data, 719 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 329 KiB/s wr, 138 op/s
Oct  7 10:23:44 np0005473739 nova_compute[259550]: 2025-10-07 14:23:44.182 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Successfully created port: 3cf557fc-1ada-4cc7-9d31-e123f70742b7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:23:44 np0005473739 nova_compute[259550]: 2025-10-07 14:23:44.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:23:45 np0005473739 nova_compute[259550]: 2025-10-07 14:23:45.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:45 np0005473739 nova_compute[259550]: 2025-10-07 14:23:45.533 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Successfully updated port: 3cf557fc-1ada-4cc7-9d31-e123f70742b7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:23:45 np0005473739 nova_compute[259550]: 2025-10-07 14:23:45.553 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:23:45 np0005473739 nova_compute[259550]: 2025-10-07 14:23:45.553 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquired lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:23:45 np0005473739 nova_compute[259550]: 2025-10-07 14:23:45.553 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:23:45 np0005473739 nova_compute[259550]: 2025-10-07 14:23:45.700 2 DEBUG nova.compute.manager [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-changed-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:45 np0005473739 nova_compute[259550]: 2025-10-07 14:23:45.701 2 DEBUG nova.compute.manager [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Refreshing instance network info cache due to event network-changed-3cf557fc-1ada-4cc7-9d31-e123f70742b7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:23:45 np0005473739 nova_compute[259550]: 2025-10-07 14:23:45.701 2 DEBUG oslo_concurrency.lockutils [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:23:45 np0005473739 nova_compute[259550]: 2025-10-07 14:23:45.790 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:23:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1803: 305 pgs: 305 active+clean; 152 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 972 KiB/s rd, 1.0 MiB/s wr, 72 op/s
Oct  7 10:23:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.888 2 DEBUG nova.network.neutron [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Updating instance_info_cache with network_info: [{"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.911 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Releasing lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.911 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance network_info: |[{"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.912 2 DEBUG oslo_concurrency.lockutils [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.912 2 DEBUG nova.network.neutron [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Refreshing network info cache for port 3cf557fc-1ada-4cc7-9d31-e123f70742b7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.918 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Start _get_guest_xml network_info=[{"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.926 2 WARNING nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.931 2 DEBUG nova.virt.libvirt.host [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.932 2 DEBUG nova.virt.libvirt.host [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.936 2 DEBUG nova.virt.libvirt.host [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.937 2 DEBUG nova.virt.libvirt.host [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.937 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.938 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.938 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.938 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.939 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.939 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.939 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.939 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.940 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.940 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.940 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.940 2 DEBUG nova.virt.hardware [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:23:46 np0005473739 nova_compute[259550]: 2025-10-07 14:23:46.944 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:23:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/386131731' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:23:47 np0005473739 nova_compute[259550]: 2025-10-07 14:23:47.448 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:47 np0005473739 nova_compute[259550]: 2025-10-07 14:23:47.475 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:47 np0005473739 nova_compute[259550]: 2025-10-07 14:23:47.479 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:23:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425748012' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:23:47 np0005473739 nova_compute[259550]: 2025-10-07 14:23:47.969 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:47 np0005473739 nova_compute[259550]: 2025-10-07 14:23:47.970 2 DEBUG nova.virt.libvirt.vif [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1268944220',display_name='tempest-ServersTestJSON-server-1268944220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1268944220',id=98,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-0tb8zd37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:43Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=2ec4d149-b57c-4821-9852-5485f6279472,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:23:47 np0005473739 nova_compute[259550]: 2025-10-07 14:23:47.971 2 DEBUG nova.network.os_vif_util [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:47 np0005473739 nova_compute[259550]: 2025-10-07 14:23:47.972 2 DEBUG nova.network.os_vif_util [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:47 np0005473739 nova_compute[259550]: 2025-10-07 14:23:47.973 2 DEBUG nova.objects.instance [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'pci_devices' on Instance uuid 2ec4d149-b57c-4821-9852-5485f6279472 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:47 np0005473739 nova_compute[259550]: 2025-10-07 14:23:47.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:23:47 np0005473739 nova_compute[259550]: 2025-10-07 14:23:47.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:23:47 np0005473739 nova_compute[259550]: 2025-10-07 14:23:47.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:23:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1804: 305 pgs: 305 active+clean; 152 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.0 MiB/s wr, 33 op/s
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.176 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  <uuid>2ec4d149-b57c-4821-9852-5485f6279472</uuid>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  <name>instance-00000062</name>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersTestJSON-server-1268944220</nova:name>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:23:46</nova:creationTime>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <nova:user uuid="2606252961124ad2a15c7f7529b28488">tempest-ServersTestJSON-387950529-project-member</nova:user>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <nova:project uuid="ca7071ac09d84d15aba25489e9bb909a">tempest-ServersTestJSON-387950529</nova:project>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <nova:port uuid="3cf557fc-1ada-4cc7-9d31-e123f70742b7">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <entry name="serial">2ec4d149-b57c-4821-9852-5485f6279472</entry>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <entry name="uuid">2ec4d149-b57c-4821-9852-5485f6279472</entry>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/2ec4d149-b57c-4821-9852-5485f6279472_disk">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/2ec4d149-b57c-4821-9852-5485f6279472_disk.config">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:8c:b5:9a"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <target dev="tap3cf557fc-1a"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/console.log" append="off"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:23:48 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:23:48 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:23:48 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:23:48 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.177 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Preparing to wait for external event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.177 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.177 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.178 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.178 2 DEBUG nova.virt.libvirt.vif [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1268944220',display_name='tempest-ServersTestJSON-server-1268944220',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1268944220',id=98,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-0tb8zd37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:43Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=2ec4d149-b57c-4821-9852-5485f6279472,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.179 2 DEBUG nova.network.os_vif_util [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.179 2 DEBUG nova.network.os_vif_util [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.180 2 DEBUG os_vif [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.181 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.181 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3cf557fc-1a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.184 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3cf557fc-1a, col_values=(('external_ids', {'iface-id': '3cf557fc-1ada-4cc7-9d31-e123f70742b7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8c:b5:9a', 'vm-uuid': '2ec4d149-b57c-4821-9852-5485f6279472'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.185 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:48 np0005473739 NetworkManager[44949]: <info>  [1759847028.1869] manager: (tap3cf557fc-1a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/406)
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.192 2 INFO os_vif [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a')#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.426 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.444 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.445 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.445 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] No VIF found with MAC fa:16:3e:8c:b5:9a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.446 2 INFO nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Using config drive#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.476 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.800 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.800 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.800 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:23:48 np0005473739 nova_compute[259550]: 2025-10-07 14:23:48.801 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2f0de516-cf33-49b6-b036-aee8c2f72943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.270 2 INFO nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Creating config drive at /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config#033[00m
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.277 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpufr3q68x execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.421 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpufr3q68x" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.448 2 DEBUG nova.storage.rbd_utils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] rbd image 2ec4d149-b57c-4821-9852-5485f6279472_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.452 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config 2ec4d149-b57c-4821-9852-5485f6279472_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.619 2 DEBUG oslo_concurrency.processutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config 2ec4d149-b57c-4821-9852-5485f6279472_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.167s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.620 2 INFO nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Deleting local config drive /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472/disk.config because it was imported into RBD.#033[00m
Oct  7 10:23:49 np0005473739 kernel: tap3cf557fc-1a: entered promiscuous mode
Oct  7 10:23:49 np0005473739 NetworkManager[44949]: <info>  [1759847029.6656] manager: (tap3cf557fc-1a): new Tun device (/org/freedesktop/NetworkManager/Devices/407)
Oct  7 10:23:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:49Z|01000|binding|INFO|Claiming lport 3cf557fc-1ada-4cc7-9d31-e123f70742b7 for this chassis.
Oct  7 10:23:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:49Z|01001|binding|INFO|3cf557fc-1ada-4cc7-9d31-e123f70742b7: Claiming fa:16:3e:8c:b5:9a 10.100.0.3
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.674 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:b5:9a 10.100.0.3'], port_security=['fa:16:3e:8c:b5:9a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2ec4d149-b57c-4821-9852-5485f6279472', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3cf557fc-1ada-4cc7-9d31-e123f70742b7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.675 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3cf557fc-1ada-4cc7-9d31-e123f70742b7 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 bound to our chassis#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.676 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:23:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:49Z|01002|binding|INFO|Setting lport 3cf557fc-1ada-4cc7-9d31-e123f70742b7 ovn-installed in OVS
Oct  7 10:23:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:23:49Z|01003|binding|INFO|Setting lport 3cf557fc-1ada-4cc7-9d31-e123f70742b7 up in Southbound
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.694 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[996c63c3-fce5-4123-96a0-61d222128f2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:49 np0005473739 systemd-udevd[356084]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:23:49 np0005473739 systemd-machined[214580]: New machine qemu-121-instance-00000062.
Oct  7 10:23:49 np0005473739 NetworkManager[44949]: <info>  [1759847029.7087] device (tap3cf557fc-1a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:23:49 np0005473739 NetworkManager[44949]: <info>  [1759847029.7098] device (tap3cf557fc-1a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:23:49 np0005473739 systemd[1]: Started Virtual Machine qemu-121-instance-00000062.
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.719 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[762b96ba-5655-4ca1-9fd8-92c4d4d84953]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.722 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a99584-167b-4303-b8bc-05323854a8f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.748 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d35285-90de-4f50-b749-95dd40902924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.765 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[90f011dd-63f8-43b2-ac41-e58daff9ad2f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 30, 'rx_bytes': 958, 'tx_bytes': 1452, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 30, 'rx_bytes': 958, 'tx_bytes': 1452, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356093, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.782 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[248f60a5-2561-4042-a144-9e885d45d1d2]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356097, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356097, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.784 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:49 np0005473739 nova_compute[259550]: 2025-10-07 14:23:49.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.788 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.789 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.789 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:23:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:23:49.789 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:23:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1805: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.035 2 DEBUG nova.compute.manager [req-7062659f-8977-4eaa-95a5-fffd0085fa09 req-bd0b3b5e-a0db-4182-9926-2a144149a895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.036 2 DEBUG oslo_concurrency.lockutils [req-7062659f-8977-4eaa-95a5-fffd0085fa09 req-bd0b3b5e-a0db-4182-9926-2a144149a895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.036 2 DEBUG oslo_concurrency.lockutils [req-7062659f-8977-4eaa-95a5-fffd0085fa09 req-bd0b3b5e-a0db-4182-9926-2a144149a895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.036 2 DEBUG oslo_concurrency.lockutils [req-7062659f-8977-4eaa-95a5-fffd0085fa09 req-bd0b3b5e-a0db-4182-9926-2a144149a895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.037 2 DEBUG nova.compute.manager [req-7062659f-8977-4eaa-95a5-fffd0085fa09 req-bd0b3b5e-a0db-4182-9926-2a144149a895 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Processing event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.210 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847015.2094228, 49805eb1-6f40-48f8-bcdc-de12de83733b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.210 2 INFO nova.compute.manager [-] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.240 2 DEBUG nova.compute.manager [None req-0a51322a-d7b4-4ddd-8d52-941605354967 - - - - - -] [instance: 49805eb1-6f40-48f8-bcdc-de12de83733b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.460 2 DEBUG nova.network.neutron [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Updated VIF entry in instance network info cache for port 3cf557fc-1ada-4cc7-9d31-e123f70742b7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.461 2 DEBUG nova.network.neutron [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Updating instance_info_cache with network_info: [{"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.594 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847030.5936222, 2ec4d149-b57c-4821-9852-5485f6279472 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.594 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] VM Started (Lifecycle Event)#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.596 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.599 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.602 2 INFO nova.virt.libvirt.driver [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance spawned successfully.#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.602 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.633 2 DEBUG oslo_concurrency.lockutils [req-d17e5a2a-fa51-494c-adc3-65c03bd55874 req-36e9af90-be8f-4f4d-9054-538d32b45c28 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-2ec4d149-b57c-4821-9852-5485f6279472" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.741 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.747 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.750 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.751 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.751 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.752 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.752 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.752 2 DEBUG nova.virt.libvirt.driver [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.849 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.849 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847030.5937285, 2ec4d149-b57c-4821-9852-5485f6279472 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.850 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.878 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.881 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847030.5989666, 2ec4d149-b57c-4821-9852-5485f6279472 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.881 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.898 2 INFO nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Took 7.61 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.898 2 DEBUG nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.906 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.909 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.949 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.969 2 INFO nova.compute.manager [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Took 8.87 seconds to build instance.#033[00m
Oct  7 10:23:50 np0005473739 nova_compute[259550]: 2025-10-07 14:23:50.987 2 DEBUG oslo_concurrency.lockutils [None req-906781d6-f93a-4d9d-8ab6-6e9b9dfbf887 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.959s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:51 np0005473739 nova_compute[259550]: 2025-10-07 14:23:51.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1806: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  7 10:23:52 np0005473739 nova_compute[259550]: 2025-10-07 14:23:52.053 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Updating instance_info_cache with network_info: [{"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:23:52 np0005473739 podman[356141]: 2025-10-07 14:23:52.069711929 +0000 UTC m=+0.055905734 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:23:52 np0005473739 nova_compute[259550]: 2025-10-07 14:23:52.073 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-2f0de516-cf33-49b6-b036-aee8c2f72943" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:23:52 np0005473739 nova_compute[259550]: 2025-10-07 14:23:52.074 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:23:52 np0005473739 nova_compute[259550]: 2025-10-07 14:23:52.074 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:23:52 np0005473739 podman[356142]: 2025-10-07 14:23:52.104844488 +0000 UTC m=+0.088807053 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 10:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:23:52 np0005473739 nova_compute[259550]: 2025-10-07 14:23:52.873 2 DEBUG nova.compute.manager [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:52 np0005473739 nova_compute[259550]: 2025-10-07 14:23:52.874 2 DEBUG oslo_concurrency.lockutils [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:52 np0005473739 nova_compute[259550]: 2025-10-07 14:23:52.874 2 DEBUG oslo_concurrency.lockutils [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:52 np0005473739 nova_compute[259550]: 2025-10-07 14:23:52.874 2 DEBUG oslo_concurrency.lockutils [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:52 np0005473739 nova_compute[259550]: 2025-10-07 14:23:52.874 2 DEBUG nova.compute.manager [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] No waiting events found dispatching network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:23:52 np0005473739 nova_compute[259550]: 2025-10-07 14:23:52.875 2 WARNING nova.compute.manager [req-4fed3e16-66c0-41d4-8826-a3396ff36263 req-2bfc1966-e10b-4731-8b97-cbee558980e2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received unexpected event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:23:53 np0005473739 nova_compute[259550]: 2025-10-07 14:23:53.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1807: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 79 op/s
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.681 2 DEBUG oslo_concurrency.lockutils [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.681 2 DEBUG oslo_concurrency.lockutils [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.682 2 DEBUG nova.compute.manager [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.687 2 DEBUG nova.compute.manager [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.688 2 DEBUG nova.objects.instance [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'flavor' on Instance uuid 2ec4d149-b57c-4821-9852-5485f6279472 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.716 2 DEBUG nova.virt.libvirt.driver [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.764 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.765 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.783 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.862 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.862 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.871 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:23:55 np0005473739 nova_compute[259550]: 2025-10-07 14:23:55.871 2 INFO nova.compute.claims [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:23:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1808: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 87 op/s
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.026 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #84. Immutable memtables: 0.
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.336153) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 47] Flushing memtable with next log file: 84
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036336198, "job": 47, "event": "flush_started", "num_memtables": 1, "num_entries": 578, "num_deletes": 255, "total_data_size": 579062, "memory_usage": 590048, "flush_reason": "Manual Compaction"}
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 47] Level-0 flush table #85: started
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036342740, "cf_name": "default", "job": 47, "event": "table_file_creation", "file_number": 85, "file_size": 573604, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 37737, "largest_seqno": 38314, "table_properties": {"data_size": 570471, "index_size": 1041, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 7225, "raw_average_key_size": 18, "raw_value_size": 564172, "raw_average_value_size": 1454, "num_data_blocks": 46, "num_entries": 388, "num_filter_entries": 388, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847001, "oldest_key_time": 1759847001, "file_creation_time": 1759847036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 85, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 47] Flush lasted 6617 microseconds, and 2374 cpu microseconds.
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.342771) [db/flush_job.cc:967] [default] [JOB 47] Level-0 flush table #85: 573604 bytes OK
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.342785) [db/memtable_list.cc:519] [default] Level-0 commit table #85 started
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.344566) [db/memtable_list.cc:722] [default] Level-0 commit table #85: memtable #1 done
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.344577) EVENT_LOG_v1 {"time_micros": 1759847036344573, "job": 47, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.344589) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 47] Try to delete WAL files size 575841, prev total WAL file size 575841, number of live WAL files 2.
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000081.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.344993) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031323539' seq:72057594037927935, type:22 .. '6C6F676D0031353130' seq:0, type:0; will stop at (end)
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 48] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 47 Base level 0, inputs: [85(560KB)], [83(7910KB)]
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036345029, "job": 48, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [85], "files_L6": [83], "score": -1, "input_data_size": 8673997, "oldest_snapshot_seqno": -1}
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 48] Generated table #86: 6111 keys, 8542906 bytes, temperature: kUnknown
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036386519, "cf_name": "default", "job": 48, "event": "table_file_creation", "file_number": 86, "file_size": 8542906, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8502251, "index_size": 24289, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15301, "raw_key_size": 156130, "raw_average_key_size": 25, "raw_value_size": 8392838, "raw_average_value_size": 1373, "num_data_blocks": 976, "num_entries": 6111, "num_filter_entries": 6111, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847036, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 86, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.386769) [db/compaction/compaction_job.cc:1663] [default] [JOB 48] Compacted 1@0 + 1@6 files to L6 => 8542906 bytes
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.388125) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 208.7 rd, 205.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 7.7 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(30.0) write-amplify(14.9) OK, records in: 6632, records dropped: 521 output_compression: NoCompression
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.388140) EVENT_LOG_v1 {"time_micros": 1759847036388133, "job": 48, "event": "compaction_finished", "compaction_time_micros": 41559, "compaction_time_cpu_micros": 18836, "output_level": 6, "num_output_files": 1, "total_output_size": 8542906, "num_input_records": 6632, "num_output_records": 6111, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000085.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036388353, "job": 48, "event": "table_file_deletion", "file_number": 85}
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000083.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847036389498, "job": 48, "event": "table_file_deletion", "file_number": 83}
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.344906) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.389575) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.389583) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.389586) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.389589) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:23:56.389592) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:23:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2903994116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.447 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.421s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.453 2 DEBUG nova.compute.provider_tree [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.474 2 DEBUG nova.scheduler.client.report [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.493 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.494 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.540 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.541 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.558 2 INFO nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.572 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.661 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.662 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.663 2 INFO nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Creating image(s)#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.683 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.706 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.727 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.730 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.804 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.805 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.806 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.806 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.826 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:23:56 np0005473739 nova_compute[259550]: 2025-10-07 14:23:56.829 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:23:57 np0005473739 nova_compute[259550]: 2025-10-07 14:23:57.026 2 DEBUG nova.policy [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b84ccbe54bc04b6e9b1e9a3ec69cae9c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '85bd6ccdfa5f4d8b8afbb83b034f15f7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:23:57 np0005473739 nova_compute[259550]: 2025-10-07 14:23:57.105 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.277s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:23:57 np0005473739 nova_compute[259550]: 2025-10-07 14:23:57.159 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] resizing rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:23:57 np0005473739 nova_compute[259550]: 2025-10-07 14:23:57.249 2 DEBUG nova.objects.instance [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lazy-loading 'migration_context' on Instance uuid 77a1261a-cfc4-44f8-8353-fce48dff65e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:23:57 np0005473739 nova_compute[259550]: 2025-10-07 14:23:57.264 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:23:57 np0005473739 nova_compute[259550]: 2025-10-07 14:23:57.264 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Ensure instance console log exists: /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:23:57 np0005473739 nova_compute[259550]: 2025-10-07 14:23:57.265 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:23:57 np0005473739 nova_compute[259550]: 2025-10-07 14:23:57.265 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:23:57 np0005473739 nova_compute[259550]: 2025-10-07 14:23:57.265 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:23:57 np0005473739 nova_compute[259550]: 2025-10-07 14:23:57.853 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Successfully created port: 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:23:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1809: 305 pgs: 305 active+clean; 167 MiB data, 740 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 778 KiB/s wr, 84 op/s
Oct  7 10:23:58 np0005473739 nova_compute[259550]: 2025-10-07 14:23:58.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:23:59 np0005473739 nova_compute[259550]: 2025-10-07 14:23:59.332 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Successfully updated port: 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:23:59 np0005473739 nova_compute[259550]: 2025-10-07 14:23:59.346 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:23:59 np0005473739 nova_compute[259550]: 2025-10-07 14:23:59.347 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquired lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:23:59 np0005473739 nova_compute[259550]: 2025-10-07 14:23:59.347 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:23:59 np0005473739 nova_compute[259550]: 2025-10-07 14:23:59.449 2 DEBUG nova.compute.manager [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-changed-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:23:59 np0005473739 nova_compute[259550]: 2025-10-07 14:23:59.450 2 DEBUG nova.compute.manager [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Refreshing instance network info cache due to event network-changed-691fda3f-3a00-4049-9e23-b7ce0a4d3d24. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:23:59 np0005473739 nova_compute[259550]: 2025-10-07 14:23:59.451 2 DEBUG oslo_concurrency.lockutils [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:23:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1810: 305 pgs: 305 active+clean; 200 MiB data, 756 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.0 MiB/s wr, 86 op/s
Oct  7 10:24:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:00.055 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:00.055 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:00.056 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:00 np0005473739 nova_compute[259550]: 2025-10-07 14:24:00.090 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:24:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.398 2 DEBUG nova.network.neutron [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Updating instance_info_cache with network_info: [{"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.420 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Releasing lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.421 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Instance network_info: |[{"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.421 2 DEBUG oslo_concurrency.lockutils [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.421 2 DEBUG nova.network.neutron [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Refreshing network info cache for port 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.424 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Start _get_guest_xml network_info=[{"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.428 2 WARNING nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.437 2 DEBUG nova.virt.libvirt.host [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.439 2 DEBUG nova.virt.libvirt.host [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.443 2 DEBUG nova.virt.libvirt.host [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.443 2 DEBUG nova.virt.libvirt.host [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.444 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.444 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.444 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.445 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.445 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.445 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.446 2 DEBUG nova.virt.hardware [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.450 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:24:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1437245736' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.896 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.919 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:24:01 np0005473739 nova_compute[259550]: 2025-10-07 14:24:01.923 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1811: 305 pgs: 305 active+clean; 213 MiB data, 762 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  7 10:24:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:02Z|00105|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8c:b5:9a 10.100.0.3
Oct  7 10:24:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:02Z|00106|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8c:b5:9a 10.100.0.3
Oct  7 10:24:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:24:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1909468088' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.363 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.365 2 DEBUG nova.virt.libvirt.vif [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-359470232',id=99,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='85bd6ccdfa5f4d8b8afbb83b034f15f7',ramdisk_id='',reservation_id='r-zsiwi22d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1133687348',
owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1133687348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:56Z,user_data=None,user_id='b84ccbe54bc04b6e9b1e9a3ec69cae9c',uuid=77a1261a-cfc4-44f8-8353-fce48dff65e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.366 2 DEBUG nova.network.os_vif_util [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converting VIF {"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.367 2 DEBUG nova.network.os_vif_util [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.368 2 DEBUG nova.objects.instance [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 77a1261a-cfc4-44f8-8353-fce48dff65e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.385 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  <uuid>77a1261a-cfc4-44f8-8353-fce48dff65e6</uuid>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  <name>instance-00000063</name>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersNegativeTestMultiTenantJSON-server-359470232</nova:name>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:24:01</nova:creationTime>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <nova:user uuid="b84ccbe54bc04b6e9b1e9a3ec69cae9c">tempest-ServersNegativeTestMultiTenantJSON-1133687348-project-member</nova:user>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <nova:project uuid="85bd6ccdfa5f4d8b8afbb83b034f15f7">tempest-ServersNegativeTestMultiTenantJSON-1133687348</nova:project>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <nova:port uuid="691fda3f-3a00-4049-9e23-b7ce0a4d3d24">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <entry name="serial">77a1261a-cfc4-44f8-8353-fce48dff65e6</entry>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <entry name="uuid">77a1261a-cfc4-44f8-8353-fce48dff65e6</entry>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/77a1261a-cfc4-44f8-8353-fce48dff65e6_disk">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:7a:ba:dc"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <target dev="tap691fda3f-3a"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/console.log" append="off"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:24:02 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:24:02 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:24:02 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:24:02 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.386 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Preparing to wait for external event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.387 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.387 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.387 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.388 2 DEBUG nova.virt.libvirt.vif [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-359470232',id=99,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='85bd6ccdfa5f4d8b8afbb83b034f15f7',ramdisk_id='',reservation_id='r-zsiwi22d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-11
33687348',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1133687348-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:23:56Z,user_data=None,user_id='b84ccbe54bc04b6e9b1e9a3ec69cae9c',uuid=77a1261a-cfc4-44f8-8353-fce48dff65e6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.388 2 DEBUG nova.network.os_vif_util [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converting VIF {"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.389 2 DEBUG nova.network.os_vif_util [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.389 2 DEBUG os_vif [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.390 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.391 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.393 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap691fda3f-3a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.394 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap691fda3f-3a, col_values=(('external_ids', {'iface-id': '691fda3f-3a00-4049-9e23-b7ce0a4d3d24', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7a:ba:dc', 'vm-uuid': '77a1261a-cfc4-44f8-8353-fce48dff65e6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:02 np0005473739 NetworkManager[44949]: <info>  [1759847042.3964] manager: (tap691fda3f-3a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/408)
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.406 2 INFO os_vif [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a')#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.462 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.463 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.463 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] No VIF found with MAC fa:16:3e:7a:ba:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.464 2 INFO nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Using config drive#033[00m
Oct  7 10:24:02 np0005473739 nova_compute[259550]: 2025-10-07 14:24:02.491 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.028 2 INFO nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Creating config drive at /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.034 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd0hylty0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.189 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpd0hylty0" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.229 2 DEBUG nova.storage.rbd_utils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] rbd image 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.233 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.389 2 DEBUG oslo_concurrency.processutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config 77a1261a-cfc4-44f8-8353-fce48dff65e6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.390 2 INFO nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Deleting local config drive /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6/disk.config because it was imported into RBD.#033[00m
Oct  7 10:24:03 np0005473739 kernel: tap691fda3f-3a: entered promiscuous mode
Oct  7 10:24:03 np0005473739 NetworkManager[44949]: <info>  [1759847043.4381] manager: (tap691fda3f-3a): new Tun device (/org/freedesktop/NetworkManager/Devices/409)
Oct  7 10:24:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:03Z|01004|binding|INFO|Claiming lport 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 for this chassis.
Oct  7 10:24:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:03Z|01005|binding|INFO|691fda3f-3a00-4049-9e23-b7ce0a4d3d24: Claiming fa:16:3e:7a:ba:dc 10.100.0.12
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.453 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:ba:dc 10.100.0.12'], port_security=['fa:16:3e:7a:ba:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '77a1261a-cfc4-44f8-8353-fce48dff65e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '85bd6ccdfa5f4d8b8afbb83b034f15f7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '616fd0c8-90f8-4e77-b261-e8ac5fb3da26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5297e51f-b3a6-4510-b470-81311044f72c, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=691fda3f-3a00-4049-9e23-b7ce0a4d3d24) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.454 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 in datapath 9229a572-88fa-42e8-a77f-f7ab29bba83d bound to our chassis#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.455 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9229a572-88fa-42e8-a77f-f7ab29bba83d#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.471 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[93c9602d-f2ed-4f69-81b4-48d5227b0462]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.472 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9229a572-81 in ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:24:03 np0005473739 systemd-machined[214580]: New machine qemu-122-instance-00000063.
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.475 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9229a572-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.475 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e59aafbb-985b-44dc-abce-74309a4f7d72]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.476 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eb9a3090-c17e-4404-a109-216286afd136]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 systemd-udevd[356510]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:24:03 np0005473739 systemd[1]: Started Virtual Machine qemu-122-instance-00000063.
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.493 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e7a5e48e-4e5b-4bba-9add-0725fb2804e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 NetworkManager[44949]: <info>  [1759847043.4973] device (tap691fda3f-3a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:24:03 np0005473739 NetworkManager[44949]: <info>  [1759847043.5002] device (tap691fda3f-3a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.501 2 DEBUG nova.network.neutron [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Updated VIF entry in instance network info cache for port 691fda3f-3a00-4049-9e23-b7ce0a4d3d24. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.502 2 DEBUG nova.network.neutron [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Updating instance_info_cache with network_info: [{"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.517 2 DEBUG oslo_concurrency.lockutils [req-16f42d47-cc1a-4800-a3a7-53bc1452a51e req-295cc813-6634-4b31-8dce-f552d005bd1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-77a1261a-cfc4-44f8-8353-fce48dff65e6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.519 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a15cd46f-4cd8-48de-8ccd-54f8de1fd976]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.547 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5905524d-0def-4358-97bb-e609f2751fdc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:03Z|01006|binding|INFO|Setting lport 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 ovn-installed in OVS
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:03Z|01007|binding|INFO|Setting lport 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 up in Southbound
Oct  7 10:24:03 np0005473739 systemd-udevd[356513]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:24:03 np0005473739 NetworkManager[44949]: <info>  [1759847043.5568] manager: (tap9229a572-80): new Veth device (/org/freedesktop/NetworkManager/Devices/410)
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.556 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf1f25cb-c2d9-49d5-93cc-6921992a87a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.587 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a61c34e3-bcaf-4245-aeb8-01026d9d3d4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.592 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[09d0f9d2-1247-4c4b-b25f-ba4ff0ac35ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 NetworkManager[44949]: <info>  [1759847043.6127] device (tap9229a572-80): carrier: link connected
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.618 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8a188e53-cac1-4596-827e-046f74f8d858]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.636 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d472c97e-1a5b-4747-a305-21036fe2ac4b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9229a572-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:f2:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762717, 'reachable_time': 33902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356542, 'error': None, 'target': 'ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.652 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad050048-5864-4ae9-aa05-6ef5bdc3d0e5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feb8:f2aa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 762717, 'tstamp': 762717}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356543, 'error': None, 'target': 'ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.671 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c25fd2ca-301f-4a83-a335-dec368e21133]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9229a572-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:b8:f2:aa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 291], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762717, 'reachable_time': 33902, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 356551, 'error': None, 'target': 'ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.702 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5a6a4023-1aba-4146-8b80-d40bb9e77fa8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.754 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ba3df04-8ef2-414b-b226-235955f99db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9229a572-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9229a572-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:03 np0005473739 NetworkManager[44949]: <info>  [1759847043.7607] manager: (tap9229a572-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/411)
Oct  7 10:24:03 np0005473739 kernel: tap9229a572-80: entered promiscuous mode
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.766 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9229a572-80, col_values=(('external_ids', {'iface-id': '467816a9-10f3-4a54-9b50-5cd9f2a12df0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:03Z|01008|binding|INFO|Releasing lport 467816a9-10f3-4a54-9b50-5cd9f2a12df0 from this chassis (sb_readonly=0)
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.770 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9229a572-88fa-42e8-a77f-f7ab29bba83d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9229a572-88fa-42e8-a77f-f7ab29bba83d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.771 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3bd09470-f222-424c-9df0-ffac9dc8d0d2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.772 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-9229a572-88fa-42e8-a77f-f7ab29bba83d
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/9229a572-88fa-42e8-a77f-f7ab29bba83d.pid.haproxy
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 9229a572-88fa-42e8-a77f-f7ab29bba83d
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:24:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:03.773 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'env', 'PROCESS_TAG=haproxy-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9229a572-88fa-42e8-a77f-f7ab29bba83d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.835 2 DEBUG nova.compute.manager [req-2d57ca19-65f1-45e9-bf67-2a9268f73b03 req-197cafdb-14d7-4930-bb29-4e4c67338b64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.835 2 DEBUG oslo_concurrency.lockutils [req-2d57ca19-65f1-45e9-bf67-2a9268f73b03 req-197cafdb-14d7-4930-bb29-4e4c67338b64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.835 2 DEBUG oslo_concurrency.lockutils [req-2d57ca19-65f1-45e9-bf67-2a9268f73b03 req-197cafdb-14d7-4930-bb29-4e4c67338b64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.836 2 DEBUG oslo_concurrency.lockutils [req-2d57ca19-65f1-45e9-bf67-2a9268f73b03 req-197cafdb-14d7-4930-bb29-4e4c67338b64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:03 np0005473739 nova_compute[259550]: 2025-10-07 14:24:03.836 2 DEBUG nova.compute.manager [req-2d57ca19-65f1-45e9-bf67-2a9268f73b03 req-197cafdb-14d7-4930-bb29-4e4c67338b64 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Processing event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:24:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1812: 305 pgs: 305 active+clean; 230 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.6 MiB/s wr, 137 op/s
Oct  7 10:24:04 np0005473739 podman[356618]: 2025-10-07 14:24:04.185467427 +0000 UTC m=+0.065937463 container create 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:24:04 np0005473739 systemd[1]: Started libpod-conmon-3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11.scope.
Oct  7 10:24:04 np0005473739 podman[356618]: 2025-10-07 14:24:04.146741333 +0000 UTC m=+0.027211399 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:24:04 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:24:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bb5be1eceb21f86cca23ad9b2f755736db45630c487afa93af14943591b01768/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.263 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.264 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847044.263533, 77a1261a-cfc4-44f8-8353-fce48dff65e6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.265 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] VM Started (Lifecycle Event)#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.267 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.270 2 INFO nova.virt.libvirt.driver [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Instance spawned successfully.#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.270 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:24:04 np0005473739 podman[356618]: 2025-10-07 14:24:04.273019417 +0000 UTC m=+0.153489473 container init 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:24:04 np0005473739 podman[356618]: 2025-10-07 14:24:04.280477736 +0000 UTC m=+0.160947772 container start 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.286 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.293 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.298 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.298 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.299 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.299 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.300 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.300 2 DEBUG nova.virt.libvirt.driver [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:04 np0005473739 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [NOTICE]   (356638) : New worker (356640) forked
Oct  7 10:24:04 np0005473739 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [NOTICE]   (356638) : Loading success.
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.322 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.323 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847044.2638097, 77a1261a-cfc4-44f8-8353-fce48dff65e6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.323 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.361 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.365 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847044.2667992, 77a1261a-cfc4-44f8-8353-fce48dff65e6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.365 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.386 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.389 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.396 2 INFO nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Took 7.73 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.396 2 DEBUG nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.403 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.453 2 INFO nova.compute.manager [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Took 8.62 seconds to build instance.#033[00m
Oct  7 10:24:04 np0005473739 nova_compute[259550]: 2025-10-07 14:24:04.477 2 DEBUG oslo_concurrency.lockutils [None req-be08e0a6-b6d3-46ef-b261-4562723d5684 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.712s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:05 np0005473739 nova_compute[259550]: 2025-10-07 14:24:05.799 2 DEBUG nova.virt.libvirt.driver [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:24:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1813: 305 pgs: 305 active+clean; 246 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 141 op/s
Oct  7 10:24:06 np0005473739 nova_compute[259550]: 2025-10-07 14:24:06.260 2 DEBUG nova.compute.manager [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:24:06 np0005473739 nova_compute[259550]: 2025-10-07 14:24:06.261 2 DEBUG oslo_concurrency.lockutils [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:06 np0005473739 nova_compute[259550]: 2025-10-07 14:24:06.261 2 DEBUG oslo_concurrency.lockutils [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:06 np0005473739 nova_compute[259550]: 2025-10-07 14:24:06.261 2 DEBUG oslo_concurrency.lockutils [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:06 np0005473739 nova_compute[259550]: 2025-10-07 14:24:06.261 2 DEBUG nova.compute.manager [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] No waiting events found dispatching network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:24:06 np0005473739 nova_compute[259550]: 2025-10-07 14:24:06.261 2 WARNING nova.compute.manager [req-eee84337-daeb-4942-be42-545ee34a5806 req-4480fb0e-308b-4f6f-ba5d-6bf4452f9d5f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received unexpected event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:24:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:06 np0005473739 nova_compute[259550]: 2025-10-07 14:24:06.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:07 np0005473739 nova_compute[259550]: 2025-10-07 14:24:07.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1814: 305 pgs: 305 active+clean; 246 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 943 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Oct  7 10:24:08 np0005473739 kernel: tap3cf557fc-1a (unregistering): left promiscuous mode
Oct  7 10:24:08 np0005473739 NetworkManager[44949]: <info>  [1759847048.0560] device (tap3cf557fc-1a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:24:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:08Z|01009|binding|INFO|Releasing lport 3cf557fc-1ada-4cc7-9d31-e123f70742b7 from this chassis (sb_readonly=0)
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.062 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:08Z|01010|binding|INFO|Setting lport 3cf557fc-1ada-4cc7-9d31-e123f70742b7 down in Southbound
Oct  7 10:24:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:08Z|01011|binding|INFO|Removing iface tap3cf557fc-1a ovn-installed in OVS
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.073 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8c:b5:9a 10.100.0.3'], port_security=['fa:16:3e:8c:b5:9a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '2ec4d149-b57c-4821-9852-5485f6279472', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3cf557fc-1ada-4cc7-9d31-e123f70742b7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.075 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3cf557fc-1ada-4cc7-9d31-e123f70742b7 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.077 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.084 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.094 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e8370c25-7e44-4378-9a7b-f560bcb2a603]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.127 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[617461e0-20a3-4b4c-8bdf-9cd3f4cbfdab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.129 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4ca97522-1415-4b20-a27c-930e2c2a9592]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Deactivated successfully.
Oct  7 10:24:08 np0005473739 systemd[1]: machine-qemu\x2d121\x2dinstance\x2d00000062.scope: Consumed 13.258s CPU time.
Oct  7 10:24:08 np0005473739 systemd-machined[214580]: Machine qemu-121-instance-00000062 terminated.
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.158 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[97f04613-9840-426a-816d-289843c5694a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.177 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7429b08a-bf83-4d86-b709-7c871b5617db]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap55c52758-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:25:22:1a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 32, 'rx_bytes': 958, 'tx_bytes': 1536, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 32, 'rx_bytes': 958, 'tx_bytes': 1536, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 262], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749527, 'reachable_time': 15179, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356660, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.197 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9c4142ba-4627-4bc2-87bc-3a6c79d7fe74]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749541, 'tstamp': 749541}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356661, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap55c52758-91'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 749544, 'tstamp': 749544}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 356661, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.199 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.205 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55c52758-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.205 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.205 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap55c52758-90, col_values=(('external_ids', {'iface-id': '401012b3-9244-4a9f-9a1e-3bf75a54a412'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.206 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.334 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.334 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.334 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.335 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.335 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.336 2 INFO nova.compute.manager [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Terminating instance#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.337 2 DEBUG nova.compute.manager [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:24:08 np0005473739 kernel: tap691fda3f-3a (unregistering): left promiscuous mode
Oct  7 10:24:08 np0005473739 NetworkManager[44949]: <info>  [1759847048.3869] device (tap691fda3f-3a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:24:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:08Z|01012|binding|INFO|Releasing lport 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 from this chassis (sb_readonly=0)
Oct  7 10:24:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:08Z|01013|binding|INFO|Setting lport 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 down in Southbound
Oct  7 10:24:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:08Z|01014|binding|INFO|Removing iface tap691fda3f-3a ovn-installed in OVS
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.405 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7a:ba:dc 10.100.0.12'], port_security=['fa:16:3e:7a:ba:dc 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '77a1261a-cfc4-44f8-8353-fce48dff65e6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '85bd6ccdfa5f4d8b8afbb83b034f15f7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '616fd0c8-90f8-4e77-b261-e8ac5fb3da26', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5297e51f-b3a6-4510-b470-81311044f72c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=691fda3f-3a00-4049-9e23-b7ce0a4d3d24) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.406 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 691fda3f-3a00-4049-9e23-b7ce0a4d3d24 in datapath 9229a572-88fa-42e8-a77f-f7ab29bba83d unbound from our chassis#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.407 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9229a572-88fa-42e8-a77f-f7ab29bba83d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.408 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[856ae252-ed0f-4987-b769-326775ee18e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.408 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d namespace which is not needed anymore#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.418 2 DEBUG nova.compute.manager [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-vif-unplugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.418 2 DEBUG oslo_concurrency.lockutils [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.419 2 DEBUG oslo_concurrency.lockutils [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.419 2 DEBUG oslo_concurrency.lockutils [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.419 2 DEBUG nova.compute.manager [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] No waiting events found dispatching network-vif-unplugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.419 2 WARNING nova.compute.manager [req-ac022a7b-9a9a-4e52-bc15-d7c17e4eb01d req-7913a213-517b-42a7-b8ca-7051ec0f90b3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received unexpected event network-vif-unplugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 for instance with vm_state active and task_state powering-off.#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000063.scope: Deactivated successfully.
Oct  7 10:24:08 np0005473739 systemd[1]: machine-qemu\x2d122\x2dinstance\x2d00000063.scope: Consumed 4.820s CPU time.
Oct  7 10:24:08 np0005473739 systemd-machined[214580]: Machine qemu-122-instance-00000063 terminated.
Oct  7 10:24:08 np0005473739 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [NOTICE]   (356638) : haproxy version is 2.8.14-c23fe91
Oct  7 10:24:08 np0005473739 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [NOTICE]   (356638) : path to executable is /usr/sbin/haproxy
Oct  7 10:24:08 np0005473739 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [WARNING]  (356638) : Exiting Master process...
Oct  7 10:24:08 np0005473739 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [WARNING]  (356638) : Exiting Master process...
Oct  7 10:24:08 np0005473739 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [ALERT]    (356638) : Current worker (356640) exited with code 143 (Terminated)
Oct  7 10:24:08 np0005473739 neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d[356634]: [WARNING]  (356638) : All workers exited. Exiting... (0)
Oct  7 10:24:08 np0005473739 systemd[1]: libpod-3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11.scope: Deactivated successfully.
Oct  7 10:24:08 np0005473739 conmon[356634]: conmon 3e19f4b541108fbdec24 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11.scope/container/memory.events
Oct  7 10:24:08 np0005473739 podman[356696]: 2025-10-07 14:24:08.529457551 +0000 UTC m=+0.038996013 container died 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:24:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11-userdata-shm.mount: Deactivated successfully.
Oct  7 10:24:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-bb5be1eceb21f86cca23ad9b2f755736db45630c487afa93af14943591b01768-merged.mount: Deactivated successfully.
Oct  7 10:24:08 np0005473739 podman[356696]: 2025-10-07 14:24:08.572356718 +0000 UTC m=+0.081895190 container cleanup 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.574 2 INFO nova.virt.libvirt.driver [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Instance destroyed successfully.#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.575 2 DEBUG nova.objects.instance [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lazy-loading 'resources' on Instance uuid 77a1261a-cfc4-44f8-8353-fce48dff65e6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:24:08 np0005473739 systemd[1]: libpod-conmon-3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11.scope: Deactivated successfully.
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.596 2 DEBUG nova.virt.libvirt.vif [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:23:54Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',display_name='tempest-ServersNegativeTestMultiTenantJSON-server-359470232',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestmultitenantjson-server-359470232',id=99,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:24:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='85bd6ccdfa5f4d8b8afbb83b034f15f7',ramdisk_id='',reservation_id='r-zsiwi22d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestMultiTenantJSON-1133687348',owner_user_name='tempest-ServersNegativeTestMultiTenantJSON-1133687348-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:24:04Z,user_data=None,user_id='b84ccbe54bc04b6e9b1e9a3ec69cae9c',uuid=77a1261a-cfc4-44f8-8353-fce48dff65e6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.598 2 DEBUG nova.network.os_vif_util [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converting VIF {"id": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "address": "fa:16:3e:7a:ba:dc", "network": {"id": "9229a572-88fa-42e8-a77f-f7ab29bba83d", "bridge": "br-int", "label": "tempest-ServersNegativeTestMultiTenantJSON-1495144896-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "85bd6ccdfa5f4d8b8afbb83b034f15f7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap691fda3f-3a", "ovs_interfaceid": "691fda3f-3a00-4049-9e23-b7ce0a4d3d24", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.599 2 DEBUG nova.network.os_vif_util [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.599 2 DEBUG os_vif [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.603 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap691fda3f-3a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.611 2 INFO os_vif [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7a:ba:dc,bridge_name='br-int',has_traffic_filtering=True,id=691fda3f-3a00-4049-9e23-b7ce0a4d3d24,network=Network(9229a572-88fa-42e8-a77f-f7ab29bba83d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap691fda3f-3a')#033[00m
Oct  7 10:24:08 np0005473739 podman[356737]: 2025-10-07 14:24:08.639059439 +0000 UTC m=+0.044728096 container remove 3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.645 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b19c0f83-2288-491c-90c9-648f7248a076]: (4, ('Tue Oct  7 02:24:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d (3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11)\n3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11\nTue Oct  7 02:24:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d (3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11)\n3e19f4b541108fbdec24367ff02ff22e7a5e61e469ce1934267ff4a4d6079c11\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e0770a22-2f3e-4168-8c1d-3f3cee7961c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.648 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9229a572-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 kernel: tap9229a572-80: left promiscuous mode
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.667 2 DEBUG nova.compute.manager [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-unplugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.668 2 DEBUG oslo_concurrency.lockutils [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.668 2 DEBUG oslo_concurrency.lockutils [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.668 2 DEBUG oslo_concurrency.lockutils [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.672 2 DEBUG nova.compute.manager [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] No waiting events found dispatching network-vif-unplugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.672 2 DEBUG nova.compute.manager [req-bd113aba-376d-4b9d-9bea-9e0763bd0235 req-d00258ee-0bc5-4918-89bb-2de0a2670e44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-unplugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.674 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e19c6f12-1d15-4730-91d8-092283e2fff6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.705 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fba4794a-3290-4708-a799-6fed94dd22e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.706 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13359dd2-a8d9-46d5-ba8a-674a4d46ec3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.723 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b03c20db-76da-4e14-b3a1-7f228bb86972]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 762710, 'reachable_time': 30592, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356774, 'error': None, 'target': 'ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.726 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9229a572-88fa-42e8-a77f-f7ab29bba83d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:24:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:08.726 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fba3a72a-7788-4453-bd8f-4f21b8e3b595]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:08 np0005473739 systemd[1]: run-netns-ovnmeta\x2d9229a572\x2d88fa\x2d42e8\x2da77f\x2df7ab29bba83d.mount: Deactivated successfully.
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.816 2 INFO nova.virt.libvirt.driver [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance shutdown successfully after 13 seconds.#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.823 2 INFO nova.virt.libvirt.driver [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance destroyed successfully.#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.824 2 DEBUG nova.objects.instance [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'numa_topology' on Instance uuid 2ec4d149-b57c-4821-9852-5485f6279472 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.837 2 DEBUG nova.compute.manager [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.878 2 DEBUG oslo_concurrency.lockutils [None req-8c126237-8c7f-4afe-aace-6bab6862f710 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 13.196s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.982 2 INFO nova.virt.libvirt.driver [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Deleting instance files /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6_del
Oct  7 10:24:08 np0005473739 nova_compute[259550]: 2025-10-07 14:24:08.983 2 INFO nova.virt.libvirt.driver [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Deletion of /var/lib/nova/instances/77a1261a-cfc4-44f8-8353-fce48dff65e6_del complete
Oct  7 10:24:09 np0005473739 nova_compute[259550]: 2025-10-07 14:24:09.067 2 INFO nova.compute.manager [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Took 0.73 seconds to destroy the instance on the hypervisor.
Oct  7 10:24:09 np0005473739 nova_compute[259550]: 2025-10-07 14:24:09.067 2 DEBUG oslo.service.loopingcall [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:24:09 np0005473739 nova_compute[259550]: 2025-10-07 14:24:09.068 2 DEBUG nova.compute.manager [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:24:09 np0005473739 nova_compute[259550]: 2025-10-07 14:24:09.068 2 DEBUG nova.network.neutron [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:24:09 np0005473739 nova_compute[259550]: 2025-10-07 14:24:09.629 2 DEBUG nova.network.neutron [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:24:09 np0005473739 nova_compute[259550]: 2025-10-07 14:24:09.650 2 INFO nova.compute.manager [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Took 0.58 seconds to deallocate network for instance.
Oct  7 10:24:09 np0005473739 nova_compute[259550]: 2025-10-07 14:24:09.699 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:24:09 np0005473739 nova_compute[259550]: 2025-10-07 14:24:09.700 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:24:09 np0005473739 nova_compute[259550]: 2025-10-07 14:24:09.778 2 DEBUG oslo_concurrency.processutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:24:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1815: 305 pgs: 305 active+clean; 246 MiB data, 791 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 165 op/s
Oct  7 10:24:10 np0005473739 podman[356797]: 2025-10-07 14:24:10.085899401 +0000 UTC m=+0.069825897 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:24:10 np0005473739 podman[356796]: 2025-10-07 14:24:10.08623834 +0000 UTC m=+0.071998866 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 10:24:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:24:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1873977040' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.228 2 DEBUG oslo_concurrency.processutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.238 2 DEBUG nova.compute.provider_tree [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.257 2 DEBUG nova.scheduler.client.report [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.283 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.305 2 INFO nova.scheduler.client.report [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Deleted allocations for instance 77a1261a-cfc4-44f8-8353-fce48dff65e6
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.391 2 DEBUG oslo_concurrency.lockutils [None req-48a7a26c-211b-4a8e-9ae4-ddab53967f14 b84ccbe54bc04b6e9b1e9a3ec69cae9c 85bd6ccdfa5f4d8b8afbb83b034f15f7 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.677 2 DEBUG nova.compute.manager [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.677 2 DEBUG oslo_concurrency.lockutils [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.677 2 DEBUG oslo_concurrency.lockutils [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.678 2 DEBUG oslo_concurrency.lockutils [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.678 2 DEBUG nova.compute.manager [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] No waiting events found dispatching network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.678 2 WARNING nova.compute.manager [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received unexpected event network-vif-plugged-3cf557fc-1ada-4cc7-9d31-e123f70742b7 for instance with vm_state stopped and task_state None.
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.678 2 DEBUG nova.compute.manager [req-36187d15-ea40-46b7-b8a8-ef4927fa25ea req-10a8c062-ef05-4d51-a719-13a9a9951328 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-deleted-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.951 2 DEBUG nova.compute.manager [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.951 2 DEBUG oslo_concurrency.lockutils [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.951 2 DEBUG oslo_concurrency.lockutils [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.952 2 DEBUG oslo_concurrency.lockutils [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77a1261a-cfc4-44f8-8353-fce48dff65e6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.952 2 DEBUG nova.compute.manager [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] No waiting events found dispatching network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:24:10 np0005473739 nova_compute[259550]: 2025-10-07 14:24:10.952 2 WARNING nova.compute.manager [req-de64a4fe-6d6c-4f0d-8f08-088eb547bd59 req-5f763028-6993-409f-a3ea-7ba5c192914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Received unexpected event network-vif-plugged-691fda3f-3a00-4049-9e23-b7ce0a4d3d24 for instance with vm_state deleted and task_state None.
Oct  7 10:24:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.394 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.395 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.396 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2ec4d149-b57c-4821-9852-5485f6279472-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.396 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.397 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.398 2 INFO nova.compute.manager [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Terminating instance
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.400 2 DEBUG nova.compute.manager [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.407 2 INFO nova.virt.libvirt.driver [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Instance destroyed successfully.
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.408 2 DEBUG nova.objects.instance [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid 2ec4d149-b57c-4821-9852-5485f6279472 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.424 2 DEBUG nova.virt.libvirt.vif [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:23:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1268944220',display_name='tempest-Íñstáñcé-1224174263',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1268944220',id=98,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:23:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-0tb8zd37',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-387950529',owner_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:24:09Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=2ec4d149-b57c-4821-9852-5485f6279472,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.425 2 DEBUG nova.network.os_vif_util [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "address": "fa:16:3e:8c:b5:9a", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3cf557fc-1a", "ovs_interfaceid": "3cf557fc-1ada-4cc7-9d31-e123f70742b7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.425 2 DEBUG nova.network.os_vif_util [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.426 2 DEBUG os_vif [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3cf557fc-1a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.433 2 INFO os_vif [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8c:b5:9a,bridge_name='br-int',has_traffic_filtering=True,id=3cf557fc-1ada-4cc7-9d31-e123f70742b7,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3cf557fc-1a')
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.814 2 INFO nova.virt.libvirt.driver [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Deleting instance files /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472_del
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.815 2 INFO nova.virt.libvirt.driver [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Deletion of /var/lib/nova/instances/2ec4d149-b57c-4821-9852-5485f6279472_del complete
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.870 2 INFO nova.compute.manager [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Took 0.47 seconds to destroy the instance on the hypervisor.
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.871 2 DEBUG oslo.service.loopingcall [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.871 2 DEBUG nova.compute.manager [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:24:11 np0005473739 nova_compute[259550]: 2025-10-07 14:24:11.872 2 DEBUG nova.network.neutron [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:24:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1816: 305 pgs: 305 active+clean; 233 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.7 MiB/s wr, 187 op/s
Oct  7 10:24:13 np0005473739 nova_compute[259550]: 2025-10-07 14:24:13.323 2 DEBUG nova.network.neutron [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:24:13 np0005473739 nova_compute[259550]: 2025-10-07 14:24:13.338 2 INFO nova.compute.manager [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Took 1.47 seconds to deallocate network for instance.
Oct  7 10:24:13 np0005473739 nova_compute[259550]: 2025-10-07 14:24:13.392 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:24:13 np0005473739 nova_compute[259550]: 2025-10-07 14:24:13.392 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:24:13 np0005473739 nova_compute[259550]: 2025-10-07 14:24:13.479 2 DEBUG oslo_concurrency.processutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:24:13 np0005473739 nova_compute[259550]: 2025-10-07 14:24:13.528 2 DEBUG nova.compute.manager [req-631e8046-769f-41b6-a01f-13fb2d639a79 req-5eff4733-95bb-4e7b-b3d4-716c5c63b22c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Received event network-vif-deleted-3cf557fc-1ada-4cc7-9d31-e123f70742b7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:24:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:24:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3947667113' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:24:13 np0005473739 nova_compute[259550]: 2025-10-07 14:24:13.951 2 DEBUG oslo_concurrency.processutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:13 np0005473739 nova_compute[259550]: 2025-10-07 14:24:13.956 2 DEBUG nova.compute.provider_tree [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:24:13 np0005473739 nova_compute[259550]: 2025-10-07 14:24:13.970 2 DEBUG nova.scheduler.client.report [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:24:13 np0005473739 nova_compute[259550]: 2025-10-07 14:24:13.988 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1817: 305 pgs: 305 active+clean; 156 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.2 MiB/s wr, 190 op/s
Oct  7 10:24:14 np0005473739 nova_compute[259550]: 2025-10-07 14:24:14.013 2 INFO nova.scheduler.client.report [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance 2ec4d149-b57c-4821-9852-5485f6279472#033[00m
Oct  7 10:24:14 np0005473739 nova_compute[259550]: 2025-10-07 14:24:14.073 2 DEBUG oslo_concurrency.lockutils [None req-391fca58-ad8f-41c2-ae67-447657ff1fd2 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2ec4d149-b57c-4821-9852-5485f6279472" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:14Z|01015|binding|INFO|Releasing lport 401012b3-9244-4a9f-9a1e-3bf75a54a412 from this chassis (sb_readonly=0)
Oct  7 10:24:14 np0005473739 nova_compute[259550]: 2025-10-07 14:24:14.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1818: 305 pgs: 305 active+clean; 121 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.3 MiB/s wr, 156 op/s
Oct  7 10:24:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.570 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.571 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.571 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.571 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.571 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.573 2 INFO nova.compute.manager [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Terminating instance#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.574 2 DEBUG nova.compute.manager [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:24:16 np0005473739 kernel: tapd1f34f39-08 (unregistering): left promiscuous mode
Oct  7 10:24:16 np0005473739 NetworkManager[44949]: <info>  [1759847056.6355] device (tapd1f34f39-08): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:16Z|01016|binding|INFO|Releasing lport d1f34f39-0808-4a53-bf40-fff477dee819 from this chassis (sb_readonly=0)
Oct  7 10:24:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:16Z|01017|binding|INFO|Setting lport d1f34f39-0808-4a53-bf40-fff477dee819 down in Southbound
Oct  7 10:24:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:16Z|01018|binding|INFO|Removing iface tapd1f34f39-08 ovn-installed in OVS
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.650 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:9a:2b 10.100.0.4'], port_security=['fa:16:3e:f6:9a:2b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '2f0de516-cf33-49b6-b036-aee8c2f72943', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ca7071ac09d84d15aba25489e9bb909a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '9ef35a2c-0147-432e-a27a-01b5fc3673e0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c577f8ba-d8b7-4477-be55-e47dd4d9f942, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d1f34f39-0808-4a53-bf40-fff477dee819) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.651 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d1f34f39-0808-4a53-bf40-fff477dee819 in datapath 55c52758-97c9-4a7e-b735-6c70d1ca75a7 unbound from our chassis#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.652 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 55c52758-97c9-4a7e-b735-6c70d1ca75a7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.653 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e17b2921-aeb8-4554-be29-a22caac87f3a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.653 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7 namespace which is not needed anymore#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:16 np0005473739 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000058.scope: Deactivated successfully.
Oct  7 10:24:16 np0005473739 systemd[1]: machine-qemu\x2d107\x2dinstance\x2d00000058.scope: Consumed 18.436s CPU time.
Oct  7 10:24:16 np0005473739 systemd-machined[214580]: Machine qemu-107-instance-00000058 terminated.
Oct  7 10:24:16 np0005473739 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [NOTICE]   (347512) : haproxy version is 2.8.14-c23fe91
Oct  7 10:24:16 np0005473739 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [NOTICE]   (347512) : path to executable is /usr/sbin/haproxy
Oct  7 10:24:16 np0005473739 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [WARNING]  (347512) : Exiting Master process...
Oct  7 10:24:16 np0005473739 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [ALERT]    (347512) : Current worker (347514) exited with code 143 (Terminated)
Oct  7 10:24:16 np0005473739 neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7[347508]: [WARNING]  (347512) : All workers exited. Exiting... (0)
Oct  7 10:24:16 np0005473739 systemd[1]: libpod-b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb.scope: Deactivated successfully.
Oct  7 10:24:16 np0005473739 podman[356901]: 2025-10-07 14:24:16.786101753 +0000 UTC m=+0.043679398 container died b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.813 2 INFO nova.virt.libvirt.driver [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Instance destroyed successfully.#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.813 2 DEBUG nova.objects.instance [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lazy-loading 'resources' on Instance uuid 2f0de516-cf33-49b6-b036-aee8c2f72943 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:24:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb-userdata-shm.mount: Deactivated successfully.
Oct  7 10:24:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7a3a13a26d565d27b55a2c061c5f559046e2a2ef2ed17aeaf90edd70b5935596-merged.mount: Deactivated successfully.
Oct  7 10:24:16 np0005473739 podman[356901]: 2025-10-07 14:24:16.825131336 +0000 UTC m=+0.082708971 container cleanup b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.832 2 DEBUG nova.virt.libvirt.vif [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:21:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-₡-500502864',display_name='tempest-₡-500502864',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest--500502864',id=88,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:22:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ca7071ac09d84d15aba25489e9bb909a',ramdisk_id='',reservation_id='r-ann565o7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-387950529',owne
r_user_name='tempest-ServersTestJSON-387950529-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:22:01Z,user_data=None,user_id='2606252961124ad2a15c7f7529b28488',uuid=2f0de516-cf33-49b6-b036-aee8c2f72943,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.833 2 DEBUG nova.network.os_vif_util [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converting VIF {"id": "d1f34f39-0808-4a53-bf40-fff477dee819", "address": "fa:16:3e:f6:9a:2b", "network": {"id": "55c52758-97c9-4a7e-b735-6c70d1ca75a7", "bridge": "br-int", "label": "tempest-ServersTestJSON-1721559166-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ca7071ac09d84d15aba25489e9bb909a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd1f34f39-08", "ovs_interfaceid": "d1f34f39-0808-4a53-bf40-fff477dee819", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.833 2 DEBUG nova.network.os_vif_util [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:9a:2b,bridge_name='br-int',has_traffic_filtering=True,id=d1f34f39-0808-4a53-bf40-fff477dee819,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1f34f39-08') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.833 2 DEBUG os_vif [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:9a:2b,bridge_name='br-int',has_traffic_filtering=True,id=d1f34f39-0808-4a53-bf40-fff477dee819,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1f34f39-08') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.835 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd1f34f39-08, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:24:16 np0005473739 systemd[1]: libpod-conmon-b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb.scope: Deactivated successfully.
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.842 2 INFO os_vif [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:9a:2b,bridge_name='br-int',has_traffic_filtering=True,id=d1f34f39-0808-4a53-bf40-fff477dee819,network=Network(55c52758-97c9-4a7e-b735-6c70d1ca75a7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd1f34f39-08')#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.893 2 DEBUG nova.compute.manager [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received event network-vif-unplugged-d1f34f39-0808-4a53-bf40-fff477dee819 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:24:16 np0005473739 podman[356943]: 2025-10-07 14:24:16.894500249 +0000 UTC m=+0.043477473 container remove b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.894 2 DEBUG oslo_concurrency.lockutils [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.896 2 DEBUG oslo_concurrency.lockutils [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.896 2 DEBUG oslo_concurrency.lockutils [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.897 2 DEBUG nova.compute.manager [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] No waiting events found dispatching network-vif-unplugged-d1f34f39-0808-4a53-bf40-fff477dee819 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.897 2 DEBUG nova.compute.manager [req-d49b93df-48ba-422a-871b-7aad95936106 req-d8fd9034-b6fb-49df-8612-3bf540a95c36 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received event network-vif-unplugged-d1f34f39-0808-4a53-bf40-fff477dee819 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.910 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[822f08c6-a5e9-40e8-b3ab-eb5878b95666]: (4, ('Tue Oct  7 02:24:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7 (b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb)\nb69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb\nTue Oct  7 02:24:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7 (b69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb)\nb69b9fd0c9b0ab7c5eed09f1c90611ce14779ce8c807519adba4f30f1a40aeeb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.912 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f9bcb70-1667-4a60-8cb0-50cfd7917982]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.913 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55c52758-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:16 np0005473739 kernel: tap55c52758-90: left promiscuous mode
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.923 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1a6befe7-db76-4648-99fc-4519e4184f95]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:16 np0005473739 nova_compute[259550]: 2025-10-07 14:24:16.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.948 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[75aeb9a0-0e5a-47d0-9eb7-1e23f8a1c477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.950 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1115326d-58dd-41b2-b120-c5fbcbe2d818]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.965 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[592bd875-83a9-47a1-841b-069bd17ceca3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 749518, 'reachable_time': 23845, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 356976, 'error': None, 'target': 'ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.967 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-55c52758-97c9-4a7e-b735-6c70d1ca75a7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:24:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:16.967 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cca92823-4e27-4700-bc96-939a61ca980f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:24:16 np0005473739 systemd[1]: run-netns-ovnmeta\x2d55c52758\x2d97c9\x2d4a7e\x2db735\x2d6c70d1ca75a7.mount: Deactivated successfully.
Oct  7 10:24:17 np0005473739 nova_compute[259550]: 2025-10-07 14:24:17.182 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:17.184 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=24, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=23) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:24:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:17.186 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:24:17 np0005473739 nova_compute[259550]: 2025-10-07 14:24:17.251 2 INFO nova.virt.libvirt.driver [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Deleting instance files /var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943_del#033[00m
Oct  7 10:24:17 np0005473739 nova_compute[259550]: 2025-10-07 14:24:17.252 2 INFO nova.virt.libvirt.driver [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Deletion of /var/lib/nova/instances/2f0de516-cf33-49b6-b036-aee8c2f72943_del complete#033[00m
Oct  7 10:24:17 np0005473739 nova_compute[259550]: 2025-10-07 14:24:17.305 2 INFO nova.compute.manager [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Took 0.73 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:24:17 np0005473739 nova_compute[259550]: 2025-10-07 14:24:17.306 2 DEBUG oslo.service.loopingcall [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:24:17 np0005473739 nova_compute[259550]: 2025-10-07 14:24:17.307 2 DEBUG nova.compute.manager [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:24:17 np0005473739 nova_compute[259550]: 2025-10-07 14:24:17.307 2 DEBUG nova.network.neutron [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:24:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1819: 305 pgs: 305 active+clean; 121 MiB data, 709 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 27 KiB/s wr, 100 op/s
Oct  7 10:24:18 np0005473739 nova_compute[259550]: 2025-10-07 14:24:18.542 2 DEBUG nova.network.neutron [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:24:18 np0005473739 nova_compute[259550]: 2025-10-07 14:24:18.564 2 INFO nova.compute.manager [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Took 1.26 seconds to deallocate network for instance.#033[00m
Oct  7 10:24:18 np0005473739 nova_compute[259550]: 2025-10-07 14:24:18.613 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:18 np0005473739 nova_compute[259550]: 2025-10-07 14:24:18.613 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:18 np0005473739 nova_compute[259550]: 2025-10-07 14:24:18.665 2 DEBUG oslo_concurrency.processutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.086 2 DEBUG nova.compute.manager [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received event network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.087 2 DEBUG oslo_concurrency.lockutils [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.087 2 DEBUG oslo_concurrency.lockutils [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.087 2 DEBUG oslo_concurrency.lockutils [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.088 2 DEBUG nova.compute.manager [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] No waiting events found dispatching network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.088 2 WARNING nova.compute.manager [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received unexpected event network-vif-plugged-d1f34f39-0808-4a53-bf40-fff477dee819 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.088 2 DEBUG nova.compute.manager [req-9f5377fe-f9aa-430a-906a-d3457e3c2e5e req-5e622c37-8afd-45c4-a1ee-e1efa24b9651 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Received event network-vif-deleted-d1f34f39-0808-4a53-bf40-fff477dee819 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:24:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:24:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1967518062' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.112 2 DEBUG oslo_concurrency.processutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.120 2 DEBUG nova.compute.provider_tree [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.138 2 DEBUG nova.scheduler.client.report [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.170 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.556s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.192 2 INFO nova.scheduler.client.report [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Deleted allocations for instance 2f0de516-cf33-49b6-b036-aee8c2f72943#033[00m
Oct  7 10:24:19 np0005473739 nova_compute[259550]: 2025-10-07 14:24:19.249 2 DEBUG oslo_concurrency.lockutils [None req-c6f19902-c9c6-465f-b5ec-435e692fb481 2606252961124ad2a15c7f7529b28488 ca7071ac09d84d15aba25489e9bb909a - - default default] Lock "2f0de516-cf33-49b6-b036-aee8c2f72943" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.678s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1820: 305 pgs: 305 active+clean; 57 MiB data, 670 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 27 KiB/s wr, 123 op/s
Oct  7 10:24:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:21 np0005473739 nova_compute[259550]: 2025-10-07 14:24:21.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:21 np0005473739 nova_compute[259550]: 2025-10-07 14:24:21.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1821: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 5.5 KiB/s wr, 81 op/s
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:24:22 np0005473739 podman[357117]: 2025-10-07 14:24:22.172163911 +0000 UTC m=+0.058524825 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:24:22 np0005473739 podman[357125]: 2025-10-07 14:24:22.226152733 +0000 UTC m=+0.093738266 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller)
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:24:22
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'volumes', 'default.rgw.control', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'default.rgw.meta', 'vms', 'images', '.mgr', 'default.rgw.log']
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4eb21dfb-6933-43e1-bb80-75f890d4a1a7 does not exist
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 51ae20cf-4707-4b43-854f-f31b5308f7cc does not exist
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cb3cf96e-c751-42b2-a082-55275bf3f77e does not exist
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:24:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:24:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:24:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:24:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:24:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:24:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:24:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:24:23 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:24:23 np0005473739 nova_compute[259550]: 2025-10-07 14:24:23.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:23 np0005473739 nova_compute[259550]: 2025-10-07 14:24:23.322 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847048.3220978, 2ec4d149-b57c-4821-9852-5485f6279472 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:24:23 np0005473739 nova_compute[259550]: 2025-10-07 14:24:23.323 2 INFO nova.compute.manager [-] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:24:23 np0005473739 nova_compute[259550]: 2025-10-07 14:24:23.347 2 DEBUG nova.compute.manager [None req-2c3413dc-7203-4602-bfa7-a798afdff589 - - - - - -] [instance: 2ec4d149-b57c-4821-9852-5485f6279472] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:23 np0005473739 podman[357435]: 2025-10-07 14:24:23.482881953 +0000 UTC m=+0.039908028 container create eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 10:24:23 np0005473739 systemd[1]: Started libpod-conmon-eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b.scope.
Oct  7 10:24:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:24:23 np0005473739 podman[357435]: 2025-10-07 14:24:23.464016159 +0000 UTC m=+0.021042254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:24:23 np0005473739 nova_compute[259550]: 2025-10-07 14:24:23.571 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847048.5701125, 77a1261a-cfc4-44f8-8353-fce48dff65e6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:24:23 np0005473739 nova_compute[259550]: 2025-10-07 14:24:23.571 2 INFO nova.compute.manager [-] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:24:23 np0005473739 podman[357435]: 2025-10-07 14:24:23.576621828 +0000 UTC m=+0.133647923 container init eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:24:23 np0005473739 podman[357435]: 2025-10-07 14:24:23.583722818 +0000 UTC m=+0.140748893 container start eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:24:23 np0005473739 podman[357435]: 2025-10-07 14:24:23.587419997 +0000 UTC m=+0.144446122 container attach eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 10:24:23 np0005473739 flamboyant_swartz[357451]: 167 167
Oct  7 10:24:23 np0005473739 systemd[1]: libpod-eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b.scope: Deactivated successfully.
Oct  7 10:24:23 np0005473739 conmon[357451]: conmon eea661b64b2b997ff795 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b.scope/container/memory.events
Oct  7 10:24:23 np0005473739 podman[357435]: 2025-10-07 14:24:23.591212318 +0000 UTC m=+0.148238393 container died eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:24:23 np0005473739 nova_compute[259550]: 2025-10-07 14:24:23.595 2 DEBUG nova.compute.manager [None req-24c51562-a703-4474-98c1-35332a7665bb - - - - - -] [instance: 77a1261a-cfc4-44f8-8353-fce48dff65e6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f59a0d2a75db7180e2be4fd4dd20e268a6bcd6e7ed8575329ba67e64f1151cae-merged.mount: Deactivated successfully.
Oct  7 10:24:23 np0005473739 podman[357435]: 2025-10-07 14:24:23.631508285 +0000 UTC m=+0.188534360 container remove eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_swartz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:24:23 np0005473739 systemd[1]: libpod-conmon-eea661b64b2b997ff7951dbb54ff36c595976cbcf5e298273403ddb93b15134b.scope: Deactivated successfully.
Oct  7 10:24:23 np0005473739 podman[357476]: 2025-10-07 14:24:23.823239668 +0000 UTC m=+0.049005891 container create e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True)
Oct  7 10:24:23 np0005473739 systemd[1]: Started libpod-conmon-e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32.scope.
Oct  7 10:24:23 np0005473739 podman[357476]: 2025-10-07 14:24:23.803238194 +0000 UTC m=+0.029004687 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:24:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:24:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:23 np0005473739 podman[357476]: 2025-10-07 14:24:23.926294642 +0000 UTC m=+0.152060865 container init e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 10:24:23 np0005473739 podman[357476]: 2025-10-07 14:24:23.935790015 +0000 UTC m=+0.161556228 container start e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:24:23 np0005473739 podman[357476]: 2025-10-07 14:24:23.940522632 +0000 UTC m=+0.166288835 container attach e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:24:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1822: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 2.8 KiB/s wr, 58 op/s
Oct  7 10:24:24 np0005473739 jolly_wilson[357492]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:24:24 np0005473739 jolly_wilson[357492]: --> relative data size: 1.0
Oct  7 10:24:24 np0005473739 jolly_wilson[357492]: --> All data devices are unavailable
Oct  7 10:24:24 np0005473739 systemd[1]: libpod-e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32.scope: Deactivated successfully.
Oct  7 10:24:24 np0005473739 podman[357476]: 2025-10-07 14:24:24.977320355 +0000 UTC m=+1.203086568 container died e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:24:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-90a4e7222cc16807184ada7d751d6492c39acc5ee98427a2569781e91a09905f-merged.mount: Deactivated successfully.
Oct  7 10:24:25 np0005473739 podman[357476]: 2025-10-07 14:24:25.033223899 +0000 UTC m=+1.258990142 container remove e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_wilson, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 10:24:25 np0005473739 systemd[1]: libpod-conmon-e8e75761b308d14879415681704fa4e22be90005279fc06dd3415fde3d564b32.scope: Deactivated successfully.
Oct  7 10:24:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:24:25.188 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '24'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:24:25 np0005473739 podman[357673]: 2025-10-07 14:24:25.604576186 +0000 UTC m=+0.042302312 container create d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:24:25 np0005473739 systemd[1]: Started libpod-conmon-d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc.scope.
Oct  7 10:24:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:24:25 np0005473739 podman[357673]: 2025-10-07 14:24:25.584032477 +0000 UTC m=+0.021758623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:24:25 np0005473739 podman[357673]: 2025-10-07 14:24:25.683869605 +0000 UTC m=+0.121595751 container init d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 10:24:25 np0005473739 podman[357673]: 2025-10-07 14:24:25.689378962 +0000 UTC m=+0.127105088 container start d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 10:24:25 np0005473739 podman[357673]: 2025-10-07 14:24:25.692764542 +0000 UTC m=+0.130497118 container attach d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:24:25 np0005473739 modest_euler[357689]: 167 167
Oct  7 10:24:25 np0005473739 systemd[1]: libpod-d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc.scope: Deactivated successfully.
Oct  7 10:24:25 np0005473739 podman[357673]: 2025-10-07 14:24:25.695123845 +0000 UTC m=+0.132849991 container died d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:24:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-466e84dd01f330ecfaf29f24aa56d8cd608cfcaa1303b176ce7a865dffcf6e8f-merged.mount: Deactivated successfully.
Oct  7 10:24:25 np0005473739 podman[357673]: 2025-10-07 14:24:25.733968464 +0000 UTC m=+0.171694590 container remove d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_euler, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:24:25 np0005473739 systemd[1]: libpod-conmon-d795a3f8677b70d1ae19782b1c855671d77a72743f58a56052d9eed4109cc4bc.scope: Deactivated successfully.
Oct  7 10:24:25 np0005473739 podman[357713]: 2025-10-07 14:24:25.961211925 +0000 UTC m=+0.043563185 container create 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:24:26 np0005473739 systemd[1]: Started libpod-conmon-1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d.scope.
Oct  7 10:24:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1823: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 KiB/s wr, 30 op/s
Oct  7 10:24:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:24:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445974c0442e74abfc89947246f7fa1e33f879ce898aacee564a185cb621a345/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:26 np0005473739 podman[357713]: 2025-10-07 14:24:25.941901419 +0000 UTC m=+0.024252709 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:24:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445974c0442e74abfc89947246f7fa1e33f879ce898aacee564a185cb621a345/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445974c0442e74abfc89947246f7fa1e33f879ce898aacee564a185cb621a345/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/445974c0442e74abfc89947246f7fa1e33f879ce898aacee564a185cb621a345/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:26 np0005473739 podman[357713]: 2025-10-07 14:24:26.055364461 +0000 UTC m=+0.137715741 container init 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:24:26 np0005473739 podman[357713]: 2025-10-07 14:24:26.066616761 +0000 UTC m=+0.148968021 container start 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:24:26 np0005473739 podman[357713]: 2025-10-07 14:24:26.071538523 +0000 UTC m=+0.153889823 container attach 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 10:24:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:26 np0005473739 nova_compute[259550]: 2025-10-07 14:24:26.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:26 np0005473739 silly_diffie[357730]: {
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:    "0": [
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:        {
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "devices": [
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "/dev/loop3"
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            ],
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_name": "ceph_lv0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_size": "21470642176",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "name": "ceph_lv0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "tags": {
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.cluster_name": "ceph",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.crush_device_class": "",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.encrypted": "0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.osd_id": "0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.type": "block",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.vdo": "0"
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            },
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "type": "block",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "vg_name": "ceph_vg0"
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:        }
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:    ],
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:    "1": [
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:        {
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "devices": [
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "/dev/loop4"
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            ],
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_name": "ceph_lv1",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_size": "21470642176",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "name": "ceph_lv1",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "tags": {
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.cluster_name": "ceph",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.crush_device_class": "",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.encrypted": "0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.osd_id": "1",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.type": "block",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.vdo": "0"
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            },
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "type": "block",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "vg_name": "ceph_vg1"
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:        }
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:    ],
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:    "2": [
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:        {
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "devices": [
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "/dev/loop5"
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            ],
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_name": "ceph_lv2",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_size": "21470642176",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "name": "ceph_lv2",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "tags": {
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.cluster_name": "ceph",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.crush_device_class": "",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.encrypted": "0",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.osd_id": "2",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.type": "block",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:                "ceph.vdo": "0"
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            },
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "type": "block",
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:            "vg_name": "ceph_vg2"
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:        }
Oct  7 10:24:26 np0005473739 silly_diffie[357730]:    ]
Oct  7 10:24:26 np0005473739 silly_diffie[357730]: }
Oct  7 10:24:26 np0005473739 nova_compute[259550]: 2025-10-07 14:24:26.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:26 np0005473739 systemd[1]: libpod-1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d.scope: Deactivated successfully.
Oct  7 10:24:26 np0005473739 podman[357713]: 2025-10-07 14:24:26.859891438 +0000 UTC m=+0.942242698 container died 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 10:24:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-445974c0442e74abfc89947246f7fa1e33f879ce898aacee564a185cb621a345-merged.mount: Deactivated successfully.
Oct  7 10:24:26 np0005473739 podman[357713]: 2025-10-07 14:24:26.91980615 +0000 UTC m=+1.002157400 container remove 1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_diffie, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:24:26 np0005473739 systemd[1]: libpod-conmon-1ba6ac5fe541b2b7529b8581efe2dadeba7763c97fac5162a96df3ed2765233d.scope: Deactivated successfully.
Oct  7 10:24:27 np0005473739 podman[357895]: 2025-10-07 14:24:27.53548393 +0000 UTC m=+0.047938992 container create bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:24:27 np0005473739 systemd[1]: Started libpod-conmon-bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34.scope.
Oct  7 10:24:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:24:27 np0005473739 podman[357895]: 2025-10-07 14:24:27.509211779 +0000 UTC m=+0.021666891 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:24:27 np0005473739 podman[357895]: 2025-10-07 14:24:27.614101891 +0000 UTC m=+0.126556923 container init bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:24:27 np0005473739 podman[357895]: 2025-10-07 14:24:27.623515012 +0000 UTC m=+0.135970034 container start bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:24:27 np0005473739 podman[357895]: 2025-10-07 14:24:27.627611442 +0000 UTC m=+0.140066514 container attach bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:24:27 np0005473739 happy_chaum[357912]: 167 167
Oct  7 10:24:27 np0005473739 systemd[1]: libpod-bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34.scope: Deactivated successfully.
Oct  7 10:24:27 np0005473739 podman[357895]: 2025-10-07 14:24:27.631877416 +0000 UTC m=+0.144332438 container died bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:24:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-68714eb975e59e2b96273d8a7c3d3facbd957a8c578261777b894df8d3978c33-merged.mount: Deactivated successfully.
Oct  7 10:24:27 np0005473739 podman[357895]: 2025-10-07 14:24:27.676375575 +0000 UTC m=+0.188830597 container remove bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_chaum, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:24:27 np0005473739 systemd[1]: libpod-conmon-bf38e28bc8d19a11751d9bb588ffad29700718c07376252bf74c8fe1b6122e34.scope: Deactivated successfully.
Oct  7 10:24:27 np0005473739 podman[357936]: 2025-10-07 14:24:27.901534562 +0000 UTC m=+0.077764020 container create 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:24:27 np0005473739 systemd[1]: Started libpod-conmon-9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754.scope.
Oct  7 10:24:27 np0005473739 podman[357936]: 2025-10-07 14:24:27.869462925 +0000 UTC m=+0.045692443 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:24:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:24:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98b36e6578c8d8957aadf184909f50089ce87edd39a8b489e467e94b06e6d1d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98b36e6578c8d8957aadf184909f50089ce87edd39a8b489e467e94b06e6d1d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98b36e6578c8d8957aadf184909f50089ce87edd39a8b489e467e94b06e6d1d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c98b36e6578c8d8957aadf184909f50089ce87edd39a8b489e467e94b06e6d1d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:24:28 np0005473739 podman[357936]: 2025-10-07 14:24:28.002591962 +0000 UTC m=+0.178821460 container init 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:24:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1824: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:24:28 np0005473739 podman[357936]: 2025-10-07 14:24:28.016408231 +0000 UTC m=+0.192637649 container start 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:24:28 np0005473739 podman[357936]: 2025-10-07 14:24:28.020337836 +0000 UTC m=+0.196567264 container attach 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 10:24:28 np0005473739 trusting_golick[357952]: {
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "osd_id": 2,
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "type": "bluestore"
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:    },
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "osd_id": 1,
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "type": "bluestore"
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:    },
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "osd_id": 0,
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:        "type": "bluestore"
Oct  7 10:24:28 np0005473739 trusting_golick[357952]:    }
Oct  7 10:24:28 np0005473739 trusting_golick[357952]: }
Oct  7 10:24:29 np0005473739 systemd[1]: libpod-9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754.scope: Deactivated successfully.
Oct  7 10:24:29 np0005473739 systemd[1]: libpod-9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754.scope: Consumed 1.019s CPU time.
Oct  7 10:24:29 np0005473739 podman[357936]: 2025-10-07 14:24:29.024149478 +0000 UTC m=+1.200378936 container died 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:24:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c98b36e6578c8d8957aadf184909f50089ce87edd39a8b489e467e94b06e6d1d-merged.mount: Deactivated successfully.
Oct  7 10:24:29 np0005473739 podman[357936]: 2025-10-07 14:24:29.087504821 +0000 UTC m=+1.263734239 container remove 9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_golick, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 10:24:29 np0005473739 systemd[1]: libpod-conmon-9c45d4a5e431c3e4a7f5d2fa8981bf43d21c10ba265f9fe79dfd91522ee52754.scope: Deactivated successfully.
Oct  7 10:24:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:24:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:24:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:24:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:24:29 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 11fd2aa3-21da-4d73-abce-09b8f09237a7 does not exist
Oct  7 10:24:29 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b769475e-7a33-401b-888b-2ceb51b02eb0 does not exist
Oct  7 10:24:29 np0005473739 nova_compute[259550]: 2025-10-07 14:24:29.901 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:29 np0005473739 nova_compute[259550]: 2025-10-07 14:24:29.903 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:29 np0005473739 nova_compute[259550]: 2025-10-07 14:24:29.923 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.003 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.004 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1825: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.012 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.013 2 INFO nova.compute.claims [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.106 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:24:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:24:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:24:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1611791755' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.567 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.574 2 DEBUG nova.compute.provider_tree [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.602 2 DEBUG nova.scheduler.client.report [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.653 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.654 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.718 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.736 2 INFO nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.753 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.836 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.838 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.839 2 INFO nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating image(s)
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.865 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.896 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.927 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:24:30 np0005473739 nova_compute[259550]: 2025-10-07 14:24:30.933 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.029 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.031 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.033 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.033 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.073 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.077 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:24:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.352 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.411 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] resizing rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.505 2 DEBUG nova.objects.instance [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'migration_context' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.521 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.521 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Ensure instance console log exists: /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.522 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.522 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.523 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.524 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.529 2 WARNING nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.535 2 DEBUG nova.virt.libvirt.host [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.536 2 DEBUG nova.virt.libvirt.host [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.539 2 DEBUG nova.virt.libvirt.host [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.540 2 DEBUG nova.virt.libvirt.host [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.540 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.540 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.541 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.541 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.541 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.542 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.542 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.542 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.542 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.543 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.543 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.543 2 DEBUG nova.virt.hardware [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.546 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.809 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847056.8084297, 2f0de516-cf33-49b6-b036-aee8c2f72943 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.810 2 INFO nova.compute.manager [-] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] VM Stopped (Lifecycle Event)
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.832 2 DEBUG nova.compute.manager [None req-c8057cfc-2785-4af9-a6ed-37cbb6e2917a - - - - - -] [instance: 2f0de516-cf33-49b6-b036-aee8c2f72943] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:24:31 np0005473739 nova_compute[259550]: 2025-10-07 14:24:31.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:24:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:24:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3297341986' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:24:32 np0005473739 nova_compute[259550]: 2025-10-07 14:24:32.002 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1826: 305 pgs: 305 active+clean; 41 MiB data, 660 MiB used, 59 GiB / 60 GiB avail; 2.5 KiB/s rd, 341 B/s wr, 4 op/s
Oct  7 10:24:32 np0005473739 nova_compute[259550]: 2025-10-07 14:24:32.032 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:24:32 np0005473739 nova_compute[259550]: 2025-10-07 14:24:32.038 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:24:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:24:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3155769748' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:24:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:24:32 np0005473739 nova_compute[259550]: 2025-10-07 14:24:32.482 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:24:32 np0005473739 nova_compute[259550]: 2025-10-07 14:24:32.484 2 DEBUG nova.objects.instance [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:24:32 np0005473739 nova_compute[259550]: 2025-10-07 14:24:32.500 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  <uuid>4fa0acc3-6fd9-4af0-8126-abba381ba4ad</uuid>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  <name>instance-00000064</name>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerShowV254Test-server-543993218</nova:name>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:24:31</nova:creationTime>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:        <nova:user uuid="cb586f3a9a014ce38f71d2a40873fa67">tempest-ServerShowV254Test-2120464215-project-member</nova:user>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:        <nova:project uuid="e73ba37bce884458b471aeda5370963f">tempest-ServerShowV254Test-2120464215</nova:project>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <entry name="serial">4fa0acc3-6fd9-4af0-8126-abba381ba4ad</entry>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <entry name="uuid">4fa0acc3-6fd9-4af0-8126-abba381ba4ad</entry>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/console.log" append="off"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:24:32 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:24:32 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:24:32 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:24:32 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
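The `_get_guest_xml` dump above is ordinary libvirt domain XML, so it can be inspected programmatically. A minimal sketch, using a trimmed stand-in fragment (the real domain XML is much larger and only partially reproduced in this log), that counts the hotplug-capable `pcie-root-port` controllers with the standard library:

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for the <devices> section logged by _get_guest_xml above;
# the real domain XML contains many more elements and ~25 root ports.
DEVICES_XML = """
<devices>
  <controller type="pci" model="pcie-root"/>
  <controller type="pci" model="pcie-root-port"/>
  <controller type="pci" model="pcie-root-port"/>
  <controller type="usb" index="0"/>
  <memballoon model="virtio">
    <stats period="10"/>
  </memballoon>
</devices>
"""

devices = ET.fromstring(DEVICES_XML)
root_ports = [
    c for c in devices.findall("controller")
    if c.get("type") == "pci" and c.get("model") == "pcie-root-port"
]
print(len(root_ports))  # -> 2 in this trimmed fragment
```

The many identical `pcie-root-port` entries in the logged XML are intentional: each one is a slot where a device can later be hot-plugged into the q35 guest.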
Oct  7 10:24:32 np0005473739 nova_compute[259550]: 2025-10-07 14:24:32.550 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:24:32 np0005473739 nova_compute[259550]: 2025-10-07 14:24:32.550 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:24:32 np0005473739 nova_compute[259550]: 2025-10-07 14:24:32.551 2 INFO nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Using config drive#033[00m
Oct  7 10:24:32 np0005473739 nova_compute[259550]: 2025-10-07 14:24:32.577 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:24:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:24:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4188609326' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:24:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:24:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4188609326' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:24:33 np0005473739 nova_compute[259550]: 2025-10-07 14:24:33.340 2 INFO nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating config drive at /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config#033[00m
Oct  7 10:24:33 np0005473739 nova_compute[259550]: 2025-10-07 14:24:33.345 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwt_tmdgy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:33 np0005473739 nova_compute[259550]: 2025-10-07 14:24:33.486 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpwt_tmdgy" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:33 np0005473739 nova_compute[259550]: 2025-10-07 14:24:33.513 2 DEBUG nova.storage.rbd_utils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:24:33 np0005473739 nova_compute[259550]: 2025-10-07 14:24:33.517 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:33 np0005473739 nova_compute[259550]: 2025-10-07 14:24:33.686 2 DEBUG oslo_concurrency.processutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:33 np0005473739 nova_compute[259550]: 2025-10-07 14:24:33.688 2 INFO nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deleting local config drive /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config because it was imported into RBD.#033[00m
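The config-drive sequence above builds an ISO 9660 image with `mkisofs`, imports it into the `vms` RBD pool, then deletes the local copy. A minimal sketch of how that `mkisofs` command line is assembled; the flag set mirrors the command logged above, while the paths passed in below are hypothetical placeholders:

```python
def build_mkisofs_cmd(output_path: str, staging_dir: str, publisher: str) -> list:
    """Assemble an mkisofs invocation matching the one in the log above.

    The flags (-ldots -allow-lowercase -allow-multidot -l -J -r) and the
    "config-2" volume label are copied from the logged command; guests
    locate the config drive by that label.
    """
    return [
        "/usr/bin/mkisofs",
        "-o", output_path,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", publisher,
        "-quiet", "-J", "-r",
        "-V", "config-2",  # well-known config-drive volume label
        staging_dir,       # directory holding the generated metadata files
    ]

# Hypothetical paths for illustration only.
cmd = build_mkisofs_cmd(
    "/var/lib/nova/instances/example/disk.config",
    "/tmp/tmpexample",
    "OpenStack Compute",
)
print(" ".join(cmd))
```

After the ISO is built, the log shows `rbd import --pool vms ... --image-format=2` moving it into Ceph, which is why the local `disk.config` can then be deleted.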
Oct  7 10:24:33 np0005473739 systemd-machined[214580]: New machine qemu-123-instance-00000064.
Oct  7 10:24:33 np0005473739 systemd[1]: Started Virtual Machine qemu-123-instance-00000064.
Oct  7 10:24:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1827: 305 pgs: 305 active+clean; 66 MiB data, 672 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.0 MiB/s wr, 17 op/s
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.621 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847074.621026, 4fa0acc3-6fd9-4af0-8126-abba381ba4ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.623 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.626 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.626 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.629 2 INFO nova.virt.libvirt.driver [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance spawned successfully.#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.630 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.648 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.654 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.659 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.660 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.660 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.660 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.661 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.661 2 DEBUG nova.virt.libvirt.driver [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.681 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.681 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847074.6228948, 4fa0acc3-6fd9-4af0-8126-abba381ba4ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.682 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] VM Started (Lifecycle Event)#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.718 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.722 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.729 2 INFO nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Took 3.89 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.729 2 DEBUG nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.743 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.784 2 INFO nova.compute.manager [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Took 4.81 seconds to build instance.#033[00m
Oct  7 10:24:34 np0005473739 nova_compute[259550]: 2025-10-07 14:24:34.807 2 DEBUG oslo_concurrency.lockutils [None req-a656b6cb-c03f-45f1-aa3d-64c9e87f23d3 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.904s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1828: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct  7 10:24:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:36 np0005473739 nova_compute[259550]: 2025-10-07 14:24:36.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:36 np0005473739 nova_compute[259550]: 2025-10-07 14:24:36.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:36 np0005473739 nova_compute[259550]: 2025-10-07 14:24:36.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:24:36 np0005473739 nova_compute[259550]: 2025-10-07 14:24:36.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:24:37 np0005473739 nova_compute[259550]: 2025-10-07 14:24:37.743 2 INFO nova.compute.manager [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Rebuilding instance#033[00m
Oct  7 10:24:37 np0005473739 nova_compute[259550]: 2025-10-07 14:24:37.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:24:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1829: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 399 KiB/s rd, 1.8 MiB/s wr, 43 op/s
Oct  7 10:24:38 np0005473739 nova_compute[259550]: 2025-10-07 14:24:38.174 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'trusted_certs' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:24:38 np0005473739 nova_compute[259550]: 2025-10-07 14:24:38.191 2 DEBUG nova.compute.manager [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:38 np0005473739 nova_compute[259550]: 2025-10-07 14:24:38.242 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'pci_requests' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:24:38 np0005473739 nova_compute[259550]: 2025-10-07 14:24:38.255 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:24:38 np0005473739 nova_compute[259550]: 2025-10-07 14:24:38.271 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'resources' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:24:38 np0005473739 nova_compute[259550]: 2025-10-07 14:24:38.283 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'migration_context' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:24:38 np0005473739 nova_compute[259550]: 2025-10-07 14:24:38.296 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:24:38 np0005473739 nova_compute[259550]: 2025-10-07 14:24:38.299 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:24:38 np0005473739 nova_compute[259550]: 2025-10-07 14:24:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:24:38 np0005473739 nova_compute[259550]: 2025-10-07 14:24:38.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:24:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1830: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 89 op/s
Oct  7 10:24:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e241 do_prune osdmap full prune enabled
Oct  7 10:24:41 np0005473739 podman[358414]: 2025-10-07 14:24:41.084754011 +0000 UTC m=+0.071479221 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:24:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e242 e242: 3 total, 3 up, 3 in
Oct  7 10:24:41 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e242: 3 total, 3 up, 3 in
Oct  7 10:24:41 np0005473739 podman[358415]: 2025-10-07 14:24:41.113913491 +0000 UTC m=+0.089322398 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  7 10:24:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e242 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:41 np0005473739 nova_compute[259550]: 2025-10-07 14:24:41.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:41 np0005473739 nova_compute[259550]: 2025-10-07 14:24:41.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:41 np0005473739 nova_compute[259550]: 2025-10-07 14:24:41.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.005 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.005 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.005 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1832: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 120 op/s
Oct  7 10:24:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:24:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481067306' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.475 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
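The resource tracker above shells out to `ceph df --format=json` to learn cluster capacity before auditing local resources. A minimal sketch of parsing the cluster-wide `stats` block from that JSON; the sample payload is illustrative (the real output also carries per-pool sections and additional counters):

```python
import json

# Illustrative stand-in for `ceph df --format=json` output. The "stats"
# object holds cluster-wide byte counters; values here are made up to
# roughly match the 60 GiB cluster seen in the pgmap lines above.
SAMPLE = json.dumps({
    "stats": {
        "total_bytes": 64424509440,        # 60 GiB raw capacity
        "total_used_bytes": 1073741824,    # 1 GiB used
        "total_avail_bytes": 63350767616,  # 59 GiB available
    }
})

stats = json.loads(SAMPLE)["stats"]
free_gb = stats["total_avail_bytes"] / (1024 ** 3)
print(f"{free_gb:.2f} GiB available")  # -> 59.00 GiB available
```

This is the same figure that surfaces a few lines later in the hypervisor resource view (`free_disk=59.96...GB` for the local node, alongside the Ceph-backed capacity).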
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.534 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.535 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000064 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.685 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.687 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3719MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.688 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.688 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.753 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4fa0acc3-6fd9-4af0-8126-abba381ba4ad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.753 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.753 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:24:42 np0005473739 nova_compute[259550]: 2025-10-07 14:24:42.790 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:24:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3512057660' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:24:43 np0005473739 nova_compute[259550]: 2025-10-07 14:24:43.306 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.515s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:43 np0005473739 nova_compute[259550]: 2025-10-07 14:24:43.311 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:24:43 np0005473739 nova_compute[259550]: 2025-10-07 14:24:43.328 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:24:43 np0005473739 nova_compute[259550]: 2025-10-07 14:24:43.351 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:24:43 np0005473739 nova_compute[259550]: 2025-10-07 14:24:43.351 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.663s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1833: 305 pgs: 305 active+clean; 88 MiB data, 682 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 941 KiB/s wr, 114 op/s
Oct  7 10:24:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e242 do_prune osdmap full prune enabled
Oct  7 10:24:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e243 e243: 3 total, 3 up, 3 in
Oct  7 10:24:44 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e243: 3 total, 3 up, 3 in
Oct  7 10:24:45 np0005473739 nova_compute[259550]: 2025-10-07 14:24:45.352 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:24:45 np0005473739 nova_compute[259550]: 2025-10-07 14:24:45.353 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:24:45 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct  7 10:24:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1835: 305 pgs: 305 active+clean; 128 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 5.0 MiB/s wr, 139 op/s
Oct  7 10:24:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e243 do_prune osdmap full prune enabled
Oct  7 10:24:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e244 e244: 3 total, 3 up, 3 in
Oct  7 10:24:46 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e244: 3 total, 3 up, 3 in
Oct  7 10:24:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e244 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:46 np0005473739 nova_compute[259550]: 2025-10-07 14:24:46.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:46 np0005473739 nova_compute[259550]: 2025-10-07 14:24:46.847 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e244 do_prune osdmap full prune enabled
Oct  7 10:24:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e245 e245: 3 total, 3 up, 3 in
Oct  7 10:24:47 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e245: 3 total, 3 up, 3 in
Oct  7 10:24:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1838: 305 pgs: 305 active+clean; 128 MiB data, 706 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 6.7 MiB/s wr, 70 op/s
Oct  7 10:24:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e245 do_prune osdmap full prune enabled
Oct  7 10:24:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e246 e246: 3 total, 3 up, 3 in
Oct  7 10:24:48 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e246: 3 total, 3 up, 3 in
Oct  7 10:24:48 np0005473739 nova_compute[259550]: 2025-10-07 14:24:48.339 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:24:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e246 do_prune osdmap full prune enabled
Oct  7 10:24:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e247 e247: 3 total, 3 up, 3 in
Oct  7 10:24:49 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e247: 3 total, 3 up, 3 in
Oct  7 10:24:49 np0005473739 nova_compute[259550]: 2025-10-07 14:24:49.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:24:49 np0005473739 nova_compute[259550]: 2025-10-07 14:24:49.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:24:49 np0005473739 nova_compute[259550]: 2025-10-07 14:24:49.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:24:50 np0005473739 nova_compute[259550]: 2025-10-07 14:24:50.002 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:24:50 np0005473739 nova_compute[259550]: 2025-10-07 14:24:50.003 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:24:50 np0005473739 nova_compute[259550]: 2025-10-07 14:24:50.003 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:24:50 np0005473739 nova_compute[259550]: 2025-10-07 14:24:50.003 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:24:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1841: 305 pgs: 305 active+clean; 151 MiB data, 747 MiB used, 59 GiB / 60 GiB avail; 631 KiB/s rd, 22 MiB/s wr, 222 op/s
Oct  7 10:24:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e247 do_prune osdmap full prune enabled
Oct  7 10:24:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e248 e248: 3 total, 3 up, 3 in
Oct  7 10:24:50 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e248: 3 total, 3 up, 3 in
Oct  7 10:24:50 np0005473739 nova_compute[259550]: 2025-10-07 14:24:50.366 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:24:50 np0005473739 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct  7 10:24:50 np0005473739 systemd[1]: machine-qemu\x2d123\x2dinstance\x2d00000064.scope: Consumed 13.158s CPU time.
Oct  7 10:24:50 np0005473739 systemd-machined[214580]: Machine qemu-123-instance-00000064 terminated.
Oct  7 10:24:50 np0005473739 nova_compute[259550]: 2025-10-07 14:24:50.951 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.091 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.092 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:24:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e248 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.354 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance shutdown successfully after 13 seconds.#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.360 2 INFO nova.virt.libvirt.driver [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance destroyed successfully.#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.365 2 INFO nova.virt.libvirt.driver [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance destroyed successfully.#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.769 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deleting instance files /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_del#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.771 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deletion of /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_del complete#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.942 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.942 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating image(s)#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.962 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:24:51 np0005473739 nova_compute[259550]: 2025-10-07 14:24:51.985 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.008 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.012 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1843: 305 pgs: 305 active+clean; 121 MiB data, 733 MiB used, 59 GiB / 60 GiB avail; 960 KiB/s rd, 20 MiB/s wr, 407 op/s
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.048 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.088 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.088 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.089 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.089 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.109 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.112 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.375 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.263s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.442 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] resizing rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.532 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.533 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Ensure instance console log exists: /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.533 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.533 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.534 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.535 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.540 2 WARNING nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.545 2 DEBUG nova.virt.libvirt.host [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.545 2 DEBUG nova.virt.libvirt.host [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.549 2 DEBUG nova.virt.libvirt.host [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.549 2 DEBUG nova.virt.libvirt.host [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.549 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.550 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.550 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.550 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.551 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.551 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.551 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.551 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.552 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.552 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.552 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.552 2 DEBUG nova.virt.hardware [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.553 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:24:52 np0005473739 nova_compute[259550]: 2025-10-07 14:24:52.572 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:24:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:24:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/457808243' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.021 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.049 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.052 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:24:53 np0005473739 podman[358704]: 2025-10-07 14:24:53.088964122 +0000 UTC m=+0.079184127 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:24:53 np0005473739 podman[358706]: 2025-10-07 14:24:53.091986793 +0000 UTC m=+0.079554957 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:24:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:24:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1012545423' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.497 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.500 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  <uuid>4fa0acc3-6fd9-4af0-8126-abba381ba4ad</uuid>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  <name>instance-00000064</name>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerShowV254Test-server-543993218</nova:name>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:24:52</nova:creationTime>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:        <nova:user uuid="cb586f3a9a014ce38f71d2a40873fa67">tempest-ServerShowV254Test-2120464215-project-member</nova:user>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:        <nova:project uuid="e73ba37bce884458b471aeda5370963f">tempest-ServerShowV254Test-2120464215</nova:project>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <entry name="serial">4fa0acc3-6fd9-4af0-8126-abba381ba4ad</entry>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <entry name="uuid">4fa0acc3-6fd9-4af0-8126-abba381ba4ad</entry>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/console.log" append="off"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:24:53 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:24:53 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:24:53 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:24:53 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.550 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.551 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.551 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Using config drive
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.575 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.592 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'ec2_ids' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.953 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Creating config drive at /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config
Oct  7 10:24:53 np0005473739 nova_compute[259550]: 2025-10-07 14:24:53.959 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp37aogcok execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:24:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1844: 305 pgs: 305 active+clean; 108 MiB data, 715 MiB used, 59 GiB / 60 GiB avail; 879 KiB/s rd, 19 MiB/s wr, 473 op/s
Oct  7 10:24:54 np0005473739 nova_compute[259550]: 2025-10-07 14:24:54.116 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp37aogcok" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:24:54 np0005473739 nova_compute[259550]: 2025-10-07 14:24:54.145 2 DEBUG nova.storage.rbd_utils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] rbd image 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:24:54 np0005473739 nova_compute[259550]: 2025-10-07 14:24:54.149 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:24:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e248 do_prune osdmap full prune enabled
Oct  7 10:24:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e249 e249: 3 total, 3 up, 3 in
Oct  7 10:24:54 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e249: 3 total, 3 up, 3 in
Oct  7 10:24:54 np0005473739 nova_compute[259550]: 2025-10-07 14:24:54.320 2 DEBUG oslo_concurrency.processutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config 4fa0acc3-6fd9-4af0-8126-abba381ba4ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:24:54 np0005473739 nova_compute[259550]: 2025-10-07 14:24:54.321 2 INFO nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deleting local config drive /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad/disk.config because it was imported into RBD.
Oct  7 10:24:54 np0005473739 systemd-machined[214580]: New machine qemu-124-instance-00000064.
Oct  7 10:24:54 np0005473739 systemd[1]: Started Virtual Machine qemu-124-instance-00000064.
Oct  7 10:24:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e249 do_prune osdmap full prune enabled
Oct  7 10:24:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e250 e250: 3 total, 3 up, 3 in
Oct  7 10:24:55 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e250: 3 total, 3 up, 3 in
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.249 2 DEBUG nova.compute.manager [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.250 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.251 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 4fa0acc3-6fd9-4af0-8126-abba381ba4ad due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.252 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847095.2481906, 4fa0acc3-6fd9-4af0-8126-abba381ba4ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.252 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] VM Resumed (Lifecycle Event)
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.261 2 INFO nova.virt.libvirt.driver [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance spawned successfully.
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.262 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.282 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.289 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.294 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.295 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.295 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.296 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.296 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.297 2 DEBUG nova.virt.libvirt.driver [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.328 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.329 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847095.2500045, 4fa0acc3-6fd9-4af0-8126-abba381ba4ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.329 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] VM Started (Lifecycle Event)#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.360 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.364 2 DEBUG nova.compute.manager [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.367 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.409 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.428 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.428 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.429 2 DEBUG nova.objects.instance [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:24:55 np0005473739 nova_compute[259550]: 2025-10-07 14:24:55.485 2 DEBUG oslo_concurrency.lockutils [None req-14b9bce8-b408-4fed-8d9d-1db8b94899c8 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:24:56Z|01019|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct  7 10:24:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1847: 305 pgs: 305 active+clean; 88 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 471 KiB/s rd, 5.1 MiB/s wr, 355 op/s
Oct  7 10:24:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e250 do_prune osdmap full prune enabled
Oct  7 10:24:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e251 e251: 3 total, 3 up, 3 in
Oct  7 10:24:56 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e251: 3 total, 3 up, 3 in
Oct  7 10:24:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e251 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:24:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e251 do_prune osdmap full prune enabled
Oct  7 10:24:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e252 e252: 3 total, 3 up, 3 in
Oct  7 10:24:56 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e252: 3 total, 3 up, 3 in
Oct  7 10:24:56 np0005473739 nova_compute[259550]: 2025-10-07 14:24:56.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:56 np0005473739 nova_compute[259550]: 2025-10-07 14:24:56.707 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:56 np0005473739 nova_compute[259550]: 2025-10-07 14:24:56.708 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:56 np0005473739 nova_compute[259550]: 2025-10-07 14:24:56.709 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:56 np0005473739 nova_compute[259550]: 2025-10-07 14:24:56.709 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:56 np0005473739 nova_compute[259550]: 2025-10-07 14:24:56.709 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:56 np0005473739 nova_compute[259550]: 2025-10-07 14:24:56.711 2 INFO nova.compute.manager [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Terminating instance#033[00m
Oct  7 10:24:56 np0005473739 nova_compute[259550]: 2025-10-07 14:24:56.711 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:24:56 np0005473739 nova_compute[259550]: 2025-10-07 14:24:56.712 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquired lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:24:56 np0005473739 nova_compute[259550]: 2025-10-07 14:24:56.712 2 DEBUG nova.network.neutron [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:24:56 np0005473739 nova_compute[259550]: 2025-10-07 14:24:56.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:24:57 np0005473739 nova_compute[259550]: 2025-10-07 14:24:57.233 2 DEBUG nova.network.neutron [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:24:57 np0005473739 nova_compute[259550]: 2025-10-07 14:24:57.539 2 DEBUG nova.network.neutron [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:24:57 np0005473739 nova_compute[259550]: 2025-10-07 14:24:57.558 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Releasing lock "refresh_cache-4fa0acc3-6fd9-4af0-8126-abba381ba4ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:24:57 np0005473739 nova_compute[259550]: 2025-10-07 14:24:57.559 2 DEBUG nova.compute.manager [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:24:57 np0005473739 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000064.scope: Deactivated successfully.
Oct  7 10:24:57 np0005473739 systemd[1]: machine-qemu\x2d124\x2dinstance\x2d00000064.scope: Consumed 3.126s CPU time.
Oct  7 10:24:57 np0005473739 systemd-machined[214580]: Machine qemu-124-instance-00000064 terminated.
Oct  7 10:24:57 np0005473739 nova_compute[259550]: 2025-10-07 14:24:57.780 2 INFO nova.virt.libvirt.driver [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance destroyed successfully.#033[00m
Oct  7 10:24:57 np0005473739 nova_compute[259550]: 2025-10-07 14:24:57.780 2 DEBUG nova.objects.instance [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lazy-loading 'resources' on Instance uuid 4fa0acc3-6fd9-4af0-8126-abba381ba4ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:24:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1850: 305 pgs: 305 active+clean; 88 MiB data, 704 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 44 op/s
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.183 2 INFO nova.virt.libvirt.driver [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deleting instance files /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_del#033[00m
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.184 2 INFO nova.virt.libvirt.driver [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deletion of /var/lib/nova/instances/4fa0acc3-6fd9-4af0-8126-abba381ba4ad_del complete#033[00m
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.235 2 INFO nova.compute.manager [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Took 0.68 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.236 2 DEBUG oslo.service.loopingcall [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.236 2 DEBUG nova.compute.manager [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.237 2 DEBUG nova.network.neutron [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:24:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e252 do_prune osdmap full prune enabled
Oct  7 10:24:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e253 e253: 3 total, 3 up, 3 in
Oct  7 10:24:58 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e253: 3 total, 3 up, 3 in
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.442 2 DEBUG nova.network.neutron [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.467 2 DEBUG nova.network.neutron [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.487 2 INFO nova.compute.manager [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Took 0.25 seconds to deallocate network for instance.#033[00m
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.533 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.534 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:24:58 np0005473739 nova_compute[259550]: 2025-10-07 14:24:58.590 2 DEBUG oslo_concurrency.processutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:24:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:24:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4076374420' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:24:59 np0005473739 nova_compute[259550]: 2025-10-07 14:24:59.059 2 DEBUG oslo_concurrency.processutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:24:59 np0005473739 nova_compute[259550]: 2025-10-07 14:24:59.065 2 DEBUG nova.compute.provider_tree [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:24:59 np0005473739 nova_compute[259550]: 2025-10-07 14:24:59.083 2 DEBUG nova.scheduler.client.report [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:24:59 np0005473739 nova_compute[259550]: 2025-10-07 14:24:59.117 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:59 np0005473739 nova_compute[259550]: 2025-10-07 14:24:59.164 2 INFO nova.scheduler.client.report [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Deleted allocations for instance 4fa0acc3-6fd9-4af0-8126-abba381ba4ad#033[00m
Oct  7 10:24:59 np0005473739 nova_compute[259550]: 2025-10-07 14:24:59.265 2 DEBUG oslo_concurrency.lockutils [None req-e6610ca3-3863-4697-a211-169ff39f3702 cb586f3a9a014ce38f71d2a40873fa67 e73ba37bce884458b471aeda5370963f - - default default] Lock "4fa0acc3-6fd9-4af0-8126-abba381ba4ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:24:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e253 do_prune osdmap full prune enabled
Oct  7 10:24:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e254 e254: 3 total, 3 up, 3 in
Oct  7 10:24:59 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e254: 3 total, 3 up, 3 in
Oct  7 10:25:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1853: 305 pgs: 305 active+clean; 52 MiB data, 693 MiB used, 59 GiB / 60 GiB avail; 4.1 MiB/s rd, 47 KiB/s wr, 416 op/s
Oct  7 10:25:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:25:00.056 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:25:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:25:00.057 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:25:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:25:00.057 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:25:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e254 do_prune osdmap full prune enabled
Oct  7 10:25:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e255 e255: 3 total, 3 up, 3 in
Oct  7 10:25:01 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e255: 3 total, 3 up, 3 in
Oct  7 10:25:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e255 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e255 do_prune osdmap full prune enabled
Oct  7 10:25:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e256 e256: 3 total, 3 up, 3 in
Oct  7 10:25:01 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e256: 3 total, 3 up, 3 in
Oct  7 10:25:01 np0005473739 nova_compute[259550]: 2025-10-07 14:25:01.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:01 np0005473739 nova_compute[259550]: 2025-10-07 14:25:01.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1856: 305 pgs: 305 active+clean; 41 MiB data, 686 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 56 KiB/s wr, 607 op/s
Oct  7 10:25:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e256 do_prune osdmap full prune enabled
Oct  7 10:25:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e257 e257: 3 total, 3 up, 3 in
Oct  7 10:25:03 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e257: 3 total, 3 up, 3 in
Oct  7 10:25:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1858: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 12 KiB/s wr, 272 op/s
Oct  7 10:25:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1859: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 10 KiB/s wr, 216 op/s
Oct  7 10:25:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e257 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e257 do_prune osdmap full prune enabled
Oct  7 10:25:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 e258: 3 total, 3 up, 3 in
Oct  7 10:25:06 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e258: 3 total, 3 up, 3 in
Oct  7 10:25:06 np0005473739 nova_compute[259550]: 2025-10-07 14:25:06.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:06 np0005473739 nova_compute[259550]: 2025-10-07 14:25:06.851 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "d3b23591-1e36-4309-b361-7763dfc43021" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:25:06 np0005473739 nova_compute[259550]: 2025-10-07 14:25:06.852 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:25:06 np0005473739 nova_compute[259550]: 2025-10-07 14:25:06.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:07 np0005473739 nova_compute[259550]: 2025-10-07 14:25:07.198 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:25:07 np0005473739 nova_compute[259550]: 2025-10-07 14:25:07.406 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:25:07 np0005473739 nova_compute[259550]: 2025-10-07 14:25:07.407 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:25:07 np0005473739 nova_compute[259550]: 2025-10-07 14:25:07.414 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:25:07 np0005473739 nova_compute[259550]: 2025-10-07 14:25:07.414 2 INFO nova.compute.claims [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:25:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1861: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 58 KiB/s rd, 4.4 KiB/s wr, 80 op/s
Oct  7 10:25:08 np0005473739 nova_compute[259550]: 2025-10-07 14:25:08.440 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:25:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:25:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2329822703' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:25:08 np0005473739 nova_compute[259550]: 2025-10-07 14:25:08.994 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.554s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:25:09 np0005473739 nova_compute[259550]: 2025-10-07 14:25:09.004 2 DEBUG nova.compute.provider_tree [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:25:09 np0005473739 nova_compute[259550]: 2025-10-07 14:25:09.207 2 DEBUG nova.scheduler.client.report [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:25:09 np0005473739 nova_compute[259550]: 2025-10-07 14:25:09.863 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.457s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:25:09 np0005473739 nova_compute[259550]: 2025-10-07 14:25:09.865 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:25:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1862: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 4.1 KiB/s wr, 69 op/s
Oct  7 10:25:10 np0005473739 nova_compute[259550]: 2025-10-07 14:25:10.163 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Oct  7 10:25:10 np0005473739 nova_compute[259550]: 2025-10-07 14:25:10.335 2 INFO nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:25:11 np0005473739 nova_compute[259550]: 2025-10-07 14:25:11.040 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:25:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:11 np0005473739 nova_compute[259550]: 2025-10-07 14:25:11.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:25:11 np0005473739 nova_compute[259550]: 2025-10-07 14:25:11.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:25:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1863: 305 pgs: 305 active+clean; 41 MiB data, 666 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.8 KiB/s wr, 64 op/s
Oct  7 10:25:12 np0005473739 podman[358973]: 2025-10-07 14:25:12.065262018 +0000 UTC m=+0.055840633 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:25:12 np0005473739 podman[358974]: 2025-10-07 14:25:12.065799192 +0000 UTC m=+0.054957529 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid)
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.605 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.608 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.609 2 INFO nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating image(s)
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.639 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.660 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.680 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.684 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.761 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.762 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.763 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.763 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.784 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.787 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d3b23591-1e36-4309-b361-7763dfc43021_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.834 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847097.7777388, 4fa0acc3-6fd9-4af0-8126-abba381ba4ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:25:12 np0005473739 nova_compute[259550]: 2025-10-07 14:25:12.835 2 INFO nova.compute.manager [-] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] VM Stopped (Lifecycle Event)
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.064 2 DEBUG nova.compute.manager [None req-6626d0d3-06c2-40a9-8cfb-7e2bd1ce5fd4 - - - - - -] [instance: 4fa0acc3-6fd9-4af0-8126-abba381ba4ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.079 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d3b23591-1e36-4309-b361-7763dfc43021_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.292s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.144 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] resizing rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.223 2 DEBUG nova.objects.instance [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'migration_context' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.341 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.342 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Ensure instance console log exists: /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.343 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.344 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.344 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.347 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.355 2 WARNING nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.362 2 DEBUG nova.virt.libvirt.host [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.364 2 DEBUG nova.virt.libvirt.host [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.368 2 DEBUG nova.virt.libvirt.host [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.369 2 DEBUG nova.virt.libvirt.host [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.370 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.371 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.372 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.373 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.373 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.374 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.375 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.375 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.376 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.376 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.377 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.378 2 DEBUG nova.virt.hardware [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.383 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:25:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:25:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2891196183' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.838 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.866 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:25:13 np0005473739 nova_compute[259550]: 2025-10-07 14:25:13.871 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:25:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1864: 305 pgs: 305 active+clean; 64 MiB data, 679 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.3 MiB/s wr, 20 op/s
Oct  7 10:25:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:25:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/986770265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:25:14 np0005473739 nova_compute[259550]: 2025-10-07 14:25:14.325 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:25:14 np0005473739 nova_compute[259550]: 2025-10-07 14:25:14.328 2 DEBUG nova.objects.instance [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'pci_devices' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:25:14 np0005473739 nova_compute[259550]: 2025-10-07 14:25:14.505 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  <uuid>d3b23591-1e36-4309-b361-7763dfc43021</uuid>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  <name>instance-00000065</name>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerShowV257Test-server-1426187763</nova:name>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:25:13</nova:creationTime>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:        <nova:user uuid="b13de0bc55134c1a9a69b90ed0ce42dd">tempest-ServerShowV257Test-1404222754-project-member</nova:user>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:        <nova:project uuid="9a9c9f0aca2445a3996c55f880cebe99">tempest-ServerShowV257Test-1404222754</nova:project>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <entry name="serial">d3b23591-1e36-4309-b361-7763dfc43021</entry>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <entry name="uuid">d3b23591-1e36-4309-b361-7763dfc43021</entry>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d3b23591-1e36-4309-b361-7763dfc43021_disk">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d3b23591-1e36-4309-b361-7763dfc43021_disk.config">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/console.log" append="off"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:25:14 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:25:14 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:25:14 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:25:14 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  7 10:25:15 np0005473739 nova_compute[259550]: 2025-10-07 14:25:15.175 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:25:15 np0005473739 nova_compute[259550]: 2025-10-07 14:25:15.176 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:25:15 np0005473739 nova_compute[259550]: 2025-10-07 14:25:15.176 2 INFO nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Using config drive
Oct  7 10:25:15 np0005473739 nova_compute[259550]: 2025-10-07 14:25:15.202 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:25:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1865: 305 pgs: 305 active+clean; 88 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 2.1 MiB/s wr, 34 op/s
Oct  7 10:25:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:16 np0005473739 nova_compute[259550]: 2025-10-07 14:25:16.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:25:16 np0005473739 nova_compute[259550]: 2025-10-07 14:25:16.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:25:17 np0005473739 nova_compute[259550]: 2025-10-07 14:25:17.248 2 INFO nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating config drive at /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config
Oct  7 10:25:17 np0005473739 nova_compute[259550]: 2025-10-07 14:25:17.255 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph0jn384h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:25:17 np0005473739 nova_compute[259550]: 2025-10-07 14:25:17.418 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph0jn384h" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:25:17 np0005473739 nova_compute[259550]: 2025-10-07 14:25:17.447 2 DEBUG nova.storage.rbd_utils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:25:17 np0005473739 nova_compute[259550]: 2025-10-07 14:25:17.450 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config d3b23591-1e36-4309-b361-7763dfc43021_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:25:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:25:17.513 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=25, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=24) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:25:17 np0005473739 nova_compute[259550]: 2025-10-07 14:25:17.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:25:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:25:17.515 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  7 10:25:17 np0005473739 nova_compute[259550]: 2025-10-07 14:25:17.636 2 DEBUG oslo_concurrency.processutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config d3b23591-1e36-4309-b361-7763dfc43021_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.186s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:25:17 np0005473739 nova_compute[259550]: 2025-10-07 14:25:17.637 2 INFO nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deleting local config drive /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config because it was imported into RBD.
Oct  7 10:25:17 np0005473739 systemd-machined[214580]: New machine qemu-125-instance-00000065.
Oct  7 10:25:17 np0005473739 systemd[1]: Started Virtual Machine qemu-125-instance-00000065.
Oct  7 10:25:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1866: 305 pgs: 305 active+clean; 88 MiB data, 687 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.548 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847118.5474482, d3b23591-1e36-4309-b361-7763dfc43021 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.548 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] VM Resumed (Lifecycle Event)
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.552 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.552 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.556 2 INFO nova.virt.libvirt.driver [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance spawned successfully.
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.556 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.620 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.623 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.710 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.711 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.712 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.713 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.713 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.714 2 DEBUG nova.virt.libvirt.driver [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.916 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.917 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847118.5485191, d3b23591-1e36-4309-b361-7763dfc43021 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:25:18 np0005473739 nova_compute[259550]: 2025-10-07 14:25:18.917 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] VM Started (Lifecycle Event)
Oct  7 10:25:19 np0005473739 nova_compute[259550]: 2025-10-07 14:25:19.358 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:25:19 np0005473739 nova_compute[259550]: 2025-10-07 14:25:19.361 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:25:19 np0005473739 nova_compute[259550]: 2025-10-07 14:25:19.447 2 INFO nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Took 6.84 seconds to spawn the instance on the hypervisor.
Oct  7 10:25:19 np0005473739 nova_compute[259550]: 2025-10-07 14:25:19.447 2 DEBUG nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:25:19 np0005473739 nova_compute[259550]: 2025-10-07 14:25:19.851 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:25:19 np0005473739 nova_compute[259550]: 2025-10-07 14:25:19.996 2 INFO nova.compute.manager [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Took 12.61 seconds to build instance.
Oct  7 10:25:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1867: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct  7 10:25:20 np0005473739 nova_compute[259550]: 2025-10-07 14:25:20.146 2 DEBUG oslo_concurrency.lockutils [None req-a24abad0-c104-46f0-95ee-e11df0ae2df0 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.294s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:25:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:21 np0005473739 nova_compute[259550]: 2025-10-07 14:25:21.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:25:21 np0005473739 nova_compute[259550]: 2025-10-07 14:25:21.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1868: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 853 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:25:22
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['vms', '.mgr', 'images', '.rgw.root', 'default.rgw.meta', 'default.rgw.log', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', 'default.rgw.control']
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:25:22 np0005473739 nova_compute[259550]: 2025-10-07 14:25:22.878 2 INFO nova.compute.manager [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Rebuilding instance
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:25:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:25:23 np0005473739 nova_compute[259550]: 2025-10-07 14:25:23.593 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'trusted_certs' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:25:23 np0005473739 nova_compute[259550]: 2025-10-07 14:25:23.642 2 DEBUG nova.compute.manager [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:25:23 np0005473739 nova_compute[259550]: 2025-10-07 14:25:23.751 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'pci_requests' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:25:23 np0005473739 nova_compute[259550]: 2025-10-07 14:25:23.847 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'pci_devices' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:25:23 np0005473739 nova_compute[259550]: 2025-10-07 14:25:23.885 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'resources' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:25:23 np0005473739 nova_compute[259550]: 2025-10-07 14:25:23.921 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'migration_context' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:25:24 np0005473739 nova_compute[259550]: 2025-10-07 14:25:24.004 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  7 10:25:24 np0005473739 nova_compute[259550]: 2025-10-07 14:25:24.008 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071
Oct  7 10:25:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1869: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  7 10:25:24 np0005473739 podman[359355]: 2025-10-07 14:25:24.105691964 +0000 UTC m=+0.092156203 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  7 10:25:24 np0005473739 podman[359356]: 2025-10-07 14:25:24.124750113 +0000 UTC m=+0.101323198 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct  7 10:25:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:25:25.517 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '25'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:25:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1870: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 759 KiB/s wr, 87 op/s
Oct  7 10:25:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:26 np0005473739 nova_compute[259550]: 2025-10-07 14:25:26.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:26 np0005473739 nova_compute[259550]: 2025-10-07 14:25:26.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1871: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:25:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1872: 305 pgs: 305 active+clean; 88 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:25:30 np0005473739 podman[359575]: 2025-10-07 14:25:30.044768148 +0000 UTC m=+0.072404516 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:25:30 np0005473739 podman[359575]: 2025-10-07 14:25:30.158330383 +0000 UTC m=+0.185966781 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 10:25:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:25:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:25:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:25:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:25:31 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev dbcf6e1a-58af-411b-af4d-2f69938ea94e does not exist
Oct  7 10:25:31 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 72a21181-83b1-4e1d-8651-833c5da29288 does not exist
Oct  7 10:25:31 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 871038ac-43c2-4c96-8c87-2fdf432f8e2d does not exist
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:25:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:25:31 np0005473739 nova_compute[259550]: 2025-10-07 14:25:31.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:31 np0005473739 nova_compute[259550]: 2025-10-07 14:25:31.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1873: 305 pgs: 305 active+clean; 95 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 836 KiB/s wr, 92 op/s
Oct  7 10:25:32 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:25:32 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:25:32 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:25:32 np0005473739 podman[360004]: 2025-10-07 14:25:32.171505446 +0000 UTC m=+0.059508571 container create 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 10:25:32 np0005473739 systemd[1]: Started libpod-conmon-6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7.scope.
Oct  7 10:25:32 np0005473739 podman[360004]: 2025-10-07 14:25:32.140098537 +0000 UTC m=+0.028101732 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:25:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:25:32 np0005473739 podman[360004]: 2025-10-07 14:25:32.265865618 +0000 UTC m=+0.153868763 container init 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2)
Oct  7 10:25:32 np0005473739 podman[360004]: 2025-10-07 14:25:32.273254035 +0000 UTC m=+0.161257150 container start 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 10:25:32 np0005473739 podman[360004]: 2025-10-07 14:25:32.277349294 +0000 UTC m=+0.165352419 container attach 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:25:32 np0005473739 gallant_haslett[360020]: 167 167
Oct  7 10:25:32 np0005473739 systemd[1]: libpod-6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7.scope: Deactivated successfully.
Oct  7 10:25:32 np0005473739 podman[360004]: 2025-10-07 14:25:32.27834017 +0000 UTC m=+0.166343295 container died 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:25:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-313d9a37118157265147cf7e8c38140c1d487c45321349669442d75faa6ef7ad-merged.mount: Deactivated successfully.
Oct  7 10:25:32 np0005473739 podman[360004]: 2025-10-07 14:25:32.319463609 +0000 UTC m=+0.207466724 container remove 6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_haslett, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 10:25:32 np0005473739 systemd[1]: libpod-conmon-6d0d830579651780e70f6d78ca8d548c5a5f4f83604c168183a539a72d243dd7.scope: Deactivated successfully.
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0005071358948687895 of space, bias 1.0, pg target 0.15214076846063684 quantized to 32 (current 32)
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:25:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:25:32 np0005473739 podman[360044]: 2025-10-07 14:25:32.507413181 +0000 UTC m=+0.061827913 container create 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 10:25:32 np0005473739 systemd[1]: Started libpod-conmon-8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f.scope.
Oct  7 10:25:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:25:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:32 np0005473739 podman[360044]: 2025-10-07 14:25:32.488875866 +0000 UTC m=+0.043290688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:25:32 np0005473739 podman[360044]: 2025-10-07 14:25:32.583864595 +0000 UTC m=+0.138279347 container init 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 10:25:32 np0005473739 podman[360044]: 2025-10-07 14:25:32.594753875 +0000 UTC m=+0.149168607 container start 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:25:32 np0005473739 podman[360044]: 2025-10-07 14:25:32.598073304 +0000 UTC m=+0.152488036 container attach 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:25:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:25:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3159187807' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:25:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:25:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3159187807' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:25:33 np0005473739 stoic_elion[360061]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:25:33 np0005473739 stoic_elion[360061]: --> relative data size: 1.0
Oct  7 10:25:33 np0005473739 stoic_elion[360061]: --> All data devices are unavailable
Oct  7 10:25:33 np0005473739 systemd[1]: libpod-8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f.scope: Deactivated successfully.
Oct  7 10:25:33 np0005473739 podman[360044]: 2025-10-07 14:25:33.621173662 +0000 UTC m=+1.175588414 container died 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:25:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-736434d5f45bfa560105b01b7b804a8ebd5bc4e3192fae2fbc323810ea3fb79f-merged.mount: Deactivated successfully.
Oct  7 10:25:33 np0005473739 podman[360044]: 2025-10-07 14:25:33.713519659 +0000 UTC m=+1.267934391 container remove 8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:25:33 np0005473739 systemd[1]: libpod-conmon-8890ab0793da99b6c372d94654c0eb5e2b5abc2ccd85f6cf1809c27623d6021f.scope: Deactivated successfully.
Oct  7 10:25:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1874: 305 pgs: 305 active+clean; 121 MiB data, 712 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 2.1 MiB/s wr, 97 op/s
Oct  7 10:25:34 np0005473739 nova_compute[259550]: 2025-10-07 14:25:34.058 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance in state 1 after 10 seconds - resending shutdown _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4101#033[00m
Oct  7 10:25:34 np0005473739 podman[360244]: 2025-10-07 14:25:34.297483153 +0000 UTC m=+0.056379868 container create 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:25:34 np0005473739 systemd[1]: Started libpod-conmon-24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab.scope.
Oct  7 10:25:34 np0005473739 podman[360244]: 2025-10-07 14:25:34.269369991 +0000 UTC m=+0.028266806 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:25:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:25:34 np0005473739 podman[360244]: 2025-10-07 14:25:34.381167229 +0000 UTC m=+0.140063954 container init 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 10:25:34 np0005473739 podman[360244]: 2025-10-07 14:25:34.3894549 +0000 UTC m=+0.148351625 container start 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:25:34 np0005473739 podman[360244]: 2025-10-07 14:25:34.393422136 +0000 UTC m=+0.152318861 container attach 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:25:34 np0005473739 determined_austin[360261]: 167 167
Oct  7 10:25:34 np0005473739 systemd[1]: libpod-24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab.scope: Deactivated successfully.
Oct  7 10:25:34 np0005473739 podman[360244]: 2025-10-07 14:25:34.396311873 +0000 UTC m=+0.155208608 container died 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:25:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8c5e765a40594d6c795f0611a25a2aea054abd7c19ab6f571a30d9c30f3a078e-merged.mount: Deactivated successfully.
Oct  7 10:25:34 np0005473739 podman[360244]: 2025-10-07 14:25:34.442880987 +0000 UTC m=+0.201777702 container remove 24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_austin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 10:25:34 np0005473739 systemd[1]: libpod-conmon-24c8a99dfb38841dfa7c2d727902c2c1a1563c9bf62613e414fba7f3df9e7cab.scope: Deactivated successfully.
Oct  7 10:25:34 np0005473739 podman[360287]: 2025-10-07 14:25:34.683144868 +0000 UTC m=+0.059767319 container create 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 10:25:34 np0005473739 systemd[1]: Started libpod-conmon-133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492.scope.
Oct  7 10:25:34 np0005473739 podman[360287]: 2025-10-07 14:25:34.662415434 +0000 UTC m=+0.039037895 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:25:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:25:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecb99705be147e0345e7c12dea212835574e61683d6d000c119d2536ca8a02b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecb99705be147e0345e7c12dea212835574e61683d6d000c119d2536ca8a02b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecb99705be147e0345e7c12dea212835574e61683d6d000c119d2536ca8a02b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ecb99705be147e0345e7c12dea212835574e61683d6d000c119d2536ca8a02b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:34 np0005473739 podman[360287]: 2025-10-07 14:25:34.791309768 +0000 UTC m=+0.167932279 container init 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 10:25:34 np0005473739 podman[360287]: 2025-10-07 14:25:34.800639107 +0000 UTC m=+0.177261538 container start 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 10:25:34 np0005473739 podman[360287]: 2025-10-07 14:25:34.80635393 +0000 UTC m=+0.182976441 container attach 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]: {
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:    "0": [
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:        {
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "devices": [
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "/dev/loop3"
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            ],
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_name": "ceph_lv0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_size": "21470642176",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "name": "ceph_lv0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "tags": {
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.cluster_name": "ceph",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.crush_device_class": "",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.encrypted": "0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.osd_id": "0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.type": "block",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.vdo": "0"
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            },
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "type": "block",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "vg_name": "ceph_vg0"
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:        }
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:    ],
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:    "1": [
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:        {
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "devices": [
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "/dev/loop4"
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            ],
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_name": "ceph_lv1",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_size": "21470642176",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "name": "ceph_lv1",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "tags": {
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.cluster_name": "ceph",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.crush_device_class": "",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.encrypted": "0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.osd_id": "1",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.type": "block",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.vdo": "0"
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            },
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "type": "block",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "vg_name": "ceph_vg1"
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:        }
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:    ],
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:    "2": [
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:        {
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "devices": [
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "/dev/loop5"
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            ],
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_name": "ceph_lv2",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_size": "21470642176",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "name": "ceph_lv2",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "tags": {
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.cluster_name": "ceph",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.crush_device_class": "",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.encrypted": "0",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.osd_id": "2",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.type": "block",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:                "ceph.vdo": "0"
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            },
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "type": "block",
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:            "vg_name": "ceph_vg2"
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:        }
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]:    ]
Oct  7 10:25:35 np0005473739 nifty_kepler[360303]: }
Oct  7 10:25:35 np0005473739 systemd[1]: libpod-133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492.scope: Deactivated successfully.
Oct  7 10:25:35 np0005473739 podman[360287]: 2025-10-07 14:25:35.556987017 +0000 UTC m=+0.933609478 container died 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:25:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8ecb99705be147e0345e7c12dea212835574e61683d6d000c119d2536ca8a02b-merged.mount: Deactivated successfully.
Oct  7 10:25:35 np0005473739 podman[360287]: 2025-10-07 14:25:35.618573813 +0000 UTC m=+0.995196254 container remove 133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_kepler, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:25:35 np0005473739 systemd[1]: libpod-conmon-133dffb942057c729e3b3e36de78538c641511f395b2586f4a946fae73224492.scope: Deactivated successfully.
Oct  7 10:25:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1875: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  7 10:25:36 np0005473739 podman[360463]: 2025-10-07 14:25:36.237168232 +0000 UTC m=+0.040067412 container create 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 10:25:36 np0005473739 systemd[1]: Started libpod-conmon-0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4.scope.
Oct  7 10:25:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:25:36 np0005473739 podman[360463]: 2025-10-07 14:25:36.218717589 +0000 UTC m=+0.021616799 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:25:36 np0005473739 podman[360463]: 2025-10-07 14:25:36.320876789 +0000 UTC m=+0.123775969 container init 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:25:36 np0005473739 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct  7 10:25:36 np0005473739 systemd[1]: machine-qemu\x2d125\x2dinstance\x2d00000065.scope: Consumed 12.683s CPU time.
Oct  7 10:25:36 np0005473739 podman[360463]: 2025-10-07 14:25:36.32954639 +0000 UTC m=+0.132445570 container start 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:25:36 np0005473739 systemd-machined[214580]: Machine qemu-125-instance-00000065 terminated.
Oct  7 10:25:36 np0005473739 podman[360463]: 2025-10-07 14:25:36.33292592 +0000 UTC m=+0.135825120 container attach 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:25:36 np0005473739 eager_banach[360480]: 167 167
Oct  7 10:25:36 np0005473739 systemd[1]: libpod-0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4.scope: Deactivated successfully.
Oct  7 10:25:36 np0005473739 podman[360463]: 2025-10-07 14:25:36.335825488 +0000 UTC m=+0.138724668 container died 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 10:25:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-70d0e8c8554e64e843c051f8e5e9b263f8b5a1841830a5d365a6afc1216148a6-merged.mount: Deactivated successfully.
Oct  7 10:25:36 np0005473739 podman[360463]: 2025-10-07 14:25:36.371949773 +0000 UTC m=+0.174848953 container remove 0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_banach, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct  7 10:25:36 np0005473739 systemd[1]: libpod-conmon-0596c708aa394dd1d778ef95e51fb61cf1af9e914d8decba42911bd6340dfac4.scope: Deactivated successfully.
Oct  7 10:25:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:36 np0005473739 podman[360503]: 2025-10-07 14:25:36.553903535 +0000 UTC m=+0.053505380 container create a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:25:36 np0005473739 systemd[1]: Started libpod-conmon-a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b.scope.
Oct  7 10:25:36 np0005473739 podman[360503]: 2025-10-07 14:25:36.524985822 +0000 UTC m=+0.024587737 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:25:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:25:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755fb84c13499be36299904cd7685641c421f4089e08ffa8b781e38471d76032/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755fb84c13499be36299904cd7685641c421f4089e08ffa8b781e38471d76032/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755fb84c13499be36299904cd7685641c421f4089e08ffa8b781e38471d76032/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/755fb84c13499be36299904cd7685641c421f4089e08ffa8b781e38471d76032/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:25:36 np0005473739 podman[360503]: 2025-10-07 14:25:36.650698131 +0000 UTC m=+0.150299996 container init a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 10:25:36 np0005473739 podman[360503]: 2025-10-07 14:25:36.665262741 +0000 UTC m=+0.164864616 container start a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 10:25:36 np0005473739 podman[360503]: 2025-10-07 14:25:36.669569716 +0000 UTC m=+0.169171611 container attach a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 10:25:36 np0005473739 nova_compute[259550]: 2025-10-07 14:25:36.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:36 np0005473739 nova_compute[259550]: 2025-10-07 14:25:36.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:36 np0005473739 nova_compute[259550]: 2025-10-07 14:25:36.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.074 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance shutdown successfully after 13 seconds.#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.080 2 INFO nova.virt.libvirt.driver [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance destroyed successfully.#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.086 2 INFO nova.virt.libvirt.driver [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance destroyed successfully.#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.505 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deleting instance files /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021_del#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.507 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deletion of /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021_del complete#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.661 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.661 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating image(s)#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.679 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.698 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]: {
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "osd_id": 2,
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "type": "bluestore"
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:    },
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "osd_id": 1,
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "type": "bluestore"
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:    },
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "osd_id": 0,
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:        "type": "bluestore"
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]:    }
Oct  7 10:25:37 np0005473739 nifty_haslett[360522]: }
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.720 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.724 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:25:37 np0005473739 systemd[1]: libpod-a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b.scope: Deactivated successfully.
Oct  7 10:25:37 np0005473739 podman[360503]: 2025-10-07 14:25:37.74342548 +0000 UTC m=+1.243027315 container died a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507)
Oct  7 10:25:37 np0005473739 systemd[1]: libpod-a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b.scope: Consumed 1.067s CPU time.
Oct  7 10:25:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-755fb84c13499be36299904cd7685641c421f4089e08ffa8b781e38471d76032-merged.mount: Deactivated successfully.
Oct  7 10:25:37 np0005473739 podman[360503]: 2025-10-07 14:25:37.798638815 +0000 UTC m=+1.298240670 container remove a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_haslett, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.802 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.803 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:25:37 np0005473739 systemd[1]: libpod-conmon-a9a8625546102f60c7d3734790a5b72b6178d2dfb5c6803c63d74f2eefcd6f1b.scope: Deactivated successfully.
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.805 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.805 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.823 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.827 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 d3b23591-1e36-4309-b361-7763dfc43021_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:25:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:25:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:25:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:25:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:25:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 706a9c6a-8fc4-4706-8ae6-4c23c20542d2 does not exist
Oct  7 10:25:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8cccc1ca-ba21-44b7-9ef9-9113b8cfa9e9 does not exist
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:25:37 np0005473739 nova_compute[259550]: 2025-10-07 14:25:37.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:25:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1876: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.096 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 d3b23591-1e36-4309-b361-7763dfc43021_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.269s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.165 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] resizing rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.268 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.269 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Ensure instance console log exists: /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.270 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.270 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.270 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.272 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.277 2 WARNING nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.286 2 DEBUG nova.virt.libvirt.host [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.287 2 DEBUG nova.virt.libvirt.host [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.292 2 DEBUG nova.virt.libvirt.host [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.293 2 DEBUG nova.virt.libvirt.host [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.293 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.294 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.295 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.295 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.295 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.295 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.296 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.296 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.296 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.297 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.297 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.297 2 DEBUG nova.virt.hardware [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.298 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'vcpu_model' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.321 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:25:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:25:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3776649082' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.767 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.793 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.798 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:25:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:25:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:25:38 np0005473739 nova_compute[259550]: 2025-10-07 14:25:38.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:25:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:25:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1216107542' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:25:39 np0005473739 nova_compute[259550]: 2025-10-07 14:25:39.271 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:25:39 np0005473739 nova_compute[259550]: 2025-10-07 14:25:39.276 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  <uuid>d3b23591-1e36-4309-b361-7763dfc43021</uuid>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  <name>instance-00000065</name>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServerShowV257Test-server-1426187763</nova:name>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:25:38</nova:creationTime>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:        <nova:user uuid="b13de0bc55134c1a9a69b90ed0ce42dd">tempest-ServerShowV257Test-1404222754-project-member</nova:user>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:        <nova:project uuid="9a9c9f0aca2445a3996c55f880cebe99">tempest-ServerShowV257Test-1404222754</nova:project>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <entry name="serial">d3b23591-1e36-4309-b361-7763dfc43021</entry>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <entry name="uuid">d3b23591-1e36-4309-b361-7763dfc43021</entry>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d3b23591-1e36-4309-b361-7763dfc43021_disk">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d3b23591-1e36-4309-b361-7763dfc43021_disk.config">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/console.log" append="off"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:25:39 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:25:39 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:25:39 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:25:39 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:25:39 np0005473739 nova_compute[259550]: 2025-10-07 14:25:39.437 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:25:39 np0005473739 nova_compute[259550]: 2025-10-07 14:25:39.438 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:25:39 np0005473739 nova_compute[259550]: 2025-10-07 14:25:39.438 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Using config drive#033[00m
Oct  7 10:25:39 np0005473739 nova_compute[259550]: 2025-10-07 14:25:39.464 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:25:39 np0005473739 nova_compute[259550]: 2025-10-07 14:25:39.504 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'ec2_ids' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:25:39 np0005473739 nova_compute[259550]: 2025-10-07 14:25:39.588 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'keypairs' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:25:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1877: 305 pgs: 305 active+clean; 110 MiB data, 720 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 3.6 MiB/s wr, 86 op/s
Oct  7 10:25:40 np0005473739 nova_compute[259550]: 2025-10-07 14:25:40.300 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Creating config drive at /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config#033[00m
Oct  7 10:25:40 np0005473739 nova_compute[259550]: 2025-10-07 14:25:40.311 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp55ps3jrm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:25:40 np0005473739 nova_compute[259550]: 2025-10-07 14:25:40.459 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp55ps3jrm" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:25:40 np0005473739 nova_compute[259550]: 2025-10-07 14:25:40.496 2 DEBUG nova.storage.rbd_utils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] rbd image d3b23591-1e36-4309-b361-7763dfc43021_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:25:40 np0005473739 nova_compute[259550]: 2025-10-07 14:25:40.502 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config d3b23591-1e36-4309-b361-7763dfc43021_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:25:40 np0005473739 nova_compute[259550]: 2025-10-07 14:25:40.670 2 DEBUG oslo_concurrency.processutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config d3b23591-1e36-4309-b361-7763dfc43021_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:25:40 np0005473739 nova_compute[259550]: 2025-10-07 14:25:40.672 2 INFO nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deleting local config drive /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021/disk.config because it was imported into RBD.#033[00m
Oct  7 10:25:40 np0005473739 systemd-machined[214580]: New machine qemu-126-instance-00000065.
Oct  7 10:25:40 np0005473739 systemd[1]: Started Virtual Machine qemu-126-instance-00000065.
Oct  7 10:25:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.635 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for d3b23591-1e36-4309-b361-7763dfc43021 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.637 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847141.6351116, d3b23591-1e36-4309-b361-7763dfc43021 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.637 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.640 2 DEBUG nova.compute.manager [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.640 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.645 2 INFO nova.virt.libvirt.driver [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance spawned successfully.#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.646 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.662 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.668 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.674 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.674 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.675 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.675 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.676 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.677 2 DEBUG nova.virt.libvirt.driver [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.705 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.706 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847141.6352496, d3b23591-1e36-4309-b361-7763dfc43021 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.707 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] VM Started (Lifecycle Event)#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.733 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.739 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.744 2 DEBUG nova.compute.manager [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.775 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.809 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.811 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.811 2 DEBUG nova.objects.instance [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.870 2 DEBUG oslo_concurrency.lockutils [None req-c9e7da45-e28f-4f93-b651-43b0f6cca226 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:25:41 np0005473739 nova_compute[259550]: 2025-10-07 14:25:41.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1878: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 358 KiB/s rd, 3.9 MiB/s wr, 119 op/s
Oct  7 10:25:43 np0005473739 podman[360982]: 2025-10-07 14:25:43.090698851 +0000 UTC m=+0.074101911 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 10:25:43 np0005473739 podman[360981]: 2025-10-07 14:25:43.129014244 +0000 UTC m=+0.109970048 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.618 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "d3b23591-1e36-4309-b361-7763dfc43021" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.619 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.619 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "d3b23591-1e36-4309-b361-7763dfc43021-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.619 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.619 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.620 2 INFO nova.compute.manager [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Terminating instance#033[00m
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.621 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "refresh_cache-d3b23591-1e36-4309-b361-7763dfc43021" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.621 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquired lock "refresh_cache-d3b23591-1e36-4309-b361-7763dfc43021" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.621 2 DEBUG nova.network.neutron [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.783 2 DEBUG nova.network.neutron [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:25:43 np0005473739 nova_compute[259550]: 2025-10-07 14:25:43.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.023 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.023 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:25:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1879: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.1 MiB/s wr, 153 op/s
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.392 2 DEBUG nova.network.neutron [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.419 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Releasing lock "refresh_cache-d3b23591-1e36-4309-b361-7763dfc43021" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.420 2 DEBUG nova.compute.manager [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:25:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:25:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3111153291' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:25:44 np0005473739 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000065.scope: Deactivated successfully.
Oct  7 10:25:44 np0005473739 systemd[1]: machine-qemu\x2d126\x2dinstance\x2d00000065.scope: Consumed 3.694s CPU time.
Oct  7 10:25:44 np0005473739 systemd-machined[214580]: Machine qemu-126-instance-00000065 terminated.
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.496 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.650 2 INFO nova.virt.libvirt.driver [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance destroyed successfully.#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.651 2 DEBUG nova.objects.instance [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lazy-loading 'resources' on Instance uuid d3b23591-1e36-4309-b361-7763dfc43021 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.732 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.733 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000065 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.875 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.876 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3834MB free_disk=59.96736145019531GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.876 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.876 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.975 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance d3b23591-1e36-4309-b361-7763dfc43021 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.976 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:25:44 np0005473739 nova_compute[259550]: 2025-10-07 14:25:44.977 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.019 2 INFO nova.virt.libvirt.driver [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deleting instance files /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021_del#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.020 2 INFO nova.virt.libvirt.driver [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deletion of /var/lib/nova/instances/d3b23591-1e36-4309-b361-7763dfc43021_del complete#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.043 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.136 2 INFO nova.compute.manager [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.138 2 DEBUG oslo.service.loopingcall [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.139 2 DEBUG nova.compute.manager [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.140 2 DEBUG nova.network.neutron [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.457 2 DEBUG nova.network.neutron [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:25:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:25:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3338664337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.473 2 DEBUG nova.network.neutron [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.489 2 INFO nova.compute.manager [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Took 0.35 seconds to deallocate network for instance.#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.490 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.502 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.525 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.549 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.560 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.561 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.684s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.561 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:25:45 np0005473739 nova_compute[259550]: 2025-10-07 14:25:45.605 2 DEBUG oslo_concurrency.processutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:25:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:25:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2435711627' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:25:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1880: 305 pgs: 305 active+clean; 79 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 147 op/s
Oct  7 10:25:46 np0005473739 nova_compute[259550]: 2025-10-07 14:25:46.042 2 DEBUG oslo_concurrency.processutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:25:46 np0005473739 nova_compute[259550]: 2025-10-07 14:25:46.050 2 DEBUG nova.compute.provider_tree [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:25:46 np0005473739 nova_compute[259550]: 2025-10-07 14:25:46.081 2 DEBUG nova.scheduler.client.report [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:25:46 np0005473739 nova_compute[259550]: 2025-10-07 14:25:46.171 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:25:46 np0005473739 nova_compute[259550]: 2025-10-07 14:25:46.196 2 INFO nova.scheduler.client.report [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Deleted allocations for instance d3b23591-1e36-4309-b361-7763dfc43021#033[00m
Oct  7 10:25:46 np0005473739 nova_compute[259550]: 2025-10-07 14:25:46.262 2 DEBUG oslo_concurrency.lockutils [None req-6818e3c1-fd86-4a44-aa4f-56a81aa84c29 b13de0bc55134c1a9a69b90ed0ce42dd 9a9c9f0aca2445a3996c55f880cebe99 - - default default] Lock "d3b23591-1e36-4309-b361-7763dfc43021" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:25:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:46 np0005473739 nova_compute[259550]: 2025-10-07 14:25:46.561 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:25:46 np0005473739 nova_compute[259550]: 2025-10-07 14:25:46.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:46 np0005473739 nova_compute[259550]: 2025-10-07 14:25:46.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1881: 305 pgs: 305 active+clean; 79 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 145 op/s
Oct  7 10:25:48 np0005473739 nova_compute[259550]: 2025-10-07 14:25:48.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:25:49 np0005473739 nova_compute[259550]: 2025-10-07 14:25:49.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:25:49 np0005473739 nova_compute[259550]: 2025-10-07 14:25:49.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:25:49 np0005473739 nova_compute[259550]: 2025-10-07 14:25:49.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:25:49 np0005473739 nova_compute[259550]: 2025-10-07 14:25:49.996 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:25:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1882: 305 pgs: 305 active+clean; 41 MiB data, 688 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 156 op/s
Oct  7 10:25:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:51 np0005473739 nova_compute[259550]: 2025-10-07 14:25:51.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:51 np0005473739 nova_compute[259550]: 2025-10-07 14:25:51.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1883: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 358 KiB/s wr, 132 op/s
Oct  7 10:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:25:53 np0005473739 nova_compute[259550]: 2025-10-07 14:25:53.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:25:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1884: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 100 op/s
Oct  7 10:25:55 np0005473739 podman[361109]: 2025-10-07 14:25:55.09091193 +0000 UTC m=+0.077158352 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:25:55 np0005473739 podman[361110]: 2025-10-07 14:25:55.091368043 +0000 UTC m=+0.076081134 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:25:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1885: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Oct  7 10:25:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:25:56 np0005473739 nova_compute[259550]: 2025-10-07 14:25:56.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:56 np0005473739 nova_compute[259550]: 2025-10-07 14:25:56.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:25:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1886: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Oct  7 10:25:59 np0005473739 nova_compute[259550]: 2025-10-07 14:25:59.648 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847144.6468685, d3b23591-1e36-4309-b361-7763dfc43021 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:25:59 np0005473739 nova_compute[259550]: 2025-10-07 14:25:59.648 2 INFO nova.compute.manager [-] [instance: d3b23591-1e36-4309-b361-7763dfc43021] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:25:59 np0005473739 nova_compute[259550]: 2025-10-07 14:25:59.667 2 DEBUG nova.compute.manager [None req-9705d272-efb8-45b5-bae0-7988facb2b7e - - - - - -] [instance: d3b23591-1e36-4309-b361-7763dfc43021] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:26:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1887: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 511 B/s wr, 11 op/s
Oct  7 10:26:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:00.058 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:00.058 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:00.059 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:01 np0005473739 nova_compute[259550]: 2025-10-07 14:26:01.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:01 np0005473739 nova_compute[259550]: 2025-10-07 14:26:01.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1888: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:26:02 np0005473739 nova_compute[259550]: 2025-10-07 14:26:02.546 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:02 np0005473739 nova_compute[259550]: 2025-10-07 14:26:02.547 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:02 np0005473739 nova_compute[259550]: 2025-10-07 14:26:02.559 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:26:02 np0005473739 nova_compute[259550]: 2025-10-07 14:26:02.624 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:02 np0005473739 nova_compute[259550]: 2025-10-07 14:26:02.625 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:02 np0005473739 nova_compute[259550]: 2025-10-07 14:26:02.632 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:26:02 np0005473739 nova_compute[259550]: 2025-10-07 14:26:02.633 2 INFO nova.compute.claims [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:26:02 np0005473739 nova_compute[259550]: 2025-10-07 14:26:02.732 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:26:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4291413713' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.174 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.180 2 DEBUG nova.compute.provider_tree [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.193 2 DEBUG nova.scheduler.client.report [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.212 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.587s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.213 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.268 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.268 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.291 2 INFO nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.309 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.438 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.439 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.440 2 INFO nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating image(s)#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.459 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.479 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.501 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.505 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.577 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.578 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.579 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.579 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.604 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.608 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.643 2 DEBUG nova.policy [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6b6ae9b333804dcc8e1ed82ba0879e32', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65245efcef84404ca2ccf82df5946696', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.913 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.305s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:03 np0005473739 nova_compute[259550]: 2025-10-07 14:26:03.974 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] resizing rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:26:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1889: 305 pgs: 305 active+clean; 41 MiB data, 684 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 85 B/s wr, 0 op/s
Oct  7 10:26:04 np0005473739 nova_compute[259550]: 2025-10-07 14:26:04.069 2 DEBUG nova.objects.instance [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'migration_context' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:26:04 np0005473739 nova_compute[259550]: 2025-10-07 14:26:04.087 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:26:04 np0005473739 nova_compute[259550]: 2025-10-07 14:26:04.088 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Ensure instance console log exists: /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:26:04 np0005473739 nova_compute[259550]: 2025-10-07 14:26:04.088 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:04 np0005473739 nova_compute[259550]: 2025-10-07 14:26:04.088 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:04 np0005473739 nova_compute[259550]: 2025-10-07 14:26:04.089 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:05 np0005473739 nova_compute[259550]: 2025-10-07 14:26:05.607 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Successfully created port: 8d87c94f-f436-4619-9ca7-9e116cab44bf _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:26:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1890: 305 pgs: 305 active+clean; 72 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 1.1 MiB/s wr, 2 op/s
Oct  7 10:26:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:06 np0005473739 nova_compute[259550]: 2025-10-07 14:26:06.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:06 np0005473739 nova_compute[259550]: 2025-10-07 14:26:06.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:07 np0005473739 nova_compute[259550]: 2025-10-07 14:26:07.690 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Successfully updated port: 8d87c94f-f436-4619-9ca7-9e116cab44bf _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:26:07 np0005473739 nova_compute[259550]: 2025-10-07 14:26:07.709 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:26:07 np0005473739 nova_compute[259550]: 2025-10-07 14:26:07.709 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:26:07 np0005473739 nova_compute[259550]: 2025-10-07 14:26:07.709 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:26:07 np0005473739 nova_compute[259550]: 2025-10-07 14:26:07.851 2 DEBUG nova.compute.manager [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:26:07 np0005473739 nova_compute[259550]: 2025-10-07 14:26:07.851 2 DEBUG nova.compute.manager [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing instance network info cache due to event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:26:07 np0005473739 nova_compute[259550]: 2025-10-07 14:26:07.851 2 DEBUG oslo_concurrency.lockutils [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:26:07 np0005473739 nova_compute[259550]: 2025-10-07 14:26:07.897 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:26:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1891: 305 pgs: 305 active+clean; 72 MiB data, 697 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 1.1 MiB/s wr, 2 op/s
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.059 2 DEBUG nova.network.neutron [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.080 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.081 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance network_info: |[{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.081 2 DEBUG oslo_concurrency.lockutils [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.082 2 DEBUG nova.network.neutron [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.085 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start _get_guest_xml network_info=[{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.091 2 WARNING nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.095 2 DEBUG nova.virt.libvirt.host [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.096 2 DEBUG nova.virt.libvirt.host [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.100 2 DEBUG nova.virt.libvirt.host [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.101 2 DEBUG nova.virt.libvirt.host [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.102 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.102 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.102 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.103 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.103 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.103 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.104 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.104 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.104 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.104 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.105 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.105 2 DEBUG nova.virt.hardware [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.108 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:26:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2254047907' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.537 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.429s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.568 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:09 np0005473739 nova_compute[259550]: 2025-10-07 14:26:09.574 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:26:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798753409' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.039 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.042 2 DEBUG nova.virt.libvirt.vif [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegative
TestJSON-1110185129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:26:03Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.042 2 DEBUG nova.network.os_vif_util [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.043 2 DEBUG nova.network.os_vif_util [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.045 2 DEBUG nova.objects.instance [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:26:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1892: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.065 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  <uuid>91c66dff-47e6-4b52-9e3f-d8c58d256bcf</uuid>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  <name>instance-00000066</name>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersNegativeTestJSON-server-1596186961</nova:name>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:26:09</nova:creationTime>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <nova:user uuid="6b6ae9b333804dcc8e1ed82ba0879e32">tempest-ServersNegativeTestJSON-1110185129-project-member</nova:user>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <nova:project uuid="65245efcef84404ca2ccf82df5946696">tempest-ServersNegativeTestJSON-1110185129</nova:project>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <nova:port uuid="8d87c94f-f436-4619-9ca7-9e116cab44bf">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <entry name="serial">91c66dff-47e6-4b52-9e3f-d8c58d256bcf</entry>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <entry name="uuid">91c66dff-47e6-4b52-9e3f-d8c58d256bcf</entry>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:ce:79:3a"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <target dev="tap8d87c94f-f4"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/console.log" append="off"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:26:10 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:26:10 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:26:10 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:26:10 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.066 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Preparing to wait for external event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.067 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.067 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.067 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.068 2 DEBUG nova.virt.libvirt.vif [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-Serve
rsNegativeTestJSON-1110185129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:26:03Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.069 2 DEBUG nova.network.os_vif_util [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.070 2 DEBUG nova.network.os_vif_util [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.070 2 DEBUG os_vif [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.072 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.073 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.076 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.076 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d87c94f-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.077 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d87c94f-f4, col_values=(('external_ids', {'iface-id': '8d87c94f-f436-4619-9ca7-9e116cab44bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:79:3a', 'vm-uuid': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:10 np0005473739 NetworkManager[44949]: <info>  [1759847170.0848] manager: (tap8d87c94f-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/412)
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.093 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.094 2 INFO os_vif [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4')#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.140 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.141 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.141 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No VIF found with MAC fa:16:3e:ce:79:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.141 2 INFO nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Using config drive#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.162 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.623 2 INFO nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating config drive at /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.630 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvjuyra1k execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.777 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvjuyra1k" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.809 2 DEBUG nova.storage.rbd_utils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.812 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.850 2 DEBUG nova.network.neutron [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updated VIF entry in instance network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.850 2 DEBUG nova.network.neutron [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:26:10 np0005473739 nova_compute[259550]: 2025-10-07 14:26:10.871 2 DEBUG oslo_concurrency.lockutils [req-9d6ff499-3ab0-4d7d-813d-a949720b8165 req-9db2846f-414a-4606-a14c-51a2c8f1b885 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.011 2 DEBUG oslo_concurrency.processutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.199s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.012 2 INFO nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deleting local config drive /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config because it was imported into RBD.#033[00m
Oct  7 10:26:11 np0005473739 kernel: tap8d87c94f-f4: entered promiscuous mode
Oct  7 10:26:11 np0005473739 NetworkManager[44949]: <info>  [1759847171.0668] manager: (tap8d87c94f-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/413)
Oct  7 10:26:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:11Z|01020|binding|INFO|Claiming lport 8d87c94f-f436-4619-9ca7-9e116cab44bf for this chassis.
Oct  7 10:26:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:11Z|01021|binding|INFO|8d87c94f-f436-4619-9ca7-9e116cab44bf: Claiming fa:16:3e:ce:79:3a 10.100.0.3
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.084 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.087 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a bound to our chassis#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.088 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df30eae3-80f8-4787-8c66-2dfab55da65a#033[00m
Oct  7 10:26:11 np0005473739 systemd-udevd[361476]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.102 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1e49ea7b-5169-4f84-85f3-30b54585f50b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.103 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf30eae3-81 in ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.105 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf30eae3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.105 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2be3f2-fe4d-4dd2-b8f1-1d7f69a3350e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.106 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[066fed09-61bd-4f7b-94c1-71e6651aa042]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 systemd-machined[214580]: New machine qemu-127-instance-00000066.
Oct  7 10:26:11 np0005473739 NetworkManager[44949]: <info>  [1759847171.1211] device (tap8d87c94f-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:26:11 np0005473739 NetworkManager[44949]: <info>  [1759847171.1219] device (tap8d87c94f-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.119 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b8d30e25-cb4d-4259-97f6-3727c379656d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.148 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ff320474-384e-44ae-8f48-b4446f43eb97]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 systemd[1]: Started Virtual Machine qemu-127-instance-00000066.
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:11Z|01022|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf ovn-installed in OVS
Oct  7 10:26:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:11Z|01023|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf up in Southbound
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.179 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6084680b-773d-4c11-811a-3c5d0bcf671a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.183 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de9d89de-c5cd-4d35-8f21-aa140f8686b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 NetworkManager[44949]: <info>  [1759847171.1850] manager: (tapdf30eae3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/414)
Oct  7 10:26:11 np0005473739 systemd-udevd[361480]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.213 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[374b87cf-c1d2-4c33-903f-577563ce3b7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.216 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a52458-b9bc-480e-ba92-06aaeb5e1c28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 NetworkManager[44949]: <info>  [1759847171.2391] device (tapdf30eae3-80): carrier: link connected
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.244 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4ef08277-656f-4c95-b8ef-5be82d5ac172]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.263 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17ad9bf4-55c2-4bda-afca-8284e65a0a06]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775480, 'reachable_time': 40114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361509, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.278 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb56d22-f3ef-4075-bbfe-874f957afba5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:c8ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775480, 'tstamp': 775480}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361510, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.295 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f2bdbb9f-3e5b-4811-b489-bd227c4e41c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775480, 'reachable_time': 40114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 361511, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.329 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9696227b-773d-4022-b52f-4f8d19d907c0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.399 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ecc0af9b-f74f-40ea-9e99-46718c30c006]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf30eae3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:11 np0005473739 NetworkManager[44949]: <info>  [1759847171.4041] manager: (tapdf30eae3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/415)
Oct  7 10:26:11 np0005473739 kernel: tapdf30eae3-80: entered promiscuous mode
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf30eae3-80, col_values=(('external_ids', {'iface-id': '36ffe8eb-a28d-46e1-ae56-324c81416e5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:11Z|01024|binding|INFO|Releasing lport 36ffe8eb-a28d-46e1-ae56-324c81416e5e from this chassis (sb_readonly=0)
Oct  7 10:26:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.425 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.425 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.426 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[750555fa-24e9-4b5f-b7db-672568b55643]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.427 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-df30eae3-80f8-4787-8c66-2dfab55da65a
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID df30eae3-80f8-4787-8c66-2dfab55da65a
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:26:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:11.427 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'env', 'PROCESS_TAG=haproxy-df30eae3-80f8-4787-8c66-2dfab55da65a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df30eae3-80f8-4787-8c66-2dfab55da65a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.563 2 DEBUG nova.compute.manager [req-77d9ec6e-6154-4965-89f6-7d7c0f526674 req-aa0a0727-0059-4391-bcc1-5751d8a245a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.563 2 DEBUG oslo_concurrency.lockutils [req-77d9ec6e-6154-4965-89f6-7d7c0f526674 req-aa0a0727-0059-4391-bcc1-5751d8a245a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.563 2 DEBUG oslo_concurrency.lockutils [req-77d9ec6e-6154-4965-89f6-7d7c0f526674 req-aa0a0727-0059-4391-bcc1-5751d8a245a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.564 2 DEBUG oslo_concurrency.lockutils [req-77d9ec6e-6154-4965-89f6-7d7c0f526674 req-aa0a0727-0059-4391-bcc1-5751d8a245a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.564 2 DEBUG nova.compute.manager [req-77d9ec6e-6154-4965-89f6-7d7c0f526674 req-aa0a0727-0059-4391-bcc1-5751d8a245a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Processing event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:26:11 np0005473739 nova_compute[259550]: 2025-10-07 14:26:11.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:11 np0005473739 podman[361542]: 2025-10-07 14:26:11.78671946 +0000 UTC m=+0.050431718 container create d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:26:11 np0005473739 systemd[1]: Started libpod-conmon-d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a.scope.
Oct  7 10:26:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:26:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d441a2678501fe529d289bed0f151b8829262887c81920cdf574184ba61c8960/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:11 np0005473739 podman[361542]: 2025-10-07 14:26:11.755846315 +0000 UTC m=+0.019558603 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:26:11 np0005473739 podman[361542]: 2025-10-07 14:26:11.866173663 +0000 UTC m=+0.129885981 container init d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:26:11 np0005473739 podman[361542]: 2025-10-07 14:26:11.871069584 +0000 UTC m=+0.134781862 container start d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 10:26:11 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [NOTICE]   (361561) : New worker (361563) forked
Oct  7 10:26:11 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [NOTICE]   (361561) : Loading success.
Oct  7 10:26:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1893: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.455 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847173.4550266, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.456 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Started (Lifecycle Event)#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.458 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.463 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.466 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance spawned successfully.#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.466 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.481 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.486 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.491 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.491 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.492 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.492 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.492 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.493 2 DEBUG nova.virt.libvirt.driver [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.514 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.514 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847173.4582098, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.514 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.548 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.551 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847173.4621363, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.551 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.556 2 INFO nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Took 10.12 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.557 2 DEBUG nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.565 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.567 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.599 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.624 2 INFO nova.compute.manager [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Took 11.03 seconds to build instance.#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.643 2 DEBUG oslo_concurrency.lockutils [None req-d7b8039a-4065-43da-bb9b-95fd2d9f71e0 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.096s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.701 2 DEBUG nova.compute.manager [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.701 2 DEBUG oslo_concurrency.lockutils [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.701 2 DEBUG oslo_concurrency.lockutils [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.702 2 DEBUG oslo_concurrency.lockutils [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.702 2 DEBUG nova.compute.manager [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:26:13 np0005473739 nova_compute[259550]: 2025-10-07 14:26:13.702 2 WARNING nova.compute.manager [req-3de53637-3c16-4db5-9a66-040b177d9ea2 req-a07192bc-9cb7-47e9-84ca-9bdbe5a1ab32 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state active and task_state None.#033[00m
Oct  7 10:26:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1894: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct  7 10:26:14 np0005473739 podman[361615]: 2025-10-07 14:26:14.075800505 +0000 UTC m=+0.059395928 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:26:14 np0005473739 podman[361614]: 2025-10-07 14:26:14.077405348 +0000 UTC m=+0.061530835 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:26:15 np0005473739 nova_compute[259550]: 2025-10-07 14:26:15.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1895: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 76 op/s
Oct  7 10:26:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:16 np0005473739 nova_compute[259550]: 2025-10-07 14:26:16.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:17 np0005473739 nova_compute[259550]: 2025-10-07 14:26:17.672 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:17 np0005473739 nova_compute[259550]: 2025-10-07 14:26:17.673 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:17 np0005473739 nova_compute[259550]: 2025-10-07 14:26:17.690 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:26:17 np0005473739 nova_compute[259550]: 2025-10-07 14:26:17.767 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:17 np0005473739 nova_compute[259550]: 2025-10-07 14:26:17.768 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:17 np0005473739 nova_compute[259550]: 2025-10-07 14:26:17.781 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:26:17 np0005473739 nova_compute[259550]: 2025-10-07 14:26:17.782 2 INFO nova.compute.claims [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:26:17 np0005473739 nova_compute[259550]: 2025-10-07 14:26:17.911 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1896: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 731 KiB/s wr, 74 op/s
Oct  7 10:26:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:26:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1737943312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.354 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.361 2 DEBUG nova.compute.provider_tree [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.380 2 DEBUG nova.scheduler.client.report [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.417 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.418 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.481 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.481 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.501 2 INFO nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.515 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.606 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.608 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.608 2 INFO nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Creating image(s)#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.633 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.662 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.690 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.694 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.746 2 DEBUG nova.policy [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6b6ae9b333804dcc8e1ed82ba0879e32', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '65245efcef84404ca2ccf82df5946696', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.787 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.788 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.789 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.789 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.817 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:18 np0005473739 nova_compute[259550]: 2025-10-07 14:26:18.821 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:19 np0005473739 nova_compute[259550]: 2025-10-07 14:26:19.120 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.299s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:19 np0005473739 nova_compute[259550]: 2025-10-07 14:26:19.184 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] resizing rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:26:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:19.201 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=26, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=25) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:26:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:19.202 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:26:19 np0005473739 nova_compute[259550]: 2025-10-07 14:26:19.215 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:19 np0005473739 nova_compute[259550]: 2025-10-07 14:26:19.330 2 DEBUG nova.objects.instance [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'migration_context' on Instance uuid e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:26:19 np0005473739 nova_compute[259550]: 2025-10-07 14:26:19.345 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:26:19 np0005473739 nova_compute[259550]: 2025-10-07 14:26:19.346 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Ensure instance console log exists: /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:26:19 np0005473739 nova_compute[259550]: 2025-10-07 14:26:19.346 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:19 np0005473739 nova_compute[259550]: 2025-10-07 14:26:19.347 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:19 np0005473739 nova_compute[259550]: 2025-10-07 14:26:19.347 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:19 np0005473739 nova_compute[259550]: 2025-10-07 14:26:19.674 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Successfully created port: 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:26:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1897: 305 pgs: 305 active+clean; 88 MiB data, 705 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 731 KiB/s wr, 98 op/s
Oct  7 10:26:20 np0005473739 nova_compute[259550]: 2025-10-07 14:26:20.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:21 np0005473739 nova_compute[259550]: 2025-10-07 14:26:21.078 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Successfully updated port: 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:26:21 np0005473739 nova_compute[259550]: 2025-10-07 14:26:21.108 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:26:21 np0005473739 nova_compute[259550]: 2025-10-07 14:26:21.109 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquired lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:26:21 np0005473739 nova_compute[259550]: 2025-10-07 14:26:21.109 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:26:21 np0005473739 nova_compute[259550]: 2025-10-07 14:26:21.264 2 DEBUG nova.compute.manager [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-changed-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:26:21 np0005473739 nova_compute[259550]: 2025-10-07 14:26:21.265 2 DEBUG nova.compute.manager [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Refreshing instance network info cache due to event network-changed-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:26:21 np0005473739 nova_compute[259550]: 2025-10-07 14:26:21.265 2 DEBUG oslo_concurrency.lockutils [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:26:21 np0005473739 nova_compute[259550]: 2025-10-07 14:26:21.314 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:26:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:21 np0005473739 nova_compute[259550]: 2025-10-07 14:26:21.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1898: 305 pgs: 305 active+clean; 114 MiB data, 714 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 741 KiB/s wr, 98 op/s
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.201 2 DEBUG nova.network.neutron [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Updating instance_info_cache with network_info: [{"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.219 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Releasing lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.220 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Instance network_info: |[{"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.221 2 DEBUG oslo_concurrency.lockutils [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.221 2 DEBUG nova.network.neutron [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Refreshing network info cache for port 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.225 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Start _get_guest_xml network_info=[{"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.230 2 WARNING nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.240 2 DEBUG nova.virt.libvirt.host [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.241 2 DEBUG nova.virt.libvirt.host [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.251 2 DEBUG nova.virt.libvirt.host [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.252 2 DEBUG nova.virt.libvirt.host [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.253 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.253 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.254 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.254 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.254 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.255 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.255 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.255 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.256 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.256 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.256 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.257 2 DEBUG nova.virt.hardware [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.259 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:26:22
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'images', 'default.rgw.log', '.mgr', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'vms', 'volumes', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:26:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:26:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1336973826' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.709 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.736 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:22 np0005473739 nova_compute[259550]: 2025-10-07 14:26:22.741 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:26:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:26:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:26:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676300460' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.206 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.209 2 DEBUG nova.virt.libvirt.vif [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1332650014',display_name='tempest-ServersNegativeTestJSON-server-1332650014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1332650014',id=103,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-f0g4mxvo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegative
TestJSON-1110185129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:26:18Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.209 2 DEBUG nova.network.os_vif_util [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.210 2 DEBUG nova.network.os_vif_util [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.211 2 DEBUG nova.objects.instance [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_devices' on Instance uuid e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.225 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  <uuid>e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d</uuid>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  <name>instance-00000067</name>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersNegativeTestJSON-server-1332650014</nova:name>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:26:22</nova:creationTime>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <nova:user uuid="6b6ae9b333804dcc8e1ed82ba0879e32">tempest-ServersNegativeTestJSON-1110185129-project-member</nova:user>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <nova:project uuid="65245efcef84404ca2ccf82df5946696">tempest-ServersNegativeTestJSON-1110185129</nova:project>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <nova:port uuid="55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <entry name="serial">e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d</entry>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <entry name="uuid">e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d</entry>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:26:d7:24"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <target dev="tap55a0a0d7-af"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/console.log" append="off"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:26:23 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:26:23 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:26:23 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:26:23 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.230 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Preparing to wait for external event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.231 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.231 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.231 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.232 2 DEBUG nova.virt.libvirt.vif [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1332650014',display_name='tempest-ServersNegativeTestJSON-server-1332650014',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1332650014',id=103,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-f0g4mxvo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-Serve
rsNegativeTestJSON-1110185129-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:26:18Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.232 2 DEBUG nova.network.os_vif_util [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.233 2 DEBUG nova.network.os_vif_util [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.234 2 DEBUG os_vif [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.235 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.238 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap55a0a0d7-af, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap55a0a0d7-af, col_values=(('external_ids', {'iface-id': '55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:26:d7:24', 'vm-uuid': 'e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:23 np0005473739 NetworkManager[44949]: <info>  [1759847183.2417] manager: (tap55a0a0d7-af): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/416)
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.247 2 INFO os_vif [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af')#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.299 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.300 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.300 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No VIF found with MAC fa:16:3e:26:d7:24, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.301 2 INFO nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Using config drive#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.322 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.763 2 INFO nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Creating config drive at /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.768 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3a5c5he execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.912 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc3a5c5he" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.939 2 DEBUG nova.storage.rbd_utils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:26:23 np0005473739 nova_compute[259550]: 2025-10-07 14:26:23.943 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:24 np0005473739 nova_compute[259550]: 2025-10-07 14:26:24.025 2 DEBUG nova.network.neutron [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Updated VIF entry in instance network info cache for port 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:26:24 np0005473739 nova_compute[259550]: 2025-10-07 14:26:24.026 2 DEBUG nova.network.neutron [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Updating instance_info_cache with network_info: [{"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:26:24 np0005473739 nova_compute[259550]: 2025-10-07 14:26:24.049 2 DEBUG oslo_concurrency.lockutils [req-fe3983fe-17b2-46a0-b6bc-aed476f18a23 req-cc2ec416-9f92-4502-ad37-e74d0d00c842 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:26:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1899: 305 pgs: 305 active+clean; 134 MiB data, 727 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  7 10:26:24 np0005473739 nova_compute[259550]: 2025-10-07 14:26:24.118 2 DEBUG oslo_concurrency.processutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:24 np0005473739 nova_compute[259550]: 2025-10-07 14:26:24.119 2 INFO nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Deleting local config drive /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d/disk.config because it was imported into RBD.#033[00m
Oct  7 10:26:24 np0005473739 kernel: tap55a0a0d7-af: entered promiscuous mode
Oct  7 10:26:24 np0005473739 NetworkManager[44949]: <info>  [1759847184.1667] manager: (tap55a0a0d7-af): new Tun device (/org/freedesktop/NetworkManager/Devices/417)
Oct  7 10:26:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:24Z|01025|binding|INFO|Claiming lport 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac for this chassis.
Oct  7 10:26:24 np0005473739 nova_compute[259550]: 2025-10-07 14:26:24.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:24Z|01026|binding|INFO|55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac: Claiming fa:16:3e:26:d7:24 10.100.0.10
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.175 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:d7:24 10.100.0.10'], port_security=['fa:16:3e:26:d7:24 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.177 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac in datapath df30eae3-80f8-4787-8c66-2dfab55da65a bound to our chassis#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.178 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df30eae3-80f8-4787-8c66-2dfab55da65a#033[00m
Oct  7 10:26:24 np0005473739 nova_compute[259550]: 2025-10-07 14:26:24.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:24Z|01027|binding|INFO|Setting lport 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac ovn-installed in OVS
Oct  7 10:26:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:24Z|01028|binding|INFO|Setting lport 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac up in Southbound
Oct  7 10:26:24 np0005473739 nova_compute[259550]: 2025-10-07 14:26:24.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.196 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab0b3712-fa87-47a3-a6c0-ab084386cc6d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:24 np0005473739 systemd-machined[214580]: New machine qemu-128-instance-00000067.
Oct  7 10:26:24 np0005473739 systemd-udevd[361978]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:26:24 np0005473739 systemd[1]: Started Virtual Machine qemu-128-instance-00000067.
Oct  7 10:26:24 np0005473739 NetworkManager[44949]: <info>  [1759847184.2264] device (tap55a0a0d7-af): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:26:24 np0005473739 NetworkManager[44949]: <info>  [1759847184.2275] device (tap55a0a0d7-af): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.228 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e69335b1-8f58-445f-854e-c1406e204aed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.231 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eed1a83f-1299-4d36-8783-ed7eb31a3418]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.263 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b2bfba3d-fe6f-474b-8fb2-863ac9f2af39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.280 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf8cc8f-0731-4141-8e8c-2a5a1afa25d8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 6, 'rx_bytes': 832, 'tx_bytes': 440, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775480, 'reachable_time': 40114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 361988, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.297 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[22bf9384-0d39-4b7c-9415-c4ff864e8a2f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdf30eae3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775491, 'tstamp': 775491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361990, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdf30eae3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775495, 'tstamp': 775495}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 361990, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.298 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:24 np0005473739 nova_compute[259550]: 2025-10-07 14:26:24.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.301 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf30eae3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.302 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.302 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf30eae3-80, col_values=(('external_ids', {'iface-id': '36ffe8eb-a28d-46e1-ae56-324c81416e5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:24.302 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:26:25 np0005473739 nova_compute[259550]: 2025-10-07 14:26:25.128 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847185.1281853, e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:26:25 np0005473739 nova_compute[259550]: 2025-10-07 14:26:25.129 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] VM Started (Lifecycle Event)#033[00m
Oct  7 10:26:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e258 do_prune osdmap full prune enabled
Oct  7 10:26:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e259 e259: 3 total, 3 up, 3 in
Oct  7 10:26:25 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e259: 3 total, 3 up, 3 in
Oct  7 10:26:25 np0005473739 nova_compute[259550]: 2025-10-07 14:26:25.148 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:26:25 np0005473739 nova_compute[259550]: 2025-10-07 14:26:25.153 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847185.129251, e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:26:25 np0005473739 nova_compute[259550]: 2025-10-07 14:26:25.153 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:26:25 np0005473739 nova_compute[259550]: 2025-10-07 14:26:25.175 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:26:25 np0005473739 nova_compute[259550]: 2025-10-07 14:26:25.181 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:26:25 np0005473739 nova_compute[259550]: 2025-10-07 14:26:25.209 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:26:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:25Z|00107|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:ce:79:3a 10.100.0.3
Oct  7 10:26:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:25Z|00108|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ce:79:3a 10.100.0.3
Oct  7 10:26:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1901: 305 pgs: 305 active+clean; 142 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 3.3 MiB/s wr, 102 op/s
Oct  7 10:26:26 np0005473739 podman[362034]: 2025-10-07 14:26:26.089402653 +0000 UTC m=+0.066291682 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:26:26 np0005473739 podman[362035]: 2025-10-07 14:26:26.122423436 +0000 UTC m=+0.099689136 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:26:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e259 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:26 np0005473739 nova_compute[259550]: 2025-10-07 14:26:26.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e259 do_prune osdmap full prune enabled
Oct  7 10:26:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e260 e260: 3 total, 3 up, 3 in
Oct  7 10:26:27 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e260: 3 total, 3 up, 3 in
Oct  7 10:26:27 np0005473739 nova_compute[259550]: 2025-10-07 14:26:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:26:27 np0005473739 nova_compute[259550]: 2025-10-07 14:26:27.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 10:26:28 np0005473739 nova_compute[259550]: 2025-10-07 14:26:28.005 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 10:26:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1903: 305 pgs: 305 active+clean; 142 MiB data, 738 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 4.1 MiB/s wr, 91 op/s
Oct  7 10:26:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:28.204 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '26'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:28 np0005473739 nova_compute[259550]: 2025-10-07 14:26:28.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e260 do_prune osdmap full prune enabled
Oct  7 10:26:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e261 e261: 3 total, 3 up, 3 in
Oct  7 10:26:29 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e261: 3 total, 3 up, 3 in
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.905 2 DEBUG nova.compute.manager [req-e8c270e4-db35-4dab-8e31-01287cf063c4 req-357a9ebc-6d5b-4005-9233-4cda1c0fedb8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.906 2 DEBUG oslo_concurrency.lockutils [req-e8c270e4-db35-4dab-8e31-01287cf063c4 req-357a9ebc-6d5b-4005-9233-4cda1c0fedb8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.906 2 DEBUG oslo_concurrency.lockutils [req-e8c270e4-db35-4dab-8e31-01287cf063c4 req-357a9ebc-6d5b-4005-9233-4cda1c0fedb8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.907 2 DEBUG oslo_concurrency.lockutils [req-e8c270e4-db35-4dab-8e31-01287cf063c4 req-357a9ebc-6d5b-4005-9233-4cda1c0fedb8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.907 2 DEBUG nova.compute.manager [req-e8c270e4-db35-4dab-8e31-01287cf063c4 req-357a9ebc-6d5b-4005-9233-4cda1c0fedb8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Processing event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.908 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Instance event wait completed in 4 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.912 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847189.9117203, e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.912 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.914 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.919 2 INFO nova.virt.libvirt.driver [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Instance spawned successfully.#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.920 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.937 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.940 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.948 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.948 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.949 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.949 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.949 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.950 2 DEBUG nova.virt.libvirt.driver [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:26:29 np0005473739 nova_compute[259550]: 2025-10-07 14:26:29.976 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:26:30 np0005473739 nova_compute[259550]: 2025-10-07 14:26:30.011 2 INFO nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Took 11.40 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:26:30 np0005473739 nova_compute[259550]: 2025-10-07 14:26:30.012 2 DEBUG nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:26:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1905: 305 pgs: 305 active+clean; 162 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 678 KiB/s rd, 4.2 MiB/s wr, 212 op/s
Oct  7 10:26:30 np0005473739 nova_compute[259550]: 2025-10-07 14:26:30.075 2 INFO nova.compute.manager [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Took 12.34 seconds to build instance.#033[00m
Oct  7 10:26:30 np0005473739 nova_compute[259550]: 2025-10-07 14:26:30.169 2 DEBUG oslo_concurrency.lockutils [None req-c13fb213-a426-436f-8de9-75bb7b228e6d 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e261 do_prune osdmap full prune enabled
Oct  7 10:26:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e262 e262: 3 total, 3 up, 3 in
Oct  7 10:26:31 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e262: 3 total, 3 up, 3 in
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.381 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.382 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.382 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.382 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.382 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.383 2 INFO nova.compute.manager [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Terminating instance#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.385 2 DEBUG nova.compute.manager [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:26:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:31 np0005473739 kernel: tap55a0a0d7-af (unregistering): left promiscuous mode
Oct  7 10:26:31 np0005473739 NetworkManager[44949]: <info>  [1759847191.4322] device (tap55a0a0d7-af): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:26:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:31Z|01029|binding|INFO|Releasing lport 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac from this chassis (sb_readonly=0)
Oct  7 10:26:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:31Z|01030|binding|INFO|Setting lport 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac down in Southbound
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:26:31Z|01031|binding|INFO|Removing iface tap55a0a0d7-af ovn-installed in OVS
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.449 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:26:d7:24 10.100.0.10'], port_security=['fa:16:3e:26:d7:24 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.450 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac in datapath df30eae3-80f8-4787-8c66-2dfab55da65a unbound from our chassis#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.452 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df30eae3-80f8-4787-8c66-2dfab55da65a#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4de366b-230c-4130-ad37-8398baf8a577]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.505 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c9ffb6d5-1acb-4c10-b7ea-77981c6f5b5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.509 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a6e25820-40a4-4dd4-8db5-ab8bda454a11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:31 np0005473739 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000067.scope: Deactivated successfully.
Oct  7 10:26:31 np0005473739 systemd[1]: machine-qemu\x2d128\x2dinstance\x2d00000067.scope: Consumed 2.398s CPU time.
Oct  7 10:26:31 np0005473739 systemd-machined[214580]: Machine qemu-128-instance-00000067 terminated.
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.546 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1cef83af-2bcc-48ae-850b-475d3cbbb5a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.570 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fe7e446c-9873-43a0-9761-ce8669073547]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 874, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 8, 'rx_bytes': 874, 'tx_bytes': 524, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 296], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775480, 'reachable_time': 40114, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 300, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 300, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 362086, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.593 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8991129a-7cc6-4436-8be7-62be64842082]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapdf30eae3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775491, 'tstamp': 775491}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362087, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapdf30eae3-81'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 775495, 'tstamp': 775495}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 362087, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.605 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf30eae3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.606 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.606 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf30eae3-80, col_values=(('external_ids', {'iface-id': '36ffe8eb-a28d-46e1-ae56-324c81416e5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:26:31.607 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.623 2 INFO nova.virt.libvirt.driver [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Instance destroyed successfully.#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.624 2 DEBUG nova.objects.instance [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'resources' on Instance uuid e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.639 2 DEBUG nova.virt.libvirt.vif [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:26:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1332650014',display_name='tempest-ServersNegativeTestJSON-server-1332650014',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1332650014',id=103,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:26:30Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-f0g4mxvo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',imag
e_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:26:30Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.639 2 DEBUG nova.network.os_vif_util [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "address": "fa:16:3e:26:d7:24", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap55a0a0d7-af", "ovs_interfaceid": "55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.640 2 DEBUG nova.network.os_vif_util [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.640 2 DEBUG os_vif [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.644 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap55a0a0d7-af, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.650 2 INFO os_vif [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:26:d7:24,bridge_name='br-int',has_traffic_filtering=True,id=55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap55a0a0d7-af')#033[00m
Oct  7 10:26:31 np0005473739 nova_compute[259550]: 2025-10-07 14:26:31.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1907: 305 pgs: 305 active+clean; 167 MiB data, 752 MiB used, 59 GiB / 60 GiB avail; 721 KiB/s rd, 2.3 MiB/s wr, 186 op/s
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.093 2 DEBUG nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.094 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.094 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.095 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.095 2 DEBUG nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] No waiting events found dispatching network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.096 2 WARNING nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received unexpected event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.096 2 DEBUG nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-unplugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.096 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.097 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.098 2 DEBUG oslo_concurrency.lockutils [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.098 2 DEBUG nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] No waiting events found dispatching network-vif-unplugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.099 2 DEBUG nova.compute.manager [req-c9f6053a-2e22-46f6-a631-1d7ff3fb8c99 req-02bbf832-1ad9-47b4-afc9-6124be1f036e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-unplugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.120 2 INFO nova.virt.libvirt.driver [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Deleting instance files /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_del#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.120 2 INFO nova.virt.libvirt.driver [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Deletion of /var/lib/nova/instances/e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d_del complete#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.213 2 INFO nova.compute.manager [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Took 0.83 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.214 2 DEBUG oslo.service.loopingcall [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.214 2 DEBUG nova.compute.manager [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:26:32 np0005473739 nova_compute[259550]: 2025-10-07 14:26:32.214 2 DEBUG nova.network.neutron [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.001104379822719281 of space, bias 1.0, pg target 0.33131394681578435 quantized to 32 (current 32)
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006664306179592368 of space, bias 1.0, pg target 0.19992918538777105 quantized to 32 (current 32)
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:26:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:26:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:26:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4233695989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:26:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:26:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4233695989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:26:33 np0005473739 nova_compute[259550]: 2025-10-07 14:26:33.693 2 DEBUG nova.network.neutron [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:26:33 np0005473739 nova_compute[259550]: 2025-10-07 14:26:33.754 2 INFO nova.compute.manager [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Took 1.54 seconds to deallocate network for instance.#033[00m
Oct  7 10:26:33 np0005473739 nova_compute[259550]: 2025-10-07 14:26:33.820 2 DEBUG nova.compute.manager [req-6aadea6d-6437-4fc9-b4cb-0983f1ad0245 req-c5169e4e-e8eb-42d5-94b9-3c61873cf495 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-deleted-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:26:33 np0005473739 nova_compute[259550]: 2025-10-07 14:26:33.880 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:33 np0005473739 nova_compute[259550]: 2025-10-07 14:26:33.881 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:33 np0005473739 nova_compute[259550]: 2025-10-07 14:26:33.960 2 DEBUG oslo_concurrency.processutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1908: 305 pgs: 305 active+clean; 147 MiB data, 739 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 2.0 MiB/s wr, 296 op/s
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.258 2 DEBUG nova.compute.manager [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.258 2 DEBUG oslo_concurrency.lockutils [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.258 2 DEBUG oslo_concurrency.lockutils [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.258 2 DEBUG oslo_concurrency.lockutils [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.259 2 DEBUG nova.compute.manager [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] No waiting events found dispatching network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.259 2 WARNING nova.compute.manager [req-c93a78bd-3202-45ce-bf8d-1be45a5ecedf req-7d3e61d2-fbe5-413c-bd16-270da0688e91 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Received unexpected event network-vif-plugged-55a0a0d7-af33-41a7-a09d-16dd2ea9e7ac for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:26:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:26:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3102534354' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.472 2 DEBUG oslo_concurrency.processutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.479 2 DEBUG nova.compute.provider_tree [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.494 2 DEBUG nova.scheduler.client.report [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.527 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.646s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.556 2 INFO nova.scheduler.client.report [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Deleted allocations for instance e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d
Oct  7 10:26:34 np0005473739 nova_compute[259550]: 2025-10-07 14:26:34.632 2 DEBUG oslo_concurrency.lockutils [None req-7a2621a0-6b49-46a8-9925-b15f0a3f3872 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #87. Immutable memtables: 0.
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.454551) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 49] Flushing memtable with next log file: 87
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195454644, "job": 49, "event": "flush_started", "num_memtables": 1, "num_entries": 1864, "num_deletes": 261, "total_data_size": 2800524, "memory_usage": 2854848, "flush_reason": "Manual Compaction"}
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 49] Level-0 flush table #88: started
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195476589, "cf_name": "default", "job": 49, "event": "table_file_creation", "file_number": 88, "file_size": 2735554, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 38315, "largest_seqno": 40178, "table_properties": {"data_size": 2726903, "index_size": 5337, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2245, "raw_key_size": 18250, "raw_average_key_size": 20, "raw_value_size": 2709520, "raw_average_value_size": 3075, "num_data_blocks": 235, "num_entries": 881, "num_filter_entries": 881, "num_deletions": 261, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847037, "oldest_key_time": 1759847037, "file_creation_time": 1759847195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 88, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 49] Flush lasted 22080 microseconds, and 6109 cpu microseconds.
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.476644) [db/flush_job.cc:967] [default] [JOB 49] Level-0 flush table #88: 2735554 bytes OK
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.476665) [db/memtable_list.cc:519] [default] Level-0 commit table #88 started
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.485650) [db/memtable_list.cc:722] [default] Level-0 commit table #88: memtable #1 done
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.485676) EVENT_LOG_v1 {"time_micros": 1759847195485667, "job": 49, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.485732) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 49] Try to delete WAL files size 2792411, prev total WAL file size 2792411, number of live WAL files 2.
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000084.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.486700) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033353134' seq:72057594037927935, type:22 .. '7061786F730033373636' seq:0, type:0; will stop at (end)
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 50] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 49 Base level 0, inputs: [88(2671KB)], [86(8342KB)]
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195486802, "job": 50, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [88], "files_L6": [86], "score": -1, "input_data_size": 11278460, "oldest_snapshot_seqno": -1}
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 50] Generated table #89: 6463 keys, 9649569 bytes, temperature: kUnknown
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195562850, "cf_name": "default", "job": 50, "event": "table_file_creation", "file_number": 89, "file_size": 9649569, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9605087, "index_size": 27220, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 164328, "raw_average_key_size": 25, "raw_value_size": 9488079, "raw_average_value_size": 1468, "num_data_blocks": 1095, "num_entries": 6463, "num_filter_entries": 6463, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847195, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 89, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.563289) [db/compaction/compaction_job.cc:1663] [default] [JOB 50] Compacted 1@0 + 1@6 files to L6 => 9649569 bytes
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.564764) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.9 rd, 126.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.6, 8.1 +0.0 blob) out(9.2 +0.0 blob), read-write-amplify(7.7) write-amplify(3.5) OK, records in: 6992, records dropped: 529 output_compression: NoCompression
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.564792) EVENT_LOG_v1 {"time_micros": 1759847195564779, "job": 50, "event": "compaction_finished", "compaction_time_micros": 76262, "compaction_time_cpu_micros": 26577, "output_level": 6, "num_output_files": 1, "total_output_size": 9649569, "num_input_records": 6992, "num_output_records": 6463, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000088.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195566047, "job": 50, "event": "table_file_deletion", "file_number": 88}
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000086.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847195569135, "job": 50, "event": "table_file_deletion", "file_number": 86}
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.486526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.569318) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.569332) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.569334) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.569336) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:26:35 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:26:35.569338) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:26:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1909: 305 pgs: 305 active+clean; 121 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.2 MiB/s rd, 1.8 MiB/s wr, 314 op/s
Oct  7 10:26:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e262 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e262 do_prune osdmap full prune enabled
Oct  7 10:26:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e263 e263: 3 total, 3 up, 3 in
Oct  7 10:26:36 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e263: 3 total, 3 up, 3 in
Oct  7 10:26:36 np0005473739 nova_compute[259550]: 2025-10-07 14:26:36.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:26:36 np0005473739 nova_compute[259550]: 2025-10-07 14:26:36.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:26:38 np0005473739 nova_compute[259550]: 2025-10-07 14:26:38.004 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:26:38 np0005473739 nova_compute[259550]: 2025-10-07 14:26:38.005 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:26:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1911: 305 pgs: 305 active+clean; 121 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 114 KiB/s wr, 205 op/s
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:26:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1a96c624-057f-4e87-8ca1-7aa15a409cb0 does not exist
Oct  7 10:26:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8e414cfa-6513-4b37-994b-8ec0c1cc6103 does not exist
Oct  7 10:26:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7536aeaf-db4c-46b6-81a7-6cbaeda6baf8 does not exist
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:26:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:26:38 np0005473739 nova_compute[259550]: 2025-10-07 14:26:38.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:26:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:26:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:26:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:26:39 np0005473739 podman[362410]: 2025-10-07 14:26:39.322677593 +0000 UTC m=+0.037103702 container create b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 10:26:39 np0005473739 systemd[1]: Started libpod-conmon-b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792.scope.
Oct  7 10:26:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:26:39 np0005473739 podman[362410]: 2025-10-07 14:26:39.304319623 +0000 UTC m=+0.018745772 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:26:39 np0005473739 podman[362410]: 2025-10-07 14:26:39.409204116 +0000 UTC m=+0.123630265 container init b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:26:39 np0005473739 podman[362410]: 2025-10-07 14:26:39.419138741 +0000 UTC m=+0.133564860 container start b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 10:26:39 np0005473739 podman[362410]: 2025-10-07 14:26:39.423423845 +0000 UTC m=+0.137849984 container attach b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 10:26:39 np0005473739 inspiring_liskov[362426]: 167 167
Oct  7 10:26:39 np0005473739 systemd[1]: libpod-b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792.scope: Deactivated successfully.
Oct  7 10:26:39 np0005473739 podman[362410]: 2025-10-07 14:26:39.42582946 +0000 UTC m=+0.140255599 container died b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:26:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ec523373747ed95c79223a713f48961293f3a656bdf996dabc2a30666c8e0844-merged.mount: Deactivated successfully.
Oct  7 10:26:39 np0005473739 podman[362410]: 2025-10-07 14:26:39.468337966 +0000 UTC m=+0.182764085 container remove b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_liskov, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:26:39 np0005473739 systemd[1]: libpod-conmon-b67c5e3a78fd75899fdf8443c61d10aa788b16c98a4d2c20e5abb31c0f9e4792.scope: Deactivated successfully.
Oct  7 10:26:39 np0005473739 podman[362449]: 2025-10-07 14:26:39.687444711 +0000 UTC m=+0.065885052 container create af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 10:26:39 np0005473739 systemd[1]: Started libpod-conmon-af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa.scope.
Oct  7 10:26:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:26:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:39 np0005473739 podman[362449]: 2025-10-07 14:26:39.663419379 +0000 UTC m=+0.041859790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:26:39 np0005473739 podman[362449]: 2025-10-07 14:26:39.780288701 +0000 UTC m=+0.158729042 container init af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 10:26:39 np0005473739 podman[362449]: 2025-10-07 14:26:39.787006301 +0000 UTC m=+0.165446612 container start af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 10:26:39 np0005473739 podman[362449]: 2025-10-07 14:26:39.843171852 +0000 UTC m=+0.221612213 container attach af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:26:39 np0005473739 nova_compute[259550]: 2025-10-07 14:26:39.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:26:39 np0005473739 nova_compute[259550]: 2025-10-07 14:26:39.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  7 10:26:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1912: 305 pgs: 305 active+clean; 121 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 21 KiB/s wr, 160 op/s
Oct  7 10:26:40 np0005473739 angry_grothendieck[362465]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:26:40 np0005473739 angry_grothendieck[362465]: --> relative data size: 1.0
Oct  7 10:26:40 np0005473739 angry_grothendieck[362465]: --> All data devices are unavailable
Oct  7 10:26:40 np0005473739 systemd[1]: libpod-af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa.scope: Deactivated successfully.
Oct  7 10:26:40 np0005473739 podman[362449]: 2025-10-07 14:26:40.832545358 +0000 UTC m=+1.210985709 container died af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:26:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b2a458b86aeeebf39b82042190ea8efb6c3e5fa3ab26d469220961e761c1569f-merged.mount: Deactivated successfully.
Oct  7 10:26:40 np0005473739 podman[362449]: 2025-10-07 14:26:40.888109173 +0000 UTC m=+1.266549494 container remove af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 10:26:40 np0005473739 systemd[1]: libpod-conmon-af45ea02b00a184f21541ea126f1bf97b0d0ccbb102ca834522f8d0c4432abfa.scope: Deactivated successfully.
Oct  7 10:26:40 np0005473739 nova_compute[259550]: 2025-10-07 14:26:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:26:40 np0005473739 nova_compute[259550]: 2025-10-07 14:26:40.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 10:26:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e263 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e263 do_prune osdmap full prune enabled
Oct  7 10:26:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 e264: 3 total, 3 up, 3 in
Oct  7 10:26:41 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e264: 3 total, 3 up, 3 in
Oct  7 10:26:41 np0005473739 podman[362644]: 2025-10-07 14:26:41.470562376 +0000 UTC m=+0.050948462 container create b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:26:41 np0005473739 systemd[1]: Started libpod-conmon-b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8.scope.
Oct  7 10:26:41 np0005473739 podman[362644]: 2025-10-07 14:26:41.443181845 +0000 UTC m=+0.023567961 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:26:41 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:26:41 np0005473739 nova_compute[259550]: 2025-10-07 14:26:41.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:41 np0005473739 podman[362644]: 2025-10-07 14:26:41.672922684 +0000 UTC m=+0.253308780 container init b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 10:26:41 np0005473739 podman[362644]: 2025-10-07 14:26:41.679760006 +0000 UTC m=+0.260146092 container start b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:26:41 np0005473739 systemd[1]: libpod-b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8.scope: Deactivated successfully.
Oct  7 10:26:41 np0005473739 peaceful_meitner[362661]: 167 167
Oct  7 10:26:41 np0005473739 conmon[362661]: conmon b440b574f73aa1e3139b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8.scope/container/memory.events
Oct  7 10:26:41 np0005473739 podman[362644]: 2025-10-07 14:26:41.700972633 +0000 UTC m=+0.281358719 container attach b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 10:26:41 np0005473739 podman[362644]: 2025-10-07 14:26:41.701720812 +0000 UTC m=+0.282106898 container died b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:26:41 np0005473739 nova_compute[259550]: 2025-10-07 14:26:41.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:41 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct  7 10:26:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3d9b32d343572ecec74821d80560408261919626921ee42151b9ead6414bfabc-merged.mount: Deactivated successfully.
Oct  7 10:26:41 np0005473739 nova_compute[259550]: 2025-10-07 14:26:41.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:26:42 np0005473739 podman[362644]: 2025-10-07 14:26:42.002213332 +0000 UTC m=+0.582599418 container remove b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_meitner, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 10:26:42 np0005473739 systemd[1]: libpod-conmon-b440b574f73aa1e3139b26d2bce111edad32ba82716e15e55b067369725035d8.scope: Deactivated successfully.
Oct  7 10:26:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1914: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail; 949 KiB/s rd, 20 KiB/s wr, 60 op/s
Oct  7 10:26:42 np0005473739 podman[362687]: 2025-10-07 14:26:42.183755773 +0000 UTC m=+0.040130453 container create 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:26:42 np0005473739 systemd[1]: Started libpod-conmon-3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3.scope.
Oct  7 10:26:42 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:26:42 np0005473739 podman[362687]: 2025-10-07 14:26:42.166710488 +0000 UTC m=+0.023085178 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:26:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52c90ea10a86180285eae7a7418373fdbfa06978be849de2c2cce323317bbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52c90ea10a86180285eae7a7418373fdbfa06978be849de2c2cce323317bbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52c90ea10a86180285eae7a7418373fdbfa06978be849de2c2cce323317bbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7a52c90ea10a86180285eae7a7418373fdbfa06978be849de2c2cce323317bbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:42 np0005473739 podman[362687]: 2025-10-07 14:26:42.284775933 +0000 UTC m=+0.141150603 container init 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 10:26:42 np0005473739 podman[362687]: 2025-10-07 14:26:42.295640073 +0000 UTC m=+0.152014743 container start 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:26:42 np0005473739 podman[362687]: 2025-10-07 14:26:42.302331371 +0000 UTC m=+0.158706051 container attach 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]: {
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:    "0": [
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:        {
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "devices": [
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "/dev/loop3"
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            ],
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_name": "ceph_lv0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_size": "21470642176",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "name": "ceph_lv0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "tags": {
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.cluster_name": "ceph",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.crush_device_class": "",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.encrypted": "0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.osd_id": "0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.type": "block",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.vdo": "0"
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            },
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "type": "block",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "vg_name": "ceph_vg0"
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:        }
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:    ],
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:    "1": [
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:        {
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "devices": [
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "/dev/loop4"
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            ],
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_name": "ceph_lv1",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_size": "21470642176",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "name": "ceph_lv1",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "tags": {
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.cluster_name": "ceph",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.crush_device_class": "",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.encrypted": "0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.osd_id": "1",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.type": "block",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.vdo": "0"
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            },
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "type": "block",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "vg_name": "ceph_vg1"
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:        }
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:    ],
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:    "2": [
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:        {
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "devices": [
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "/dev/loop5"
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            ],
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_name": "ceph_lv2",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_size": "21470642176",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "name": "ceph_lv2",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "tags": {
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.cluster_name": "ceph",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.crush_device_class": "",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.encrypted": "0",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.osd_id": "2",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.type": "block",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:                "ceph.vdo": "0"
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            },
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "type": "block",
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:            "vg_name": "ceph_vg2"
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:        }
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]:    ]
Oct  7 10:26:43 np0005473739 wizardly_aryabhata[362704]: }
Oct  7 10:26:43 np0005473739 systemd[1]: libpod-3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3.scope: Deactivated successfully.
Oct  7 10:26:43 np0005473739 podman[362687]: 2025-10-07 14:26:43.081467941 +0000 UTC m=+0.937842641 container died 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  7 10:26:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7a52c90ea10a86180285eae7a7418373fdbfa06978be849de2c2cce323317bbe-merged.mount: Deactivated successfully.
Oct  7 10:26:43 np0005473739 podman[362687]: 2025-10-07 14:26:43.15704511 +0000 UTC m=+1.013419780 container remove 3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct  7 10:26:43 np0005473739 systemd[1]: libpod-conmon-3b2cd43ed4b69a143914394b69aab08e79c7543e150081945a439f8d8301dda3.scope: Deactivated successfully.
Oct  7 10:26:43 np0005473739 podman[362865]: 2025-10-07 14:26:43.794912504 +0000 UTC m=+0.040835992 container create 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:26:43 np0005473739 systemd[1]: Started libpod-conmon-15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c.scope.
Oct  7 10:26:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:26:43 np0005473739 podman[362865]: 2025-10-07 14:26:43.775602578 +0000 UTC m=+0.021525976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:26:43 np0005473739 podman[362865]: 2025-10-07 14:26:43.874323756 +0000 UTC m=+0.120247184 container init 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:26:43 np0005473739 podman[362865]: 2025-10-07 14:26:43.881759534 +0000 UTC m=+0.127682932 container start 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:26:43 np0005473739 podman[362865]: 2025-10-07 14:26:43.884973441 +0000 UTC m=+0.130896859 container attach 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:26:43 np0005473739 reverent_bohr[362881]: 167 167
Oct  7 10:26:43 np0005473739 systemd[1]: libpod-15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c.scope: Deactivated successfully.
Oct  7 10:26:43 np0005473739 podman[362865]: 2025-10-07 14:26:43.887274602 +0000 UTC m=+0.133198000 container died 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:26:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-bfd93a073c0bf4be28434900ff1202d3927e054842db60afdfa87e00a1e87ec5-merged.mount: Deactivated successfully.
Oct  7 10:26:43 np0005473739 podman[362865]: 2025-10-07 14:26:43.934484934 +0000 UTC m=+0.180408332 container remove 15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 10:26:43 np0005473739 systemd[1]: libpod-conmon-15ac06c06e52810121932ac08ba4f6230737765ebc35884c993321b612a9c01c.scope: Deactivated successfully.
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.006 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.008 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.027 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.028 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.028 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.028 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.028 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1915: 305 pgs: 305 active+clean; 121 MiB data, 731 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:26:44 np0005473739 podman[362906]: 2025-10-07 14:26:44.103385127 +0000 UTC m=+0.043506113 container create a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:26:44 np0005473739 systemd[1]: Started libpod-conmon-a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5.scope.
Oct  7 10:26:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:26:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fddb01774a8456d0ab6005e2aa7b3c72844e28ea044dfea54740a4564faebc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fddb01774a8456d0ab6005e2aa7b3c72844e28ea044dfea54740a4564faebc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fddb01774a8456d0ab6005e2aa7b3c72844e28ea044dfea54740a4564faebc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e9fddb01774a8456d0ab6005e2aa7b3c72844e28ea044dfea54740a4564faebc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:26:44 np0005473739 podman[362906]: 2025-10-07 14:26:44.085696524 +0000 UTC m=+0.025817530 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:26:44 np0005473739 podman[362906]: 2025-10-07 14:26:44.187169976 +0000 UTC m=+0.127290982 container init a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:26:44 np0005473739 podman[362906]: 2025-10-07 14:26:44.194474301 +0000 UTC m=+0.134595287 container start a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 10:26:44 np0005473739 podman[362906]: 2025-10-07 14:26:44.200133672 +0000 UTC m=+0.140254658 container attach a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default)
Oct  7 10:26:44 np0005473739 podman[362924]: 2025-10-07 14:26:44.233866863 +0000 UTC m=+0.086359569 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:26:44 np0005473739 podman[362919]: 2025-10-07 14:26:44.237640154 +0000 UTC m=+0.092967815 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:26:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:26:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/101934396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.518 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.603 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.603 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000066 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.748 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.749 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3626MB free_disk=59.94279098510742GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.749 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:26:44 np0005473739 nova_compute[259550]: 2025-10-07 14:26:44.749 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:26:45 np0005473739 nova_compute[259550]: 2025-10-07 14:26:45.042 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 91c66dff-47e6-4b52-9e3f-d8c58d256bcf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:26:45 np0005473739 nova_compute[259550]: 2025-10-07 14:26:45.043 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:26:45 np0005473739 nova_compute[259550]: 2025-10-07 14:26:45.043 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]: {
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "osd_id": 2,
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "type": "bluestore"
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:    },
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "osd_id": 1,
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "type": "bluestore"
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:    },
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "osd_id": 0,
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:        "type": "bluestore"
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]:    }
Oct  7 10:26:45 np0005473739 recursing_clarke[362925]: }
Oct  7 10:26:45 np0005473739 systemd[1]: libpod-a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5.scope: Deactivated successfully.
Oct  7 10:26:45 np0005473739 podman[363016]: 2025-10-07 14:26:45.232324702 +0000 UTC m=+0.031309447 container died a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True)
Oct  7 10:26:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e9fddb01774a8456d0ab6005e2aa7b3c72844e28ea044dfea54740a4564faebc-merged.mount: Deactivated successfully.
Oct  7 10:26:45 np0005473739 podman[363016]: 2025-10-07 14:26:45.29024645 +0000 UTC m=+0.089231205 container remove a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_clarke, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:26:45 np0005473739 systemd[1]: libpod-conmon-a39692a55a7c9ef8335cba21e82744f5a3f30634d0e5d9e3d85d0435311857c5.scope: Deactivated successfully.
Oct  7 10:26:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:26:45 np0005473739 nova_compute[259550]: 2025-10-07 14:26:45.362 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:26:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:26:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:26:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:26:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e4ffc561-563f-41ca-87f0-c225eaf46553 does not exist
Oct  7 10:26:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e3f30dd9-81ca-43d3-8100-fadebc10cf8e does not exist
Oct  7 10:26:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:26:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3626018372' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:26:45 np0005473739 nova_compute[259550]: 2025-10-07 14:26:45.870 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.509s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:26:45 np0005473739 nova_compute[259550]: 2025-10-07 14:26:45.878 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:26:45 np0005473739 nova_compute[259550]: 2025-10-07 14:26:45.896 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:26:45 np0005473739 nova_compute[259550]: 2025-10-07 14:26:45.917 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:26:45 np0005473739 nova_compute[259550]: 2025-10-07 14:26:45.918 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:26:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1916: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct  7 10:26:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:26:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:26:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:46 np0005473739 nova_compute[259550]: 2025-10-07 14:26:46.626 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847191.6208487, e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:26:46 np0005473739 nova_compute[259550]: 2025-10-07 14:26:46.626 2 INFO nova.compute.manager [-] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:26:46 np0005473739 nova_compute[259550]: 2025-10-07 14:26:46.647 2 DEBUG nova.compute.manager [None req-5d5a820c-14f1-4477-9549-bb4895bf55b5 - - - - - -] [instance: e11bfabe-4a8b-48d3-98b6-cbc38e3fce7d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:26:46 np0005473739 nova_compute[259550]: 2025-10-07 14:26:46.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:46 np0005473739 nova_compute[259550]: 2025-10-07 14:26:46.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1917: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct  7 10:26:48 np0005473739 nova_compute[259550]: 2025-10-07 14:26:48.893 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:26:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1918: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.2 KiB/s wr, 0 op/s
Oct  7 10:26:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:51 np0005473739 nova_compute[259550]: 2025-10-07 14:26:51.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:51 np0005473739 nova_compute[259550]: 2025-10-07 14:26:51.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:51 np0005473739 nova_compute[259550]: 2025-10-07 14:26:51.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:26:51 np0005473739 nova_compute[259550]: 2025-10-07 14:26:51.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:26:51 np0005473739 nova_compute[259550]: 2025-10-07 14:26:51.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:26:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1919: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.1 KiB/s wr, 0 op/s
Oct  7 10:26:52 np0005473739 nova_compute[259550]: 2025-10-07 14:26:52.435 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:26:52 np0005473739 nova_compute[259550]: 2025-10-07 14:26:52.436 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:26:52 np0005473739 nova_compute[259550]: 2025-10-07 14:26:52.436 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:26:52 np0005473739 nova_compute[259550]: 2025-10-07 14:26:52.436 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:26:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:26:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:26:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:26:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:26:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:26:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:26:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1920: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  7 10:26:54 np0005473739 nova_compute[259550]: 2025-10-07 14:26:54.146 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:26:54 np0005473739 nova_compute[259550]: 2025-10-07 14:26:54.209 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:26:54 np0005473739 nova_compute[259550]: 2025-10-07 14:26:54.210 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:26:54 np0005473739 nova_compute[259550]: 2025-10-07 14:26:54.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:26:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1921: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  7 10:26:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:26:56 np0005473739 nova_compute[259550]: 2025-10-07 14:26:56.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:56 np0005473739 nova_compute[259550]: 2025-10-07 14:26:56.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:26:57 np0005473739 podman[363103]: 2025-10-07 14:26:57.073810233 +0000 UTC m=+0.061826993 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:26:57 np0005473739 podman[363104]: 2025-10-07 14:26:57.104914305 +0000 UTC m=+0.091513957 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  7 10:26:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1922: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:27:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:00.058 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:00.059 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:00.060 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1923: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 682 B/s wr, 0 op/s
Oct  7 10:27:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:01 np0005473739 nova_compute[259550]: 2025-10-07 14:27:01.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:01 np0005473739 nova_compute[259550]: 2025-10-07 14:27:01.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1924: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct  7 10:27:03 np0005473739 nova_compute[259550]: 2025-10-07 14:27:03.865 2 INFO nova.compute.manager [None req-0f1d7120-f3fe-4b1d-9bbb-02473144f376 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Pausing#033[00m
Oct  7 10:27:03 np0005473739 nova_compute[259550]: 2025-10-07 14:27:03.865 2 DEBUG nova.objects.instance [None req-0f1d7120-f3fe-4b1d-9bbb-02473144f376 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'flavor' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:03 np0005473739 nova_compute[259550]: 2025-10-07 14:27:03.894 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847223.8941445, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:27:03 np0005473739 nova_compute[259550]: 2025-10-07 14:27:03.895 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:27:03 np0005473739 nova_compute[259550]: 2025-10-07 14:27:03.897 2 DEBUG nova.compute.manager [None req-0f1d7120-f3fe-4b1d-9bbb-02473144f376 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:27:03 np0005473739 nova_compute[259550]: 2025-10-07 14:27:03.919 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:27:03 np0005473739 nova_compute[259550]: 2025-10-07 14:27:03.922 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:27:03 np0005473739 nova_compute[259550]: 2025-10-07 14:27:03.956 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (pausing). Skip.#033[00m
Oct  7 10:27:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1925: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct  7 10:27:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1926: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct  7 10:27:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:06 np0005473739 nova_compute[259550]: 2025-10-07 14:27:06.601 2 INFO nova.compute.manager [None req-9a97f8af-4a84-4195-b57e-74d83c7e05be 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Unpausing#033[00m
Oct  7 10:27:06 np0005473739 nova_compute[259550]: 2025-10-07 14:27:06.602 2 DEBUG nova.objects.instance [None req-9a97f8af-4a84-4195-b57e-74d83c7e05be 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'flavor' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:06 np0005473739 nova_compute[259550]: 2025-10-07 14:27:06.626 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847226.626048, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:27:06 np0005473739 nova_compute[259550]: 2025-10-07 14:27:06.626 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:27:06 np0005473739 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct  7 10:27:06 np0005473739 nova_compute[259550]: 2025-10-07 14:27:06.630 2 DEBUG nova.virt.libvirt.guest [None req-9a97f8af-4a84-4195-b57e-74d83c7e05be 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  7 10:27:06 np0005473739 nova_compute[259550]: 2025-10-07 14:27:06.631 2 DEBUG nova.compute.manager [None req-9a97f8af-4a84-4195-b57e-74d83c7e05be 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:27:06 np0005473739 nova_compute[259550]: 2025-10-07 14:27:06.654 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:27:06 np0005473739 nova_compute[259550]: 2025-10-07 14:27:06.658 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:27:06 np0005473739 nova_compute[259550]: 2025-10-07 14:27:06.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:06 np0005473739 nova_compute[259550]: 2025-10-07 14:27:06.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1927: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct  7 10:27:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1928: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 2.3 KiB/s wr, 0 op/s
Oct  7 10:27:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:11 np0005473739 nova_compute[259550]: 2025-10-07 14:27:11.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:11 np0005473739 nova_compute[259550]: 2025-10-07 14:27:11.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1929: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s wr, 0 op/s
Oct  7 10:27:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1930: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:27:15 np0005473739 podman[363149]: 2025-10-07 14:27:15.076972297 +0000 UTC m=+0.061294018 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:27:15 np0005473739 podman[363150]: 2025-10-07 14:27:15.088953968 +0000 UTC m=+0.062363227 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:27:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1931: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:27:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:16 np0005473739 nova_compute[259550]: 2025-10-07 14:27:16.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:16 np0005473739 nova_compute[259550]: 2025-10-07 14:27:16.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1932: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:27:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:19.340 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=27, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=26) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:27:19 np0005473739 nova_compute[259550]: 2025-10-07 14:27:19.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:19.341 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:27:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1933: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct  7 10:27:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:21 np0005473739 nova_compute[259550]: 2025-10-07 14:27:21.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:21 np0005473739 nova_compute[259550]: 2025-10-07 14:27:21.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1934: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 8.1 KiB/s wr, 1 op/s
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:27:22
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'default.rgw.meta', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'backups', 'cephfs.cephfs.meta', 'images', 'volumes', 'default.rgw.control', 'default.rgw.log']
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:27:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:27:23 np0005473739 nova_compute[259550]: 2025-10-07 14:27:23.108 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:23 np0005473739 nova_compute[259550]: 2025-10-07 14:27:23.108 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:23 np0005473739 nova_compute[259550]: 2025-10-07 14:27:23.109 2 INFO nova.compute.manager [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Shelving#033[00m
Oct  7 10:27:23 np0005473739 nova_compute[259550]: 2025-10-07 14:27:23.147 2 DEBUG nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:27:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1935: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 11 KiB/s wr, 2 op/s
Oct  7 10:27:25 np0005473739 kernel: tap8d87c94f-f4 (unregistering): left promiscuous mode
Oct  7 10:27:25 np0005473739 NetworkManager[44949]: <info>  [1759847245.3728] device (tap8d87c94f-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:27:25 np0005473739 nova_compute[259550]: 2025-10-07 14:27:25.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:27:25Z|01032|binding|INFO|Releasing lport 8d87c94f-f436-4619-9ca7-9e116cab44bf from this chassis (sb_readonly=0)
Oct  7 10:27:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:27:25Z|01033|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf down in Southbound
Oct  7 10:27:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:27:25Z|01034|binding|INFO|Removing iface tap8d87c94f-f4 ovn-installed in OVS
Oct  7 10:27:25 np0005473739 nova_compute[259550]: 2025-10-07 14:27:25.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:25 np0005473739 nova_compute[259550]: 2025-10-07 14:27:25.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:25 np0005473739 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct  7 10:27:25 np0005473739 systemd[1]: machine-qemu\x2d127\x2dinstance\x2d00000066.scope: Consumed 16.697s CPU time.
Oct  7 10:27:25 np0005473739 systemd-machined[214580]: Machine qemu-127-instance-00000066 terminated.
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.475 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.476 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a unbound from our chassis#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.477 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df30eae3-80f8-4787-8c66-2dfab55da65a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.478 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[beb694b5-0eab-4e4a-9dbc-52fc8964852d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.480 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace which is not needed anymore#033[00m
Oct  7 10:27:25 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [NOTICE]   (361561) : haproxy version is 2.8.14-c23fe91
Oct  7 10:27:25 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [NOTICE]   (361561) : path to executable is /usr/sbin/haproxy
Oct  7 10:27:25 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [WARNING]  (361561) : Exiting Master process...
Oct  7 10:27:25 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [WARNING]  (361561) : Exiting Master process...
Oct  7 10:27:25 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [ALERT]    (361561) : Current worker (361563) exited with code 143 (Terminated)
Oct  7 10:27:25 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[361557]: [WARNING]  (361561) : All workers exited. Exiting... (0)
Oct  7 10:27:25 np0005473739 systemd[1]: libpod-d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a.scope: Deactivated successfully.
Oct  7 10:27:25 np0005473739 podman[363214]: 2025-10-07 14:27:25.634545583 +0000 UTC m=+0.061241518 container died d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  7 10:27:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d441a2678501fe529d289bed0f151b8829262887c81920cdf574184ba61c8960-merged.mount: Deactivated successfully.
Oct  7 10:27:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a-userdata-shm.mount: Deactivated successfully.
Oct  7 10:27:25 np0005473739 podman[363214]: 2025-10-07 14:27:25.70029712 +0000 UTC m=+0.126993055 container cleanup d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:27:25 np0005473739 systemd[1]: libpod-conmon-d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a.scope: Deactivated successfully.
Oct  7 10:27:25 np0005473739 podman[363252]: 2025-10-07 14:27:25.7657964 +0000 UTC m=+0.043383420 container remove d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.772 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf05dfa-67c3-4baf-b19f-d004ee958a45]: (4, ('Tue Oct  7 02:27:25 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a)\nd52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a\nTue Oct  7 02:27:25 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (d52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a)\nd52e850c2b7b7cbee10458af9303e359840768dde915c66336bf45e922296e6a\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.776 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f9670f37-8321-4871-98e2-bd6887a3efb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.778 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:27:25 np0005473739 nova_compute[259550]: 2025-10-07 14:27:25.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:25 np0005473739 kernel: tapdf30eae3-80: left promiscuous mode
Oct  7 10:27:25 np0005473739 nova_compute[259550]: 2025-10-07 14:27:25.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.805 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b004946-50be-42a2-9929-d1aaf9d99c27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.835 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1945d17c-339a-4e1c-9369-350addd6dfff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.838 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[546afd28-963f-42c7-899c-fc9475836636]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.858 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd21a908-8359-429c-801d-607743a2d9b3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 775473, 'reachable_time': 26548, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 363269, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.861 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:27:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:25.861 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[4aaa5634-5603-4b23-bbab-477b805fe417]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:25 np0005473739 systemd[1]: run-netns-ovnmeta\x2ddf30eae3\x2d80f8\x2d4787\x2d8c66\x2d2dfab55da65a.mount: Deactivated successfully.
Oct  7 10:27:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1936: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 16 KiB/s wr, 2 op/s
Oct  7 10:27:26 np0005473739 nova_compute[259550]: 2025-10-07 14:27:26.168 2 INFO nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance shutdown successfully after 3 seconds.#033[00m
Oct  7 10:27:26 np0005473739 nova_compute[259550]: 2025-10-07 14:27:26.176 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance destroyed successfully.#033[00m
Oct  7 10:27:26 np0005473739 nova_compute[259550]: 2025-10-07 14:27:26.177 2 DEBUG nova.objects.instance [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'numa_topology' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:26 np0005473739 nova_compute[259550]: 2025-10-07 14:27:26.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:26 np0005473739 nova_compute[259550]: 2025-10-07 14:27:26.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:28 np0005473739 podman[363270]: 2025-10-07 14:27:28.073968785 +0000 UTC m=+0.057894218 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  7 10:27:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1937: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 16 KiB/s wr, 2 op/s
Oct  7 10:27:28 np0005473739 podman[363271]: 2025-10-07 14:27:28.118759282 +0000 UTC m=+0.094975089 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:27:28 np0005473739 nova_compute[259550]: 2025-10-07 14:27:28.197 2 INFO nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Beginning cold snapshot process#033[00m
Oct  7 10:27:28 np0005473739 nova_compute[259550]: 2025-10-07 14:27:28.739 2 DEBUG nova.virt.libvirt.imagebackend [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:27:29 np0005473739 nova_compute[259550]: 2025-10-07 14:27:29.135 2 DEBUG nova.storage.rbd_utils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] creating snapshot(6f362715705247eab083d2a7fd304410) on rbd image(91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:27:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e264 do_prune osdmap full prune enabled
Oct  7 10:27:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e265 e265: 3 total, 3 up, 3 in
Oct  7 10:27:29 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e265: 3 total, 3 up, 3 in
Oct  7 10:27:29 np0005473739 nova_compute[259550]: 2025-10-07 14:27:29.278 2 DEBUG nova.storage.rbd_utils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] cloning vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk@6f362715705247eab083d2a7fd304410 to images/6cbf90eb-7382-4407-9e6c-a36d06a02db1 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:27:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:29.343 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '27'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:27:29 np0005473739 nova_compute[259550]: 2025-10-07 14:27:29.399 2 DEBUG nova.storage.rbd_utils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] flattening images/6cbf90eb-7382-4407-9e6c-a36d06a02db1 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:27:29 np0005473739 nova_compute[259550]: 2025-10-07 14:27:29.909 2 DEBUG nova.storage.rbd_utils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] removing snapshot(6f362715705247eab083d2a7fd304410) on rbd image(91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:27:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1939: 305 pgs: 305 active+clean; 121 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 1.4 KiB/s rd, 10 KiB/s wr, 2 op/s
Oct  7 10:27:30 np0005473739 nova_compute[259550]: 2025-10-07 14:27:30.158 2 DEBUG nova.compute.manager [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:27:30 np0005473739 nova_compute[259550]: 2025-10-07 14:27:30.158 2 DEBUG oslo_concurrency.lockutils [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:30 np0005473739 nova_compute[259550]: 2025-10-07 14:27:30.158 2 DEBUG oslo_concurrency.lockutils [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:30 np0005473739 nova_compute[259550]: 2025-10-07 14:27:30.159 2 DEBUG oslo_concurrency.lockutils [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:30 np0005473739 nova_compute[259550]: 2025-10-07 14:27:30.159 2 DEBUG nova.compute.manager [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:27:30 np0005473739 nova_compute[259550]: 2025-10-07 14:27:30.159 2 WARNING nova.compute.manager [req-6aa132e1-b00a-4d18-9df6-b62e1cacef5d req-a1a89546-c870-4cbc-87e9-70c794e2f5a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  7 10:27:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e265 do_prune osdmap full prune enabled
Oct  7 10:27:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e266 e266: 3 total, 3 up, 3 in
Oct  7 10:27:30 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e266: 3 total, 3 up, 3 in
Oct  7 10:27:30 np0005473739 nova_compute[259550]: 2025-10-07 14:27:30.257 2 DEBUG nova.storage.rbd_utils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] creating snapshot(snap) on rbd image(6cbf90eb-7382-4407-9e6c-a36d06a02db1) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:27:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e266 do_prune osdmap full prune enabled
Oct  7 10:27:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e267 e267: 3 total, 3 up, 3 in
Oct  7 10:27:31 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e267: 3 total, 3 up, 3 in
Oct  7 10:27:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:31 np0005473739 nova_compute[259550]: 2025-10-07 14:27:31.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:31 np0005473739 nova_compute[259550]: 2025-10-07 14:27:31.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1942: 305 pgs: 305 active+clean; 128 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 741 KiB/s wr, 6 op/s
Oct  7 10:27:32 np0005473739 nova_compute[259550]: 2025-10-07 14:27:32.451 2 DEBUG nova.compute.manager [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:27:32 np0005473739 nova_compute[259550]: 2025-10-07 14:27:32.452 2 DEBUG oslo_concurrency.lockutils [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:32 np0005473739 nova_compute[259550]: 2025-10-07 14:27:32.452 2 DEBUG oslo_concurrency.lockutils [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:32 np0005473739 nova_compute[259550]: 2025-10-07 14:27:32.452 2 DEBUG oslo_concurrency.lockutils [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:32 np0005473739 nova_compute[259550]: 2025-10-07 14:27:32.452 2 DEBUG nova.compute.manager [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:27:32 np0005473739 nova_compute[259550]: 2025-10-07 14:27:32.453 2 WARNING nova.compute.manager [req-752f3321-5696-422d-a56b-4ad9bf9d3107 req-b3aa27db-cfae-4bb2-b6a6-f3af251c6f1d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007594638235006837 of space, bias 1.0, pg target 0.2278391470502051 quantized to 32 (current 32)
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0007368255315165723 of space, bias 1.0, pg target 0.22104765945497168 quantized to 32 (current 32)
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:27:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:27:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:27:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328128088' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:27:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:27:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/328128088' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:27:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:33.101 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:27:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:33.103 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated#033[00m
Oct  7 10:27:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:33.104 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:27:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:33.105 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c5ae6f39-e3a6-4eaa-b88e-312bd7f89514]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:33 np0005473739 nova_compute[259550]: 2025-10-07 14:27:33.488 2 INFO nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Snapshot image upload complete#033[00m
Oct  7 10:27:33 np0005473739 nova_compute[259550]: 2025-10-07 14:27:33.489 2 DEBUG nova.compute.manager [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:27:33 np0005473739 nova_compute[259550]: 2025-10-07 14:27:33.682 2 INFO nova.compute.manager [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Shelve offloading#033[00m
Oct  7 10:27:33 np0005473739 nova_compute[259550]: 2025-10-07 14:27:33.689 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance destroyed successfully.#033[00m
Oct  7 10:27:33 np0005473739 nova_compute[259550]: 2025-10-07 14:27:33.690 2 DEBUG nova.compute.manager [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:27:33 np0005473739 nova_compute[259550]: 2025-10-07 14:27:33.692 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:27:33 np0005473739 nova_compute[259550]: 2025-10-07 14:27:33.693 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:27:33 np0005473739 nova_compute[259550]: 2025-10-07 14:27:33.693 2 DEBUG nova.network.neutron [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:27:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1943: 305 pgs: 305 active+clean; 200 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 7.8 MiB/s rd, 7.8 MiB/s wr, 172 op/s
Oct  7 10:27:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1944: 305 pgs: 305 active+clean; 200 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 6.8 MiB/s rd, 6.8 MiB/s wr, 150 op/s
Oct  7 10:27:36 np0005473739 nova_compute[259550]: 2025-10-07 14:27:36.208 2 DEBUG nova.network.neutron [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:27:36 np0005473739 nova_compute[259550]: 2025-10-07 14:27:36.279 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:27:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e267 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e267 do_prune osdmap full prune enabled
Oct  7 10:27:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 e268: 3 total, 3 up, 3 in
Oct  7 10:27:36 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e268: 3 total, 3 up, 3 in
Oct  7 10:27:36 np0005473739 nova_compute[259550]: 2025-10-07 14:27:36.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:36 np0005473739 nova_compute[259550]: 2025-10-07 14:27:36.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.662 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance destroyed successfully.#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.662 2 DEBUG nova.objects.instance [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'resources' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.682 2 DEBUG nova.virt.libvirt.vif [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:26:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member',shelved_at='2025-10-07T14:27:33.488961',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6cbf90eb-7382-4407-9e6c-a36d06a02db1'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:27:28Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.682 2 DEBUG nova.network.os_vif_util [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.683 2 DEBUG nova.network.os_vif_util [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.683 2 DEBUG os_vif [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.685 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d87c94f-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.741 2 INFO os_vif [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4')#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.776 2 DEBUG nova.compute.manager [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.777 2 DEBUG nova.compute.manager [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing instance network info cache due to event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.777 2 DEBUG oslo_concurrency.lockutils [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.778 2 DEBUG oslo_concurrency.lockutils [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:27:37 np0005473739 nova_compute[259550]: 2025-10-07 14:27:37.778 2 DEBUG nova.network.neutron [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:27:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1946: 305 pgs: 305 active+clean; 200 MiB data, 795 MiB used, 59 GiB / 60 GiB avail; 6.0 MiB/s rd, 5.9 MiB/s wr, 131 op/s
Oct  7 10:27:38 np0005473739 nova_compute[259550]: 2025-10-07 14:27:38.225 2 INFO nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deleting instance files /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_del#033[00m
Oct  7 10:27:38 np0005473739 nova_compute[259550]: 2025-10-07 14:27:38.226 2 INFO nova.virt.libvirt.driver [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deletion of /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_del complete#033[00m
Oct  7 10:27:38 np0005473739 nova_compute[259550]: 2025-10-07 14:27:38.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:27:38 np0005473739 nova_compute[259550]: 2025-10-07 14:27:38.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:27:39 np0005473739 nova_compute[259550]: 2025-10-07 14:27:39.202 2 INFO nova.scheduler.client.report [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Deleted allocations for instance 91c66dff-47e6-4b52-9e3f-d8c58d256bcf#033[00m
Oct  7 10:27:39 np0005473739 nova_compute[259550]: 2025-10-07 14:27:39.346 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:39 np0005473739 nova_compute[259550]: 2025-10-07 14:27:39.347 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:39 np0005473739 nova_compute[259550]: 2025-10-07 14:27:39.380 2 DEBUG oslo_concurrency.processutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:27:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:27:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3174143164' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:27:39 np0005473739 nova_compute[259550]: 2025-10-07 14:27:39.838 2 DEBUG oslo_concurrency.processutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:27:39 np0005473739 nova_compute[259550]: 2025-10-07 14:27:39.844 2 DEBUG nova.compute.provider_tree [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:27:39 np0005473739 nova_compute[259550]: 2025-10-07 14:27:39.873 2 DEBUG nova.scheduler.client.report [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:27:39 np0005473739 nova_compute[259550]: 2025-10-07 14:27:39.919 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:39 np0005473739 nova_compute[259550]: 2025-10-07 14:27:39.967 2 DEBUG oslo_concurrency.lockutils [None req-4d922ca2-99ae-4404-83dc-7cd901a34016 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 16.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:39 np0005473739 nova_compute[259550]: 2025-10-07 14:27:39.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:27:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1947: 305 pgs: 305 active+clean; 165 MiB data, 777 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.8 MiB/s wr, 135 op/s
Oct  7 10:27:40 np0005473739 nova_compute[259550]: 2025-10-07 14:27:40.623 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847245.6214035, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:27:40 np0005473739 nova_compute[259550]: 2025-10-07 14:27:40.624 2 INFO nova.compute.manager [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:27:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:40.648 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:27:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:40.651 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated#033[00m
Oct  7 10:27:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:40.653 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:27:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:40.654 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[015cd650-55a2-4489-8892-8eee578c585b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:40 np0005473739 nova_compute[259550]: 2025-10-07 14:27:40.658 2 DEBUG nova.compute.manager [None req-f5b28fef-ffb2-4c06-8edd-0afbe914b1a2 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:27:40 np0005473739 nova_compute[259550]: 2025-10-07 14:27:40.944 2 DEBUG nova.network.neutron [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updated VIF entry in instance network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:27:40 np0005473739 nova_compute[259550]: 2025-10-07 14:27:40.944 2 DEBUG nova.network.neutron [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": null, "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:27:41 np0005473739 nova_compute[259550]: 2025-10-07 14:27:41.003 2 DEBUG oslo_concurrency.lockutils [req-9c56341a-516b-478d-84ea-0f66787c0498 req-57ba4043-fe61-40bc-8d87-d2b0ecaac3ab 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:27:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:41 np0005473739 nova_compute[259550]: 2025-10-07 14:27:41.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:41 np0005473739 nova_compute[259550]: 2025-10-07 14:27:41.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:27:41 np0005473739 nova_compute[259550]: 2025-10-07 14:27:41.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:27:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1948: 305 pgs: 305 active+clean; 120 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 4.2 MiB/s wr, 132 op/s
Oct  7 10:27:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:42.626 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:27:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:42.627 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated#033[00m
Oct  7 10:27:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:42.628 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:27:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:42.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[508a3994-18f8-42f0-8b71-aaf44b9e9442]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:42 np0005473739 nova_compute[259550]: 2025-10-07 14:27:42.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1949: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct  7 10:27:44 np0005473739 nova_compute[259550]: 2025-10-07 14:27:44.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:27:45 np0005473739 nova_compute[259550]: 2025-10-07 14:27:45.558 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:45 np0005473739 nova_compute[259550]: 2025-10-07 14:27:45.559 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:45 np0005473739 nova_compute[259550]: 2025-10-07 14:27:45.559 2 INFO nova.compute.manager [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Unshelving#033[00m
Oct  7 10:27:45 np0005473739 podman[363523]: 2025-10-07 14:27:45.752459341 +0000 UTC m=+0.058572667 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:27:45 np0005473739 nova_compute[259550]: 2025-10-07 14:27:45.754 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:45 np0005473739 nova_compute[259550]: 2025-10-07 14:27:45.754 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:45 np0005473739 nova_compute[259550]: 2025-10-07 14:27:45.759 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_requests' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:45 np0005473739 nova_compute[259550]: 2025-10-07 14:27:45.780 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'numa_topology' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:45 np0005473739 podman[363522]: 2025-10-07 14:27:45.784584849 +0000 UTC m=+0.094740352 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  7 10:27:45 np0005473739 nova_compute[259550]: 2025-10-07 14:27:45.805 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:27:45 np0005473739 nova_compute[259550]: 2025-10-07 14:27:45.805 2 INFO nova.compute.claims [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:27:45 np0005473739 nova_compute[259550]: 2025-10-07 14:27:45.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.008 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.030 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:27:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1950: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.4 KiB/s wr, 32 op/s
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/221435772' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.485 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:27:46 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1198d2d6-47d9-4fe9-8c81-aec4d2528e17 does not exist
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.493 2 DEBUG nova.compute.provider_tree [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:27:46 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a4ee4754-59ee-4726-9732-d01bd7357802 does not exist
Oct  7 10:27:46 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f3b3bca1-cdfe-4399-bddd-215804c3c8d9 does not exist
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:27:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.572 2 DEBUG nova.scheduler.client.report [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.599 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.844s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.604 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.595s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.604 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.604 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.604 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:46 np0005473739 nova_compute[259550]: 2025-10-07 14:27:46.866 2 INFO nova.network.neutron [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating port 8d87c94f-f436-4619-9ca7-9e116cab44bf with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}#033[00m
Oct  7 10:27:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:27:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1178763856' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.038 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:27:47 np0005473739 podman[363850]: 2025-10-07 14:27:47.087450923 +0000 UTC m=+0.046454142 container create 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:27:47 np0005473739 systemd[1]: Started libpod-conmon-6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6.scope.
Oct  7 10:27:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:27:47 np0005473739 podman[363850]: 2025-10-07 14:27:47.064245682 +0000 UTC m=+0.023248951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:27:47 np0005473739 podman[363850]: 2025-10-07 14:27:47.175694971 +0000 UTC m=+0.134698210 container init 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 10:27:47 np0005473739 podman[363850]: 2025-10-07 14:27:47.182569114 +0000 UTC m=+0.141572333 container start 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:27:47 np0005473739 podman[363850]: 2025-10-07 14:27:47.187622949 +0000 UTC m=+0.146626168 container attach 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:27:47 np0005473739 ecstatic_mccarthy[363867]: 167 167
Oct  7 10:27:47 np0005473739 systemd[1]: libpod-6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6.scope: Deactivated successfully.
Oct  7 10:27:47 np0005473739 conmon[363867]: conmon 6f891371c0443674ecb3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6.scope/container/memory.events
Oct  7 10:27:47 np0005473739 podman[363850]: 2025-10-07 14:27:47.189027156 +0000 UTC m=+0.148030375 container died 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:27:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:27:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:27:47 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:27:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ce13e1c7b78c35ff27185603dc2c723ce7e95ec7b180a220dd410bf2774034a6-merged.mount: Deactivated successfully.
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.233 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.236 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3857MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.236 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.236 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:47 np0005473739 podman[363850]: 2025-10-07 14:27:47.248380623 +0000 UTC m=+0.207383852 container remove 6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 10:27:47 np0005473739 systemd[1]: libpod-conmon-6f891371c0443674ecb3f9f366fcbc0526089dc743fd83438e86bcfbedeeddc6.scope: Deactivated successfully.
Oct  7 10:27:47 np0005473739 podman[363889]: 2025-10-07 14:27:47.396961733 +0000 UTC m=+0.043692998 container create d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:27:47 np0005473739 systemd[1]: Started libpod-conmon-d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937.scope.
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.449 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 91c66dff-47e6-4b52-9e3f-d8c58d256bcf actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.449 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.449 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:27:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:27:47 np0005473739 podman[363889]: 2025-10-07 14:27:47.376538427 +0000 UTC m=+0.023269702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:27:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:47 np0005473739 podman[363889]: 2025-10-07 14:27:47.497289473 +0000 UTC m=+0.144020748 container init d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:27:47 np0005473739 podman[363889]: 2025-10-07 14:27:47.503189101 +0000 UTC m=+0.149920346 container start d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.511 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:27:47 np0005473739 podman[363889]: 2025-10-07 14:27:47.51250465 +0000 UTC m=+0.159235895 container attach d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:27:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:47.716 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '7', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:27:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:47.717 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated#033[00m
Oct  7 10:27:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:47.718 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:27:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:47.719 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[18d7d44a-a445-4dba-8146-91cd1b84084c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.737 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.738 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.738 2 DEBUG nova.network.neutron [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.740 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.847 2 DEBUG nova.compute.manager [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.848 2 DEBUG nova.compute.manager [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing instance network info cache due to event network-changed-8d87c94f-f436-4619-9ca7-9e116cab44bf. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.848 2 DEBUG oslo_concurrency.lockutils [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:27:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:27:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4108190869' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.954 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.959 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.974 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.997 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:27:47 np0005473739 nova_compute[259550]: 2025-10-07 14:27:47.998 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.761s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1951: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  7 10:27:48 np0005473739 sharp_leavitt[363905]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:27:48 np0005473739 sharp_leavitt[363905]: --> relative data size: 1.0
Oct  7 10:27:48 np0005473739 sharp_leavitt[363905]: --> All data devices are unavailable
Oct  7 10:27:48 np0005473739 systemd[1]: libpod-d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937.scope: Deactivated successfully.
Oct  7 10:27:48 np0005473739 podman[363889]: 2025-10-07 14:27:48.606312007 +0000 UTC m=+1.253043252 container died d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:27:48 np0005473739 systemd[1]: libpod-d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937.scope: Consumed 1.044s CPU time.
Oct  7 10:27:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3d9ecf2856dd8d19baaf98282e72f19dbeecbff1f49511d5f44b4ad853c03853-merged.mount: Deactivated successfully.
Oct  7 10:27:48 np0005473739 podman[363889]: 2025-10-07 14:27:48.802752576 +0000 UTC m=+1.449483861 container remove d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 10:27:48 np0005473739 systemd[1]: libpod-conmon-d90d45e7ab4fb3f1388c6d7ba89436101f8b83d8d33b5a0c6850a4ef906a4937.scope: Deactivated successfully.
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.442 2 DEBUG nova.network.neutron [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.469 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.471 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.472 2 INFO nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating image(s)#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.494 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.497 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.498 2 DEBUG oslo_concurrency.lockutils [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.499 2 DEBUG nova.network.neutron [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Refreshing network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.536 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:27:49 np0005473739 podman[364111]: 2025-10-07 14:27:49.456116444 +0000 UTC m=+0.022102331 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.561 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.564 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "e39203b7fd9b2dbef1d5d036a58f08adbcb6966b" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.565 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e39203b7fd9b2dbef1d5d036a58f08adbcb6966b" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:49 np0005473739 podman[364111]: 2025-10-07 14:27:49.625645205 +0000 UTC m=+0.191631112 container create ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 10:27:49 np0005473739 systemd[1]: Started libpod-conmon-ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34.scope.
Oct  7 10:27:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:49.774 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '10', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:27:49 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:27:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:49.776 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated#033[00m
Oct  7 10:27:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:49.778 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:27:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:49.780 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9b7de782-ace8-4be2-a023-cdc32abdf3b5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.796 2 DEBUG nova.virt.libvirt.imagebackend [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/6cbf90eb-7382-4407-9e6c-a36d06a02db1/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/6cbf90eb-7382-4407-9e6c-a36d06a02db1/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  7 10:27:49 np0005473739 podman[364111]: 2025-10-07 14:27:49.857895199 +0000 UTC m=+0.423881166 container init ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:27:49 np0005473739 podman[364111]: 2025-10-07 14:27:49.867012113 +0000 UTC m=+0.432998000 container start ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.871 2 DEBUG nova.virt.libvirt.imagebackend [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/6cbf90eb-7382-4407-9e6c-a36d06a02db1/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  7 10:27:49 np0005473739 nova_compute[259550]: 2025-10-07 14:27:49.872 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] cloning images/6cbf90eb-7382-4407-9e6c-a36d06a02db1@snap to None/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:27:49 np0005473739 beautiful_easley[364181]: 167 167
Oct  7 10:27:49 np0005473739 systemd[1]: libpod-ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34.scope: Deactivated successfully.
Oct  7 10:27:49 np0005473739 conmon[364181]: conmon ef1716c3c971fef1d9a2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34.scope/container/memory.events
Oct  7 10:27:49 np0005473739 podman[364111]: 2025-10-07 14:27:49.948024758 +0000 UTC m=+0.514010645 container attach ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:27:49 np0005473739 podman[364111]: 2025-10-07 14:27:49.94847742 +0000 UTC m=+0.514463317 container died ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:27:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1952: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:27:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f25d5884bc60bbf9983225a2c369d3113bda353594ebfe19721e7165153986f6-merged.mount: Deactivated successfully.
Oct  7 10:27:50 np0005473739 podman[364111]: 2025-10-07 14:27:50.387709147 +0000 UTC m=+0.953695034 container remove ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:27:50 np0005473739 nova_compute[259550]: 2025-10-07 14:27:50.426 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "e39203b7fd9b2dbef1d5d036a58f08adbcb6966b" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.862s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:50 np0005473739 systemd[1]: libpod-conmon-ef1716c3c971fef1d9a29f36bcbe782ddec232d4a1e8fb66035f2faa82b95e34.scope: Deactivated successfully.
Oct  7 10:27:50 np0005473739 nova_compute[259550]: 2025-10-07 14:27:50.569 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'migration_context' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:50 np0005473739 podman[364304]: 2025-10-07 14:27:50.572406422 +0000 UTC m=+0.065385789 container create f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:27:50 np0005473739 nova_compute[259550]: 2025-10-07 14:27:50.625 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] flattening vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:27:50 np0005473739 podman[364304]: 2025-10-07 14:27:50.532277839 +0000 UTC m=+0.025257226 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:27:50 np0005473739 systemd[1]: Started libpod-conmon-f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783.scope.
Oct  7 10:27:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:27:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3234d7827d1247b757a4f4c48ae3c620c5161a10bbd4f6d35dcc51ff5e73545/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3234d7827d1247b757a4f4c48ae3c620c5161a10bbd4f6d35dcc51ff5e73545/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3234d7827d1247b757a4f4c48ae3c620c5161a10bbd4f6d35dcc51ff5e73545/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f3234d7827d1247b757a4f4c48ae3c620c5161a10bbd4f6d35dcc51ff5e73545/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:50 np0005473739 podman[364304]: 2025-10-07 14:27:50.710686997 +0000 UTC m=+0.203666384 container init f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:27:50 np0005473739 podman[364304]: 2025-10-07 14:27:50.720071927 +0000 UTC m=+0.213051284 container start f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:27:50 np0005473739 podman[364304]: 2025-10-07 14:27:50.765878491 +0000 UTC m=+0.258857898 container attach f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:27:50 np0005473739 nova_compute[259550]: 2025-10-07 14:27:50.997 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:27:51 np0005473739 nova_compute[259550]: 2025-10-07 14:27:51.020 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:27:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:51 np0005473739 modest_yonath[364374]: {
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:    "0": [
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:        {
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "devices": [
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "/dev/loop3"
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            ],
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_name": "ceph_lv0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_size": "21470642176",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "name": "ceph_lv0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "tags": {
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.cluster_name": "ceph",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.crush_device_class": "",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.encrypted": "0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.osd_id": "0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.type": "block",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.vdo": "0"
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            },
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "type": "block",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "vg_name": "ceph_vg0"
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:        }
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:    ],
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:    "1": [
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:        {
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "devices": [
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "/dev/loop4"
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            ],
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_name": "ceph_lv1",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_size": "21470642176",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "name": "ceph_lv1",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "tags": {
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.cluster_name": "ceph",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.crush_device_class": "",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.encrypted": "0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.osd_id": "1",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.type": "block",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.vdo": "0"
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            },
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "type": "block",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "vg_name": "ceph_vg1"
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:        }
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:    ],
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:    "2": [
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:        {
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "devices": [
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "/dev/loop5"
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            ],
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_name": "ceph_lv2",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_size": "21470642176",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "name": "ceph_lv2",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "tags": {
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.cluster_name": "ceph",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.crush_device_class": "",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.encrypted": "0",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.osd_id": "2",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.type": "block",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:                "ceph.vdo": "0"
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            },
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "type": "block",
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:            "vg_name": "ceph_vg2"
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:        }
Oct  7 10:27:51 np0005473739 modest_yonath[364374]:    ]
Oct  7 10:27:51 np0005473739 modest_yonath[364374]: }
Oct  7 10:27:51 np0005473739 systemd[1]: libpod-f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783.scope: Deactivated successfully.
Oct  7 10:27:51 np0005473739 podman[364388]: 2025-10-07 14:27:51.704033539 +0000 UTC m=+0.026184591 container died f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:27:51 np0005473739 nova_compute[259550]: 2025-10-07 14:27:51.739 2 DEBUG nova.network.neutron [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updated VIF entry in instance network info cache for port 8d87c94f-f436-4619-9ca7-9e116cab44bf. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:27:51 np0005473739 nova_compute[259550]: 2025-10-07 14:27:51.741 2 DEBUG nova.network.neutron [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:27:51 np0005473739 nova_compute[259550]: 2025-10-07 14:27:51.757 2 DEBUG oslo_concurrency.lockutils [req-4dc18136-a29e-40a7-8d09-03aab17acfca req-19c025fd-b88a-45ab-a0a1-7e3bfabff26a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:27:51 np0005473739 nova_compute[259550]: 2025-10-07 14:27:51.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:51 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 10:27:51 np0005473739 nova_compute[259550]: 2025-10-07 14:27:51.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:27:51 np0005473739 nova_compute[259550]: 2025-10-07 14:27:51.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:27:51 np0005473739 nova_compute[259550]: 2025-10-07 14:27:51.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.003 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.004 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.004 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1953: 305 pgs: 305 active+clean; 120 MiB data, 749 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 85 B/s wr, 10 op/s
Oct  7 10:27:52 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f3234d7827d1247b757a4f4c48ae3c620c5161a10bbd4f6d35dcc51ff5e73545-merged.mount: Deactivated successfully.
Oct  7 10:27:52 np0005473739 podman[364388]: 2025-10-07 14:27:52.373743573 +0000 UTC m=+0.695894645 container remove f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_yonath, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:27:52 np0005473739 systemd[1]: libpod-conmon-f0e17b817f73837f7c1b16f7e1543c36814e7ffa6ec982a191203e42c0e3c783.scope: Deactivated successfully.
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.421 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Image rbd:vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.422 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.422 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Ensure instance console log exists: /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.423 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.423 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.423 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.426 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start _get_guest_xml network_info=[{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:27:22Z,direct_url=<?>,disk_format='raw',id=6cbf90eb-7382-4407-9e6c-a36d06a02db1,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1596186961-shelved',owner='65245efcef84404ca2ccf82df5946696',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:27:33Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.430 2 WARNING nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.436 2 DEBUG nova.virt.libvirt.host [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.436 2 DEBUG nova.virt.libvirt.host [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.442 2 DEBUG nova.virt.libvirt.host [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.442 2 DEBUG nova.virt.libvirt.host [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.443 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.443 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:27:22Z,direct_url=<?>,disk_format='raw',id=6cbf90eb-7382-4407-9e6c-a36d06a02db1,min_disk=1,min_ram=0,name='tempest-ServersNegativeTestJSON-server-1596186961-shelved',owner='65245efcef84404ca2ccf82df5946696',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:27:33Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.444 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.444 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.444 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.444 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.445 2 DEBUG nova.virt.hardware [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.446 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.461 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:27:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:27:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:27:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:27:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:27:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:27:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:27:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/428140567' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.924 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.949 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:27:52 np0005473739 nova_compute[259550]: 2025-10-07 14:27:52.958 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:27:53 np0005473739 podman[364584]: 2025-10-07 14:27:53.004922179 +0000 UTC m=+0.023813148 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:27:53 np0005473739 podman[364584]: 2025-10-07 14:27:53.340754322 +0000 UTC m=+0.359645321 container create 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:27:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:27:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/512768072' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.460 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.462 2 DEBUG nova.virt.libvirt.vif [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='6cbf90eb-7382-4407-9e6c-a36d06a02db1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:26:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virti
o',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member',shelved_at='2025-10-07T14:27:33.488961',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6cbf90eb-7382-4407-9e6c-a36d06a02db1'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:27:45Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.463 2 DEBUG nova.network.os_vif_util [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.464 2 DEBUG nova.network.os_vif_util [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.465 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.493 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  <uuid>91c66dff-47e6-4b52-9e3f-d8c58d256bcf</uuid>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  <name>instance-00000066</name>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <nova:name>tempest-ServersNegativeTestJSON-server-1596186961</nova:name>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:27:52</nova:creationTime>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <nova:user uuid="6b6ae9b333804dcc8e1ed82ba0879e32">tempest-ServersNegativeTestJSON-1110185129-project-member</nova:user>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <nova:project uuid="65245efcef84404ca2ccf82df5946696">tempest-ServersNegativeTestJSON-1110185129</nova:project>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="6cbf90eb-7382-4407-9e6c-a36d06a02db1"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <nova:port uuid="8d87c94f-f436-4619-9ca7-9e116cab44bf">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <entry name="serial">91c66dff-47e6-4b52-9e3f-d8c58d256bcf</entry>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <entry name="uuid">91c66dff-47e6-4b52-9e3f-d8c58d256bcf</entry>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:ce:79:3a"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <target dev="tap8d87c94f-f4"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/console.log" append="off"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <input type="keyboard" bus="usb"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:27:53 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:27:53 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:27:53 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:27:53 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.494 2 DEBUG nova.compute.manager [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Preparing to wait for external event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.494 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.495 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.495 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.496 2 DEBUG nova.virt.libvirt.vif [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='6cbf90eb-7382-4407-9e6c-a36d06a02db1',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:26:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member',shelved_at='2025-10-07T14:27:33.488961',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='6cbf90eb-7382-4407-9e6c-a36d06a02db1'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:27:45Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.496 2 DEBUG nova.network.os_vif_util [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.497 2 DEBUG nova.network.os_vif_util [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.497 2 DEBUG os_vif [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.499 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.499 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.502 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d87c94f-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.503 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d87c94f-f4, col_values=(('external_ids', {'iface-id': '8d87c94f-f436-4619-9ca7-9e116cab44bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:79:3a', 'vm-uuid': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:53 np0005473739 NetworkManager[44949]: <info>  [1759847273.5059] manager: (tap8d87c94f-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/418)
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.517 2 INFO os_vif [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4')#033[00m
Oct  7 10:27:53 np0005473739 systemd[1]: Started libpod-conmon-392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94.scope.
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.619 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.619 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.619 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] No VIF found with MAC fa:16:3e:ce:79:3a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.620 2 INFO nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Using config drive#033[00m
Oct  7 10:27:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.646 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:27:53 np0005473739 podman[364584]: 2025-10-07 14:27:53.659407727 +0000 UTC m=+0.678298706 container init 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:27:53 np0005473739 podman[364584]: 2025-10-07 14:27:53.665412097 +0000 UTC m=+0.684303056 container start 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.667 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:53 np0005473739 podman[364584]: 2025-10-07 14:27:53.669383474 +0000 UTC m=+0.688274453 container attach 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct  7 10:27:53 np0005473739 romantic_wright[364624]: 167 167
Oct  7 10:27:53 np0005473739 systemd[1]: libpod-392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94.scope: Deactivated successfully.
Oct  7 10:27:53 np0005473739 conmon[364624]: conmon 392e101ecc239a61c67d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94.scope/container/memory.events
Oct  7 10:27:53 np0005473739 podman[364584]: 2025-10-07 14:27:53.672869896 +0000 UTC m=+0.691760855 container died 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:27:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b446219e50ade6aab90b1a390481dfcf4dd12ba5ff4a232d8e0e356a0704e268-merged.mount: Deactivated successfully.
Oct  7 10:27:53 np0005473739 podman[364584]: 2025-10-07 14:27:53.711059296 +0000 UTC m=+0.729950255 container remove 392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_wright, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:27:53 np0005473739 nova_compute[259550]: 2025-10-07 14:27:53.716 2 DEBUG nova.objects.instance [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'keypairs' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:27:53 np0005473739 systemd[1]: libpod-conmon-392e101ecc239a61c67dc2fb37c3cf8d1e4a14729b27569bc64326ee4004ea94.scope: Deactivated successfully.
Oct  7 10:27:53 np0005473739 podman[364666]: 2025-10-07 14:27:53.890224794 +0000 UTC m=+0.049750180 container create b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct  7 10:27:53 np0005473739 systemd[1]: Started libpod-conmon-b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f.scope.
Oct  7 10:27:53 np0005473739 podman[364666]: 2025-10-07 14:27:53.865387831 +0000 UTC m=+0.024913257 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:27:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:27:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726696d104e98c7547c6543e395624c24b66d23f2e699fe046c030e09717d3f8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726696d104e98c7547c6543e395624c24b66d23f2e699fe046c030e09717d3f8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726696d104e98c7547c6543e395624c24b66d23f2e699fe046c030e09717d3f8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/726696d104e98c7547c6543e395624c24b66d23f2e699fe046c030e09717d3f8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:53 np0005473739 podman[364666]: 2025-10-07 14:27:53.985313545 +0000 UTC m=+0.144838941 container init b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 10:27:53 np0005473739 podman[364666]: 2025-10-07 14:27:53.994567732 +0000 UTC m=+0.154093118 container start b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:27:53 np0005473739 podman[364666]: 2025-10-07 14:27:53.997909981 +0000 UTC m=+0.157435387 container attach b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:27:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1954: 305 pgs: 305 active+clean; 159 MiB data, 788 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.138 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.153 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.154 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.192 2 INFO nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Creating config drive at /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config#033[00m
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.197 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf77v5bax execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.350 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf77v5bax" returned: 0 in 0.153s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.372 2 DEBUG nova.storage.rbd_utils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] rbd image 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.378 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.560 2 DEBUG oslo_concurrency.processutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config 91c66dff-47e6-4b52-9e3f-d8c58d256bcf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.182s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.561 2 INFO nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deleting local config drive /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf/disk.config because it was imported into RBD.#033[00m
Oct  7 10:27:54 np0005473739 kernel: tap8d87c94f-f4: entered promiscuous mode
Oct  7 10:27:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:27:54Z|01035|binding|INFO|Claiming lport 8d87c94f-f436-4619-9ca7-9e116cab44bf for this chassis.
Oct  7 10:27:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:27:54Z|01036|binding|INFO|8d87c94f-f436-4619-9ca7-9e116cab44bf: Claiming fa:16:3e:ce:79:3a 10.100.0.3
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:54 np0005473739 NetworkManager[44949]: <info>  [1759847274.6200] manager: (tap8d87c94f-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/419)
Oct  7 10:27:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:27:54Z|01037|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf ovn-installed in OVS
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:54 np0005473739 nova_compute[259550]: 2025-10-07 14:27:54.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:54 np0005473739 systemd-udevd[364740]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:27:54 np0005473739 systemd-machined[214580]: New machine qemu-129-instance-00000066.
Oct  7 10:27:54 np0005473739 NetworkManager[44949]: <info>  [1759847274.6679] device (tap8d87c94f-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:27:54 np0005473739 NetworkManager[44949]: <info>  [1759847274.6686] device (tap8d87c94f-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:27:54 np0005473739 systemd[1]: Started Virtual Machine qemu-129-instance-00000066.
Oct  7 10:27:54 np0005473739 kind_curran[364682]: {
Oct  7 10:27:54 np0005473739 kind_curran[364682]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "osd_id": 2,
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "type": "bluestore"
Oct  7 10:27:54 np0005473739 kind_curran[364682]:    },
Oct  7 10:27:54 np0005473739 kind_curran[364682]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "osd_id": 1,
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "type": "bluestore"
Oct  7 10:27:54 np0005473739 kind_curran[364682]:    },
Oct  7 10:27:54 np0005473739 kind_curran[364682]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "osd_id": 0,
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:27:54 np0005473739 kind_curran[364682]:        "type": "bluestore"
Oct  7 10:27:54 np0005473739 kind_curran[364682]:    }
Oct  7 10:27:54 np0005473739 kind_curran[364682]: }
Oct  7 10:27:55 np0005473739 systemd[1]: libpod-b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f.scope: Deactivated successfully.
Oct  7 10:27:55 np0005473739 systemd[1]: libpod-b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f.scope: Consumed 1.024s CPU time.
Oct  7 10:27:55 np0005473739 conmon[364682]: conmon b8c14af54ae2a9cd3c03 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f.scope/container/memory.events
Oct  7 10:27:55 np0005473739 podman[364666]: 2025-10-07 14:27:55.024351248 +0000 UTC m=+1.183876634 container died b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 10:27:55 np0005473739 systemd[1]: var-lib-containers-storage-overlay-726696d104e98c7547c6543e395624c24b66d23f2e699fe046c030e09717d3f8-merged.mount: Deactivated successfully.
Oct  7 10:27:55 np0005473739 podman[364666]: 2025-10-07 14:27:55.09174049 +0000 UTC m=+1.251265876 container remove b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_curran, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 10:27:55 np0005473739 systemd[1]: libpod-conmon-b8c14af54ae2a9cd3c0319b37d54fef73522847620de5efb8bd9f537bdef473f.scope: Deactivated successfully.
Oct  7 10:27:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:27:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:27:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:27:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:27:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ba106108-6e6f-470d-bb53-fbad2dc6c63f does not exist
Oct  7 10:27:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev db9ec09e-4ec3-41ba-8553-1ccecffd66e5 does not exist
Oct  7 10:27:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:27:55Z|01038|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf up in Southbound
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.305 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '7', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.306 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a bound to our chassis#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.307 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df30eae3-80f8-4787-8c66-2dfab55da65a#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.321 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ed5cdc9d-9b4f-41e3-af62-7c3a3940ce1b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.322 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf30eae3-81 in ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.324 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf30eae3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.324 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3bdef5e2-f660-4e4f-95d9-cf07d26d6b62]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.325 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5155d05b-6f54-49c6-90a5-c923705d6fe2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.342 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[1f463f77-18bb-405e-a13a-0b03bbd17106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.372 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4726ff7d-0f60-470d-8508-2f7f9ee1c69b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.409 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e9a40f8c-26c0-4247-8de2-1a08c4465a8e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ace15dc4-eec7-4fed-b9ca-94a0dba022b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 systemd-udevd[364742]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:27:55 np0005473739 NetworkManager[44949]: <info>  [1759847275.4177] manager: (tapdf30eae3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/420)
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.448 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8378d8c2-7186-4265-93a7-a0079581ac5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.453 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ca2576bc-8e2f-41ec-91f9-7bab9de94f78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 NetworkManager[44949]: <info>  [1759847275.4744] device (tapdf30eae3-80): carrier: link connected
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.481 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d18edb9d-781e-410b-ba16-8a80f03bb7dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.500 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3e40d0e-5f2f-495c-97f9-d32d323834e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785903, 'reachable_time': 42236, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 364865, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.517 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f6bd2a7-484f-493d-b718-329a84937af4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:c8ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 785903, 'tstamp': 785903}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 364866, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.544 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[22b97f48-0cc4-44b6-99ae-3f3620503b27]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 301], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785903, 'reachable_time': 42236, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 364882, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.576 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dc7245c3-34cc-4a73-914b-312ee0843ee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[80d59c0b-0e20-49d0-bb5d-b1befdb0ec22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.630 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.631 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.631 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf30eae3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:27:55 np0005473739 kernel: tapdf30eae3-80: entered promiscuous mode
Oct  7 10:27:55 np0005473739 nova_compute[259550]: 2025-10-07 14:27:55.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:55 np0005473739 NetworkManager[44949]: <info>  [1759847275.6351] manager: (tapdf30eae3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/421)
Oct  7 10:27:55 np0005473739 nova_compute[259550]: 2025-10-07 14:27:55.636 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.637 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf30eae3-80, col_values=(('external_ids', {'iface-id': '36ffe8eb-a28d-46e1-ae56-324c81416e5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:27:55 np0005473739 nova_compute[259550]: 2025-10-07 14:27:55.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:27:55Z|01039|binding|INFO|Releasing lport 36ffe8eb-a28d-46e1-ae56-324c81416e5e from this chassis (sb_readonly=0)
Oct  7 10:27:55 np0005473739 nova_compute[259550]: 2025-10-07 14:27:55.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.654 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.655 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf268914-398b-4f51-8213-6398222aabca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.655 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-df30eae3-80f8-4787-8c66-2dfab55da65a
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID df30eae3-80f8-4787-8c66-2dfab55da65a
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:27:55 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:55.657 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'env', 'PROCESS_TAG=haproxy-df30eae3-80f8-4787-8c66-2dfab55da65a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df30eae3-80f8-4787-8c66-2dfab55da65a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:27:55 np0005473739 nova_compute[259550]: 2025-10-07 14:27:55.865 2 DEBUG nova.compute.manager [req-352e7e0e-ce93-4870-9733-04b3e927d70c req-2c20ff73-ab25-4576-82e1-1efa095c04da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:27:55 np0005473739 nova_compute[259550]: 2025-10-07 14:27:55.865 2 DEBUG oslo_concurrency.lockutils [req-352e7e0e-ce93-4870-9733-04b3e927d70c req-2c20ff73-ab25-4576-82e1-1efa095c04da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:55 np0005473739 nova_compute[259550]: 2025-10-07 14:27:55.871 2 DEBUG oslo_concurrency.lockutils [req-352e7e0e-ce93-4870-9733-04b3e927d70c req-2c20ff73-ab25-4576-82e1-1efa095c04da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.006s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:55 np0005473739 nova_compute[259550]: 2025-10-07 14:27:55.871 2 DEBUG oslo_concurrency.lockutils [req-352e7e0e-ce93-4870-9733-04b3e927d70c req-2c20ff73-ab25-4576-82e1-1efa095c04da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:55 np0005473739 nova_compute[259550]: 2025-10-07 14:27:55.871 2 DEBUG nova.compute.manager [req-352e7e0e-ce93-4870-9733-04b3e927d70c req-2c20ff73-ab25-4576-82e1-1efa095c04da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Processing event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:27:56 np0005473739 podman[364942]: 2025-10-07 14:27:56.027017791 +0000 UTC m=+0.063393736 container create 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.046 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847276.0452726, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.047 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Started (Lifecycle Event)#033[00m
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.049 2 DEBUG nova.compute.manager [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.055 2 DEBUG nova.virt.libvirt.driver [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:27:56 np0005473739 systemd[1]: Started libpod-conmon-0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce.scope.
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.058 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance spawned successfully.#033[00m
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.067 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.072 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:27:56 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:27:56 np0005473739 podman[364942]: 2025-10-07 14:27:55.987379191 +0000 UTC m=+0.023755156 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:27:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9e094ead86eef4fe25a38bfdd989dd00f422d4b0c9f86053c4d21d1603c9022a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.093 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.094 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847276.045442, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.094 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:27:56 np0005473739 podman[364942]: 2025-10-07 14:27:56.098207252 +0000 UTC m=+0.134583227 container init 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:27:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1955: 305 pgs: 305 active+clean; 200 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 3.9 MiB/s wr, 77 op/s
Oct  7 10:27:56 np0005473739 podman[364942]: 2025-10-07 14:27:56.106656288 +0000 UTC m=+0.143032233 container start 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.111 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.114 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847276.0521812, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.114 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:27:56 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [NOTICE]   (364961) : New worker (364963) forked
Oct  7 10:27:56 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [NOTICE]   (364961) : Loading success.
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.134 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.139 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.163 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:27:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:56.169 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '11', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:27:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:56.170 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated#033[00m
Oct  7 10:27:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:56.171 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:27:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:56.172 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6461eeed-1b77-4b46-b2f0-e9c7f184884c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #90. Immutable memtables: 0.
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.474996) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 51] Flushing memtable with next log file: 90
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276475053, "job": 51, "event": "flush_started", "num_memtables": 1, "num_entries": 1011, "num_deletes": 252, "total_data_size": 1367403, "memory_usage": 1393392, "flush_reason": "Manual Compaction"}
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 51] Level-0 flush table #91: started
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276490277, "cf_name": "default", "job": 51, "event": "table_file_creation", "file_number": 91, "file_size": 1343286, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 40179, "largest_seqno": 41189, "table_properties": {"data_size": 1338281, "index_size": 2529, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1413, "raw_key_size": 10032, "raw_average_key_size": 18, "raw_value_size": 1328172, "raw_average_value_size": 2397, "num_data_blocks": 112, "num_entries": 554, "num_filter_entries": 554, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847196, "oldest_key_time": 1759847196, "file_creation_time": 1759847276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 91, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 51] Flush lasted 15457 microseconds, and 8320 cpu microseconds.
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.490452) [db/flush_job.cc:967] [default] [JOB 51] Level-0 flush table #91: 1343286 bytes OK
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.490512) [db/memtable_list.cc:519] [default] Level-0 commit table #91 started
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.492114) [db/memtable_list.cc:722] [default] Level-0 commit table #91: memtable #1 done
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.492137) EVENT_LOG_v1 {"time_micros": 1759847276492129, "job": 51, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.492156) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 51] Try to delete WAL files size 1362570, prev total WAL file size 1362570, number of live WAL files 2.
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000087.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.494955) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6B760030' seq:72057594037927935, type:22 .. '6B7600323531' seq:0, type:0; will stop at (end)
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 52] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 51 Base level 0, inputs: [91(1311KB)], [89(9423KB)]
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276495018, "job": 52, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [91], "files_L6": [89], "score": -1, "input_data_size": 10992855, "oldest_snapshot_seqno": -1}
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 52] Generated table #92: 6496 keys, 10287771 bytes, temperature: kUnknown
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276556048, "cf_name": "default", "job": 52, "event": "table_file_creation", "file_number": 92, "file_size": 10287771, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10241901, "index_size": 28544, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16261, "raw_key_size": 166845, "raw_average_key_size": 25, "raw_value_size": 10123063, "raw_average_value_size": 1558, "num_data_blocks": 1133, "num_entries": 6496, "num_filter_entries": 6496, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847276, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 92, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.556287) [db/compaction/compaction_job.cc:1663] [default] [JOB 52] Compacted 1@0 + 1@6 files to L6 => 10287771 bytes
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.557323) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 179.8 rd, 168.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.3, 9.2 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(15.8) write-amplify(7.7) OK, records in: 7017, records dropped: 521 output_compression: NoCompression
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.557338) EVENT_LOG_v1 {"time_micros": 1759847276557330, "job": 52, "event": "compaction_finished", "compaction_time_micros": 61126, "compaction_time_cpu_micros": 22764, "output_level": 6, "num_output_files": 1, "total_output_size": 10287771, "num_input_records": 7017, "num_output_records": 6496, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000091.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276557659, "job": 52, "event": "table_file_deletion", "file_number": 91}
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000089.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847276559222, "job": 52, "event": "table_file_deletion", "file_number": 89}
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.494808) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.559287) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.559294) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.559297) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.559299) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:27:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:27:56.559301) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:56 np0005473739 nova_compute[259550]: 2025-10-07 14:27:56.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:27:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e268 do_prune osdmap full prune enabled
Oct  7 10:27:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e269 e269: 3 total, 3 up, 3 in
Oct  7 10:27:57 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e269: 3 total, 3 up, 3 in
Oct  7 10:27:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:57.589 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '14', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:27:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:57.590 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated#033[00m
Oct  7 10:27:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:57.591 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:27:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:27:57.592 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[65833c99-4f39-4ace-b806-539cb2a579bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:27:57 np0005473739 nova_compute[259550]: 2025-10-07 14:27:57.649 2 DEBUG nova.compute.manager [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:27:57 np0005473739 nova_compute[259550]: 2025-10-07 14:27:57.754 2 DEBUG oslo_concurrency.lockutils [None req-dc473a8d-197f-49f8-9d6b-794b5b5af2bb 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 12.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:57 np0005473739 nova_compute[259550]: 2025-10-07 14:27:57.948 2 DEBUG nova.compute.manager [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:27:57 np0005473739 nova_compute[259550]: 2025-10-07 14:27:57.949 2 DEBUG oslo_concurrency.lockutils [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:27:57 np0005473739 nova_compute[259550]: 2025-10-07 14:27:57.949 2 DEBUG oslo_concurrency.lockutils [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:27:57 np0005473739 nova_compute[259550]: 2025-10-07 14:27:57.950 2 DEBUG oslo_concurrency.lockutils [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:27:57 np0005473739 nova_compute[259550]: 2025-10-07 14:27:57.950 2 DEBUG nova.compute.manager [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:27:57 np0005473739 nova_compute[259550]: 2025-10-07 14:27:57.950 2 WARNING nova.compute.manager [req-da11c6e4-0646-4801-ade0-4dbbcad3d840 req-5f682aa6-de19-4634-b41a-c70c7c401697 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state active and task_state None.#033[00m
Oct  7 10:27:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1957: 305 pgs: 305 active+clean; 200 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 4.7 MiB/s rd, 4.7 MiB/s wr, 92 op/s
Oct  7 10:27:58 np0005473739 nova_compute[259550]: 2025-10-07 14:27:58.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:27:59 np0005473739 podman[364972]: 2025-10-07 14:27:59.056562781 +0000 UTC m=+0.052913855 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  7 10:27:59 np0005473739 podman[364973]: 2025-10-07 14:27:59.083809779 +0000 UTC m=+0.078418196 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:28:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:00.059 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:00.060 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:00.060 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1958: 305 pgs: 305 active+clean; 128 MiB data, 771 MiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 4.7 MiB/s wr, 158 op/s
Oct  7 10:28:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:01 np0005473739 nova_compute[259550]: 2025-10-07 14:28:01.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1959: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 207 op/s
Oct  7 10:28:03 np0005473739 nova_compute[259550]: 2025-10-07 14:28:03.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1960: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.8 MiB/s rd, 2.5 MiB/s wr, 126 op/s
Oct  7 10:28:04 np0005473739 nova_compute[259550]: 2025-10-07 14:28:04.623 2 DEBUG nova.objects.instance [None req-2eb11ae5-a7e4-4a5e-a509-2f4b4409caa8 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:28:04 np0005473739 nova_compute[259550]: 2025-10-07 14:28:04.646 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847284.6462533, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:28:04 np0005473739 nova_compute[259550]: 2025-10-07 14:28:04.646 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:28:04 np0005473739 nova_compute[259550]: 2025-10-07 14:28:04.665 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:28:04 np0005473739 nova_compute[259550]: 2025-10-07 14:28:04.672 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:28:04 np0005473739 nova_compute[259550]: 2025-10-07 14:28:04.696 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  7 10:28:05 np0005473739 kernel: tap8d87c94f-f4 (unregistering): left promiscuous mode
Oct  7 10:28:05 np0005473739 NetworkManager[44949]: <info>  [1759847285.2773] device (tap8d87c94f-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:28:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:05Z|01040|binding|INFO|Releasing lport 8d87c94f-f436-4619-9ca7-9e116cab44bf from this chassis (sb_readonly=0)
Oct  7 10:28:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:05Z|01041|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf down in Southbound
Oct  7 10:28:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:05Z|01042|binding|INFO|Removing iface tap8d87c94f-f4 ovn-installed in OVS
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.296 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '9', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.298 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a unbound from our chassis#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.299 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df30eae3-80f8-4787-8c66-2dfab55da65a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.300 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aed93dcf-f16a-45cd-a3ab-62e72bec28d9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.300 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace which is not needed anymore#033[00m
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:05 np0005473739 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct  7 10:28:05 np0005473739 systemd[1]: machine-qemu\x2d129\x2dinstance\x2d00000066.scope: Consumed 10.390s CPU time.
Oct  7 10:28:05 np0005473739 systemd-machined[214580]: Machine qemu-129-instance-00000066 terminated.
Oct  7 10:28:05 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [NOTICE]   (364961) : haproxy version is 2.8.14-c23fe91
Oct  7 10:28:05 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [NOTICE]   (364961) : path to executable is /usr/sbin/haproxy
Oct  7 10:28:05 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [WARNING]  (364961) : Exiting Master process...
Oct  7 10:28:05 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [ALERT]    (364961) : Current worker (364963) exited with code 143 (Terminated)
Oct  7 10:28:05 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[364957]: [WARNING]  (364961) : All workers exited. Exiting... (0)
Oct  7 10:28:05 np0005473739 systemd[1]: libpod-0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce.scope: Deactivated successfully.
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.461 2 DEBUG nova.compute.manager [None req-2eb11ae5-a7e4-4a5e-a509-2f4b4409caa8 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:28:05 np0005473739 podman[365042]: 2025-10-07 14:28:05.466546517 +0000 UTC m=+0.050033467 container died 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  7 10:28:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce-userdata-shm.mount: Deactivated successfully.
Oct  7 10:28:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9e094ead86eef4fe25a38bfdd989dd00f422d4b0c9f86053c4d21d1603c9022a-merged.mount: Deactivated successfully.
Oct  7 10:28:05 np0005473739 podman[365042]: 2025-10-07 14:28:05.514721125 +0000 UTC m=+0.098208075 container cleanup 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:28:05 np0005473739 systemd[1]: libpod-conmon-0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce.scope: Deactivated successfully.
Oct  7 10:28:05 np0005473739 podman[365084]: 2025-10-07 14:28:05.586995116 +0000 UTC m=+0.047479270 container remove 0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.593 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a895dab-bc65-405c-bc14-85d6457aeb76]: (4, ('Tue Oct  7 02:28:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce)\n0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce\nTue Oct  7 02:28:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce)\n0f3e7d4c07be37c02ce427d0fd92e4d58d596fbab7555539daefb3e7e92bd2ce\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.594 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3aab05a-793d-4a0a-b454-5f7bfe118973]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.595 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:05 np0005473739 kernel: tapdf30eae3-80: left promiscuous mode
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.619 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3314a6b-5044-4028-bad7-01d37917e2b9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.638 2 DEBUG nova.compute.manager [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.639 2 DEBUG oslo_concurrency.lockutils [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.639 2 DEBUG oslo_concurrency.lockutils [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.640 2 DEBUG oslo_concurrency.lockutils [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.640 2 DEBUG nova.compute.manager [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:28:05 np0005473739 nova_compute[259550]: 2025-10-07 14:28:05.640 2 WARNING nova.compute.manager [req-be785c04-40d8-4f3a-8168-3f7b350923cf req-ce5f6ba3-77fc-4d13-a111-8e7a211b036a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state suspended and task_state None.#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.653 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[06942c1e-4e09-4b7f-b32f-f305f30cff70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.656 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d85dbb0f-8ab4-45c8-8fa1-9f32a52563c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[10da4fb2-a758-42d2-b7f4-d8dd643cbf7c]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 785896, 'reachable_time': 28789, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365102, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:05 np0005473739 systemd[1]: run-netns-ovnmeta\x2ddf30eae3\x2d80f8\x2d4787\x2d8c66\x2d2dfab55da65a.mount: Deactivated successfully.
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.684 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:28:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:05.684 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3d9324b5-0555-4ed0-afe7-b62b37f3af24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:06.016 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8::f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '18', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 10.100.0.2 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '15', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 
'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:28:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:06.018 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated#033[00m
Oct  7 10:28:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:06.020 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:28:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:06.021 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[643c449e-a468-4353-82e2-7ae1f274552f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1961: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 115 op/s
Oct  7 10:28:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e269 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e269 do_prune osdmap full prune enabled
Oct  7 10:28:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 e270: 3 total, 3 up, 3 in
Oct  7 10:28:06 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e270: 3 total, 3 up, 3 in
Oct  7 10:28:06 np0005473739 nova_compute[259550]: 2025-10-07 14:28:06.744 2 INFO nova.compute.manager [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Resuming#033[00m
Oct  7 10:28:06 np0005473739 nova_compute[259550]: 2025-10-07 14:28:06.746 2 DEBUG nova.objects.instance [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'flavor' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:28:06 np0005473739 nova_compute[259550]: 2025-10-07 14:28:06.779 2 DEBUG oslo_concurrency.lockutils [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:28:06 np0005473739 nova_compute[259550]: 2025-10-07 14:28:06.779 2 DEBUG oslo_concurrency.lockutils [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquired lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:28:06 np0005473739 nova_compute[259550]: 2025-10-07 14:28:06.780 2 DEBUG nova.network.neutron [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:28:06 np0005473739 nova_compute[259550]: 2025-10-07 14:28:06.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:07 np0005473739 nova_compute[259550]: 2025-10-07 14:28:07.758 2 DEBUG nova.compute.manager [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:28:07 np0005473739 nova_compute[259550]: 2025-10-07 14:28:07.759 2 DEBUG oslo_concurrency.lockutils [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:07 np0005473739 nova_compute[259550]: 2025-10-07 14:28:07.760 2 DEBUG oslo_concurrency.lockutils [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:07 np0005473739 nova_compute[259550]: 2025-10-07 14:28:07.760 2 DEBUG oslo_concurrency.lockutils [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:07 np0005473739 nova_compute[259550]: 2025-10-07 14:28:07.761 2 DEBUG nova.compute.manager [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:28:07 np0005473739 nova_compute[259550]: 2025-10-07 14:28:07.761 2 WARNING nova.compute.manager [req-712dc50c-784c-4360-b16b-6612097bfe3e req-9d315384-f387-428e-a4a6-55c425a270d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state suspended and task_state resuming.#033[00m
Oct  7 10:28:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1963: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 16 KiB/s wr, 115 op/s
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.446 2 DEBUG nova.network.neutron [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [{"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.463 2 DEBUG oslo_concurrency.lockutils [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Releasing lock "refresh_cache-91c66dff-47e6-4b52-9e3f-d8c58d256bcf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.470 2 DEBUG nova.virt.libvirt.vif [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:27:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_mod
el='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:28:05Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.470 2 DEBUG nova.network.os_vif_util [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.471 2 DEBUG nova.network.os_vif_util [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.471 2 DEBUG os_vif [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8d87c94f-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8d87c94f-f4, col_values=(('external_ids', {'iface-id': '8d87c94f-f436-4619-9ca7-9e116cab44bf', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ce:79:3a', 'vm-uuid': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.476 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.477 2 INFO os_vif [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4')#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.500 2 DEBUG nova.objects.instance [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'numa_topology' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:08 np0005473739 kernel: tap8d87c94f-f4: entered promiscuous mode
Oct  7 10:28:08 np0005473739 NetworkManager[44949]: <info>  [1759847288.5827] manager: (tap8d87c94f-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/422)
Oct  7 10:28:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:08Z|01043|binding|INFO|Claiming lport 8d87c94f-f436-4619-9ca7-9e116cab44bf for this chassis.
Oct  7 10:28:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:08Z|01044|binding|INFO|8d87c94f-f436-4619-9ca7-9e116cab44bf: Claiming fa:16:3e:ce:79:3a 10.100.0.3
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.593 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '10', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.594 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a bound to our chassis#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.595 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df30eae3-80f8-4787-8c66-2dfab55da65a#033[00m
Oct  7 10:28:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:08Z|01045|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf ovn-installed in OVS
Oct  7 10:28:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:08Z|01046|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf up in Southbound
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.614 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2bf991a5-dad6-4614-bf9e-102144cd5ffd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.615 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf30eae3-81 in ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:28:08 np0005473739 systemd-udevd[365117]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.617 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf30eae3-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.617 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[03cb27db-349c-42a5-81e2-83b450872cf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.618 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92ae752c-e7b1-4cfe-b68d-5f93addaf648]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 NetworkManager[44949]: <info>  [1759847288.6311] device (tap8d87c94f-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:28:08 np0005473739 NetworkManager[44949]: <info>  [1759847288.6326] device (tap8d87c94f-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:28:08 np0005473739 systemd-machined[214580]: New machine qemu-130-instance-00000066.
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.634 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[dac39046-aea7-45a7-9085-8fadb1c47e06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 systemd[1]: Started Virtual Machine qemu-130-instance-00000066.
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.664 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4168ebbc-7836-4da0-836f-b3bc19bdc1ab]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.695 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[88093356-93d1-4a52-beaa-8a8d1378d466]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 NetworkManager[44949]: <info>  [1759847288.7013] manager: (tapdf30eae3-80): new Veth device (/org/freedesktop/NetworkManager/Devices/423)
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.700 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3907554b-d42b-4cc3-adfd-21b255eb9b06]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 systemd-udevd[365122]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.735 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bebb12e9-e39a-4b4c-b75a-d2c6531192a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.737 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[668c3bdb-3e33-4dd1-b6da-23ce7be2e658]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 NetworkManager[44949]: <info>  [1759847288.7620] device (tapdf30eae3-80): carrier: link connected
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.770 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b9cb1d-af51-4513-bcf2-16865c7d3911]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.794 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d8f07bf2-9cb0-4694-b1b4-4d827b0e7927]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787232, 'reachable_time': 33075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365151, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.819 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b8e8fa44-2c97-48d6-b7ab-4c1b19a0deb9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed6:c8ab'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 787232, 'tstamp': 787232}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 365152, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.844 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[76aa904a-1e46-4a85-8648-37406aa57ae0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf30eae3-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d6:c8:ab'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 304], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787232, 'reachable_time': 33075, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 365153, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.888 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[08911520-3c1f-42a2-a54a-f311be1225cd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.981 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41e1b1e1-8606-4dda-be8c-da78b20c0be4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.984 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.984 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.985 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf30eae3-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:28:08 np0005473739 kernel: tapdf30eae3-80: entered promiscuous mode
Oct  7 10:28:08 np0005473739 NetworkManager[44949]: <info>  [1759847288.9880] manager: (tapdf30eae3-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/424)
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:08.996 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf30eae3-80, col_values=(('external_ids', {'iface-id': '36ffe8eb-a28d-46e1-ae56-324c81416e5e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:28:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:08Z|01047|binding|INFO|Releasing lport 36ffe8eb-a28d-46e1-ae56-324c81416e5e from this chassis (sb_readonly=0)
Oct  7 10:28:08 np0005473739 nova_compute[259550]: 2025-10-07 14:28:08.998 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:09.016 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:09.017 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c6bc0daa-fe13-4179-900f-abe34d6d8ebc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:09.018 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-df30eae3-80f8-4787-8c66-2dfab55da65a
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/df30eae3-80f8-4787-8c66-2dfab55da65a.pid.haproxy
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID df30eae3-80f8-4787-8c66-2dfab55da65a
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:28:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:09.019 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'env', 'PROCESS_TAG=haproxy-df30eae3-80f8-4787-8c66-2dfab55da65a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df30eae3-80f8-4787-8c66-2dfab55da65a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:28:09 np0005473739 podman[365183]: 2025-10-07 14:28:09.395534212 +0000 UTC m=+0.047241333 container create c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:28:09 np0005473739 systemd[1]: Started libpod-conmon-c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e.scope.
Oct  7 10:28:09 np0005473739 podman[365183]: 2025-10-07 14:28:09.371261384 +0000 UTC m=+0.022968515 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:28:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:28:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/377d196f06f2dbd8119d4f06850230af8d64915c4afaebe12208a94e230e18c6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:28:09 np0005473739 podman[365183]: 2025-10-07 14:28:09.494486096 +0000 UTC m=+0.146193217 container init c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:28:09 np0005473739 podman[365183]: 2025-10-07 14:28:09.502796849 +0000 UTC m=+0.154503970 container start c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  7 10:28:09 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [NOTICE]   (365203) : New worker (365205) forked
Oct  7 10:28:09 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [NOTICE]   (365203) : Loading success.
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.846 2 DEBUG nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.847 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.847 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.847 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.848 2 DEBUG nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.848 2 WARNING nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state suspended and task_state resuming.#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.848 2 DEBUG nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.848 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.849 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.849 2 DEBUG oslo_concurrency.lockutils [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.849 2 DEBUG nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:28:09 np0005473739 nova_compute[259550]: 2025-10-07 14:28:09.849 2 WARNING nova.compute.manager [req-53990dd7-85ca-4356-b431-436946fcdbf9 req-2eb32605-22d9-4fbf-9a17-03daec171500 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state suspended and task_state resuming.#033[00m
Oct  7 10:28:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1964: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 307 B/s wr, 49 op/s
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.369 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 91c66dff-47e6-4b52-9e3f-d8c58d256bcf due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.370 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847290.3693898, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.370 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Started (Lifecycle Event)#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.386 2 DEBUG nova.compute.manager [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.386 2 DEBUG nova.objects.instance [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'pci_devices' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.394 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.396 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.401 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance running successfully.#033[00m
Oct  7 10:28:10 np0005473739 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.404 2 DEBUG nova.virt.libvirt.guest [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.405 2 DEBUG nova.compute.manager [None req-81c54bf2-4962-492e-bb06-28ed8e8d73c9 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.418 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.418 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847290.3723118, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.419 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.472 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:28:10 np0005473739 nova_compute[259550]: 2025-10-07 14:28:10.476 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:28:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:11 np0005473739 nova_compute[259550]: 2025-10-07 14:28:11.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1965: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:13 np0005473739 nova_compute[259550]: 2025-10-07 14:28:13.555 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1966: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 11 op/s
Oct  7 10:28:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:15Z|00109|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:ce:79:3a 10.100.0.3
Oct  7 10:28:16 np0005473739 podman[365256]: 2025-10-07 14:28:16.077750784 +0000 UTC m=+0.062441490 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, managed_by=edpm_ansible)
Oct  7 10:28:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1967: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 441 KiB/s rd, 3.2 KiB/s wr, 33 op/s
Oct  7 10:28:16 np0005473739 podman[365257]: 2025-10-07 14:28:16.111823394 +0000 UTC m=+0.090244472 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:28:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:16 np0005473739 nova_compute[259550]: 2025-10-07 14:28:16.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1968: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 379 KiB/s rd, 2.8 KiB/s wr, 28 op/s
Oct  7 10:28:18 np0005473739 nova_compute[259550]: 2025-10-07 14:28:18.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1969: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 12 KiB/s wr, 48 op/s
Oct  7 10:28:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.759 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.759 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.760 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.760 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.760 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.761 2 INFO nova.compute.manager [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Terminating instance#033[00m
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.762 2 DEBUG nova.compute.manager [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:28:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.765 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=28, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=27) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.767 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:28:21 np0005473739 kernel: tap8d87c94f-f4 (unregistering): left promiscuous mode
Oct  7 10:28:21 np0005473739 NetworkManager[44949]: <info>  [1759847301.8169] device (tap8d87c94f-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.819 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:21Z|01048|binding|INFO|Releasing lport 8d87c94f-f436-4619-9ca7-9e116cab44bf from this chassis (sb_readonly=0)
Oct  7 10:28:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:21Z|01049|binding|INFO|Setting lport 8d87c94f-f436-4619-9ca7-9e116cab44bf down in Southbound
Oct  7 10:28:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:21Z|01050|binding|INFO|Removing iface tap8d87c94f-f4 ovn-installed in OVS
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.838 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ce:79:3a 10.100.0.3'], port_security=['fa:16:3e:ce:79:3a 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '91c66dff-47e6-4b52-9e3f-d8c58d256bcf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df30eae3-80f8-4787-8c66-2dfab55da65a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '65245efcef84404ca2ccf82df5946696', 'neutron:revision_number': '11', 'neutron:security_group_ids': '7ce7b451-d224-4231-90f8-24fa819f5920', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=6efac365-0f4c-42a7-9c1f-c734401b92b1, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8d87c94f-f436-4619-9ca7-9e116cab44bf) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:28:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.839 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8d87c94f-f436-4619-9ca7-9e116cab44bf in datapath df30eae3-80f8-4787-8c66-2dfab55da65a unbound from our chassis#033[00m
Oct  7 10:28:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.840 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df30eae3-80f8-4787-8c66-2dfab55da65a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:28:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.841 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a4c99b2-ed7e-48b3-b849-19dca1c4ca48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:21.841 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a namespace which is not needed anymore#033[00m
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:21 np0005473739 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000066.scope: Deactivated successfully.
Oct  7 10:28:21 np0005473739 systemd[1]: machine-qemu\x2d130\x2dinstance\x2d00000066.scope: Consumed 5.990s CPU time.
Oct  7 10:28:21 np0005473739 systemd-machined[214580]: Machine qemu-130-instance-00000066 terminated.
Oct  7 10:28:21 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [NOTICE]   (365203) : haproxy version is 2.8.14-c23fe91
Oct  7 10:28:21 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [NOTICE]   (365203) : path to executable is /usr/sbin/haproxy
Oct  7 10:28:21 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [WARNING]  (365203) : Exiting Master process...
Oct  7 10:28:21 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [ALERT]    (365203) : Current worker (365205) exited with code 143 (Terminated)
Oct  7 10:28:21 np0005473739 neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a[365198]: [WARNING]  (365203) : All workers exited. Exiting... (0)
Oct  7 10:28:21 np0005473739 systemd[1]: libpod-c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e.scope: Deactivated successfully.
Oct  7 10:28:21 np0005473739 conmon[365198]: conmon c8524fb3c898a9ddaab3 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e.scope/container/memory.events
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:21 np0005473739 podman[365320]: 2025-10-07 14:28:21.980266253 +0000 UTC m=+0.042364724 container died c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.989 2 INFO nova.virt.libvirt.driver [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Instance destroyed successfully.#033[00m
Oct  7 10:28:21 np0005473739 nova_compute[259550]: 2025-10-07 14:28:21.990 2 DEBUG nova.objects.instance [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lazy-loading 'resources' on Instance uuid 91c66dff-47e6-4b52-9e3f-d8c58d256bcf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:28:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e-userdata-shm.mount: Deactivated successfully.
Oct  7 10:28:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-377d196f06f2dbd8119d4f06850230af8d64915c4afaebe12208a94e230e18c6-merged.mount: Deactivated successfully.
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.013 2 DEBUG nova.virt.libvirt.vif [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:26:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersNegativeTestJSON-server-1596186961',display_name='tempest-ServersNegativeTestJSON-server-1596186961',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serversnegativetestjson-server-1596186961',id=102,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:27:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='65245efcef84404ca2ccf82df5946696',ramdisk_id='',reservation_id='r-7x9o1pxk',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersNegativeTestJSON-1110185129',owner_user_name='tempest-ServersNegativeTestJSON-1110185129-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:28:10Z,user_data=None,user_id='6b6ae9b333804dcc8e1ed82ba0879e32',uuid=91c66dff-47e6-4b52-9e3f-d8c58d256bcf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.014 2 DEBUG nova.network.os_vif_util [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converting VIF {"id": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "address": "fa:16:3e:ce:79:3a", "network": {"id": "df30eae3-80f8-4787-8c66-2dfab55da65a", "bridge": "br-int", "label": "tempest-ServersNegativeTestJSON-1605871178-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "65245efcef84404ca2ccf82df5946696", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8d87c94f-f4", "ovs_interfaceid": "8d87c94f-f436-4619-9ca7-9e116cab44bf", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.015 2 DEBUG nova.network.os_vif_util [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.015 2 DEBUG os_vif [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:28:22 np0005473739 podman[365320]: 2025-10-07 14:28:22.015989957 +0000 UTC m=+0.078088428 container cleanup c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.017 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8d87c94f-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.022 2 INFO os_vif [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ce:79:3a,bridge_name='br-int',has_traffic_filtering=True,id=8d87c94f-f436-4619-9ca7-9e116cab44bf,network=Network(df30eae3-80f8-4787-8c66-2dfab55da65a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8d87c94f-f4')#033[00m
Oct  7 10:28:22 np0005473739 systemd[1]: libpod-conmon-c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e.scope: Deactivated successfully.
Oct  7 10:28:22 np0005473739 podman[365357]: 2025-10-07 14:28:22.089783469 +0000 UTC m=+0.050755207 container remove c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:28:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.096 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b55cfe4-2e48-4e70-8d68-ab43d0945336]: (4, ('Tue Oct  7 02:28:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e)\nc8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e\nTue Oct  7 02:28:22 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a (c8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e)\nc8524fb3c898a9ddaab31ff00d5059a1237d19263806f80fe04b26f3c291401e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.100 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99375131-8c33-479f-ab7e-4f6966e4f44c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.101 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf30eae3-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:22 np0005473739 kernel: tapdf30eae3-80: left promiscuous mode
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1970: 305 pgs: 305 active+clean; 121 MiB data, 770 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 20 KiB/s wr, 49 op/s
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.118 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7aa2411-0889-4bd7-b905-4d097a0f3c53]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.147 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[405c9141-c734-4aa2-9a88-7fc67d92295a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.148 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9967ff4d-f990-4fcc-abea-6ddb6b2aae6e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.170 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1bd8c1e5-bda4-4901-9878-ab2a27c54488]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 787225, 'reachable_time': 32286, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 365390, 'error': None, 'target': 'ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.173 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df30eae3-80f8-4787-8c66-2dfab55da65a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:28:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:22.173 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[79d4db00-ff94-42f9-aeff-669fb5a504af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:22 np0005473739 systemd[1]: run-netns-ovnmeta\x2ddf30eae3\x2d80f8\x2d4787\x2d8c66\x2d2dfab55da65a.mount: Deactivated successfully.
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.213 2 DEBUG nova.compute.manager [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.214 2 DEBUG oslo_concurrency.lockutils [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.214 2 DEBUG oslo_concurrency.lockutils [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.215 2 DEBUG oslo_concurrency.lockutils [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.215 2 DEBUG nova.compute.manager [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.215 2 DEBUG nova.compute.manager [req-90d6499e-08bc-40bf-8952-c7dadc145435 req-868ac5dd-31bd-45a1-a4b2-e572037e0215 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-unplugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.375 2 INFO nova.virt.libvirt.driver [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deleting instance files /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_del#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.376 2 INFO nova.virt.libvirt.driver [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deletion of /var/lib/nova/instances/91c66dff-47e6-4b52-9e3f-d8c58d256bcf_del complete#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.452 2 INFO nova.compute.manager [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Took 0.69 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.453 2 DEBUG oslo.service.loopingcall [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.453 2 DEBUG nova.compute.manager [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:28:22 np0005473739 nova_compute[259550]: 2025-10-07 14:28:22.454 2 DEBUG nova.network.neutron [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:28:22
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.meta', 'backups', 'images', 'cephfs.cephfs.data', 'default.rgw.control', '.mgr', 'vms', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.log']
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:28:22 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:28:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:28:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:28:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:28:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:28:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:28:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:28:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:28:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:28:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:28:23 np0005473739 nova_compute[259550]: 2025-10-07 14:28:23.685 2 DEBUG nova.network.neutron [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:28:23 np0005473739 nova_compute[259550]: 2025-10-07 14:28:23.709 2 INFO nova.compute.manager [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Took 1.25 seconds to deallocate network for instance.#033[00m
Oct  7 10:28:23 np0005473739 nova_compute[259550]: 2025-10-07 14:28:23.762 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:23 np0005473739 nova_compute[259550]: 2025-10-07 14:28:23.763 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:23 np0005473739 nova_compute[259550]: 2025-10-07 14:28:23.830 2 DEBUG oslo_concurrency.processutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:28:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1971: 305 pgs: 305 active+clean; 82 MiB data, 751 MiB used, 59 GiB / 60 GiB avail; 551 KiB/s rd, 22 KiB/s wr, 75 op/s
Oct  7 10:28:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:28:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2991991570' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.261 2 DEBUG oslo_concurrency.processutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.431s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.268 2 DEBUG nova.compute.provider_tree [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.287 2 DEBUG nova.scheduler.client.report [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.315 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.552s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.323 2 DEBUG nova.compute.manager [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.324 2 DEBUG oslo_concurrency.lockutils [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.324 2 DEBUG oslo_concurrency.lockutils [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.324 2 DEBUG oslo_concurrency.lockutils [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.325 2 DEBUG nova.compute.manager [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] No waiting events found dispatching network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.325 2 WARNING nova.compute.manager [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received unexpected event network-vif-plugged-8d87c94f-f436-4619-9ca7-9e116cab44bf for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.325 2 DEBUG nova.compute.manager [req-a69b1c35-5a5e-4eb4-b581-ccca1b737ed1 req-715e6abb-843c-4a56-abd4-c6e3feb457df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Received event network-vif-deleted-8d87c94f-f436-4619-9ca7-9e116cab44bf external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.345 2 INFO nova.scheduler.client.report [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Deleted allocations for instance 91c66dff-47e6-4b52-9e3f-d8c58d256bcf#033[00m
Oct  7 10:28:24 np0005473739 nova_compute[259550]: 2025-10-07 14:28:24.407 2 DEBUG oslo_concurrency.lockutils [None req-d17fad48-56ec-4e91-843c-4ddb9eff832f 6b6ae9b333804dcc8e1ed82ba0879e32 65245efcef84404ca2ccf82df5946696 - - default default] Lock "91c66dff-47e6-4b52-9e3f-d8c58d256bcf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.648s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1972: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 502 KiB/s rd, 22 KiB/s wr, 67 op/s
Oct  7 10:28:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:26 np0005473739 nova_compute[259550]: 2025-10-07 14:28:26.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:27 np0005473739 nova_compute[259550]: 2025-10-07 14:28:27.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:27.770 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '28'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:28:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1973: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 19 KiB/s wr, 49 op/s
Oct  7 10:28:28 np0005473739 nova_compute[259550]: 2025-10-07 14:28:28.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:30 np0005473739 podman[365414]: 2025-10-07 14:28:30.072613243 +0000 UTC m=+0.060009504 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  7 10:28:30 np0005473739 podman[365415]: 2025-10-07 14:28:30.10653796 +0000 UTC m=+0.090451638 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  7 10:28:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1974: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 184 KiB/s rd, 19 KiB/s wr, 49 op/s
Oct  7 10:28:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:31 np0005473739 nova_compute[259550]: 2025-10-07 14:28:31.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:32 np0005473739 nova_compute[259550]: 2025-10-07 14:28:32.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1975: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 10 KiB/s wr, 28 op/s
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:28:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:28:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:28:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687165527' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:28:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:28:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2687165527' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:28:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1976: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.5 KiB/s wr, 28 op/s
Oct  7 10:28:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1977: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 2 op/s
Oct  7 10:28:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:36 np0005473739 nova_compute[259550]: 2025-10-07 14:28:36.855 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:36 np0005473739 nova_compute[259550]: 2025-10-07 14:28:36.989 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847301.988357, 91c66dff-47e6-4b52-9e3f-d8c58d256bcf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:28:36 np0005473739 nova_compute[259550]: 2025-10-07 14:28:36.990 2 INFO nova.compute.manager [-] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:28:37 np0005473739 nova_compute[259550]: 2025-10-07 14:28:37.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:37 np0005473739 nova_compute[259550]: 2025-10-07 14:28:37.134 2 DEBUG nova.compute.manager [None req-2843025a-6734-408c-a07e-37847966bef0 - - - - - -] [instance: 91c66dff-47e6-4b52-9e3f-d8c58d256bcf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:28:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1978: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:39 np0005473739 nova_compute[259550]: 2025-10-07 14:28:39.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:28:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1979: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:40 np0005473739 nova_compute[259550]: 2025-10-07 14:28:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:28:40 np0005473739 nova_compute[259550]: 2025-10-07 14:28:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:28:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:41.644 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8:0:1:f816:3eff:feac:9691'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '30', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e3b81107-fbd8-4b68-adb0-30f49b47dd04, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ed3753df-65a0-439b-ac95-e84eb7b44484) old=Port_Binding(mac=['fa:16:3e:ac:96:91 2001:db8::f816:3eff:feac:9691'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feac:9691/64', 'neutron:device_id': 'ovnmeta-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b1af60e6-ad2e-48e6-9494-943f38a2c137', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ce67336f1dc24551aca26f7099ac84de', 'neutron:revision_number': '28', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:28:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:41.645 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ed3753df-65a0-439b-ac95-e84eb7b44484 in datapath b1af60e6-ad2e-48e6-9494-943f38a2c137 updated#033[00m
Oct  7 10:28:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:41.645 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b1af60e6-ad2e-48e6-9494-943f38a2c137, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:28:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:41.646 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[21350ce9-43ca-4bdc-af48-468fd7746937]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:41 np0005473739 nova_compute[259550]: 2025-10-07 14:28:41.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:41 np0005473739 nova_compute[259550]: 2025-10-07 14:28:41.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:28:41 np0005473739 nova_compute[259550]: 2025-10-07 14:28:41.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:28:42 np0005473739 nova_compute[259550]: 2025-10-07 14:28:42.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1980: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1981: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:44 np0005473739 nova_compute[259550]: 2025-10-07 14:28:44.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:28:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1982: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:46 np0005473739 nova_compute[259550]: 2025-10-07 14:28:46.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:47 np0005473739 nova_compute[259550]: 2025-10-07 14:28:47.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:47 np0005473739 podman[365461]: 2025-10-07 14:28:47.076987566 +0000 UTC m=+0.058598697 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:28:47 np0005473739 podman[365460]: 2025-10-07 14:28:47.091781872 +0000 UTC m=+0.081551091 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:28:47 np0005473739 nova_compute[259550]: 2025-10-07 14:28:47.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.009 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.010 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.010 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:28:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1983: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:28:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687717762' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.500 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.682 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.684 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3832MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.684 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.684 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.765 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.782 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.798 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.799 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.822 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.848 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 10:28:48 np0005473739 nova_compute[259550]: 2025-10-07 14:28:48.867 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:28:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:48.882 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3c:42:ed 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ce30dd1c-3f23-4392-a65c-69af967e4a40', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce30dd1c-3f23-4392-a65c-69af967e4a40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=343ddd66-c1f1-4ffd-958d-c4bb2dd19d72, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=46bd0673-542e-432f-a5ba-9baa7ff32653) old=Port_Binding(mac=['fa:16:3e:3c:42:ed 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-ce30dd1c-3f23-4392-a65c-69af967e4a40', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-ce30dd1c-3f23-4392-a65c-69af967e4a40', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:28:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:48.884 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 46bd0673-542e-432f-a5ba-9baa7ff32653 in datapath ce30dd1c-3f23-4392-a65c-69af967e4a40 updated#033[00m
Oct  7 10:28:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:48.884 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network ce30dd1c-3f23-4392-a65c-69af967e4a40, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:28:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:28:48.886 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45d3ed8d-2707-456e-86aa-8a6f9cc28679]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:28:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:28:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/479812549' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:28:49 np0005473739 nova_compute[259550]: 2025-10-07 14:28:49.336 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:28:49 np0005473739 nova_compute[259550]: 2025-10-07 14:28:49.343 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:28:49 np0005473739 nova_compute[259550]: 2025-10-07 14:28:49.374 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:28:49 np0005473739 nova_compute[259550]: 2025-10-07 14:28:49.416 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:28:49 np0005473739 nova_compute[259550]: 2025-10-07 14:28:49.417 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.733s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:28:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1984: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:50 np0005473739 nova_compute[259550]: 2025-10-07 14:28:50.418 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:28:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:51 np0005473739 nova_compute[259550]: 2025-10-07 14:28:51.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:52 np0005473739 nova_compute[259550]: 2025-10-07 14:28:52.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1985: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:28:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:28:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:28:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:28:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:28:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:28:52 np0005473739 nova_compute[259550]: 2025-10-07 14:28:52.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:28:52 np0005473739 nova_compute[259550]: 2025-10-07 14:28:52.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:28:52 np0005473739 nova_compute[259550]: 2025-10-07 14:28:52.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:28:52 np0005473739 nova_compute[259550]: 2025-10-07 14:28:52.998 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:28:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1986: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:28:56 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 2744df9d-6abd-4b43-a390-b38137f12f0e does not exist
Oct  7 10:28:56 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c6c3301b-ebb1-4d8d-a962-c7a909abffe4 does not exist
Oct  7 10:28:56 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b8d84c17-02d5-4693-8391-dc36bbbe8436 does not exist
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:28:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1987: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:28:56 np0005473739 podman[365816]: 2025-10-07 14:28:56.656512595 +0000 UTC m=+0.036322732 container create 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:28:56 np0005473739 systemd[1]: Started libpod-conmon-24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb.scope.
Oct  7 10:28:56 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:28:56 np0005473739 podman[365816]: 2025-10-07 14:28:56.64026665 +0000 UTC m=+0.020076807 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:28:56 np0005473739 podman[365816]: 2025-10-07 14:28:56.737085787 +0000 UTC m=+0.116895954 container init 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 10:28:56 np0005473739 podman[365816]: 2025-10-07 14:28:56.744884446 +0000 UTC m=+0.124694583 container start 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 10:28:56 np0005473739 podman[365816]: 2025-10-07 14:28:56.748811631 +0000 UTC m=+0.128621788 container attach 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 10:28:56 np0005473739 clever_volhard[365833]: 167 167
Oct  7 10:28:56 np0005473739 systemd[1]: libpod-24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb.scope: Deactivated successfully.
Oct  7 10:28:56 np0005473739 conmon[365833]: conmon 24159a04891d6596201e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb.scope/container/memory.events
Oct  7 10:28:56 np0005473739 podman[365816]: 2025-10-07 14:28:56.756154997 +0000 UTC m=+0.135965134 container died 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 10:28:56 np0005473739 systemd[1]: var-lib-containers-storage-overlay-30c554a04a66ab6566aed93cc480c1f3a81ca0a516985e8ad3eef865c7183b8f-merged.mount: Deactivated successfully.
Oct  7 10:28:56 np0005473739 podman[365816]: 2025-10-07 14:28:56.799908636 +0000 UTC m=+0.179718763 container remove 24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=clever_volhard, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 10:28:56 np0005473739 systemd[1]: libpod-conmon-24159a04891d6596201ecec8b07dda30b9e87c490a5c309ed6a8f22d8413eccb.scope: Deactivated successfully.
Oct  7 10:28:56 np0005473739 nova_compute[259550]: 2025-10-07 14:28:56.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:56 np0005473739 podman[365856]: 2025-10-07 14:28:56.962431859 +0000 UTC m=+0.039860416 container create 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:28:57 np0005473739 systemd[1]: Started libpod-conmon-4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8.scope.
Oct  7 10:28:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:28:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:28:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:28:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:28:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:28:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:28:57 np0005473739 nova_compute[259550]: 2025-10-07 14:28:57.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:28:57 np0005473739 podman[365856]: 2025-10-07 14:28:56.94447873 +0000 UTC m=+0.021907307 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:28:57 np0005473739 podman[365856]: 2025-10-07 14:28:57.043253649 +0000 UTC m=+0.120682196 container init 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 10:28:57 np0005473739 podman[365856]: 2025-10-07 14:28:57.055156657 +0000 UTC m=+0.132585204 container start 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 10:28:57 np0005473739 podman[365856]: 2025-10-07 14:28:57.060098389 +0000 UTC m=+0.137526946 container attach 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:28:58 np0005473739 blissful_keldysh[365873]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:28:58 np0005473739 blissful_keldysh[365873]: --> relative data size: 1.0
Oct  7 10:28:58 np0005473739 blissful_keldysh[365873]: --> All data devices are unavailable
Oct  7 10:28:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1988: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:28:58 np0005473739 systemd[1]: libpod-4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8.scope: Deactivated successfully.
Oct  7 10:28:58 np0005473739 podman[365856]: 2025-10-07 14:28:58.143702263 +0000 UTC m=+1.221130810 container died 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:28:58 np0005473739 systemd[1]: libpod-4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8.scope: Consumed 1.034s CPU time.
Oct  7 10:28:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f2f308967f19607bac87169a3826783923512772124ced6b4c3d10d1d14e94e2-merged.mount: Deactivated successfully.
Oct  7 10:28:58 np0005473739 podman[365856]: 2025-10-07 14:28:58.202792942 +0000 UTC m=+1.280221479 container remove 4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_keldysh, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:28:58 np0005473739 systemd[1]: libpod-conmon-4dbb4eda46c8a80b1f3f2455220f4411bddfa30567d5a35747fc2b1731cc52f8.scope: Deactivated successfully.
Oct  7 10:28:58 np0005473739 podman[366057]: 2025-10-07 14:28:58.802414623 +0000 UTC m=+0.039665950 container create f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 10:28:58 np0005473739 systemd[1]: Started libpod-conmon-f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17.scope.
Oct  7 10:28:58 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:28:58 np0005473739 podman[366057]: 2025-10-07 14:28:58.785494551 +0000 UTC m=+0.022745928 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:28:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:28:58Z|01051|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  7 10:28:58 np0005473739 podman[366057]: 2025-10-07 14:28:58.886887671 +0000 UTC m=+0.124139038 container init f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:28:58 np0005473739 podman[366057]: 2025-10-07 14:28:58.893412415 +0000 UTC m=+0.130663772 container start f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:28:58 np0005473739 podman[366057]: 2025-10-07 14:28:58.896898259 +0000 UTC m=+0.134149626 container attach f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:28:58 np0005473739 fervent_spence[366073]: 167 167
Oct  7 10:28:58 np0005473739 systemd[1]: libpod-f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17.scope: Deactivated successfully.
Oct  7 10:28:58 np0005473739 podman[366057]: 2025-10-07 14:28:58.89916703 +0000 UTC m=+0.136418387 container died f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:28:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b2fb22d0dfc885b32ae040942d1c1e1f40c0f3f92a0df0adb1b7603f7ec82ec1-merged.mount: Deactivated successfully.
Oct  7 10:28:58 np0005473739 podman[366057]: 2025-10-07 14:28:58.943343349 +0000 UTC m=+0.180594676 container remove f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_spence, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 10:28:58 np0005473739 systemd[1]: libpod-conmon-f76f81eb514ff027784c223d42f18cd40da86a722a7feacf9b1da1ae507e0b17.scope: Deactivated successfully.
Oct  7 10:28:58 np0005473739 nova_compute[259550]: 2025-10-07 14:28:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:28:59 np0005473739 podman[366097]: 2025-10-07 14:28:59.112961522 +0000 UTC m=+0.046481123 container create 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 10:28:59 np0005473739 systemd[1]: Started libpod-conmon-2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d.scope.
Oct  7 10:28:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:28:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd48edd1fe3c051c07ae4f353cbdcbcf3664792bcffe473ef003bd7ecd90f256/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:28:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd48edd1fe3c051c07ae4f353cbdcbcf3664792bcffe473ef003bd7ecd90f256/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:28:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd48edd1fe3c051c07ae4f353cbdcbcf3664792bcffe473ef003bd7ecd90f256/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:28:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd48edd1fe3c051c07ae4f353cbdcbcf3664792bcffe473ef003bd7ecd90f256/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:28:59 np0005473739 podman[366097]: 2025-10-07 14:28:59.095759582 +0000 UTC m=+0.029279193 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:28:59 np0005473739 podman[366097]: 2025-10-07 14:28:59.195367644 +0000 UTC m=+0.128887285 container init 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:28:59 np0005473739 podman[366097]: 2025-10-07 14:28:59.202555836 +0000 UTC m=+0.136075437 container start 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:28:59 np0005473739 podman[366097]: 2025-10-07 14:28:59.206276955 +0000 UTC m=+0.139796616 container attach 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 10:28:59 np0005473739 competent_bohr[366113]: {
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:    "0": [
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:        {
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "devices": [
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "/dev/loop3"
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            ],
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_name": "ceph_lv0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_size": "21470642176",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "name": "ceph_lv0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "tags": {
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.cluster_name": "ceph",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.crush_device_class": "",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.encrypted": "0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.osd_id": "0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.type": "block",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.vdo": "0"
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            },
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "type": "block",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "vg_name": "ceph_vg0"
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:        }
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:    ],
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:    "1": [
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:        {
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "devices": [
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "/dev/loop4"
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            ],
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_name": "ceph_lv1",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_size": "21470642176",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "name": "ceph_lv1",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "tags": {
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.cluster_name": "ceph",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.crush_device_class": "",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.encrypted": "0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.osd_id": "1",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.type": "block",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.vdo": "0"
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            },
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "type": "block",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "vg_name": "ceph_vg1"
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:        }
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:    ],
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:    "2": [
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:        {
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "devices": [
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "/dev/loop5"
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            ],
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_name": "ceph_lv2",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_size": "21470642176",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "name": "ceph_lv2",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "tags": {
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.cluster_name": "ceph",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.crush_device_class": "",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.encrypted": "0",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.osd_id": "2",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.type": "block",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:                "ceph.vdo": "0"
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            },
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "type": "block",
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:            "vg_name": "ceph_vg2"
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:        }
Oct  7 10:28:59 np0005473739 competent_bohr[366113]:    ]
Oct  7 10:28:59 np0005473739 competent_bohr[366113]: }
Oct  7 10:28:59 np0005473739 systemd[1]: libpod-2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d.scope: Deactivated successfully.
Oct  7 10:29:00 np0005473739 podman[366097]: 2025-10-07 14:28:59.999958133 +0000 UTC m=+0.933477764 container died 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:29:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:00.061 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:29:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:00.062 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:29:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:00.062 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:29:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cd48edd1fe3c051c07ae4f353cbdcbcf3664792bcffe473ef003bd7ecd90f256-merged.mount: Deactivated successfully.
Oct  7 10:29:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1989: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:00 np0005473739 podman[366097]: 2025-10-07 14:29:00.170083258 +0000 UTC m=+1.103602869 container remove 2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 10:29:00 np0005473739 systemd[1]: libpod-conmon-2d52b482c5f535c64ea48aafe6cfb16e55dc0dbaa39d060d98558e05a6e02d7d.scope: Deactivated successfully.
Oct  7 10:29:00 np0005473739 podman[366133]: 2025-10-07 14:29:00.221442391 +0000 UTC m=+0.102344626 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  7 10:29:00 np0005473739 podman[366146]: 2025-10-07 14:29:00.301155251 +0000 UTC m=+0.089656987 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:29:00 np0005473739 podman[366317]: 2025-10-07 14:29:00.765012955 +0000 UTC m=+0.048950969 container create 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:29:00 np0005473739 systemd[1]: Started libpod-conmon-3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7.scope.
Oct  7 10:29:00 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:29:00 np0005473739 podman[366317]: 2025-10-07 14:29:00.737362956 +0000 UTC m=+0.021300990 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:29:00 np0005473739 podman[366317]: 2025-10-07 14:29:00.838680264 +0000 UTC m=+0.122618298 container init 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 10:29:00 np0005473739 podman[366317]: 2025-10-07 14:29:00.847879869 +0000 UTC m=+0.131817883 container start 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 10:29:00 np0005473739 affectionate_antonelli[366334]: 167 167
Oct  7 10:29:00 np0005473739 systemd[1]: libpod-3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7.scope: Deactivated successfully.
Oct  7 10:29:00 np0005473739 podman[366317]: 2025-10-07 14:29:00.861463602 +0000 UTC m=+0.145401716 container attach 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 10:29:00 np0005473739 podman[366317]: 2025-10-07 14:29:00.862550702 +0000 UTC m=+0.146488776 container died 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 10:29:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ff64fd6b0945032ac3b090a9ac7bcda24f26720523e14203dc95b02606f903e0-merged.mount: Deactivated successfully.
Oct  7 10:29:00 np0005473739 podman[366317]: 2025-10-07 14:29:00.963736195 +0000 UTC m=+0.247674219 container remove 3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_antonelli, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:29:00 np0005473739 systemd[1]: libpod-conmon-3b0298be7782e921f9a360f57c8379257cad9548d516644f0de1999e05a902c7.scope: Deactivated successfully.
Oct  7 10:29:01 np0005473739 podman[366360]: 2025-10-07 14:29:01.16486534 +0000 UTC m=+0.057578070 container create b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:29:01 np0005473739 systemd[1]: Started libpod-conmon-b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca.scope.
Oct  7 10:29:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:29:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4347c68d0cf43913e2f0acc627fce5a84b05e815ade3d69cc903ebdfb0e91f6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:29:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4347c68d0cf43913e2f0acc627fce5a84b05e815ade3d69cc903ebdfb0e91f6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:29:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4347c68d0cf43913e2f0acc627fce5a84b05e815ade3d69cc903ebdfb0e91f6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:29:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a4347c68d0cf43913e2f0acc627fce5a84b05e815ade3d69cc903ebdfb0e91f6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:29:01 np0005473739 podman[366360]: 2025-10-07 14:29:01.131599241 +0000 UTC m=+0.024311991 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:29:01 np0005473739 podman[366360]: 2025-10-07 14:29:01.233426862 +0000 UTC m=+0.126139602 container init b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:29:01 np0005473739 podman[366360]: 2025-10-07 14:29:01.239305388 +0000 UTC m=+0.132018128 container start b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 10:29:01 np0005473739 podman[366360]: 2025-10-07 14:29:01.241841796 +0000 UTC m=+0.134554536 container attach b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 10:29:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:01 np0005473739 nova_compute[259550]: 2025-10-07 14:29:01.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:02 np0005473739 nova_compute[259550]: 2025-10-07 14:29:02.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1990: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]: {
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "osd_id": 2,
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "type": "bluestore"
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:    },
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "osd_id": 1,
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "type": "bluestore"
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:    },
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "osd_id": 0,
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:        "type": "bluestore"
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]:    }
Oct  7 10:29:02 np0005473739 competent_khayyam[366377]: }
Oct  7 10:29:02 np0005473739 systemd[1]: libpod-b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca.scope: Deactivated successfully.
Oct  7 10:29:02 np0005473739 podman[366360]: 2025-10-07 14:29:02.177338353 +0000 UTC m=+1.070051093 container died b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 10:29:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a4347c68d0cf43913e2f0acc627fce5a84b05e815ade3d69cc903ebdfb0e91f6-merged.mount: Deactivated successfully.
Oct  7 10:29:02 np0005473739 podman[366360]: 2025-10-07 14:29:02.23448832 +0000 UTC m=+1.127201060 container remove b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_khayyam, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:29:02 np0005473739 systemd[1]: libpod-conmon-b5fb244dd063a46d3287f97fefcc9fd3056189b0559e8b6acf2d59f5490bc4ca.scope: Deactivated successfully.
Oct  7 10:29:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:29:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:29:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:29:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:29:02 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e314e5d-b26e-44fa-bd44-cf99d46d1772 does not exist
Oct  7 10:29:02 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e0c58db4-949e-4e33-944f-12a15bf3178d does not exist
Oct  7 10:29:03 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:29:03 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:29:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1991: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1992: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:06 np0005473739 nova_compute[259550]: 2025-10-07 14:29:06.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:07 np0005473739 nova_compute[259550]: 2025-10-07 14:29:07.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1993: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1994: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:11 np0005473739 nova_compute[259550]: 2025-10-07 14:29:11.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:12 np0005473739 nova_compute[259550]: 2025-10-07 14:29:12.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1995: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1996: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1997: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:16 np0005473739 ceph-mds[100686]: mds.beacon.cephfs.compute-0.xpofvx missed beacon ack from the monitors
Oct  7 10:29:16 np0005473739 nova_compute[259550]: 2025-10-07 14:29:16.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:17 np0005473739 nova_compute[259550]: 2025-10-07 14:29:17.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:18 np0005473739 podman[366475]: 2025-10-07 14:29:18.108538452 +0000 UTC m=+0.086134802 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, tcib_managed=true)
Oct  7 10:29:18 np0005473739 podman[366474]: 2025-10-07 14:29:18.112849037 +0000 UTC m=+0.087592031 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:29:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1998: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v1999: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:21 np0005473739 nova_compute[259550]: 2025-10-07 14:29:21.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:22 np0005473739 nova_compute[259550]: 2025-10-07 14:29:22.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2000: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:22.201 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=29, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=28) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:29:22 np0005473739 nova_compute[259550]: 2025-10-07 14:29:22.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:22.201 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:29:22
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', '.mgr', '.rgw.root', 'cephfs.cephfs.data', 'volumes', 'default.rgw.control', 'vms', 'default.rgw.log', 'default.rgw.meta', 'backups']
Oct  7 10:29:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:29:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:29:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:29:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:29:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:29:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:29:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:29:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:29:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:29:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:29:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:29:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:23.962 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:76:cd 10.100.0.18 10.100.0.2'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcaac634-4eae-48ce-9495-48bae919ff70, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0cb29088-c0bc-48df-80b9-858314de808b) old=Port_Binding(mac=['fa:16:3e:48:76:cd 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:29:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:23.963 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0cb29088-c0bc-48df-80b9-858314de808b in datapath e73bddf2-052a-4493-8a38-fe42db12e853 updated#033[00m
Oct  7 10:29:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:23.964 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e73bddf2-052a-4493-8a38-fe42db12e853, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:29:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:23.965 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b2fdac8-01b3-4ff0-988e-abe862240a75]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:29:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2001: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:25.203 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '29'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:29:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2002: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:26 np0005473739 nova_compute[259550]: 2025-10-07 14:29:26.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:27 np0005473739 nova_compute[259550]: 2025-10-07 14:29:27.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2003: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2004: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:31 np0005473739 podman[366515]: 2025-10-07 14:29:31.054191357 +0000 UTC m=+0.047149401 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 10:29:31 np0005473739 podman[366516]: 2025-10-07 14:29:31.083042198 +0000 UTC m=+0.074589754 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:29:31 np0005473739 nova_compute[259550]: 2025-10-07 14:29:31.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:32 np0005473739 nova_compute[259550]: 2025-10-07 14:29:32.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2005: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006661126644201341 of space, bias 1.0, pg target 0.19983379932604023 quantized to 32 (current 32)
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:29:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:29:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:29:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284740012' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:29:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:29:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2284740012' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:29:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2006: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2007: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:36 np0005473739 nova_compute[259550]: 2025-10-07 14:29:36.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:37 np0005473739 nova_compute[259550]: 2025-10-07 14:29:37.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2008: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2009: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:40 np0005473739 nova_compute[259550]: 2025-10-07 14:29:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:29:41 np0005473739 nova_compute[259550]: 2025-10-07 14:29:41.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:41 np0005473739 nova_compute[259550]: 2025-10-07 14:29:41.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:29:41 np0005473739 nova_compute[259550]: 2025-10-07 14:29:41.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:29:42 np0005473739 nova_compute[259550]: 2025-10-07 14:29:42.149 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2010: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:42 np0005473739 nova_compute[259550]: 2025-10-07 14:29:42.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:29:42 np0005473739 nova_compute[259550]: 2025-10-07 14:29:42.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:29:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2011: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:44.467 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:76:cd 10.100.0.18 10.100.0.2 10.100.0.34'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28 10.100.0.34/28', 'neutron:device_id': 'ovnmeta-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=fcaac634-4eae-48ce-9495-48bae919ff70, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=0cb29088-c0bc-48df-80b9-858314de808b) old=Port_Binding(mac=['fa:16:3e:48:76:cd 10.100.0.18 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.18/28 10.100.0.2/28', 'neutron:device_id': 'ovnmeta-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e73bddf2-052a-4493-8a38-fe42db12e853', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '9053ea8abeec4d278cfe9e76fe5aa4bc', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:29:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:44.469 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 0cb29088-c0bc-48df-80b9-858314de808b in datapath e73bddf2-052a-4493-8a38-fe42db12e853 updated#033[00m
Oct  7 10:29:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:44.471 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e73bddf2-052a-4493-8a38-fe42db12e853, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:29:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:29:44.473 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43ac70d2-b1d3-499f-b3ad-bc29fa5f361b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:29:45 np0005473739 nova_compute[259550]: 2025-10-07 14:29:45.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:29:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2012: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:46 np0005473739 nova_compute[259550]: 2025-10-07 14:29:46.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:47 np0005473739 nova_compute[259550]: 2025-10-07 14:29:47.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2013: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:48 np0005473739 nova_compute[259550]: 2025-10-07 14:29:48.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:29:49 np0005473739 podman[366556]: 2025-10-07 14:29:49.058081006 +0000 UTC m=+0.047800368 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001)
Oct  7 10:29:49 np0005473739 podman[366555]: 2025-10-07 14:29:49.06421423 +0000 UTC m=+0.056725236 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 10:29:49 np0005473739 nova_compute[259550]: 2025-10-07 14:29:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.027 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.028 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.028 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.028 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.029 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:29:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2014: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:29:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/214961567' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.479 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.633 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.634 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3891MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.634 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.634 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.910 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.911 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:29:50 np0005473739 nova_compute[259550]: 2025-10-07 14:29:50.951 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:29:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:29:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1241488099' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:29:51 np0005473739 nova_compute[259550]: 2025-10-07 14:29:51.389 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:29:51 np0005473739 nova_compute[259550]: 2025-10-07 14:29:51.395 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:29:51 np0005473739 nova_compute[259550]: 2025-10-07 14:29:51.482 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:29:51 np0005473739 nova_compute[259550]: 2025-10-07 14:29:51.484 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:29:51 np0005473739 nova_compute[259550]: 2025-10-07 14:29:51.484 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.850s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:29:51 np0005473739 nova_compute[259550]: 2025-10-07 14:29:51.931 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:52 np0005473739 nova_compute[259550]: 2025-10-07 14:29:52.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2015: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:29:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:29:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:29:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:29:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:29:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:29:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2016: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:54 np0005473739 nova_compute[259550]: 2025-10-07 14:29:54.480 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:29:54 np0005473739 nova_compute[259550]: 2025-10-07 14:29:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:29:54 np0005473739 nova_compute[259550]: 2025-10-07 14:29:54.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:29:54 np0005473739 nova_compute[259550]: 2025-10-07 14:29:54.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:29:55 np0005473739 nova_compute[259550]: 2025-10-07 14:29:55.022 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:29:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2017: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:56 np0005473739 nova_compute[259550]: 2025-10-07 14:29:56.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:57 np0005473739 nova_compute[259550]: 2025-10-07 14:29:57.155 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:29:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:29:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2018: 305 pgs: 305 active+clean; 41 MiB data, 726 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:29:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e270 do_prune osdmap full prune enabled
Oct  7 10:29:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e271 e271: 3 total, 3 up, 3 in
Oct  7 10:29:59 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e271: 3 total, 3 up, 3 in
Oct  7 10:30:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:30:00.062 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:30:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:30:00.062 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:30:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:30:00.063 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:30:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2020: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 818 B/s rd, 307 B/s wr, 1 op/s
Oct  7 10:30:00 np0005473739 nova_compute[259550]: 2025-10-07 14:30:00.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:30:01 np0005473739 nova_compute[259550]: 2025-10-07 14:30:01.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:02 np0005473739 podman[366637]: 2025-10-07 14:30:02.085062285 +0000 UTC m=+0.073505386 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 10:30:02 np0005473739 podman[366638]: 2025-10-07 14:30:02.114042208 +0000 UTC m=+0.101843341 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:30:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2021: 305 pgs: 305 active+clean; 41 MiB data, 724 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.5 KiB/s wr, 14 op/s
Oct  7 10:30:02 np0005473739 nova_compute[259550]: 2025-10-07 14:30:02.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:02 np0005473739 nova_compute[259550]: 2025-10-07 14:30:02.782 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:30:02 np0005473739 nova_compute[259550]: 2025-10-07 14:30:02.783 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:30:02 np0005473739 nova_compute[259550]: 2025-10-07 14:30:02.804 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:30:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:02 np0005473739 nova_compute[259550]: 2025-10-07 14:30:02.905 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:30:02 np0005473739 nova_compute[259550]: 2025-10-07 14:30:02.907 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:30:02 np0005473739 nova_compute[259550]: 2025-10-07 14:30:02.914 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:30:02 np0005473739 nova_compute[259550]: 2025-10-07 14:30:02.915 2 INFO nova.compute.claims [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.020 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:30:03 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 03909f70-cb23-4d6c-a087-d46c22d41d37 does not exist
Oct  7 10:30:03 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ef0c811d-32fa-4e7a-9758-241e22c356ef does not exist
Oct  7 10:30:03 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 28e8a9aa-ce97-4f14-bae2-63ede14be609 does not exist
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:30:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1981700070' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.508 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.515 2 DEBUG nova.compute.provider_tree [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.531 2 DEBUG nova.scheduler.client.report [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.551 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.644s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.552 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.592 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.593 2 DEBUG nova.network.neutron [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.615 2 INFO nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.636 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.729 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.731 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.731 2 INFO nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Creating image(s)#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.753 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.782 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.808 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.813 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "ab71a249494f042ede61a514c9ad0d1e110b7453" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.814 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "ab71a249494f042ede61a514c9ad0d1e110b7453" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:30:03 np0005473739 podman[367018]: 2025-10-07 14:30:03.882828972 +0000 UTC m=+0.043451882 container create cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:30:03 np0005473739 systemd[1]: Started libpod-conmon-cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5.scope.
Oct  7 10:30:03 np0005473739 podman[367018]: 2025-10-07 14:30:03.862621981 +0000 UTC m=+0.023244911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:30:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.972 2 DEBUG nova.network.neutron [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  7 10:30:03 np0005473739 nova_compute[259550]: 2025-10-07 14:30:03.974 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:30:03 np0005473739 podman[367018]: 2025-10-07 14:30:03.983768359 +0000 UTC m=+0.144391349 container init cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:30:03 np0005473739 podman[367018]: 2025-10-07 14:30:03.994541516 +0000 UTC m=+0.155164426 container start cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:30:03 np0005473739 podman[367018]: 2025-10-07 14:30:03.999341555 +0000 UTC m=+0.159964465 container attach cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 10:30:04 np0005473739 serene_torvalds[367035]: 167 167
Oct  7 10:30:04 np0005473739 systemd[1]: libpod-cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5.scope: Deactivated successfully.
Oct  7 10:30:04 np0005473739 podman[367018]: 2025-10-07 14:30:04.001057471 +0000 UTC m=+0.161680401 container died cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:30:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-eb10c6f878c74439461db26139e427670b3bac64d1055652257173cd3c75a22c-merged.mount: Deactivated successfully.
Oct  7 10:30:04 np0005473739 podman[367018]: 2025-10-07 14:30:04.041951363 +0000 UTC m=+0.202574273 container remove cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:30:04 np0005473739 systemd[1]: libpod-conmon-cf718da5fa82a49289ec71dfcd5a8e5a4d91a1d6a2ace9b63447a41358a19cb5.scope: Deactivated successfully.
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.052 2 DEBUG nova.virt.libvirt.imagebackend [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/3b9fdcc4-750e-4e5b-a404-723fbbf47b21/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/3b9fdcc4-750e-4e5b-a404-723fbbf47b21/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.094 2 DEBUG nova.virt.libvirt.imagebackend [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/3b9fdcc4-750e-4e5b-a404-723fbbf47b21/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.095 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] cloning images/3b9fdcc4-750e-4e5b-a404-723fbbf47b21@snap to None/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:30:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2022: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.193 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "ab71a249494f042ede61a514c9ad0d1e110b7453" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.379s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:30:04 np0005473739 podman[367125]: 2025-10-07 14:30:04.220587437 +0000 UTC m=+0.049750741 container create 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 10:30:04 np0005473739 systemd[1]: Started libpod-conmon-9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535.scope.
Oct  7 10:30:04 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:30:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:04 np0005473739 podman[367125]: 2025-10-07 14:30:04.201441305 +0000 UTC m=+0.030604669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:30:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:04 np0005473739 podman[367125]: 2025-10-07 14:30:04.309712918 +0000 UTC m=+0.138876222 container init 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:30:04 np0005473739 podman[367125]: 2025-10-07 14:30:04.318449352 +0000 UTC m=+0.147612656 container start 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.320 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] resizing rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:30:04 np0005473739 podman[367125]: 2025-10-07 14:30:04.323516586 +0000 UTC m=+0.152679900 container attach 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:30:04 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.420 2 DEBUG nova.objects.instance [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lazy-loading 'migration_context' on Instance uuid 5dc59e63-b96a-4de8-a978-dc88a8d1dadd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.436 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.437 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Ensure instance console log exists: /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.438 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.438 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.438 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.440 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c1aca32feac3ec36f1e6a9b17261b81f',container_format='bare',created_at=2025-10-07T14:29:58Z,direct_url=<?>,disk_format='raw',id=3b9fdcc4-750e-4e5b-a404-723fbbf47b21,min_disk=0,min_ram=0,name='tempest-image-dependency-test-2093359418',owner='99916734e57d4c4a8076c48736dca89b',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-07T14:29:59Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '3b9fdcc4-750e-4e5b-a404-723fbbf47b21'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.446 2 WARNING nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.455 2 DEBUG nova.virt.libvirt.host [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.457 2 DEBUG nova.virt.libvirt.host [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.462 2 DEBUG nova.virt.libvirt.host [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.463 2 DEBUG nova.virt.libvirt.host [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.464 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.464 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c1aca32feac3ec36f1e6a9b17261b81f',container_format='bare',created_at=2025-10-07T14:29:58Z,direct_url=<?>,disk_format='raw',id=3b9fdcc4-750e-4e5b-a404-723fbbf47b21,min_disk=0,min_ram=0,name='tempest-image-dependency-test-2093359418',owner='99916734e57d4c4a8076c48736dca89b',properties=ImageMetaProps,protected=<?>,size=1024,status='active',tags=<?>,updated_at=2025-10-07T14:29:59Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.465 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.465 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.466 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.466 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.466 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.466 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.467 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.467 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.467 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.468 2 DEBUG nova.virt.hardware [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.470 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:30:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:30:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/71379425' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:30:04 np0005473739 nova_compute[259550]: 2025-10-07 14:30:04.983 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:30:05 np0005473739 nova_compute[259550]: 2025-10-07 14:30:05.004 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:30:05 np0005473739 nova_compute[259550]: 2025-10-07 14:30:05.008 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:30:05 np0005473739 great_hawking[367180]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:30:05 np0005473739 great_hawking[367180]: --> relative data size: 1.0
Oct  7 10:30:05 np0005473739 great_hawking[367180]: --> All data devices are unavailable
Oct  7 10:30:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:30:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2079564725' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:30:05 np0005473739 systemd[1]: libpod-9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535.scope: Deactivated successfully.
Oct  7 10:30:05 np0005473739 systemd[1]: libpod-9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535.scope: Consumed 1.058s CPU time.
Oct  7 10:30:05 np0005473739 podman[367125]: 2025-10-07 14:30:05.451568699 +0000 UTC m=+1.280732013 container died 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 10:30:05 np0005473739 nova_compute[259550]: 2025-10-07 14:30:05.455 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:30:05 np0005473739 nova_compute[259550]: 2025-10-07 14:30:05.458 2 DEBUG nova.objects.instance [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lazy-loading 'pci_devices' on Instance uuid 5dc59e63-b96a-4de8-a978-dc88a8d1dadd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:30:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-670e2335457cba625180717dcf3e3a77a755a716b64cec9eaf47810ff1973f97-merged.mount: Deactivated successfully.
Oct  7 10:30:05 np0005473739 nova_compute[259550]: 2025-10-07 14:30:05.489 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  <uuid>5dc59e63-b96a-4de8-a978-dc88a8d1dadd</uuid>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  <name>instance-00000068</name>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <nova:name>instance-depend-image</nova:name>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:30:04</nova:creationTime>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:        <nova:user uuid="abb7b1705a844f358f445b980c484003">tempest-ImageDependencyTests-283254304-project-member</nova:user>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:        <nova:project uuid="99916734e57d4c4a8076c48736dca89b">tempest-ImageDependencyTests-283254304</nova:project>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="3b9fdcc4-750e-4e5b-a404-723fbbf47b21"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <entry name="serial">5dc59e63-b96a-4de8-a978-dc88a8d1dadd</entry>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <entry name="uuid">5dc59e63-b96a-4de8-a978-dc88a8d1dadd</entry>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/console.log" append="off"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:30:05 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:30:05 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:30:05 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:30:05 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:30:05 np0005473739 podman[367125]: 2025-10-07 14:30:05.511417708 +0000 UTC m=+1.340581012 container remove 9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_hawking, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 10:30:05 np0005473739 systemd[1]: libpod-conmon-9c7b3d07ebc25573a7dee79e0317ee150c2e39513cb710b860ea265a3485f535.scope: Deactivated successfully.
Oct  7 10:30:05 np0005473739 nova_compute[259550]: 2025-10-07 14:30:05.556 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:30:05 np0005473739 nova_compute[259550]: 2025-10-07 14:30:05.556 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:30:05 np0005473739 nova_compute[259550]: 2025-10-07 14:30:05.557 2 INFO nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Using config drive#033[00m
Oct  7 10:30:05 np0005473739 nova_compute[259550]: 2025-10-07 14:30:05.584 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:30:06 np0005473739 podman[367482]: 2025-10-07 14:30:06.099010739 +0000 UTC m=+0.050970043 container create 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:30:06 np0005473739 nova_compute[259550]: 2025-10-07 14:30:06.132 2 INFO nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Creating config drive at /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config#033[00m
Oct  7 10:30:06 np0005473739 nova_compute[259550]: 2025-10-07 14:30:06.138 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53u0ocmz execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:30:06 np0005473739 systemd[1]: Started libpod-conmon-74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d.scope.
Oct  7 10:30:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2023: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 35 op/s
Oct  7 10:30:06 np0005473739 podman[367482]: 2025-10-07 14:30:06.07022726 +0000 UTC m=+0.022186574 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:30:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:30:06 np0005473739 podman[367482]: 2025-10-07 14:30:06.188348406 +0000 UTC m=+0.140307700 container init 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 10:30:06 np0005473739 podman[367482]: 2025-10-07 14:30:06.19597057 +0000 UTC m=+0.147929864 container start 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 10:30:06 np0005473739 podman[367482]: 2025-10-07 14:30:06.19896308 +0000 UTC m=+0.150922394 container attach 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:30:06 np0005473739 optimistic_perlman[367498]: 167 167
Oct  7 10:30:06 np0005473739 systemd[1]: libpod-74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d.scope: Deactivated successfully.
Oct  7 10:30:06 np0005473739 podman[367482]: 2025-10-07 14:30:06.201694063 +0000 UTC m=+0.153653367 container died 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 10:30:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-07330ddf16c0bd219cdefec352b28c7c8260c9325c3048129294865aebf3b0c4-merged.mount: Deactivated successfully.
Oct  7 10:30:06 np0005473739 podman[367482]: 2025-10-07 14:30:06.240582392 +0000 UTC m=+0.192541686 container remove 74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_perlman, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 10:30:06 np0005473739 systemd[1]: libpod-conmon-74c2e33c450e9598ebc67b4b82af5687fb2b6a7e0e631eeca0409cad2967b49d.scope: Deactivated successfully.
Oct  7 10:30:06 np0005473739 nova_compute[259550]: 2025-10-07 14:30:06.282 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp53u0ocmz" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:30:06 np0005473739 nova_compute[259550]: 2025-10-07 14:30:06.306 2 DEBUG nova.storage.rbd_utils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] rbd image 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:30:06 np0005473739 nova_compute[259550]: 2025-10-07 14:30:06.310 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:30:06 np0005473739 podman[367544]: 2025-10-07 14:30:06.394665159 +0000 UTC m=+0.039874397 container create cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:30:06 np0005473739 systemd[1]: Started libpod-conmon-cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a.scope.
Oct  7 10:30:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:30:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6424d1c6131feefe2a27912b2c39a2be3e9e5e43df80144dd48495ae136c0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6424d1c6131feefe2a27912b2c39a2be3e9e5e43df80144dd48495ae136c0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6424d1c6131feefe2a27912b2c39a2be3e9e5e43df80144dd48495ae136c0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7a6424d1c6131feefe2a27912b2c39a2be3e9e5e43df80144dd48495ae136c0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:06 np0005473739 podman[367544]: 2025-10-07 14:30:06.377209542 +0000 UTC m=+0.022418800 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:30:06 np0005473739 podman[367544]: 2025-10-07 14:30:06.480312927 +0000 UTC m=+0.125522185 container init cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:30:06 np0005473739 podman[367544]: 2025-10-07 14:30:06.48715707 +0000 UTC m=+0.132366308 container start cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 10:30:06 np0005473739 podman[367544]: 2025-10-07 14:30:06.490393657 +0000 UTC m=+0.135602915 container attach cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 10:30:06 np0005473739 nova_compute[259550]: 2025-10-07 14:30:06.498 2 DEBUG oslo_concurrency.processutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config 5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:30:06 np0005473739 nova_compute[259550]: 2025-10-07 14:30:06.501 2 INFO nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Deleting local config drive /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd/disk.config because it was imported into RBD.
Oct  7 10:30:06 np0005473739 systemd-machined[214580]: New machine qemu-131-instance-00000068.
Oct  7 10:30:06 np0005473739 systemd[1]: Started Virtual Machine qemu-131-instance-00000068.
Oct  7 10:30:06 np0005473739 nova_compute[259550]: 2025-10-07 14:30:06.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]: {
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:    "0": [
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:        {
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "devices": [
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "/dev/loop3"
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            ],
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_name": "ceph_lv0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_size": "21470642176",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "name": "ceph_lv0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "tags": {
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.cluster_name": "ceph",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.crush_device_class": "",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.encrypted": "0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.osd_id": "0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.type": "block",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.vdo": "0"
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            },
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "type": "block",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "vg_name": "ceph_vg0"
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:        }
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:    ],
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:    "1": [
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:        {
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "devices": [
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "/dev/loop4"
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            ],
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_name": "ceph_lv1",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_size": "21470642176",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "name": "ceph_lv1",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "tags": {
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.cluster_name": "ceph",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.crush_device_class": "",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.encrypted": "0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.osd_id": "1",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.type": "block",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.vdo": "0"
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            },
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "type": "block",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "vg_name": "ceph_vg1"
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:        }
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:    ],
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:    "2": [
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:        {
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "devices": [
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "/dev/loop5"
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            ],
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_name": "ceph_lv2",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_size": "21470642176",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "name": "ceph_lv2",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "tags": {
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.cluster_name": "ceph",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.crush_device_class": "",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.encrypted": "0",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.osd_id": "2",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.type": "block",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:                "ceph.vdo": "0"
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            },
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "type": "block",
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:            "vg_name": "ceph_vg2"
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:        }
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]:    ]
Oct  7 10:30:07 np0005473739 affectionate_burnell[367580]: }
Oct  7 10:30:07 np0005473739 nova_compute[259550]: 2025-10-07 14:30:07.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:30:07 np0005473739 systemd[1]: libpod-cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a.scope: Deactivated successfully.
Oct  7 10:30:07 np0005473739 podman[367544]: 2025-10-07 14:30:07.217917097 +0000 UTC m=+0.863126335 container died cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 10:30:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c7a6424d1c6131feefe2a27912b2c39a2be3e9e5e43df80144dd48495ae136c0-merged.mount: Deactivated successfully.
Oct  7 10:30:07 np0005473739 podman[367544]: 2025-10-07 14:30:07.279360559 +0000 UTC m=+0.924569797 container remove cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_burnell, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 10:30:07 np0005473739 systemd[1]: libpod-conmon-cc5a516c7eac6723e5065fd887326d1678d40793b1a5fc95cf2f36c55d1acd6a.scope: Deactivated successfully.
Oct  7 10:30:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e271 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:07 np0005473739 podman[367801]: 2025-10-07 14:30:07.92463121 +0000 UTC m=+0.039010693 container create e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:30:07 np0005473739 systemd[1]: Started libpod-conmon-e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160.scope.
Oct  7 10:30:07 np0005473739 nova_compute[259550]: 2025-10-07 14:30:07.972 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847407.9720669, 5dc59e63-b96a-4de8-a978-dc88a8d1dadd => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:30:07 np0005473739 nova_compute[259550]: 2025-10-07 14:30:07.974 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] VM Resumed (Lifecycle Event)
Oct  7 10:30:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:30:07 np0005473739 nova_compute[259550]: 2025-10-07 14:30:07.978 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:30:07 np0005473739 nova_compute[259550]: 2025-10-07 14:30:07.979 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:30:07 np0005473739 nova_compute[259550]: 2025-10-07 14:30:07.988 2 INFO nova.virt.libvirt.driver [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance spawned successfully.
Oct  7 10:30:07 np0005473739 nova_compute[259550]: 2025-10-07 14:30:07.989 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:30:07 np0005473739 nova_compute[259550]: 2025-10-07 14:30:07.997 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.001 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:30:08 np0005473739 podman[367801]: 2025-10-07 14:30:07.908453268 +0000 UTC m=+0.022832771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:30:08 np0005473739 podman[367801]: 2025-10-07 14:30:08.00919145 +0000 UTC m=+0.123570953 container init e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:30:08 np0005473739 podman[367801]: 2025-10-07 14:30:08.016542826 +0000 UTC m=+0.130922309 container start e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 10:30:08 np0005473739 podman[367801]: 2025-10-07 14:30:08.019665869 +0000 UTC m=+0.134045362 container attach e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:30:08 np0005473739 determined_mclaren[367817]: 167 167
Oct  7 10:30:08 np0005473739 systemd[1]: libpod-e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160.scope: Deactivated successfully.
Oct  7 10:30:08 np0005473739 conmon[367817]: conmon e4c49e1a650564bec231 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160.scope/container/memory.events
Oct  7 10:30:08 np0005473739 podman[367822]: 2025-10-07 14:30:08.063401409 +0000 UTC m=+0.026548501 container died e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:30:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-21f3b0b4e813a3456f2e3105fa696680e5fc1238a055594ab37b49c227a4cee3-merged.mount: Deactivated successfully.
Oct  7 10:30:08 np0005473739 podman[367822]: 2025-10-07 14:30:08.102265206 +0000 UTC m=+0.065412288 container remove e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_mclaren, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:30:08 np0005473739 systemd[1]: libpod-conmon-e4c49e1a650564bec231477b1d7cc32dcfae0bb08c93f153e13cc0b9861d4160.scope: Deactivated successfully.
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.149 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.150 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.150 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.151 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.152 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.153 2 DEBUG nova.virt.libvirt.driver [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.156 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.156 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847407.9784286, 5dc59e63-b96a-4de8-a978-dc88a8d1dadd => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.157 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] VM Started (Lifecycle Event)
Oct  7 10:30:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2024: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.1 KiB/s wr, 35 op/s
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.196 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.201 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.229 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.246 2 INFO nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Took 4.52 seconds to spawn the instance on the hypervisor.
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.246 2 DEBUG nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:30:08 np0005473739 podman[367843]: 2025-10-07 14:30:08.28317721 +0000 UTC m=+0.058448782 container create bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.323 2 INFO nova.compute.manager [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Took 5.45 seconds to build instance.
Oct  7 10:30:08 np0005473739 systemd[1]: Started libpod-conmon-bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0.scope.
Oct  7 10:30:08 np0005473739 podman[367843]: 2025-10-07 14:30:08.267335867 +0000 UTC m=+0.042607449 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:30:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:30:08 np0005473739 nova_compute[259550]: 2025-10-07 14:30:08.360 2 DEBUG oslo_concurrency.lockutils [None req-d0ed3ceb-2f4e-4066-bf1b-7473d20fde54 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.577s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:30:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b70a443bde841cc3b3d8f294d8bb802d9bba13169906060b0db33c18d2b227/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b70a443bde841cc3b3d8f294d8bb802d9bba13169906060b0db33c18d2b227/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b70a443bde841cc3b3d8f294d8bb802d9bba13169906060b0db33c18d2b227/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47b70a443bde841cc3b3d8f294d8bb802d9bba13169906060b0db33c18d2b227/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:30:08 np0005473739 podman[367843]: 2025-10-07 14:30:08.381885948 +0000 UTC m=+0.157157540 container init bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default)
Oct  7 10:30:08 np0005473739 podman[367843]: 2025-10-07 14:30:08.38869838 +0000 UTC m=+0.163969972 container start bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:30:08 np0005473739 podman[367843]: 2025-10-07 14:30:08.396959001 +0000 UTC m=+0.172230593 container attach bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True)
Oct  7 10:30:09 np0005473739 tender_golick[367860]: {
Oct  7 10:30:09 np0005473739 tender_golick[367860]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "osd_id": 2,
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "type": "bluestore"
Oct  7 10:30:09 np0005473739 tender_golick[367860]:    },
Oct  7 10:30:09 np0005473739 tender_golick[367860]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "osd_id": 1,
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "type": "bluestore"
Oct  7 10:30:09 np0005473739 tender_golick[367860]:    },
Oct  7 10:30:09 np0005473739 tender_golick[367860]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "osd_id": 0,
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:30:09 np0005473739 tender_golick[367860]:        "type": "bluestore"
Oct  7 10:30:09 np0005473739 tender_golick[367860]:    }
Oct  7 10:30:09 np0005473739 tender_golick[367860]: }
Oct  7 10:30:09 np0005473739 systemd[1]: libpod-bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0.scope: Deactivated successfully.
Oct  7 10:30:09 np0005473739 systemd[1]: libpod-bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0.scope: Consumed 1.052s CPU time.
Oct  7 10:30:09 np0005473739 podman[367843]: 2025-10-07 14:30:09.43844568 +0000 UTC m=+1.213717252 container died bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct  7 10:30:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-47b70a443bde841cc3b3d8f294d8bb802d9bba13169906060b0db33c18d2b227-merged.mount: Deactivated successfully.
Oct  7 10:30:09 np0005473739 podman[367843]: 2025-10-07 14:30:09.510759372 +0000 UTC m=+1.286030944 container remove bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_golick, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 10:30:09 np0005473739 systemd[1]: libpod-conmon-bd5d56810c6817179961f8666092ef603f996c733ae7472908d8b5e7355213a0.scope: Deactivated successfully.
Oct  7 10:30:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:30:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:30:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:30:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:30:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 80341229-ec6e-42bc-b90a-b364c485259c does not exist
Oct  7 10:30:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7947588a-38af-4916-a88c-5fdd62c08e77 does not exist
Oct  7 10:30:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2025: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 16 KiB/s wr, 54 op/s
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #93. Immutable memtables: 0.
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.588114) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 53] Flushing memtable with next log file: 93
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410588180, "job": 53, "event": "flush_started", "num_memtables": 1, "num_entries": 1337, "num_deletes": 252, "total_data_size": 1967648, "memory_usage": 2003152, "flush_reason": "Manual Compaction"}
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 53] Level-0 flush table #94: started
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410618342, "cf_name": "default", "job": 53, "event": "table_file_creation", "file_number": 94, "file_size": 1936590, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 41190, "largest_seqno": 42526, "table_properties": {"data_size": 1930317, "index_size": 3476, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1733, "raw_key_size": 13465, "raw_average_key_size": 20, "raw_value_size": 1917570, "raw_average_value_size": 2853, "num_data_blocks": 156, "num_entries": 672, "num_filter_entries": 672, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847277, "oldest_key_time": 1759847277, "file_creation_time": 1759847410, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 94, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 53] Flush lasted 30250 microseconds, and 5670 cpu microseconds.
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.618373) [db/flush_job.cc:967] [default] [JOB 53] Level-0 flush table #94: 1936590 bytes OK
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.618390) [db/memtable_list.cc:519] [default] Level-0 commit table #94 started
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.625602) [db/memtable_list.cc:722] [default] Level-0 commit table #94: memtable #1 done
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.625620) EVENT_LOG_v1 {"time_micros": 1759847410625615, "job": 53, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.625639) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 53] Try to delete WAL files size 1961673, prev total WAL file size 1988161, number of live WAL files 2.
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000090.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.626640) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730033373635' seq:72057594037927935, type:22 .. '7061786F730034303137' seq:0, type:0; will stop at (end)
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 54] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 53 Base level 0, inputs: [94(1891KB)], [92(10046KB)]
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410626710, "job": 54, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [94], "files_L6": [92], "score": -1, "input_data_size": 12224361, "oldest_snapshot_seqno": -1}
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 54] Generated table #95: 6648 keys, 10593797 bytes, temperature: kUnknown
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410781499, "cf_name": "default", "job": 54, "event": "table_file_creation", "file_number": 95, "file_size": 10593797, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10546599, "index_size": 29486, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16645, "raw_key_size": 170681, "raw_average_key_size": 25, "raw_value_size": 10424761, "raw_average_value_size": 1568, "num_data_blocks": 1168, "num_entries": 6648, "num_filter_entries": 6648, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847410, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 95, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.781742) [db/compaction/compaction_job.cc:1663] [default] [JOB 54] Compacted 1@0 + 1@6 files to L6 => 10593797 bytes
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.788833) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.9 rd, 68.4 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 9.8 +0.0 blob) out(10.1 +0.0 blob), read-write-amplify(11.8) write-amplify(5.5) OK, records in: 7168, records dropped: 520 output_compression: NoCompression
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.788882) EVENT_LOG_v1 {"time_micros": 1759847410788863, "job": 54, "event": "compaction_finished", "compaction_time_micros": 154866, "compaction_time_cpu_micros": 26766, "output_level": 6, "num_output_files": 1, "total_output_size": 10593797, "num_input_records": 7168, "num_output_records": 6648, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000094.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410789497, "job": 54, "event": "table_file_deletion", "file_number": 94}
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000092.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847410791409, "job": 54, "event": "table_file_deletion", "file_number": 92}
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.626481) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.791560) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.791568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.791570) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.791571) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:10 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:10.791573) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:11 np0005473739 nova_compute[259550]: 2025-10-07 14:30:11.264 2 DEBUG nova.compute.manager [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:30:11 np0005473739 nova_compute[259550]: 2025-10-07 14:30:11.321 2 INFO nova.compute.manager [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] instance snapshotting
Oct  7 10:30:11 np0005473739 nova_compute[259550]: 2025-10-07 14:30:11.809 2 INFO nova.virt.libvirt.driver [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Beginning live snapshot process
Oct  7 10:30:11 np0005473739 nova_compute[259550]: 2025-10-07 14:30:11.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:30:12 np0005473739 nova_compute[259550]: 2025-10-07 14:30:12.026 2 DEBUG nova.storage.rbd_utils [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] creating snapshot(08102c1045c946bf9671c4953ab06e52) on rbd image(5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  7 10:30:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2026: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 45 KiB/s rd, 15 KiB/s wr, 58 op/s
Oct  7 10:30:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e271 do_prune osdmap full prune enabled
Oct  7 10:30:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e272 e272: 3 total, 3 up, 3 in
Oct  7 10:30:12 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e272: 3 total, 3 up, 3 in
Oct  7 10:30:12 np0005473739 nova_compute[259550]: 2025-10-07 14:30:12.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:30:12 np0005473739 nova_compute[259550]: 2025-10-07 14:30:12.228 2 DEBUG nova.storage.rbd_utils [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] cloning vms/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk@08102c1045c946bf9671c4953ab06e52 to images/ca202b7f-8033-4699-bf89-025968f471a0 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261
Oct  7 10:30:12 np0005473739 nova_compute[259550]: 2025-10-07 14:30:12.335 2 DEBUG nova.storage.rbd_utils [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] flattening images/ca202b7f-8033-4699-bf89-025968f471a0 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314
Oct  7 10:30:12 np0005473739 nova_compute[259550]: 2025-10-07 14:30:12.635 2 DEBUG nova.storage.rbd_utils [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] removing snapshot(08102c1045c946bf9671c4953ab06e52) on rbd image(5dc59e63-b96a-4de8-a978-dc88a8d1dadd_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489
Oct  7 10:30:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e272 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e272 do_prune osdmap full prune enabled
Oct  7 10:30:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e273 e273: 3 total, 3 up, 3 in
Oct  7 10:30:13 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e273: 3 total, 3 up, 3 in
Oct  7 10:30:13 np0005473739 nova_compute[259550]: 2025-10-07 14:30:13.268 2 DEBUG nova.storage.rbd_utils [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] creating snapshot(snap) on rbd image(ca202b7f-8033-4699-bf89-025968f471a0) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462
Oct  7 10:30:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2029: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 23 KiB/s wr, 102 op/s
Oct  7 10:30:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e273 do_prune osdmap full prune enabled
Oct  7 10:30:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e274 e274: 3 total, 3 up, 3 in
Oct  7 10:30:14 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e274: 3 total, 3 up, 3 in
Oct  7 10:30:15 np0005473739 nova_compute[259550]: 2025-10-07 14:30:15.894 2 INFO nova.virt.libvirt.driver [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Snapshot image upload complete
Oct  7 10:30:15 np0005473739 nova_compute[259550]: 2025-10-07 14:30:15.894 2 INFO nova.compute.manager [None req-ff21274a-0956-46e9-94e6-5370e7194fcf abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Took 4.57 seconds to snapshot the instance on the hypervisor.
Oct  7 10:30:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2031: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 5.2 KiB/s wr, 123 op/s
Oct  7 10:30:16 np0005473739 nova_compute[259550]: 2025-10-07 14:30:16.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:30:17 np0005473739 nova_compute[259550]: 2025-10-07 14:30:17.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:30:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e274 do_prune osdmap full prune enabled
Oct  7 10:30:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e275 e275: 3 total, 3 up, 3 in
Oct  7 10:30:17 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e275: 3 total, 3 up, 3 in
Oct  7 10:30:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e275 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.120 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.121 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.122 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.122 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.123 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.125 2 INFO nova.compute.manager [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Terminating instance
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.127 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "refresh_cache-5dc59e63-b96a-4de8-a978-dc88a8d1dadd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.128 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquired lock "refresh_cache-5dc59e63-b96a-4de8-a978-dc88a8d1dadd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.128 2 DEBUG nova.network.neutron [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:30:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2033: 305 pgs: 305 active+clean; 41 MiB data, 728 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 4.3 KiB/s wr, 105 op/s
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.653 2 DEBUG nova.network.neutron [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.962 2 DEBUG nova.network.neutron [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.977 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Releasing lock "refresh_cache-5dc59e63-b96a-4de8-a978-dc88a8d1dadd" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:30:18 np0005473739 nova_compute[259550]: 2025-10-07 14:30:18.978 2 DEBUG nova.compute.manager [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:30:19 np0005473739 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000068.scope: Deactivated successfully.
Oct  7 10:30:19 np0005473739 systemd[1]: machine-qemu\x2d131\x2dinstance\x2d00000068.scope: Consumed 1.712s CPU time.
Oct  7 10:30:19 np0005473739 systemd-machined[214580]: Machine qemu-131-instance-00000068 terminated.
Oct  7 10:30:19 np0005473739 nova_compute[259550]: 2025-10-07 14:30:19.201 2 INFO nova.virt.libvirt.driver [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance destroyed successfully.#033[00m
Oct  7 10:30:19 np0005473739 nova_compute[259550]: 2025-10-07 14:30:19.202 2 DEBUG nova.objects.instance [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lazy-loading 'resources' on Instance uuid 5dc59e63-b96a-4de8-a978-dc88a8d1dadd obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:30:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e275 do_prune osdmap full prune enabled
Oct  7 10:30:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e276 e276: 3 total, 3 up, 3 in
Oct  7 10:30:19 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e276: 3 total, 3 up, 3 in
Oct  7 10:30:19 np0005473739 nova_compute[259550]: 2025-10-07 14:30:19.732 2 INFO nova.virt.libvirt.driver [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Deleting instance files /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_del#033[00m
Oct  7 10:30:19 np0005473739 nova_compute[259550]: 2025-10-07 14:30:19.733 2 INFO nova.virt.libvirt.driver [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Deletion of /var/lib/nova/instances/5dc59e63-b96a-4de8-a978-dc88a8d1dadd_del complete#033[00m
Oct  7 10:30:19 np0005473739 nova_compute[259550]: 2025-10-07 14:30:19.935 2 INFO nova.compute.manager [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:30:19 np0005473739 nova_compute[259550]: 2025-10-07 14:30:19.935 2 DEBUG oslo.service.loopingcall [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:30:19 np0005473739 nova_compute[259550]: 2025-10-07 14:30:19.936 2 DEBUG nova.compute.manager [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:30:19 np0005473739 nova_compute[259550]: 2025-10-07 14:30:19.936 2 DEBUG nova.network.neutron [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:30:20 np0005473739 podman[368120]: 2025-10-07 14:30:20.08309955 +0000 UTC m=+0.074637335 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 10:30:20 np0005473739 podman[368121]: 2025-10-07 14:30:20.083875891 +0000 UTC m=+0.065447400 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:30:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2035: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 76 KiB/s rd, 2.7 KiB/s wr, 97 op/s
Oct  7 10:30:20 np0005473739 nova_compute[259550]: 2025-10-07 14:30:20.195 2 DEBUG nova.network.neutron [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:30:20 np0005473739 nova_compute[259550]: 2025-10-07 14:30:20.248 2 DEBUG nova.network.neutron [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:30:20 np0005473739 nova_compute[259550]: 2025-10-07 14:30:20.307 2 INFO nova.compute.manager [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Took 0.37 seconds to deallocate network for instance.#033[00m
Oct  7 10:30:20 np0005473739 nova_compute[259550]: 2025-10-07 14:30:20.440 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:30:20 np0005473739 nova_compute[259550]: 2025-10-07 14:30:20.440 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:30:20 np0005473739 nova_compute[259550]: 2025-10-07 14:30:20.496 2 DEBUG oslo_concurrency.processutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:30:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:30:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1680111326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:30:20 np0005473739 nova_compute[259550]: 2025-10-07 14:30:20.951 2 DEBUG oslo_concurrency.processutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:30:20 np0005473739 nova_compute[259550]: 2025-10-07 14:30:20.958 2 DEBUG nova.compute.provider_tree [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:30:20 np0005473739 nova_compute[259550]: 2025-10-07 14:30:20.980 2 DEBUG nova.scheduler.client.report [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:30:21 np0005473739 nova_compute[259550]: 2025-10-07 14:30:21.051 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:30:21 np0005473739 nova_compute[259550]: 2025-10-07 14:30:21.091 2 INFO nova.scheduler.client.report [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Deleted allocations for instance 5dc59e63-b96a-4de8-a978-dc88a8d1dadd#033[00m
Oct  7 10:30:21 np0005473739 nova_compute[259550]: 2025-10-07 14:30:21.164 2 DEBUG oslo_concurrency.lockutils [None req-93c29cb4-6976-44ee-a800-d48fbd8d0300 abb7b1705a844f358f445b980c484003 99916734e57d4c4a8076c48736dca89b - - default default] Lock "5dc59e63-b96a-4de8-a978-dc88a8d1dadd" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:30:21 np0005473739 nova_compute[259550]: 2025-10-07 14:30:21.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2036: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 99 KiB/s rd, 4.2 KiB/s wr, 128 op/s
Oct  7 10:30:22 np0005473739 nova_compute[259550]: 2025-10-07 14:30:22.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:30:22.283 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=30, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=29) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:30:22 np0005473739 nova_compute[259550]: 2025-10-07 14:30:22.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:30:22.286 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:30:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:30:22.289 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '30'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:30:22
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'volumes', 'images']
Oct  7 10:30:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:30:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e276 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e276 do_prune osdmap full prune enabled
Oct  7 10:30:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e277 e277: 3 total, 3 up, 3 in
Oct  7 10:30:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e277: 3 total, 3 up, 3 in
Oct  7 10:30:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:30:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:30:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:30:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:30:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:30:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:30:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:30:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:30:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:30:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:30:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2038: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 108 KiB/s rd, 5.5 KiB/s wr, 143 op/s
Oct  7 10:30:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2039: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 93 KiB/s rd, 4.7 KiB/s wr, 123 op/s
Oct  7 10:30:26 np0005473739 nova_compute[259550]: 2025-10-07 14:30:26.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:27 np0005473739 nova_compute[259550]: 2025-10-07 14:30:27.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e277 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e277 do_prune osdmap full prune enabled
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 e278: 3 total, 3 up, 3 in
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e278: 3 total, 3 up, 3 in
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #96. Immutable memtables: 0.
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.898177) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 55] Flushing memtable with next log file: 96
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427898280, "job": 55, "event": "flush_started", "num_memtables": 1, "num_entries": 479, "num_deletes": 254, "total_data_size": 378359, "memory_usage": 387728, "flush_reason": "Manual Compaction"}
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 55] Level-0 flush table #97: started
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427903417, "cf_name": "default", "job": 55, "event": "table_file_creation", "file_number": 97, "file_size": 333876, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 42527, "largest_seqno": 43005, "table_properties": {"data_size": 331159, "index_size": 753, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 7183, "raw_average_key_size": 20, "raw_value_size": 325666, "raw_average_value_size": 946, "num_data_blocks": 33, "num_entries": 344, "num_filter_entries": 344, "num_deletions": 254, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847410, "oldest_key_time": 1759847410, "file_creation_time": 1759847427, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 97, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 55] Flush lasted 5297 microseconds, and 2573 cpu microseconds.
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.903486) [db/flush_job.cc:967] [default] [JOB 55] Level-0 flush table #97: 333876 bytes OK
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.903519) [db/memtable_list.cc:519] [default] Level-0 commit table #97 started
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.906292) [db/memtable_list.cc:722] [default] Level-0 commit table #97: memtable #1 done
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.906348) EVENT_LOG_v1 {"time_micros": 1759847427906336, "job": 55, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.906380) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 55] Try to delete WAL files size 375484, prev total WAL file size 375484, number of live WAL files 2.
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000093.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.907202) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031353032' seq:72057594037927935, type:22 .. '6D6772737461740031373534' seq:0, type:0; will stop at (end)
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 56] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 55 Base level 0, inputs: [97(326KB)], [95(10MB)]
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427907254, "job": 56, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [97], "files_L6": [95], "score": -1, "input_data_size": 10927673, "oldest_snapshot_seqno": -1}
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 56] Generated table #98: 6476 keys, 7664385 bytes, temperature: kUnknown
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427960808, "cf_name": "default", "job": 56, "event": "table_file_creation", "file_number": 98, "file_size": 7664385, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7622798, "index_size": 24325, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16197, "raw_key_size": 167332, "raw_average_key_size": 25, "raw_value_size": 7508349, "raw_average_value_size": 1159, "num_data_blocks": 952, "num_entries": 6476, "num_filter_entries": 6476, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847427, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 98, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.961242) [db/compaction/compaction_job.cc:1663] [default] [JOB 56] Compacted 1@0 + 1@6 files to L6 => 7664385 bytes
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.963149) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 203.5 rd, 142.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.3, 10.1 +0.0 blob) out(7.3 +0.0 blob), read-write-amplify(55.7) write-amplify(23.0) OK, records in: 6992, records dropped: 516 output_compression: NoCompression
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.963180) EVENT_LOG_v1 {"time_micros": 1759847427963166, "job": 56, "event": "compaction_finished", "compaction_time_micros": 53705, "compaction_time_cpu_micros": 20180, "output_level": 6, "num_output_files": 1, "total_output_size": 7664385, "num_input_records": 6992, "num_output_records": 6476, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000097.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427963438, "job": 56, "event": "table_file_deletion", "file_number": 97}
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000095.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847427966558, "job": 56, "event": "table_file_deletion", "file_number": 95}
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.907046) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.966710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.966720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.966723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.966726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:27.966729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2041: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 55 KiB/s rd, 2.9 KiB/s wr, 72 op/s
Oct  7 10:30:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2042: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 767 B/s wr, 18 op/s
Oct  7 10:30:31 np0005473739 nova_compute[259550]: 2025-10-07 14:30:31.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2043: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 661 B/s wr, 15 op/s
Oct  7 10:30:32 np0005473739 nova_compute[259550]: 2025-10-07 14:30:32.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:30:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:30:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:30:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2389509859' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:30:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:30:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2389509859' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:30:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:32 np0005473739 podman[368179]: 2025-10-07 14:30:32.976731842 +0000 UTC m=+0.057019704 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:30:33 np0005473739 podman[368180]: 2025-10-07 14:30:33.01483616 +0000 UTC m=+0.090983392 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Oct  7 10:30:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2044: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:34 np0005473739 nova_compute[259550]: 2025-10-07 14:30:34.202 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847419.1995983, 5dc59e63-b96a-4de8-a978-dc88a8d1dadd => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:30:34 np0005473739 nova_compute[259550]: 2025-10-07 14:30:34.203 2 INFO nova.compute.manager [-] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:30:34 np0005473739 nova_compute[259550]: 2025-10-07 14:30:34.233 2 DEBUG nova.compute.manager [None req-6d30e21f-b3c4-4fcc-a718-a5d5dec39bf8 - - - - - -] [instance: 5dc59e63-b96a-4de8-a978-dc88a8d1dadd] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:30:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 3600.0 total, 600.0 interval
Cumulative writes: 9371 writes, 42K keys, 9371 commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s
Cumulative WAL: 9371 writes, 9371 syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 1611 writes, 7729 keys, 1611 commit groups, 1.0 writes per commit group, ingest: 9.71 MB, 0.02 MB/s
Interval WAL: 1611 writes, 1611 syncs, 1.00 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     75.6      0.68              0.16        28    0.024       0      0       0.0       0.0
  L6      1/0    7.31 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.1    139.6    115.4      1.84              0.61        27    0.068    149K    15K       0.0       0.0
 Sum      1/0    7.31 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.1    101.7    104.6      2.52              0.77        55    0.046    149K    15K       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.6    111.7    108.4      0.64              0.19        14    0.046     48K   3615       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    139.6    115.4      1.84              0.61        27    0.068    149K    15K       0.0       0.0
High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     75.9      0.68              0.16        27    0.025       0      0       0.0       0.0
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 3600.0 total, 600.0 interval
Flush(GB): cumulative 0.050, interval 0.009
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.26 GB write, 0.07 MB/s write, 0.25 GB read, 0.07 MB/s read, 2.5 seconds
Interval compaction: 0.07 GB write, 0.12 MB/s write, 0.07 GB read, 0.12 MB/s read, 0.6 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 28.97 MB table_size: 0 occupancy: 18446744073709551615 collections: 7 last_copies: 0 last_secs: 0.000285 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(1896,27.84 MB,9.15937%) FilterBlock(56,419.86 KB,0.134875%) IndexBlock(56,734.67 KB,0.236004%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **
Oct  7 10:30:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2045: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:36 np0005473739 nova_compute[259550]: 2025-10-07 14:30:36.948 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:37 np0005473739 nova_compute[259550]: 2025-10-07 14:30:37.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2046: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2047: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:41 np0005473739 nova_compute[259550]: 2025-10-07 14:30:41.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:41 np0005473739 nova_compute[259550]: 2025-10-07 14:30:41.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:30:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2048: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:42 np0005473739 nova_compute[259550]: 2025-10-07 14:30:42.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:42 np0005473739 nova_compute[259550]: 2025-10-07 14:30:42.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:30:43 np0005473739 nova_compute[259550]: 2025-10-07 14:30:43.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:30:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2049: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:44 np0005473739 nova_compute[259550]: 2025-10-07 14:30:44.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:30:44 np0005473739 nova_compute[259550]: 2025-10-07 14:30:44.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:30:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2050: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:46 np0005473739 nova_compute[259550]: 2025-10-07 14:30:46.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:46 np0005473739 nova_compute[259550]: 2025-10-07 14:30:46.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:30:47 np0005473739 nova_compute[259550]: 2025-10-07 14:30:47.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2051: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.026 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.027 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.027 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.027 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.027 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:30:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2052: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:30:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4224071388' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.487 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:30:50 np0005473739 podman[368246]: 2025-10-07 14:30:50.582386363 +0000 UTC m=+0.056991704 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 10:30:50 np0005473739 podman[368247]: 2025-10-07 14:30:50.590317625 +0000 UTC m=+0.062297286 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.658 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.659 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3882MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.659 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.659 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.742 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.742 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:30:50 np0005473739 nova_compute[259550]: 2025-10-07 14:30:50.756 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:30:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:30:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2016555259' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:30:51 np0005473739 nova_compute[259550]: 2025-10-07 14:30:51.226 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:30:51 np0005473739 nova_compute[259550]: 2025-10-07 14:30:51.233 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:30:51 np0005473739 nova_compute[259550]: 2025-10-07 14:30:51.408 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:30:51 np0005473739 nova_compute[259550]: 2025-10-07 14:30:51.450 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:30:51 np0005473739 nova_compute[259550]: 2025-10-07 14:30:51.451 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.791s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:30:51 np0005473739 nova_compute[259550]: 2025-10-07 14:30:51.956 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2053: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:52 np0005473739 nova_compute[259550]: 2025-10-07 14:30:52.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:30:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:30:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:30:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:30:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:30:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:30:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:52 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #99. Immutable memtables: 0.
Oct  7 10:30:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:52.920681) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:30:52 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 57] Flushing memtable with next log file: 99
Oct  7 10:30:52 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847452920739, "job": 57, "event": "flush_started", "num_memtables": 1, "num_entries": 461, "num_deletes": 257, "total_data_size": 379656, "memory_usage": 388512, "flush_reason": "Manual Compaction"}
Oct  7 10:30:52 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 57] Level-0 flush table #100: started
Oct  7 10:30:52 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847452994088, "cf_name": "default", "job": 57, "event": "table_file_creation", "file_number": 100, "file_size": 376294, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43006, "largest_seqno": 43466, "table_properties": {"data_size": 373651, "index_size": 679, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6127, "raw_average_key_size": 18, "raw_value_size": 368430, "raw_average_value_size": 1086, "num_data_blocks": 31, "num_entries": 339, "num_filter_entries": 339, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847428, "oldest_key_time": 1759847428, "file_creation_time": 1759847452, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 100, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:30:52 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 57] Flush lasted 73476 microseconds, and 2244 cpu microseconds.
Oct  7 10:30:52 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:52.994157) [db/flush_job.cc:967] [default] [JOB 57] Level-0 flush table #100: 376294 bytes OK
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:52.994185) [db/memtable_list.cc:519] [default] Level-0 commit table #100 started
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.031998) [db/memtable_list.cc:722] [default] Level-0 commit table #100: memtable #1 done
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.032047) EVENT_LOG_v1 {"time_micros": 1759847453032038, "job": 57, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.032073) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 57] Try to delete WAL files size 376868, prev total WAL file size 376868, number of live WAL files 2.
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000096.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.032919) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031353039' seq:72057594037927935, type:22 .. '6C6F676D0031373632' seq:0, type:0; will stop at (end)
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 58] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 57 Base level 0, inputs: [100(367KB)], [98(7484KB)]
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847453033022, "job": 58, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [100], "files_L6": [98], "score": -1, "input_data_size": 8040679, "oldest_snapshot_seqno": -1}
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 58] Generated table #101: 6294 keys, 7918275 bytes, temperature: kUnknown
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847453172278, "cf_name": "default", "job": 58, "event": "table_file_creation", "file_number": 101, "file_size": 7918275, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 7877073, "index_size": 24388, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 15749, "raw_key_size": 164489, "raw_average_key_size": 26, "raw_value_size": 7764899, "raw_average_value_size": 1233, "num_data_blocks": 951, "num_entries": 6294, "num_filter_entries": 6294, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847453, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 101, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.172690) [db/compaction/compaction_job.cc:1663] [default] [JOB 58] Compacted 1@0 + 1@6 files to L6 => 7918275 bytes
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.181290) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 57.7 rd, 56.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 7.3 +0.0 blob) out(7.6 +0.0 blob), read-write-amplify(42.4) write-amplify(21.0) OK, records in: 6815, records dropped: 521 output_compression: NoCompression
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.181312) EVENT_LOG_v1 {"time_micros": 1759847453181300, "job": 58, "event": "compaction_finished", "compaction_time_micros": 139420, "compaction_time_cpu_micros": 23731, "output_level": 6, "num_output_files": 1, "total_output_size": 7918275, "num_input_records": 6815, "num_output_records": 6294, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000100.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847453181581, "job": 58, "event": "table_file_deletion", "file_number": 100}
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000098.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847453183352, "job": 58, "event": "table_file_deletion", "file_number": 98}
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.032625) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.183479) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.183487) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.183489) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.183491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:30:53.183493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:30:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2054: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2055: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:30:56 np0005473739 nova_compute[259550]: 2025-10-07 14:30:56.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:57 np0005473739 nova_compute[259550]: 2025-10-07 14:30:57.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:30:57 np0005473739 nova_compute[259550]: 2025-10-07 14:30:57.451 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:30:57 np0005473739 nova_compute[259550]: 2025-10-07 14:30:57.452 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:30:57 np0005473739 nova_compute[259550]: 2025-10-07 14:30:57.452 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:30:57 np0005473739 nova_compute[259550]: 2025-10-07 14:30:57.477 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:30:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:30:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2056: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:31:00.063 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:31:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:31:00.064 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:31:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:31:00.064 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:31:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2057: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:01 np0005473739 nova_compute[259550]: 2025-10-07 14:31:01.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2058: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:02 np0005473739 nova_compute[259550]: 2025-10-07 14:31:02.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:02 np0005473739 nova_compute[259550]: 2025-10-07 14:31:02.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:04 np0005473739 podman[368307]: 2025-10-07 14:31:04.077574619 +0000 UTC m=+0.070857175 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:31:04 np0005473739 podman[368308]: 2025-10-07 14:31:04.085746476 +0000 UTC m=+0.073793712 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 10:31:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2059: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2060: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:06 np0005473739 nova_compute[259550]: 2025-10-07 14:31:06.961 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:07 np0005473739 nova_compute[259550]: 2025-10-07 14:31:07.257 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2061: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:09 np0005473739 nova_compute[259550]: 2025-10-07 14:31:09.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:09 np0005473739 nova_compute[259550]: 2025-10-07 14:31:09.982 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:31:09 np0005473739 nova_compute[259550]: 2025-10-07 14:31:09.983 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:31:09 np0005473739 nova_compute[259550]: 2025-10-07 14:31:09.983 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:31:09 np0005473739 nova_compute[259550]: 2025-10-07 14:31:09.984 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:31:09 np0005473739 nova_compute[259550]: 2025-10-07 14:31:09.984 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:31:09 np0005473739 nova_compute[259550]: 2025-10-07 14:31:09.984 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:31:10 np0005473739 nova_compute[259550]: 2025-10-07 14:31:10.187 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Oct  7 10:31:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2062: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:10 np0005473739 nova_compute[259550]: 2025-10-07 14:31:10.223 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct  7 10:31:10 np0005473739 nova_compute[259550]: 2025-10-07 14:31:10.223 2 WARNING nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122#033[00m
Oct  7 10:31:10 np0005473739 nova_compute[259550]: 2025-10-07 14:31:10.224 2 WARNING nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2#033[00m
Oct  7 10:31:10 np0005473739 nova_compute[259550]: 2025-10-07 14:31:10.224 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Removable base files: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2#033[00m
Oct  7 10:31:10 np0005473739 nova_compute[259550]: 2025-10-07 14:31:10.224 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122#033[00m
Oct  7 10:31:10 np0005473739 nova_compute[259550]: 2025-10-07 14:31:10.224 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2#033[00m
Oct  7 10:31:10 np0005473739 nova_compute[259550]: 2025-10-07 14:31:10.224 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct  7 10:31:10 np0005473739 nova_compute[259550]: 2025-10-07 14:31:10.225 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct  7 10:31:10 np0005473739 nova_compute[259550]: 2025-10-07 14:31:10.225 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct  7 10:31:10 np0005473739 nova_compute[259550]: 2025-10-07 14:31:10.225 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:31:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bc6e9d08-efb3-4e13-9bd1-79d3bab8eacd does not exist
Oct  7 10:31:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 268e7816-9da1-4970-bca0-d7eca3244db0 does not exist
Oct  7 10:31:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev af9c69df-db05-4bb6-b0c2-79a2614c67bb does not exist
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:31:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:31:11 np0005473739 podman[368628]: 2025-10-07 14:31:11.050526728 +0000 UTC m=+0.044727316 container create 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:31:11 np0005473739 systemd[1]: Started libpod-conmon-81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f.scope.
Oct  7 10:31:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:31:11 np0005473739 podman[368628]: 2025-10-07 14:31:11.130619408 +0000 UTC m=+0.124820086 container init 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 10:31:11 np0005473739 podman[368628]: 2025-10-07 14:31:11.034457278 +0000 UTC m=+0.028657886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:31:11 np0005473739 podman[368628]: 2025-10-07 14:31:11.136904225 +0000 UTC m=+0.131104813 container start 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:31:11 np0005473739 podman[368628]: 2025-10-07 14:31:11.140566723 +0000 UTC m=+0.134767331 container attach 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True)
Oct  7 10:31:11 np0005473739 cranky_hertz[368644]: 167 167
Oct  7 10:31:11 np0005473739 systemd[1]: libpod-81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f.scope: Deactivated successfully.
Oct  7 10:31:11 np0005473739 podman[368628]: 2025-10-07 14:31:11.143309347 +0000 UTC m=+0.137509935 container died 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:31:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-00b4461583832ab0d43e90e2edf2c11e8c9971471b5e74d201063172aa037345-merged.mount: Deactivated successfully.
Oct  7 10:31:11 np0005473739 podman[368628]: 2025-10-07 14:31:11.191717171 +0000 UTC m=+0.185917769 container remove 81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_hertz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:31:11 np0005473739 systemd[1]: libpod-conmon-81b58ebd11480b600bf3edd084d5542e3291eeeb0836fbdd74a7eb45a7154a7f.scope: Deactivated successfully.
Oct  7 10:31:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:31:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:31:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:31:11 np0005473739 podman[368668]: 2025-10-07 14:31:11.375994994 +0000 UTC m=+0.049152464 container create 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:31:11 np0005473739 systemd[1]: Started libpod-conmon-84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b.scope.
Oct  7 10:31:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:31:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:11 np0005473739 podman[368668]: 2025-10-07 14:31:11.356820331 +0000 UTC m=+0.029977841 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:31:11 np0005473739 podman[368668]: 2025-10-07 14:31:11.453770482 +0000 UTC m=+0.126927962 container init 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:31:11 np0005473739 podman[368668]: 2025-10-07 14:31:11.461946611 +0000 UTC m=+0.135104071 container start 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:31:11 np0005473739 podman[368668]: 2025-10-07 14:31:11.465566317 +0000 UTC m=+0.138723807 container attach 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:31:11 np0005473739 nova_compute[259550]: 2025-10-07 14:31:11.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2063: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:12 np0005473739 nova_compute[259550]: 2025-10-07 14:31:12.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:12 np0005473739 unruffled_vaughan[368685]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:31:12 np0005473739 unruffled_vaughan[368685]: --> relative data size: 1.0
Oct  7 10:31:12 np0005473739 unruffled_vaughan[368685]: --> All data devices are unavailable
Oct  7 10:31:12 np0005473739 systemd[1]: libpod-84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b.scope: Deactivated successfully.
Oct  7 10:31:12 np0005473739 systemd[1]: libpod-84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b.scope: Consumed 1.008s CPU time.
Oct  7 10:31:12 np0005473739 podman[368714]: 2025-10-07 14:31:12.588215305 +0000 UTC m=+0.032563991 container died 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:31:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b5afb7662f24797114240f6cd28376e019a89aed461e6e4fa2b65f3aa2a126cd-merged.mount: Deactivated successfully.
Oct  7 10:31:12 np0005473739 podman[368714]: 2025-10-07 14:31:12.65240674 +0000 UTC m=+0.096755396 container remove 84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_vaughan, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 10:31:12 np0005473739 systemd[1]: libpod-conmon-84ed720b1ce37a829e47d5c2beac4d1d50f67c3d4275724021ddb214f7163a7b.scope: Deactivated successfully.
Oct  7 10:31:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:13 np0005473739 podman[368870]: 2025-10-07 14:31:13.300779055 +0000 UTC m=+0.035252793 container create 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:31:13 np0005473739 systemd[1]: Started libpod-conmon-60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c.scope.
Oct  7 10:31:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:31:13 np0005473739 podman[368870]: 2025-10-07 14:31:13.286447452 +0000 UTC m=+0.020921210 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:31:13 np0005473739 podman[368870]: 2025-10-07 14:31:13.382878358 +0000 UTC m=+0.117352116 container init 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 10:31:13 np0005473739 podman[368870]: 2025-10-07 14:31:13.391501829 +0000 UTC m=+0.125975567 container start 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:31:13 np0005473739 podman[368870]: 2025-10-07 14:31:13.394472719 +0000 UTC m=+0.128946637 container attach 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:31:13 np0005473739 vibrant_herschel[368887]: 167 167
Oct  7 10:31:13 np0005473739 systemd[1]: libpod-60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c.scope: Deactivated successfully.
Oct  7 10:31:13 np0005473739 podman[368870]: 2025-10-07 14:31:13.397444048 +0000 UTC m=+0.131917816 container died 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:31:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-083c2f38c93c245c34f2b349604e237db69705cc3aad75c709ea55984f39e141-merged.mount: Deactivated successfully.
Oct  7 10:31:13 np0005473739 podman[368870]: 2025-10-07 14:31:13.447130055 +0000 UTC m=+0.181603793 container remove 60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_herschel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:31:13 np0005473739 systemd[1]: libpod-conmon-60a3f3a5f57365c1eb32f79153c5ca0b9846039e315c500138089407c05db27c.scope: Deactivated successfully.
Oct  7 10:31:13 np0005473739 podman[368910]: 2025-10-07 14:31:13.675169479 +0000 UTC m=+0.054178808 container create e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:31:13 np0005473739 systemd[1]: Started libpod-conmon-e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a.scope.
Oct  7 10:31:13 np0005473739 podman[368910]: 2025-10-07 14:31:13.654683512 +0000 UTC m=+0.033692831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:31:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:31:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b462f81417e286b3524676ceaabbebe01deeeee65dcc3ad57083b396c3575d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b462f81417e286b3524676ceaabbebe01deeeee65dcc3ad57083b396c3575d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b462f81417e286b3524676ceaabbebe01deeeee65dcc3ad57083b396c3575d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f5b462f81417e286b3524676ceaabbebe01deeeee65dcc3ad57083b396c3575d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:13 np0005473739 podman[368910]: 2025-10-07 14:31:13.78632996 +0000 UTC m=+0.165339319 container init e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:31:13 np0005473739 podman[368910]: 2025-10-07 14:31:13.799340687 +0000 UTC m=+0.178350046 container start e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:31:13 np0005473739 podman[368910]: 2025-10-07 14:31:13.80358097 +0000 UTC m=+0.182590309 container attach e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 10:31:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2064: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:14 np0005473739 sharp_ride[368926]: {
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:    "0": [
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:        {
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "devices": [
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "/dev/loop3"
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            ],
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_name": "ceph_lv0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_size": "21470642176",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "name": "ceph_lv0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "tags": {
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.cluster_name": "ceph",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.crush_device_class": "",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.encrypted": "0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.osd_id": "0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.type": "block",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.vdo": "0"
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            },
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "type": "block",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "vg_name": "ceph_vg0"
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:        }
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:    ],
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:    "1": [
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:        {
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "devices": [
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "/dev/loop4"
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            ],
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_name": "ceph_lv1",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_size": "21470642176",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "name": "ceph_lv1",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "tags": {
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.cluster_name": "ceph",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.crush_device_class": "",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.encrypted": "0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.osd_id": "1",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.type": "block",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.vdo": "0"
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            },
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "type": "block",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "vg_name": "ceph_vg1"
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:        }
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:    ],
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:    "2": [
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:        {
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "devices": [
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "/dev/loop5"
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            ],
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_name": "ceph_lv2",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_size": "21470642176",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "name": "ceph_lv2",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "tags": {
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.cluster_name": "ceph",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.crush_device_class": "",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.encrypted": "0",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.osd_id": "2",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.type": "block",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:                "ceph.vdo": "0"
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            },
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "type": "block",
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:            "vg_name": "ceph_vg2"
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:        }
Oct  7 10:31:14 np0005473739 sharp_ride[368926]:    ]
Oct  7 10:31:14 np0005473739 sharp_ride[368926]: }
Oct  7 10:31:14 np0005473739 systemd[1]: libpod-e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a.scope: Deactivated successfully.
Oct  7 10:31:14 np0005473739 podman[368910]: 2025-10-07 14:31:14.631213785 +0000 UTC m=+1.010223114 container died e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 10:31:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f5b462f81417e286b3524676ceaabbebe01deeeee65dcc3ad57083b396c3575d-merged.mount: Deactivated successfully.
Oct  7 10:31:14 np0005473739 podman[368910]: 2025-10-07 14:31:14.709758813 +0000 UTC m=+1.088768152 container remove e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:31:14 np0005473739 systemd[1]: libpod-conmon-e9d7b2ca888e2d801ce40e07f5682d6f21841f1f31c0700ee046e8110c413f5a.scope: Deactivated successfully.
Oct  7 10:31:15 np0005473739 podman[369087]: 2025-10-07 14:31:15.319248939 +0000 UTC m=+0.051806165 container create 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:31:15 np0005473739 systemd[1]: Started libpod-conmon-81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547.scope.
Oct  7 10:31:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:31:15 np0005473739 podman[369087]: 2025-10-07 14:31:15.29384613 +0000 UTC m=+0.026403426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:31:15 np0005473739 podman[369087]: 2025-10-07 14:31:15.397776728 +0000 UTC m=+0.130334004 container init 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 10:31:15 np0005473739 podman[369087]: 2025-10-07 14:31:15.404673681 +0000 UTC m=+0.137230917 container start 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:31:15 np0005473739 podman[369087]: 2025-10-07 14:31:15.407709463 +0000 UTC m=+0.140266719 container attach 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct  7 10:31:15 np0005473739 gracious_einstein[369103]: 167 167
Oct  7 10:31:15 np0005473739 systemd[1]: libpod-81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547.scope: Deactivated successfully.
Oct  7 10:31:15 np0005473739 podman[369087]: 2025-10-07 14:31:15.410779405 +0000 UTC m=+0.143336651 container died 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:31:15 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4ccab8c5e09f03464210ae4f4fb0145ad4b952ec401bb2d8e43830e5bdd59054-merged.mount: Deactivated successfully.
Oct  7 10:31:15 np0005473739 podman[369087]: 2025-10-07 14:31:15.45248384 +0000 UTC m=+0.185041086 container remove 81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_einstein, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:31:15 np0005473739 systemd[1]: libpod-conmon-81c08aa8fd0eb32c15ddfc786c081ac37731b8569c22eb43f932e1c02ba66547.scope: Deactivated successfully.
Oct  7 10:31:15 np0005473739 podman[369126]: 2025-10-07 14:31:15.640476923 +0000 UTC m=+0.051761434 container create e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 10:31:15 np0005473739 systemd[1]: Started libpod-conmon-e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2.scope.
Oct  7 10:31:15 np0005473739 podman[369126]: 2025-10-07 14:31:15.615509215 +0000 UTC m=+0.026793756 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:31:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:31:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e5cd4690c1c256ab4b36aabfc465e0485133cfece561db8a9809eb48057ba2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e5cd4690c1c256ab4b36aabfc465e0485133cfece561db8a9809eb48057ba2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e5cd4690c1c256ab4b36aabfc465e0485133cfece561db8a9809eb48057ba2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36e5cd4690c1c256ab4b36aabfc465e0485133cfece561db8a9809eb48057ba2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:31:15 np0005473739 podman[369126]: 2025-10-07 14:31:15.743116355 +0000 UTC m=+0.154400896 container init e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 10:31:15 np0005473739 podman[369126]: 2025-10-07 14:31:15.75040761 +0000 UTC m=+0.161692111 container start e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:31:15 np0005473739 podman[369126]: 2025-10-07 14:31:15.75452577 +0000 UTC m=+0.165810311 container attach e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:31:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2065: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]: {
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "osd_id": 2,
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "type": "bluestore"
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:    },
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "osd_id": 1,
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "type": "bluestore"
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:    },
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "osd_id": 0,
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:        "type": "bluestore"
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]:    }
Oct  7 10:31:16 np0005473739 unruffled_engelbart[369143]: }
Oct  7 10:31:16 np0005473739 systemd[1]: libpod-e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2.scope: Deactivated successfully.
Oct  7 10:31:16 np0005473739 podman[369126]: 2025-10-07 14:31:16.747385829 +0000 UTC m=+1.158670330 container died e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:31:16 np0005473739 systemd[1]: libpod-e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2.scope: Consumed 1.004s CPU time.
Oct  7 10:31:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-36e5cd4690c1c256ab4b36aabfc465e0485133cfece561db8a9809eb48057ba2-merged.mount: Deactivated successfully.
Oct  7 10:31:16 np0005473739 podman[369126]: 2025-10-07 14:31:16.808152413 +0000 UTC m=+1.219436914 container remove e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_engelbart, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:31:16 np0005473739 systemd[1]: libpod-conmon-e0f26255f09fb2297e79faa17e5a44dbbd55c303d1a2ec75385e5c0afd643ff2.scope: Deactivated successfully.
Oct  7 10:31:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:31:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:31:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:31:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:31:16 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9d87e0a3-a92d-4ac6-b8bd-81fb4535bfbc does not exist
Oct  7 10:31:16 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 83fce1de-332f-4197-9389-6fd9f55bfffe does not exist
Oct  7 10:31:16 np0005473739 nova_compute[259550]: 2025-10-07 14:31:16.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:17 np0005473739 nova_compute[259550]: 2025-10-07 14:31:17.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:31:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:31:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2066: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2067: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:21 np0005473739 podman[369240]: 2025-10-07 14:31:21.075753296 +0000 UTC m=+0.065746377 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:31:21 np0005473739 podman[369241]: 2025-10-07 14:31:21.080155754 +0000 UTC m=+0.070540875 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:31:21 np0005473739 nova_compute[259550]: 2025-10-07 14:31:21.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2068: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:22 np0005473739 nova_compute[259550]: 2025-10-07 14:31:22.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:31:22
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'images', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'backups', '.mgr', 'vms', 'volumes', 'default.rgw.control', '.rgw.root', 'default.rgw.meta']
Oct  7 10:31:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:31:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:31:22.989 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=31, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=30) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:31:22 np0005473739 nova_compute[259550]: 2025-10-07 14:31:22.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:31:22.991 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:31:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:31:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:31:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:31:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:31:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:31:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:31:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:31:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:31:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:31:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:31:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2069: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:31:24.993 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '31'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:31:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2070: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:26 np0005473739 nova_compute[259550]: 2025-10-07 14:31:26.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:27 np0005473739 nova_compute[259550]: 2025-10-07 14:31:27.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2071: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2072: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:30 np0005473739 nova_compute[259550]: 2025-10-07 14:31:30.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:30 np0005473739 nova_compute[259550]: 2025-10-07 14:31:30.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 10:31:31 np0005473739 nova_compute[259550]: 2025-10-07 14:31:31.004 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 10:31:31 np0005473739 nova_compute[259550]: 2025-10-07 14:31:31.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2073: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:32 np0005473739 nova_compute[259550]: 2025-10-07 14:31:32.267 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:31:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:31:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:31:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2724747782' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:31:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:31:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2724747782' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:31:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2074: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:35 np0005473739 podman[369282]: 2025-10-07 14:31:35.057117094 +0000 UTC m=+0.046925705 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:31:35 np0005473739 podman[369283]: 2025-10-07 14:31:35.097825172 +0000 UTC m=+0.084844238 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  7 10:31:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2075: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:37 np0005473739 nova_compute[259550]: 2025-10-07 14:31:37.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:37 np0005473739 nova_compute[259550]: 2025-10-07 14:31:37.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:37 np0005473739 nova_compute[259550]: 2025-10-07 14:31:37.923 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2076: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2077: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:42 np0005473739 nova_compute[259550]: 2025-10-07 14:31:42.007 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2078: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:42 np0005473739 nova_compute[259550]: 2025-10-07 14:31:42.272 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:43 np0005473739 nova_compute[259550]: 2025-10-07 14:31:43.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:43 np0005473739 nova_compute[259550]: 2025-10-07 14:31:43.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:43 np0005473739 nova_compute[259550]: 2025-10-07 14:31:43.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 10:31:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2079: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:44 np0005473739 nova_compute[259550]: 2025-10-07 14:31:44.998 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:45 np0005473739 nova_compute[259550]: 2025-10-07 14:31:44.999 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:45 np0005473739 nova_compute[259550]: 2025-10-07 14:31:45.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:45 np0005473739 nova_compute[259550]: 2025-10-07 14:31:45.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:31:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2080: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:47 np0005473739 nova_compute[259550]: 2025-10-07 14:31:47.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:47 np0005473739 nova_compute[259550]: 2025-10-07 14:31:47.274 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:47 np0005473739 nova_compute[259550]: 2025-10-07 14:31:47.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2081: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:49 np0005473739 nova_compute[259550]: 2025-10-07 14:31:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2082: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:51 np0005473739 nova_compute[259550]: 2025-10-07 14:31:51.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.032 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.033 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.033 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.034 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:31:52 np0005473739 podman[369326]: 2025-10-07 14:31:52.070990309 +0000 UTC m=+0.059459069 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3)
Oct  7 10:31:52 np0005473739 podman[369327]: 2025-10-07 14:31:52.09684065 +0000 UTC m=+0.083210344 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid)
Oct  7 10:31:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2083: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:31:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656170575' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.512 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:31:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.719 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.721 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3889MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.722 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.722 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:31:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.946 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:31:52 np0005473739 nova_compute[259550]: 2025-10-07 14:31:52.947 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:31:53 np0005473739 nova_compute[259550]: 2025-10-07 14:31:53.004 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:31:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:31:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2099138458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:31:53 np0005473739 nova_compute[259550]: 2025-10-07 14:31:53.475 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:31:53 np0005473739 nova_compute[259550]: 2025-10-07 14:31:53.482 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:31:53 np0005473739 nova_compute[259550]: 2025-10-07 14:31:53.503 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:31:53 np0005473739 nova_compute[259550]: 2025-10-07 14:31:53.505 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:31:53 np0005473739 nova_compute[259550]: 2025-10-07 14:31:53.505 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:31:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2084: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:54 np0005473739 nova_compute[259550]: 2025-10-07 14:31:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:55 np0005473739 nova_compute[259550]: 2025-10-07 14:31:55.029 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:55 np0005473739 nova_compute[259550]: 2025-10-07 14:31:55.995 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:31:55 np0005473739 nova_compute[259550]: 2025-10-07 14:31:55.995 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:31:55 np0005473739 nova_compute[259550]: 2025-10-07 14:31:55.996 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:31:56 np0005473739 nova_compute[259550]: 2025-10-07 14:31:56.067 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:31:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2085: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:31:57 np0005473739 nova_compute[259550]: 2025-10-07 14:31:57.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:57 np0005473739 nova_compute[259550]: 2025-10-07 14:31:57.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:31:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:31:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2086: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:32:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:00.064 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:00.065 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:00.066 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2087: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:32:02 np0005473739 nova_compute[259550]: 2025-10-07 14:32:02.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2088: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:32:02 np0005473739 nova_compute[259550]: 2025-10-07 14:32:02.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:02 np0005473739 nova_compute[259550]: 2025-10-07 14:32:02.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:32:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2089: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:32:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:04.686 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0d:f4:0e 10.100.0.2 2001:db8::f816:3eff:fe0d:f40e'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe0d:f40e/64', 'neutron:device_id': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ac55d93f-5af1-4917-a7be-679169a02318) old=Port_Binding(mac=['fa:16:3e:0d:f4:0e 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches 
/usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:32:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:04.687 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ac55d93f-5af1-4917-a7be-679169a02318 in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc updated#033[00m
Oct  7 10:32:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:04.688 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:32:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:04.690 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f9b14ea-127b-4b33-8483-aeeca7635fa2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:06 np0005473739 podman[369408]: 2025-10-07 14:32:06.067857752 +0000 UTC m=+0.053958423 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:32:06 np0005473739 podman[369409]: 2025-10-07 14:32:06.129168311 +0000 UTC m=+0.109174048 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:32:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2090: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:32:06 np0005473739 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  7 10:32:07 np0005473739 nova_compute[259550]: 2025-10-07 14:32:07.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:07 np0005473739 nova_compute[259550]: 2025-10-07 14:32:07.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2091: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:32:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2092: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:32:12 np0005473739 nova_compute[259550]: 2025-10-07 14:32:12.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2093: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:32:12 np0005473739 nova_compute[259550]: 2025-10-07 14:32:12.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:13 np0005473739 nova_compute[259550]: 2025-10-07 14:32:13.343 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:13 np0005473739 nova_compute[259550]: 2025-10-07 14:32:13.344 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:13 np0005473739 nova_compute[259550]: 2025-10-07 14:32:13.428 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:32:13 np0005473739 nova_compute[259550]: 2025-10-07 14:32:13.748 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:13 np0005473739 nova_compute[259550]: 2025-10-07 14:32:13.748 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:13 np0005473739 nova_compute[259550]: 2025-10-07 14:32:13.755 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:32:13 np0005473739 nova_compute[259550]: 2025-10-07 14:32:13.755 2 INFO nova.compute.claims [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:32:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.0 total, 600.0 interval#012Cumulative writes: 31K writes, 126K keys, 31K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 31K writes, 11K syncs, 2.86 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5046 writes, 18K keys, 5046 commit groups, 1.0 writes per commit group, ingest: 17.27 MB, 0.03 MB/s#012Interval WAL: 5046 writes, 2084 syncs, 2.42 writes per sync, written: 0.02 GB, 0.03 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:32:13 np0005473739 nova_compute[259550]: 2025-10-07 14:32:13.991 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2094: 305 pgs: 305 active+clean; 41 MiB data, 732 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:32:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:32:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1193526635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:32:14 np0005473739 nova_compute[259550]: 2025-10-07 14:32:14.443 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.452s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:14 np0005473739 nova_compute[259550]: 2025-10-07 14:32:14.448 2 DEBUG nova.compute.provider_tree [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:32:14 np0005473739 nova_compute[259550]: 2025-10-07 14:32:14.594 2 DEBUG nova.scheduler.client.report [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:32:14 np0005473739 nova_compute[259550]: 2025-10-07 14:32:14.687 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.938s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:14 np0005473739 nova_compute[259550]: 2025-10-07 14:32:14.688 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:32:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:14Z|01052|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Oct  7 10:32:14 np0005473739 nova_compute[259550]: 2025-10-07 14:32:14.890 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:32:14 np0005473739 nova_compute[259550]: 2025-10-07 14:32:14.892 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:32:14 np0005473739 nova_compute[259550]: 2025-10-07 14:32:14.946 2 INFO nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.034 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.177 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.178 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.178 2 INFO nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Creating image(s)#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.203 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.226 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.248 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.251 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.306 2 DEBUG nova.policy [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.356 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.357 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.359 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.359 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.382 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.386 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0af0082a-1adc-40e7-b254-88e03182e802_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.732 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0af0082a-1adc-40e7-b254-88e03182e802_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.346s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.811 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:32:15 np0005473739 nova_compute[259550]: 2025-10-07 14:32:15.917 2 DEBUG nova.objects.instance [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 0af0082a-1adc-40e7-b254-88e03182e802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:32:16 np0005473739 nova_compute[259550]: 2025-10-07 14:32:16.005 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:32:16 np0005473739 nova_compute[259550]: 2025-10-07 14:32:16.006 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Ensure instance console log exists: /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:32:16 np0005473739 nova_compute[259550]: 2025-10-07 14:32:16.007 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:16 np0005473739 nova_compute[259550]: 2025-10-07 14:32:16.007 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:16 np0005473739 nova_compute[259550]: 2025-10-07 14:32:16.007 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2095: 305 pgs: 305 active+clean; 47 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 105 KiB/s wr, 11 op/s
Oct  7 10:32:16 np0005473739 nova_compute[259550]: 2025-10-07 14:32:16.677 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Successfully created port: 52f43128-c899-4d76-9e65-c99941c834d4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:32:17 np0005473739 nova_compute[259550]: 2025-10-07 14:32:17.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:17 np0005473739 nova_compute[259550]: 2025-10-07 14:32:17.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:32:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 27b074ba-49e9-41ba-960e-83360daa37bd does not exist
Oct  7 10:32:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f5f1fc18-5be6-442a-a971-0a039c8ff29b does not exist
Oct  7 10:32:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c4425fd0-b0d4-4087-b46a-5e52d95cfd10 does not exist
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:32:17 np0005473739 nova_compute[259550]: 2025-10-07 14:32:17.791 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Successfully updated port: 52f43128-c899-4d76-9e65-c99941c834d4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:32:17 np0005473739 nova_compute[259550]: 2025-10-07 14:32:17.914 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:32:17 np0005473739 nova_compute[259550]: 2025-10-07 14:32:17.914 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:32:17 np0005473739 nova_compute[259550]: 2025-10-07 14:32:17.915 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:32:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:18 np0005473739 nova_compute[259550]: 2025-10-07 14:32:18.072 2 DEBUG nova.compute.manager [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-changed-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:32:18 np0005473739 nova_compute[259550]: 2025-10-07 14:32:18.073 2 DEBUG nova.compute.manager [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing instance network info cache due to event network-changed-52f43128-c899-4d76-9e65-c99941c834d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:32:18 np0005473739 nova_compute[259550]: 2025-10-07 14:32:18.073 2 DEBUG oslo_concurrency.lockutils [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:32:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 3600.1 total, 600.0 interval
Cumulative writes: 36K writes, 138K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.04 MB/s
Cumulative WAL: 36K writes, 12K syncs, 2.79 writes per sync, written: 0.13 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 5197 writes, 17K keys, 5197 commit groups, 1.0 writes per commit group, ingest: 15.78 MB, 0.03 MB/s
Interval WAL: 5197 writes, 2171 syncs, 2.39 writes per sync, written: 0.02 GB, 0.03 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:32:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2096: 305 pgs: 305 active+clean; 47 MiB data, 732 MiB used, 59 GiB / 60 GiB avail; 6.2 KiB/s rd, 105 KiB/s wr, 11 op/s
Oct  7 10:32:18 np0005473739 podman[369919]: 2025-10-07 14:32:18.311737813 +0000 UTC m=+0.046480153 container create acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:32:18 np0005473739 systemd[1]: Started libpod-conmon-acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc.scope.
Oct  7 10:32:18 np0005473739 podman[369919]: 2025-10-07 14:32:18.28727429 +0000 UTC m=+0.022016720 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:32:18 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:32:18 np0005473739 podman[369919]: 2025-10-07 14:32:18.416892364 +0000 UTC m=+0.151634724 container init acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:32:18 np0005473739 podman[369919]: 2025-10-07 14:32:18.425922544 +0000 UTC m=+0.160664884 container start acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:32:18 np0005473739 podman[369919]: 2025-10-07 14:32:18.430033044 +0000 UTC m=+0.164775404 container attach acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:32:18 np0005473739 systemd[1]: libpod-acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc.scope: Deactivated successfully.
Oct  7 10:32:18 np0005473739 ecstatic_lewin[369935]: 167 167
Oct  7 10:32:18 np0005473739 conmon[369935]: conmon acc99383613f4cf1139c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc.scope/container/memory.events
Oct  7 10:32:18 np0005473739 podman[369919]: 2025-10-07 14:32:18.436224779 +0000 UTC m=+0.170967119 container died acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 10:32:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8a4eb203dbabd22801070dd86a80d322351ad944af0e4409dd3620fb99fe0042-merged.mount: Deactivated successfully.
Oct  7 10:32:18 np0005473739 podman[369919]: 2025-10-07 14:32:18.488414815 +0000 UTC m=+0.223157195 container remove acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_lewin, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:32:18 np0005473739 nova_compute[259550]: 2025-10-07 14:32:18.503 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:32:18 np0005473739 systemd[1]: libpod-conmon-acc99383613f4cf1139ccf792fbc17f778930d1c05d7c28193df398396e315dc.scope: Deactivated successfully.
Oct  7 10:32:18 np0005473739 podman[369959]: 2025-10-07 14:32:18.664389886 +0000 UTC m=+0.050101819 container create a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 10:32:18 np0005473739 systemd[1]: Started libpod-conmon-a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc.scope.
Oct  7 10:32:18 np0005473739 podman[369959]: 2025-10-07 14:32:18.64656228 +0000 UTC m=+0.032274243 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:32:18 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:32:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:18 np0005473739 podman[369959]: 2025-10-07 14:32:18.76819868 +0000 UTC m=+0.153910633 container init a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:32:18 np0005473739 podman[369959]: 2025-10-07 14:32:18.7794315 +0000 UTC m=+0.165143443 container start a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 10:32:18 np0005473739 podman[369959]: 2025-10-07 14:32:18.783986522 +0000 UTC m=+0.169698465 container attach a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:32:19 np0005473739 stoic_carver[369977]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:32:19 np0005473739 stoic_carver[369977]: --> relative data size: 1.0
Oct  7 10:32:19 np0005473739 stoic_carver[369977]: --> All data devices are unavailable
Oct  7 10:32:19 np0005473739 systemd[1]: libpod-a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc.scope: Deactivated successfully.
Oct  7 10:32:19 np0005473739 podman[369959]: 2025-10-07 14:32:19.850082628 +0000 UTC m=+1.235794561 container died a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:32:19 np0005473739 systemd[1]: libpod-a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc.scope: Consumed 1.025s CPU time.
Oct  7 10:32:19 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1d8335a6c70330c017af1958cb8c5710f3634b4378330da9b028540833b8120f-merged.mount: Deactivated successfully.
Oct  7 10:32:19 np0005473739 podman[369959]: 2025-10-07 14:32:19.906179847 +0000 UTC m=+1.291891780 container remove a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 10:32:19 np0005473739 systemd[1]: libpod-conmon-a77bb6ac38dff74e718d5350afb42a54abc54f8c89edb7f9e8e58559bb1258dc.scope: Deactivated successfully.
Oct  7 10:32:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2097: 305 pgs: 305 active+clean; 84 MiB data, 748 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 1.4 MiB/s wr, 24 op/s
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.320 2 DEBUG nova.network.neutron [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.459 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.459 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Instance network_info: |[{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.459 2 DEBUG oslo_concurrency.lockutils [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.460 2 DEBUG nova.network.neutron [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.463 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Start _get_guest_xml network_info=[{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.469 2 WARNING nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.475 2 DEBUG nova.virt.libvirt.host [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:32:20 np0005473739 podman[370159]: 2025-10-07 14:32:20.476677012 +0000 UTC m=+0.043622427 container create 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.477 2 DEBUG nova.virt.libvirt.host [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.481 2 DEBUG nova.virt.libvirt.host [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.481 2 DEBUG nova.virt.libvirt.host [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.482 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.482 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.483 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.483 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.483 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.483 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.483 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.484 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.484 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.484 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.484 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.484 2 DEBUG nova.virt.hardware [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.488 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:20 np0005473739 systemd[1]: Started libpod-conmon-546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b.scope.
Oct  7 10:32:20 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:32:20 np0005473739 podman[370159]: 2025-10-07 14:32:20.456847792 +0000 UTC m=+0.023793237 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:32:20 np0005473739 podman[370159]: 2025-10-07 14:32:20.566318997 +0000 UTC m=+0.133264442 container init 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct  7 10:32:20 np0005473739 podman[370159]: 2025-10-07 14:32:20.573113829 +0000 UTC m=+0.140059244 container start 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 10:32:20 np0005473739 gallant_kare[370176]: 167 167
Oct  7 10:32:20 np0005473739 systemd[1]: libpod-546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b.scope: Deactivated successfully.
Oct  7 10:32:20 np0005473739 podman[370159]: 2025-10-07 14:32:20.593910623 +0000 UTC m=+0.160856038 container attach 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:32:20 np0005473739 podman[370159]: 2025-10-07 14:32:20.594410087 +0000 UTC m=+0.161355522 container died 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 10:32:20 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ff3bc5a2b6358647cbbb4fa3418bdcf456903a189b4717319d6fc7ca4c30e576-merged.mount: Deactivated successfully.
Oct  7 10:32:20 np0005473739 podman[370159]: 2025-10-07 14:32:20.81006078 +0000 UTC m=+0.377006205 container remove 546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_kare, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:32:20 np0005473739 systemd[1]: libpod-conmon-546b9a5a6e818f969963544f40f5296d1be86c04f30a07219c14acee4c54d64b.scope: Deactivated successfully.
Oct  7 10:32:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:32:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1219155710' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.956 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.985 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:20 np0005473739 nova_compute[259550]: 2025-10-07 14:32:20.988 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:20 np0005473739 podman[370221]: 2025-10-07 14:32:20.998784523 +0000 UTC m=+0.070394453 container create c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:32:21 np0005473739 podman[370221]: 2025-10-07 14:32:20.952805284 +0000 UTC m=+0.024415234 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:32:21 np0005473739 systemd[1]: Started libpod-conmon-c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b.scope.
Oct  7 10:32:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:32:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3e4bebd85ebd312242a8567909aa0fec8ac1ddb56d3616450cd8ceb0371c16/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3e4bebd85ebd312242a8567909aa0fec8ac1ddb56d3616450cd8ceb0371c16/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3e4bebd85ebd312242a8567909aa0fec8ac1ddb56d3616450cd8ceb0371c16/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d3e4bebd85ebd312242a8567909aa0fec8ac1ddb56d3616450cd8ceb0371c16/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:21 np0005473739 podman[370221]: 2025-10-07 14:32:21.097559121 +0000 UTC m=+0.169169081 container init c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:32:21 np0005473739 podman[370221]: 2025-10-07 14:32:21.104662241 +0000 UTC m=+0.176272171 container start c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:32:21 np0005473739 podman[370221]: 2025-10-07 14:32:21.114090563 +0000 UTC m=+0.185700503 container attach c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:32:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:32:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3796852326' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.426 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.430 2 DEBUG nova.virt.libvirt.vif [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1273910432',display_name='tempest-TestGettingAddress-server-1273910432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1273910432',id=105,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-v3y9hbt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:15Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=0af0082a-1adc-40e7-b254-88e03182e802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.430 2 DEBUG nova.network.os_vif_util [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.431 2 DEBUG nova.network.os_vif_util [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.433 2 DEBUG nova.objects.instance [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 0af0082a-1adc-40e7-b254-88e03182e802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.453 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  <uuid>0af0082a-1adc-40e7-b254-88e03182e802</uuid>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  <name>instance-00000069</name>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-1273910432</nova:name>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:32:20</nova:creationTime>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <nova:port uuid="52f43128-c899-4d76-9e65-c99941c834d4">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe52:178c" ipVersion="6"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <entry name="serial">0af0082a-1adc-40e7-b254-88e03182e802</entry>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <entry name="uuid">0af0082a-1adc-40e7-b254-88e03182e802</entry>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/0af0082a-1adc-40e7-b254-88e03182e802_disk">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/0af0082a-1adc-40e7-b254-88e03182e802_disk.config">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:52:17:8c"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <target dev="tap52f43128-c8"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/console.log" append="off"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:32:21 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:32:21 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:32:21 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:32:21 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.454 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Preparing to wait for external event network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.455 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.456 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.456 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.457 2 DEBUG nova.virt.libvirt.vif [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1273910432',display_name='tempest-TestGettingAddress-server-1273910432',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1273910432',id=105,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-v3y9hbt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:15Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=0af0082a-1adc-40e7-b254-88e03182e802,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.457 2 DEBUG nova.network.os_vif_util [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.458 2 DEBUG nova.network.os_vif_util [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.458 2 DEBUG os_vif [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.459 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.460 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap52f43128-c8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap52f43128-c8, col_values=(('external_ids', {'iface-id': '52f43128-c899-4d76-9e65-c99941c834d4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:17:8c', 'vm-uuid': '0af0082a-1adc-40e7-b254-88e03182e802'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:21 np0005473739 NetworkManager[44949]: <info>  [1759847541.4669] manager: (tap52f43128-c8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/425)
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.478 2 INFO os_vif [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8')#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.542 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.543 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.544 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:52:17:8c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.545 2 INFO nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Using config drive#033[00m
Oct  7 10:32:21 np0005473739 nova_compute[259550]: 2025-10-07 14:32:21.577 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:21 np0005473739 cool_benz[370258]: {
Oct  7 10:32:21 np0005473739 cool_benz[370258]:    "0": [
Oct  7 10:32:21 np0005473739 cool_benz[370258]:        {
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "devices": [
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "/dev/loop3"
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            ],
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_name": "ceph_lv0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_size": "21470642176",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "name": "ceph_lv0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "tags": {
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.cluster_name": "ceph",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.crush_device_class": "",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.encrypted": "0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.osd_id": "0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.type": "block",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.vdo": "0"
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            },
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "type": "block",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "vg_name": "ceph_vg0"
Oct  7 10:32:21 np0005473739 cool_benz[370258]:        }
Oct  7 10:32:21 np0005473739 cool_benz[370258]:    ],
Oct  7 10:32:21 np0005473739 cool_benz[370258]:    "1": [
Oct  7 10:32:21 np0005473739 cool_benz[370258]:        {
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "devices": [
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "/dev/loop4"
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            ],
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_name": "ceph_lv1",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_size": "21470642176",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "name": "ceph_lv1",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "tags": {
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.cluster_name": "ceph",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.crush_device_class": "",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.encrypted": "0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.osd_id": "1",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.type": "block",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.vdo": "0"
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            },
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "type": "block",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "vg_name": "ceph_vg1"
Oct  7 10:32:21 np0005473739 cool_benz[370258]:        }
Oct  7 10:32:21 np0005473739 cool_benz[370258]:    ],
Oct  7 10:32:21 np0005473739 cool_benz[370258]:    "2": [
Oct  7 10:32:21 np0005473739 cool_benz[370258]:        {
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "devices": [
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "/dev/loop5"
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            ],
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_name": "ceph_lv2",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_size": "21470642176",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "name": "ceph_lv2",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "tags": {
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.cluster_name": "ceph",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.crush_device_class": "",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.encrypted": "0",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.osd_id": "2",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.type": "block",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:                "ceph.vdo": "0"
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            },
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "type": "block",
Oct  7 10:32:21 np0005473739 cool_benz[370258]:            "vg_name": "ceph_vg2"
Oct  7 10:32:21 np0005473739 cool_benz[370258]:        }
Oct  7 10:32:21 np0005473739 cool_benz[370258]:    ]
Oct  7 10:32:21 np0005473739 cool_benz[370258]: }
Oct  7 10:32:21 np0005473739 systemd[1]: libpod-c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b.scope: Deactivated successfully.
Oct  7 10:32:21 np0005473739 podman[370221]: 2025-10-07 14:32:21.89763106 +0000 UTC m=+0.969240990 container died c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:32:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3d3e4bebd85ebd312242a8567909aa0fec8ac1ddb56d3616450cd8ceb0371c16-merged.mount: Deactivated successfully.
Oct  7 10:32:21 np0005473739 podman[370221]: 2025-10-07 14:32:21.988100147 +0000 UTC m=+1.059710067 container remove c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cool_benz, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:32:21 np0005473739 systemd[1]: libpod-conmon-c8c4d57c1bb63bb60e6aeb6091dd650e56dfacb943e7249a1b4a2b904ffe935b.scope: Deactivated successfully.
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.011 2 INFO nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Creating config drive at /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config#033[00m
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.016 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8af9lohh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.082 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.130 2 DEBUG nova.network.neutron [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updated VIF entry in instance network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.130 2 DEBUG nova.network.neutron [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.163 2 DEBUG oslo_concurrency.lockutils [req-25b925b4-4644-4ffc-b50c-b10fb77cbf93 req-5801e83f-461c-4eeb-8f28-bad4c7b3bfa8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.190 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp8af9lohh" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:22 np0005473739 podman[370349]: 2025-10-07 14:32:22.213530281 +0000 UTC m=+0.063571770 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:32:22 np0005473739 podman[370350]: 2025-10-07 14:32:22.213656384 +0000 UTC m=+0.061073892 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2098: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.233 2 DEBUG nova.storage.rbd_utils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 0af0082a-1adc-40e7-b254-88e03182e802_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.237 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config 0af0082a-1adc-40e7-b254-88e03182e802_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:32:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 3600.1 total, 600.0 interval#012Cumulative writes: 25K writes, 101K keys, 25K commit groups, 1.0 writes per commit group, ingest: 0.10 GB, 0.03 MB/s#012Cumulative WAL: 25K writes, 8900 syncs, 2.89 writes per sync, written: 0.10 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 3566 writes, 12K keys, 3566 commit groups, 1.0 writes per commit group, ingest: 9.89 MB, 0.02 MB/s#012Interval WAL: 3566 writes, 1483 syncs, 2.40 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.515 2 DEBUG oslo_concurrency.processutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config 0af0082a-1adc-40e7-b254-88e03182e802_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.278s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.516 2 INFO nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Deleting local config drive /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802/disk.config because it was imported into RBD.#033[00m
Oct  7 10:32:22 np0005473739 systemd[1]: Starting libvirt secret daemon...
Oct  7 10:32:22 np0005473739 systemd[1]: Started libvirt secret daemon.
Oct  7 10:32:22 np0005473739 kernel: tap52f43128-c8: entered promiscuous mode
Oct  7 10:32:22 np0005473739 NetworkManager[44949]: <info>  [1759847542.6307] manager: (tap52f43128-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/426)
Oct  7 10:32:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:22Z|01053|binding|INFO|Claiming lport 52f43128-c899-4d76-9e65-c99941c834d4 for this chassis.
Oct  7 10:32:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:22Z|01054|binding|INFO|52f43128-c899-4d76-9e65-c99941c834d4: Claiming fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:22 np0005473739 systemd-machined[214580]: New machine qemu-132-instance-00000069.
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.668 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], port_security=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe52:178c/64', 'neutron:device_id': '0af0082a-1adc-40e7-b254-88e03182e802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=52f43128-c899-4d76-9e65-c99941c834d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.669 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 52f43128-c899-4d76-9e65-c99941c834d4 in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc bound to our chassis#033[00m
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.670 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc#033[00m
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:32:22 np0005473739 systemd[1]: Started Virtual Machine qemu-132-instance-00000069.
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1688fbf1-b3e0-4f5e-a71d-a43e42659ff8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.689 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap1da6903e-11 in ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:32:22
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'volumes', 'vms', 'default.rgw.log', '.mgr', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta']
Oct  7 10:32:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.694 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap1da6903e-10 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.694 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e18f8abd-2b3a-4644-9fa6-ccdb8388341d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.695 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[462bd3e3-33a8-419e-9f07-7db3fb561cdf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 systemd-udevd[370580]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.707 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cdc1af53-1687-46f9-ba90-917598195487]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 NetworkManager[44949]: <info>  [1759847542.7103] device (tap52f43128-c8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:32:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:22Z|01055|binding|INFO|Setting lport 52f43128-c899-4d76-9e65-c99941c834d4 ovn-installed in OVS
Oct  7 10:32:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:22Z|01056|binding|INFO|Setting lport 52f43128-c899-4d76-9e65-c99941c834d4 up in Southbound
Oct  7 10:32:22 np0005473739 NetworkManager[44949]: <info>  [1759847542.7131] device (tap52f43128-c8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:32:22 np0005473739 nova_compute[259550]: 2025-10-07 14:32:22.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.725 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3aefb430-81dc-429d-ac32-aa1bf02ec672]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 podman[370559]: 2025-10-07 14:32:22.657397001 +0000 UTC m=+0.025619556 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.770 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cc99483d-cd24-4161-9c3f-8c69974dc24b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.776 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[942d2479-d543-42a5-92d9-38af5b82d68b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 systemd-udevd[370583]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:32:22 np0005473739 NetworkManager[44949]: <info>  [1759847542.7783] manager: (tap1da6903e-10): new Veth device (/org/freedesktop/NetworkManager/Devices/427)
Oct  7 10:32:22 np0005473739 podman[370559]: 2025-10-07 14:32:22.797316239 +0000 UTC m=+0.165538794 container create 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.814 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1f0feee2-3e4d-48b5-8dd7-1a4480f70f62]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.817 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[93e7447c-8f8e-46c2-8597-9c46fd06ad93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 NetworkManager[44949]: <info>  [1759847542.8362] device (tap1da6903e-10): carrier: link connected
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.842 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cb05c1dd-a3bc-4adf-b65e-2d9e46fc9689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 systemd[1]: Started libpod-conmon-5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65.scope.
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.863 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d49f1e0-fc5a-44d9-a21b-60cad97a85b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1da6903e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:f4:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812640, 'reachable_time': 33851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 370614, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.878 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[07ac826d-0173-48f7-be27-14898741835c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0d:f40e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812640, 'tstamp': 812640}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 370617, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.895 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fe24f0f8-5f8e-4905-9966-60b0a9a23d4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1da6903e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:f4:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812640, 'reachable_time': 33851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 370619, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:32:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:22.938 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[75302ff5-6807-4074-b064-c794f4154f1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:22 np0005473739 podman[370559]: 2025-10-07 14:32:22.950169723 +0000 UTC m=+0.318392308 container init 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:32:22 np0005473739 podman[370559]: 2025-10-07 14:32:22.957361466 +0000 UTC m=+0.325584021 container start 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 10:32:22 np0005473739 zen_perlman[370615]: 167 167
Oct  7 10:32:22 np0005473739 systemd[1]: libpod-5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65.scope: Deactivated successfully.
Oct  7 10:32:22 np0005473739 podman[370559]: 2025-10-07 14:32:22.964853296 +0000 UTC m=+0.333075841 container attach 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True)
Oct  7 10:32:22 np0005473739 podman[370559]: 2025-10-07 14:32:22.965268507 +0000 UTC m=+0.333491062 container died 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.008 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9713c7d2-596f-412b-b4b6-32986cc1baa0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.010 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da6903e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.010 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.011 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1da6903e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.013 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:23 np0005473739 NetworkManager[44949]: <info>  [1759847543.0136] manager: (tap1da6903e-10): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/428)
Oct  7 10:32:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6985ba1244852fc3e900502d24bff83ad71a98f6712c748b5c8fd76b72ac6803-merged.mount: Deactivated successfully.
Oct  7 10:32:23 np0005473739 kernel: tap1da6903e-10: entered promiscuous mode
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.019 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1da6903e-10, col_values=(('external_ids', {'iface-id': 'ac55d93f-5af1-4917-a7be-679169a02318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:23Z|01057|binding|INFO|Releasing lport ac55d93f-5af1-4917-a7be-679169a02318 from this chassis (sb_readonly=0)
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.022 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/1da6903e-17a3-4ac8-b5a0-50ed4bf377bc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/1da6903e-17a3-4ac8-b5a0-50ed4bf377bc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:32:23 np0005473739 podman[370559]: 2025-10-07 14:32:23.023897424 +0000 UTC m=+0.392119979 container remove 5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_perlman, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.023 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b74c2a1d-9b9c-4066-a9d9-b2b3f310ff78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.026 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/1da6903e-17a3-4ac8-b5a0-50ed4bf377bc.pid.haproxy
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:32:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:23.027 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'env', 'PROCESS_TAG=haproxy-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/1da6903e-17a3-4ac8-b5a0-50ed4bf377bc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:23 np0005473739 systemd[1]: libpod-conmon-5eed8adefc907d33a5239057c16cddecf0fa94b38f3e905e0ee1eaed18badb65.scope: Deactivated successfully.
Oct  7 10:32:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:32:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:32:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:32:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:32:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:32:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:32:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:32:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:32:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:32:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:32:23 np0005473739 podman[370649]: 2025-10-07 14:32:23.235007435 +0000 UTC m=+0.052408012 container create a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 10:32:23 np0005473739 systemd[1]: Started libpod-conmon-a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5.scope.
Oct  7 10:32:23 np0005473739 podman[370649]: 2025-10-07 14:32:23.209236856 +0000 UTC m=+0.026637453 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:32:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:32:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf53702f677fb9cff2454f5dc0947bbab5e8b2e976fd66e62c52be63c8e67c6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf53702f677fb9cff2454f5dc0947bbab5e8b2e976fd66e62c52be63c8e67c6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf53702f677fb9cff2454f5dc0947bbab5e8b2e976fd66e62c52be63c8e67c6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4bf53702f677fb9cff2454f5dc0947bbab5e8b2e976fd66e62c52be63c8e67c6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:23 np0005473739 podman[370649]: 2025-10-07 14:32:23.340199766 +0000 UTC m=+0.157600373 container init a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 10:32:23 np0005473739 podman[370649]: 2025-10-07 14:32:23.351525458 +0000 UTC m=+0.168926035 container start a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:32:23 np0005473739 podman[370649]: 2025-10-07 14:32:23.355853824 +0000 UTC m=+0.173254401 container attach a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 10:32:23 np0005473739 podman[370723]: 2025-10-07 14:32:23.41935581 +0000 UTC m=+0.056937871 container create 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  7 10:32:23 np0005473739 systemd[1]: Started libpod-conmon-008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b.scope.
Oct  7 10:32:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:32:23 np0005473739 podman[370723]: 2025-10-07 14:32:23.391711012 +0000 UTC m=+0.029293103 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:32:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2e23988a13804a7caa057b6ca3196289d2a8449fff2e969f7ade775e8fde0162/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:23 np0005473739 podman[370723]: 2025-10-07 14:32:23.501340301 +0000 UTC m=+0.138922362 container init 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:32:23 np0005473739 podman[370723]: 2025-10-07 14:32:23.508476272 +0000 UTC m=+0.146058333 container start 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:32:23 np0005473739 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [NOTICE]   (370753) : New worker (370755) forked
Oct  7 10:32:23 np0005473739 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [NOTICE]   (370753) : Loading success.
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.782 2 DEBUG nova.compute.manager [req-c0a1801d-2d86-4130-be97-536c0e1050b0 req-45b49a4f-6b9b-4128-a057-ffa1ad108d35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.783 2 DEBUG oslo_concurrency.lockutils [req-c0a1801d-2d86-4130-be97-536c0e1050b0 req-45b49a4f-6b9b-4128-a057-ffa1ad108d35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.783 2 DEBUG oslo_concurrency.lockutils [req-c0a1801d-2d86-4130-be97-536c0e1050b0 req-45b49a4f-6b9b-4128-a057-ffa1ad108d35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.783 2 DEBUG oslo_concurrency.lockutils [req-c0a1801d-2d86-4130-be97-536c0e1050b0 req-45b49a4f-6b9b-4128-a057-ffa1ad108d35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.784 2 DEBUG nova.compute.manager [req-c0a1801d-2d86-4130-be97-536c0e1050b0 req-45b49a4f-6b9b-4128-a057-ffa1ad108d35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Processing event network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.863 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847543.8633685, 0af0082a-1adc-40e7-b254-88e03182e802 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.864 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] VM Started (Lifecycle Event)#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.866 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.869 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.873 2 INFO nova.virt.libvirt.driver [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Instance spawned successfully.#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.874 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.881 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.885 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.892 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.892 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.893 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.893 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.893 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.894 2 DEBUG nova.virt.libvirt.driver [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.902 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.902 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847543.8636093, 0af0082a-1adc-40e7-b254-88e03182e802 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.903 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.923 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.927 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847543.868457, 0af0082a-1adc-40e7-b254-88e03182e802 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.927 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.953 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.957 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.961 2 INFO nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Took 8.78 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.961 2 DEBUG nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:32:23 np0005473739 nova_compute[259550]: 2025-10-07 14:32:23.976 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:32:24 np0005473739 nova_compute[259550]: 2025-10-07 14:32:24.020 2 INFO nova.compute.manager [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Took 10.47 seconds to build instance.#033[00m
Oct  7 10:32:24 np0005473739 nova_compute[259550]: 2025-10-07 14:32:24.035 2 DEBUG oslo_concurrency.lockutils [None req-adeb3f33-b8bb-460a-96e1-7ad97bd6f44a d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.691s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:24 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 10:32:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2099: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct  7 10:32:24 np0005473739 goofy_payne[370671]: {
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "osd_id": 2,
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "type": "bluestore"
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:    },
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "osd_id": 1,
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "type": "bluestore"
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:    },
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "osd_id": 0,
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:        "type": "bluestore"
Oct  7 10:32:24 np0005473739 goofy_payne[370671]:    }
Oct  7 10:32:24 np0005473739 goofy_payne[370671]: }
Oct  7 10:32:24 np0005473739 systemd[1]: libpod-a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5.scope: Deactivated successfully.
Oct  7 10:32:24 np0005473739 systemd[1]: libpod-a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5.scope: Consumed 1.038s CPU time.
Oct  7 10:32:24 np0005473739 podman[370792]: 2025-10-07 14:32:24.450210436 +0000 UTC m=+0.022964785 container died a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:32:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4bf53702f677fb9cff2454f5dc0947bbab5e8b2e976fd66e62c52be63c8e67c6-merged.mount: Deactivated successfully.
Oct  7 10:32:24 np0005473739 podman[370792]: 2025-10-07 14:32:24.504801065 +0000 UTC m=+0.077555404 container remove a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_payne, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:32:24 np0005473739 systemd[1]: libpod-conmon-a47e44911f213d93eb021984bda87b9d45478beff7ae572b4b20d1c2095e60a5.scope: Deactivated successfully.
Oct  7 10:32:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:32:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:32:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:32:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:32:24 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a1b17141-a7fe-41c2-b9ff-ed98ab9cf51e does not exist
Oct  7 10:32:24 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 72b4d39f-516c-445c-8e0b-7d775f77b442 does not exist
Oct  7 10:32:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:32:25 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:32:26 np0005473739 nova_compute[259550]: 2025-10-07 14:32:26.202 2 DEBUG nova.compute.manager [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:32:26 np0005473739 nova_compute[259550]: 2025-10-07 14:32:26.203 2 DEBUG oslo_concurrency.lockutils [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:26 np0005473739 nova_compute[259550]: 2025-10-07 14:32:26.203 2 DEBUG oslo_concurrency.lockutils [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:26 np0005473739 nova_compute[259550]: 2025-10-07 14:32:26.204 2 DEBUG oslo_concurrency.lockutils [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:26 np0005473739 nova_compute[259550]: 2025-10-07 14:32:26.204 2 DEBUG nova.compute.manager [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] No waiting events found dispatching network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:32:26 np0005473739 nova_compute[259550]: 2025-10-07 14:32:26.204 2 WARNING nova.compute.manager [req-55732c70-bc13-4943-bcf4-c4fa22c23233 req-d7505817-780c-466d-8473-41e6fd603073 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received unexpected event network-vif-plugged-52f43128-c899-4d76-9e65-c99941c834d4 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:32:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2100: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 408 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct  7 10:32:26 np0005473739 nova_compute[259550]: 2025-10-07 14:32:26.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:27 np0005473739 nova_compute[259550]: 2025-10-07 14:32:27.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2101: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 402 KiB/s rd, 1.7 MiB/s wr, 38 op/s
Oct  7 10:32:29 np0005473739 nova_compute[259550]: 2025-10-07 14:32:29.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:29 np0005473739 NetworkManager[44949]: <info>  [1759847549.1786] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/429)
Oct  7 10:32:29 np0005473739 NetworkManager[44949]: <info>  [1759847549.1797] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/430)
Oct  7 10:32:29 np0005473739 nova_compute[259550]: 2025-10-07 14:32:29.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:29Z|01058|binding|INFO|Releasing lport ac55d93f-5af1-4917-a7be-679169a02318 from this chassis (sb_readonly=0)
Oct  7 10:32:29 np0005473739 nova_compute[259550]: 2025-10-07 14:32:29.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:29.390 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=32, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=31) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:32:29 np0005473739 nova_compute[259550]: 2025-10-07 14:32:29.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:29.391 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:32:29 np0005473739 nova_compute[259550]: 2025-10-07 14:32:29.955 2 DEBUG nova.compute.manager [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-changed-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:32:29 np0005473739 nova_compute[259550]: 2025-10-07 14:32:29.956 2 DEBUG nova.compute.manager [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing instance network info cache due to event network-changed-52f43128-c899-4d76-9e65-c99941c834d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:32:29 np0005473739 nova_compute[259550]: 2025-10-07 14:32:29.956 2 DEBUG oslo_concurrency.lockutils [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:32:29 np0005473739 nova_compute[259550]: 2025-10-07 14:32:29.956 2 DEBUG oslo_concurrency.lockutils [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:32:29 np0005473739 nova_compute[259550]: 2025-10-07 14:32:29.957 2 DEBUG nova.network.neutron [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:32:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2102: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 89 op/s
Oct  7 10:32:31 np0005473739 nova_compute[259550]: 2025-10-07 14:32:31.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:31 np0005473739 nova_compute[259550]: 2025-10-07 14:32:31.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:32 np0005473739 nova_compute[259550]: 2025-10-07 14:32:32.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2103: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 354 KiB/s wr, 75 op/s
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:32:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:32:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:32:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508426753' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:32:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:32:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/508426753' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:32:32 np0005473739 nova_compute[259550]: 2025-10-07 14:32:32.709 2 DEBUG nova.network.neutron [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updated VIF entry in instance network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:32:32 np0005473739 nova_compute[259550]: 2025-10-07 14:32:32.709 2 DEBUG nova.network.neutron [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:32:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:33 np0005473739 nova_compute[259550]: 2025-10-07 14:32:33.610 2 DEBUG oslo_concurrency.lockutils [req-d32698d0-2998-4004-9135-d4fd5e62327c req-ee7c08c0-3d24-49a6-bdbe-0f444cb89655 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:32:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2104: 305 pgs: 305 active+clean; 88 MiB data, 753 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:32:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2105: 305 pgs: 305 active+clean; 96 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 513 KiB/s wr, 75 op/s
Oct  7 10:32:36 np0005473739 nova_compute[259550]: 2025-10-07 14:32:36.249 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:36 np0005473739 nova_compute[259550]: 2025-10-07 14:32:36.249 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:36.393 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '32'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:36 np0005473739 nova_compute[259550]: 2025-10-07 14:32:36.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:36 np0005473739 nova_compute[259550]: 2025-10-07 14:32:36.597 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:32:36 np0005473739 nova_compute[259550]: 2025-10-07 14:32:36.900 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:36 np0005473739 nova_compute[259550]: 2025-10-07 14:32:36.901 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:36 np0005473739 nova_compute[259550]: 2025-10-07 14:32:36.911 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:32:36 np0005473739 nova_compute[259550]: 2025-10-07 14:32:36.912 2 INFO nova.compute.claims [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.067 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:37 np0005473739 podman[370860]: 2025-10-07 14:32:37.124379984 +0000 UTC m=+0.091152767 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:37 np0005473739 podman[370861]: 2025-10-07 14:32:37.15043767 +0000 UTC m=+0.124264091 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:32:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:37Z|00110|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:52:17:8c 10.100.0.13
Oct  7 10:32:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:37Z|00111|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:52:17:8c 10.100.0.13
Oct  7 10:32:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:32:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/687542214' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.529 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.534 2 DEBUG nova.compute.provider_tree [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.550 2 DEBUG nova.scheduler.client.report [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.570 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.669s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.571 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.644 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.645 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.667 2 INFO nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.686 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.774 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.776 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.777 2 INFO nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Creating image(s)#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.797 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.816 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.841 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.845 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.935 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.936 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.937 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.937 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.961 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:37 np0005473739 nova_compute[259550]: 2025-10-07 14:32:37.966 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2106: 305 pgs: 305 active+clean; 96 MiB data, 759 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 513 KiB/s wr, 62 op/s
Oct  7 10:32:38 np0005473739 nova_compute[259550]: 2025-10-07 14:32:38.433 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:38 np0005473739 nova_compute[259550]: 2025-10-07 14:32:38.485 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:32:38 np0005473739 nova_compute[259550]: 2025-10-07 14:32:38.578 2 DEBUG nova.objects.instance [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid 08184d43-2bd1-4f46-9bdf-63437c87a8ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:32:38 np0005473739 nova_compute[259550]: 2025-10-07 14:32:38.582 2 DEBUG nova.policy [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c505d04148e44b8b93ceab0e3cedef4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:32:38 np0005473739 nova_compute[259550]: 2025-10-07 14:32:38.598 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:32:38 np0005473739 nova_compute[259550]: 2025-10-07 14:32:38.598 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Ensure instance console log exists: /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:32:38 np0005473739 nova_compute[259550]: 2025-10-07 14:32:38.599 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:38 np0005473739 nova_compute[259550]: 2025-10-07 14:32:38.599 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:38 np0005473739 nova_compute[259550]: 2025-10-07 14:32:38.599 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2107: 305 pgs: 305 active+clean; 144 MiB data, 778 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 3.2 MiB/s wr, 117 op/s
Oct  7 10:32:41 np0005473739 nova_compute[259550]: 2025-10-07 14:32:41.242 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Successfully created port: 70d147cb-a82e-4f80-88cf-016ec19f5a2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:32:41 np0005473739 nova_compute[259550]: 2025-10-07 14:32:41.536 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:42 np0005473739 nova_compute[259550]: 2025-10-07 14:32:42.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:42 np0005473739 nova_compute[259550]: 2025-10-07 14:32:42.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2108: 305 pgs: 305 active+clean; 167 MiB data, 787 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 89 op/s
Oct  7 10:32:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:43 np0005473739 nova_compute[259550]: 2025-10-07 14:32:43.323 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Successfully updated port: 70d147cb-a82e-4f80-88cf-016ec19f5a2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:32:43 np0005473739 nova_compute[259550]: 2025-10-07 14:32:43.344 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:32:43 np0005473739 nova_compute[259550]: 2025-10-07 14:32:43.345 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:32:43 np0005473739 nova_compute[259550]: 2025-10-07 14:32:43.345 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:32:43 np0005473739 nova_compute[259550]: 2025-10-07 14:32:43.465 2 DEBUG nova.compute.manager [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:32:43 np0005473739 nova_compute[259550]: 2025-10-07 14:32:43.465 2 DEBUG nova.compute.manager [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing instance network info cache due to event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:32:43 np0005473739 nova_compute[259550]: 2025-10-07 14:32:43.465 2 DEBUG oslo_concurrency.lockutils [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:32:43 np0005473739 nova_compute[259550]: 2025-10-07 14:32:43.581 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:32:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2109: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct  7 10:32:44 np0005473739 nova_compute[259550]: 2025-10-07 14:32:44.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:32:44 np0005473739 nova_compute[259550]: 2025-10-07 14:32:44.996 2 DEBUG nova.network.neutron [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updating instance_info_cache with network_info: [{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.246 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.246 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Instance network_info: |[{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.247 2 DEBUG oslo_concurrency.lockutils [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.247 2 DEBUG nova.network.neutron [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.250 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Start _get_guest_xml network_info=[{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.256 2 WARNING nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.261 2 DEBUG nova.virt.libvirt.host [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.262 2 DEBUG nova.virt.libvirt.host [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.265 2 DEBUG nova.virt.libvirt.host [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.266 2 DEBUG nova.virt.libvirt.host [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.266 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.266 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.267 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.267 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.267 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.267 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.267 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.268 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.268 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.268 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.268 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.268 2 DEBUG nova.virt.hardware [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.271 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:32:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1813904550' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.709 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.731 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.734 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:32:45 np0005473739 nova_compute[259550]: 2025-10-07 14:32:45.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:32:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:32:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4281085148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.175 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.178 2 DEBUG nova.virt.libvirt.vif [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1785477500',display_name='tempest-TestNetworkAdvancedServerOps-server-1785477500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1785477500',id=106,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOmKPVgdwoa2PL66bDGbFu3FUOEtM4uSjrJKdQm0+Dh5hnoywovo36nzSmeP/PaULcFe/lE8zWO3xuub67iudTqw2d8wzPnJMGL68v9G5hFZdOFPK6yXS41BfolaqLZgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1280135326',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-pm6ky9yn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:37Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=08184d43-2bd1-4f46-9bdf-63437c87a8ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.178 2 DEBUG nova.network.os_vif_util [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.179 2 DEBUG nova.network.os_vif_util [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.181 2 DEBUG nova.objects.instance [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 08184d43-2bd1-4f46-9bdf-63437c87a8ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:32:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2110: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 343 KiB/s rd, 3.9 MiB/s wr, 90 op/s
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.253 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  <uuid>08184d43-2bd1-4f46-9bdf-63437c87a8ad</uuid>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  <name>instance-0000006a</name>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1785477500</nova:name>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:32:45</nova:creationTime>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <nova:port uuid="70d147cb-a82e-4f80-88cf-016ec19f5a2c">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <entry name="serial">08184d43-2bd1-4f46-9bdf-63437c87a8ad</entry>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <entry name="uuid">08184d43-2bd1-4f46-9bdf-63437c87a8ad</entry>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:96:b2:f9"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <target dev="tap70d147cb-a8"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/console.log" append="off"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:32:46 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:32:46 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:32:46 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:32:46 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.254 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Preparing to wait for external event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.254 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.255 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.256 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.257 2 DEBUG nova.virt.libvirt.vif [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1785477500',display_name='tempest-TestNetworkAdvancedServerOps-server-1785477500',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1785477500',id=106,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOmKPVgdwoa2PL66bDGbFu3FUOEtM4uSjrJKdQm0+Dh5hnoywovo36nzSmeP/PaULcFe/lE8zWO3xuub67iudTqw2d8wzPnJMGL68v9G5hFZdOFPK6yXS41BfolaqLZgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1280135326',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-pm6ky9yn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:37Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=08184d43-2bd1-4f46-9bdf-63437c87a8ad,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.257 2 DEBUG nova.network.os_vif_util [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.258 2 DEBUG nova.network.os_vif_util [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.258 2 DEBUG os_vif [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.259 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.260 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70d147cb-a8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap70d147cb-a8, col_values=(('external_ids', {'iface-id': '70d147cb-a82e-4f80-88cf-016ec19f5a2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:b2:f9', 'vm-uuid': '08184d43-2bd1-4f46-9bdf-63437c87a8ad'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:46 np0005473739 NetworkManager[44949]: <info>  [1759847566.2677] manager: (tap70d147cb-a8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/431)
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.277 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.279 2 INFO os_vif [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8')#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.404 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.404 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.405 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:96:b2:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.405 2 INFO nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Using config drive#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.429 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.707 2 INFO nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Creating config drive at /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.712 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx9p9vm03 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.861 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpx9p9vm03" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.890 2 DEBUG nova.storage.rbd_utils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:46 np0005473739 nova_compute[259550]: 2025-10-07 14:32:46.894 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.063 2 DEBUG oslo_concurrency.processutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config 08184d43-2bd1-4f46-9bdf-63437c87a8ad_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.064 2 INFO nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Deleting local config drive /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad/disk.config because it was imported into RBD.#033[00m
Oct  7 10:32:47 np0005473739 NetworkManager[44949]: <info>  [1759847567.1054] manager: (tap70d147cb-a8): new Tun device (/org/freedesktop/NetworkManager/Devices/432)
Oct  7 10:32:47 np0005473739 kernel: tap70d147cb-a8: entered promiscuous mode
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:47Z|01059|binding|INFO|Claiming lport 70d147cb-a82e-4f80-88cf-016ec19f5a2c for this chassis.
Oct  7 10:32:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:47Z|01060|binding|INFO|70d147cb-a82e-4f80-88cf-016ec19f5a2c: Claiming fa:16:3e:96:b2:f9 10.100.0.3
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.116 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:b2:f9 10.100.0.3'], port_security=['fa:16:3e:96:b2:f9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '08184d43-2bd1-4f46-9bdf-63437c87a8ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48fde043-e83e-45b7-ab13-37b209fd562c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8bbba0f-6faa-43dc-9c06-619017376051', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea6d7f11-49ef-4d9e-9bd1-b8e683eae537, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=70d147cb-a82e-4f80-88cf-016ec19f5a2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.118 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 70d147cb-a82e-4f80-88cf-016ec19f5a2c in datapath 48fde043-e83e-45b7-ab13-37b209fd562c bound to our chassis#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.120 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 48fde043-e83e-45b7-ab13-37b209fd562c#033[00m
Oct  7 10:32:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:47Z|01061|binding|INFO|Setting lport 70d147cb-a82e-4f80-88cf-016ec19f5a2c ovn-installed in OVS
Oct  7 10:32:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:47Z|01062|binding|INFO|Setting lport 70d147cb-a82e-4f80-88cf-016ec19f5a2c up in Southbound
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.133 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1d12dff6-116d-455c-9c4e-fb3d88d075ca]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.134 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap48fde043-e1 in ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.136 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap48fde043-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.136 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[77bba6db-7741-4450-bd50-ac3910f6c4fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.137 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7cfbcb86-7b79-4afb-a867-fbf253bf6649]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:47 np0005473739 systemd-udevd[371229]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.149 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6dddabd4-926f-43f0-9f33-7688b96cc1ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 systemd-machined[214580]: New machine qemu-133-instance-0000006a.
Oct  7 10:32:47 np0005473739 NetworkManager[44949]: <info>  [1759847567.1580] device (tap70d147cb-a8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:32:47 np0005473739 NetworkManager[44949]: <info>  [1759847567.1595] device (tap70d147cb-a8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:32:47 np0005473739 systemd[1]: Started Virtual Machine qemu-133-instance-0000006a.
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.173 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3af77505-2ac1-499f-b03a-acc7103f230a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.205 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[94967e97-0dca-4da6-b004-9395049a2877]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 systemd-udevd[371232]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.212 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be912a1f-4153-46bc-b260-8eb50757d58e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 NetworkManager[44949]: <info>  [1759847567.2131] manager: (tap48fde043-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/433)
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.244 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[332538bb-f9c5-4da7-a4a2-d00110c9f748]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.247 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0bbe670b-e948-4fe6-bebc-db7191013f51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 NetworkManager[44949]: <info>  [1759847567.2721] device (tap48fde043-e0): carrier: link connected
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.278 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e0a645a7-5637-4a8e-a072-758903fae7e3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.300 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e09228e7-b00e-40cf-b673-1ca4fa1f5eb9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48fde043-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815083, 'reachable_time': 32833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371260, 'error': None, 'target': 'ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.319 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[602d855f-d783-4f60-980e-2ec23767839c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:adca'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 815083, 'tstamp': 815083}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371261, 'error': None, 'target': 'ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.321 2 DEBUG nova.compute.manager [req-c56d2b3a-ee9c-4ead-b837-95b1729a0584 req-50fbf4b3-91b7-4d65-98d7-93c5361d12de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.322 2 DEBUG oslo_concurrency.lockutils [req-c56d2b3a-ee9c-4ead-b837-95b1729a0584 req-50fbf4b3-91b7-4d65-98d7-93c5361d12de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.322 2 DEBUG oslo_concurrency.lockutils [req-c56d2b3a-ee9c-4ead-b837-95b1729a0584 req-50fbf4b3-91b7-4d65-98d7-93c5361d12de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.322 2 DEBUG oslo_concurrency.lockutils [req-c56d2b3a-ee9c-4ead-b837-95b1729a0584 req-50fbf4b3-91b7-4d65-98d7-93c5361d12de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.323 2 DEBUG nova.compute.manager [req-c56d2b3a-ee9c-4ead-b837-95b1729a0584 req-50fbf4b3-91b7-4d65-98d7-93c5361d12de 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Processing event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.344 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f2d88bf-61d7-4328-9e54-9606435cfe42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap48fde043-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:ad:ca'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 309], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815083, 'reachable_time': 32833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 371262, 'error': None, 'target': 'ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.375 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[88b13b32-ad09-4781-868f-962ee6243d4c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.438 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f444a7e-1423-411a-a769-90e42003aaf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.439 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48fde043-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.439 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.440 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap48fde043-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:47 np0005473739 NetworkManager[44949]: <info>  [1759847567.4426] manager: (tap48fde043-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/434)
Oct  7 10:32:47 np0005473739 kernel: tap48fde043-e0: entered promiscuous mode
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.446 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap48fde043-e0, col_values=(('external_ids', {'iface-id': 'd62984c1-9132-4390-b975-fecc00942743'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:32:47Z|01063|binding|INFO|Releasing lport d62984c1-9132-4390-b975-fecc00942743 from this chassis (sb_readonly=0)
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:47 np0005473739 nova_compute[259550]: 2025-10-07 14:32:47.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.462 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/48fde043-e83e-45b7-ab13-37b209fd562c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/48fde043-e83e-45b7-ab13-37b209fd562c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.463 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a819e4c4-665f-4004-8785-01adb05646b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.464 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-48fde043-e83e-45b7-ab13-37b209fd562c
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/48fde043-e83e-45b7-ab13-37b209fd562c.pid.haproxy
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 48fde043-e83e-45b7-ab13-37b209fd562c
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:32:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:32:47.464 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c', 'env', 'PROCESS_TAG=haproxy-48fde043-e83e-45b7-ab13-37b209fd562c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/48fde043-e83e-45b7-ab13-37b209fd562c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:32:47 np0005473739 podman[371336]: 2025-10-07 14:32:47.819862312 +0000 UTC m=+0.052415182 container create 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:32:47 np0005473739 systemd[1]: Started libpod-conmon-2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7.scope.
Oct  7 10:32:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:32:47 np0005473739 podman[371336]: 2025-10-07 14:32:47.791297938 +0000 UTC m=+0.023850838 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:32:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4cc61ebfbc859062541ed2a2564be14d195084d66aefaf18d107a00c5495bc9e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:32:47 np0005473739 podman[371336]: 2025-10-07 14:32:47.901877383 +0000 UTC m=+0.134430273 container init 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:32:47 np0005473739 podman[371336]: 2025-10-07 14:32:47.914183762 +0000 UTC m=+0.146736632 container start 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:32:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:47 np0005473739 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [NOTICE]   (371355) : New worker (371357) forked
Oct  7 10:32:47 np0005473739 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [NOTICE]   (371355) : Loading success.
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.013 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847568.0130007, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.013 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Started (Lifecycle Event)#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.016 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.020 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.026 2 INFO nova.virt.libvirt.driver [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Instance spawned successfully.#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.026 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.036 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.042 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.058 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.058 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.059 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.059 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.060 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.061 2 DEBUG nova.virt.libvirt.driver [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.066 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.067 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847568.0131211, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.067 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.091 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.096 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847568.0202394, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.097 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.115 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.119 2 INFO nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Took 10.34 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.120 2 DEBUG nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.121 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.147 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.180 2 INFO nova.compute.manager [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Took 11.51 seconds to build instance.#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.194 2 DEBUG oslo_concurrency.lockutils [None req-772efadd-8b18-42c1-8cc1-5e089b811185 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2111: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 318 KiB/s rd, 3.4 MiB/s wr, 79 op/s
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.524 2 DEBUG nova.network.neutron [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updated VIF entry in instance network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.525 2 DEBUG nova.network.neutron [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updating instance_info_cache with network_info: [{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.541 2 DEBUG oslo_concurrency.lockutils [req-361f8e2a-6f7b-46b2-b31c-0e2d775463c9 req-e74d3bf6-c1cd-4f43-b933-85a3032c0879 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:32:48 np0005473739 nova_compute[259550]: 2025-10-07 14:32:48.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.418 2 DEBUG nova.compute.manager [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.419 2 DEBUG oslo_concurrency.lockutils [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.419 2 DEBUG oslo_concurrency.lockutils [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.419 2 DEBUG oslo_concurrency.lockutils [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.420 2 DEBUG nova.compute.manager [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] No waiting events found dispatching network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.420 2 WARNING nova.compute.manager [req-f5b1b5a6-8e6b-47ec-beef-85913c5067ba req-bf5aca01-4b01-4a84-afbb-5228385020f4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received unexpected event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c for instance with vm_state active and task_state None.
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.635 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.635 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.667 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.743 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.743 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.751 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.751 2 INFO nova.compute.claims [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:32:49 np0005473739 nova_compute[259550]: 2025-10-07 14:32:49.902 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:32:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2112: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 3.4 MiB/s wr, 90 op/s
Oct  7 10:32:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:32:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4104398649' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.415 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.421 2 DEBUG nova.compute.provider_tree [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.441 2 DEBUG nova.scheduler.client.report [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.478 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.479 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.555 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.556 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.582 2 INFO nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.605 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.732 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.735 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.736 2 INFO nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Creating image(s)
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.761 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.785 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.809 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.814 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.898 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.899 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.900 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.900 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.924 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.928 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d3fa3175-2379-4c66-9d83-0a37f5559db8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.973 2 DEBUG nova.policy [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:32:50 np0005473739 nova_compute[259550]: 2025-10-07 14:32:50.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:32:51 np0005473739 nova_compute[259550]: 2025-10-07 14:32:51.244 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d3fa3175-2379-4c66-9d83-0a37f5559db8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:32:51 np0005473739 nova_compute[259550]: 2025-10-07 14:32:51.276 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:32:51 np0005473739 nova_compute[259550]: 2025-10-07 14:32:51.324 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:32:51 np0005473739 nova_compute[259550]: 2025-10-07 14:32:51.427 2 DEBUG nova.objects.instance [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid d3fa3175-2379-4c66-9d83-0a37f5559db8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:32:51 np0005473739 nova_compute[259550]: 2025-10-07 14:32:51.513 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:32:51 np0005473739 nova_compute[259550]: 2025-10-07 14:32:51.514 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Ensure instance console log exists: /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:32:51 np0005473739 nova_compute[259550]: 2025-10-07 14:32:51.514 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:32:51 np0005473739 nova_compute[259550]: 2025-10-07 14:32:51.515 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:32:51 np0005473739 nova_compute[259550]: 2025-10-07 14:32:51.515 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:32:52 np0005473739 nova_compute[259550]: 2025-10-07 14:32:52.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:32:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2113: 305 pgs: 305 active+clean; 167 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 859 KiB/s rd, 769 KiB/s wr, 61 op/s
Oct  7 10:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:32:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:32:52 np0005473739 nova_compute[259550]: 2025-10-07 14:32:52.811 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Successfully created port: c6894b29-6b20-445c-991a-9aefefb3823c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:32:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:53 np0005473739 podman[371554]: 2025-10-07 14:32:53.078967217 +0000 UTC m=+0.067989118 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  7 10:32:53 np0005473739 podman[371555]: 2025-10-07 14:32:53.084193796 +0000 UTC m=+0.068342306 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:32:53 np0005473739 nova_compute[259550]: 2025-10-07 14:32:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.143 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.144 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.145 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.145 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.146 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:32:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2114: 305 pgs: 305 active+clean; 208 MiB data, 817 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 MiB/s wr, 100 op/s
Oct  7 10:32:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:32:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1279964445' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.631 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.719 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.720 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000069 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.724 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.724 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.896 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.897 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3466MB free_disk=59.90550231933594GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.898 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.898 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.987 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 0af0082a-1adc-40e7-b254-88e03182e802 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.988 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 08184d43-2bd1-4f46-9bdf-63437c87a8ad actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.988 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance d3fa3175-2379-4c66-9d83-0a37f5559db8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.988 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:32:54 np0005473739 nova_compute[259550]: 2025-10-07 14:32:54.989 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.061 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:32:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1293899815' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.499 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.504 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.521 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.547 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.547 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.650s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.633 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Successfully updated port: c6894b29-6b20-445c-991a-9aefefb3823c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.653 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.654 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.654 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.705 2 DEBUG nova.compute.manager [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.705 2 DEBUG nova.compute.manager [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing instance network info cache due to event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.706 2 DEBUG oslo_concurrency.lockutils [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.706 2 DEBUG oslo_concurrency.lockutils [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.706 2 DEBUG nova.network.neutron [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:32:55 np0005473739 nova_compute[259550]: 2025-10-07 14:32:55.990 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:32:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2115: 305 pgs: 305 active+clean; 213 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  7 10:32:56 np0005473739 nova_compute[259550]: 2025-10-07 14:32:56.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:57 np0005473739 nova_compute[259550]: 2025-10-07 14:32:57.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:57 np0005473739 nova_compute[259550]: 2025-10-07 14:32:57.858 2 DEBUG nova.compute.manager [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:32:57 np0005473739 nova_compute[259550]: 2025-10-07 14:32:57.859 2 DEBUG nova.compute.manager [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing instance network info cache due to event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:32:57 np0005473739 nova_compute[259550]: 2025-10-07 14:32:57.859 2 DEBUG oslo_concurrency.lockutils [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:32:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.011 2 DEBUG nova.network.neutron [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updating instance_info_cache with network_info: [{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.019 2 DEBUG nova.network.neutron [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updated VIF entry in instance network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.020 2 DEBUG nova.network.neutron [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updating instance_info_cache with network_info: [{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.059 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.059 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Instance network_info: |[{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.060 2 DEBUG oslo_concurrency.lockutils [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.060 2 DEBUG nova.network.neutron [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.064 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Start _get_guest_xml network_info=[{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.070 2 WARNING nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.077 2 DEBUG nova.virt.libvirt.host [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.077 2 DEBUG nova.virt.libvirt.host [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.083 2 DEBUG nova.virt.libvirt.host [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.084 2 DEBUG nova.virt.libvirt.host [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.084 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.084 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.085 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.085 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.086 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.086 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.086 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.087 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.087 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.087 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.088 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.088 2 DEBUG nova.virt.hardware [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.091 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.134 2 DEBUG oslo_concurrency.lockutils [req-3e74b534-756d-498a-a6b2-513e596e7956 req-07aee861-76f5-4410-88f8-b23a60107dbc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:32:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2116: 305 pgs: 305 active+clean; 213 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:32:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2669948632' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.550 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.570 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:58 np0005473739 nova_compute[259550]: 2025-10-07 14:32:58.573 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:32:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:32:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/786880835' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.039 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.042 2 DEBUG nova.virt.libvirt.vif [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1497631187',display_name='tempest-TestGettingAddress-server-1497631187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1497631187',id=107,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-k6d5uiwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:50Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=d3fa3175-2379-4c66-9d83-0a37f5559db8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.042 2 DEBUG nova.network.os_vif_util [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.043 2 DEBUG nova.network.os_vif_util [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.045 2 DEBUG nova.objects.instance [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid d3fa3175-2379-4c66-9d83-0a37f5559db8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.061 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  <uuid>d3fa3175-2379-4c66-9d83-0a37f5559db8</uuid>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  <name>instance-0000006b</name>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-1497631187</nova:name>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:32:58</nova:creationTime>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <nova:port uuid="c6894b29-6b20-445c-991a-9aefefb3823c">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe28:d160" ipVersion="6"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <entry name="serial">d3fa3175-2379-4c66-9d83-0a37f5559db8</entry>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <entry name="uuid">d3fa3175-2379-4c66-9d83-0a37f5559db8</entry>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d3fa3175-2379-4c66-9d83-0a37f5559db8_disk">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:28:d1:60"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <target dev="tapc6894b29-6b"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/console.log" append="off"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:32:59 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:32:59 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:32:59 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:32:59 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.063 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Preparing to wait for external event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.063 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.064 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.064 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.065 2 DEBUG nova.virt.libvirt.vif [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:32:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1497631187',display_name='tempest-TestGettingAddress-server-1497631187',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1497631187',id=107,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-k6d5uiwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:32:50Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=d3fa3175-2379-4c66-9d83-0a37f5559db8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.065 2 DEBUG nova.network.os_vif_util [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.066 2 DEBUG nova.network.os_vif_util [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.066 2 DEBUG os_vif [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.067 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.068 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.070 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc6894b29-6b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.071 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc6894b29-6b, col_values=(('external_ids', {'iface-id': 'c6894b29-6b20-445c-991a-9aefefb3823c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:d1:60', 'vm-uuid': 'd3fa3175-2379-4c66-9d83-0a37f5559db8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:59 np0005473739 NetworkManager[44949]: <info>  [1759847579.0738] manager: (tapc6894b29-6b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/435)
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.081 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.082 2 INFO os_vif [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b')#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.209 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.209 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.209 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:28:d1:60, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.210 2 INFO nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Using config drive#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.231 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.547 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.548 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.549 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:32:59 np0005473739 nova_compute[259550]: 2025-10-07 14:32:59.580 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.065 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.066 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.067 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.140 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.141 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.141 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.141 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 0af0082a-1adc-40e7-b254-88e03182e802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.160 2 INFO nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Creating config drive at /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.166 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcemc4rjp execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:00Z|00112|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:b2:f9 10.100.0.3
Oct  7 10:33:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:00Z|00113|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:b2:f9 10.100.0.3
Oct  7 10:33:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2117: 305 pgs: 305 active+clean; 213 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.312 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcemc4rjp" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.349 2 DEBUG nova.storage.rbd_utils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.353 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.563 2 DEBUG oslo_concurrency.processutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config d3fa3175-2379-4c66-9d83-0a37f5559db8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.210s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.564 2 INFO nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Deleting local config drive /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8/disk.config because it was imported into RBD.#033[00m
Oct  7 10:33:00 np0005473739 kernel: tapc6894b29-6b: entered promiscuous mode
Oct  7 10:33:00 np0005473739 NetworkManager[44949]: <info>  [1759847580.6142] manager: (tapc6894b29-6b): new Tun device (/org/freedesktop/NetworkManager/Devices/436)
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:00Z|01064|binding|INFO|Claiming lport c6894b29-6b20-445c-991a-9aefefb3823c for this chassis.
Oct  7 10:33:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:00Z|01065|binding|INFO|c6894b29-6b20-445c-991a-9aefefb3823c: Claiming fa:16:3e:28:d1:60 10.100.0.11 2001:db8::f816:3eff:fe28:d160
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.674 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:d1:60 10.100.0.11 2001:db8::f816:3eff:fe28:d160'], port_security=['fa:16:3e:28:d1:60 10.100.0.11 2001:db8::f816:3eff:fe28:d160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fe28:d160/64', 'neutron:device_id': 'd3fa3175-2379-4c66-9d83-0a37f5559db8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c6894b29-6b20-445c-991a-9aefefb3823c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.675 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c6894b29-6b20-445c-991a-9aefefb3823c in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc bound to our chassis#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.677 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc#033[00m
Oct  7 10:33:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:00Z|01066|binding|INFO|Setting lport c6894b29-6b20-445c-991a-9aefefb3823c ovn-installed in OVS
Oct  7 10:33:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:00Z|01067|binding|INFO|Setting lport c6894b29-6b20-445c-991a-9aefefb3823c up in Southbound
Oct  7 10:33:00 np0005473739 systemd-udevd[371773]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:00 np0005473739 systemd-machined[214580]: New machine qemu-134-instance-0000006b.
Oct  7 10:33:00 np0005473739 NetworkManager[44949]: <info>  [1759847580.6978] device (tapc6894b29-6b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:33:00 np0005473739 NetworkManager[44949]: <info>  [1759847580.6987] device (tapc6894b29-6b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.696 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2e466682-5cb2-4a65-a152-0b6dcb534558]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:00 np0005473739 systemd[1]: Started Virtual Machine qemu-134-instance-0000006b.
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.731 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ab075bfc-1edc-4719-82ea-8f5b5ac4eccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.734 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[16781a8f-b458-4273-8d3b-595960ac9f54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.764 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0637ea-c929-43a5-a7f8-78d27479b6a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.785 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c580558f-feda-4fc8-8947-f110fea1d763]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1da6903e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:f4:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812640, 'reachable_time': 33851, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371787, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.803 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ccbb464-cd7e-4bb2-be65-62c36c687901]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1da6903e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812653, 'tstamp': 812653}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371789, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1da6903e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812656, 'tstamp': 812656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 371789, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.806 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da6903e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.811 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1da6903e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.811 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.812 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1da6903e-10, col_values=(('external_ids', {'iface-id': 'ac55d93f-5af1-4917-a7be-679169a02318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:00.812 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.884 2 DEBUG nova.network.neutron [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updated VIF entry in instance network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.885 2 DEBUG nova.network.neutron [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updating instance_info_cache with network_info: [{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:33:00 np0005473739 nova_compute[259550]: 2025-10-07 14:33:00.905 2 DEBUG oslo_concurrency.lockutils [req-01e0695e-ef4f-4d66-8176-0c681d9f75c9 req-3683347f-e863-4e88-9aad-ad3d49992d3a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:33:01 np0005473739 nova_compute[259550]: 2025-10-07 14:33:01.550 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847581.5496955, d3fa3175-2379-4c66-9d83-0a37f5559db8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:33:01 np0005473739 nova_compute[259550]: 2025-10-07 14:33:01.550 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] VM Started (Lifecycle Event)#033[00m
Oct  7 10:33:01 np0005473739 nova_compute[259550]: 2025-10-07 14:33:01.567 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:01 np0005473739 nova_compute[259550]: 2025-10-07 14:33:01.570 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847581.5523415, d3fa3175-2379-4c66-9d83-0a37f5559db8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:33:01 np0005473739 nova_compute[259550]: 2025-10-07 14:33:01.570 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:33:01 np0005473739 nova_compute[259550]: 2025-10-07 14:33:01.587 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:01 np0005473739 nova_compute[259550]: 2025-10-07 14:33:01.591 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:33:01 np0005473739 nova_compute[259550]: 2025-10-07 14:33:01.606 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:33:02 np0005473739 nova_compute[259550]: 2025-10-07 14:33:02.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2118: 305 pgs: 305 active+clean; 226 MiB data, 836 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.0 MiB/s wr, 102 op/s
Oct  7 10:33:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:03 np0005473739 nova_compute[259550]: 2025-10-07 14:33:03.481 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:33:03 np0005473739 nova_compute[259550]: 2025-10-07 14:33:03.639 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:33:03 np0005473739 nova_compute[259550]: 2025-10-07 14:33:03.639 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:33:03 np0005473739 nova_compute[259550]: 2025-10-07 14:33:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:33:04 np0005473739 nova_compute[259550]: 2025-10-07 14:33:04.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2119: 305 pgs: 305 active+clean; 244 MiB data, 846 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 3.9 MiB/s wr, 134 op/s
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.022 2 DEBUG nova.compute.manager [req-fe3d9eb2-50a6-4447-81e8-bc7010946318 req-61dd3f2a-34b4-447f-a8f4-f5a1665ddee3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.023 2 DEBUG oslo_concurrency.lockutils [req-fe3d9eb2-50a6-4447-81e8-bc7010946318 req-61dd3f2a-34b4-447f-a8f4-f5a1665ddee3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.023 2 DEBUG oslo_concurrency.lockutils [req-fe3d9eb2-50a6-4447-81e8-bc7010946318 req-61dd3f2a-34b4-447f-a8f4-f5a1665ddee3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.023 2 DEBUG oslo_concurrency.lockutils [req-fe3d9eb2-50a6-4447-81e8-bc7010946318 req-61dd3f2a-34b4-447f-a8f4-f5a1665ddee3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.023 2 DEBUG nova.compute.manager [req-fe3d9eb2-50a6-4447-81e8-bc7010946318 req-61dd3f2a-34b4-447f-a8f4-f5a1665ddee3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Processing event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.024 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.035 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847585.0282066, d3fa3175-2379-4c66-9d83-0a37f5559db8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.035 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.036 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.041 2 INFO nova.virt.libvirt.driver [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Instance spawned successfully.#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.042 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.168 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.175 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.179 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.180 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.180 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.181 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.182 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.182 2 DEBUG nova.virt.libvirt.driver [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.318 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.410 2 INFO nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Took 14.68 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.411 2 DEBUG nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:05 np0005473739 nova_compute[259550]: 2025-10-07 14:33:05.724 2 INFO nova.compute.manager [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Took 16.01 seconds to build instance.#033[00m
Oct  7 10:33:06 np0005473739 nova_compute[259550]: 2025-10-07 14:33:06.027 2 DEBUG oslo_concurrency.lockutils [None req-9bf71402-bb33-4ba4-9792-c81a9b172060 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2120: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.5 MiB/s wr, 74 op/s
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.112 2 DEBUG nova.compute.manager [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.113 2 DEBUG oslo_concurrency.lockutils [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.113 2 DEBUG oslo_concurrency.lockutils [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.113 2 DEBUG oslo_concurrency.lockutils [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.114 2 DEBUG nova.compute.manager [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] No waiting events found dispatching network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.114 2 WARNING nova.compute.manager [req-c263d050-21fa-4c82-a50c-a1385665fb49 req-eb0b0473-27af-44c9-9dec-4b3380d2d803 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received unexpected event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c for instance with vm_state active and task_state None.#033[00m
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.662 2 INFO nova.compute.manager [None req-d667a55d-499b-4883-9e20-ba05c4f28b3f 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Get console output#033[00m
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.667 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:33:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.956 2 INFO nova.compute.manager [None req-b760ddf5-9662-40aa-9d46-d464f0958af5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Pausing#033[00m
Oct  7 10:33:07 np0005473739 nova_compute[259550]: 2025-10-07 14:33:07.957 2 DEBUG nova.objects.instance [None req-b760ddf5-9662-40aa-9d46-d464f0958af5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'flavor' on Instance uuid 08184d43-2bd1-4f46-9bdf-63437c87a8ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:33:08 np0005473739 nova_compute[259550]: 2025-10-07 14:33:08.011 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847588.0107796, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:33:08 np0005473739 nova_compute[259550]: 2025-10-07 14:33:08.011 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:33:08 np0005473739 nova_compute[259550]: 2025-10-07 14:33:08.013 2 DEBUG nova.compute.manager [None req-b760ddf5-9662-40aa-9d46-d464f0958af5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:08 np0005473739 nova_compute[259550]: 2025-10-07 14:33:08.069 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:08 np0005473739 nova_compute[259550]: 2025-10-07 14:33:08.074 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: pausing, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:33:08 np0005473739 podman[371832]: 2025-10-07 14:33:08.082770526 +0000 UTC m=+0.057400114 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  7 10:33:08 np0005473739 podman[371833]: 2025-10-07 14:33:08.113181518 +0000 UTC m=+0.091310561 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  7 10:33:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2121: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct  7 10:33:09 np0005473739 nova_compute[259550]: 2025-10-07 14:33:09.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2122: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 124 op/s
Oct  7 10:33:11 np0005473739 nova_compute[259550]: 2025-10-07 14:33:11.945 2 DEBUG nova.compute.manager [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:11 np0005473739 nova_compute[259550]: 2025-10-07 14:33:11.946 2 DEBUG nova.compute.manager [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing instance network info cache due to event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:33:11 np0005473739 nova_compute[259550]: 2025-10-07 14:33:11.946 2 DEBUG oslo_concurrency.lockutils [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:33:11 np0005473739 nova_compute[259550]: 2025-10-07 14:33:11.946 2 DEBUG oslo_concurrency.lockutils [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:33:11 np0005473739 nova_compute[259550]: 2025-10-07 14:33:11.947 2 DEBUG nova.network.neutron [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:33:12 np0005473739 nova_compute[259550]: 2025-10-07 14:33:12.151 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2123: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Oct  7 10:33:12 np0005473739 nova_compute[259550]: 2025-10-07 14:33:12.437 2 INFO nova.compute.manager [None req-2aed1fcd-2b59-495a-9362-73b8982ac1d4 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Get console output#033[00m
Oct  7 10:33:12 np0005473739 nova_compute[259550]: 2025-10-07 14:33:12.443 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:33:12 np0005473739 nova_compute[259550]: 2025-10-07 14:33:12.766 2 INFO nova.compute.manager [None req-60d0cbc3-668b-4e81-b13a-6de4830b3e37 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Unpausing#033[00m
Oct  7 10:33:12 np0005473739 nova_compute[259550]: 2025-10-07 14:33:12.767 2 DEBUG nova.objects.instance [None req-60d0cbc3-668b-4e81-b13a-6de4830b3e37 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'flavor' on Instance uuid 08184d43-2bd1-4f46-9bdf-63437c87a8ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:33:12 np0005473739 nova_compute[259550]: 2025-10-07 14:33:12.813 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847592.8133488, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:33:12 np0005473739 nova_compute[259550]: 2025-10-07 14:33:12.814 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:33:12 np0005473739 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct  7 10:33:12 np0005473739 nova_compute[259550]: 2025-10-07 14:33:12.819 2 DEBUG nova.virt.libvirt.guest [None req-60d0cbc3-668b-4e81-b13a-6de4830b3e37 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  7 10:33:12 np0005473739 nova_compute[259550]: 2025-10-07 14:33:12.819 2 DEBUG nova.compute.manager [None req-60d0cbc3-668b-4e81-b13a-6de4830b3e37 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:12 np0005473739 nova_compute[259550]: 2025-10-07 14:33:12.863 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:12 np0005473739 nova_compute[259550]: 2025-10-07 14:33:12.866 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: paused, current task_state: unpausing, current DB power_state: 3, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:33:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:14 np0005473739 nova_compute[259550]: 2025-10-07 14:33:14.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2124: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 939 KiB/s wr, 124 op/s
Oct  7 10:33:15 np0005473739 nova_compute[259550]: 2025-10-07 14:33:15.503 2 DEBUG nova.network.neutron [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updated VIF entry in instance network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:33:15 np0005473739 nova_compute[259550]: 2025-10-07 14:33:15.503 2 DEBUG nova.network.neutron [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updating instance_info_cache with network_info: [{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:33:15 np0005473739 nova_compute[259550]: 2025-10-07 14:33:15.547 2 DEBUG oslo_concurrency.lockutils [req-90817a98-05a3-435c-b967-f403281b9608 req-0a634e85-315d-46a0-b149-325d3bf6c55d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:33:15 np0005473739 nova_compute[259550]: 2025-10-07 14:33:15.708 2 INFO nova.compute.manager [None req-c63ae1d1-53b8-4abc-b22e-defa775315c6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Get console output#033[00m
Oct  7 10:33:15 np0005473739 nova_compute[259550]: 2025-10-07 14:33:15.714 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:33:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2125: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 33 KiB/s wr, 67 op/s
Oct  7 10:33:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:16Z|01068|binding|INFO|Releasing lport d62984c1-9132-4390-b975-fecc00942743 from this chassis (sb_readonly=0)
Oct  7 10:33:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:16Z|01069|binding|INFO|Releasing lport ac55d93f-5af1-4917-a7be-679169a02318 from this chassis (sb_readonly=0)
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:16 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #47. Immutable memtables: 4.
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.576 2 DEBUG nova.compute.manager [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.576 2 DEBUG nova.compute.manager [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing instance network info cache due to event network-changed-70d147cb-a82e-4f80-88cf-016ec19f5a2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.576 2 DEBUG oslo_concurrency.lockutils [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.577 2 DEBUG oslo_concurrency.lockutils [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.577 2 DEBUG nova.network.neutron [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Refreshing network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.621 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.621 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.622 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.622 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.622 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.624 2 INFO nova.compute.manager [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Terminating instance#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.625 2 DEBUG nova.compute.manager [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:33:16 np0005473739 kernel: tap70d147cb-a8 (unregistering): left promiscuous mode
Oct  7 10:33:16 np0005473739 NetworkManager[44949]: <info>  [1759847596.6788] device (tap70d147cb-a8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:33:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:16Z|01070|binding|INFO|Releasing lport 70d147cb-a82e-4f80-88cf-016ec19f5a2c from this chassis (sb_readonly=0)
Oct  7 10:33:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:16Z|01071|binding|INFO|Setting lport 70d147cb-a82e-4f80-88cf-016ec19f5a2c down in Southbound
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:16Z|01072|binding|INFO|Removing iface tap70d147cb-a8 ovn-installed in OVS
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:16.700 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:b2:f9 10.100.0.3'], port_security=['fa:16:3e:96:b2:f9 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '08184d43-2bd1-4f46-9bdf-63437c87a8ad', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-48fde043-e83e-45b7-ab13-37b209fd562c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8bbba0f-6faa-43dc-9c06-619017376051', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ea6d7f11-49ef-4d9e-9bd1-b8e683eae537, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=70d147cb-a82e-4f80-88cf-016ec19f5a2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:33:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:16.701 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 70d147cb-a82e-4f80-88cf-016ec19f5a2c in datapath 48fde043-e83e-45b7-ab13-37b209fd562c unbound from our chassis#033[00m
Oct  7 10:33:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:16.702 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 48fde043-e83e-45b7-ab13-37b209fd562c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:16.703 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e56cd581-d729-4405-926b-3ba4437cc633]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:16.710 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c namespace which is not needed anymore#033[00m
Oct  7 10:33:16 np0005473739 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Deactivated successfully.
Oct  7 10:33:16 np0005473739 systemd[1]: machine-qemu\x2d133\x2dinstance\x2d0000006a.scope: Consumed 13.401s CPU time.
Oct  7 10:33:16 np0005473739 systemd-machined[214580]: Machine qemu-133-instance-0000006a terminated.
Oct  7 10:33:16 np0005473739 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [NOTICE]   (371355) : haproxy version is 2.8.14-c23fe91
Oct  7 10:33:16 np0005473739 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [NOTICE]   (371355) : path to executable is /usr/sbin/haproxy
Oct  7 10:33:16 np0005473739 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [WARNING]  (371355) : Exiting Master process...
Oct  7 10:33:16 np0005473739 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [ALERT]    (371355) : Current worker (371357) exited with code 143 (Terminated)
Oct  7 10:33:16 np0005473739 neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c[371351]: [WARNING]  (371355) : All workers exited. Exiting... (0)
Oct  7 10:33:16 np0005473739 systemd[1]: libpod-2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7.scope: Deactivated successfully.
Oct  7 10:33:16 np0005473739 podman[371903]: 2025-10-07 14:33:16.848255783 +0000 UTC m=+0.050276374 container died 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.892 2 INFO nova.virt.libvirt.driver [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Instance destroyed successfully.#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.893 2 DEBUG nova.objects.instance [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid 08184d43-2bd1-4f46-9bdf-63437c87a8ad obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:33:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7-userdata-shm.mount: Deactivated successfully.
Oct  7 10:33:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4cc61ebfbc859062541ed2a2564be14d195084d66aefaf18d107a00c5495bc9e-merged.mount: Deactivated successfully.
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.914 2 DEBUG nova.virt.libvirt.vif [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:32:34Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1785477500',display_name='tempest-TestNetworkAdvancedServerOps-server-1785477500',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1785477500',id=106,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDOmKPVgdwoa2PL66bDGbFu3FUOEtM4uSjrJKdQm0+Dh5hnoywovo36nzSmeP/PaULcFe/lE8zWO3xuub67iudTqw2d8wzPnJMGL68v9G5hFZdOFPK6yXS41BfolaqLZgQ==',key_name='tempest-TestNetworkAdvancedServerOps-1280135326',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:32:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-pm6ky9yn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:33:12Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=08184d43-2bd1-4f46-9bdf-63437c87a8ad,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.914 2 DEBUG nova.network.os_vif_util [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.915 2 DEBUG nova.network.os_vif_util [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.916 2 DEBUG os_vif [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70d147cb-a8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:16 np0005473739 nova_compute[259550]: 2025-10-07 14:33:16.924 2 INFO os_vif [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:b2:f9,bridge_name='br-int',has_traffic_filtering=True,id=70d147cb-a82e-4f80-88cf-016ec19f5a2c,network=Network(48fde043-e83e-45b7-ab13-37b209fd562c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap70d147cb-a8')#033[00m
Oct  7 10:33:16 np0005473739 podman[371903]: 2025-10-07 14:33:16.926801802 +0000 UTC m=+0.128822383 container cleanup 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 10:33:16 np0005473739 systemd[1]: libpod-conmon-2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7.scope: Deactivated successfully.
Oct  7 10:33:17 np0005473739 podman[371947]: 2025-10-07 14:33:17.01765305 +0000 UTC m=+0.059643965 container remove 2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:33:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.027 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c73362a1-9f36-482b-b7bf-bf29de252772]: (4, ('Tue Oct  7 02:33:16 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c (2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7)\n2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7\nTue Oct  7 02:33:16 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c (2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7)\n2a038666bbf116a3bed948a5a11fc1265ac401715f9f4aeed3692afd2753e2f7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.030 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[04872295-79a6-45d1-92ea-e1d0e4fb0123]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.031 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap48fde043-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:17 np0005473739 kernel: tap48fde043-e0: left promiscuous mode
Oct  7 10:33:17 np0005473739 nova_compute[259550]: 2025-10-07 14:33:17.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:17 np0005473739 nova_compute[259550]: 2025-10-07 14:33:17.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.056 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dcd1cde6-3082-46cc-b064-80a1cb73ddee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.086 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ec2834a1-a5ab-4536-b559-45b755b90512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.088 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23def0b7-191f-4fe0-8f4e-81848081e789]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.111 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8b642b1b-e59b-4935-bd46-5c976faa0b7b]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 815076, 'reachable_time': 28892, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 371973, 'error': None, 'target': 'ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:17 np0005473739 systemd[1]: run-netns-ovnmeta\x2d48fde043\x2de83e\x2d45b7\x2dab13\x2d37b209fd562c.mount: Deactivated successfully.
Oct  7 10:33:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.118 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-48fde043-e83e-45b7-ab13-37b209fd562c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:33:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:17.118 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[95d81e4c-5457-4434-ae7c-22f05efeed07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:17 np0005473739 nova_compute[259550]: 2025-10-07 14:33:17.153 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:17 np0005473739 nova_compute[259550]: 2025-10-07 14:33:17.381 2 INFO nova.virt.libvirt.driver [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Deleting instance files /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad_del#033[00m
Oct  7 10:33:17 np0005473739 nova_compute[259550]: 2025-10-07 14:33:17.382 2 INFO nova.virt.libvirt.driver [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Deletion of /var/lib/nova/instances/08184d43-2bd1-4f46-9bdf-63437c87a8ad_del complete#033[00m
Oct  7 10:33:17 np0005473739 nova_compute[259550]: 2025-10-07 14:33:17.425 2 INFO nova.compute.manager [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Took 0.80 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:33:17 np0005473739 nova_compute[259550]: 2025-10-07 14:33:17.426 2 DEBUG oslo.service.loopingcall [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:33:17 np0005473739 nova_compute[259550]: 2025-10-07 14:33:17.426 2 DEBUG nova.compute.manager [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:33:17 np0005473739 nova_compute[259550]: 2025-10-07 14:33:17.427 2 DEBUG nova.network.neutron [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:33:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:17Z|00114|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:d1:60 10.100.0.11
Oct  7 10:33:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:17Z|00115|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:d1:60 10.100.0.11
Oct  7 10:33:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2126: 305 pgs: 305 active+clean; 246 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 64 op/s
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.681 2 DEBUG nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-unplugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.682 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.682 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.682 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.682 2 DEBUG nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] No waiting events found dispatching network-vif-unplugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.682 2 DEBUG nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-unplugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.683 2 DEBUG nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.683 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.683 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.683 2 DEBUG oslo_concurrency.lockutils [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.683 2 DEBUG nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] No waiting events found dispatching network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:33:18 np0005473739 nova_compute[259550]: 2025-10-07 14:33:18.683 2 WARNING nova.compute.manager [req-14ad271b-1030-4dfe-8dfe-7e0853058a63 req-bf5ee974-29dc-4d3d-9b48-3f7581dc83d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received unexpected event network-vif-plugged-70d147cb-a82e-4f80-88cf-016ec19f5a2c for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:33:19 np0005473739 nova_compute[259550]: 2025-10-07 14:33:19.414 2 DEBUG nova.network.neutron [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:33:19 np0005473739 nova_compute[259550]: 2025-10-07 14:33:19.451 2 INFO nova.compute.manager [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Took 2.02 seconds to deallocate network for instance.#033[00m
Oct  7 10:33:19 np0005473739 nova_compute[259550]: 2025-10-07 14:33:19.490 2 DEBUG nova.network.neutron [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updated VIF entry in instance network info cache for port 70d147cb-a82e-4f80-88cf-016ec19f5a2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:33:19 np0005473739 nova_compute[259550]: 2025-10-07 14:33:19.490 2 DEBUG nova.network.neutron [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Updating instance_info_cache with network_info: [{"id": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "address": "fa:16:3e:96:b2:f9", "network": {"id": "48fde043-e83e-45b7-ab13-37b209fd562c", "bridge": "br-int", "label": "tempest-network-smoke--8404626", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap70d147cb-a8", "ovs_interfaceid": "70d147cb-a82e-4f80-88cf-016ec19f5a2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:33:19 np0005473739 nova_compute[259550]: 2025-10-07 14:33:19.498 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:19 np0005473739 nova_compute[259550]: 2025-10-07 14:33:19.498 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:19 np0005473739 nova_compute[259550]: 2025-10-07 14:33:19.520 2 DEBUG nova.compute.manager [req-d53defe6-ad8d-42d9-b63a-9b36ee9d1faf req-2d74a42f-eba3-480c-a899-e2f3d1c9d233 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Received event network-vif-deleted-70d147cb-a82e-4f80-88cf-016ec19f5a2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:19 np0005473739 nova_compute[259550]: 2025-10-07 14:33:19.537 2 DEBUG oslo_concurrency.lockutils [req-963ad272-43ff-4654-ab99-aac5063e5d17 req-8ea92ec7-8f60-43dd-904a-220c5b36448d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-08184d43-2bd1-4f46-9bdf-63437c87a8ad" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:33:19 np0005473739 nova_compute[259550]: 2025-10-07 14:33:19.595 2 DEBUG oslo_concurrency.processutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:33:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672913450' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:33:20 np0005473739 nova_compute[259550]: 2025-10-07 14:33:20.073 2 DEBUG oslo_concurrency.processutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:20 np0005473739 nova_compute[259550]: 2025-10-07 14:33:20.079 2 DEBUG nova.compute.provider_tree [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:33:20 np0005473739 nova_compute[259550]: 2025-10-07 14:33:20.098 2 DEBUG nova.scheduler.client.report [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:33:20 np0005473739 nova_compute[259550]: 2025-10-07 14:33:20.127 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:20 np0005473739 nova_compute[259550]: 2025-10-07 14:33:20.166 2 INFO nova.scheduler.client.report [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Deleted allocations for instance 08184d43-2bd1-4f46-9bdf-63437c87a8ad#033[00m
Oct  7 10:33:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2127: 305 pgs: 305 active+clean; 228 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 1.9 MiB/s wr, 131 op/s
Oct  7 10:33:20 np0005473739 nova_compute[259550]: 2025-10-07 14:33:20.261 2 DEBUG oslo_concurrency.lockutils [None req-badfa563-15cc-43fa-b5a1-e5fb022f425b 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "08184d43-2bd1-4f46-9bdf-63437c87a8ad" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:21 np0005473739 nova_compute[259550]: 2025-10-07 14:33:21.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:22 np0005473739 nova_compute[259550]: 2025-10-07 14:33:22.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2128: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 760 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:33:22
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.data', '.mgr', 'default.rgw.control', 'default.rgw.log', 'backups', 'default.rgw.meta', 'images', 'cephfs.cephfs.meta', '.rgw.root', 'volumes']
Oct  7 10:33:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:33:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:33:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:33:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:33:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:33:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:33:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:33:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:33:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:33:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:33:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:33:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:23Z|01073|binding|INFO|Releasing lport ac55d93f-5af1-4917-a7be-679169a02318 from this chassis (sb_readonly=0)
Oct  7 10:33:24 np0005473739 nova_compute[259550]: 2025-10-07 14:33:24.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:24 np0005473739 podman[371998]: 2025-10-07 14:33:24.085060444 +0000 UTC m=+0.053171222 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  7 10:33:24 np0005473739 podman[371997]: 2025-10-07 14:33:24.092712898 +0000 UTC m=+0.062334176 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct  7 10:33:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2129: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct  7 10:33:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:25Z|01074|binding|INFO|Releasing lport ac55d93f-5af1-4917-a7be-679169a02318 from this chassis (sb_readonly=0)
Oct  7 10:33:25 np0005473739 nova_compute[259550]: 2025-10-07 14:33:25.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:33:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:33:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2130: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct  7 10:33:26 np0005473739 podman[372426]: 2025-10-07 14:33:26.725594879 +0000 UTC m=+0.075116198 container create c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:33:26 np0005473739 podman[372426]: 2025-10-07 14:33:26.672079839 +0000 UTC m=+0.021601138 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:33:26 np0005473739 systemd[1]: Started libpod-conmon-c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189.scope.
Oct  7 10:33:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:33:26 np0005473739 nova_compute[259550]: 2025-10-07 14:33:26.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:26 np0005473739 podman[372426]: 2025-10-07 14:33:26.94267443 +0000 UTC m=+0.292195729 container init c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:33:26 np0005473739 podman[372426]: 2025-10-07 14:33:26.950904369 +0000 UTC m=+0.300425648 container start c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:33:26 np0005473739 festive_lederberg[372442]: 167 167
Oct  7 10:33:26 np0005473739 systemd[1]: libpod-c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189.scope: Deactivated successfully.
Oct  7 10:33:26 np0005473739 podman[372426]: 2025-10-07 14:33:26.969523847 +0000 UTC m=+0.319045146 container attach c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 10:33:26 np0005473739 podman[372426]: 2025-10-07 14:33:26.969962649 +0000 UTC m=+0.319483938 container died c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 10:33:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6cd3968c017fdbbab46d149f971e7366e5dcbb05e94dead09fea5690b653b1e2-merged.mount: Deactivated successfully.
Oct  7 10:33:27 np0005473739 nova_compute[259550]: 2025-10-07 14:33:27.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:27 np0005473739 podman[372426]: 2025-10-07 14:33:27.169737637 +0000 UTC m=+0.519258946 container remove c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_lederberg, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:33:27 np0005473739 systemd[1]: libpod-conmon-c91f7f47ced1d34d7636a2f04b92d39b209c0538fba2ee51b7d9dcfaaad97189.scope: Deactivated successfully.
Oct  7 10:33:27 np0005473739 podman[372466]: 2025-10-07 14:33:27.393694462 +0000 UTC m=+0.023841908 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:33:27 np0005473739 podman[372466]: 2025-10-07 14:33:27.523182431 +0000 UTC m=+0.153329887 container create bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default)
Oct  7 10:33:27 np0005473739 systemd[1]: Started libpod-conmon-bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77.scope.
Oct  7 10:33:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:33:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccb4fc6fd00d9ed5040c709026deeb266947aa7c6f6b3db0fd5ec91edd4ac6d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccb4fc6fd00d9ed5040c709026deeb266947aa7c6f6b3db0fd5ec91edd4ac6d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccb4fc6fd00d9ed5040c709026deeb266947aa7c6f6b3db0fd5ec91edd4ac6d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ccb4fc6fd00d9ed5040c709026deeb266947aa7c6f6b3db0fd5ec91edd4ac6d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:27 np0005473739 podman[372466]: 2025-10-07 14:33:27.749513679 +0000 UTC m=+0.379661215 container init bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:33:27 np0005473739 podman[372466]: 2025-10-07 14:33:27.766115552 +0000 UTC m=+0.396263008 container start bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:33:27 np0005473739 podman[372466]: 2025-10-07 14:33:27.847436165 +0000 UTC m=+0.477583661 container attach bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 10:33:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2131: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 360 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]: [
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:    {
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:        "available": false,
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:        "ceph_device": false,
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:        "lsm_data": {},
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:        "lvs": [],
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:        "path": "/dev/sr0",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:        "rejected_reasons": [
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "Has a FileSystem",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "Insufficient space (<5GB)"
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:        ],
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:        "sys_api": {
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "actuators": null,
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "device_nodes": "sr0",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "devname": "sr0",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "human_readable_size": "482.00 KB",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "id_bus": "ata",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "model": "QEMU DVD-ROM",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "nr_requests": "2",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "parent": "/dev/sr0",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "partitions": {},
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "path": "/dev/sr0",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "removable": "1",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "rev": "2.5+",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "ro": "0",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "rotational": "0",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "sas_address": "",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "sas_device_handle": "",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "scheduler_mode": "mq-deadline",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "sectors": 0,
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "sectorsize": "2048",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "size": 493568.0,
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "support_discard": "2048",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "type": "disk",
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:            "vendor": "QEMU"
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:        }
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]:    }
Oct  7 10:33:29 np0005473739 jovial_brattain[372483]: ]
Oct  7 10:33:29 np0005473739 systemd[1]: libpod-bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77.scope: Deactivated successfully.
Oct  7 10:33:29 np0005473739 systemd[1]: libpod-bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77.scope: Consumed 1.514s CPU time.
Oct  7 10:33:29 np0005473739 podman[372466]: 2025-10-07 14:33:29.374215381 +0000 UTC m=+2.004362807 container died bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 10:33:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1ccb4fc6fd00d9ed5040c709026deeb266947aa7c6f6b3db0fd5ec91edd4ac6d-merged.mount: Deactivated successfully.
Oct  7 10:33:29 np0005473739 podman[372466]: 2025-10-07 14:33:29.766431701 +0000 UTC m=+2.396579157 container remove bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_brattain, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:33:29 np0005473739 systemd[1]: libpod-conmon-bad34c18e7b22068b4643d325640a180aa22263aa8670acccdad981786958a77.scope: Deactivated successfully.
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:29 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1f5b9385-f581-4053-9bcc-024fe0eb852e does not exist
Oct  7 10:33:29 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0a0d9797-ad92-47fc-a805-d6b4cc98c4fe does not exist
Oct  7 10:33:29 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ccb41a5f-8711-4a5d-a9e6-109e9fd1397f does not exist
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:33:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:33:29 np0005473739 nova_compute[259550]: 2025-10-07 14:33:29.898 2 DEBUG nova.compute.manager [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:29 np0005473739 nova_compute[259550]: 2025-10-07 14:33:29.900 2 DEBUG nova.compute.manager [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing instance network info cache due to event network-changed-c6894b29-6b20-445c-991a-9aefefb3823c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:33:29 np0005473739 nova_compute[259550]: 2025-10-07 14:33:29.900 2 DEBUG oslo_concurrency.lockutils [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:33:29 np0005473739 nova_compute[259550]: 2025-10-07 14:33:29.900 2 DEBUG oslo_concurrency.lockutils [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:33:29 np0005473739 nova_compute[259550]: 2025-10-07 14:33:29.900 2 DEBUG nova.network.neutron [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Refreshing network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:33:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2132: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 370 KiB/s rd, 2.2 MiB/s wr, 101 op/s
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.362 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.362 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.363 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.363 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.363 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.364 2 INFO nova.compute.manager [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Terminating instance#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.365 2 DEBUG nova.compute.manager [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:33:30 np0005473739 podman[374584]: 2025-10-07 14:33:30.472902989 +0000 UTC m=+0.052161085 container create eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:33:30 np0005473739 kernel: tapc6894b29-6b (unregistering): left promiscuous mode
Oct  7 10:33:30 np0005473739 NetworkManager[44949]: <info>  [1759847610.5181] device (tapc6894b29-6b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:33:30 np0005473739 systemd[1]: Started libpod-conmon-eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283.scope.
Oct  7 10:33:30 np0005473739 podman[374584]: 2025-10-07 14:33:30.438181291 +0000 UTC m=+0.017439387 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:33:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:30Z|01075|binding|INFO|Releasing lport c6894b29-6b20-445c-991a-9aefefb3823c from this chassis (sb_readonly=0)
Oct  7 10:33:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:30Z|01076|binding|INFO|Setting lport c6894b29-6b20-445c-991a-9aefefb3823c down in Southbound
Oct  7 10:33:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:30Z|01077|binding|INFO|Removing iface tapc6894b29-6b ovn-installed in OVS
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.626 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:d1:60 10.100.0.11 2001:db8::f816:3eff:fe28:d160'], port_security=['fa:16:3e:28:d1:60 10.100.0.11 2001:db8::f816:3eff:fe28:d160'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28 2001:db8::f816:3eff:fe28:d160/64', 'neutron:device_id': 'd3fa3175-2379-4c66-9d83-0a37f5559db8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c6894b29-6b20-445c-991a-9aefefb3823c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.627 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c6894b29-6b20-445c-991a-9aefefb3823c in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc unbound from our chassis#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.628 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc#033[00m
Oct  7 10:33:30 np0005473739 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Deactivated successfully.
Oct  7 10:33:30 np0005473739 systemd[1]: machine-qemu\x2d134\x2dinstance\x2d0000006b.scope: Consumed 13.735s CPU time.
Oct  7 10:33:30 np0005473739 systemd-machined[214580]: Machine qemu-134-instance-0000006b terminated.
Oct  7 10:33:30 np0005473739 podman[374584]: 2025-10-07 14:33:30.64286146 +0000 UTC m=+0.222119566 container init eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.651 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fd9da96-3728-416f-8c90-94416cc7cac8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:30 np0005473739 podman[374584]: 2025-10-07 14:33:30.653787112 +0000 UTC m=+0.233045188 container start eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:33:30 np0005473739 crazy_goldberg[374606]: 167 167
Oct  7 10:33:30 np0005473739 podman[374584]: 2025-10-07 14:33:30.659355481 +0000 UTC m=+0.238613587 container attach eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:33:30 np0005473739 systemd[1]: libpod-eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283.scope: Deactivated successfully.
Oct  7 10:33:30 np0005473739 podman[374584]: 2025-10-07 14:33:30.663692067 +0000 UTC m=+0.242950133 container died eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.683 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[760ce19e-58b5-46cb-94d4-1deaa37db4ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.686 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4d16d45e-4f22-440d-a192-8bb4d20f35f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.713 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4b8181a3-a9c4-4e43-89fc-5a30aec3f69b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cb54edd1a8661dec284dab482b4d3a949ebfd313e354f11f94b2ff4dc2929fb6-merged.mount: Deactivated successfully.
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.731 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3d4f3dd7-015d-4a82-94d4-9db4fa8ca0da]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap1da6903e-11'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0d:f4:0e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 307], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812640, 'reachable_time': 24827, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 374626, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.748 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df3b7791-1d22-4f62-a856-83ee3ac78e75]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap1da6903e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812653, 'tstamp': 812653}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374627, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap1da6903e-11'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 812656, 'tstamp': 812656}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 374627, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.750 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da6903e-10, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.757 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1da6903e-10, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap1da6903e-10, col_values=(('external_ids', {'iface-id': 'ac55d93f-5af1-4917-a7be-679169a02318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.758 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:33:30 np0005473739 podman[374584]: 2025-10-07 14:33:30.788695506 +0000 UTC m=+0.367953582 container remove eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_goldberg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.804 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=33, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=32) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:30.805 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.806 2 INFO nova.virt.libvirt.driver [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Instance destroyed successfully.#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.806 2 DEBUG nova.objects.instance [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid d3fa3175-2379-4c66-9d83-0a37f5559db8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:33:30 np0005473739 systemd[1]: libpod-conmon-eeac03258cad587136ae8a462622f5d992ac5bcb768a0c2a82bf81b95a8c1283.scope: Deactivated successfully.
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.830 2 DEBUG nova.virt.libvirt.vif [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:32:48Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1497631187',display_name='tempest-TestGettingAddress-server-1497631187',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1497631187',id=107,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:33:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-k6d5uiwn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:33:05Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=d3fa3175-2379-4c66-9d83-0a37f5559db8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.831 2 DEBUG nova.network.os_vif_util [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.831 2 DEBUG nova.network.os_vif_util [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.832 2 DEBUG os_vif [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.833 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc6894b29-6b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:33:30 np0005473739 nova_compute[259550]: 2025-10-07 14:33:30.839 2 INFO os_vif [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:d1:60,bridge_name='br-int',has_traffic_filtering=True,id=c6894b29-6b20-445c-991a-9aefefb3823c,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc6894b29-6b')#033[00m
Oct  7 10:33:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:33:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:33:31 np0005473739 podman[374659]: 2025-10-07 14:33:31.005621203 +0000 UTC m=+0.085191328 container create 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 10:33:31 np0005473739 podman[374659]: 2025-10-07 14:33:30.942875706 +0000 UTC m=+0.022445861 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:33:31 np0005473739 systemd[1]: Started libpod-conmon-5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781.scope.
Oct  7 10:33:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:33:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:31 np0005473739 podman[374659]: 2025-10-07 14:33:31.141457053 +0000 UTC m=+0.221027188 container init 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:33:31 np0005473739 podman[374659]: 2025-10-07 14:33:31.148316216 +0000 UTC m=+0.227886341 container start 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 10:33:31 np0005473739 podman[374659]: 2025-10-07 14:33:31.15632453 +0000 UTC m=+0.235894655 container attach 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 10:33:31 np0005473739 nova_compute[259550]: 2025-10-07 14:33:31.891 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847596.890288, 08184d43-2bd1-4f46-9bdf-63437c87a8ad => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:33:31 np0005473739 nova_compute[259550]: 2025-10-07 14:33:31.893 2 INFO nova.compute.manager [-] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:33:31 np0005473739 nova_compute[259550]: 2025-10-07 14:33:31.936 2 DEBUG nova.compute.manager [None req-7982060a-feb8-427b-a4cb-af31037be146 - - - - - -] [instance: 08184d43-2bd1-4f46-9bdf-63437c87a8ad] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.077 2 DEBUG nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-unplugged-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.077 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.077 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.077 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.077 2 DEBUG nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] No waiting events found dispatching network-vif-unplugged-c6894b29-6b20-445c-991a-9aefefb3823c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.078 2 DEBUG nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-unplugged-c6894b29-6b20-445c-991a-9aefefb3823c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.078 2 DEBUG nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.078 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.078 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.078 2 DEBUG oslo_concurrency.lockutils [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.079 2 DEBUG nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] No waiting events found dispatching network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.079 2 WARNING nova.compute.manager [req-53cf40fb-d7ca-4dfc-a716-b790abb709ea req-a3a43182-e96c-46ac-ab42-2a6ffe589caf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received unexpected event network-vif-plugged-c6894b29-6b20-445c-991a-9aefefb3823c for instance with vm_state active and task_state deleting.
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2133: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 119 KiB/s rd, 259 KiB/s wr, 52 op/s
Oct  7 10:33:32 np0005473739 eager_goldwasser[374679]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:33:32 np0005473739 eager_goldwasser[374679]: --> relative data size: 1.0
Oct  7 10:33:32 np0005473739 eager_goldwasser[374679]: --> All data devices are unavailable
Oct  7 10:33:32 np0005473739 systemd[1]: libpod-5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781.scope: Deactivated successfully.
Oct  7 10:33:32 np0005473739 systemd[1]: libpod-5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781.scope: Consumed 1.021s CPU time.
Oct  7 10:33:32 np0005473739 podman[374709]: 2025-10-07 14:33:32.569426908 +0000 UTC m=+0.026517049 container died 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015185461027544442 of space, bias 1.0, pg target 0.4555638308263333 quantized to 32 (current 32)
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:33:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:33:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d7d3a21b6bdb42506fb63d870a87776ba82c502258c5bce28e426eeb586e7145-merged.mount: Deactivated successfully.
Oct  7 10:33:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:33:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3337432773' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:33:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:33:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3337432773' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:33:32 np0005473739 podman[374709]: 2025-10-07 14:33:32.739393931 +0000 UTC m=+0.196484032 container remove 5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_goldwasser, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:33:32 np0005473739 systemd[1]: libpod-conmon-5de1f66ce22b0ab0dfc5220df489900c2f4c13630ea237859d524dcc9a069781.scope: Deactivated successfully.
Oct  7 10:33:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.960 2 INFO nova.virt.libvirt.driver [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Deleting instance files /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8_del
Oct  7 10:33:32 np0005473739 nova_compute[259550]: 2025-10-07 14:33:32.961 2 INFO nova.virt.libvirt.driver [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Deletion of /var/lib/nova/instances/d3fa3175-2379-4c66-9d83-0a37f5559db8_del complete
Oct  7 10:33:33 np0005473739 nova_compute[259550]: 2025-10-07 14:33:33.024 2 DEBUG nova.network.neutron [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updated VIF entry in instance network info cache for port c6894b29-6b20-445c-991a-9aefefb3823c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:33:33 np0005473739 nova_compute[259550]: 2025-10-07 14:33:33.024 2 DEBUG nova.network.neutron [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updating instance_info_cache with network_info: [{"id": "c6894b29-6b20-445c-991a-9aefefb3823c", "address": "fa:16:3e:28:d1:60", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:d160", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc6894b29-6b", "ovs_interfaceid": "c6894b29-6b20-445c-991a-9aefefb3823c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:33:33 np0005473739 nova_compute[259550]: 2025-10-07 14:33:33.052 2 INFO nova.compute.manager [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Took 2.69 seconds to destroy the instance on the hypervisor.
Oct  7 10:33:33 np0005473739 nova_compute[259550]: 2025-10-07 14:33:33.053 2 DEBUG oslo.service.loopingcall [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:33:33 np0005473739 nova_compute[259550]: 2025-10-07 14:33:33.053 2 DEBUG nova.compute.manager [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:33:33 np0005473739 nova_compute[259550]: 2025-10-07 14:33:33.053 2 DEBUG nova.network.neutron [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:33:33 np0005473739 nova_compute[259550]: 2025-10-07 14:33:33.068 2 DEBUG oslo_concurrency.lockutils [req-7f5d2f49-41a4-4104-a755-d8d1e20a4193 req-6764839d-a89d-4a90-8d4f-d1bccefe6bcb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d3fa3175-2379-4c66-9d83-0a37f5559db8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:33:33 np0005473739 podman[374866]: 2025-10-07 14:33:33.403871775 +0000 UTC m=+0.042868306 container create 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 10:33:33 np0005473739 systemd[1]: Started libpod-conmon-66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e.scope.
Oct  7 10:33:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:33:33 np0005473739 podman[374866]: 2025-10-07 14:33:33.384462286 +0000 UTC m=+0.023458837 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:33:33 np0005473739 podman[374866]: 2025-10-07 14:33:33.499550612 +0000 UTC m=+0.138547143 container init 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 10:33:33 np0005473739 podman[374866]: 2025-10-07 14:33:33.508366138 +0000 UTC m=+0.147362669 container start 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 10:33:33 np0005473739 gallant_noyce[374882]: 167 167
Oct  7 10:33:33 np0005473739 systemd[1]: libpod-66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e.scope: Deactivated successfully.
Oct  7 10:33:33 np0005473739 podman[374866]: 2025-10-07 14:33:33.539744596 +0000 UTC m=+0.178741157 container attach 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 10:33:33 np0005473739 podman[374866]: 2025-10-07 14:33:33.541099112 +0000 UTC m=+0.180095653 container died 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:33:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-51b495e4b5db4ec2e24ba10586650741ed66d8eab834d8796556a385fd00e40e-merged.mount: Deactivated successfully.
Oct  7 10:33:33 np0005473739 podman[374866]: 2025-10-07 14:33:33.665908847 +0000 UTC m=+0.304905368 container remove 66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_noyce, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:33:33 np0005473739 systemd[1]: libpod-conmon-66bb5df8803695da2598248efd858f1bc4f3994b2772858e7ba6b1dc7ec5932e.scope: Deactivated successfully.
Oct  7 10:33:33 np0005473739 podman[374909]: 2025-10-07 14:33:33.848776194 +0000 UTC m=+0.051082877 container create de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:33:33 np0005473739 systemd[1]: Started libpod-conmon-de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808.scope.
Oct  7 10:33:33 np0005473739 podman[374909]: 2025-10-07 14:33:33.821705461 +0000 UTC m=+0.024012154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:33:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:33:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f4a21851a1b7766e07e28cb7f4f45f004b2e5413e2e86bdc69350af43fa289/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f4a21851a1b7766e07e28cb7f4f45f004b2e5413e2e86bdc69350af43fa289/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f4a21851a1b7766e07e28cb7f4f45f004b2e5413e2e86bdc69350af43fa289/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d1f4a21851a1b7766e07e28cb7f4f45f004b2e5413e2e86bdc69350af43fa289/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:33 np0005473739 podman[374909]: 2025-10-07 14:33:33.945129908 +0000 UTC m=+0.147436611 container init de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 10:33:33 np0005473739 podman[374909]: 2025-10-07 14:33:33.951140279 +0000 UTC m=+0.153446962 container start de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:33:33 np0005473739 podman[374909]: 2025-10-07 14:33:33.968732199 +0000 UTC m=+0.171038902 container attach de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:33:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2134: 305 pgs: 305 active+clean; 132 MiB data, 804 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 24 KiB/s wr, 91 op/s
Oct  7 10:33:34 np0005473739 nova_compute[259550]: 2025-10-07 14:33:34.285 2 DEBUG nova.network.neutron [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:33:34 np0005473739 nova_compute[259550]: 2025-10-07 14:33:34.315 2 INFO nova.compute.manager [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Took 1.26 seconds to deallocate network for instance.
Oct  7 10:33:34 np0005473739 nova_compute[259550]: 2025-10-07 14:33:34.369 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:33:34 np0005473739 nova_compute[259550]: 2025-10-07 14:33:34.369 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:33:34 np0005473739 nova_compute[259550]: 2025-10-07 14:33:34.417 2 DEBUG nova.compute.manager [req-da7d357d-045d-4451-a071-2a38fbd9d749 req-8cf76ac2-e8bc-4418-b124-9f154619f5f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Received event network-vif-deleted-c6894b29-6b20-445c-991a-9aefefb3823c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:33:34 np0005473739 nova_compute[259550]: 2025-10-07 14:33:34.454 2 DEBUG oslo_concurrency.processutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:33:34 np0005473739 zen_liskov[374925]: {
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:    "0": [
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:        {
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "devices": [
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "/dev/loop3"
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            ],
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_name": "ceph_lv0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_size": "21470642176",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "name": "ceph_lv0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "tags": {
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.cluster_name": "ceph",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.crush_device_class": "",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.encrypted": "0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.osd_id": "0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.type": "block",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.vdo": "0"
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            },
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "type": "block",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "vg_name": "ceph_vg0"
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:        }
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:    ],
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:    "1": [
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:        {
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "devices": [
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "/dev/loop4"
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            ],
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_name": "ceph_lv1",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_size": "21470642176",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "name": "ceph_lv1",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "tags": {
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.cluster_name": "ceph",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.crush_device_class": "",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.encrypted": "0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.osd_id": "1",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.type": "block",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.vdo": "0"
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            },
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "type": "block",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "vg_name": "ceph_vg1"
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:        }
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:    ],
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:    "2": [
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:        {
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "devices": [
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "/dev/loop5"
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            ],
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_name": "ceph_lv2",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_size": "21470642176",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "name": "ceph_lv2",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "tags": {
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.cluster_name": "ceph",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.crush_device_class": "",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.encrypted": "0",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.osd_id": "2",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.type": "block",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:                "ceph.vdo": "0"
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            },
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "type": "block",
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:            "vg_name": "ceph_vg2"
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:        }
Oct  7 10:33:34 np0005473739 zen_liskov[374925]:    ]
Oct  7 10:33:34 np0005473739 zen_liskov[374925]: }
Oct  7 10:33:34 np0005473739 systemd[1]: libpod-de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808.scope: Deactivated successfully.
Oct  7 10:33:34 np0005473739 podman[374909]: 2025-10-07 14:33:34.765418327 +0000 UTC m=+0.967725010 container died de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 10:33:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d1f4a21851a1b7766e07e28cb7f4f45f004b2e5413e2e86bdc69350af43fa289-merged.mount: Deactivated successfully.
Oct  7 10:33:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:33:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3425001993' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:33:34 np0005473739 nova_compute[259550]: 2025-10-07 14:33:34.879 2 DEBUG oslo_concurrency.processutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:34 np0005473739 nova_compute[259550]: 2025-10-07 14:33:34.886 2 DEBUG nova.compute.provider_tree [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:33:34 np0005473739 nova_compute[259550]: 2025-10-07 14:33:34.917 2 DEBUG nova.scheduler.client.report [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:33:34 np0005473739 nova_compute[259550]: 2025-10-07 14:33:34.938 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.569s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:34 np0005473739 nova_compute[259550]: 2025-10-07 14:33:34.975 2 INFO nova.scheduler.client.report [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance d3fa3175-2379-4c66-9d83-0a37f5559db8#033[00m
Oct  7 10:33:35 np0005473739 podman[374909]: 2025-10-07 14:33:35.026173324 +0000 UTC m=+1.228480007 container remove de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_liskov, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct  7 10:33:35 np0005473739 systemd[1]: libpod-conmon-de6d25b724b154aff6061cced2161779f7fb1ee579a67f305c98a26eec6de808.scope: Deactivated successfully.
Oct  7 10:33:35 np0005473739 nova_compute[259550]: 2025-10-07 14:33:35.063 2 DEBUG oslo_concurrency.lockutils [None req-879a9578-699c-490b-b963-4983c9f97220 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "d3fa3175-2379-4c66-9d83-0a37f5559db8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.701s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:35 np0005473739 nova_compute[259550]: 2025-10-07 14:33:35.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:35 np0005473739 podman[375109]: 2025-10-07 14:33:35.670380708 +0000 UTC m=+0.049258117 container create 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 10:33:35 np0005473739 systemd[1]: Started libpod-conmon-8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804.scope.
Oct  7 10:33:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:33:35 np0005473739 podman[375109]: 2025-10-07 14:33:35.640457128 +0000 UTC m=+0.019334557 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:33:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:35.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '33'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:35 np0005473739 podman[375109]: 2025-10-07 14:33:35.830140166 +0000 UTC m=+0.209017605 container init 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:33:35 np0005473739 nova_compute[259550]: 2025-10-07 14:33:35.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:35 np0005473739 podman[375109]: 2025-10-07 14:33:35.837629496 +0000 UTC m=+0.216506905 container start 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:33:35 np0005473739 peaceful_feistel[375125]: 167 167
Oct  7 10:33:35 np0005473739 systemd[1]: libpod-8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804.scope: Deactivated successfully.
Oct  7 10:33:35 np0005473739 podman[375109]: 2025-10-07 14:33:35.914949363 +0000 UTC m=+0.293826772 container attach 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 10:33:35 np0005473739 podman[375109]: 2025-10-07 14:33:35.916100583 +0000 UTC m=+0.294977992 container died 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 10:33:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-03cbdda2e848ef06b00ccdb729eb1ccdb9b01341a744b6453b6ab9346b1b28f6-merged.mount: Deactivated successfully.
Oct  7 10:33:36 np0005473739 podman[375109]: 2025-10-07 14:33:36.148176384 +0000 UTC m=+0.527053793 container remove 8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_feistel, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 10:33:36 np0005473739 systemd[1]: libpod-conmon-8d5314086025ff5e8fcfe031589838ce197f8dc661e65b8c93db44897290f804.scope: Deactivated successfully.
Oct  7 10:33:36 np0005473739 nova_compute[259550]: 2025-10-07 14:33:36.246 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2135: 305 pgs: 305 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 16 KiB/s wr, 102 op/s
Oct  7 10:33:36 np0005473739 podman[375150]: 2025-10-07 14:33:36.368513943 +0000 UTC m=+0.081683375 container create 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:33:36 np0005473739 podman[375150]: 2025-10-07 14:33:36.312449905 +0000 UTC m=+0.025619377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:33:36 np0005473739 systemd[1]: Started libpod-conmon-761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde.scope.
Oct  7 10:33:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:33:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4e5e9c48a50d999c67364d5a1620eab42d3849f90c37053280fe64597428ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4e5e9c48a50d999c67364d5a1620eab42d3849f90c37053280fe64597428ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4e5e9c48a50d999c67364d5a1620eab42d3849f90c37053280fe64597428ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d4e5e9c48a50d999c67364d5a1620eab42d3849f90c37053280fe64597428ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:36 np0005473739 podman[375150]: 2025-10-07 14:33:36.492996488 +0000 UTC m=+0.206165920 container init 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:33:36 np0005473739 podman[375150]: 2025-10-07 14:33:36.501385082 +0000 UTC m=+0.214554504 container start 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:33:36 np0005473739 podman[375150]: 2025-10-07 14:33:36.510673471 +0000 UTC m=+0.223842923 container attach 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.163 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]: {
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "osd_id": 2,
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "type": "bluestore"
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:    },
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "osd_id": 1,
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "type": "bluestore"
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:    },
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "osd_id": 0,
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:        "type": "bluestore"
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]:    }
Oct  7 10:33:37 np0005473739 keen_chaplygin[375167]: }
Oct  7 10:33:37 np0005473739 systemd[1]: libpod-761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde.scope: Deactivated successfully.
Oct  7 10:33:37 np0005473739 systemd[1]: libpod-761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde.scope: Consumed 1.009s CPU time.
Oct  7 10:33:37 np0005473739 podman[375150]: 2025-10-07 14:33:37.509363726 +0000 UTC m=+1.222533148 container died 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 10:33:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0d4e5e9c48a50d999c67364d5a1620eab42d3849f90c37053280fe64597428ac-merged.mount: Deactivated successfully.
Oct  7 10:33:37 np0005473739 podman[375150]: 2025-10-07 14:33:37.572871933 +0000 UTC m=+1.286041355 container remove 761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_chaplygin, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 10:33:37 np0005473739 systemd[1]: libpod-conmon-761d0897dd13e5bf1b78c7d0ddbe13761891209458d24adc3bb2a188d3e2dbde.scope: Deactivated successfully.
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b02e064d-cb06-4a3f-9c24-3e1815d9147b does not exist
Oct  7 10:33:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d12b17b3-f191-4cbf-a851-f6d02ccdae28 does not exist
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #102. Immutable memtables: 0.
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.623457) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 59] Flushing memtable with next log file: 102
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617623491, "job": 59, "event": "flush_started", "num_memtables": 1, "num_entries": 1648, "num_deletes": 251, "total_data_size": 2587203, "memory_usage": 2627536, "flush_reason": "Manual Compaction"}
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 59] Level-0 flush table #103: started
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617635520, "cf_name": "default", "job": 59, "event": "table_file_creation", "file_number": 103, "file_size": 2517267, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 43467, "largest_seqno": 45114, "table_properties": {"data_size": 2509778, "index_size": 4432, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15615, "raw_average_key_size": 19, "raw_value_size": 2494730, "raw_average_value_size": 3186, "num_data_blocks": 199, "num_entries": 783, "num_filter_entries": 783, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847454, "oldest_key_time": 1759847454, "file_creation_time": 1759847617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 103, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 59] Flush lasted 12108 microseconds, and 5389 cpu microseconds.
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.635561) [db/flush_job.cc:967] [default] [JOB 59] Level-0 flush table #103: 2517267 bytes OK
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.635578) [db/memtable_list.cc:519] [default] Level-0 commit table #103 started
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.636997) [db/memtable_list.cc:722] [default] Level-0 commit table #103: memtable #1 done
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.637009) EVENT_LOG_v1 {"time_micros": 1759847617637005, "job": 59, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.637024) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 59] Try to delete WAL files size 2580096, prev total WAL file size 2580096, number of live WAL files 2.
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000099.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.637773) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034303136' seq:72057594037927935, type:22 .. '7061786F730034323638' seq:0, type:0; will stop at (end)
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 60] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 59 Base level 0, inputs: [103(2458KB)], [101(7732KB)]
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617637819, "job": 60, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [103], "files_L6": [101], "score": -1, "input_data_size": 10435542, "oldest_snapshot_seqno": -1}
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.636 2 DEBUG nova.compute.manager [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-changed-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.637 2 DEBUG nova.compute.manager [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing instance network info cache due to event network-changed-52f43128-c899-4d76-9e65-c99941c834d4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.637 2 DEBUG oslo_concurrency.lockutils [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.637 2 DEBUG oslo_concurrency.lockutils [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.637 2 DEBUG nova.network.neutron [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Refreshing network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 60] Generated table #104: 6563 keys, 8724631 bytes, temperature: kUnknown
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617683395, "cf_name": "default", "job": 60, "event": "table_file_creation", "file_number": 104, "file_size": 8724631, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8680962, "index_size": 26149, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 16453, "raw_key_size": 170677, "raw_average_key_size": 26, "raw_value_size": 8563505, "raw_average_value_size": 1304, "num_data_blocks": 1022, "num_entries": 6563, "num_filter_entries": 6563, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847617, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 104, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.683604) [db/compaction/compaction_job.cc:1663] [default] [JOB 60] Compacted 1@0 + 1@6 files to L6 => 8724631 bytes
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.684726) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 228.7 rd, 191.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 7.6 +0.0 blob) out(8.3 +0.0 blob), read-write-amplify(7.6) write-amplify(3.5) OK, records in: 7077, records dropped: 514 output_compression: NoCompression
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.684740) EVENT_LOG_v1 {"time_micros": 1759847617684733, "job": 60, "event": "compaction_finished", "compaction_time_micros": 45633, "compaction_time_cpu_micros": 19680, "output_level": 6, "num_output_files": 1, "total_output_size": 8724631, "num_input_records": 7077, "num_output_records": 6563, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000103.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617685189, "job": 60, "event": "table_file_deletion", "file_number": 103}
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000101.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847617686309, "job": 60, "event": "table_file_deletion", "file_number": 101}
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.637676) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.686455) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.686460) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.686462) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.686463) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:33:37.686464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.812 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.812 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.813 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "0af0082a-1adc-40e7-b254-88e03182e802-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.813 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.813 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.816 2 INFO nova.compute.manager [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Terminating instance#033[00m
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.817 2 DEBUG nova.compute.manager [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:33:37 np0005473739 kernel: tap52f43128-c8 (unregistering): left promiscuous mode
Oct  7 10:33:37 np0005473739 NetworkManager[44949]: <info>  [1759847617.8693] device (tap52f43128-c8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:33:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:37Z|01078|binding|INFO|Releasing lport 52f43128-c899-4d76-9e65-c99941c834d4 from this chassis (sb_readonly=0)
Oct  7 10:33:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:37Z|01079|binding|INFO|Setting lport 52f43128-c899-4d76-9e65-c99941c834d4 down in Southbound
Oct  7 10:33:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:37Z|01080|binding|INFO|Removing iface tap52f43128-c8 ovn-installed in OVS
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:37.899 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], port_security=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe52:178c/64', 'neutron:device_id': '0af0082a-1adc-40e7-b254-88e03182e802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=52f43128-c899-4d76-9e65-c99941c834d4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:33:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:37.903 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 52f43128-c899-4d76-9e65-c99941c834d4 in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc unbound from our chassis#033[00m
Oct  7 10:33:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:37.904 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:33:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:37.905 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0da81019-f4da-4e04-b6c9-3145b1ff1c41]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:37.906 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc namespace which is not needed anymore#033[00m
Oct  7 10:33:37 np0005473739 nova_compute[259550]: 2025-10-07 14:33:37.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:37 np0005473739 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Deactivated successfully.
Oct  7 10:33:37 np0005473739 systemd[1]: machine-qemu\x2d132\x2dinstance\x2d00000069.scope: Consumed 16.057s CPU time.
Oct  7 10:33:37 np0005473739 systemd-machined[214580]: Machine qemu-132-instance-00000069 terminated.
Oct  7 10:33:38 np0005473739 kernel: tap52f43128-c8: entered promiscuous mode
Oct  7 10:33:38 np0005473739 NetworkManager[44949]: <info>  [1759847618.0372] manager: (tap52f43128-c8): new Tun device (/org/freedesktop/NetworkManager/Devices/437)
Oct  7 10:33:38 np0005473739 kernel: tap52f43128-c8 (unregistering): left promiscuous mode
Oct  7 10:33:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:38Z|01081|binding|INFO|Claiming lport 52f43128-c899-4d76-9e65-c99941c834d4 for this chassis.
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:38Z|01082|binding|INFO|52f43128-c899-4d76-9e65-c99941c834d4: Claiming fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c
Oct  7 10:33:38 np0005473739 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [NOTICE]   (370753) : haproxy version is 2.8.14-c23fe91
Oct  7 10:33:38 np0005473739 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [NOTICE]   (370753) : path to executable is /usr/sbin/haproxy
Oct  7 10:33:38 np0005473739 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [WARNING]  (370753) : Exiting Master process...
Oct  7 10:33:38 np0005473739 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [WARNING]  (370753) : Exiting Master process...
Oct  7 10:33:38 np0005473739 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [ALERT]    (370753) : Current worker (370755) exited with code 143 (Terminated)
Oct  7 10:33:38 np0005473739 neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc[370748]: [WARNING]  (370753) : All workers exited. Exiting... (0)
Oct  7 10:33:38 np0005473739 systemd[1]: libpod-008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b.scope: Deactivated successfully.
Oct  7 10:33:38 np0005473739 conmon[370748]: conmon 008e9149d07e11d61fbd <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b.scope/container/memory.events
Oct  7 10:33:38 np0005473739 podman[375286]: 2025-10-07 14:33:38.06187819 +0000 UTC m=+0.054092117 container died 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:33:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:38Z|01083|binding|INFO|Setting lport 52f43128-c899-4d76-9e65-c99941c834d4 ovn-installed in OVS
Oct  7 10:33:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:38Z|01084|if_status|INFO|Dropped 2 log messages in last 627 seconds (most recently, 627 seconds ago) due to excessive rate
Oct  7 10:33:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:38Z|01085|if_status|INFO|Not setting lport 52f43128-c899-4d76-9e65-c99941c834d4 down as sb is readonly
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.064 2 INFO nova.virt.libvirt.driver [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Instance destroyed successfully.#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.064 2 DEBUG nova.objects.instance [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 0af0082a-1adc-40e7-b254-88e03182e802 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:38Z|01086|binding|INFO|Releasing lport 52f43128-c899-4d76-9e65-c99941c834d4 from this chassis (sb_readonly=0)
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.073 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], port_security=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe52:178c/64', 'neutron:device_id': '0af0082a-1adc-40e7-b254-88e03182e802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=52f43128-c899-4d76-9e65-c99941c834d4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.085 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], port_security=['fa:16:3e:52:17:8c 10.100.0.13 2001:db8::f816:3eff:fe52:178c'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8::f816:3eff:fe52:178c/64', 'neutron:device_id': '0af0082a-1adc-40e7-b254-88e03182e802', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3172dca0-91ca-4f82-9e12-53d0e4f57177', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=33b7c271-47f1-414b-a27c-b99f92f4e7c6, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=52f43128-c899-4d76-9e65-c99941c834d4) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.091 2 DEBUG nova.virt.libvirt.vif [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:32:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1273910432',display_name='tempest-TestGettingAddress-server-1273910432',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1273910432',id=105,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKZcRW+cTahv1daeBOuj+cvRjMyW8WsW20MGafd+5cNwSmbuhLgTtTRGaDrZ7xqiE/Dwq9wlNoEqtjkRltl/UATKHeenR7LAEwYRgqoAMdni0PUi0HrATcyFAYghdo06gw==',key_name='tempest-TestGettingAddress-855853305',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:32:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-v3y9hbt0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:32:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=0af0082a-1adc-40e7-b254-88e03182e802,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.091 2 DEBUG nova.network.os_vif_util [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.197", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.092 2 DEBUG nova.network.os_vif_util [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.093 2 DEBUG os_vif [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.095 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap52f43128-c8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.101 2 INFO os_vif [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:52:17:8c,bridge_name='br-int',has_traffic_filtering=True,id=52f43128-c899-4d76-9e65-c99941c834d4,network=Network(1da6903e-17a3-4ac8-b5a0-50ed4bf377bc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap52f43128-c8')#033[00m
Oct  7 10:33:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2e23988a13804a7caa057b6ca3196289d2a8449fff2e969f7ade775e8fde0162-merged.mount: Deactivated successfully.
Oct  7 10:33:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b-userdata-shm.mount: Deactivated successfully.
Oct  7 10:33:38 np0005473739 podman[375286]: 2025-10-07 14:33:38.123057835 +0000 UTC m=+0.115271732 container cleanup 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:33:38 np0005473739 systemd[1]: libpod-conmon-008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b.scope: Deactivated successfully.
Oct  7 10:33:38 np0005473739 podman[375313]: 2025-10-07 14:33:38.185675107 +0000 UTC m=+0.061144514 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 10:33:38 np0005473739 podman[375336]: 2025-10-07 14:33:38.194830292 +0000 UTC m=+0.050906561 container remove 008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.201 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e0691181-72cf-43c8-8377-51468cf99f02]: (4, ('Tue Oct  7 02:33:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc (008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b)\n008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b\nTue Oct  7 02:33:38 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc (008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b)\n008e9149d07e11d61fbd7bb82461b7b2d2a9a4f1ba06e4b00f15d6f38a03931b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.204 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2b21b002-e057-47c0-bcf9-abc70715960a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.205 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1da6903e-10, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:38 np0005473739 kernel: tap1da6903e-10: left promiscuous mode
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.224 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e75ee102-1b08-4ea9-b58d-8ecdc8add1cb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.248 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[94c10817-d68e-4f09-a447-4aa71e4ee962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.251 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7e0b9da-c571-4ee7-8961-4e3099c44ddf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2136: 305 pgs: 305 active+clean; 121 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 16 KiB/s wr, 102 op/s
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7126b42a-74b3-4dac-a45a-6ff5f44844e4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 812632, 'reachable_time': 37875, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 375391, 'error': None, 'target': 'ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:38 np0005473739 podman[375337]: 2025-10-07 14:33:38.272030065 +0000 UTC m=+0.113848183 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:33:38 np0005473739 systemd[1]: run-netns-ovnmeta\x2d1da6903e\x2d17a3\x2d4ac8\x2db5a0\x2d50ed4bf377bc.mount: Deactivated successfully.
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.274 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-1da6903e-17a3-4ac8-b5a0-50ed4bf377bc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.274 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[2d6939fd-9b28-4d75-b8ca-58c1aa6a6a33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.275 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 52f43128-c899-4d76-9e65-c99941c834d4 in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc unbound from our chassis#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.276 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.277 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8f42e266-6273-4760-bde9-626aa7a5b7f2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.277 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 52f43128-c899-4d76-9e65-c99941c834d4 in datapath 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc unbound from our chassis#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.278 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 1da6903e-17a3-4ac8-b5a0-50ed4bf377bc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:33:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:38.279 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42387c74-8f5d-43c9-a732-e1ba148e47e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.556 2 INFO nova.virt.libvirt.driver [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Deleting instance files /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802_del#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.557 2 INFO nova.virt.libvirt.driver [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Deletion of /var/lib/nova/instances/0af0082a-1adc-40e7-b254-88e03182e802_del complete#033[00m
Oct  7 10:33:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.810 2 INFO nova.compute.manager [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Took 0.99 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.810 2 DEBUG oslo.service.loopingcall [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.810 2 DEBUG nova.compute.manager [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:33:38 np0005473739 nova_compute[259550]: 2025-10-07 14:33:38.811 2 DEBUG nova.network.neutron [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:33:39 np0005473739 nova_compute[259550]: 2025-10-07 14:33:39.685 2 DEBUG nova.network.neutron [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:33:39 np0005473739 nova_compute[259550]: 2025-10-07 14:33:39.704 2 INFO nova.compute.manager [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Took 0.89 seconds to deallocate network for instance.#033[00m
Oct  7 10:33:39 np0005473739 nova_compute[259550]: 2025-10-07 14:33:39.753 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:39 np0005473739 nova_compute[259550]: 2025-10-07 14:33:39.754 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:39 np0005473739 nova_compute[259550]: 2025-10-07 14:33:39.796 2 DEBUG nova.compute.manager [req-89cd28da-5b8a-48a5-93aa-6d8206645dc9 req-25e8dc78-c521-4c5b-951a-923a7af5a05a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Received event network-vif-deleted-52f43128-c899-4d76-9e65-c99941c834d4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:39 np0005473739 nova_compute[259550]: 2025-10-07 14:33:39.813 2 DEBUG oslo_concurrency.processutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:33:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3463791443' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:33:40 np0005473739 nova_compute[259550]: 2025-10-07 14:33:40.241 2 DEBUG oslo_concurrency.processutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.428s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:40 np0005473739 nova_compute[259550]: 2025-10-07 14:33:40.246 2 DEBUG nova.compute.provider_tree [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:33:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2137: 305 pgs: 305 active+clean; 63 MiB data, 796 MiB used, 59 GiB / 60 GiB avail; 90 KiB/s rd, 17 KiB/s wr, 124 op/s
Oct  7 10:33:40 np0005473739 nova_compute[259550]: 2025-10-07 14:33:40.271 2 DEBUG nova.scheduler.client.report [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:33:40 np0005473739 nova_compute[259550]: 2025-10-07 14:33:40.298 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:40 np0005473739 nova_compute[259550]: 2025-10-07 14:33:40.326 2 INFO nova.scheduler.client.report [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 0af0082a-1adc-40e7-b254-88e03182e802#033[00m
Oct  7 10:33:40 np0005473739 nova_compute[259550]: 2025-10-07 14:33:40.436 2 DEBUG oslo_concurrency.lockutils [None req-538a7c5c-3c2f-41bc-bf39-6b4e28ac4fa0 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "0af0082a-1adc-40e7-b254-88e03182e802" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.624s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:40 np0005473739 nova_compute[259550]: 2025-10-07 14:33:40.989 2 DEBUG nova.network.neutron [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updated VIF entry in instance network info cache for port 52f43128-c899-4d76-9e65-c99941c834d4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:33:40 np0005473739 nova_compute[259550]: 2025-10-07 14:33:40.990 2 DEBUG nova.network.neutron [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Updating instance_info_cache with network_info: [{"id": "52f43128-c899-4d76-9e65-c99941c834d4", "address": "fa:16:3e:52:17:8c", "network": {"id": "1da6903e-17a3-4ac8-b5a0-50ed4bf377bc", "bridge": "br-int", "label": "tempest-network-smoke--278474882", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe52:178c", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap52f43128-c8", "ovs_interfaceid": "52f43128-c899-4d76-9e65-c99941c834d4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:33:41 np0005473739 nova_compute[259550]: 2025-10-07 14:33:41.009 2 DEBUG oslo_concurrency.lockutils [req-bca4eb52-319b-46ea-9208-ff07fcd02b41 req-823da5db-19e3-4503-8bde-ce66b04a249a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-0af0082a-1adc-40e7-b254-88e03182e802" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:33:42 np0005473739 nova_compute[259550]: 2025-10-07 14:33:42.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2138: 305 pgs: 305 active+clean; 41 MiB data, 785 MiB used, 59 GiB / 60 GiB avail; 82 KiB/s rd, 5.0 KiB/s wr, 121 op/s
Oct  7 10:33:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:43 np0005473739 nova_compute[259550]: 2025-10-07 14:33:43.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:44 np0005473739 nova_compute[259550]: 2025-10-07 14:33:44.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2139: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 66 KiB/s rd, 5.0 KiB/s wr, 103 op/s
Oct  7 10:33:45 np0005473739 nova_compute[259550]: 2025-10-07 14:33:45.803 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847610.8020964, d3fa3175-2379-4c66-9d83-0a37f5559db8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:33:45 np0005473739 nova_compute[259550]: 2025-10-07 14:33:45.804 2 INFO nova.compute.manager [-] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:33:45 np0005473739 nova_compute[259550]: 2025-10-07 14:33:45.832 2 DEBUG nova.compute.manager [None req-a52bd597-e808-4e32-bba6-de73c46aaf58 - - - - - -] [instance: d3fa3175-2379-4c66-9d83-0a37f5559db8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:45 np0005473739 nova_compute[259550]: 2025-10-07 14:33:45.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:33:45 np0005473739 nova_compute[259550]: 2025-10-07 14:33:45.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:33:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2140: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.5 KiB/s wr, 39 op/s
Oct  7 10:33:46 np0005473739 nova_compute[259550]: 2025-10-07 14:33:46.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:33:46 np0005473739 nova_compute[259550]: 2025-10-07 14:33:46.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:33:46 np0005473739 nova_compute[259550]: 2025-10-07 14:33:46.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:33:47 np0005473739 nova_compute[259550]: 2025-10-07 14:33:47.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:47 np0005473739 nova_compute[259550]: 2025-10-07 14:33:47.854 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:47 np0005473739 nova_compute[259550]: 2025-10-07 14:33:47.855 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:47 np0005473739 nova_compute[259550]: 2025-10-07 14:33:47.891 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:33:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:47 np0005473739 nova_compute[259550]: 2025-10-07 14:33:47.984 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:47 np0005473739 nova_compute[259550]: 2025-10-07 14:33:47.985 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:47 np0005473739 nova_compute[259550]: 2025-10-07 14:33:47.993 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:33:47 np0005473739 nova_compute[259550]: 2025-10-07 14:33:47.994 2 INFO nova.compute.claims [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2141: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.323 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:33:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3220293337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.755 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.761 2 DEBUG nova.compute.provider_tree [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.788 2 DEBUG nova.scheduler.client.report [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.834 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.849s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.836 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.888 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.888 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.909 2 INFO nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.930 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:33:48 np0005473739 nova_compute[259550]: 2025-10-07 14:33:48.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.040 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.042 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.042 2 INFO nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Creating image(s)#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.063 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.083 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.103 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.107 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.145 2 DEBUG nova.policy [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c505d04148e44b8b93ceab0e3cedef4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.180 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.181 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.182 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.182 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.265 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.268 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:49 np0005473739 nova_compute[259550]: 2025-10-07 14:33:49.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.030 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Successfully created port: ea75e1fb-474b-484e-9a52-73939d17da1b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.135 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.136 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.162 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.238 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.238 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.244 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.244 2 INFO nova.compute.claims [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:33:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2142: 305 pgs: 305 active+clean; 41 MiB data, 750 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.331 2 DEBUG nova.scheduler.client.report [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.366 2 DEBUG nova.scheduler.client.report [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.366 2 DEBUG nova.compute.provider_tree [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.386 2 DEBUG nova.scheduler.client.report [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.412 2 DEBUG nova.scheduler.client.report [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.473 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.878 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.610s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:50 np0005473739 nova_compute[259550]: 2025-10-07 14:33:50.943 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.041 2 DEBUG nova.objects.instance [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:33:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:33:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/718782002' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.105 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.632s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.110 2 DEBUG nova.compute.provider_tree [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.206 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.207 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Ensure instance console log exists: /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.207 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.207 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.208 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.220 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Successfully updated port: ea75e1fb-474b-484e-9a52-73939d17da1b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.226 2 DEBUG nova.scheduler.client.report [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.254 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.254 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.254 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.259 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.021s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.260 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.312 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.313 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.342 2 INFO nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.360 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.458 2 DEBUG nova.compute.manager [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.459 2 DEBUG nova.compute.manager [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing instance network info cache due to event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.459 2 DEBUG oslo_concurrency.lockutils [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.461 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.462 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.462 2 INFO nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Creating image(s)
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.482 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.505 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.525 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.528 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.599 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.600 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.601 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.601 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.624 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.627 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.698 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.939 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.312s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:33:51 np0005473739 nova_compute[259550]: 2025-10-07 14:33:51.998 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:33:52 np0005473739 nova_compute[259550]: 2025-10-07 14:33:52.002 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:33:52 np0005473739 nova_compute[259550]: 2025-10-07 14:33:52.093 2 DEBUG nova.objects.instance [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:33:52 np0005473739 nova_compute[259550]: 2025-10-07 14:33:52.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:33:52 np0005473739 nova_compute[259550]: 2025-10-07 14:33:52.216 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:33:52 np0005473739 nova_compute[259550]: 2025-10-07 14:33:52.217 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Ensure instance console log exists: /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:33:52 np0005473739 nova_compute[259550]: 2025-10-07 14:33:52.217 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:33:52 np0005473739 nova_compute[259550]: 2025-10-07 14:33:52.218 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:33:52 np0005473739 nova_compute[259550]: 2025-10-07 14:33:52.218 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:33:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2143: 305 pgs: 305 active+clean; 45 MiB data, 754 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 24 KiB/s wr, 5 op/s
Oct  7 10:33:52 np0005473739 nova_compute[259550]: 2025-10-07 14:33:52.296 2 DEBUG nova.policy [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:33:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:33:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.062 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847618.0612252, 0af0082a-1adc-40e7-b254-88e03182e802 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.063 2 INFO nova.compute.manager [-] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] VM Stopped (Lifecycle Event)
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.101 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.250 2 DEBUG nova.compute.manager [None req-05363ca4-7460-44cc-a1f3-bc7cd46d5479 - - - - - -] [instance: 0af0082a-1adc-40e7-b254-88e03182e802] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.794 2 DEBUG nova.network.neutron [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.898 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.899 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance network_info: |[{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.899 2 DEBUG oslo_concurrency.lockutils [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.900 2 DEBUG nova.network.neutron [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.904 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Start _get_guest_xml network_info=[{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}}
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.911 2 WARNING nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.916 2 DEBUG nova.virt.libvirt.host [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.917 2 DEBUG nova.virt.libvirt.host [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.920 2 DEBUG nova.virt.libvirt.host [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.921 2 DEBUG nova.virt.libvirt.host [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.922 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.922 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.923 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.923 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.924 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.924 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.924 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.924 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.925 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.925 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.925 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.925 2 DEBUG nova.virt.hardware [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 10:33:53 np0005473739 nova_compute[259550]: 2025-10-07 14:33:53.929 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:33:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2144: 305 pgs: 305 active+clean; 97 MiB data, 775 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 52 op/s
Oct  7 10:33:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:33:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089750526' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:33:54 np0005473739 nova_compute[259550]: 2025-10-07 14:33:54.370 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:54 np0005473739 nova_compute[259550]: 2025-10-07 14:33:54.393 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:33:54 np0005473739 nova_compute[259550]: 2025-10-07 14:33:54.397 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:33:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1585117156' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:33:54 np0005473739 nova_compute[259550]: 2025-10-07 14:33:54.845 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:54 np0005473739 nova_compute[259550]: 2025-10-07 14:33:54.848 2 DEBUG nova.virt.libvirt.vif [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-797854475',display_name='tempest-TestNetworkAdvancedServerOps-server-797854475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-797854475',id=108,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4Lkj2HHIhO2PrQ11lMJZ07v9Q3dgwfIb7H7MpsdLLDdtvBovL59oN2/XacURcr+DjY2ldySrFXIZcVMnzxGBRmm67eVPlJbvF3TQEBfQs6C/dhmbAY+AHYoEt1hPfXxw==',key_name='tempest-TestNetworkAdvancedServerOps-512345044',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-qabao1fo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:33:48Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:33:54 np0005473739 nova_compute[259550]: 2025-10-07 14:33:54.848 2 DEBUG nova.network.os_vif_util [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:33:54 np0005473739 nova_compute[259550]: 2025-10-07 14:33:54.850 2 DEBUG nova.network.os_vif_util [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:33:54 np0005473739 nova_compute[259550]: 2025-10-07 14:33:54.852 2 DEBUG nova.objects.instance [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:33:54 np0005473739 nova_compute[259550]: 2025-10-07 14:33:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:33:55 np0005473739 podman[375855]: 2025-10-07 14:33:55.07282427 +0000 UTC m=+0.062479600 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:33:55 np0005473739 podman[375856]: 2025-10-07 14:33:55.093857952 +0000 UTC m=+0.081045886 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.313 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  <uuid>a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8</uuid>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  <name>instance-0000006c</name>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-797854475</nova:name>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:33:53</nova:creationTime>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <nova:port uuid="ea75e1fb-474b-484e-9a52-73939d17da1b">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <entry name="serial">a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8</entry>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <entry name="uuid">a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8</entry>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:57:91:21"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <target dev="tapea75e1fb-47"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/console.log" append="off"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:33:55 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:33:55 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:33:55 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:33:55 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.314 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Preparing to wait for external event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.315 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.315 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.315 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.316 2 DEBUG nova.virt.libvirt.vif [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-797854475',display_name='tempest-TestNetworkAdvancedServerOps-server-797854475',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-797854475',id=108,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4Lkj2HHIhO2PrQ11lMJZ07v9Q3dgwfIb7H7MpsdLLDdtvBovL59oN2/XacURcr+DjY2ldySrFXIZcVMnzxGBRmm67eVPlJbvF3TQEBfQs6C/dhmbAY+AHYoEt1hPfXxw==',key_name='tempest-TestNetworkAdvancedServerOps-512345044',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-qabao1fo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:33:48Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.316 2 DEBUG nova.network.os_vif_util [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.317 2 DEBUG nova.network.os_vif_util [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.317 2 DEBUG os_vif [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.318 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.319 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapea75e1fb-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.322 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapea75e1fb-47, col_values=(('external_ids', {'iface-id': 'ea75e1fb-474b-484e-9a52-73939d17da1b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:91:21', 'vm-uuid': 'a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:55 np0005473739 NetworkManager[44949]: <info>  [1759847635.3241] manager: (tapea75e1fb-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/438)
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.331 2 INFO os_vif [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47')#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.705 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.706 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.706 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.706 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.706 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:55 np0005473739 nova_compute[259550]: 2025-10-07 14:33:55.741 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Successfully created port: 13fb3ec4-0080-406c-8cea-6620016c0513 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:33:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:33:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2366465579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:33:56 np0005473739 nova_compute[259550]: 2025-10-07 14:33:56.134 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2145: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct  7 10:33:56 np0005473739 nova_compute[259550]: 2025-10-07 14:33:56.629 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:33:56 np0005473739 nova_compute[259550]: 2025-10-07 14:33:56.629 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:33:56 np0005473739 nova_compute[259550]: 2025-10-07 14:33:56.629 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:57:91:21, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:33:56 np0005473739 nova_compute[259550]: 2025-10-07 14:33:56.630 2 INFO nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Using config drive#033[00m
Oct  7 10:33:56 np0005473739 nova_compute[259550]: 2025-10-07 14:33:56.651 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:33:56 np0005473739 nova_compute[259550]: 2025-10-07 14:33:56.784 2 DEBUG nova.network.neutron [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updated VIF entry in instance network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:33:56 np0005473739 nova_compute[259550]: 2025-10-07 14:33:56.785 2 DEBUG nova.network.neutron [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.015 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.015 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.144 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.145 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3760MB free_disk=59.96270751953125GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.145 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.146 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.293 2 DEBUG oslo_concurrency.lockutils [req-19e34f79-6c50-4de8-8cec-f5afc12c4da8 req-deb9d6fa-ffb1-4e44-8b2a-0961a98ab5b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.537 2 INFO nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Creating config drive at /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.541 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9tyabgrq execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.611 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.611 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.612 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.612 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.672 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.704 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9tyabgrq" returned: 0 in 0.163s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.734 2 DEBUG nova.storage.rbd_utils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.739 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:33:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:33:57 np0005473739 nova_compute[259550]: 2025-10-07 14:33:57.999 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Successfully updated port: 13fb3ec4-0080-406c-8cea-6620016c0513 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.131 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.131 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.131 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:33:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:33:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2304762170' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.157 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.163 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.201 2 DEBUG nova.compute.manager [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.201 2 DEBUG nova.compute.manager [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing instance network info cache due to event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.202 2 DEBUG oslo_concurrency.lockutils [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.256 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.259 2 DEBUG oslo_concurrency.processutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.260 2 INFO nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Deleting local config drive /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8/disk.config because it was imported into RBD.#033[00m
Oct  7 10:33:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2146: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct  7 10:33:58 np0005473739 kernel: tapea75e1fb-47: entered promiscuous mode
Oct  7 10:33:58 np0005473739 NetworkManager[44949]: <info>  [1759847638.3057] manager: (tapea75e1fb-47): new Tun device (/org/freedesktop/NetworkManager/Devices/439)
Oct  7 10:33:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:58Z|01087|binding|INFO|Claiming lport ea75e1fb-474b-484e-9a52-73939d17da1b for this chassis.
Oct  7 10:33:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:58Z|01088|binding|INFO|ea75e1fb-474b-484e-9a52-73939d17da1b: Claiming fa:16:3e:57:91:21 10.100.0.14
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.319 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:91:21 10.100.0.14'], port_security=['fa:16:3e:57:91:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '37af7aab-7feb-47ed-a12c-8267b21bb6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a239abd8-f455-4712-a289-4273b897ab8d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ea75e1fb-474b-484e-9a52-73939d17da1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.320 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ea75e1fb-474b-484e-9a52-73939d17da1b in datapath 256ae6d5-60ea-4d02-8c43-2f55874712c6 bound to our chassis#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.321 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 256ae6d5-60ea-4d02-8c43-2f55874712c6#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.334 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[62fbd5f1-259c-4f40-9743-20162d41c9cd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.336 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap256ae6d5-61 in ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:33:58 np0005473739 systemd-udevd[376013]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.338 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap256ae6d5-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.339 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[14e5cf2d-3a6d-4f88-b187-0b961e016d89]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.339 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[63444818-c62b-4d46-acef-9d0fcf5262da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 systemd-machined[214580]: New machine qemu-135-instance-0000006c.
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.350 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f21690eb-cda3-4095-9bea-320ddb0e66cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 NetworkManager[44949]: <info>  [1759847638.3557] device (tapea75e1fb-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:33:58 np0005473739 NetworkManager[44949]: <info>  [1759847638.3563] device (tapea75e1fb-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.356 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.356 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:58 np0005473739 systemd[1]: Started Virtual Machine qemu-135-instance-0000006c.
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.377 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1cc06042-e078-47e0-a027-8ab823788bb5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:58Z|01089|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b ovn-installed in OVS
Oct  7 10:33:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:58Z|01090|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b up in Southbound
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.409 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[63faec56-a293-4eb4-ad53-fec3964f8ee6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.415 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68a0ab1e-a647-440a-b3c9-9a7be48caba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 NetworkManager[44949]: <info>  [1759847638.4163] manager: (tap256ae6d5-60): new Veth device (/org/freedesktop/NetworkManager/Devices/440)
Oct  7 10:33:58 np0005473739 systemd-udevd[376016]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.443 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[14608631-c5d2-45cd-bd1d-3837c00ac802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.448 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[02f8b1d7-389a-4605-bcce-8444bd93e649]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 NetworkManager[44949]: <info>  [1759847638.4784] device (tap256ae6d5-60): carrier: link connected
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.484 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[25eab7b9-47db-4a74-bc7f-475f7dae5a8d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.487 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.508 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2b54244d-b3c4-47f5-acde-1e4e564eba59]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap256ae6d5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:7e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822204, 'reachable_time': 30648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376045, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.526 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[73fb00c9-d0c0-478d-82c1-28fed2707f43]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:7e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 822204, 'tstamp': 822204}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376046, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.548 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fdc9048b-dc78-469a-a679-fdadff8aaa5c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap256ae6d5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:7e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 315], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822204, 'reachable_time': 30648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376047, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.574 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f03be55-c1dd-4d0c-b4c7-ec8014b5e04c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.645 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0cefaa19-e07a-4e3e-a1ad-51d3c5de7c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.647 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap256ae6d5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.647 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.648 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap256ae6d5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:58 np0005473739 NetworkManager[44949]: <info>  [1759847638.6504] manager: (tap256ae6d5-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/441)
Oct  7 10:33:58 np0005473739 kernel: tap256ae6d5-60: entered promiscuous mode
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.654 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap256ae6d5-60, col_values=(('external_ids', {'iface-id': 'fcf1dc99-f823-4d28-9074-21a7bced7a57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:33:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:33:58Z|01091|binding|INFO|Releasing lport fcf1dc99-f823-4d28-9074-21a7bced7a57 from this chassis (sb_readonly=0)
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.669 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.670 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a80a7215-e745-4d82-b87d-ffed3d638a1d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.671 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-256ae6d5-60ea-4d02-8c43-2f55874712c6
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 256ae6d5-60ea-4d02-8c43-2f55874712c6
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:33:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:33:58.672 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'env', 'PROCESS_TAG=haproxy-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/256ae6d5-60ea-4d02-8c43-2f55874712c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.903 2 DEBUG nova.compute.manager [req-f43451f4-2b1e-42a1-90d1-bb81b1c0c968 req-0efb4040-89a9-4119-a075-d2b66f90c507 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.904 2 DEBUG oslo_concurrency.lockutils [req-f43451f4-2b1e-42a1-90d1-bb81b1c0c968 req-0efb4040-89a9-4119-a075-d2b66f90c507 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.904 2 DEBUG oslo_concurrency.lockutils [req-f43451f4-2b1e-42a1-90d1-bb81b1c0c968 req-0efb4040-89a9-4119-a075-d2b66f90c507 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.905 2 DEBUG oslo_concurrency.lockutils [req-f43451f4-2b1e-42a1-90d1-bb81b1c0c968 req-0efb4040-89a9-4119-a075-d2b66f90c507 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:33:58 np0005473739 nova_compute[259550]: 2025-10-07 14:33:58.905 2 DEBUG nova.compute.manager [req-f43451f4-2b1e-42a1-90d1-bb81b1c0c968 req-0efb4040-89a9-4119-a075-d2b66f90c507 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Processing event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:33:59 np0005473739 podman[376097]: 2025-10-07 14:33:59.036186943 +0000 UTC m=+0.059410798 container create 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 10:33:59 np0005473739 systemd[1]: Started libpod-conmon-3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f.scope.
Oct  7 10:33:59 np0005473739 podman[376097]: 2025-10-07 14:33:59.005263487 +0000 UTC m=+0.028487372 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:33:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:33:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3cca578f989b6450d529d2e7b4f782d731435dda711704c70dbd7335068f20d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:33:59 np0005473739 podman[376097]: 2025-10-07 14:33:59.137913011 +0000 UTC m=+0.161136866 container init 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:33:59 np0005473739 podman[376097]: 2025-10-07 14:33:59.143726797 +0000 UTC m=+0.166950662 container start 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:33:59 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [NOTICE]   (376140) : New worker (376142) forked
Oct  7 10:33:59 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [NOTICE]   (376140) : Loading success.
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.352 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.407 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.407 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.476 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:33:59 np0005473739 auditd[701]: Audit daemon rotating log files
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.553 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847639.5531995, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.554 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Started (Lifecycle Event)#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.557 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.563 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.568 2 INFO nova.virt.libvirt.driver [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance spawned successfully.#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.568 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.658 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.664 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.669 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.670 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.671 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.672 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.673 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.674 2 DEBUG nova.virt.libvirt.driver [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.760 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.761 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847639.553405, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.761 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.823 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.827 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847639.5611787, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.827 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.848 2 INFO nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Took 10.81 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.849 2 DEBUG nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.890 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:33:59 np0005473739 nova_compute[259550]: 2025-10-07 14:33:59.893 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:34:00 np0005473739 nova_compute[259550]: 2025-10-07 14:34:00.003 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:34:00 np0005473739 nova_compute[259550]: 2025-10-07 14:34:00.055 2 INFO nova.compute.manager [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Took 12.10 seconds to build instance.#033[00m
Oct  7 10:34:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:00.067 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:00.067 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:00.068 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:00 np0005473739 nova_compute[259550]: 2025-10-07 14:34:00.190 2 DEBUG oslo_concurrency.lockutils [None req-276f0b68-5445-403e-afef-4c7c4640e7e6 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2147: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.6 MiB/s wr, 57 op/s
Oct  7 10:34:00 np0005473739 nova_compute[259550]: 2025-10-07 14:34:00.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.014 2 DEBUG nova.network.neutron [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.042 2 DEBUG nova.compute.manager [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.043 2 DEBUG oslo_concurrency.lockutils [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.043 2 DEBUG oslo_concurrency.lockutils [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.044 2 DEBUG oslo_concurrency.lockutils [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.044 2 DEBUG nova.compute.manager [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.044 2 WARNING nova.compute.manager [req-78a23dc4-8b63-48b4-b927-3d23b806d783 req-155b380b-3685-4f0c-9c7b-f98bb55c92da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state None.#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.063 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.064 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Instance network_info: |[{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.064 2 DEBUG oslo_concurrency.lockutils [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.065 2 DEBUG nova.network.neutron [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.068 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Start _get_guest_xml network_info=[{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.072 2 WARNING nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.079 2 DEBUG nova.virt.libvirt.host [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.080 2 DEBUG nova.virt.libvirt.host [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.089 2 DEBUG nova.virt.libvirt.host [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.090 2 DEBUG nova.virt.libvirt.host [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.091 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.091 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.092 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.092 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.093 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.093 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.094 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.094 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.094 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.095 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.095 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.096 2 DEBUG nova.virt.hardware [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.098 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:34:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1866524971' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.640 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.675 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:01 np0005473739 nova_compute[259550]: 2025-10-07 14:34:01.684 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:34:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3393270145' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.174 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.176 2 DEBUG nova.virt.libvirt.vif [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:33:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213898697',display_name='tempest-TestNetworkBasicOps-server-1213898697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213898697',id=109,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJOXXnJfbNjLe/xM7CNQv3JpMjH7QLX8ezU0CsayXoRWsJliurdVjk2soP6dLM8VmWxLkM0XL+W32Oeh6Y4JueSCzXDjtscbAk00DGutpE2Eyr2ZzKKLBUUQM3YVtmhffw==',key_name='tempest-TestNetworkBasicOps-1417079346',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-6sgmyxmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:33:51Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=109b6c3d-5810-4b7d-a96d-4e1b64dc2daa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.177 2 DEBUG nova.network.os_vif_util [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.178 2 DEBUG nova.network.os_vif_util [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.179 2 DEBUG nova.objects.instance [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.225 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  <uuid>109b6c3d-5810-4b7d-a96d-4e1b64dc2daa</uuid>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  <name>instance-0000006d</name>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-1213898697</nova:name>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:34:01</nova:creationTime>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <nova:port uuid="13fb3ec4-0080-406c-8cea-6620016c0513">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <entry name="serial">109b6c3d-5810-4b7d-a96d-4e1b64dc2daa</entry>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <entry name="uuid">109b6c3d-5810-4b7d-a96d-4e1b64dc2daa</entry>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:a7:48:df"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <target dev="tap13fb3ec4-00"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/console.log" append="off"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:34:02 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:34:02 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:34:02 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:34:02 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.227 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Preparing to wait for external event network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.227 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.227 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.228 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.228 2 DEBUG nova.virt.libvirt.vif [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:33:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213898697',display_name='tempest-TestNetworkBasicOps-server-1213898697',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213898697',id=109,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJOXXnJfbNjLe/xM7CNQv3JpMjH7QLX8ezU0CsayXoRWsJliurdVjk2soP6dLM8VmWxLkM0XL+W32Oeh6Y4JueSCzXDjtscbAk00DGutpE2Eyr2ZzKKLBUUQM3YVtmhffw==',key_name='tempest-TestNetworkBasicOps-1417079346',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-6sgmyxmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:33:51Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=109b6c3d-5810-4b7d-a96d-4e1b64dc2daa,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.229 2 DEBUG nova.network.os_vif_util [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.231 2 DEBUG nova.network.os_vif_util [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.232 2 DEBUG os_vif [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.232 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.233 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap13fb3ec4-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.238 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap13fb3ec4-00, col_values=(('external_ids', {'iface-id': '13fb3ec4-0080-406c-8cea-6620016c0513', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a7:48:df', 'vm-uuid': '109b6c3d-5810-4b7d-a96d-4e1b64dc2daa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:02 np0005473739 NetworkManager[44949]: <info>  [1759847642.2427] manager: (tap13fb3ec4-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/442)
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.249 2 INFO os_vif [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00')#033[00m
Oct  7 10:34:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2148: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 414 KiB/s rd, 3.6 MiB/s wr, 72 op/s
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.336 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.336 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.336 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:a7:48:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.337 2 INFO nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Using config drive#033[00m
Oct  7 10:34:02 np0005473739 nova_compute[259550]: 2025-10-07 14:34:02.366 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:03 np0005473739 nova_compute[259550]: 2025-10-07 14:34:03.297 2 INFO nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Creating config drive at /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config#033[00m
Oct  7 10:34:03 np0005473739 nova_compute[259550]: 2025-10-07 14:34:03.302 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpamtu_6w6 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:03 np0005473739 nova_compute[259550]: 2025-10-07 14:34:03.445 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpamtu_6w6" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:03 np0005473739 nova_compute[259550]: 2025-10-07 14:34:03.469 2 DEBUG nova.storage.rbd_utils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:03 np0005473739 nova_compute[259550]: 2025-10-07 14:34:03.472 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:03 np0005473739 nova_compute[259550]: 2025-10-07 14:34:03.633 2 DEBUG oslo_concurrency.processutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:03 np0005473739 nova_compute[259550]: 2025-10-07 14:34:03.634 2 INFO nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Deleting local config drive /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa/disk.config because it was imported into RBD.#033[00m
Oct  7 10:34:03 np0005473739 kernel: tap13fb3ec4-00: entered promiscuous mode
Oct  7 10:34:03 np0005473739 NetworkManager[44949]: <info>  [1759847643.7051] manager: (tap13fb3ec4-00): new Tun device (/org/freedesktop/NetworkManager/Devices/443)
Oct  7 10:34:03 np0005473739 nova_compute[259550]: 2025-10-07 14:34:03.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:03Z|01092|binding|INFO|Claiming lport 13fb3ec4-0080-406c-8cea-6620016c0513 for this chassis.
Oct  7 10:34:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:03Z|01093|binding|INFO|13fb3ec4-0080-406c-8cea-6620016c0513: Claiming fa:16:3e:a7:48:df 10.100.0.13
Oct  7 10:34:03 np0005473739 systemd-udevd[376285]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:34:03 np0005473739 NetworkManager[44949]: <info>  [1759847643.7493] device (tap13fb3ec4-00): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:34:03 np0005473739 NetworkManager[44949]: <info>  [1759847643.7513] device (tap13fb3ec4-00): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:34:03 np0005473739 systemd-machined[214580]: New machine qemu-136-instance-0000006d.
Oct  7 10:34:03 np0005473739 nova_compute[259550]: 2025-10-07 14:34:03.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:03 np0005473739 systemd[1]: Started Virtual Machine qemu-136-instance-0000006d.
Oct  7 10:34:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:03Z|01094|binding|INFO|Setting lport 13fb3ec4-0080-406c-8cea-6620016c0513 ovn-installed in OVS
Oct  7 10:34:03 np0005473739 nova_compute[259550]: 2025-10-07 14:34:03.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:03Z|01095|binding|INFO|Setting lport 13fb3ec4-0080-406c-8cea-6620016c0513 up in Southbound
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.834 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:48:df 10.100.0.13'], port_security=['fa:16:3e:a7:48:df 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '109b6c3d-5810-4b7d-a96d-4e1b64dc2daa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'f7aecf35-a2bc-4233-a6ec-98effd9bc888', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf60a0a-2571-43f1-914f-0f00c5d191c5, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=13fb3ec4-0080-406c-8cea-6620016c0513) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.835 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 13fb3ec4-0080-406c-8cea-6620016c0513 in datapath 8e6e03c6-002b-464f-aea7-5ae708e3e5dc bound to our chassis#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.836 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8e6e03c6-002b-464f-aea7-5ae708e3e5dc#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.850 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d0be2b34-ccfa-48ec-b458-90df9f9ac3c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.852 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8e6e03c6-01 in ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.854 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8e6e03c6-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.854 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1a3fd6a-dbe2-4e65-a9ed-8b11777846b3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.855 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cd01057f-6fbe-4a15-a1f0-1684b42c9490]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.868 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[dc8a9793-d477-413c-b6fa-336023a1c3ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.897 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa066717-711e-4aa4-bf4c-138a70dc9a78]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.928 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[33c02fc2-1cb6-4c7a-893d-58c0a780046f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.935 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[af0eb095-4340-487c-9acd-4d8753b4fe11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:03 np0005473739 NetworkManager[44949]: <info>  [1759847643.9366] manager: (tap8e6e03c6-00): new Veth device (/org/freedesktop/NetworkManager/Devices/444)
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.971 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2af032f4-c91d-49e1-8179-191e4d0ff7ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.974 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2ad5a470-0b05-4ea2-930f-52abfaa8f987]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:03 np0005473739 nova_compute[259550]: 2025-10-07 14:34:03.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:34:03 np0005473739 NetworkManager[44949]: <info>  [1759847643.9941] device (tap8e6e03c6-00): carrier: link connected
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:03.999 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e3c49c91-01c2-4234-aa9b-3aa4781f68dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.017 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3577a473-4567-4e45-a36d-df1594d95730]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e6e03c6-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:49:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822755, 'reachable_time': 37564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376322, 'error': None, 'target': 'ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.033 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[85a29652-d4b3-4ab6-b0a6-7bd7821c1ded]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:49ff'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 822755, 'tstamp': 822755}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376323, 'error': None, 'target': 'ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.050 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7dd442af-4454-4816-b12b-a7954bb6b0d2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8e6e03c6-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:49:ff'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 317], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822755, 'reachable_time': 37564, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376324, 'error': None, 'target': 'ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.080 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[89b99713-bb82-48cc-b558-fb79c9c01ff3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.149 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[997a9a69-0ff3-4954-abee-b80935cb2f80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.152 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e6e03c6-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.152 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.153 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e6e03c6-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:04 np0005473739 kernel: tap8e6e03c6-00: entered promiscuous mode
Oct  7 10:34:04 np0005473739 NetworkManager[44949]: <info>  [1759847644.1556] manager: (tap8e6e03c6-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/445)
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.157 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8e6e03c6-00, col_values=(('external_ids', {'iface-id': '3f0bb527-e0e8-409d-b11f-0c1205bec67a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:04Z|01096|binding|INFO|Releasing lport 3f0bb527-e0e8-409d-b11f-0c1205bec67a from this chassis (sb_readonly=0)
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.174 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8e6e03c6-002b-464f-aea7-5ae708e3e5dc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8e6e03c6-002b-464f-aea7-5ae708e3e5dc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.176 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7350d25-e13f-4a3f-94fb-dc016af13db7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.177 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-8e6e03c6-002b-464f-aea7-5ae708e3e5dc
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/8e6e03c6-002b-464f-aea7-5ae708e3e5dc.pid.haproxy
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 8e6e03c6-002b-464f-aea7-5ae708e3e5dc
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:34:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:04.179 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'env', 'PROCESS_TAG=haproxy-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8e6e03c6-002b-464f-aea7-5ae708e3e5dc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:34:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2149: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.5 MiB/s wr, 125 op/s
Oct  7 10:34:04 np0005473739 podman[376398]: 2025-10-07 14:34:04.544255861 +0000 UTC m=+0.053094730 container create 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.593 2 DEBUG nova.network.neutron [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updated VIF entry in instance network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.593 2 DEBUG nova.network.neutron [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:04 np0005473739 systemd[1]: Started libpod-conmon-342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db.scope.
Oct  7 10:34:04 np0005473739 podman[376398]: 2025-10-07 14:34:04.516297444 +0000 UTC m=+0.025136333 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.618 2 DEBUG oslo_concurrency.lockutils [req-32e62629-1851-42be-91a0-7e29d1d0f9a5 req-2a39097c-2de7-4b4e-89e1-06144fe19469 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:34:04 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:34:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ce9d977b3319507425402d4f816a2d462fb60a6cead3d48e6a5a2ead545ba35/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:04 np0005473739 podman[376398]: 2025-10-07 14:34:04.650350086 +0000 UTC m=+0.159188985 container init 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:34:04 np0005473739 podman[376398]: 2025-10-07 14:34:04.657702232 +0000 UTC m=+0.166541101 container start 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  7 10:34:04 np0005473739 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [NOTICE]   (376417) : New worker (376419) forked
Oct  7 10:34:04 np0005473739 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [NOTICE]   (376417) : Loading success.
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.828 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847644.827904, 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.828 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] VM Started (Lifecycle Event)#033[00m
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.874 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.879 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847644.8287969, 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.880 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.924 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:04 np0005473739 nova_compute[259550]: 2025-10-07 14:34:04.926 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:34:05 np0005473739 nova_compute[259550]: 2025-10-07 14:34:05.027 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:34:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2150: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 78 op/s
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.429 2 DEBUG nova.compute.manager [req-e5383fa0-7f81-4113-b26b-c8728ac40110 req-1a664de8-08a0-4059-9a30-dc6b2e774d81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.430 2 DEBUG oslo_concurrency.lockutils [req-e5383fa0-7f81-4113-b26b-c8728ac40110 req-1a664de8-08a0-4059-9a30-dc6b2e774d81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.430 2 DEBUG oslo_concurrency.lockutils [req-e5383fa0-7f81-4113-b26b-c8728ac40110 req-1a664de8-08a0-4059-9a30-dc6b2e774d81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.430 2 DEBUG oslo_concurrency.lockutils [req-e5383fa0-7f81-4113-b26b-c8728ac40110 req-1a664de8-08a0-4059-9a30-dc6b2e774d81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.431 2 DEBUG nova.compute.manager [req-e5383fa0-7f81-4113-b26b-c8728ac40110 req-1a664de8-08a0-4059-9a30-dc6b2e774d81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Processing event network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.431 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.434 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847646.434416, 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.435 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.438 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.444 2 INFO nova.virt.libvirt.driver [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Instance spawned successfully.#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.445 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.490 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.496 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.500 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.501 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.501 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.502 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.502 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.503 2 DEBUG nova.virt.libvirt.driver [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.596 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.871 2 INFO nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Took 15.41 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.872 2 DEBUG nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:06 np0005473739 nova_compute[259550]: 2025-10-07 14:34:06.978 2 INFO nova.compute.manager [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Took 16.77 seconds to build instance.#033[00m
Oct  7 10:34:07 np0005473739 nova_compute[259550]: 2025-10-07 14:34:07.011 2 DEBUG oslo_concurrency.lockutils [None req-1694bcbf-8ead-4282-b78e-aa339ff5c4ee 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.876s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:07 np0005473739 nova_compute[259550]: 2025-10-07 14:34:07.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:07 np0005473739 nova_compute[259550]: 2025-10-07 14:34:07.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:08Z|01097|binding|INFO|Releasing lport fcf1dc99-f823-4d28-9074-21a7bced7a57 from this chassis (sb_readonly=0)
Oct  7 10:34:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:08Z|01098|binding|INFO|Releasing lport 3f0bb527-e0e8-409d-b11f-0c1205bec67a from this chassis (sb_readonly=0)
Oct  7 10:34:08 np0005473739 nova_compute[259550]: 2025-10-07 14:34:08.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:08 np0005473739 NetworkManager[44949]: <info>  [1759847648.1765] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/446)
Oct  7 10:34:08 np0005473739 NetworkManager[44949]: <info>  [1759847648.1783] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/447)
Oct  7 10:34:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:08Z|01099|binding|INFO|Releasing lport fcf1dc99-f823-4d28-9074-21a7bced7a57 from this chassis (sb_readonly=0)
Oct  7 10:34:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:08Z|01100|binding|INFO|Releasing lport 3f0bb527-e0e8-409d-b11f-0c1205bec67a from this chassis (sb_readonly=0)
Oct  7 10:34:08 np0005473739 nova_compute[259550]: 2025-10-07 14:34:08.198 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:08 np0005473739 nova_compute[259550]: 2025-10-07 14:34:08.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2151: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct  7 10:34:08 np0005473739 nova_compute[259550]: 2025-10-07 14:34:08.605 2 DEBUG nova.compute.manager [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:08 np0005473739 nova_compute[259550]: 2025-10-07 14:34:08.606 2 DEBUG oslo_concurrency.lockutils [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:08 np0005473739 nova_compute[259550]: 2025-10-07 14:34:08.606 2 DEBUG oslo_concurrency.lockutils [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:08 np0005473739 nova_compute[259550]: 2025-10-07 14:34:08.608 2 DEBUG oslo_concurrency.lockutils [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:08 np0005473739 nova_compute[259550]: 2025-10-07 14:34:08.610 2 DEBUG nova.compute.manager [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] No waiting events found dispatching network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:34:08 np0005473739 nova_compute[259550]: 2025-10-07 14:34:08.610 2 WARNING nova.compute.manager [req-a913345d-14f6-469f-9e62-fb249bb5dd31 req-5d01109c-82ab-49e6-ac37-f4422b75a13a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received unexpected event network-vif-plugged-13fb3ec4-0080-406c-8cea-6620016c0513 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:34:09 np0005473739 podman[376429]: 2025-10-07 14:34:09.065002927 +0000 UTC m=+0.054595750 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  7 10:34:09 np0005473739 podman[376430]: 2025-10-07 14:34:09.110442901 +0000 UTC m=+0.097790874 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:34:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2152: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 25 KiB/s wr, 103 op/s
Oct  7 10:34:11 np0005473739 nova_compute[259550]: 2025-10-07 14:34:11.560 2 DEBUG nova.compute.manager [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:11 np0005473739 nova_compute[259550]: 2025-10-07 14:34:11.561 2 DEBUG nova.compute.manager [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing instance network info cache due to event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:34:11 np0005473739 nova_compute[259550]: 2025-10-07 14:34:11.562 2 DEBUG oslo_concurrency.lockutils [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:34:11 np0005473739 nova_compute[259550]: 2025-10-07 14:34:11.562 2 DEBUG oslo_concurrency.lockutils [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:34:11 np0005473739 nova_compute[259550]: 2025-10-07 14:34:11.562 2 DEBUG nova.network.neutron [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:34:12 np0005473739 nova_compute[259550]: 2025-10-07 14:34:12.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:12 np0005473739 nova_compute[259550]: 2025-10-07 14:34:12.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2153: 305 pgs: 305 active+clean; 134 MiB data, 792 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 13 KiB/s wr, 144 op/s
Oct  7 10:34:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:13 np0005473739 nova_compute[259550]: 2025-10-07 14:34:13.121 2 DEBUG nova.network.neutron [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updated VIF entry in instance network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:34:13 np0005473739 nova_compute[259550]: 2025-10-07 14:34:13.122 2 DEBUG nova.network.neutron [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:13 np0005473739 nova_compute[259550]: 2025-10-07 14:34:13.143 2 DEBUG oslo_concurrency.lockutils [req-1f9f7554-ac49-4c77-bc5a-aeb1c629b270 req-ce1cdbf9-d52f-4792-a777-668b5055508c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:34:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:13Z|00116|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:91:21 10.100.0.14
Oct  7 10:34:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:13Z|00117|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:91:21 10.100.0.14
Oct  7 10:34:13 np0005473739 nova_compute[259550]: 2025-10-07 14:34:13.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:13 np0005473739 nova_compute[259550]: 2025-10-07 14:34:13.822 2 DEBUG nova.compute.manager [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:13 np0005473739 nova_compute[259550]: 2025-10-07 14:34:13.823 2 DEBUG nova.compute.manager [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing instance network info cache due to event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:34:13 np0005473739 nova_compute[259550]: 2025-10-07 14:34:13.823 2 DEBUG oslo_concurrency.lockutils [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:34:13 np0005473739 nova_compute[259550]: 2025-10-07 14:34:13.824 2 DEBUG oslo_concurrency.lockutils [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:34:13 np0005473739 nova_compute[259550]: 2025-10-07 14:34:13.824 2 DEBUG nova.network.neutron [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:34:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2154: 305 pgs: 305 active+clean; 153 MiB data, 814 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 1.8 MiB/s wr, 184 op/s
Oct  7 10:34:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2155: 305 pgs: 305 active+clean; 167 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.1 MiB/s wr, 139 op/s
Oct  7 10:34:16 np0005473739 nova_compute[259550]: 2025-10-07 14:34:16.275 2 DEBUG nova.network.neutron [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updated VIF entry in instance network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:34:16 np0005473739 nova_compute[259550]: 2025-10-07 14:34:16.276 2 DEBUG nova.network.neutron [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:16 np0005473739 nova_compute[259550]: 2025-10-07 14:34:16.356 2 DEBUG oslo_concurrency.lockutils [req-24e07542-9fec-4ac1-a8ea-5ca588cbc6bd req-2de2193f-27ab-4b70-8ebf-f61a744a9dfb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:34:17 np0005473739 nova_compute[259550]: 2025-10-07 14:34:17.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:17 np0005473739 nova_compute[259550]: 2025-10-07 14:34:17.245 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2156: 305 pgs: 305 active+clean; 167 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct  7 10:34:19 np0005473739 nova_compute[259550]: 2025-10-07 14:34:19.055 2 INFO nova.compute.manager [None req-d8b16fbf-61ea-4af6-8d5e-54be8c0ae1d4 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Get console output#033[00m
Oct  7 10:34:19 np0005473739 nova_compute[259550]: 2025-10-07 14:34:19.060 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:34:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:19Z|00118|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a7:48:df 10.100.0.13
Oct  7 10:34:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:19Z|00119|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a7:48:df 10.100.0.13
Oct  7 10:34:19 np0005473739 nova_compute[259550]: 2025-10-07 14:34:19.540 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:19 np0005473739 nova_compute[259550]: 2025-10-07 14:34:19.540 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:19 np0005473739 nova_compute[259550]: 2025-10-07 14:34:19.541 2 INFO nova.compute.manager [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Rebooting instance#033[00m
Oct  7 10:34:19 np0005473739 nova_compute[259550]: 2025-10-07 14:34:19.567 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:34:19 np0005473739 nova_compute[259550]: 2025-10-07 14:34:19.568 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:34:19 np0005473739 nova_compute[259550]: 2025-10-07 14:34:19.568 2 DEBUG nova.network.neutron [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:34:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2157: 305 pgs: 305 active+clean; 180 MiB data, 831 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.4 MiB/s wr, 176 op/s
Oct  7 10:34:20 np0005473739 nova_compute[259550]: 2025-10-07 14:34:20.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:22 np0005473739 nova_compute[259550]: 2025-10-07 14:34:22.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:22 np0005473739 nova_compute[259550]: 2025-10-07 14:34:22.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2158: 305 pgs: 305 active+clean; 196 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 4.3 MiB/s wr, 168 op/s
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:34:22
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'volumes', 'default.rgw.log', 'vms', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', '.rgw.root', 'backups', 'cephfs.cephfs.meta']
Oct  7 10:34:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:34:22 np0005473739 nova_compute[259550]: 2025-10-07 14:34:22.892 2 DEBUG nova.network.neutron [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:22 np0005473739 nova_compute[259550]: 2025-10-07 14:34:22.942 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:34:22 np0005473739 nova_compute[259550]: 2025-10-07 14:34:22.943 2 DEBUG nova.compute.manager [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:34:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:34:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:34:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:34:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:34:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:34:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:34:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:34:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:34:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:34:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2159: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 651 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct  7 10:34:25 np0005473739 kernel: tapea75e1fb-47 (unregistering): left promiscuous mode
Oct  7 10:34:25 np0005473739 NetworkManager[44949]: <info>  [1759847665.8778] device (tapea75e1fb-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:34:25 np0005473739 nova_compute[259550]: 2025-10-07 14:34:25.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:25Z|01101|binding|INFO|Releasing lport ea75e1fb-474b-484e-9a52-73939d17da1b from this chassis (sb_readonly=0)
Oct  7 10:34:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:25Z|01102|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b down in Southbound
Oct  7 10:34:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:25Z|01103|binding|INFO|Removing iface tapea75e1fb-47 ovn-installed in OVS
Oct  7 10:34:25 np0005473739 nova_compute[259550]: 2025-10-07 14:34:25.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:25 np0005473739 nova_compute[259550]: 2025-10-07 14:34:25.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:25 np0005473739 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct  7 10:34:25 np0005473739 systemd[1]: machine-qemu\x2d135\x2dinstance\x2d0000006c.scope: Consumed 13.945s CPU time.
Oct  7 10:34:25 np0005473739 systemd-machined[214580]: Machine qemu-135-instance-0000006c terminated.
Oct  7 10:34:25 np0005473739 podman[376472]: 2025-10-07 14:34:25.972155304 +0000 UTC m=+0.071625565 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:34:26 np0005473739 podman[376475]: 2025-10-07 14:34:26.003554313 +0000 UTC m=+0.102738957 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.002 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:91:21 10.100.0.14'], port_security=['fa:16:3e:57:91:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37af7aab-7feb-47ed-a12c-8267b21bb6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a239abd8-f455-4712-a289-4273b897ab8d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ea75e1fb-474b-484e-9a52-73939d17da1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.004 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ea75e1fb-474b-484e-9a52-73939d17da1b in datapath 256ae6d5-60ea-4d02-8c43-2f55874712c6 unbound from our chassis#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.006 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 256ae6d5-60ea-4d02-8c43-2f55874712c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.007 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf3ce05-dd5d-4fe0-952c-12820400bc74]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.008 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 namespace which is not needed anymore#033[00m
Oct  7 10:34:26 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [NOTICE]   (376140) : haproxy version is 2.8.14-c23fe91
Oct  7 10:34:26 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [NOTICE]   (376140) : path to executable is /usr/sbin/haproxy
Oct  7 10:34:26 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [WARNING]  (376140) : Exiting Master process...
Oct  7 10:34:26 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [ALERT]    (376140) : Current worker (376142) exited with code 143 (Terminated)
Oct  7 10:34:26 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376135]: [WARNING]  (376140) : All workers exited. Exiting... (0)
Oct  7 10:34:26 np0005473739 systemd[1]: libpod-3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f.scope: Deactivated successfully.
Oct  7 10:34:26 np0005473739 podman[376533]: 2025-10-07 14:34:26.137980075 +0000 UTC m=+0.046728360 container died 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:34:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f-userdata-shm.mount: Deactivated successfully.
Oct  7 10:34:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3cca578f989b6450d529d2e7b4f782d731435dda711704c70dbd7335068f20d2-merged.mount: Deactivated successfully.
Oct  7 10:34:26 np0005473739 podman[376533]: 2025-10-07 14:34:26.194390072 +0000 UTC m=+0.103138337 container cleanup 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:34:26 np0005473739 systemd[1]: libpod-conmon-3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f.scope: Deactivated successfully.
Oct  7 10:34:26 np0005473739 podman[376573]: 2025-10-07 14:34:26.255043633 +0000 UTC m=+0.040395831 container remove 3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.262 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23e44a7c-cdd7-4e56-9751-e155fe8ec9da]: (4, ('Tue Oct  7 02:34:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 (3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f)\n3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f\nTue Oct  7 02:34:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 (3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f)\n3435c2c4b92e607e26ca449a4bd5101dec0fa008a1bf63f046d4b3b46b62609f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.263 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[026a86e0-34a8-470a-9e5d-53e3ff64f38f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.264 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap256ae6d5-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:26 np0005473739 kernel: tap256ae6d5-60: left promiscuous mode
Oct  7 10:34:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2160: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 2.5 MiB/s wr, 73 op/s
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.291 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4b277ba3-e92e-4927-bbc2-bb13f655fd9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.322 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[316d32c0-5f3f-4699-88d8-75323965f207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.323 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d3335b2d-c4dd-485f-af45-a30a5c407e8c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.339 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad07cdac-4eda-4bb4-90dc-f27186d010aa]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822196, 'reachable_time': 39177, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376592, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.342 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.342 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c8191e19-6e36-49ce-b3b6-bb9aeedcafcf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 systemd[1]: run-netns-ovnmeta\x2d256ae6d5\x2d60ea\x2d4d02\x2d8c43\x2d2f55874712c6.mount: Deactivated successfully.
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.488 2 INFO nova.virt.libvirt.driver [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance shutdown successfully.#033[00m
Oct  7 10:34:26 np0005473739 kernel: tapea75e1fb-47: entered promiscuous mode
Oct  7 10:34:26 np0005473739 systemd-udevd[376496]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:34:26 np0005473739 NetworkManager[44949]: <info>  [1759847666.5430] manager: (tapea75e1fb-47): new Tun device (/org/freedesktop/NetworkManager/Devices/448)
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.545 2 INFO nova.compute.manager [None req-f8281924-01f4-4df5-ae8a-b1ace5b13103 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Get console output#033[00m
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:26Z|01104|binding|INFO|Claiming lport ea75e1fb-474b-484e-9a52-73939d17da1b for this chassis.
Oct  7 10:34:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:26Z|01105|binding|INFO|ea75e1fb-474b-484e-9a52-73939d17da1b: Claiming fa:16:3e:57:91:21 10.100.0.14
Oct  7 10:34:26 np0005473739 NetworkManager[44949]: <info>  [1759847666.5539] device (tapea75e1fb-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:34:26 np0005473739 NetworkManager[44949]: <info>  [1759847666.5546] device (tapea75e1fb-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.556 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:34:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:26Z|01106|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b ovn-installed in OVS
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:26 np0005473739 systemd-machined[214580]: New machine qemu-137-instance-0000006c.
Oct  7 10:34:26 np0005473739 systemd[1]: Started Virtual Machine qemu-137-instance-0000006c.
Oct  7 10:34:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:26Z|01107|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b up in Southbound
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.616 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:91:21 10.100.0.14'], port_security=['fa:16:3e:57:91:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '37af7aab-7feb-47ed-a12c-8267b21bb6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.186'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a239abd8-f455-4712-a289-4273b897ab8d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ea75e1fb-474b-484e-9a52-73939d17da1b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.618 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ea75e1fb-474b-484e-9a52-73939d17da1b in datapath 256ae6d5-60ea-4d02-8c43-2f55874712c6 bound to our chassis#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.620 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 256ae6d5-60ea-4d02-8c43-2f55874712c6#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.636 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bc932f76-f444-4981-ab4b-8d978c84a339]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.637 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap256ae6d5-61 in ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.638 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap256ae6d5-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.639 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c5cda4a-c94c-454c-8e65-75b36d8384d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.640 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[daeab1da-318d-4bb9-9798-55385d0a0cdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.658 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[58805da6-a281-44c4-b38e-f641622c2f99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[98994e63-3b3e-463f-8cd5-d0e67af4e89a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.706 2 DEBUG nova.compute.manager [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.707 2 DEBUG oslo_concurrency.lockutils [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.707 2 DEBUG oslo_concurrency.lockutils [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.707 2 DEBUG oslo_concurrency.lockutils [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.707 2 DEBUG nova.compute.manager [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.708 2 WARNING nova.compute.manager [req-a9abfe04-9a50-42f6-b59d-c06f77b064c2 req-23e3b025-6e19-4156-948e-b854397cfbb6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state reboot_started.#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.712 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[77298a6d-2e02-4ada-8a78-e4fc3e1caae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.718 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf052e8b-9aaf-4a15-a3aa-37de77724813]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 NetworkManager[44949]: <info>  [1759847666.7202] manager: (tap256ae6d5-60): new Veth device (/org/freedesktop/NetworkManager/Devices/449)
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.755 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2b328f04-1134-4eee-9654-1051cdb61c1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.759 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[07c16194-9584-41f4-8006-07ace51c2d19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 NetworkManager[44949]: <info>  [1759847666.7912] device (tap256ae6d5-60): carrier: link connected
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.797 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6b1518c1-4a26-4c6c-a1b0-4d11494e35ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.815 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42422fa3-c9b4-4501-9057-cebc34d90709]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap256ae6d5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:7e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825035, 'reachable_time': 38957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 376656, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.834 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95ac88b6-3877-4b93-b0d3-08bca70ed482]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe05:7e20'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 825035, 'tstamp': 825035}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 376674, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.856 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20b8bcc3-9c14-4b1c-85a4-bd8ca4e40abc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap256ae6d5-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:05:7e:20'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 320], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825035, 'reachable_time': 38957, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 376676, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.897 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a176c53a-0ae3-4c1e-af3e-1ee46f56c06a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.973 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee40e848-2926-4651-8edd-4751d5363c7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.975 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap256ae6d5-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.975 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.976 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap256ae6d5-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:26 np0005473739 kernel: tap256ae6d5-60: entered promiscuous mode
Oct  7 10:34:26 np0005473739 NetworkManager[44949]: <info>  [1759847666.9795] manager: (tap256ae6d5-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/450)
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:26.983 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap256ae6d5-60, col_values=(('external_ids', {'iface-id': 'fcf1dc99-f823-4d28-9074-21a7bced7a57'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:26 np0005473739 nova_compute[259550]: 2025-10-07 14:34:26.984 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:26Z|01108|binding|INFO|Releasing lport fcf1dc99-f823-4d28-9074-21a7bced7a57 from this chassis (sb_readonly=0)
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:27.003 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:27.004 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a32c1930-e546-4aeb-a406-98dae6808657]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:27.005 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-256ae6d5-60ea-4d02-8c43-2f55874712c6
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/256ae6d5-60ea-4d02-8c43-2f55874712c6.pid.haproxy
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 256ae6d5-60ea-4d02-8c43-2f55874712c6
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:34:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:27.013 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'env', 'PROCESS_TAG=haproxy-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/256ae6d5-60ea-4d02-8c43-2f55874712c6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.378 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.379 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847667.3778539, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.379 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.384 2 INFO nova.virt.libvirt.driver [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance running successfully.#033[00m
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.385 2 INFO nova.virt.libvirt.driver [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance soft rebooted successfully.#033[00m
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.385 2 DEBUG nova.compute.manager [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:27 np0005473739 podman[376713]: 2025-10-07 14:34:27.449066808 +0000 UTC m=+0.058385902 container create c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 10:34:27 np0005473739 systemd[1]: Started libpod-conmon-c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac.scope.
Oct  7 10:34:27 np0005473739 podman[376713]: 2025-10-07 14:34:27.416111727 +0000 UTC m=+0.025430811 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.525 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.529 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:34:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:34:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8ab2a485858f7d7c59681ecb143d683598d5f4b822096692bfe1648f09b1a197/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:27 np0005473739 podman[376713]: 2025-10-07 14:34:27.584681201 +0000 UTC m=+0.194000305 container init c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:34:27 np0005473739 podman[376713]: 2025-10-07 14:34:27.592448118 +0000 UTC m=+0.201767192 container start c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:34:27 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [NOTICE]   (376732) : New worker (376734) forked
Oct  7 10:34:27 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [NOTICE]   (376732) : Loading success.
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.846 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847667.3782148, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:34:27 np0005473739 nova_compute[259550]: 2025-10-07 14:34:27.847 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Started (Lifecycle Event)
Oct  7 10:34:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.205 2 DEBUG oslo_concurrency.lockutils [None req-f3711e0a-380f-4335-a993-60faa6e49b3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 8.665s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:34:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2161: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.296 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.301 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.830 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.831 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.832 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.832 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.833 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.833 2 WARNING nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state None.
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.833 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.835 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.836 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.837 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.837 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.838 2 WARNING nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state None.
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.839 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.839 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.840 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.841 2 DEBUG oslo_concurrency.lockutils [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.842 2 DEBUG nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:34:28 np0005473739 nova_compute[259550]: 2025-10-07 14:34:28.842 2 WARNING nova.compute.manager [req-57dfa3c0-1aca-40e6-9bef-bfc4834063cf req-30c82f03-2b2f-430c-ad5c-fdc3b6e33c39 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state None.
Oct  7 10:34:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2162: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 2.2 MiB/s wr, 120 op/s
Oct  7 10:34:30 np0005473739 nova_compute[259550]: 2025-10-07 14:34:30.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:34:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:31.352 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=34, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=33) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:34:31 np0005473739 nova_compute[259550]: 2025-10-07 14:34:31.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:34:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:31.354 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  7 10:34:32 np0005473739 nova_compute[259550]: 2025-10-07 14:34:32.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:34:32 np0005473739 nova_compute[259550]: 2025-10-07 14:34:32.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2163: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 985 KiB/s wr, 95 op/s
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015193727819561111 of space, bias 1.0, pg target 0.45581183458683333 quantized to 32 (current 32)
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:34:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:34:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:34:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334373561' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:34:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:34:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/334373561' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:34:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:33.355 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '34'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:34:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2164: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 41 KiB/s wr, 75 op/s
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.220 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.221 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.241 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.310 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.310 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.317 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.318 2 INFO nova.compute.claims [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.454 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:34:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:34:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/291774645' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.944 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.950 2 DEBUG nova.compute.provider_tree [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.967 2 DEBUG nova.scheduler.client.report [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.988 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:34:35 np0005473739 nova_compute[259550]: 2025-10-07 14:34:35.988 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.033 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.034 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.054 2 INFO nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.069 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.153 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.154 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.155 2 INFO nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Creating image(s)
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.176 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.199 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.225 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.229 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.263 2 DEBUG nova.policy [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:34:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2165: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 36 KiB/s wr, 73 op/s
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.303 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.304 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.305 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.305 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.326 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.331 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 df052dd5-fecd-4dd3-be36-4becc3f9f318_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.667 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 df052dd5-fecd-4dd3-be36-4becc3f9f318_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.743 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.887 2 DEBUG nova.objects.instance [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid df052dd5-fecd-4dd3-be36-4becc3f9f318 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.905 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.906 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Ensure instance console log exists: /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.907 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.908 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:36 np0005473739 nova_compute[259550]: 2025-10-07 14:34:36.909 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:34:37 np0005473739 nova_compute[259550]: 2025-10-07 14:34:37.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:34:37 np0005473739 nova_compute[259550]: 2025-10-07 14:34:37.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:34:37 np0005473739 nova_compute[259550]: 2025-10-07 14:34:37.719 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Successfully created port: 72db4fd3-8171-42af-9801-69a061614ccc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:34:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:34:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:34:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:34:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2166: 305 pgs: 305 active+clean; 200 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 24 KiB/s wr, 71 op/s
Oct  7 10:34:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:34:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:38Z|00120|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:91:21 10.100.0.14
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:34:39 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a996b5e8-9450-48fa-9bd2-68eaa57faeb3 does not exist
Oct  7 10:34:39 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 60d9a18e-7b4c-425f-9b0b-4c2ecc1a68fc does not exist
Oct  7 10:34:39 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c6224891-a622-4004-beed-d992f66da967 does not exist
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:34:39 np0005473739 podman[377208]: 2025-10-07 14:34:39.275886965 +0000 UTC m=+0.064177566 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:34:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:34:39 np0005473739 podman[377209]: 2025-10-07 14:34:39.333703039 +0000 UTC m=+0.122096923 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:34:39 np0005473739 nova_compute[259550]: 2025-10-07 14:34:39.516 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Successfully created port: 87dbfe27-9436-4e21-a648-df77ddfec6ca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:34:39 np0005473739 podman[377368]: 2025-10-07 14:34:39.753814165 +0000 UTC m=+0.045422684 container create 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:34:39 np0005473739 systemd[1]: Started libpod-conmon-26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425.scope.
Oct  7 10:34:39 np0005473739 podman[377368]: 2025-10-07 14:34:39.729894546 +0000 UTC m=+0.021503095 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:34:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:34:39 np0005473739 podman[377368]: 2025-10-07 14:34:39.851212988 +0000 UTC m=+0.142821547 container init 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:34:39 np0005473739 podman[377368]: 2025-10-07 14:34:39.861052781 +0000 UTC m=+0.152661300 container start 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:34:39 np0005473739 zen_jepsen[377384]: 167 167
Oct  7 10:34:39 np0005473739 podman[377368]: 2025-10-07 14:34:39.866860716 +0000 UTC m=+0.158469235 container attach 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 10:34:39 np0005473739 systemd[1]: libpod-26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425.scope: Deactivated successfully.
Oct  7 10:34:39 np0005473739 podman[377368]: 2025-10-07 14:34:39.869008843 +0000 UTC m=+0.160617392 container died 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:34:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-abfb868c2bd6b450ccbea031f7918d119a5b3c656fdbbbb302c02d4b8b271b9f-merged.mount: Deactivated successfully.
Oct  7 10:34:39 np0005473739 nova_compute[259550]: 2025-10-07 14:34:39.900 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:34:39 np0005473739 nova_compute[259550]: 2025-10-07 14:34:39.902 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:39 np0005473739 podman[377368]: 2025-10-07 14:34:39.910962314 +0000 UTC m=+0.202570853 container remove 26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 10:34:39 np0005473739 systemd[1]: libpod-conmon-26e925d90a977404c1de300a42b13bb1a87dc6eeb5dd4589ec619de8bede6425.scope: Deactivated successfully.
Oct  7 10:34:39 np0005473739 nova_compute[259550]: 2025-10-07 14:34:39.929 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:34:40 np0005473739 nova_compute[259550]: 2025-10-07 14:34:40.018 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:34:40 np0005473739 nova_compute[259550]: 2025-10-07 14:34:40.019 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:40 np0005473739 nova_compute[259550]: 2025-10-07 14:34:40.028 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:34:40 np0005473739 nova_compute[259550]: 2025-10-07 14:34:40.028 2 INFO nova.compute.claims [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:34:40 np0005473739 podman[377408]: 2025-10-07 14:34:40.112249072 +0000 UTC m=+0.051599110 container create 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef)
Oct  7 10:34:40 np0005473739 systemd[1]: Started libpod-conmon-1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6.scope.
Oct  7 10:34:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:34:40 np0005473739 podman[377408]: 2025-10-07 14:34:40.090013028 +0000 UTC m=+0.029363096 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:34:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:40 np0005473739 podman[377408]: 2025-10-07 14:34:40.204899238 +0000 UTC m=+0.144249306 container init 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:34:40 np0005473739 podman[377408]: 2025-10-07 14:34:40.21360428 +0000 UTC m=+0.152954328 container start 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:34:40 np0005473739 podman[377408]: 2025-10-07 14:34:40.218020639 +0000 UTC m=+0.157370697 container attach 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 10:34:40 np0005473739 nova_compute[259550]: 2025-10-07 14:34:40.226 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2167: 305 pgs: 305 active+clean; 233 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 1.2 MiB/s wr, 129 op/s
Oct  7 10:34:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:34:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/34419485' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:34:40 np0005473739 nova_compute[259550]: 2025-10-07 14:34:40.682 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:40 np0005473739 nova_compute[259550]: 2025-10-07 14:34:40.689 2 DEBUG nova.compute.provider_tree [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:34:40 np0005473739 nova_compute[259550]: 2025-10-07 14:34:40.857 2 DEBUG nova.scheduler.client.report [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.200 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.201 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:34:41 np0005473739 wizardly_shamir[377424]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:34:41 np0005473739 wizardly_shamir[377424]: --> relative data size: 1.0
Oct  7 10:34:41 np0005473739 wizardly_shamir[377424]: --> All data devices are unavailable
Oct  7 10:34:41 np0005473739 systemd[1]: libpod-1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6.scope: Deactivated successfully.
Oct  7 10:34:41 np0005473739 systemd[1]: libpod-1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6.scope: Consumed 1.009s CPU time.
Oct  7 10:34:41 np0005473739 podman[377408]: 2025-10-07 14:34:41.297911134 +0000 UTC m=+1.237261182 container died 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:34:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f806f3f7e30aacc4f6fcc222ec0e87cd3a25eeddc829d77ded208bd60ba43889-merged.mount: Deactivated successfully.
Oct  7 10:34:41 np0005473739 podman[377408]: 2025-10-07 14:34:41.366300231 +0000 UTC m=+1.305650279 container remove 1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_shamir, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 10:34:41 np0005473739 systemd[1]: libpod-conmon-1eb547fbbdc840487b8695fc2d93d0e62b7e363a451ae9c58181604d985fbce6.scope: Deactivated successfully.
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.526 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.527 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.549 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Successfully updated port: 72db4fd3-8171-42af-9801-69a061614ccc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.581 2 INFO nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.688 2 DEBUG nova.compute.manager [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-changed-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.688 2 DEBUG nova.compute.manager [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing instance network info cache due to event network-changed-72db4fd3-8171-42af-9801-69a061614ccc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.688 2 DEBUG oslo_concurrency.lockutils [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.688 2 DEBUG oslo_concurrency.lockutils [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.688 2 DEBUG nova.network.neutron [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing network info cache for port 72db4fd3-8171-42af-9801-69a061614ccc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.690 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.725 2 DEBUG nova.policy [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.956 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.957 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.958 2 INFO nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Creating image(s)#033[00m
Oct  7 10:34:41 np0005473739 podman[377629]: 2025-10-07 14:34:41.962204814 +0000 UTC m=+0.039310442 container create eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:34:41 np0005473739 nova_compute[259550]: 2025-10-07 14:34:41.983 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:42 np0005473739 systemd[1]: Started libpod-conmon-eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b.scope.
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.007 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:42 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.033 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.037 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:42 np0005473739 podman[377629]: 2025-10-07 14:34:42.039786467 +0000 UTC m=+0.116892125 container init eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:34:42 np0005473739 podman[377629]: 2025-10-07 14:34:41.945568249 +0000 UTC m=+0.022673897 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:34:42 np0005473739 podman[377629]: 2025-10-07 14:34:42.04851111 +0000 UTC m=+0.125616738 container start eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:34:42 np0005473739 hungry_gould[377678]: 167 167
Oct  7 10:34:42 np0005473739 podman[377629]: 2025-10-07 14:34:42.055812585 +0000 UTC m=+0.132918243 container attach eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:34:42 np0005473739 systemd[1]: libpod-eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b.scope: Deactivated successfully.
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.071 2 DEBUG nova.network.neutron [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:34:42 np0005473739 podman[377705]: 2025-10-07 14:34:42.094286544 +0000 UTC m=+0.022026070 container died eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.110 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.111 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.111 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.112 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:42 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e303d867e3813a6f97d7cf0d8fccda86f4abc663f81f781f7c635267279a63d5-merged.mount: Deactivated successfully.
Oct  7 10:34:42 np0005473739 podman[377705]: 2025-10-07 14:34:42.134338543 +0000 UTC m=+0.062078049 container remove eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_gould, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.135 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:42 np0005473739 systemd[1]: libpod-conmon-eda200767d94f3c5e9f5b47ff37f4b62f5cb2f077d0d5c59448735ecd206ae1b.scope: Deactivated successfully.
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.144 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a5f57fca-2121-432c-b000-2fd92f5c1b12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2168: 305 pgs: 305 active+clean; 246 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 991 KiB/s rd, 1.8 MiB/s wr, 87 op/s
Oct  7 10:34:42 np0005473739 podman[377764]: 2025-10-07 14:34:42.330178286 +0000 UTC m=+0.057349753 container create 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:34:42 np0005473739 systemd[1]: Started libpod-conmon-34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585.scope.
Oct  7 10:34:42 np0005473739 podman[377764]: 2025-10-07 14:34:42.299398904 +0000 UTC m=+0.026570401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:34:42 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:34:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a3fb5773ab16613e869de286957cf0e4779bf2df7bf5606a0cc6b7169a17ad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a3fb5773ab16613e869de286957cf0e4779bf2df7bf5606a0cc6b7169a17ad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a3fb5773ab16613e869de286957cf0e4779bf2df7bf5606a0cc6b7169a17ad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90a3fb5773ab16613e869de286957cf0e4779bf2df7bf5606a0cc6b7169a17ad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:42 np0005473739 podman[377764]: 2025-10-07 14:34:42.442490607 +0000 UTC m=+0.169662114 container init 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:34:42 np0005473739 podman[377764]: 2025-10-07 14:34:42.45006449 +0000 UTC m=+0.177235957 container start 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:34:42 np0005473739 podman[377764]: 2025-10-07 14:34:42.456946964 +0000 UTC m=+0.184118471 container attach 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.492 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a5f57fca-2121-432c-b000-2fd92f5c1b12_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.348s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.548 2 DEBUG nova.network.neutron [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.556 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.670 2 DEBUG nova.objects.instance [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid a5f57fca-2121-432c-b000-2fd92f5c1b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.711 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Successfully updated port: 87dbfe27-9436-4e21-a648-df77ddfec6ca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.766 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.766 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Ensure instance console log exists: /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.767 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.767 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.767 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.770 2 DEBUG oslo_concurrency.lockutils [req-2726bb49-28fd-49f8-9371-7ab4f8482a44 req-99accf3b-a685-4113-b42f-481bcacee65d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.772 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.772 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:34:42 np0005473739 nova_compute[259550]: 2025-10-07 14:34:42.772 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:34:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:43 np0005473739 nova_compute[259550]: 2025-10-07 14:34:43.056 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:34:43 np0005473739 zealous_turing[377783]: {
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:    "0": [
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:        {
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "devices": [
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "/dev/loop3"
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            ],
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_name": "ceph_lv0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_size": "21470642176",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "name": "ceph_lv0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "tags": {
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.cluster_name": "ceph",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.crush_device_class": "",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.encrypted": "0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.osd_id": "0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.type": "block",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.vdo": "0"
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            },
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "type": "block",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "vg_name": "ceph_vg0"
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:        }
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:    ],
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:    "1": [
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:        {
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "devices": [
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "/dev/loop4"
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            ],
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_name": "ceph_lv1",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_size": "21470642176",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "name": "ceph_lv1",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "tags": {
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.cluster_name": "ceph",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.crush_device_class": "",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.encrypted": "0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.osd_id": "1",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.type": "block",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.vdo": "0"
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            },
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "type": "block",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "vg_name": "ceph_vg1"
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:        }
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:    ],
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:    "2": [
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:        {
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "devices": [
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "/dev/loop5"
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            ],
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_name": "ceph_lv2",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_size": "21470642176",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "name": "ceph_lv2",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "tags": {
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.cluster_name": "ceph",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.crush_device_class": "",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.encrypted": "0",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.osd_id": "2",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.type": "block",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:                "ceph.vdo": "0"
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            },
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "type": "block",
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:            "vg_name": "ceph_vg2"
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:        }
Oct  7 10:34:43 np0005473739 zealous_turing[377783]:    ]
Oct  7 10:34:43 np0005473739 zealous_turing[377783]: }
Oct  7 10:34:43 np0005473739 systemd[1]: libpod-34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585.scope: Deactivated successfully.
Oct  7 10:34:43 np0005473739 podman[377764]: 2025-10-07 14:34:43.345791554 +0000 UTC m=+1.072963041 container died 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 10:34:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-90a3fb5773ab16613e869de286957cf0e4779bf2df7bf5606a0cc6b7169a17ad-merged.mount: Deactivated successfully.
Oct  7 10:34:43 np0005473739 podman[377764]: 2025-10-07 14:34:43.436754025 +0000 UTC m=+1.163925502 container remove 34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_turing, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 10:34:43 np0005473739 systemd[1]: libpod-conmon-34c94ffd3eb1c85715b1ef34fb5b5a52bf4e610e005d248ac1f76c3d8c6d8585.scope: Deactivated successfully.
Oct  7 10:34:44 np0005473739 podman[378014]: 2025-10-07 14:34:44.075871382 +0000 UTC m=+0.055199396 container create 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:34:44 np0005473739 systemd[1]: Started libpod-conmon-73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db.scope.
Oct  7 10:34:44 np0005473739 podman[378014]: 2025-10-07 14:34:44.045310265 +0000 UTC m=+0.024638309 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:34:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:34:44 np0005473739 podman[378014]: 2025-10-07 14:34:44.160451002 +0000 UTC m=+0.139779036 container init 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 10:34:44 np0005473739 podman[378014]: 2025-10-07 14:34:44.168040175 +0000 UTC m=+0.147368189 container start 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:34:44 np0005473739 hopeful_gagarin[378030]: 167 167
Oct  7 10:34:44 np0005473739 systemd[1]: libpod-73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db.scope: Deactivated successfully.
Oct  7 10:34:44 np0005473739 podman[378014]: 2025-10-07 14:34:44.174143518 +0000 UTC m=+0.153471532 container attach 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:34:44 np0005473739 podman[378014]: 2025-10-07 14:34:44.175203606 +0000 UTC m=+0.154531650 container died 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 10:34:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c3548046edebff1c733161ca13c43a812db3e57abb70eeb763a7d39a9192e1b7-merged.mount: Deactivated successfully.
Oct  7 10:34:44 np0005473739 podman[378014]: 2025-10-07 14:34:44.23711704 +0000 UTC m=+0.216445054 container remove 73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_gagarin, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:34:44 np0005473739 systemd[1]: libpod-conmon-73e93b039bcf6463b00b708794e33c8c8f81a5cf79e06207f4f646a1a21956db.scope: Deactivated successfully.
Oct  7 10:34:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2169: 305 pgs: 305 active+clean; 281 MiB data, 882 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 3.3 MiB/s wr, 100 op/s
Oct  7 10:34:44 np0005473739 podman[378053]: 2025-10-07 14:34:44.412427055 +0000 UTC m=+0.039089026 container create 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 10:34:44 np0005473739 systemd[1]: Started libpod-conmon-0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8.scope.
Oct  7 10:34:44 np0005473739 podman[378053]: 2025-10-07 14:34:44.393784647 +0000 UTC m=+0.020446648 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:34:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:34:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb03ca82ac7bea5c65cbb29e4d19835a00b8b4128fc248fd91bdbe23feb4b8e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb03ca82ac7bea5c65cbb29e4d19835a00b8b4128fc248fd91bdbe23feb4b8e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb03ca82ac7bea5c65cbb29e4d19835a00b8b4128fc248fd91bdbe23feb4b8e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0fb03ca82ac7bea5c65cbb29e4d19835a00b8b4128fc248fd91bdbe23feb4b8e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:44 np0005473739 podman[378053]: 2025-10-07 14:34:44.519587288 +0000 UTC m=+0.146249279 container init 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:34:44 np0005473739 podman[378053]: 2025-10-07 14:34:44.527435107 +0000 UTC m=+0.154097078 container start 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:34:44 np0005473739 podman[378053]: 2025-10-07 14:34:44.53164501 +0000 UTC m=+0.158307011 container attach 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef)
Oct  7 10:34:45 np0005473739 nova_compute[259550]: 2025-10-07 14:34:45.140 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Successfully created port: 647fc35d-c5c1-4818-9fe1-b704468ceb32 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]: {
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "osd_id": 2,
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "type": "bluestore"
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:    },
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "osd_id": 1,
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "type": "bluestore"
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:    },
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "osd_id": 0,
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:        "type": "bluestore"
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]:    }
Oct  7 10:34:45 np0005473739 charming_chatterjee[378071]: }
Oct  7 10:34:45 np0005473739 systemd[1]: libpod-0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8.scope: Deactivated successfully.
Oct  7 10:34:45 np0005473739 podman[378104]: 2025-10-07 14:34:45.554146432 +0000 UTC m=+0.025346588 container died 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 10:34:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0fb03ca82ac7bea5c65cbb29e4d19835a00b8b4128fc248fd91bdbe23feb4b8e-merged.mount: Deactivated successfully.
Oct  7 10:34:45 np0005473739 podman[378104]: 2025-10-07 14:34:45.707359986 +0000 UTC m=+0.178560122 container remove 0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=charming_chatterjee, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 10:34:45 np0005473739 systemd[1]: libpod-conmon-0abbbe93a18f06fd1c1f0e25323f00171c51747bf0776ec4e99ccbf8573b53c8.scope: Deactivated successfully.
Oct  7 10:34:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:34:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:34:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:34:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:34:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ff4c46c7-d236-4f44-9c66-c22ebe81560e does not exist
Oct  7 10:34:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a7a829e6-29f6-4af9-bf0d-a4a3d579c1bd does not exist
Oct  7 10:34:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2170: 305 pgs: 305 active+clean; 293 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Oct  7 10:34:46 np0005473739 nova_compute[259550]: 2025-10-07 14:34:46.475 2 DEBUG nova.compute.manager [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-changed-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:46 np0005473739 nova_compute[259550]: 2025-10-07 14:34:46.476 2 DEBUG nova.compute.manager [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing instance network info cache due to event network-changed-87dbfe27-9436-4e21-a648-df77ddfec6ca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:34:46 np0005473739 nova_compute[259550]: 2025-10-07 14:34:46.476 2 DEBUG oslo_concurrency.lockutils [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:34:46 np0005473739 nova_compute[259550]: 2025-10-07 14:34:46.776 2 INFO nova.compute.manager [None req-9996e800-54fd-4cd8-825c-7fb0b74e8a82 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Get console output#033[00m
Oct  7 10:34:46 np0005473739 nova_compute[259550]: 2025-10-07 14:34:46.781 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:34:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:34:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:34:46 np0005473739 nova_compute[259550]: 2025-10-07 14:34:46.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:34:47 np0005473739 nova_compute[259550]: 2025-10-07 14:34:47.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:47 np0005473739 nova_compute[259550]: 2025-10-07 14:34:47.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:47 np0005473739 nova_compute[259550]: 2025-10-07 14:34:47.900 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Successfully updated port: 647fc35d-c5c1-4818-9fe1-b704468ceb32 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:34:47 np0005473739 nova_compute[259550]: 2025-10-07 14:34:47.943 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:34:47 np0005473739 nova_compute[259550]: 2025-10-07 14:34:47.944 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:34:47 np0005473739 nova_compute[259550]: 2025-10-07 14:34:47.944 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:34:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:47 np0005473739 nova_compute[259550]: 2025-10-07 14:34:47.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:34:47 np0005473739 nova_compute[259550]: 2025-10-07 14:34:47.979 2 DEBUG nova.network.neutron [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:47 np0005473739 nova_compute[259550]: 2025-10-07 14:34:47.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:34:47 np0005473739 nova_compute[259550]: 2025-10-07 14:34:47.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:34:47 np0005473739 nova_compute[259550]: 2025-10-07 14:34:47.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.158 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.159 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance network_info: |[{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.159 2 DEBUG oslo_concurrency.lockutils [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.159 2 DEBUG nova.network.neutron [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing network info cache for port 87dbfe27-9436-4e21-a648-df77ddfec6ca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.163 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Start _get_guest_xml network_info=[{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.169 2 WARNING nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.178 2 DEBUG nova.virt.libvirt.host [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.179 2 DEBUG nova.virt.libvirt.host [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.183 2 DEBUG nova.virt.libvirt.host [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.184 2 DEBUG nova.virt.libvirt.host [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.184 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.184 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.185 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.185 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.185 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.185 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.186 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.186 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.186 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.186 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.186 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.187 2 DEBUG nova.virt.hardware [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.189 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.230 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.231 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.231 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.231 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.231 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.233 2 INFO nova.compute.manager [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Terminating instance#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.234 2 DEBUG nova.compute.manager [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:34:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2171: 305 pgs: 305 active+clean; 293 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 567 KiB/s rd, 3.6 MiB/s wr, 100 op/s
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.292 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.367 2 DEBUG nova.compute.manager [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-changed-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.367 2 DEBUG nova.compute.manager [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Refreshing instance network info cache due to event network-changed-647fc35d-c5c1-4818-9fe1-b704468ceb32. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.367 2 DEBUG oslo_concurrency.lockutils [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:34:48 np0005473739 kernel: tapea75e1fb-47 (unregistering): left promiscuous mode
Oct  7 10:34:48 np0005473739 NetworkManager[44949]: <info>  [1759847688.4381] device (tapea75e1fb-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:34:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:48Z|01109|binding|INFO|Releasing lport ea75e1fb-474b-484e-9a52-73939d17da1b from this chassis (sb_readonly=0)
Oct  7 10:34:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:48Z|01110|binding|INFO|Setting lport ea75e1fb-474b-484e-9a52-73939d17da1b down in Southbound
Oct  7 10:34:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:48Z|01111|binding|INFO|Removing iface tapea75e1fb-47 ovn-installed in OVS
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.464 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:91:21 10.100.0.14'], port_security=['fa:16:3e:57:91:21 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': 'a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '37af7aab-7feb-47ed-a12c-8267b21bb6f0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a239abd8-f455-4712-a289-4273b897ab8d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ea75e1fb-474b-484e-9a52-73939d17da1b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.465 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ea75e1fb-474b-484e-9a52-73939d17da1b in datapath 256ae6d5-60ea-4d02-8c43-2f55874712c6 unbound from our chassis#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.469 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 256ae6d5-60ea-4d02-8c43-2f55874712c6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.471 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1cd982fa-1961-40de-922c-af61833f6e94]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.471 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 namespace which is not needed anymore#033[00m
Oct  7 10:34:48 np0005473739 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006c.scope: Deactivated successfully.
Oct  7 10:34:48 np0005473739 systemd[1]: machine-qemu\x2d137\x2dinstance\x2d0000006c.scope: Consumed 12.580s CPU time.
Oct  7 10:34:48 np0005473739 systemd-machined[214580]: Machine qemu-137-instance-0000006c terminated.
Oct  7 10:34:48 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [NOTICE]   (376732) : haproxy version is 2.8.14-c23fe91
Oct  7 10:34:48 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [NOTICE]   (376732) : path to executable is /usr/sbin/haproxy
Oct  7 10:34:48 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [WARNING]  (376732) : Exiting Master process...
Oct  7 10:34:48 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [WARNING]  (376732) : Exiting Master process...
Oct  7 10:34:48 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [ALERT]    (376732) : Current worker (376734) exited with code 143 (Terminated)
Oct  7 10:34:48 np0005473739 neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6[376728]: [WARNING]  (376732) : All workers exited. Exiting... (0)
Oct  7 10:34:48 np0005473739 systemd[1]: libpod-c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac.scope: Deactivated successfully.
Oct  7 10:34:48 np0005473739 podman[378211]: 2025-10-07 14:34:48.621802771 +0000 UTC m=+0.048338453 container died c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:34:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:34:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2800255612' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:34:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac-userdata-shm.mount: Deactivated successfully.
Oct  7 10:34:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8ab2a485858f7d7c59681ecb143d683598d5f4b822096692bfe1648f09b1a197-merged.mount: Deactivated successfully.
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:48 np0005473739 podman[378211]: 2025-10-07 14:34:48.666215047 +0000 UTC m=+0.092750729 container cleanup c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.671 2 INFO nova.virt.libvirt.driver [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Instance destroyed successfully.#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.672 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.672 2 DEBUG nova.objects.instance [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:34:48 np0005473739 systemd[1]: libpod-conmon-c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac.scope: Deactivated successfully.
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.702 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.708 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:48 np0005473739 podman[378251]: 2025-10-07 14:34:48.739870726 +0000 UTC m=+0.047444629 container remove c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.746 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ed336f94-95f9-4158-9b36-92eae507eb13]: (4, ('Tue Oct  7 02:34:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 (c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac)\nc689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac\nTue Oct  7 02:34:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 (c689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac)\nc689510a52820e0cddeb3d0479991c1a6f97964bba8c8b0e813f790fd6faa2ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.748 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0e0147d3-4b43-4b0a-9997-a5338656dfc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.749 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap256ae6d5-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:48 np0005473739 kernel: tap256ae6d5-60: left promiscuous mode
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.773 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[640a4d4e-5921-4e2c-ad51-1b82defe85d6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.775 2 DEBUG nova.virt.libvirt.vif [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:33:46Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-797854475',display_name='tempest-TestNetworkAdvancedServerOps-server-797854475',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-797854475',id=108,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBD4Lkj2HHIhO2PrQ11lMJZ07v9Q3dgwfIb7H7MpsdLLDdtvBovL59oN2/XacURcr+DjY2ldySrFXIZcVMnzxGBRmm67eVPlJbvF3TQEBfQs6C/dhmbAY+AHYoEt1hPfXxw==',key_name='tempest-TestNetworkAdvancedServerOps-512345044',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:33:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-qabao1fo',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:34:27Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.775 2 DEBUG nova.network.os_vif_util [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.186", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.778 2 DEBUG nova.network.os_vif_util [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.778 2 DEBUG os_vif [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.780 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapea75e1fb-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.790 2 INFO os_vif [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:91:21,bridge_name='br-int',has_traffic_filtering=True,id=ea75e1fb-474b-484e-9a52-73939d17da1b,network=Network(256ae6d5-60ea-4d02-8c43-2f55874712c6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapea75e1fb-47')#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.808 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c6dd65b-df2d-49c5-bb45-836132b4fe7b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.810 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95545db9-d3a6-413b-bbbc-d38a93fc8911]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.825 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc7a293-89f2-4efa-9c96-ea7c3157e606]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 825027, 'reachable_time': 30808, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378302, 'error': None, 'target': 'ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.828 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-256ae6d5-60ea-4d02-8c43-2f55874712c6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:34:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:48.828 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cdfeb7de-146b-4f9f-a896-37291f837878]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:48 np0005473739 systemd[1]: run-netns-ovnmeta\x2d256ae6d5\x2d60ea\x2d4d02\x2d8c43\x2d2f55874712c6.mount: Deactivated successfully.
Oct  7 10:34:48 np0005473739 nova_compute[259550]: 2025-10-07 14:34:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.035 2 DEBUG nova.compute.manager [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.036 2 DEBUG nova.compute.manager [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing instance network info cache due to event network-changed-ea75e1fb-474b-484e-9a52-73939d17da1b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.036 2 DEBUG oslo_concurrency.lockutils [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.037 2 DEBUG oslo_concurrency.lockutils [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.037 2 DEBUG nova.network.neutron [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Refreshing network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:34:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:34:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3673809848' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.224 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.225 2 DEBUG nova.virt.libvirt.vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:36Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.225 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.226 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.227 2 DEBUG nova.virt.libvirt.vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:36Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.227 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.227 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.228 2 DEBUG nova.objects.instance [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid df052dd5-fecd-4dd3-be36-4becc3f9f318 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.445 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  <uuid>df052dd5-fecd-4dd3-be36-4becc3f9f318</uuid>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  <name>instance-0000006e</name>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-1627574273</nova:name>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:34:48</nova:creationTime>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <nova:port uuid="72db4fd3-8171-42af-9801-69a061614ccc">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <nova:port uuid="87dbfe27-9436-4e21-a648-df77ddfec6ca">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe44:cb8e" ipVersion="6"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <entry name="serial">df052dd5-fecd-4dd3-be36-4becc3f9f318</entry>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <entry name="uuid">df052dd5-fecd-4dd3-be36-4becc3f9f318</entry>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/df052dd5-fecd-4dd3-be36-4becc3f9f318_disk">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:4d:17:d0"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <target dev="tap72db4fd3-81"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:44:cb:8e"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <target dev="tap87dbfe27-94"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/console.log" append="off"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:34:49 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:34:49 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:34:49 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:34:49 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.446 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Preparing to wait for external event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.446 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.446 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.447 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.447 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Preparing to wait for external event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.447 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.447 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.447 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.448 2 DEBUG nova.virt.libvirt.vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:36Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.448 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.449 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.449 2 DEBUG os_vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.450 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.450 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.453 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72db4fd3-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.454 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72db4fd3-81, col_values=(('external_ids', {'iface-id': '72db4fd3-8171-42af-9801-69a061614ccc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4d:17:d0', 'vm-uuid': 'df052dd5-fecd-4dd3-be36-4becc3f9f318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:49 np0005473739 NetworkManager[44949]: <info>  [1759847689.4563] manager: (tap72db4fd3-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/451)
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.461 2 INFO os_vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81')#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.462 2 DEBUG nova.virt.libvirt.vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:36Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.462 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.463 2 DEBUG nova.network.os_vif_util [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.463 2 DEBUG os_vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.464 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.466 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap87dbfe27-94, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.467 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap87dbfe27-94, col_values=(('external_ids', {'iface-id': '87dbfe27-9436-4e21-a648-df77ddfec6ca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:44:cb:8e', 'vm-uuid': 'df052dd5-fecd-4dd3-be36-4becc3f9f318'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:49 np0005473739 NetworkManager[44949]: <info>  [1759847689.5063] manager: (tap87dbfe27-94): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/452)
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.513 2 INFO os_vif [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94')#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.556 2 INFO nova.virt.libvirt.driver [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Deleting instance files /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_del#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.557 2 INFO nova.virt.libvirt.driver [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Deletion of /var/lib/nova/instances/a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8_del complete#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.580 2 DEBUG nova.network.neutron [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Updating instance_info_cache with network_info: [{"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.941 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.942 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.942 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:4d:17:d0, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.942 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:44:cb:8e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.943 2 INFO nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Using config drive#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.966 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.972 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.973 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Instance network_info: |[{"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.973 2 DEBUG oslo_concurrency.lockutils [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.974 2 DEBUG nova.network.neutron [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Refreshing network info cache for port 647fc35d-c5c1-4818-9fe1-b704468ceb32 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.976 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Start _get_guest_xml network_info=[{"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.979 2 INFO nova.compute.manager [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Took 1.74 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.980 2 DEBUG oslo.service.loopingcall [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.980 2 DEBUG nova.compute.manager [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.980 2 DEBUG nova.network.neutron [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.984 2 WARNING nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.991 2 DEBUG nova.virt.libvirt.host [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.991 2 DEBUG nova.virt.libvirt.host [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.994 2 DEBUG nova.virt.libvirt.host [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.995 2 DEBUG nova.virt.libvirt.host [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.995 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.995 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.996 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.997 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.997 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.997 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:34:49 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.997 2 DEBUG nova.virt.hardware [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:49.999 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.043 2 DEBUG nova.network.neutron [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updated VIF entry in instance network info cache for port 87dbfe27-9436-4e21-a648-df77ddfec6ca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.043 2 DEBUG nova.network.neutron [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.104 2 DEBUG oslo_concurrency.lockutils [req-0c7d0f82-97c1-4003-81bb-3a564f72accf req-94187f31-8b59-4f14-8d41-bf6b83dee8d0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:34:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2172: 305 pgs: 305 active+clean; 294 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 569 KiB/s rd, 3.6 MiB/s wr, 101 op/s
Oct  7 10:34:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:34:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3801410304' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.528 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.551 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.554 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.590 2 INFO nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Creating config drive at /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.595 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sy2t63i execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.739 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7sy2t63i" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.770 2 DEBUG nova.storage.rbd_utils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.776 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.854 2 DEBUG nova.network.neutron [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updated VIF entry in instance network info cache for port ea75e1fb-474b-484e-9a52-73939d17da1b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:34:50 np0005473739 nova_compute[259550]: 2025-10-07 14:34:50.855 2 DEBUG nova.network.neutron [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [{"id": "ea75e1fb-474b-484e-9a52-73939d17da1b", "address": "fa:16:3e:57:91:21", "network": {"id": "256ae6d5-60ea-4d02-8c43-2f55874712c6", "bridge": "br-int", "label": "tempest-network-smoke--2069686593", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapea75e1fb-47", "ovs_interfaceid": "ea75e1fb-474b-484e-9a52-73939d17da1b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:34:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4005761768' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.044 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.045 2 DEBUG nova.virt.libvirt.vif [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-81555906',display_name='tempest-TestNetworkBasicOps-server-81555906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-81555906',id=111,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVpb0If8IJnGODxIDZvyf13jL46mQ77emkpmotXMzF/nDOLDOQfFE/H2m8lCVqJZPCaNeKJqv9eLmV7ud/4driHFAI2qIDYCbnN7/CMc/8whSX25QhcG+umNda5hVH/NQ==',key_name='tempest-TestNetworkBasicOps-307113164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-edoddh5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:41Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=a5f57fca-2121-432c-b000-2fd92f5c1b12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.045 2 DEBUG nova.network.os_vif_util [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.046 2 DEBUG nova.network.os_vif_util [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.047 2 DEBUG nova.objects.instance [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid a5f57fca-2121-432c-b000-2fd92f5c1b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.102 2 DEBUG oslo_concurrency.lockutils [req-6172aa79-5f02-449c-8555-bf5834fc5b7a req-a0fa3e06-1160-414d-9ca1-46a859bbeef0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.163 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  <uuid>a5f57fca-2121-432c-b000-2fd92f5c1b12</uuid>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  <name>instance-0000006f</name>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-81555906</nova:name>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:34:49</nova:creationTime>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <nova:port uuid="647fc35d-c5c1-4818-9fe1-b704468ceb32">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <entry name="serial">a5f57fca-2121-432c-b000-2fd92f5c1b12</entry>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <entry name="uuid">a5f57fca-2121-432c-b000-2fd92f5c1b12</entry>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a5f57fca-2121-432c-b000-2fd92f5c1b12_disk">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:12:ca:dd"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <target dev="tap647fc35d-c5"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/console.log" append="off"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:34:51 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:34:51 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:34:51 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:34:51 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.165 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Preparing to wait for external event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.165 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.165 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.166 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.166 2 DEBUG nova.virt.libvirt.vif [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-81555906',display_name='tempest-TestNetworkBasicOps-server-81555906',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-81555906',id=111,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVpb0If8IJnGODxIDZvyf13jL46mQ77emkpmotXMzF/nDOLDOQfFE/H2m8lCVqJZPCaNeKJqv9eLmV7ud/4driHFAI2qIDYCbnN7/CMc/8whSX25QhcG+umNda5hVH/NQ==',key_name='tempest-TestNetworkBasicOps-307113164',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-edoddh5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:34:41Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=a5f57fca-2121-432c-b000-2fd92f5c1b12,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.167 2 DEBUG nova.network.os_vif_util [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.167 2 DEBUG nova.network.os_vif_util [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.168 2 DEBUG os_vif [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.168 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.169 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.169 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap647fc35d-c5, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.173 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap647fc35d-c5, col_values=(('external_ids', {'iface-id': '647fc35d-c5c1-4818-9fe1-b704468ceb32', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:12:ca:dd', 'vm-uuid': 'a5f57fca-2121-432c-b000-2fd92f5c1b12'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 NetworkManager[44949]: <info>  [1759847691.1760] manager: (tap647fc35d-c5): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/453)
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.186 2 INFO os_vif [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5')#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.237 2 DEBUG nova.network.neutron [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.259 2 DEBUG oslo_concurrency.processutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config df052dd5-fecd-4dd3-be36-4becc3f9f318_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.260 2 INFO nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Deleting local config drive /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318/disk.config because it was imported into RBD.#033[00m
Oct  7 10:34:51 np0005473739 systemd-udevd[378192]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:34:51 np0005473739 NetworkManager[44949]: <info>  [1759847691.3183] manager: (tap72db4fd3-81): new Tun device (/org/freedesktop/NetworkManager/Devices/454)
Oct  7 10:34:51 np0005473739 kernel: tap72db4fd3-81: entered promiscuous mode
Oct  7 10:34:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:51Z|01112|binding|INFO|Claiming lport 72db4fd3-8171-42af-9801-69a061614ccc for this chassis.
Oct  7 10:34:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:51Z|01113|binding|INFO|72db4fd3-8171-42af-9801-69a061614ccc: Claiming fa:16:3e:4d:17:d0 10.100.0.8
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 NetworkManager[44949]: <info>  [1759847691.3342] device (tap72db4fd3-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:34:51 np0005473739 NetworkManager[44949]: <info>  [1759847691.3350] device (tap72db4fd3-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:34:51 np0005473739 NetworkManager[44949]: <info>  [1759847691.3386] manager: (tap87dbfe27-94): new Tun device (/org/freedesktop/NetworkManager/Devices/455)
Oct  7 10:34:51 np0005473739 kernel: tap87dbfe27-94: entered promiscuous mode
Oct  7 10:34:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:51Z|01114|binding|INFO|Setting lport 72db4fd3-8171-42af-9801-69a061614ccc ovn-installed in OVS
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:51Z|01115|if_status|INFO|Dropped 1 log messages in last 1141 seconds (most recently, 1141 seconds ago) due to excessive rate
Oct  7 10:34:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:51Z|01116|if_status|INFO|Not updating pb chassis for 87dbfe27-9436-4e21-a648-df77ddfec6ca now as sb is readonly
Oct  7 10:34:51 np0005473739 NetworkManager[44949]: <info>  [1759847691.3543] device (tap87dbfe27-94): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:34:51 np0005473739 NetworkManager[44949]: <info>  [1759847691.3550] device (tap87dbfe27-94): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 systemd-machined[214580]: New machine qemu-138-instance-0000006e.
Oct  7 10:34:51 np0005473739 systemd[1]: Started Virtual Machine qemu-138-instance-0000006e.
Oct  7 10:34:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:51Z|01117|binding|INFO|Claiming lport 87dbfe27-9436-4e21-a648-df77ddfec6ca for this chassis.
Oct  7 10:34:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:51Z|01118|binding|INFO|87dbfe27-9436-4e21-a648-df77ddfec6ca: Claiming fa:16:3e:44:cb:8e 2001:db8::f816:3eff:fe44:cb8e
Oct  7 10:34:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:51Z|01119|binding|INFO|Setting lport 72db4fd3-8171-42af-9801-69a061614ccc up in Southbound
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.477 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:17:d0 10.100.0.8'], port_security=['fa:16:3e:4d:17:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'df052dd5-fecd-4dd3-be36-4becc3f9f318', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c899e05d-224c-44fe-8294-eaece58d7fe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34b97456-8a84-4c85-bc2d-80b2de48e489, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=72db4fd3-8171-42af-9801-69a061614ccc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:34:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:51Z|01120|binding|INFO|Setting lport 87dbfe27-9436-4e21-a648-df77ddfec6ca ovn-installed in OVS
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.479 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 72db4fd3-8171-42af-9801-69a061614ccc in datapath c899e05d-224c-44fe-8294-eaece58d7fe7 bound to our chassis#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.480 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c899e05d-224c-44fe-8294-eaece58d7fe7#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.494 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d983c2fd-e625-4c01-9d49-d174edb37024]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.495 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc899e05d-21 in ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.497 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc899e05d-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.497 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1de5961d-49c6-4fa8-8d63-e8c50ef7e4da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.498 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c861c3d6-b50c-4bce-b31a-2f02fd4c69b4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.510 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[96da4c4f-1c19-4350-9a87-72a243fac43b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.530 2 DEBUG nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.530 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.531 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.531 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.531 2 DEBUG nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.531 2 DEBUG nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-unplugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.531 2 DEBUG nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.532 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.532 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.532 2 DEBUG oslo_concurrency.lockutils [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.532 2 DEBUG nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] No waiting events found dispatching network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.532 2 WARNING nova.compute.manager [req-333426eb-252b-4e6c-95e1-56a890a0b5ce req-aa8acce8-f34d-4775-95cc-b852475aa051 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received unexpected event network-vif-plugged-ea75e1fb-474b-484e-9a52-73939d17da1b for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.534 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e1abbd32-42f1-45fe-bfa6-c3314dea8a8c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.561 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8f868e6-0136-4059-952f-5d1f0faf5f49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 NetworkManager[44949]: <info>  [1759847691.5787] manager: (tapc899e05d-20): new Veth device (/org/freedesktop/NetworkManager/Devices/456)
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.578 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[00163c49-d960-4c5e-99f7-c8c3461ca669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 systemd-udevd[378491]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.614 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b90c45f7-86f3-41e0-9df2-8f43c4bcc5d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.617 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a933f626-11a3-49d7-9437-705953f25378]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 NetworkManager[44949]: <info>  [1759847691.6407] device (tapc899e05d-20): carrier: link connected
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.646 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fe854340-f969-4e32-97e7-adaf71eaed02]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.658 2 INFO nova.compute.manager [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Took 1.68 seconds to deallocate network for instance.#033[00m
Oct  7 10:34:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:51Z|01121|binding|INFO|Setting lport 87dbfe27-9436-4e21-a648-df77ddfec6ca up in Southbound
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.659 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:cb:8e 2001:db8::f816:3eff:fe44:cb8e'], port_security=['fa:16:3e:44:cb:8e 2001:db8::f816:3eff:fe44:cb8e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe44:cb8e/64', 'neutron:device_id': 'df052dd5-fecd-4dd3-be36-4becc3f9f318', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae04c5e-8ef6-447a-8c6c-5575a0afadaf, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=87dbfe27-9436-4e21-a648-df77ddfec6ca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.662 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c21a8019-db47-4642-abbf-f27debe9d8ea]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc899e05d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:ba:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827520, 'reachable_time': 17717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378512, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.670 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.670 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.671 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:12:ca:dd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.671 2 INFO nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Using config drive#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.683 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[af80d870-ee14-4c7e-aa17-6ecf857661fa]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe74:babd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827520, 'tstamp': 827520}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378513, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.693 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.703 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70b6535c-031c-48ad-96e0-0d0b0464d850]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc899e05d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:ba:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827520, 'reachable_time': 17717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378530, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.735 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d8422788-a7e8-4f2e-bd59-3e73ea6d7af6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.798 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[edd36dc3-12bb-4537-af48-8240d5778aa3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.801 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc899e05d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.801 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.802 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc899e05d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:51 np0005473739 NetworkManager[44949]: <info>  [1759847691.8046] manager: (tapc899e05d-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/457)
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.806 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.807 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 kernel: tapc899e05d-20: entered promiscuous mode
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.813 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc899e05d-20, col_values=(('external_ids', {'iface-id': 'a420e217-e19f-447c-8a17-e0fdc42144b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:51Z|01122|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.837 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c899e05d-224c-44fe-8294-eaece58d7fe7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c899e05d-224c-44fe-8294-eaece58d7fe7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.838 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8ea5e00b-f9f7-42a4-ae3a-27ce3b8b74b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.839 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-c899e05d-224c-44fe-8294-eaece58d7fe7
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/c899e05d-224c-44fe-8294-eaece58d7fe7.pid.haproxy
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID c899e05d-224c-44fe-8294-eaece58d7fe7
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:34:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:51.841 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'env', 'PROCESS_TAG=haproxy-c899e05d-224c-44fe-8294-eaece58d7fe7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c899e05d-224c-44fe-8294-eaece58d7fe7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.852 2 DEBUG nova.network.neutron [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Updated VIF entry in instance network info cache for port 647fc35d-c5c1-4818-9fe1-b704468ceb32. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.853 2 DEBUG nova.network.neutron [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Updating instance_info_cache with network_info: [{"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.897 2 DEBUG oslo_concurrency.lockutils [req-07f8d396-b011-4880-b7d8-7c94ed7eba09 req-2a2fd73b-cba8-4c0f-ad3f-4b97ca5f3c85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a5f57fca-2121-432c-b000-2fd92f5c1b12" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:34:51 np0005473739 nova_compute[259550]: 2025-10-07 14:34:51.974 2 DEBUG oslo_concurrency.processutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.006 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.268 2 INFO nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Creating config drive at /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.273 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl1h660cx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2173: 305 pgs: 305 active+clean; 269 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 2.4 MiB/s wr, 50 op/s
Oct  7 10:34:52 np0005473739 podman[378605]: 2025-10-07 14:34:52.20667886 +0000 UTC m=+0.032614952 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.361 2 DEBUG nova.compute.manager [req-b8bdf78e-5dba-4ba7-932b-4bd3d835d27d req-5b892d10-e845-491c-bfa7-b90cb5befcbf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.363 2 DEBUG oslo_concurrency.lockutils [req-b8bdf78e-5dba-4ba7-932b-4bd3d835d27d req-5b892d10-e845-491c-bfa7-b90cb5befcbf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.363 2 DEBUG oslo_concurrency.lockutils [req-b8bdf78e-5dba-4ba7-932b-4bd3d835d27d req-5b892d10-e845-491c-bfa7-b90cb5befcbf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.364 2 DEBUG oslo_concurrency.lockutils [req-b8bdf78e-5dba-4ba7-932b-4bd3d835d27d req-5b892d10-e845-491c-bfa7-b90cb5befcbf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.364 2 DEBUG nova.compute.manager [req-b8bdf78e-5dba-4ba7-932b-4bd3d835d27d req-5b892d10-e845-491c-bfa7-b90cb5befcbf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Processing event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.415 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpl1h660cx" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:34:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/41990193' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.511 2 DEBUG nova.storage.rbd_utils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.515 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.566 2 DEBUG oslo_concurrency.processutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.591s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.572 2 DEBUG nova.compute.provider_tree [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.621 2 DEBUG nova.scheduler.client.report [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:34:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.767 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.960s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:52 np0005473739 podman[378605]: 2025-10-07 14:34:52.824239982 +0000 UTC m=+0.650176074 container create 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:34:52 np0005473739 nova_compute[259550]: 2025-10-07 14:34:52.892 2 INFO nova.scheduler.client.report [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Deleted allocations for instance a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8#033[00m
Oct  7 10:34:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:53 np0005473739 systemd[1]: Started libpod-conmon-5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484.scope.
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.061 2 DEBUG oslo_concurrency.lockutils [None req-a26abdaf-9132-43c4-8f7c-3baebdf5b4e9 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.831s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:34:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/325f40c5397e88b9b38207d1a08b9557bb0033598dc986796ff5fd8d9abcf075/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.125 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847693.1246157, df052dd5-fecd-4dd3-be36-4becc3f9f318 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.125 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] VM Started (Lifecycle Event)#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.211 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.215 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847693.1247613, df052dd5-fecd-4dd3-be36-4becc3f9f318 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.216 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.282 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.286 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:34:53 np0005473739 podman[378605]: 2025-10-07 14:34:53.296159601 +0000 UTC m=+1.122095713 container init 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  7 10:34:53 np0005473739 podman[378605]: 2025-10-07 14:34:53.301902905 +0000 UTC m=+1.127838997 container start 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.314 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:34:53 np0005473739 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [NOTICE]   (378692) : New worker (378694) forked
Oct  7 10:34:53 np0005473739 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [NOTICE]   (378692) : Loading success.
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.490 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 87dbfe27-9436-4e21-a648-df77ddfec6ca in datapath 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 unbound from our chassis#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.493 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.509 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78c5c3e0-b415-4a97-bfa4-46238d6e0e50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.510 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap673dd6ba-41 in ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.512 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap673dd6ba-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.512 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f41047a0-94f7-4f3f-bff4-05c075193939]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.514 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5653ff94-5a9c-4eb3-baa0-463ee0403bc6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.528 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[841bb333-c2f9-46c0-8846-d7dc8c93b5dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.543 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5cd872cb-da51-481a-a289-505e1712c141]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.573 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e58c58c5-2983-4f29-9b0f-d61e03ec5954]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 systemd-udevd[378495]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:34:53 np0005473739 NetworkManager[44949]: <info>  [1759847693.5794] manager: (tap673dd6ba-40): new Veth device (/org/freedesktop/NetworkManager/Devices/458)
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.578 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d40bbb3c-3ca9-4afa-850b-f524bf4986af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.608 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[11ea1202-52e4-4c4e-b9f2-0da65aa110df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.616 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ea091f95-dda2-4dc0-a909-3f7f8060ce47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 NetworkManager[44949]: <info>  [1759847693.6403] device (tap673dd6ba-40): carrier: link connected
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.646 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ff915d1a-f002-4213-802c-c4f49475320e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.666 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e1ec52e4-f920-4f8a-a091-913628539b3c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap673dd6ba-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:cb:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827720, 'reachable_time': 16406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378719, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.682 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f6b5842-3e6d-4a03-9a1e-dd59530ea1c8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7b:cb46'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827720, 'tstamp': 827720}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378720, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.701 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf1fb016-9c34-4249-99f8-53e7c5d54343]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap673dd6ba-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:cb:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827720, 'reachable_time': 16406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378721, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.712 2 DEBUG oslo_concurrency.processutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config a5f57fca-2121-432c-b000-2fd92f5c1b12_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.197s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.713 2 INFO nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Deleting local config drive /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12/disk.config because it was imported into RBD.#033[00m
Oct  7 10:34:53 np0005473739 virtqemud[259430]: End of file while reading data: Input/output error
Oct  7 10:34:53 np0005473739 virtqemud[259430]: End of file while reading data: Input/output error
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[06002ab7-697d-4a2a-9a2b-d8bb108a23e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.764 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e42c6bfa-c354-4998-a600-286a7434649d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.766 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap673dd6ba-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.766 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.766 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap673dd6ba-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:53 np0005473739 NetworkManager[44949]: <info>  [1759847693.7692] manager: (tap673dd6ba-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/459)
Oct  7 10:34:53 np0005473739 kernel: tap673dd6ba-40: entered promiscuous mode
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.774 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap673dd6ba-40, col_values=(('external_ids', {'iface-id': 'e2eca697-1003-4116-b184-c9191a00584f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:53Z|01123|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct  7 10:34:53 np0005473739 NetworkManager[44949]: <info>  [1759847693.7783] manager: (tap647fc35d-c5): new Tun device (/org/freedesktop/NetworkManager/Devices/460)
Oct  7 10:34:53 np0005473739 kernel: tap647fc35d-c5: entered promiscuous mode
Oct  7 10:34:53 np0005473739 NetworkManager[44949]: <info>  [1759847693.7880] device (tap647fc35d-c5): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:34:53 np0005473739 NetworkManager[44949]: <info>  [1759847693.7886] device (tap647fc35d-c5): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:53Z|01124|binding|INFO|Claiming lport 647fc35d-c5c1-4818-9fe1-b704468ceb32 for this chassis.
Oct  7 10:34:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:53Z|01125|binding|INFO|647fc35d-c5c1-4818-9fe1-b704468ceb32: Claiming fa:16:3e:12:ca:dd 10.100.0.27
Oct  7 10:34:53 np0005473739 systemd-machined[214580]: New machine qemu-139-instance-0000006f.
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.823 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:ca:dd 10.100.0.27'], port_security=['fa:16:3e:12:ca:dd 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'a5f57fca-2121-432c-b000-2fd92f5c1b12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df76b87a-da1a-4438-aff3-46b99df6a681', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2ae626df-0151-4864-97c5-333f386c32b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f05eda8c-3c14-40e1-b5d0-16d149ef6c4e, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=647fc35d-c5c1-4818-9fe1-b704468ceb32) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:34:53 np0005473739 systemd[1]: Started Virtual Machine qemu-139-instance-0000006f.
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.840 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.841 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a901487-6e01-4273-8dbb-6c9ba6f412c4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.844 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3.pid.haproxy
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:34:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:53.844 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'env', 'PROCESS_TAG=haproxy-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:34:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:53Z|01126|binding|INFO|Setting lport 647fc35d-c5c1-4818-9fe1-b704468ceb32 ovn-installed in OVS
Oct  7 10:34:53 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:53Z|01127|binding|INFO|Setting lport 647fc35d-c5c1-4818-9fe1-b704468ceb32 up in Southbound
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.869 2 DEBUG nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Received event network-vif-deleted-ea75e1fb-474b-484e-9a52-73939d17da1b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.870 2 DEBUG nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.870 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.870 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.871 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.871 2 DEBUG nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Processing event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.871 2 DEBUG nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.871 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.871 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.872 2 DEBUG oslo_concurrency.lockutils [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.872 2 DEBUG nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.872 2 WARNING nova.compute.manager [req-53bd2539-c84b-468d-a2aa-78804321a547 req-8b084621-602b-42ae-80fd-fd514a8f2477 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received unexpected event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca for instance with vm_state building and task_state spawning.
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.873 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.883 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847693.8827105, df052dd5-fecd-4dd3-be36-4becc3f9f318 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.883 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] VM Resumed (Lifecycle Event)
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.886 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.895 2 INFO nova.virt.libvirt.driver [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance spawned successfully.
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.895 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.930 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.933 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.945 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.945 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.945 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.946 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.946 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:34:53 np0005473739 nova_compute[259550]: 2025-10-07 14:34:53.947 2 DEBUG nova.virt.libvirt.driver [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.023 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.108 2 INFO nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Took 17.95 seconds to spawn the instance on the hypervisor.
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.108 2 DEBUG nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:34:54 np0005473739 podman[378811]: 2025-10-07 14:34:54.220670445 +0000 UTC m=+0.047987774 container create 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.242 2 INFO nova.compute.manager [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Took 18.96 seconds to build instance.
Oct  7 10:34:54 np0005473739 systemd[1]: Started libpod-conmon-4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d.scope.
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.276 2 DEBUG oslo_concurrency.lockutils [None req-a8399405-12ac-4536-a5fc-1517410a47eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 19.055s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:34:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2174: 305 pgs: 305 active+clean; 213 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 73 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Oct  7 10:34:54 np0005473739 podman[378811]: 2025-10-07 14:34:54.194791013 +0000 UTC m=+0.022108362 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:34:54 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:34:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fd9610b23e7fa9be83c256b0b81e796a0e4842e582d51c70137e3b1797edeeac/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:54 np0005473739 podman[378811]: 2025-10-07 14:34:54.321791947 +0000 UTC m=+0.149109296 container init 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:34:54 np0005473739 podman[378811]: 2025-10-07 14:34:54.341075263 +0000 UTC m=+0.168392592 container start 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  7 10:34:54 np0005473739 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [NOTICE]   (378834) : New worker (378836) forked
Oct  7 10:34:54 np0005473739 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [NOTICE]   (378834) : Loading success.
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.404 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 647fc35d-c5c1-4818-9fe1-b704468ceb32 in datapath df76b87a-da1a-4438-aff3-46b99df6a681 unbound from our chassis
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.406 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network df76b87a-da1a-4438-aff3-46b99df6a681
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.420 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee8d5810-2217-49e1-ad23-ab06f72fa086]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.421 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapdf76b87a-d1 in ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.424 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapdf76b87a-d0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.424 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d32dda-5b0e-42c6-bbd4-02fe58fa6c8c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.425 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[707e8dd6-6331-407c-a23e-b0e0290d61e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.438 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6d62a0d4-f59f-4b87-b906-59cb0335cdb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.454 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae123f92-7834-47a6-b18a-62752b8a1247]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.488 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3d427f4f-1fd4-4d31-a749-3c439bd005ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 NetworkManager[44949]: <info>  [1759847694.4974] manager: (tapdf76b87a-d0): new Veth device (/org/freedesktop/NetworkManager/Devices/461)
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.501 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[378aee8b-1c11-40cc-b466-db6f9ccf9901]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.537 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0c108b56-7155-4fa1-84ea-9d99f739992d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.541 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0afa756a-9a5c-4481-8dc3-7a1670fd713f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.544 2 DEBUG nova.compute.manager [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.544 2 DEBUG oslo_concurrency.lockutils [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.546 2 DEBUG oslo_concurrency.lockutils [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.546 2 DEBUG oslo_concurrency.lockutils [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.546 2 DEBUG nova.compute.manager [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.546 2 WARNING nova.compute.manager [req-1590f589-cd43-4bbd-a8d3-a5ead688d938 req-02a3ff99-61ad-41d1-9b60-a1cb66358cc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received unexpected event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc for instance with vm_state active and task_state None.
Oct  7 10:34:54 np0005473739 NetworkManager[44949]: <info>  [1759847694.5671] device (tapdf76b87a-d0): carrier: link connected
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.573 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ea51d3c8-471f-4909-ba52-63294f7dc785]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.596 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f00d938a-c4bb-4771-bc8a-351c47e8287f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf76b87a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:f9:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827813, 'reachable_time': 27744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 378855, 'error': None, 'target': 'ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.612 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[58b2944e-ad3a-48c2-902a-1e1df1cf9526]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe60:f9dc'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827813, 'tstamp': 827813}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 378856, 'error': None, 'target': 'ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.631 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[549cfe56-8a84-4d97-96f6-02e3f3428792]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapdf76b87a-d1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:60:f9:dc'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 327], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827813, 'reachable_time': 27744, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 378857, 'error': None, 'target': 'ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.667 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69563dd4-94d6-4120-8c83-c3c75a3fc65e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.739 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5114e02c-5476-4c9c-a3af-b8acd3de16ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.741 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf76b87a-d0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.741 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.742 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdf76b87a-d0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:34:54 np0005473739 NetworkManager[44949]: <info>  [1759847694.7447] manager: (tapdf76b87a-d0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/462)
Oct  7 10:34:54 np0005473739 kernel: tapdf76b87a-d0: entered promiscuous mode
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.747 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapdf76b87a-d0, col_values=(('external_ids', {'iface-id': 'a27bf1b8-ba5c-4c7e-947d-4fb0bbc09db2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:54Z|01128|binding|INFO|Releasing lport a27bf1b8-ba5c-4c7e-947d-4fb0bbc09db2 from this chassis (sb_readonly=0)
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.753 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847694.7531753, a5f57fca-2121-432c-b000-2fd92f5c1b12 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.753 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] VM Started (Lifecycle Event)#033[00m
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.751 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/df76b87a-da1a-4438-aff3-46b99df6a681.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/df76b87a-da1a-4438-aff3-46b99df6a681.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.755 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[945a95f8-c495-44a4-8cac-0667560b71d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.756 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-df76b87a-da1a-4438-aff3-46b99df6a681
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/df76b87a-da1a-4438-aff3-46b99df6a681.pid.haproxy
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID df76b87a-da1a-4438-aff3-46b99df6a681
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:34:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:34:54.756 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681', 'env', 'PROCESS_TAG=haproxy-df76b87a-da1a-4438-aff3-46b99df6a681', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/df76b87a-da1a-4438-aff3-46b99df6a681.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.993 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.998 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847694.753738, a5f57fca-2121-432c-b000-2fd92f5c1b12 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:34:54 np0005473739 nova_compute[259550]: 2025-10-07 14:34:54.998 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:34:55 np0005473739 nova_compute[259550]: 2025-10-07 14:34:55.194 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:55 np0005473739 nova_compute[259550]: 2025-10-07 14:34:55.197 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:34:55 np0005473739 podman[378889]: 2025-10-07 14:34:55.134379889 +0000 UTC m=+0.025725568 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:34:55 np0005473739 podman[378889]: 2025-10-07 14:34:55.270290381 +0000 UTC m=+0.161636040 container create a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  7 10:34:55 np0005473739 systemd[1]: Started libpod-conmon-a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222.scope.
Oct  7 10:34:55 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:34:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d8f73fee24f065dab00f339734911ef1ec093684fe1fea8ceea6e76dc54383b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:34:55 np0005473739 nova_compute[259550]: 2025-10-07 14:34:55.396 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:34:55 np0005473739 podman[378889]: 2025-10-07 14:34:55.428660982 +0000 UTC m=+0.320006641 container init a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:34:55 np0005473739 podman[378889]: 2025-10-07 14:34:55.434949641 +0000 UTC m=+0.326295300 container start a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:34:55 np0005473739 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [NOTICE]   (378909) : New worker (378911) forked
Oct  7 10:34:55 np0005473739 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [NOTICE]   (378909) : Loading success.
Oct  7 10:34:56 np0005473739 podman[378920]: 2025-10-07 14:34:56.086553272 +0000 UTC m=+0.072933560 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd)
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.139 2 DEBUG nova.compute.manager [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.139 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.139 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.140 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.140 2 DEBUG nova.compute.manager [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Processing event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.140 2 DEBUG nova.compute.manager [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.140 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.140 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.141 2 DEBUG oslo_concurrency.lockutils [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.141 2 DEBUG nova.compute.manager [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] No waiting events found dispatching network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.141 2 WARNING nova.compute.manager [req-979a5c68-73a7-4e78-b86c-23068373a05a req-68560d5b-a0e4-4c8f-9b0d-667ac079a007 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received unexpected event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.141 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.150 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847696.1503851, a5f57fca-2121-432c-b000-2fd92f5c1b12 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.150 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.162 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.165 2 INFO nova.virt.libvirt.driver [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Instance spawned successfully.#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.165 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:56 np0005473739 podman[378941]: 2025-10-07 14:34:56.176825034 +0000 UTC m=+0.067146175 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.248 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.253 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.254 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.254 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.254 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.255 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.255 2 DEBUG nova.virt.libvirt.driver [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.258 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:34:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2175: 305 pgs: 305 active+clean; 213 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 337 KiB/s wr, 70 op/s
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.393 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.607 2 INFO nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Took 14.65 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.609 2 DEBUG nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.916 2 INFO nova.compute.manager [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Took 16.92 seconds to build instance.#033[00m
Oct  7 10:34:56 np0005473739 nova_compute[259550]: 2025-10-07 14:34:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:34:57 np0005473739 nova_compute[259550]: 2025-10-07 14:34:57.004 2 DEBUG oslo_concurrency.lockutils [None req-fc5dd960-c12c-4d19-a9df-b0a25dfcbbbe 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.102s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:57 np0005473739 nova_compute[259550]: 2025-10-07 14:34:57.124 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:57 np0005473739 nova_compute[259550]: 2025-10-07 14:34:57.124 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:57 np0005473739 nova_compute[259550]: 2025-10-07 14:34:57.125 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:57 np0005473739 nova_compute[259550]: 2025-10-07 14:34:57.125 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:34:57 np0005473739 nova_compute[259550]: 2025-10-07 14:34:57.125 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:57 np0005473739 nova_compute[259550]: 2025-10-07 14:34:57.240 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:34:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2741335439' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:34:57 np0005473739 nova_compute[259550]: 2025-10-07 14:34:57.621 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:34:57 np0005473739 nova_compute[259550]: 2025-10-07 14:34:57.995 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:34:57 np0005473739 nova_compute[259550]: 2025-10-07 14:34:57.995 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.000 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.000 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.005 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.005 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.233 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.235 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3339MB free_disk=59.901039123535156GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.235 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.235 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:34:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2176: 305 pgs: 305 active+clean; 213 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 770 KiB/s rd, 27 KiB/s wr, 69 op/s
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.520 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.521 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance df052dd5-fecd-4dd3-be36-4becc3f9f318 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.521 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a5f57fca-2121-432c-b000-2fd92f5c1b12 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.521 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.522 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.600 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:34:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:58Z|01129|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct  7 10:34:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:58Z|01130|binding|INFO|Releasing lport a27bf1b8-ba5c-4c7e-947d-4fb0bbc09db2 from this chassis (sb_readonly=0)
Oct  7 10:34:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:58Z|01131|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct  7 10:34:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:34:58Z|01132|binding|INFO|Releasing lport 3f0bb527-e0e8-409d-b11f-0c1205bec67a from this chassis (sb_readonly=0)
Oct  7 10:34:58 np0005473739 nova_compute[259550]: 2025-10-07 14:34:58.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:34:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:34:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3437138190' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:34:59 np0005473739 nova_compute[259550]: 2025-10-07 14:34:59.122 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:34:59 np0005473739 nova_compute[259550]: 2025-10-07 14:34:59.128 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:34:59 np0005473739 nova_compute[259550]: 2025-10-07 14:34:59.184 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:34:59 np0005473739 nova_compute[259550]: 2025-10-07 14:34:59.321 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:34:59 np0005473739 nova_compute[259550]: 2025-10-07 14:34:59.322 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.086s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:34:59 np0005473739 nova_compute[259550]: 2025-10-07 14:34:59.896 2 DEBUG nova.compute.manager [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-changed-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:34:59 np0005473739 nova_compute[259550]: 2025-10-07 14:34:59.897 2 DEBUG nova.compute.manager [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing instance network info cache due to event network-changed-72db4fd3-8171-42af-9801-69a061614ccc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:34:59 np0005473739 nova_compute[259550]: 2025-10-07 14:34:59.897 2 DEBUG oslo_concurrency.lockutils [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:34:59 np0005473739 nova_compute[259550]: 2025-10-07 14:34:59.898 2 DEBUG oslo_concurrency.lockutils [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:34:59 np0005473739 nova_compute[259550]: 2025-10-07 14:34:59.898 2 DEBUG nova.network.neutron [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing network info cache for port 72db4fd3-8171-42af-9801-69a061614ccc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:35:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:00.067 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:00.068 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:00.069 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2177: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 39 KiB/s wr, 166 op/s
Oct  7 10:35:01 np0005473739 nova_compute[259550]: 2025-10-07 14:35:01.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:01 np0005473739 nova_compute[259550]: 2025-10-07 14:35:01.322 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:35:01 np0005473739 nova_compute[259550]: 2025-10-07 14:35:01.323 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:35:01 np0005473739 nova_compute[259550]: 2025-10-07 14:35:01.323 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:35:02 np0005473739 nova_compute[259550]: 2025-10-07 14:35:02.171 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:35:02 np0005473739 nova_compute[259550]: 2025-10-07 14:35:02.173 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:35:02 np0005473739 nova_compute[259550]: 2025-10-07 14:35:02.173 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:35:02 np0005473739 nova_compute[259550]: 2025-10-07 14:35:02.173 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:35:02 np0005473739 nova_compute[259550]: 2025-10-07 14:35:02.242 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2178: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 29 KiB/s wr, 175 op/s
Oct  7 10:35:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:03 np0005473739 nova_compute[259550]: 2025-10-07 14:35:03.481 2 DEBUG nova.network.neutron [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updated VIF entry in instance network info cache for port 72db4fd3-8171-42af-9801-69a061614ccc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:35:03 np0005473739 nova_compute[259550]: 2025-10-07 14:35:03.481 2 DEBUG nova.network.neutron [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:03 np0005473739 nova_compute[259550]: 2025-10-07 14:35:03.668 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847688.6666355, a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:35:03 np0005473739 nova_compute[259550]: 2025-10-07 14:35:03.668 2 INFO nova.compute.manager [-] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:35:03 np0005473739 nova_compute[259550]: 2025-10-07 14:35:03.972 2 DEBUG nova.compute.manager [None req-cb6ab8e2-5310-48eb-b77a-ca5f52b011dd - - - - - -] [instance: a076e8a5-9222-4fc6-a3c6-c88eed8aeaa8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:35:04 np0005473739 nova_compute[259550]: 2025-10-07 14:35:04.021 2 DEBUG oslo_concurrency.lockutils [req-7455615d-5b61-4bf8-966d-c47e3714bab7 req-25d9fe97-5b6c-4791-9fab-4d778e3b5877 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:35:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2179: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 169 op/s
Oct  7 10:35:04 np0005473739 nova_compute[259550]: 2025-10-07 14:35:04.801 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:04 np0005473739 nova_compute[259550]: 2025-10-07 14:35:04.953 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:35:04 np0005473739 nova_compute[259550]: 2025-10-07 14:35:04.954 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:35:04 np0005473739 nova_compute[259550]: 2025-10-07 14:35:04.954 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:35:06 np0005473739 nova_compute[259550]: 2025-10-07 14:35:06.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2180: 305 pgs: 305 active+clean; 215 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 184 KiB/s wr, 142 op/s
Oct  7 10:35:07 np0005473739 nova_compute[259550]: 2025-10-07 14:35:07.244 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2181: 305 pgs: 305 active+clean; 215 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.1 MiB/s rd, 184 KiB/s wr, 110 op/s
Oct  7 10:35:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:08Z|00121|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4d:17:d0 10.100.0.8
Oct  7 10:35:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:08Z|00122|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4d:17:d0 10.100.0.8
Oct  7 10:35:10 np0005473739 podman[379007]: 2025-10-07 14:35:10.083599329 +0000 UTC m=+0.067718310 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 10:35:10 np0005473739 podman[379008]: 2025-10-07 14:35:10.118904172 +0000 UTC m=+0.101058741 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  7 10:35:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2182: 305 pgs: 305 active+clean; 237 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 1.7 MiB/s wr, 158 op/s
Oct  7 10:35:11 np0005473739 nova_compute[259550]: 2025-10-07 14:35:11.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:11Z|00123|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:12:ca:dd 10.100.0.27
Oct  7 10:35:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:11Z|00124|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:12:ca:dd 10.100.0.27
Oct  7 10:35:11 np0005473739 nova_compute[259550]: 2025-10-07 14:35:11.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:12 np0005473739 nova_compute[259550]: 2025-10-07 14:35:12.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2183: 305 pgs: 305 active+clean; 241 MiB data, 867 MiB used, 59 GiB / 60 GiB avail; 718 KiB/s rd, 2.4 MiB/s wr, 74 op/s
Oct  7 10:35:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2184: 305 pgs: 305 active+clean; 277 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 725 KiB/s rd, 4.3 MiB/s wr, 127 op/s
Oct  7 10:35:16 np0005473739 nova_compute[259550]: 2025-10-07 14:35:16.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2185: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 731 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct  7 10:35:16 np0005473739 nova_compute[259550]: 2025-10-07 14:35:16.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:17 np0005473739 nova_compute[259550]: 2025-10-07 14:35:17.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2186: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 722 KiB/s rd, 4.1 MiB/s wr, 124 op/s
Oct  7 10:35:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2187: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 723 KiB/s rd, 4.1 MiB/s wr, 125 op/s
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.073 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.074 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.074 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.075 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.075 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.076 2 INFO nova.compute.manager [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Terminating instance#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.077 2 DEBUG nova.compute.manager [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:35:21 np0005473739 kernel: tap647fc35d-c5 (unregistering): left promiscuous mode
Oct  7 10:35:21 np0005473739 NetworkManager[44949]: <info>  [1759847721.1466] device (tap647fc35d-c5): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:35:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:21Z|01133|binding|INFO|Releasing lport 647fc35d-c5c1-4818-9fe1-b704468ceb32 from this chassis (sb_readonly=0)
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:21Z|01134|binding|INFO|Setting lport 647fc35d-c5c1-4818-9fe1-b704468ceb32 down in Southbound
Oct  7 10:35:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:21Z|01135|binding|INFO|Removing iface tap647fc35d-c5 ovn-installed in OVS
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:21 np0005473739 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006f.scope: Deactivated successfully.
Oct  7 10:35:21 np0005473739 systemd[1]: machine-qemu\x2d139\x2dinstance\x2d0000006f.scope: Consumed 14.315s CPU time.
Oct  7 10:35:21 np0005473739 systemd-machined[214580]: Machine qemu-139-instance-0000006f terminated.
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.233 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:12:ca:dd 10.100.0.27'], port_security=['fa:16:3e:12:ca:dd 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'a5f57fca-2121-432c-b000-2fd92f5c1b12', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-df76b87a-da1a-4438-aff3-46b99df6a681', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2ae626df-0151-4864-97c5-333f386c32b3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f05eda8c-3c14-40e1-b5d0-16d149ef6c4e, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=647fc35d-c5c1-4818-9fe1-b704468ceb32) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.234 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 647fc35d-c5c1-4818-9fe1-b704468ceb32 in datapath df76b87a-da1a-4438-aff3-46b99df6a681 unbound from our chassis#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.235 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network df76b87a-da1a-4438-aff3-46b99df6a681, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.237 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[496fb9fb-01be-48b9-a891-2892e8bd6d07]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.238 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681 namespace which is not needed anymore#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.323 2 INFO nova.virt.libvirt.driver [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Instance destroyed successfully.#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.324 2 DEBUG nova.objects.instance [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid a5f57fca-2121-432c-b000-2fd92f5c1b12 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:35:21 np0005473739 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [NOTICE]   (378909) : haproxy version is 2.8.14-c23fe91
Oct  7 10:35:21 np0005473739 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [NOTICE]   (378909) : path to executable is /usr/sbin/haproxy
Oct  7 10:35:21 np0005473739 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [WARNING]  (378909) : Exiting Master process...
Oct  7 10:35:21 np0005473739 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [ALERT]    (378909) : Current worker (378911) exited with code 143 (Terminated)
Oct  7 10:35:21 np0005473739 neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681[378905]: [WARNING]  (378909) : All workers exited. Exiting... (0)
Oct  7 10:35:21 np0005473739 systemd[1]: libpod-a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222.scope: Deactivated successfully.
Oct  7 10:35:21 np0005473739 conmon[378905]: conmon a2e5ff7d139732459641 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222.scope/container/memory.events
Oct  7 10:35:21 np0005473739 podman[379080]: 2025-10-07 14:35:21.389185751 +0000 UTC m=+0.048050056 container died a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:35:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222-userdata-shm.mount: Deactivated successfully.
Oct  7 10:35:21 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6d8f73fee24f065dab00f339734911ef1ec093684fe1fea8ceea6e76dc54383b-merged.mount: Deactivated successfully.
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.427 2 DEBUG nova.virt.libvirt.vif [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:34:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-81555906',display_name='tempest-TestNetworkBasicOps-server-81555906',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-81555906',id=111,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHVpb0If8IJnGODxIDZvyf13jL46mQ77emkpmotXMzF/nDOLDOQfFE/H2m8lCVqJZPCaNeKJqv9eLmV7ud/4driHFAI2qIDYCbnN7/CMc/8whSX25QhcG+umNda5hVH/NQ==',key_name='tempest-TestNetworkBasicOps-307113164',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:34:56Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-edoddh5w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:34:56Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=a5f57fca-2121-432c-b000-2fd92f5c1b12,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.428 2 DEBUG nova.network.os_vif_util [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "address": "fa:16:3e:12:ca:dd", "network": {"id": "df76b87a-da1a-4438-aff3-46b99df6a681", "bridge": "br-int", "label": "tempest-network-smoke--1383689585", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": "10.100.0.17", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap647fc35d-c5", "ovs_interfaceid": "647fc35d-c5c1-4818-9fe1-b704468ceb32", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.429 2 DEBUG nova.network.os_vif_util [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.430 2 DEBUG os_vif [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:35:21 np0005473739 podman[379080]: 2025-10-07 14:35:21.432335143 +0000 UTC m=+0.091199448 container cleanup a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap647fc35d-c5, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:35:21 np0005473739 systemd[1]: libpod-conmon-a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222.scope: Deactivated successfully.
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.443 2 INFO os_vif [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:12:ca:dd,bridge_name='br-int',has_traffic_filtering=True,id=647fc35d-c5c1-4818-9fe1-b704468ceb32,network=Network(df76b87a-da1a-4438-aff3-46b99df6a681),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap647fc35d-c5')#033[00m
Oct  7 10:35:21 np0005473739 podman[379112]: 2025-10-07 14:35:21.502033315 +0000 UTC m=+0.044294414 container remove a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.508 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e779ecf5-669c-4165-8510-f9ffda582114]: (4, ('Tue Oct  7 02:35:21 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681 (a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222)\na2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222\nTue Oct  7 02:35:21 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681 (a2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222)\na2e5ff7d13973245964120912a6a5092b46e9fad29d5211697dd30bb44655222\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.510 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b3f1d55-7aec-43f4-a687-5daa43c06b31]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.511 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdf76b87a-d0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:21 np0005473739 kernel: tapdf76b87a-d0: left promiscuous mode
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.513 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.533 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[50a30880-4f97-4b7a-a6c0-140a8981bf43]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.568 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f87af27-e8ed-4a79-85ac-5a7045b4de33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.570 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8195ce4f-4884-47e4-8070-97aa47b21d61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.592 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[83721fb5-8566-4cfc-a8c6-0f48e0055d91]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827804, 'reachable_time': 22666, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379145, 'error': None, 'target': 'ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:21 np0005473739 systemd[1]: run-netns-ovnmeta\x2ddf76b87a\x2dda1a\x2d4438\x2daff3\x2d46b99df6a681.mount: Deactivated successfully.
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.601 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-df76b87a-da1a-4438-aff3-46b99df6a681 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:35:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:21.601 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9c9b8e9b-dd3c-46a4-8d17-eac2748d77c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.818 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.820 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.850 2 INFO nova.virt.libvirt.driver [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Deleting instance files /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12_del#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.851 2 INFO nova.virt.libvirt.driver [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Deletion of /var/lib/nova/instances/a5f57fca-2121-432c-b000-2fd92f5c1b12_del complete#033[00m
Oct  7 10:35:21 np0005473739 nova_compute[259550]: 2025-10-07 14:35:21.977 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.076 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.076 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.081 2 INFO nova.compute.manager [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.081 2 DEBUG oslo.service.loopingcall [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.082 2 DEBUG nova.compute.manager [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.082 2 DEBUG nova.network.neutron [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.150 2 DEBUG nova.compute.manager [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-unplugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.151 2 DEBUG oslo_concurrency.lockutils [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.151 2 DEBUG oslo_concurrency.lockutils [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.151 2 DEBUG oslo_concurrency.lockutils [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.152 2 DEBUG nova.compute.manager [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] No waiting events found dispatching network-vif-unplugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.152 2 DEBUG nova.compute.manager [req-b8479340-7414-4360-976e-cab12568bd96 req-be7e7d8a-1603-494a-be9d-bfb663bb52a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-unplugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.193 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.202 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.203 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.212 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.212 2 INFO nova.compute.claims [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2188: 305 pgs: 305 active+clean; 279 MiB data, 889 MiB used, 59 GiB / 60 GiB avail; 433 KiB/s rd, 2.6 MiB/s wr, 77 op/s
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.687 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:35:22
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'default.rgw.log', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.mgr', 'cephfs.cephfs.meta', 'images', '.rgw.root']
Oct  7 10:35:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:35:22 np0005473739 nova_compute[259550]: 2025-10-07 14:35:22.762 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:35:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:35:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:35:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:35:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:35:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:35:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:35:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:35:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:35:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:35:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:35:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/700200518' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:35:23 np0005473739 nova_compute[259550]: 2025-10-07 14:35:23.242 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.480s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:23 np0005473739 nova_compute[259550]: 2025-10-07 14:35:23.249 2 DEBUG nova.compute.provider_tree [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:35:23 np0005473739 nova_compute[259550]: 2025-10-07 14:35:23.303 2 DEBUG nova.scheduler.client.report [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:35:23 np0005473739 nova_compute[259550]: 2025-10-07 14:35:23.412 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.210s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:23 np0005473739 nova_compute[259550]: 2025-10-07 14:35:23.413 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:35:23 np0005473739 nova_compute[259550]: 2025-10-07 14:35:23.416 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:23 np0005473739 nova_compute[259550]: 2025-10-07 14:35:23.423 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:35:23 np0005473739 nova_compute[259550]: 2025-10-07 14:35:23.423 2 INFO nova.compute.claims [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:35:23 np0005473739 nova_compute[259550]: 2025-10-07 14:35:23.912 2 DEBUG nova.network.neutron [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:23 np0005473739 nova_compute[259550]: 2025-10-07 14:35:23.919 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:35:23 np0005473739 nova_compute[259550]: 2025-10-07 14:35:23.920 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:35:24 np0005473739 nova_compute[259550]: 2025-10-07 14:35:24.134 2 INFO nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:35:24 np0005473739 nova_compute[259550]: 2025-10-07 14:35:24.176 2 INFO nova.compute.manager [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Took 2.09 seconds to deallocate network for instance.#033[00m
Oct  7 10:35:24 np0005473739 nova_compute[259550]: 2025-10-07 14:35:24.299 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:35:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2189: 305 pgs: 305 active+clean; 214 MiB data, 850 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 1.9 MiB/s wr, 82 op/s
Oct  7 10:35:24 np0005473739 nova_compute[259550]: 2025-10-07 14:35:24.813 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.015 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.017 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.017 2 INFO nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Creating image(s)#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.042 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.064 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.083 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.086 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.155 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.156 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.156 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.157 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.176 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.179 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 614c22a4-9342-4037-adb5-71c3375b8553_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.218 2 DEBUG nova.compute.manager [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.219 2 DEBUG oslo_concurrency.lockutils [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.220 2 DEBUG oslo_concurrency.lockutils [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.220 2 DEBUG oslo_concurrency.lockutils [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.220 2 DEBUG nova.compute.manager [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] No waiting events found dispatching network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.220 2 WARNING nova.compute.manager [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received unexpected event network-vif-plugged-647fc35d-c5c1-4818-9fe1-b704468ceb32 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.221 2 DEBUG nova.compute.manager [req-6ffc3ecd-6a41-4a7e-a80f-f1d78aa0ac47 req-c35c4bcb-c17b-42f0-b6bf-bf31a0eab3a8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Received event network-vif-deleted-647fc35d-c5c1-4818-9fe1-b704468ceb32 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.499 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 614c22a4-9342-4037-adb5-71c3375b8553_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.320s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.566 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.670 2 DEBUG nova.objects.instance [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 614c22a4-9342-4037-adb5-71c3375b8553 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.753 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.753 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Ensure instance console log exists: /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.753 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.754 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.754 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.792 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:25 np0005473739 nova_compute[259550]: 2025-10-07 14:35:25.828 2 DEBUG nova.policy [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:35:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:35:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/437884889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:35:26 np0005473739 nova_compute[259550]: 2025-10-07 14:35:26.263 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:26 np0005473739 nova_compute[259550]: 2025-10-07 14:35:26.269 2 DEBUG nova.compute.provider_tree [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:35:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2190: 305 pgs: 305 active+clean; 220 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 MiB/s wr, 34 op/s
Oct  7 10:35:26 np0005473739 nova_compute[259550]: 2025-10-07 14:35:26.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:26 np0005473739 nova_compute[259550]: 2025-10-07 14:35:26.481 2 DEBUG nova.scheduler.client.report [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:35:26 np0005473739 nova_compute[259550]: 2025-10-07 14:35:26.762 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 3.346s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:26 np0005473739 nova_compute[259550]: 2025-10-07 14:35:26.763 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:35:26 np0005473739 nova_compute[259550]: 2025-10-07 14:35:26.766 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.953s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:26 np0005473739 nova_compute[259550]: 2025-10-07 14:35:26.893 2 DEBUG oslo_concurrency.processutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:27 np0005473739 podman[379358]: 2025-10-07 14:35:27.06724062 +0000 UTC m=+0.055790882 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:35:27 np0005473739 podman[379359]: 2025-10-07 14:35:27.098815903 +0000 UTC m=+0.085022732 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.109 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.110 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.204 2 INFO nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.255 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.292 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:35:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:35:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2338717851' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.376 2 DEBUG oslo_concurrency.processutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.381 2 DEBUG nova.compute.provider_tree [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.441 2 DEBUG nova.policy [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c505d04148e44b8b93ceab0e3cedef4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.470 2 DEBUG nova.scheduler.client.report [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.546 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.547 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.548 2 INFO nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating image(s)#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.570 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.595 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.619 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.624 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.658 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Successfully created port: 3eb614e8-3f14-4375-bbe9-34facd5bce52 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.662 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.896s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.696 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.697 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.697 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.698 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.718 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.722 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.760 2 INFO nova.scheduler.client.report [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance a5f57fca-2121-432c-b000-2fd92f5c1b12#033[00m
Oct  7 10:35:27 np0005473739 nova_compute[259550]: 2025-10-07 14:35:27.853 2 DEBUG oslo_concurrency.lockutils [None req-1a581940-ff20-4835-bacd-58b866100051 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "a5f57fca-2121-432c-b000-2fd92f5c1b12" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2191: 305 pgs: 305 active+clean; 220 MiB data, 843 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Oct  7 10:35:28 np0005473739 nova_compute[259550]: 2025-10-07 14:35:28.437 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Successfully created port: e3da1022-6830-4557-9992-ffd8ec07a599 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:35:28 np0005473739 nova_compute[259550]: 2025-10-07 14:35:28.513 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Successfully created port: 84baaee6-3f89-4d61-aaea-507e46e65618 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:35:28 np0005473739 nova_compute[259550]: 2025-10-07 14:35:28.801 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:28 np0005473739 nova_compute[259550]: 2025-10-07 14:35:28.864 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:35:28 np0005473739 nova_compute[259550]: 2025-10-07 14:35:28.961 2 DEBUG nova.objects.instance [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:35:28 np0005473739 nova_compute[259550]: 2025-10-07 14:35:28.995 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:35:28 np0005473739 nova_compute[259550]: 2025-10-07 14:35:28.995 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Ensure instance console log exists: /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:35:28 np0005473739 nova_compute[259550]: 2025-10-07 14:35:28.996 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:28 np0005473739 nova_compute[259550]: 2025-10-07 14:35:28.996 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:28 np0005473739 nova_compute[259550]: 2025-10-07 14:35:28.996 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:29 np0005473739 nova_compute[259550]: 2025-10-07 14:35:29.904 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Successfully updated port: 3eb614e8-3f14-4375-bbe9-34facd5bce52 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.043 2 DEBUG nova.compute.manager [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.043 2 DEBUG nova.compute.manager [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing instance network info cache due to event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.043 2 DEBUG oslo_concurrency.lockutils [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.044 2 DEBUG oslo_concurrency.lockutils [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.044 2 DEBUG nova.network.neutron [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing network info cache for port 3eb614e8-3f14-4375-bbe9-34facd5bce52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.085 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.249 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Successfully updated port: 84baaee6-3f89-4d61-aaea-507e46e65618 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:35:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2192: 305 pgs: 305 active+clean; 273 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 2.6 MiB/s wr, 59 op/s
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.358 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.358 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.359 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.525 2 DEBUG nova.network.neutron [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:35:30 np0005473739 nova_compute[259550]: 2025-10-07 14:35:30.737 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:35:31 np0005473739 nova_compute[259550]: 2025-10-07 14:35:31.025 2 DEBUG nova.network.neutron [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:31 np0005473739 nova_compute[259550]: 2025-10-07 14:35:31.181 2 DEBUG oslo_concurrency.lockutils [req-4486fb18-b51b-4f3f-a3e9-716f99d1193d req-2a702014-d240-45d6-887a-597c3e0e3ac2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:35:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:31Z|01136|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct  7 10:35:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:31Z|01137|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct  7 10:35:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:31Z|01138|binding|INFO|Releasing lport 3f0bb527-e0e8-409d-b11f-0c1205bec67a from this chassis (sb_readonly=0)
Oct  7 10:35:31 np0005473739 nova_compute[259550]: 2025-10-07 14:35:31.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:31 np0005473739 nova_compute[259550]: 2025-10-07 14:35:31.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:31.755 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=35, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=34) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:35:31 np0005473739 nova_compute[259550]: 2025-10-07 14:35:31.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:31.756 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2193: 305 pgs: 305 active+clean; 277 MiB data, 879 MiB used, 59 GiB / 60 GiB avail; 52 KiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.446 2 DEBUG nova.compute.manager [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.447 2 DEBUG nova.compute.manager [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing instance network info cache due to event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.447 2 DEBUG oslo_concurrency.lockutils [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002211684817998307 of space, bias 1.0, pg target 0.6635054453994921 quantized to 32 (current 32)
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:35:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.623 2 DEBUG nova.network.neutron [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updating instance_info_cache with network_info: [{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:35:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1566944976' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:35:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:35:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1566944976' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.790 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.791 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance network_info: |[{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.791 2 DEBUG oslo_concurrency.lockutils [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.791 2 DEBUG nova.network.neutron [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.794 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start _get_guest_xml network_info=[{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.798 2 WARNING nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.841 2 DEBUG nova.virt.libvirt.host [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.842 2 DEBUG nova.virt.libvirt.host [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.846 2 DEBUG nova.virt.libvirt.host [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.846 2 DEBUG nova.virt.libvirt.host [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.847 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.847 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.847 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.848 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.848 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.848 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.848 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.849 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.849 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.849 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.849 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.850 2 DEBUG nova.virt.hardware [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.853 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:32 np0005473739 nova_compute[259550]: 2025-10-07 14:35:32.887 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Successfully updated port: e3da1022-6830-4557-9992-ffd8ec07a599 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:35:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:35:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/846551382' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.346 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.366 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.369 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.555 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.555 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.555 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:35:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:35:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2856261765' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.844 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.846 2 DEBUG nova.virt.libvirt.vif [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:27Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.847 2 DEBUG nova.network.os_vif_util [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.848 2 DEBUG nova.network.os_vif_util [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.849 2 DEBUG nova.objects.instance [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.876 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.932 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  <uuid>e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</uuid>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  <name>instance-00000071</name>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-2036986690</nova:name>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:35:32</nova:creationTime>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <nova:port uuid="84baaee6-3f89-4d61-aaea-507e46e65618">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <entry name="serial">e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</entry>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <entry name="uuid">e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</entry>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:84:dc:c4"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <target dev="tap84baaee6-3f"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/console.log" append="off"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:35:33 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:35:33 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:35:33 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:35:33 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.933 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Preparing to wait for external event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.934 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.934 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.934 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.935 2 DEBUG nova.virt.libvirt.vif [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:27Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.935 2 DEBUG nova.network.os_vif_util [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.936 2 DEBUG nova.network.os_vif_util [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.936 2 DEBUG os_vif [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.937 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.937 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84baaee6-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.940 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84baaee6-3f, col_values=(('external_ids', {'iface-id': '84baaee6-3f89-4d61-aaea-507e46e65618', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:dc:c4', 'vm-uuid': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:33 np0005473739 NetworkManager[44949]: <info>  [1759847733.9793] manager: (tap84baaee6-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/463)
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.981 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.986 2 INFO os_vif [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f')#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.988 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.988 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.988 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.989 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.989 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.991 2 INFO nova.compute.manager [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Terminating instance#033[00m
Oct  7 10:35:33 np0005473739 nova_compute[259550]: 2025-10-07 14:35:33.992 2 DEBUG nova.compute.manager [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.109 2 DEBUG nova.compute.manager [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.110 2 DEBUG nova.compute.manager [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing instance network info cache due to event network-changed-13fb3ec4-0080-406c-8cea-6620016c0513. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.111 2 DEBUG oslo_concurrency.lockutils [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.111 2 DEBUG oslo_concurrency.lockutils [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.111 2 DEBUG nova.network.neutron [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Refreshing network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:35:34 np0005473739 kernel: tap13fb3ec4-00 (unregistering): left promiscuous mode
Oct  7 10:35:34 np0005473739 NetworkManager[44949]: <info>  [1759847734.1185] device (tap13fb3ec4-00): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:35:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:34Z|01139|binding|INFO|Releasing lport 13fb3ec4-0080-406c-8cea-6620016c0513 from this chassis (sb_readonly=0)
Oct  7 10:35:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:34Z|01140|binding|INFO|Setting lport 13fb3ec4-0080-406c-8cea-6620016c0513 down in Southbound
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:34Z|01141|binding|INFO|Removing iface tap13fb3ec4-00 ovn-installed in OVS
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:34 np0005473739 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Deactivated successfully.
Oct  7 10:35:34 np0005473739 systemd[1]: machine-qemu\x2d136\x2dinstance\x2d0000006d.scope: Consumed 15.955s CPU time.
Oct  7 10:35:34 np0005473739 systemd-machined[214580]: Machine qemu-136-instance-0000006d terminated.
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.230 2 INFO nova.virt.libvirt.driver [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Instance destroyed successfully.#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.230 2 DEBUG nova.objects.instance [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.250 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a7:48:df 10.100.0.13'], port_security=['fa:16:3e:a7:48:df 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '109b6c3d-5810-4b7d-a96d-4e1b64dc2daa', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'f7aecf35-a2bc-4233-a6ec-98effd9bc888', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2bf60a0a-2571-43f1-914f-0f00c5d191c5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=13fb3ec4-0080-406c-8cea-6620016c0513) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.251 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 13fb3ec4-0080-406c-8cea-6620016c0513 in datapath 8e6e03c6-002b-464f-aea7-5ae708e3e5dc unbound from our chassis#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.252 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8e6e03c6-002b-464f-aea7-5ae708e3e5dc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[05fbb673-f320-49cf-91ee-6256dfdcd485]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.254 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc namespace which is not needed anymore#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.254 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.255 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.255 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:84:dc:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.255 2 INFO nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Using config drive#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.281 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2194: 305 pgs: 305 active+clean; 293 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 3.6 MiB/s wr, 83 op/s
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.321 2 DEBUG nova.virt.libvirt.vif [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:33:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1213898697',display_name='tempest-TestNetworkBasicOps-server-1213898697',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1213898697',id=109,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJOXXnJfbNjLe/xM7CNQv3JpMjH7QLX8ezU0CsayXoRWsJliurdVjk2soP6dLM8VmWxLkM0XL+W32Oeh6Y4JueSCzXDjtscbAk00DGutpE2Eyr2ZzKKLBUUQM3YVtmhffw==',key_name='tempest-TestNetworkBasicOps-1417079346',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:34:06Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-6sgmyxmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:34:06Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=109b6c3d-5810-4b7d-a96d-4e1b64dc2daa,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.322 2 DEBUG nova.network.os_vif_util [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.221", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.322 2 DEBUG nova.network.os_vif_util [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.323 2 DEBUG os_vif [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.324 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap13fb3ec4-00, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.334 2 INFO os_vif [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a7:48:df,bridge_name='br-int',has_traffic_filtering=True,id=13fb3ec4-0080-406c-8cea-6620016c0513,network=Network(8e6e03c6-002b-464f-aea7-5ae708e3e5dc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap13fb3ec4-00')#033[00m
Oct  7 10:35:34 np0005473739 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [NOTICE]   (376417) : haproxy version is 2.8.14-c23fe91
Oct  7 10:35:34 np0005473739 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [NOTICE]   (376417) : path to executable is /usr/sbin/haproxy
Oct  7 10:35:34 np0005473739 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [WARNING]  (376417) : Exiting Master process...
Oct  7 10:35:34 np0005473739 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [WARNING]  (376417) : Exiting Master process...
Oct  7 10:35:34 np0005473739 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [ALERT]    (376417) : Current worker (376419) exited with code 143 (Terminated)
Oct  7 10:35:34 np0005473739 neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc[376413]: [WARNING]  (376417) : All workers exited. Exiting... (0)
Oct  7 10:35:34 np0005473739 systemd[1]: libpod-342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db.scope: Deactivated successfully.
Oct  7 10:35:34 np0005473739 podman[379708]: 2025-10-07 14:35:34.406460056 +0000 UTC m=+0.064629438 container died 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:35:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db-userdata-shm.mount: Deactivated successfully.
Oct  7 10:35:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1ce9d977b3319507425402d4f816a2d462fb60a6cead3d48e6a5a2ead545ba35-merged.mount: Deactivated successfully.
Oct  7 10:35:34 np0005473739 podman[379708]: 2025-10-07 14:35:34.475695355 +0000 UTC m=+0.133864717 container cleanup 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:35:34 np0005473739 systemd[1]: libpod-conmon-342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db.scope: Deactivated successfully.
Oct  7 10:35:34 np0005473739 podman[379754]: 2025-10-07 14:35:34.542453309 +0000 UTC m=+0.046971246 container remove 342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.551 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[581bb811-0b8b-43e9-9f71-67593f14fb38]: (4, ('Tue Oct  7 02:35:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc (342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db)\n342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db\nTue Oct  7 02:35:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc (342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db)\n342d8322edfd0728445c811bf97855b8f841573bf753d57cb23a5f692246c7db\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.553 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b512424c-5562-40f4-8ceb-17109264ad4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.554 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e6e03c6-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:34 np0005473739 kernel: tap8e6e03c6-00: left promiscuous mode
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:34 np0005473739 nova_compute[259550]: 2025-10-07 14:35:34.578 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.581 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0b5f16eb-f0e0-4995-9da2-ceeb1783a8f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.606 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e9b40dc7-fb71-4f45-9ccb-204054c390e2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.608 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e2fbf4e4-ea24-4aca-9186-7562198230bc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.624 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8121df11-a8d6-4f27-97c0-d2da1c4ca0c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 822748, 'reachable_time': 29587, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379773, 'error': None, 'target': 'ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:34 np0005473739 systemd[1]: run-netns-ovnmeta\x2d8e6e03c6\x2d002b\x2d464f\x2daea7\x2d5ae708e3e5dc.mount: Deactivated successfully.
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.629 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8e6e03c6-002b-464f-aea7-5ae708e3e5dc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:35:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:34.629 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fd9ea914-354d-4f71-955c-2f5e4dedb4dd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.036 2 INFO nova.virt.libvirt.driver [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Deleting instance files /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_del#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.037 2 INFO nova.virt.libvirt.driver [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Deletion of /var/lib/nova/instances/109b6c3d-5810-4b7d-a96d-4e1b64dc2daa_del complete#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.134 2 INFO nova.compute.manager [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Took 1.14 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.135 2 DEBUG oslo.service.loopingcall [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.135 2 DEBUG nova.compute.manager [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.135 2 DEBUG nova.network.neutron [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.300 2 DEBUG nova.compute.manager [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-changed-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.300 2 DEBUG nova.compute.manager [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing instance network info cache due to event network-changed-e3da1022-6830-4557-9992-ffd8ec07a599. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.301 2 DEBUG oslo_concurrency.lockutils [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.802 2 DEBUG nova.network.neutron [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updated VIF entry in instance network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.802 2 DEBUG nova.network.neutron [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updating instance_info_cache with network_info: [{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.807 2 INFO nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating config drive at /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.812 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0xcctgs_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.940 2 DEBUG oslo_concurrency.lockutils [req-0ffbda25-3cd3-4cb4-9ca4-ac86a979b1a7 req-a27ff1a4-c8aa-49a9-94ed-4960147f1910 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.956 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0xcctgs_" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.981 2 DEBUG nova.storage.rbd_utils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:35 np0005473739 nova_compute[259550]: 2025-10-07 14:35:35.984 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.198 2 DEBUG oslo_concurrency.processutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.214s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.200 2 INFO nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deleting local config drive /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config because it was imported into RBD.#033[00m
Oct  7 10:35:36 np0005473739 kernel: tap84baaee6-3f: entered promiscuous mode
Oct  7 10:35:36 np0005473739 NetworkManager[44949]: <info>  [1759847736.2668] manager: (tap84baaee6-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/464)
Oct  7 10:35:36 np0005473739 systemd-udevd[379655]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:35:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:36Z|01142|binding|INFO|Claiming lport 84baaee6-3f89-4d61-aaea-507e46e65618 for this chassis.
Oct  7 10:35:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:36Z|01143|binding|INFO|84baaee6-3f89-4d61-aaea-507e46e65618: Claiming fa:16:3e:84:dc:c4 10.100.0.11
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.269 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:36 np0005473739 NetworkManager[44949]: <info>  [1759847736.2813] device (tap84baaee6-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:35:36 np0005473739 NetworkManager[44949]: <info>  [1759847736.2826] device (tap84baaee6-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:35:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:36Z|01144|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 ovn-installed in OVS
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:36 np0005473739 systemd-machined[214580]: New machine qemu-140-instance-00000071.
Oct  7 10:35:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2195: 305 pgs: 305 active+clean; 248 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 3.5 MiB/s wr, 71 op/s
Oct  7 10:35:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:36Z|01145|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 up in Southbound
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.319 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dc:c4 10.100.0.11'], port_security=['fa:16:3e:84:dc:c4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '97d6fa92-e050-4cce-a368-3065ace6996b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e8a86b-883d-4b18-ac5a-841f1cc14111, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=84baaee6-3f89-4d61-aaea-507e46e65618) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.321 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 84baaee6-3f89-4d61-aaea-507e46e65618 in datapath 002ae4bb-0f71-4b57-99ae-0bfd304fb458 bound to our chassis#033[00m
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.320 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847721.3187284, a5f57fca-2121-432c-b000-2fd92f5c1b12 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.321 2 INFO nova.compute.manager [-] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.322 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 002ae4bb-0f71-4b57-99ae-0bfd304fb458#033[00m
Oct  7 10:35:36 np0005473739 systemd[1]: Started Virtual Machine qemu-140-instance-00000071.
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.334 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a2454b0-e8e6-42c4-8f16-bea91ea4be5d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.335 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap002ae4bb-01 in ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.337 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap002ae4bb-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.337 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4095d8e6-6b4d-4775-9a3d-c32208edbf05]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.338 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92728d5d-4591-4a29-81e1-d3f7f031a307]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.350 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0f04b6-3cb9-4e01-89a8-f2dbe7a39100]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.373 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4df5c7ee-963e-487e-9b84-9b0d76fc8380]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.410 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a3156bd7-ec9f-4f83-bfe9-56174efa8b96]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 NetworkManager[44949]: <info>  [1759847736.4174] manager: (tap002ae4bb-00): new Veth device (/org/freedesktop/NetworkManager/Devices/465)
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.417 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9482e5ba-a51a-4dd4-878f-7703012b1ea9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.436 2 DEBUG nova.compute.manager [None req-8ceadb25-3711-4919-8e48-9d2b054c16ba - - - - - -] [instance: a5f57fca-2121-432c-b000-2fd92f5c1b12] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.457 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4066e5-f5f3-4532-a5c4-a750ea80f51c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.460 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cc9dfb40-f58e-4971-ba04-69f27cc7446f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 NetworkManager[44949]: <info>  [1759847736.4852] device (tap002ae4bb-00): carrier: link connected
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.496 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2916c81d-05c2-4eb5-a2b0-6a04a7e14374]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.514 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c2a429fb-06f4-4770-9c97-dcf9b4b87908]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap002ae4bb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:93:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 832004, 'reachable_time': 29221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 379860, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.533 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[474247b8-7b5d-437c-a67b-d0e1a9de139b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:93fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 832004, 'tstamp': 832004}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 379861, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.551 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fab6e0bb-1050-46d4-bf54-2c2f8466f1bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap002ae4bb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:93:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 331], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 832004, 'reachable_time': 29221, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 379862, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c01902c-a129-4154-9d39-e94d80eaaabb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.643 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[85cc9396-d414-424c-b428-08aa4bbc08aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.646 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap002ae4bb-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.646 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.647 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap002ae4bb-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:36 np0005473739 NetworkManager[44949]: <info>  [1759847736.6496] manager: (tap002ae4bb-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/466)
Oct  7 10:35:36 np0005473739 kernel: tap002ae4bb-00: entered promiscuous mode
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.653 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap002ae4bb-00, col_values=(('external_ids', {'iface-id': '9b384c3a-3862-48f5-9fa0-a30ca1bb30cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:36Z|01146|binding|INFO|Releasing lport 9b384c3a-3862-48f5-9fa0-a30ca1bb30cc from this chassis (sb_readonly=0)
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:36 np0005473739 nova_compute[259550]: 2025-10-07 14:35:36.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.669 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.671 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fc8531c9-1d47-4738-93e9-ef336c1caf6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.672 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-002ae4bb-0f71-4b57-99ae-0bfd304fb458
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 002ae4bb-0f71-4b57-99ae-0bfd304fb458
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:35:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:36.673 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'env', 'PROCESS_TAG=haproxy-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/002ae4bb-0f71-4b57-99ae-0bfd304fb458.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:35:37 np0005473739 podman[379936]: 2025-10-07 14:35:37.102579567 +0000 UTC m=+0.088754882 container create 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:35:37 np0005473739 podman[379936]: 2025-10-07 14:35:37.045354658 +0000 UTC m=+0.031529973 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:35:37 np0005473739 systemd[1]: Started libpod-conmon-47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624.scope.
Oct  7 10:35:37 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:35:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cfa396a46fc4f81512c267e5f20233b50cdd5b2d225de55dfebc3c6e7a24f58e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:37 np0005473739 podman[379936]: 2025-10-07 14:35:37.199345143 +0000 UTC m=+0.185520458 container init 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:35:37 np0005473739 podman[379936]: 2025-10-07 14:35:37.205908528 +0000 UTC m=+0.192083823 container start 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.213 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847737.2126427, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.215 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Started (Lifecycle Event)#033[00m
Oct  7 10:35:37 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [NOTICE]   (379956) : New worker (379958) forked
Oct  7 10:35:37 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [NOTICE]   (379956) : Loading success.
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.252 2 DEBUG nova.network.neutron [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.266 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.309 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847737.2130566, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.309 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.330 2 INFO nova.compute.manager [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Took 2.19 seconds to deallocate network for instance.#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.337 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.342 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.405 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.426 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.426 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.519 2 DEBUG nova.network.neutron [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updated VIF entry in instance network info cache for port 13fb3ec4-0080-406c-8cea-6620016c0513. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.520 2 DEBUG nova.network.neutron [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [{"id": "13fb3ec4-0080-406c-8cea-6620016c0513", "address": "fa:16:3e:a7:48:df", "network": {"id": "8e6e03c6-002b-464f-aea7-5ae708e3e5dc", "bridge": "br-int", "label": "tempest-network-smoke--2092975276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap13fb3ec4-00", "ovs_interfaceid": "13fb3ec4-0080-406c-8cea-6620016c0513", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.551 2 DEBUG oslo_concurrency.processutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.594 2 DEBUG oslo_concurrency.lockutils [req-3ff2f1ae-e4e0-477d-b6ce-4ec93783cc87 req-2e63a337-74b3-4e61-b5c3-54cadf93b77d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.776 2 DEBUG nova.compute.manager [req-c35ff1d9-127c-479d-9a6a-5497aaf1e463 req-c6356181-6cd0-412d-9d30-cfb3e8d0ef54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Received event network-vif-deleted-13fb3ec4-0080-406c-8cea-6620016c0513 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.776 2 INFO nova.compute.manager [req-c35ff1d9-127c-479d-9a6a-5497aaf1e463 req-c6356181-6cd0-412d-9d30-cfb3e8d0ef54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Neutron deleted interface 13fb3ec4-0080-406c-8cea-6620016c0513; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.776 2 DEBUG nova.network.neutron [req-c35ff1d9-127c-479d-9a6a-5497aaf1e463 req-c6356181-6cd0-412d-9d30-cfb3e8d0ef54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:37 np0005473739 nova_compute[259550]: 2025-10-07 14:35:37.871 2 DEBUG nova.compute.manager [req-c35ff1d9-127c-479d-9a6a-5497aaf1e463 req-c6356181-6cd0-412d-9d30-cfb3e8d0ef54 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Detach interface failed, port_id=13fb3ec4-0080-406c-8cea-6620016c0513, reason: Instance 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:35:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:35:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2423252125' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.028 2 DEBUG oslo_concurrency.processutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.035 2 DEBUG nova.compute.provider_tree [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.077 2 DEBUG nova.scheduler.client.report [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.085 2 DEBUG nova.network.neutron [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.134 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.708s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.142 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.143 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance network_info: |[{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.143 2 DEBUG oslo_concurrency.lockutils [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.144 2 DEBUG nova.network.neutron [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing network info cache for port e3da1022-6830-4557-9992-ffd8ec07a599 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.148 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Start _get_guest_xml network_info=[{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.151 2 WARNING nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.155 2 DEBUG nova.virt.libvirt.host [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.156 2 DEBUG nova.virt.libvirt.host [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.159 2 DEBUG nova.virt.libvirt.host [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.159 2 DEBUG nova.virt.libvirt.host [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.159 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.160 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.160 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.160 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.161 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.161 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.161 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.161 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.161 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.162 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.162 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.162 2 DEBUG nova.virt.hardware [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.165 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.201 2 INFO nova.scheduler.client.report [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa#033[00m
Oct  7 10:35:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2196: 305 pgs: 305 active+clean; 248 MiB data, 858 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.4 MiB/s wr, 56 op/s
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.339 2 DEBUG oslo_concurrency.lockutils [None req-ca3d8976-3304-4390-b440-9f3c99d3eaaa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "109b6c3d-5810-4b7d-a96d-4e1b64dc2daa" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:35:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4108979624' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.618 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.644 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:38 np0005473739 nova_compute[259550]: 2025-10-07 14:35:38.649 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:35:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2672729564' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.135 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.137 2 DEBUG nova.virt.libvirt.vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.137 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.138 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.139 2 DEBUG nova.virt.libvirt.vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.139 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.139 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.141 2 DEBUG nova.objects.instance [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 614c22a4-9342-4037-adb5-71c3375b8553 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.210 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  <uuid>614c22a4-9342-4037-adb5-71c3375b8553</uuid>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  <name>instance-00000070</name>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-1065431809</nova:name>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:35:38</nova:creationTime>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <nova:port uuid="3eb614e8-3f14-4375-bbe9-34facd5bce52">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <nova:port uuid="e3da1022-6830-4557-9992-ffd8ec07a599">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe2e:5c25" ipVersion="6"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <entry name="serial">614c22a4-9342-4037-adb5-71c3375b8553</entry>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <entry name="uuid">614c22a4-9342-4037-adb5-71c3375b8553</entry>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/614c22a4-9342-4037-adb5-71c3375b8553_disk">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/614c22a4-9342-4037-adb5-71c3375b8553_disk.config">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:02:68:46"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <target dev="tap3eb614e8-3f"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:2e:5c:25"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <target dev="tape3da1022-68"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/console.log" append="off"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:35:39 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:35:39 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:35:39 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:35:39 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.211 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Preparing to wait for external event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.212 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.212 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.212 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.212 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Preparing to wait for external event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.213 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.213 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.213 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.214 2 DEBUG nova.virt.libvirt.vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.214 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.215 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.215 2 DEBUG os_vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3eb614e8-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.221 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3eb614e8-3f, col_values=(('external_ids', {'iface-id': '3eb614e8-3f14-4375-bbe9-34facd5bce52', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:02:68:46', 'vm-uuid': '614c22a4-9342-4037-adb5-71c3375b8553'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:39 np0005473739 NetworkManager[44949]: <info>  [1759847739.2249] manager: (tap3eb614e8-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/467)
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.232 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.233 2 INFO os_vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f')#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.234 2 DEBUG nova.virt.libvirt.vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:35:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.235 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.235 2 DEBUG nova.network.os_vif_util [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.235 2 DEBUG os_vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.236 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape3da1022-68, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.239 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tape3da1022-68, col_values=(('external_ids', {'iface-id': 'e3da1022-6830-4557-9992-ffd8ec07a599', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:2e:5c:25', 'vm-uuid': '614c22a4-9342-4037-adb5-71c3375b8553'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:39 np0005473739 NetworkManager[44949]: <info>  [1759847739.2423] manager: (tape3da1022-68): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/468)
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.243 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.253 2 INFO os_vif [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68')#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.564 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.565 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.565 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:02:68:46, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.565 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:2e:5c:25, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.566 2 INFO nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Using config drive#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.594 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:39.759 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '35'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.988 2 INFO nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Creating config drive at /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config#033[00m
Oct  7 10:35:39 np0005473739 nova_compute[259550]: 2025-10-07 14:35:39.995 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp96iin_w4 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.144 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp96iin_w4" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.174 2 DEBUG nova.storage.rbd_utils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 614c22a4-9342-4037-adb5-71c3375b8553_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.179 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config 614c22a4-9342-4037-adb5-71c3375b8553_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2197: 305 pgs: 305 active+clean; 213 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 2.4 MiB/s wr, 80 op/s
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.549 2 DEBUG oslo_concurrency.processutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config 614c22a4-9342-4037-adb5-71c3375b8553_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.370s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.550 2 INFO nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Deleting local config drive /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553/disk.config because it was imported into RBD.#033[00m
Oct  7 10:35:40 np0005473739 kernel: tap3eb614e8-3f: entered promiscuous mode
Oct  7 10:35:40 np0005473739 NetworkManager[44949]: <info>  [1759847740.6038] manager: (tap3eb614e8-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/469)
Oct  7 10:35:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:40Z|01147|binding|INFO|Claiming lport 3eb614e8-3f14-4375-bbe9-34facd5bce52 for this chassis.
Oct  7 10:35:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:40Z|01148|binding|INFO|3eb614e8-3f14-4375-bbe9-34facd5bce52: Claiming fa:16:3e:02:68:46 10.100.0.10
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.620 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:68:46 10.100.0.10'], port_security=['fa:16:3e:02:68:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '614c22a4-9342-4037-adb5-71c3375b8553', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c899e05d-224c-44fe-8294-eaece58d7fe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34b97456-8a84-4c85-bc2d-80b2de48e489, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3eb614e8-3f14-4375-bbe9-34facd5bce52) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.621 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3eb614e8-3f14-4375-bbe9-34facd5bce52 in datapath c899e05d-224c-44fe-8294-eaece58d7fe7 bound to our chassis#033[00m
Oct  7 10:35:40 np0005473739 NetworkManager[44949]: <info>  [1759847740.6243] manager: (tape3da1022-68): new Tun device (/org/freedesktop/NetworkManager/Devices/470)
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.623 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c899e05d-224c-44fe-8294-eaece58d7fe7#033[00m
Oct  7 10:35:40 np0005473739 kernel: tape3da1022-68: entered promiscuous mode
Oct  7 10:35:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:40Z|01149|binding|INFO|Setting lport 3eb614e8-3f14-4375-bbe9-34facd5bce52 ovn-installed in OVS
Oct  7 10:35:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:40Z|01150|binding|INFO|Setting lport 3eb614e8-3f14-4375-bbe9-34facd5bce52 up in Southbound
Oct  7 10:35:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:40Z|01151|if_status|INFO|Dropped 3 log messages in last 50 seconds (most recently, 49 seconds ago) due to excessive rate
Oct  7 10:35:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:40Z|01152|if_status|INFO|Not updating pb chassis for e3da1022-6830-4557-9992-ffd8ec07a599 now as sb is readonly
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.638 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[46005e0e-abb3-4bb2-b560-c30c4e598f28]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:40 np0005473739 systemd-udevd[380152]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:35:40 np0005473739 systemd-udevd[380154]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:35:40 np0005473739 systemd-machined[214580]: New machine qemu-141-instance-00000070.
Oct  7 10:35:40 np0005473739 systemd[1]: Started Virtual Machine qemu-141-instance-00000070.
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.671 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[663d25d4-b5d7-4f0a-a0ef-0057b2035357]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.675 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c6fcd808-eb9a-4355-9da5-db9369af721f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 NetworkManager[44949]: <info>  [1759847740.6784] device (tape3da1022-68): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:35:40 np0005473739 NetworkManager[44949]: <info>  [1759847740.6795] device (tape3da1022-68): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:35:40 np0005473739 NetworkManager[44949]: <info>  [1759847740.6821] device (tap3eb614e8-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:35:40 np0005473739 NetworkManager[44949]: <info>  [1759847740.6830] device (tap3eb614e8-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.707 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[31157d49-62ec-42f8-90d0-2a3c8d35ac13]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:40Z|01153|binding|INFO|Claiming lport e3da1022-6830-4557-9992-ffd8ec07a599 for this chassis.
Oct  7 10:35:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:40Z|01154|binding|INFO|e3da1022-6830-4557-9992-ffd8ec07a599: Claiming fa:16:3e:2e:5c:25 2001:db8::f816:3eff:fe2e:5c25
Oct  7 10:35:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:40Z|01155|binding|INFO|Setting lport e3da1022-6830-4557-9992-ffd8ec07a599 ovn-installed in OVS
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.730 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[22cf1395-a5b0-4441-9f4f-02aa30f5d9ae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc899e05d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:ba:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827520, 'reachable_time': 17717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380175, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 podman[380125]: 2025-10-07 14:35:40.736006864 +0000 UTC m=+0.101139303 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.745 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:5c:25 2001:db8::f816:3eff:fe2e:5c25'], port_security=['fa:16:3e:2e:5c:25 2001:db8::f816:3eff:fe2e:5c25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2e:5c25/64', 'neutron:device_id': '614c22a4-9342-4037-adb5-71c3375b8553', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae04c5e-8ef6-447a-8c6c-5575a0afadaf, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e3da1022-6830-4557-9992-ffd8ec07a599) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:35:40 np0005473739 podman[380128]: 2025-10-07 14:35:40.746999968 +0000 UTC m=+0.110606187 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, tcib_managed=true)
Oct  7 10:35:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:40Z|01156|binding|INFO|Setting lport e3da1022-6830-4557-9992-ffd8ec07a599 up in Southbound
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.751 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf562eaf-6f33-4f13-97e2-7a46c22883b3]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc899e05d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827532, 'tstamp': 827532}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380185, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc899e05d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827535, 'tstamp': 827535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380185, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.753 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc899e05d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc899e05d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc899e05d-20, col_values=(('external_ids', {'iface-id': 'a420e217-e19f-447c-8a17-e0fdc42144b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.758 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e3da1022-6830-4557-9992-ffd8ec07a599 in datapath 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 bound to our chassis#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.759 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.774 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1e788f5-3103-404a-a464-617f40373be1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.801 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f8b1bc16-b48e-40a5-954d-0358d03f027b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.805 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[60476f6e-10b0-439a-a1a0-9ec022ecd34e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.840 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[525ae20e-12df-422a-82b8-135ee9f3ab8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.862 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[58592f7b-5010-4adc-ba86-5533ac75c421]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap673dd6ba-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:cb:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 4, 'rx_bytes': 1802, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 4, 'rx_bytes': 1802, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827720, 'reachable_time': 16406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 380193, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.884 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[456ea80d-3901-4abc-9e6d-6dbb0c03a82e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap673dd6ba-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827732, 'tstamp': 827732}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 380194, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.885 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap673dd6ba-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.888 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap673dd6ba-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.888 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.889 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap673dd6ba-40, col_values=(('external_ids', {'iface-id': 'e2eca697-1003-4116-b184-c9191a00584f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:35:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:35:40.889 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.978 2 DEBUG nova.network.neutron [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updated VIF entry in instance network info cache for port e3da1022-6830-4557-9992-ffd8ec07a599. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:35:40 np0005473739 nova_compute[259550]: 2025-10-07 14:35:40.979 2 DEBUG nova.network.neutron [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:41 np0005473739 nova_compute[259550]: 2025-10-07 14:35:41.071 2 DEBUG oslo_concurrency.lockutils [req-9b464704-29b6-481c-bc52-437e486c1f88 req-7120b690-a46a-4650-8443-f9b61ce2bb85 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:35:41 np0005473739 nova_compute[259550]: 2025-10-07 14:35:41.264 2 DEBUG nova.compute.manager [req-9381b296-3a4a-47f3-b8ae-82b609e697fb req-f763daa7-7c10-484d-9402-9b98c2999314 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:41 np0005473739 nova_compute[259550]: 2025-10-07 14:35:41.264 2 DEBUG oslo_concurrency.lockutils [req-9381b296-3a4a-47f3-b8ae-82b609e697fb req-f763daa7-7c10-484d-9402-9b98c2999314 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:41 np0005473739 nova_compute[259550]: 2025-10-07 14:35:41.265 2 DEBUG oslo_concurrency.lockutils [req-9381b296-3a4a-47f3-b8ae-82b609e697fb req-f763daa7-7c10-484d-9402-9b98c2999314 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:41 np0005473739 nova_compute[259550]: 2025-10-07 14:35:41.265 2 DEBUG oslo_concurrency.lockutils [req-9381b296-3a4a-47f3-b8ae-82b609e697fb req-f763daa7-7c10-484d-9402-9b98c2999314 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:41 np0005473739 nova_compute[259550]: 2025-10-07 14:35:41.265 2 DEBUG nova.compute.manager [req-9381b296-3a4a-47f3-b8ae-82b609e697fb req-f763daa7-7c10-484d-9402-9b98c2999314 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Processing event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:35:42 np0005473739 nova_compute[259550]: 2025-10-07 14:35:42.307 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2198: 305 pgs: 305 active+clean; 213 MiB data, 841 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 1023 KiB/s wr, 63 op/s
Oct  7 10:35:42 np0005473739 nova_compute[259550]: 2025-10-07 14:35:42.373 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847742.3732715, 614c22a4-9342-4037-adb5-71c3375b8553 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:35:42 np0005473739 nova_compute[259550]: 2025-10-07 14:35:42.374 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] VM Started (Lifecycle Event)#033[00m
Oct  7 10:35:42 np0005473739 nova_compute[259550]: 2025-10-07 14:35:42.412 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:35:42 np0005473739 nova_compute[259550]: 2025-10-07 14:35:42.416 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847742.3756316, 614c22a4-9342-4037-adb5-71c3375b8553 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:35:42 np0005473739 nova_compute[259550]: 2025-10-07 14:35:42.416 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:35:42 np0005473739 nova_compute[259550]: 2025-10-07 14:35:42.448 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:35:42 np0005473739 nova_compute[259550]: 2025-10-07 14:35:42.451 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:35:42 np0005473739 nova_compute[259550]: 2025-10-07 14:35:42.471 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:35:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.216 2 DEBUG nova.compute.manager [req-d103d16a-9ed9-4e10-a9bc-8c30ce0ca5e9 req-255049a8-5c89-4a7d-9a63-d48370d1b585 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.216 2 DEBUG oslo_concurrency.lockutils [req-d103d16a-9ed9-4e10-a9bc-8c30ce0ca5e9 req-255049a8-5c89-4a7d-9a63-d48370d1b585 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.216 2 DEBUG oslo_concurrency.lockutils [req-d103d16a-9ed9-4e10-a9bc-8c30ce0ca5e9 req-255049a8-5c89-4a7d-9a63-d48370d1b585 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.216 2 DEBUG oslo_concurrency.lockutils [req-d103d16a-9ed9-4e10-a9bc-8c30ce0ca5e9 req-255049a8-5c89-4a7d-9a63-d48370d1b585 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.217 2 DEBUG nova.compute.manager [req-d103d16a-9ed9-4e10-a9bc-8c30ce0ca5e9 req-255049a8-5c89-4a7d-9a63-d48370d1b585 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Processing event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.217 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.221 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847743.221098, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.221 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Resumed (Lifecycle Event)
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.222 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.226 2 INFO nova.virt.libvirt.driver [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance spawned successfully.
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.226 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.249 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.255 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.259 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.259 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.260 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.260 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.260 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.261 2 DEBUG nova.virt.libvirt.driver [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.286 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.320 2 INFO nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Took 15.77 seconds to spawn the instance on the hypervisor.
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.321 2 DEBUG nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.379 2 DEBUG nova.compute.manager [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.379 2 DEBUG oslo_concurrency.lockutils [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.380 2 DEBUG oslo_concurrency.lockutils [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.380 2 DEBUG oslo_concurrency.lockutils [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.380 2 DEBUG nova.compute.manager [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No event matching network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 in dict_keys([('network-vif-plugged', 'e3da1022-6830-4557-9992-ffd8ec07a599')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.380 2 WARNING nova.compute.manager [req-4d37bc9d-ce43-4f27-9db8-d465806372f6 req-2f3dca48-b8bf-49f2-881c-2b51dca448bf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received unexpected event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 for instance with vm_state building and task_state spawning.
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.395 2 INFO nova.compute.manager [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Took 20.74 seconds to build instance.
Oct  7 10:35:43 np0005473739 nova_compute[259550]: 2025-10-07 14:35:43.430 2 DEBUG oslo_concurrency.lockutils [None req-62378312-915d-410f-b520-76299e910189 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.354s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:35:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:44Z|01157|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct  7 10:35:44 np0005473739 nova_compute[259550]: 2025-10-07 14:35:44.241 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:35:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:44Z|01158|binding|INFO|Releasing lport 9b384c3a-3862-48f5-9fa0-a30ca1bb30cc from this chassis (sb_readonly=0)
Oct  7 10:35:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:44Z|01159|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct  7 10:35:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2199: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 107 KiB/s rd, 712 KiB/s wr, 53 op/s
Oct  7 10:35:44 np0005473739 nova_compute[259550]: 2025-10-07 14:35:44.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.310 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.310 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.310 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.310 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.310 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.311 2 WARNING nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state active and task_state None.
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.311 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.311 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.311 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.311 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Processing event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG oslo_concurrency.lockutils [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.312 2 DEBUG nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No waiting events found dispatching network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.313 2 WARNING nova.compute.manager [req-e487a5fd-60dc-4298-b908-e87bd0d9bf2a req-b823e7e9-5a0a-45a1-accf-a38b0629d4b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received unexpected event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 for instance with vm_state building and task_state spawning.
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.313 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.316 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847745.3164804, 614c22a4-9342-4037-adb5-71c3375b8553 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.317 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] VM Resumed (Lifecycle Event)
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.319 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.323 2 INFO nova.virt.libvirt.driver [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance spawned successfully.
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.324 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.346 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.351 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.355 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.356 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.356 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.357 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.357 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.358 2 DEBUG nova.virt.libvirt.driver [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.387 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.456 2 INFO nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Took 20.44 seconds to spawn the instance on the hypervisor.
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.456 2 DEBUG nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.560 2 INFO nova.compute.manager [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Took 23.38 seconds to build instance.
Oct  7 10:35:45 np0005473739 nova_compute[259550]: 2025-10-07 14:35:45.587 2 DEBUG oslo_concurrency.lockutils [None req-5419fa89-1fb7-4019-a188-d763194e0ade d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 23.767s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:35:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2200: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 448 KiB/s rd, 28 KiB/s wr, 62 op/s
Oct  7 10:35:46 np0005473739 nova_compute[259550]: 2025-10-07 14:35:46.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:35:46 np0005473739 podman[380408]: 2025-10-07 14:35:46.738179165 +0000 UTC m=+0.070112775 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 10:35:46 np0005473739 podman[380408]: 2025-10-07 14:35:46.833211404 +0000 UTC m=+0.165145004 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 10:35:46 np0005473739 nova_compute[259550]: 2025-10-07 14:35:46.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:35:47 np0005473739 nova_compute[259550]: 2025-10-07 14:35:47.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:35:47 np0005473739 nova_compute[259550]: 2025-10-07 14:35:47.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:35:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:35:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:35:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:35:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:35:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2201: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 446 KiB/s rd, 28 KiB/s wr, 56 op/s
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:35:48 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 935d1af2-2616-44f0-8341-153e89f83475 does not exist
Oct  7 10:35:48 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cf3cb86b-9ae5-4edf-bc73-b5891b47bb76 does not exist
Oct  7 10:35:48 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9903ea43-8c97-4719-85f1-69a1806dba57 does not exist
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:35:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:35:48 np0005473739 podman[380827]: 2025-10-07 14:35:48.939395722 +0000 UTC m=+0.038981943 container create 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:35:48 np0005473739 nova_compute[259550]: 2025-10-07 14:35:48.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:35:48 np0005473739 nova_compute[259550]: 2025-10-07 14:35:48.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:35:48 np0005473739 nova_compute[259550]: 2025-10-07 14:35:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:35:48 np0005473739 nova_compute[259550]: 2025-10-07 14:35:48.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:35:48 np0005473739 systemd[1]: Started libpod-conmon-520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c.scope.
Oct  7 10:35:49 np0005473739 podman[380827]: 2025-10-07 14:35:48.921777632 +0000 UTC m=+0.021363873 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:35:49 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:35:49 np0005473739 podman[380827]: 2025-10-07 14:35:49.040656708 +0000 UTC m=+0.140242929 container init 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:35:49 np0005473739 podman[380827]: 2025-10-07 14:35:49.050332087 +0000 UTC m=+0.149918308 container start 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:35:49 np0005473739 podman[380827]: 2025-10-07 14:35:49.055083714 +0000 UTC m=+0.154669945 container attach 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 10:35:49 np0005473739 keen_goldstine[380842]: 167 167
Oct  7 10:35:49 np0005473739 systemd[1]: libpod-520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c.scope: Deactivated successfully.
Oct  7 10:35:49 np0005473739 podman[380827]: 2025-10-07 14:35:49.057498858 +0000 UTC m=+0.157085089 container died 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:35:49 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8cf0da66ad63c6b8759bb523da201b148e2f8639a1c5b9ba3d80d7746e84531d-merged.mount: Deactivated successfully.
Oct  7 10:35:49 np0005473739 podman[380827]: 2025-10-07 14:35:49.105892911 +0000 UTC m=+0.205479132 container remove 520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_goldstine, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 10:35:49 np0005473739 systemd[1]: libpod-conmon-520363fc74fc751d52cb0e19a9842ccf1ecbae8d82bdf76cab2c120fc9624f5c.scope: Deactivated successfully.
Oct  7 10:35:49 np0005473739 nova_compute[259550]: 2025-10-07 14:35:49.229 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847734.2274334, 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:35:49 np0005473739 nova_compute[259550]: 2025-10-07 14:35:49.231 2 INFO nova.compute.manager [-] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:35:49 np0005473739 nova_compute[259550]: 2025-10-07 14:35:49.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:49 np0005473739 podman[380865]: 2025-10-07 14:35:49.318337508 +0000 UTC m=+0.040999406 container create f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct  7 10:35:49 np0005473739 systemd[1]: Started libpod-conmon-f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7.scope.
Oct  7 10:35:49 np0005473739 podman[380865]: 2025-10-07 14:35:49.300711286 +0000 UTC m=+0.023373204 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:35:49 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:35:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:49 np0005473739 podman[380865]: 2025-10-07 14:35:49.442233168 +0000 UTC m=+0.164895096 container init f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:35:49 np0005473739 podman[380865]: 2025-10-07 14:35:49.451188638 +0000 UTC m=+0.173850536 container start f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 10:35:49 np0005473739 podman[380865]: 2025-10-07 14:35:49.457054994 +0000 UTC m=+0.179716922 container attach f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:35:49 np0005473739 nova_compute[259550]: 2025-10-07 14:35:49.520 2 DEBUG nova.compute.manager [None req-96b22911-0140-4d23-a28f-4cbd156d56af - - - - - -] [instance: 109b6c3d-5810-4b7d-a96d-4e1b64dc2daa] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:35:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2202: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 28 KiB/s wr, 131 op/s
Oct  7 10:35:50 np0005473739 mystifying_hofstadter[380882]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:35:50 np0005473739 mystifying_hofstadter[380882]: --> relative data size: 1.0
Oct  7 10:35:50 np0005473739 mystifying_hofstadter[380882]: --> All data devices are unavailable
Oct  7 10:35:50 np0005473739 systemd[1]: libpod-f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7.scope: Deactivated successfully.
Oct  7 10:35:50 np0005473739 systemd[1]: libpod-f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7.scope: Consumed 1.069s CPU time.
Oct  7 10:35:50 np0005473739 podman[380911]: 2025-10-07 14:35:50.634349412 +0000 UTC m=+0.035273453 container died f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:35:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-eef3f4a8fb39c7c0499748e994025df16596a0732ef71b4fd36ed2520c4326b2-merged.mount: Deactivated successfully.
Oct  7 10:35:50 np0005473739 podman[380911]: 2025-10-07 14:35:50.731798266 +0000 UTC m=+0.132722307 container remove f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_hofstadter, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 10:35:50 np0005473739 systemd[1]: libpod-conmon-f03a5710d2f9adf15f542866f8474e5e7def4ff184fcebffa2b1105ae6d56cd7.scope: Deactivated successfully.
Oct  7 10:35:50 np0005473739 nova_compute[259550]: 2025-10-07 14:35:50.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:35:51 np0005473739 podman[381066]: 2025-10-07 14:35:51.384724683 +0000 UTC m=+0.043413161 container create 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:35:51 np0005473739 systemd[1]: Started libpod-conmon-65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4.scope.
Oct  7 10:35:51 np0005473739 podman[381066]: 2025-10-07 14:35:51.366107535 +0000 UTC m=+0.024796043 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:35:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:35:51 np0005473739 podman[381066]: 2025-10-07 14:35:51.500142576 +0000 UTC m=+0.158831074 container init 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:35:51 np0005473739 podman[381066]: 2025-10-07 14:35:51.507480332 +0000 UTC m=+0.166168810 container start 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:35:51 np0005473739 inspiring_stonebraker[381083]: 167 167
Oct  7 10:35:51 np0005473739 systemd[1]: libpod-65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4.scope: Deactivated successfully.
Oct  7 10:35:51 np0005473739 conmon[381083]: conmon 65028c5909f7072b2be8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4.scope/container/memory.events
Oct  7 10:35:51 np0005473739 podman[381066]: 2025-10-07 14:35:51.51450356 +0000 UTC m=+0.173192038 container attach 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:35:51 np0005473739 podman[381066]: 2025-10-07 14:35:51.515328152 +0000 UTC m=+0.174016630 container died 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:35:51 np0005473739 systemd[1]: var-lib-containers-storage-overlay-80f64848eddc315c2ec427b88bb005cb38664cb1886389a4836826e2d53a1657-merged.mount: Deactivated successfully.
Oct  7 10:35:51 np0005473739 podman[381066]: 2025-10-07 14:35:51.575010217 +0000 UTC m=+0.233698695 container remove 65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 10:35:51 np0005473739 systemd[1]: libpod-conmon-65028c5909f7072b2be828972bb77c1d49fda61a36b1c35d20fdd9b8ac3691b4.scope: Deactivated successfully.
Oct  7 10:35:51 np0005473739 podman[381108]: 2025-10-07 14:35:51.782509432 +0000 UTC m=+0.044930292 container create 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 10:35:51 np0005473739 systemd[1]: Started libpod-conmon-52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32.scope.
Oct  7 10:35:51 np0005473739 podman[381108]: 2025-10-07 14:35:51.764373497 +0000 UTC m=+0.026794377 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:35:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:35:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f0ccb160c603bf9d9d76cb754019f654f9575c8ed8123cd944f489ac57b58/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f0ccb160c603bf9d9d76cb754019f654f9575c8ed8123cd944f489ac57b58/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f0ccb160c603bf9d9d76cb754019f654f9575c8ed8123cd944f489ac57b58/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/457f0ccb160c603bf9d9d76cb754019f654f9575c8ed8123cd944f489ac57b58/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:51 np0005473739 podman[381108]: 2025-10-07 14:35:51.8808679 +0000 UTC m=+0.143288760 container init 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:35:51 np0005473739 podman[381108]: 2025-10-07 14:35:51.889461449 +0000 UTC m=+0.151882309 container start 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 10:35:51 np0005473739 podman[381108]: 2025-10-07 14:35:51.894960786 +0000 UTC m=+0.157381676 container attach 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:35:51 np0005473739 nova_compute[259550]: 2025-10-07 14:35:51.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:35:52 np0005473739 nova_compute[259550]: 2025-10-07 14:35:52.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2203: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 27 KiB/s wr, 145 op/s
Oct  7 10:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:35:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:35:52 np0005473739 stoic_jang[381125]: {
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:    "0": [
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:        {
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "devices": [
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "/dev/loop3"
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            ],
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_name": "ceph_lv0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_size": "21470642176",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "name": "ceph_lv0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "tags": {
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.cluster_name": "ceph",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.crush_device_class": "",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.encrypted": "0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.osd_id": "0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.type": "block",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.vdo": "0"
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            },
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "type": "block",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "vg_name": "ceph_vg0"
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:        }
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:    ],
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:    "1": [
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:        {
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "devices": [
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "/dev/loop4"
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            ],
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_name": "ceph_lv1",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_size": "21470642176",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "name": "ceph_lv1",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "tags": {
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.cluster_name": "ceph",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.crush_device_class": "",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.encrypted": "0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.osd_id": "1",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.type": "block",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.vdo": "0"
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            },
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "type": "block",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "vg_name": "ceph_vg1"
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:        }
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:    ],
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:    "2": [
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:        {
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "devices": [
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "/dev/loop5"
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            ],
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_name": "ceph_lv2",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_size": "21470642176",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "name": "ceph_lv2",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "tags": {
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.cluster_name": "ceph",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.crush_device_class": "",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.encrypted": "0",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.osd_id": "2",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.type": "block",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:                "ceph.vdo": "0"
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            },
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "type": "block",
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:            "vg_name": "ceph_vg2"
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:        }
Oct  7 10:35:52 np0005473739 stoic_jang[381125]:    ]
Oct  7 10:35:52 np0005473739 stoic_jang[381125]: }
Oct  7 10:35:52 np0005473739 systemd[1]: libpod-52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32.scope: Deactivated successfully.
Oct  7 10:35:52 np0005473739 podman[381108]: 2025-10-07 14:35:52.75924935 +0000 UTC m=+1.021670220 container died 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:35:52 np0005473739 systemd[1]: var-lib-containers-storage-overlay-457f0ccb160c603bf9d9d76cb754019f654f9575c8ed8123cd944f489ac57b58-merged.mount: Deactivated successfully.
Oct  7 10:35:52 np0005473739 podman[381108]: 2025-10-07 14:35:52.832077806 +0000 UTC m=+1.094498666 container remove 52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_jang, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 10:35:52 np0005473739 systemd[1]: libpod-conmon-52e2fd79fc4102fb6f17ed2a2f4ee713d7c70ca9c9ceace5d4f1f503334b4f32.scope: Deactivated successfully.
Oct  7 10:35:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:53 np0005473739 podman[381286]: 2025-10-07 14:35:53.51970826 +0000 UTC m=+0.057049365 container create 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:35:53 np0005473739 systemd[1]: Started libpod-conmon-46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793.scope.
Oct  7 10:35:53 np0005473739 podman[381286]: 2025-10-07 14:35:53.492512963 +0000 UTC m=+0.029854088 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:35:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:35:53 np0005473739 podman[381286]: 2025-10-07 14:35:53.609771827 +0000 UTC m=+0.147112952 container init 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:35:53 np0005473739 podman[381286]: 2025-10-07 14:35:53.616811425 +0000 UTC m=+0.154152530 container start 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:35:53 np0005473739 podman[381286]: 2025-10-07 14:35:53.620784931 +0000 UTC m=+0.158126056 container attach 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3)
Oct  7 10:35:53 np0005473739 systemd[1]: libpod-46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793.scope: Deactivated successfully.
Oct  7 10:35:53 np0005473739 crazy_lovelace[381302]: 167 167
Oct  7 10:35:53 np0005473739 conmon[381302]: conmon 46d6a00a3f02bcdedee6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793.scope/container/memory.events
Oct  7 10:35:53 np0005473739 podman[381286]: 2025-10-07 14:35:53.624193482 +0000 UTC m=+0.161534587 container died 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:35:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-37e434d956223c4270eae0ce816b6df2db82e597c54af1d7718653766549e77c-merged.mount: Deactivated successfully.
Oct  7 10:35:53 np0005473739 podman[381286]: 2025-10-07 14:35:53.676202701 +0000 UTC m=+0.213543806 container remove 46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 10:35:53 np0005473739 systemd[1]: libpod-conmon-46d6a00a3f02bcdedee6eb0e2b51c103d6e28c38e7a79d1f0ae3965107853793.scope: Deactivated successfully.
Oct  7 10:35:53 np0005473739 podman[381326]: 2025-10-07 14:35:53.916625126 +0000 UTC m=+0.066090977 container create 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:35:53 np0005473739 systemd[1]: Started libpod-conmon-03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea.scope.
Oct  7 10:35:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:35:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c008a3bd37c7c22e088bba77b7bc93301b18f7971ad16ea3d6e4149dec45b48/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c008a3bd37c7c22e088bba77b7bc93301b18f7971ad16ea3d6e4149dec45b48/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c008a3bd37c7c22e088bba77b7bc93301b18f7971ad16ea3d6e4149dec45b48/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c008a3bd37c7c22e088bba77b7bc93301b18f7971ad16ea3d6e4149dec45b48/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:35:53 np0005473739 podman[381326]: 2025-10-07 14:35:53.896748895 +0000 UTC m=+0.046214766 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:35:54 np0005473739 podman[381326]: 2025-10-07 14:35:54.001616247 +0000 UTC m=+0.151082118 container init 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:35:54 np0005473739 podman[381326]: 2025-10-07 14:35:54.009627671 +0000 UTC m=+0.159093512 container start 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:35:54 np0005473739 podman[381326]: 2025-10-07 14:35:54.032162183 +0000 UTC m=+0.181628064 container attach 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct  7 10:35:54 np0005473739 nova_compute[259550]: 2025-10-07 14:35:54.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2204: 305 pgs: 305 active+clean; 214 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 15 KiB/s wr, 136 op/s
Oct  7 10:35:54 np0005473739 nova_compute[259550]: 2025-10-07 14:35:54.423 2 DEBUG nova.compute.manager [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:54 np0005473739 nova_compute[259550]: 2025-10-07 14:35:54.424 2 DEBUG nova.compute.manager [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing instance network info cache due to event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:35:54 np0005473739 nova_compute[259550]: 2025-10-07 14:35:54.424 2 DEBUG oslo_concurrency.lockutils [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:35:54 np0005473739 nova_compute[259550]: 2025-10-07 14:35:54.424 2 DEBUG oslo_concurrency.lockutils [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:35:54 np0005473739 nova_compute[259550]: 2025-10-07 14:35:54.425 2 DEBUG nova.network.neutron [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:35:55 np0005473739 goofy_germain[381341]: {
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "osd_id": 2,
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "type": "bluestore"
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:    },
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "osd_id": 1,
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "type": "bluestore"
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:    },
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "osd_id": 0,
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:        "type": "bluestore"
Oct  7 10:35:55 np0005473739 goofy_germain[381341]:    }
Oct  7 10:35:55 np0005473739 goofy_germain[381341]: }
Oct  7 10:35:55 np0005473739 systemd[1]: libpod-03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea.scope: Deactivated successfully.
Oct  7 10:35:55 np0005473739 systemd[1]: libpod-03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea.scope: Consumed 1.074s CPU time.
Oct  7 10:35:55 np0005473739 podman[381374]: 2025-10-07 14:35:55.126170175 +0000 UTC m=+0.025640845 container died 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:35:55 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4c008a3bd37c7c22e088bba77b7bc93301b18f7971ad16ea3d6e4149dec45b48-merged.mount: Deactivated successfully.
Oct  7 10:35:55 np0005473739 podman[381374]: 2025-10-07 14:35:55.220843965 +0000 UTC m=+0.120314615 container remove 03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_germain, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:35:55 np0005473739 systemd[1]: libpod-conmon-03b4646bbdd8a3f9bfa1624101622eac3c6bad344e451135ca3415ea027f03ea.scope: Deactivated successfully.
Oct  7 10:35:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:35:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:35:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:35:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:35:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ccbf365b-d9a0-4af4-8ca3-7cac45b2dfab does not exist
Oct  7 10:35:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 93572faa-c4b9-4962-9919-dd185a73af5a does not exist
Oct  7 10:35:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:35:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:35:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2205: 305 pgs: 305 active+clean; 221 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.7 MiB/s rd, 511 KiB/s wr, 131 op/s
Oct  7 10:35:56 np0005473739 nova_compute[259550]: 2025-10-07 14:35:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.041 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.042 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.042 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.042 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.043 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:57Z|00125|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:dc:c4 10.100.0.11
Oct  7 10:35:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:57Z|00126|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:dc:c4 10.100.0.11
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:35:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4112694520' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.521 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:57 np0005473739 podman[381462]: 2025-10-07 14:35:57.65887472 +0000 UTC m=+0.079751412 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 10:35:57 np0005473739 podman[381461]: 2025-10-07 14:35:57.658887511 +0000 UTC m=+0.079365932 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.872 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.872 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000006e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.876 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.877 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000070 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.881 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:35:57 np0005473739 nova_compute[259550]: 2025-10-07 14:35:57.881 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000071 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:35:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.124 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.125 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3270MB free_disk=59.89509582519531GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.126 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.126 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.144 2 DEBUG nova.network.neutron [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updated VIF entry in instance network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.145 2 DEBUG nova.network.neutron [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updating instance_info_cache with network_info: [{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:35:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2206: 305 pgs: 305 active+clean; 221 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 511 KiB/s wr, 120 op/s
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.714 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance df052dd5-fecd-4dd3-be36-4becc3f9f318 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.715 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 614c22a4-9342-4037-adb5-71c3375b8553 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.715 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.715 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.716 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:35:58 np0005473739 nova_compute[259550]: 2025-10-07 14:35:58.788 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.117 2 DEBUG oslo_concurrency.lockutils [req-8d9a68f2-0024-4524-8463-03ce5622209a req-4a018531-ba0e-4db1-acc3-80a265c7339c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:35:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:35:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/488407076' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.257 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.263 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.470 2 DEBUG nova.compute.manager [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.471 2 DEBUG nova.compute.manager [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing instance network info cache due to event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.471 2 DEBUG oslo_concurrency.lockutils [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.472 2 DEBUG oslo_concurrency.lockutils [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.472 2 DEBUG nova.network.neutron [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing network info cache for port 3eb614e8-3f14-4375-bbe9-34facd5bce52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.563 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.661 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:35:59 np0005473739 nova_compute[259550]: 2025-10-07 14:35:59.661 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.535s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:35:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:59Z|00127|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:02:68:46 10.100.0.10
Oct  7 10:35:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:35:59Z|00128|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:02:68:46 10.100.0.10
Oct  7 10:36:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:00.068 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:00.068 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:00.069 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:00 np0005473739 nova_compute[259550]: 2025-10-07 14:36:00.239 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2207: 305 pgs: 305 active+clean; 262 MiB data, 896 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.9 MiB/s wr, 192 op/s
Oct  7 10:36:01 np0005473739 nova_compute[259550]: 2025-10-07 14:36:01.601 2 DEBUG nova.network.neutron [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updated VIF entry in instance network info cache for port 3eb614e8-3f14-4375-bbe9-34facd5bce52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:36:01 np0005473739 nova_compute[259550]: 2025-10-07 14:36:01.602 2 DEBUG nova.network.neutron [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:01 np0005473739 nova_compute[259550]: 2025-10-07 14:36:01.662 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:01 np0005473739 nova_compute[259550]: 2025-10-07 14:36:01.874 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:01 np0005473739 nova_compute[259550]: 2025-10-07 14:36:01.875 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:36:01 np0005473739 nova_compute[259550]: 2025-10-07 14:36:01.938 2 DEBUG oslo_concurrency.lockutils [req-bb4d6528-b1ab-4372-845b-31cb79609a43 req-428e082a-f6c9-458b-87dd-bb57aaaf98b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:36:02 np0005473739 nova_compute[259550]: 2025-10-07 14:36:02.222 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:36:02 np0005473739 nova_compute[259550]: 2025-10-07 14:36:02.222 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:36:02 np0005473739 nova_compute[259550]: 2025-10-07 14:36:02.223 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:36:02 np0005473739 nova_compute[259550]: 2025-10-07 14:36:02.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2208: 305 pgs: 305 active+clean; 271 MiB data, 905 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 4.2 MiB/s wr, 155 op/s
Oct  7 10:36:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:03 np0005473739 nova_compute[259550]: 2025-10-07 14:36:03.039 2 INFO nova.compute.manager [None req-6b7afdfc-e496-4075-a030-32b1bf31d697 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Get console output#033[00m
Oct  7 10:36:03 np0005473739 nova_compute[259550]: 2025-10-07 14:36:03.047 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:36:04 np0005473739 nova_compute[259550]: 2025-10-07 14:36:04.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2209: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 703 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct  7 10:36:05 np0005473739 nova_compute[259550]: 2025-10-07 14:36:05.538 2 INFO nova.compute.manager [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Rebuilding instance#033[00m
Oct  7 10:36:05 np0005473739 nova_compute[259550]: 2025-10-07 14:36:05.759 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'trusted_certs' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:06 np0005473739 nova_compute[259550]: 2025-10-07 14:36:06.000 2 DEBUG nova.compute.manager [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:36:06 np0005473739 nova_compute[259550]: 2025-10-07 14:36:06.291 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_requests' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2210: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 703 KiB/s rd, 4.3 MiB/s wr, 128 op/s
Oct  7 10:36:06 np0005473739 nova_compute[259550]: 2025-10-07 14:36:06.349 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:06 np0005473739 nova_compute[259550]: 2025-10-07 14:36:06.411 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:06 np0005473739 nova_compute[259550]: 2025-10-07 14:36:06.540 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:06 np0005473739 nova_compute[259550]: 2025-10-07 14:36:06.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:06 np0005473739 nova_compute[259550]: 2025-10-07 14:36:06.619 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032#033[00m
Oct  7 10:36:06 np0005473739 nova_compute[259550]: 2025-10-07 14:36:06.625 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:36:07 np0005473739 nova_compute[259550]: 2025-10-07 14:36:07.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:07 np0005473739 nova_compute[259550]: 2025-10-07 14:36:07.669 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:08 np0005473739 nova_compute[259550]: 2025-10-07 14:36:08.005 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:36:08 np0005473739 nova_compute[259550]: 2025-10-07 14:36:08.005 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:36:08 np0005473739 nova_compute[259550]: 2025-10-07 14:36:08.006 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2211: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 696 KiB/s rd, 3.8 MiB/s wr, 121 op/s
Oct  7 10:36:09 np0005473739 kernel: tap84baaee6-3f (unregistering): left promiscuous mode
Oct  7 10:36:09 np0005473739 NetworkManager[44949]: <info>  [1759847769.2502] device (tap84baaee6-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:36:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:09Z|01160|binding|INFO|Releasing lport 84baaee6-3f89-4d61-aaea-507e46e65618 from this chassis (sb_readonly=0)
Oct  7 10:36:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:09Z|01161|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 down in Southbound
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:09Z|01162|binding|INFO|Removing iface tap84baaee6-3f ovn-installed in OVS
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:09.304 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dc:c4 10.100.0.11'], port_security=['fa:16:3e:84:dc:c4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '97d6fa92-e050-4cce-a368-3065ace6996b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e8a86b-883d-4b18-ac5a-841f1cc14111, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=84baaee6-3f89-4d61-aaea-507e46e65618) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:36:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:09.306 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 84baaee6-3f89-4d61-aaea-507e46e65618 in datapath 002ae4bb-0f71-4b57-99ae-0bfd304fb458 unbound from our chassis#033[00m
Oct  7 10:36:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:09.307 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 002ae4bb-0f71-4b57-99ae-0bfd304fb458, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:36:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:09.309 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7367f5-6ef0-4a8b-b861-30953ce67e61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:09.309 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 namespace which is not needed anymore#033[00m
Oct  7 10:36:09 np0005473739 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct  7 10:36:09 np0005473739 systemd[1]: machine-qemu\x2d140\x2dinstance\x2d00000071.scope: Consumed 14.835s CPU time.
Oct  7 10:36:09 np0005473739 systemd-machined[214580]: Machine qemu-140-instance-00000071 terminated.
Oct  7 10:36:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:09Z|01163|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct  7 10:36:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:09Z|01164|binding|INFO|Releasing lport 9b384c3a-3862-48f5-9fa0-a30ca1bb30cc from this chassis (sb_readonly=0)
Oct  7 10:36:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:09Z|01165|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:09 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [NOTICE]   (379956) : haproxy version is 2.8.14-c23fe91
Oct  7 10:36:09 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [NOTICE]   (379956) : path to executable is /usr/sbin/haproxy
Oct  7 10:36:09 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [WARNING]  (379956) : Exiting Master process...
Oct  7 10:36:09 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [WARNING]  (379956) : Exiting Master process...
Oct  7 10:36:09 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [ALERT]    (379956) : Current worker (379958) exited with code 143 (Terminated)
Oct  7 10:36:09 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[379952]: [WARNING]  (379956) : All workers exited. Exiting... (0)
Oct  7 10:36:09 np0005473739 systemd[1]: libpod-47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624.scope: Deactivated successfully.
Oct  7 10:36:09 np0005473739 podman[381546]: 2025-10-07 14:36:09.494357649 +0000 UTC m=+0.083137252 container died 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.512 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.578 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.578 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.579 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.579 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.580 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.581 2 INFO nova.compute.manager [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Terminating instance#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.582 2 DEBUG nova.compute.manager [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.639 2 DEBUG nova.compute.manager [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.640 2 DEBUG nova.compute.manager [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing instance network info cache due to event network-changed-3eb614e8-3f14-4375-bbe9-34facd5bce52. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.640 2 DEBUG oslo_concurrency.lockutils [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.640 2 DEBUG oslo_concurrency.lockutils [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.640 2 DEBUG nova.network.neutron [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Refreshing network info cache for port 3eb614e8-3f14-4375-bbe9-34facd5bce52 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.644 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance shutdown successfully after 3 seconds.#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.649 2 INFO nova.virt.libvirt.driver [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance destroyed successfully.#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.653 2 INFO nova.virt.libvirt.driver [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance destroyed successfully.#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.654 2 DEBUG nova.virt.libvirt.vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:35:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='rebuilding',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:36:03Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 
4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.655 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.655 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.656 2 DEBUG os_vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.657 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84baaee6-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.661 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:09 np0005473739 nova_compute[259550]: 2025-10-07 14:36:09.666 2 INFO os_vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f')#033[00m
Oct  7 10:36:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624-userdata-shm.mount: Deactivated successfully.
Oct  7 10:36:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cfa396a46fc4f81512c267e5f20233b50cdd5b2d225de55dfebc3c6e7a24f58e-merged.mount: Deactivated successfully.
Oct  7 10:36:09 np0005473739 podman[381546]: 2025-10-07 14:36:09.962091067 +0000 UTC m=+0.550870670 container cleanup 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:36:09 np0005473739 systemd[1]: libpod-conmon-47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624.scope: Deactivated successfully.
Oct  7 10:36:10 np0005473739 kernel: tap3eb614e8-3f (unregistering): left promiscuous mode
Oct  7 10:36:10 np0005473739 NetworkManager[44949]: <info>  [1759847770.1392] device (tap3eb614e8-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:36:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:10Z|01166|binding|INFO|Releasing lport 3eb614e8-3f14-4375-bbe9-34facd5bce52 from this chassis (sb_readonly=0)
Oct  7 10:36:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:10Z|01167|binding|INFO|Setting lport 3eb614e8-3f14-4375-bbe9-34facd5bce52 down in Southbound
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:10Z|01168|binding|INFO|Removing iface tap3eb614e8-3f ovn-installed in OVS
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 kernel: tape3da1022-68 (unregistering): left promiscuous mode
Oct  7 10:36:10 np0005473739 NetworkManager[44949]: <info>  [1759847770.1807] device (tape3da1022-68): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:36:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:10Z|01169|binding|INFO|Releasing lport e3da1022-6830-4557-9992-ffd8ec07a599 from this chassis (sb_readonly=1)
Oct  7 10:36:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:10Z|01170|binding|INFO|Removing iface tape3da1022-68 ovn-installed in OVS
Oct  7 10:36:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:10Z|01171|if_status|INFO|Dropped 1 log messages in last 152 seconds (most recently, 152 seconds ago) due to excessive rate
Oct  7 10:36:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:10Z|01172|if_status|INFO|Not setting lport e3da1022-6830-4557-9992-ffd8ec07a599 down as sb is readonly
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:10Z|01173|binding|INFO|Setting lport e3da1022-6830-4557-9992-ffd8ec07a599 down in Southbound
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.229 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:02:68:46 10.100.0.10'], port_security=['fa:16:3e:02:68:46 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '614c22a4-9342-4037-adb5-71c3375b8553', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c899e05d-224c-44fe-8294-eaece58d7fe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34b97456-8a84-4c85-bc2d-80b2de48e489, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3eb614e8-3f14-4375-bbe9-34facd5bce52) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:36:10 np0005473739 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Deactivated successfully.
Oct  7 10:36:10 np0005473739 systemd[1]: machine-qemu\x2d141\x2dinstance\x2d00000070.scope: Consumed 15.621s CPU time.
Oct  7 10:36:10 np0005473739 systemd-machined[214580]: Machine qemu-141-instance-00000070 terminated.
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.283 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:2e:5c:25 2001:db8::f816:3eff:fe2e:5c25'], port_security=['fa:16:3e:2e:5c:25 2001:db8::f816:3eff:fe2e:5c25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe2e:5c25/64', 'neutron:device_id': '614c22a4-9342-4037-adb5-71c3375b8553', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae04c5e-8ef6-447a-8c6c-5575a0afadaf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=e3da1022-6830-4557-9992-ffd8ec07a599) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:36:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2212: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 706 KiB/s rd, 3.8 MiB/s wr, 126 op/s
Oct  7 10:36:10 np0005473739 podman[381604]: 2025-10-07 14:36:10.383022795 +0000 UTC m=+0.399186439 container remove 47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.390 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8af6c58b-f804-4ee0-ba79-9bb323ff8141]: (4, ('Tue Oct  7 02:36:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 (47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624)\n47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624\nTue Oct  7 02:36:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 (47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624)\n47ce7f6c60a58f741a54ecdd85f9e41d39758f5efff2c5b93d98b10017b36624\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.392 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5283d62c-f8ae-43e1-bd59-60055daae5a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.393 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap002ae4bb-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 kernel: tap002ae4bb-00: left promiscuous mode
Oct  7 10:36:10 np0005473739 NetworkManager[44949]: <info>  [1759847770.4122] manager: (tape3da1022-68): new Tun device (/org/freedesktop/NetworkManager/Devices/471)
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.423 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[679d527a-1b4f-4287-8b2e-6c89f64b91be]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.432 2 INFO nova.virt.libvirt.driver [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Instance destroyed successfully.#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.432 2 DEBUG nova.objects.instance [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 614c22a4-9342-4037-adb5-71c3375b8553 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.450 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8f2fc603-58da-4c09-99f0-191c704fd85a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.452 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd0e38fe-053c-486f-ad86-2be3c849e7e7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.467 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1a1f9c0b-947a-4d68-a0f9-488e8bbc8cc1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 831996, 'reachable_time': 20743, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381646, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 systemd[1]: run-netns-ovnmeta\x2d002ae4bb\x2d0f71\x2d4b57\x2d99ae\x2d0bfd304fb458.mount: Deactivated successfully.
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.473 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.473 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5730ecd5-4ef5-4071-a4fa-e58bb33ad23e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.474 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3eb614e8-3f14-4375-bbe9-34facd5bce52 in datapath c899e05d-224c-44fe-8294-eaece58d7fe7 unbound from our chassis#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.475 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c899e05d-224c-44fe-8294-eaece58d7fe7#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.491 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[76727b13-9c56-43fe-875a-a133dab60a83]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.524 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[272ae17c-c067-440e-88e7-153f090d698c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.526 2 DEBUG nova.virt.libvirt.vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:35:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:35:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.526 2 DEBUG nova.network.os_vif_util [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.185", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.527 2 DEBUG nova.network.os_vif_util [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.527 2 DEBUG os_vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.528 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3eb614e8-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.528 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8d57c20e-cbe3-4e24-b9dd-0959a456c1f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.537 2 INFO os_vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:02:68:46,bridge_name='br-int',has_traffic_filtering=True,id=3eb614e8-3f14-4375-bbe9-34facd5bce52,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3eb614e8-3f')#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.538 2 DEBUG nova.virt.libvirt.vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:35:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1065431809',display_name='tempest-TestGettingAddress-server-1065431809',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1065431809',id=112,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:35:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-b1k3bk39',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:35:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=614c22a4-9342-4037-adb5-71c3375b8553,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.538 2 DEBUG nova.network.os_vif_util [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.539 2 DEBUG nova.network.os_vif_util [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.539 2 DEBUG os_vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape3da1022-68, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.550 2 DEBUG nova.compute.manager [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-unplugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.550 2 DEBUG oslo_concurrency.lockutils [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.550 2 DEBUG oslo_concurrency.lockutils [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.551 2 DEBUG oslo_concurrency.lockutils [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.551 2 DEBUG nova.compute.manager [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No waiting events found dispatching network-vif-unplugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.551 2 DEBUG nova.compute.manager [req-5de303bc-a3eb-4503-b819-5e4d06cad502 req-71379314-22ce-496b-893a-1a943e99c50c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-unplugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.551 2 INFO os_vif [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:2e:5c:25,bridge_name='br-int',has_traffic_filtering=True,id=e3da1022-6830-4557-9992-ffd8ec07a599,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tape3da1022-68')#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.567 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aea1e8e9-c5fe-4136-834c-9cbb343ba1e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.589 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c73a51d3-d787-41fb-966c-249b6bc03d0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc899e05d-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:74:ba:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 324], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827520, 'reachable_time': 17717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381669, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.609 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e010a476-faaa-4b8a-aef8-d7e2ed6f65ac]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapc899e05d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827532, 'tstamp': 827532}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381670, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc899e05d-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827535, 'tstamp': 827535}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381670, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.611 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc899e05d-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.614 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc899e05d-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.614 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.615 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc899e05d-20, col_values=(('external_ids', {'iface-id': 'a420e217-e19f-447c-8a17-e0fdc42144b5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.615 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.617 161536 INFO neutron.agent.ovn.metadata.agent [-] Port e3da1022-6830-4557-9992-ffd8ec07a599 in datapath 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 unbound from our chassis#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.619 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.633 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61ef4fdf-65cb-4ae9-af6c-5539a2325448]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.669 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[333cc2cf-4963-4d81-91cf-24eb3b1e7e2e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.673 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5986124d-a389-41fd-9018-43991fafa620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.699 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[738cbe54-22f1-4038-9294-f281ab36ac30]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.726 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e674ab-9d1c-4efc-a17d-a60cb8853474]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap673dd6ba-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7b:cb:46'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2772, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2772, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 325], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827720, 'reachable_time': 16406, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 381679, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.749 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42abc9bd-626b-42fd-9b35-6b9002ab582f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap673dd6ba-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 827732, 'tstamp': 827732}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 381680, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.752 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap673dd6ba-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:10 np0005473739 nova_compute[259550]: 2025-10-07 14:36:10.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap673dd6ba-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.756 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap673dd6ba-40, col_values=(('external_ids', {'iface-id': 'e2eca697-1003-4116-b184-c9191a00584f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:10.757 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:36:11 np0005473739 nova_compute[259550]: 2025-10-07 14:36:11.068 2 DEBUG nova.network.neutron [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updated VIF entry in instance network info cache for port 3eb614e8-3f14-4375-bbe9-34facd5bce52. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:36:11 np0005473739 nova_compute[259550]: 2025-10-07 14:36:11.069 2 DEBUG nova.network.neutron [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "e3da1022-6830-4557-9992-ffd8ec07a599", "address": "fa:16:3e:2e:5c:25", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe2e:5c25", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tape3da1022-68", "ovs_interfaceid": "e3da1022-6830-4557-9992-ffd8ec07a599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:11 np0005473739 podman[381681]: 2025-10-07 14:36:11.078161089 +0000 UTC m=+0.062544583 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  7 10:36:11 np0005473739 podman[381682]: 2025-10-07 14:36:11.108585832 +0000 UTC m=+0.092720288 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:36:11 np0005473739 nova_compute[259550]: 2025-10-07 14:36:11.133 2 DEBUG oslo_concurrency.lockutils [req-75103e4d-1b3b-42f0-897f-39e247444757 req-901cee8c-c35b-4f5e-99e7-097f5fd01f2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-614c22a4-9342-4037-adb5-71c3375b8553" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:36:12 np0005473739 nova_compute[259550]: 2025-10-07 14:36:12.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2213: 305 pgs: 305 active+clean; 279 MiB data, 907 MiB used, 59 GiB / 60 GiB avail; 274 KiB/s rd, 418 KiB/s wr, 62 op/s
Oct  7 10:36:12 np0005473739 nova_compute[259550]: 2025-10-07 14:36:12.743 2 DEBUG nova.compute.manager [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:36:12 np0005473739 nova_compute[259550]: 2025-10-07 14:36:12.743 2 DEBUG oslo_concurrency.lockutils [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:12 np0005473739 nova_compute[259550]: 2025-10-07 14:36:12.744 2 DEBUG oslo_concurrency.lockutils [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:12 np0005473739 nova_compute[259550]: 2025-10-07 14:36:12.744 2 DEBUG oslo_concurrency.lockutils [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:12 np0005473739 nova_compute[259550]: 2025-10-07 14:36:12.745 2 DEBUG nova.compute.manager [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No waiting events found dispatching network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:36:12 np0005473739 nova_compute[259550]: 2025-10-07 14:36:12.745 2 WARNING nova.compute.manager [req-1e16e314-d5a0-4cc3-b88e-7d5f072af652 req-361d1923-c1d8-42c1-841a-b7f2bfb2336f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received unexpected event network-vif-plugged-3eb614e8-3f14-4375-bbe9-34facd5bce52 for instance with vm_state active and task_state deleting.
Oct  7 10:36:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:13 np0005473739 nova_compute[259550]: 2025-10-07 14:36:13.107 2 DEBUG nova.compute.manager [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:36:13 np0005473739 nova_compute[259550]: 2025-10-07 14:36:13.108 2 DEBUG oslo_concurrency.lockutils [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:13 np0005473739 nova_compute[259550]: 2025-10-07 14:36:13.108 2 DEBUG oslo_concurrency.lockutils [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:13 np0005473739 nova_compute[259550]: 2025-10-07 14:36:13.108 2 DEBUG oslo_concurrency.lockutils [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:13 np0005473739 nova_compute[259550]: 2025-10-07 14:36:13.108 2 DEBUG nova.compute.manager [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:36:13 np0005473739 nova_compute[259550]: 2025-10-07 14:36:13.109 2 WARNING nova.compute.manager [req-374cbcba-b28a-4b9f-883f-b0c77ca478b7 req-38957575-9949-46b2-8ece-562a4c09cefa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state active and task_state rebuilding.
Oct  7 10:36:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2214: 305 pgs: 305 active+clean; 186 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 68 KiB/s rd, 110 KiB/s wr, 39 op/s
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.189 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deleting instance files /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_del
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.191 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deletion of /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_del complete
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.506 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.507 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.507 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.508 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.508 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.508 2 WARNING nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state active and task_state rebuilding.
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.508 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-unplugged-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.508 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No waiting events found dispatching network-vif-unplugged-e3da1022-6830-4557-9992-ffd8ec07a599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-unplugged-e3da1022-6830-4557-9992-ffd8ec07a599 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "614c22a4-9342-4037-adb5-71c3375b8553-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.509 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.510 2 DEBUG oslo_concurrency.lockutils [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.510 2 DEBUG nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] No waiting events found dispatching network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.510 2 WARNING nova.compute.manager [req-17ce93c3-fe70-44a1-a2b4-278404f5f59f req-bda00f92-498a-4529-a460-c20f7af8f0d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received unexpected event network-vif-plugged-e3da1022-6830-4557-9992-ffd8ec07a599 for instance with vm_state active and task_state deleting.
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.560 2 INFO nova.virt.libvirt.driver [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Deleting instance files /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553_del
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.561 2 INFO nova.virt.libvirt.driver [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Deletion of /var/lib/nova/instances/614c22a4-9342-4037-adb5-71c3375b8553_del complete
Oct  7 10:36:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:15Z|01174|binding|INFO|Releasing lport a420e217-e19f-447c-8a17-e0fdc42144b5 from this chassis (sb_readonly=0)
Oct  7 10:36:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:15Z|01175|binding|INFO|Releasing lport e2eca697-1003-4116-b184-c9191a00584f from this chassis (sb_readonly=0)
Oct  7 10:36:15 np0005473739 nova_compute[259550]: 2025-10-07 14:36:15.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.168 2 INFO nova.compute.manager [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Took 6.59 seconds to destroy the instance on the hypervisor.
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.169 2 DEBUG oslo.service.loopingcall [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.169 2 DEBUG nova.compute.manager [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.170 2 DEBUG nova.network.neutron [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.311 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.312 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating image(s)
Oct  7 10:36:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2215: 305 pgs: 305 active+clean; 146 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 47 op/s
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.334 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.363 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.385 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.389 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.484 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.485 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.486 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.486 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.510 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.514 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.875 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:16 np0005473739 nova_compute[259550]: 2025-10-07 14:36:16.876 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.294 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.469 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2 e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.955s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.541 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.589 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.590 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.601 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.601 2 INFO nova.compute.claims [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.688 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.689 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Ensure instance console log exists: /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.689 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.690 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.690 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.692 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start _get_guest_xml network_info=[{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.696 2 WARNING nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.: NotImplementedError#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.700 2 DEBUG nova.virt.libvirt.host [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.701 2 DEBUG nova.virt.libvirt.host [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.704 2 DEBUG nova.virt.libvirt.host [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.704 2 DEBUG nova.virt.libvirt.host [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.705 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.705 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:31Z,direct_url=<?>,disk_format='qcow2',id=d37bdf89-ce37-478a-af4d-2b9cd0435b79,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img_alt',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:32Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.705 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.705 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.706 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.707 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.707 2 DEBUG nova.virt.hardware [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.707 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:17 np0005473739 nova_compute[259550]: 2025-10-07 14:36:17.948 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2216: 305 pgs: 305 active+clean; 146 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 41 KiB/s wr, 47 op/s
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.343 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:36:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1520250676' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.423 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.445 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.449 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:36:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2929134489' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.793 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.800 2 DEBUG nova.compute.provider_tree [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:36:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:36:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/259515317' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.908 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.909 2 DEBUG nova.virt.libvirt.vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:35:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:36:15Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.909 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.910 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.913 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  <uuid>e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</uuid>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  <name>instance-00000071</name>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-2036986690</nova:name>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:36:17</nova:creationTime>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="d37bdf89-ce37-478a-af4d-2b9cd0435b79"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <nova:port uuid="84baaee6-3f89-4d61-aaea-507e46e65618">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <entry name="serial">e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</entry>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <entry name="uuid">e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb</entry>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:84:dc:c4"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <target dev="tap84baaee6-3f"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/console.log" append="off"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:36:18 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:36:18 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:36:18 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:36:18 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.914 2 DEBUG nova.virt.libvirt.vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:35:43Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='rebuild_spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:36:15Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": 
"floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.914 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.914 2 DEBUG nova.network.os_vif_util [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.915 2 DEBUG os_vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.916 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap84baaee6-3f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.918 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap84baaee6-3f, col_values=(('external_ids', {'iface-id': '84baaee6-3f89-4d61-aaea-507e46e65618', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:84:dc:c4', 'vm-uuid': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:18 np0005473739 NetworkManager[44949]: <info>  [1759847778.9206] manager: (tap84baaee6-3f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/472)
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.922 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.925 2 INFO os_vif [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f')#033[00m
Oct  7 10:36:18 np0005473739 nova_compute[259550]: 2025-10-07 14:36:18.935 2 DEBUG nova.scheduler.client.report [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.335 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.336 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.396 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.397 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.397 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:84:dc:c4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.398 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Using config drive#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.418 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.770 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.770 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.778 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'ec2_ids' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.889 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'keypairs' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:19 np0005473739 nova_compute[259550]: 2025-10-07 14:36:19.915 2 INFO nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.000 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:36:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2217: 305 pgs: 305 active+clean; 144 MiB data, 825 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 978 KiB/s wr, 76 op/s
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.442 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.443 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.444 2 INFO nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Creating image(s)#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.465 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.488 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.509 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.512 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.602 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.603 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.604 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.604 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.623 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.626 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:20 np0005473739 nova_compute[259550]: 2025-10-07 14:36:20.998 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.372s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.059 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.229 2 DEBUG nova.objects.instance [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.262 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.262 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Ensure instance console log exists: /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.263 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.264 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.264 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.505 2 DEBUG nova.compute.manager [req-ed0705dd-0d46-495e-898f-fb210fb89800 req-2ed09e60-d264-40f9-9f39-52ecb713ccdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-deleted-e3da1022-6830-4557-9992-ffd8ec07a599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.505 2 INFO nova.compute.manager [req-ed0705dd-0d46-495e-898f-fb210fb89800 req-2ed09e60-d264-40f9-9f39-52ecb713ccdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Neutron deleted interface e3da1022-6830-4557-9992-ffd8ec07a599; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.506 2 DEBUG nova.network.neutron [req-ed0705dd-0d46-495e-898f-fb210fb89800 req-2ed09e60-d264-40f9-9f39-52ecb713ccdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [{"id": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "address": "fa:16:3e:02:68:46", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3eb614e8-3f", "ovs_interfaceid": "3eb614e8-3f14-4375-bbe9-34facd5bce52", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.558 2 DEBUG nova.network.neutron [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.625 2 DEBUG nova.compute.manager [req-ed0705dd-0d46-495e-898f-fb210fb89800 req-2ed09e60-d264-40f9-9f39-52ecb713ccdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Detach interface failed, port_id=e3da1022-6830-4557-9992-ffd8ec07a599, reason: Instance 614c22a4-9342-4037-adb5-71c3375b8553 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.637 2 INFO nova.compute.manager [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Took 5.47 seconds to deallocate network for instance.#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.672 2 DEBUG nova.policy [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.732 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.733 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.837 2 DEBUG oslo_concurrency.processutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.944 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Creating config drive at /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config#033[00m
Oct  7 10:36:21 np0005473739 nova_compute[259550]: 2025-10-07 14:36:21.950 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr6dmss1v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.096 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpr6dmss1v" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.126 2 DEBUG nova.storage.rbd_utils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.130 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:36:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3308405861' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.290 2 DEBUG oslo_concurrency.processutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.296 2 DEBUG nova.compute.provider_tree [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2218: 305 pgs: 305 active+clean; 167 MiB data, 835 MiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 1.8 MiB/s wr, 84 op/s
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.325 2 DEBUG nova.scheduler.client.report [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.372 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.639s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.411 2 DEBUG oslo_concurrency.processutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.281s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.411 2 INFO nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deleting local config drive /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb/disk.config because it was imported into RBD.#033[00m
Oct  7 10:36:22 np0005473739 kernel: tap84baaee6-3f: entered promiscuous mode
Oct  7 10:36:22 np0005473739 NetworkManager[44949]: <info>  [1759847782.4645] manager: (tap84baaee6-3f): new Tun device (/org/freedesktop/NetworkManager/Devices/473)
Oct  7 10:36:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:22Z|01176|binding|INFO|Claiming lport 84baaee6-3f89-4d61-aaea-507e46e65618 for this chassis.
Oct  7 10:36:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:22Z|01177|binding|INFO|84baaee6-3f89-4d61-aaea-507e46e65618: Claiming fa:16:3e:84:dc:c4 10.100.0.11
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:22Z|01178|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 ovn-installed in OVS
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:22 np0005473739 systemd-udevd[382238]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:36:22 np0005473739 systemd-machined[214580]: New machine qemu-142-instance-00000071.
Oct  7 10:36:22 np0005473739 NetworkManager[44949]: <info>  [1759847782.5077] device (tap84baaee6-3f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:36:22 np0005473739 NetworkManager[44949]: <info>  [1759847782.5085] device (tap84baaee6-3f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:36:22 np0005473739 systemd[1]: Started Virtual Machine qemu-142-instance-00000071.
Oct  7 10:36:22 np0005473739 nova_compute[259550]: 2025-10-07 14:36:22.556 2 INFO nova.scheduler.client.report [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 614c22a4-9342-4037-adb5-71c3375b8553#033[00m
Oct  7 10:36:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:22Z|01179|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 up in Southbound
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.629 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dc:c4 10.100.0.11'], port_security=['fa:16:3e:84:dc:c4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '97d6fa92-e050-4cce-a368-3065ace6996b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.232'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e8a86b-883d-4b18-ac5a-841f1cc14111, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=84baaee6-3f89-4d61-aaea-507e46e65618) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.631 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 84baaee6-3f89-4d61-aaea-507e46e65618 in datapath 002ae4bb-0f71-4b57-99ae-0bfd304fb458 bound to our chassis#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.633 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 002ae4bb-0f71-4b57-99ae-0bfd304fb458#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.645 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6012021f-1af8-40b8-8e89-3248ff7ae2e5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.647 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap002ae4bb-01 in ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.649 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap002ae4bb-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.649 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13025b81-5f5d-4751-9a74-fe114ab64e18]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.650 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20156dcf-6a79-43d6-848a-b6f9d69ddfd6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.662 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[dd67fc02-3290-4abe-b6f7-8810e13a6c9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.687 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[050f3dd7-a374-4017-b938-45bcd9aea0bd]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:36:22
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.meta', 'default.rgw.log', 'cephfs.cephfs.data', '.mgr', 'volumes', '.rgw.root', 'cephfs.cephfs.meta', 'default.rgw.control', 'vms']
Oct  7 10:36:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.723 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3706b538-be00-483e-bb09-75e7853f5bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 NetworkManager[44949]: <info>  [1759847782.7328] manager: (tap002ae4bb-00): new Veth device (/org/freedesktop/NetworkManager/Devices/474)
Oct  7 10:36:22 np0005473739 systemd-udevd[382240]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.731 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[660b8f29-f4ef-4399-ae23-78ce3be927b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.772 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3a01a49b-10f8-4765-8cfa-47de3d5366a0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.776 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7809bdd5-2a26-4075-835a-b7d58a0efdf7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 NetworkManager[44949]: <info>  [1759847782.8111] device (tap002ae4bb-00): carrier: link connected
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.817 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8017b485-75bc-4236-bf18-077c8d35fabc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.836 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f46c8a-88f8-4577-a14f-7edd0813492f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap002ae4bb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:93:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 836637, 'reachable_time': 23776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382312, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.853 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[63b86e71-31e8-4735-8864-ad88cec3edcc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe46:93fa'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 836637, 'tstamp': 836637}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382314, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.871 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7d262f95-1a9b-448c-ab8c-22d8741c7cfd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap002ae4bb-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:46:93:fa'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 338], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 836637, 'reachable_time': 23776, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382315, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.899 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2d535916-cd92-488e-871d-61af50068138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.977 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[25c20c26-7afa-4e22-9ce6-15a99e079827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.979 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap002ae4bb-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.980 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:36:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:22.980 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap002ae4bb-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:23 np0005473739 kernel: tap002ae4bb-00: entered promiscuous mode
Oct  7 10:36:23 np0005473739 NetworkManager[44949]: <info>  [1759847783.0133] manager: (tap002ae4bb-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/475)
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.015 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:23.016 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap002ae4bb-00, col_values=(('external_ids', {'iface-id': '9b384c3a-3862-48f5-9fa0-a30ca1bb30cc'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:23Z|01180|binding|INFO|Releasing lport 9b384c3a-3862-48f5-9fa0-a30ca1bb30cc from this chassis (sb_readonly=0)
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:23.021 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:23.022 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1ede22-3a30-4ea3-a297-e2d73464d8ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:23.023 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-002ae4bb-0f71-4b57-99ae-0bfd304fb458
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/002ae4bb-0f71-4b57-99ae-0bfd304fb458.pid.haproxy
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 002ae4bb-0f71-4b57-99ae-0bfd304fb458
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:36:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:23.024 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'env', 'PROCESS_TAG=haproxy-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/002ae4bb-0f71-4b57-99ae-0bfd304fb458.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:36:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:36:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:36:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:36:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:36:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:36:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:36:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:36:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:36:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.123 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Successfully created port: 68807b3e-505c-41ea-96e4-f03b329e4c69 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.147 2 DEBUG oslo_concurrency.lockutils [None req-dec61ac6-279c-4d58-ba39-e246d67fb5ef d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "614c22a4-9342-4037-adb5-71c3375b8553" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 13.568s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.326 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.326 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847783.325037, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.327 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Resumed (Lifecycle Event)
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.330 2 DEBUG nova.compute.manager [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.330 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.334 2 INFO nova.virt.libvirt.driver [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance spawned successfully.
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.334 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.406 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.414 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.415 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:36:23 np0005473739 podman[382345]: 2025-10-07 14:36:23.416804452 +0000 UTC m=+0.048746133 container create e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.419 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.420 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.421 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.422 2 DEBUG nova.virt.libvirt.driver [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.429 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:36:23 np0005473739 systemd[1]: Started libpod-conmon-e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823.scope.
Oct  7 10:36:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:36:23 np0005473739 podman[382345]: 2025-10-07 14:36:23.390182631 +0000 UTC m=+0.022124332 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:36:23 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/24b9ad01db3205e9317c12e398a06f873e540a0ad996975fae67ff29c1925f78/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:36:23 np0005473739 podman[382345]: 2025-10-07 14:36:23.515200632 +0000 UTC m=+0.147142333 container init e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:36:23 np0005473739 podman[382345]: 2025-10-07 14:36:23.520985966 +0000 UTC m=+0.152927657 container start e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:36:23 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [NOTICE]   (382364) : New worker (382366) forked
Oct  7 10:36:23 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [NOTICE]   (382364) : Loading success.
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.697 2 DEBUG nova.compute.manager [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.700 2 DEBUG oslo_concurrency.lockutils [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.701 2 DEBUG oslo_concurrency.lockutils [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.701 2 DEBUG oslo_concurrency.lockutils [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.701 2 DEBUG nova.compute.manager [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.702 2 WARNING nova.compute.manager [req-4f608368-794d-4900-bc86-8c39904cae06 req-5d0ae9aa-42df-4882-aabc-c253e1ae57c0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state active and task_state rebuild_spawning.
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.719 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.720 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847783.3258421, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.720 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Started (Lifecycle Event)
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.785 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.790 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: rebuild_spawning, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.883 2 DEBUG nova.compute.manager [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.970 2 DEBUG nova.compute.manager [req-02fe9cc0-0579-40d4-a1ea-3488e6ad8f61 req-567478bf-2e66-49dd-bcb0-64864f1045e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Received event network-vif-deleted-3eb614e8-3f14-4375-bbe9-34facd5bce52 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:36:23 np0005473739 nova_compute[259550]: 2025-10-07 14:36:23.982 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] During sync_power_state the instance has a pending task (rebuild_spawning). Skip.
Oct  7 10:36:24 np0005473739 nova_compute[259550]: 2025-10-07 14:36:24.307 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:24 np0005473739 nova_compute[259550]: 2025-10-07 14:36:24.308 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:24 np0005473739 nova_compute[259550]: 2025-10-07 14:36:24.308 2 DEBUG nova.objects.instance [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Trying to apply a migration context that does not seem to be set for this instance apply_migration_context /usr/lib/python3.9/site-packages/nova/objects/instance.py:1032
Oct  7 10:36:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2219: 305 pgs: 305 active+clean; 209 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 3.3 MiB/s wr, 91 op/s
Oct  7 10:36:24 np0005473739 nova_compute[259550]: 2025-10-07 14:36:24.553 2 DEBUG oslo_concurrency.lockutils [None req-51a13cab-ad38-4a74-933e-8f1d07ad08a5 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.finish_evacuation" :: held 0.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.396 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Successfully updated port: 68807b3e-505c-41ea-96e4-f03b329e4c69 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.426 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847770.425433, 614c22a4-9342-4037-adb5-71c3375b8553 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.427 2 INFO nova.compute.manager [-] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] VM Stopped (Lifecycle Event)
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.533 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.533 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.533 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.610 2 DEBUG nova.compute.manager [None req-1af8454a-e916-4b31-8e49-c0ad0384b1ec - - - - - -] [instance: 614c22a4-9342-4037-adb5-71c3375b8553] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.782 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.834 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.834 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.835 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.835 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.835 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.835 2 WARNING nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state active and task_state None.
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.836 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-changed-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.836 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing instance network info cache due to event network-changed-72db4fd3-8171-42af-9801-69a061614ccc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.836 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.837 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.837 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Refreshing network info cache for port 72db4fd3-8171-42af-9801-69a061614ccc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.874 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.874 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.875 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.875 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.875 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.877 2 INFO nova.compute.manager [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Terminating instance
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.878 2 DEBUG nova.compute.manager [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:36:25 np0005473739 kernel: tap72db4fd3-81 (unregistering): left promiscuous mode
Oct  7 10:36:25 np0005473739 NetworkManager[44949]: <info>  [1759847785.9489] device (tap72db4fd3-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:36:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:25Z|01181|binding|INFO|Releasing lport 72db4fd3-8171-42af-9801-69a061614ccc from this chassis (sb_readonly=0)
Oct  7 10:36:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:25Z|01182|binding|INFO|Setting lport 72db4fd3-8171-42af-9801-69a061614ccc down in Southbound
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:36:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:25Z|01183|binding|INFO|Removing iface tap72db4fd3-81 ovn-installed in OVS
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:36:25 np0005473739 nova_compute[259550]: 2025-10-07 14:36:25.976 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:36:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:25.982 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4d:17:d0 10.100.0.8'], port_security=['fa:16:3e:4d:17:d0 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'df052dd5-fecd-4dd3-be36-4becc3f9f318', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c899e05d-224c-44fe-8294-eaece58d7fe7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=34b97456-8a84-4c85-bc2d-80b2de48e489, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=72db4fd3-8171-42af-9801-69a061614ccc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:36:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:25.983 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 72db4fd3-8171-42af-9801-69a061614ccc in datapath c899e05d-224c-44fe-8294-eaece58d7fe7 unbound from our chassis
Oct  7 10:36:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:25.985 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c899e05d-224c-44fe-8294-eaece58d7fe7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  7 10:36:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:25.986 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2281bdb6-6f6a-46fe-96f1-edf785bb0047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:36:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:25.986 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7 namespace which is not needed anymore
Oct  7 10:36:25 np0005473739 kernel: tap87dbfe27-94 (unregistering): left promiscuous mode
Oct  7 10:36:25 np0005473739 NetworkManager[44949]: <info>  [1759847785.9927] device (tap87dbfe27-94): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:36:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:26Z|01184|binding|INFO|Releasing lport 87dbfe27-9436-4e21-a648-df77ddfec6ca from this chassis (sb_readonly=0)
Oct  7 10:36:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:26Z|01185|binding|INFO|Setting lport 87dbfe27-9436-4e21-a648-df77ddfec6ca down in Southbound
Oct  7 10:36:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:26Z|01186|binding|INFO|Removing iface tap87dbfe27-94 ovn-installed in OVS
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.029 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:44:cb:8e 2001:db8::f816:3eff:fe44:cb8e'], port_security=['fa:16:3e:44:cb:8e 2001:db8::f816:3eff:fe44:cb8e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe44:cb8e/64', 'neutron:device_id': 'df052dd5-fecd-4dd3-be36-4becc3f9f318', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '03f8a6cc-86b5-49f0-b76e-96a2dbc54e74', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=eae04c5e-8ef6-447a-8c6c-5575a0afadaf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=87dbfe27-9436-4e21-a648-df77ddfec6ca) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:36:26 np0005473739 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Deactivated successfully.
Oct  7 10:36:26 np0005473739 systemd[1]: machine-qemu\x2d138\x2dinstance\x2d0000006e.scope: Consumed 17.403s CPU time.
Oct  7 10:36:26 np0005473739 systemd-machined[214580]: Machine qemu-138-instance-0000006e terminated.
Oct  7 10:36:26 np0005473739 NetworkManager[44949]: <info>  [1759847786.1121] manager: (tap87dbfe27-94): new Tun device (/org/freedesktop/NetworkManager/Devices/476)
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [NOTICE]   (378692) : haproxy version is 2.8.14-c23fe91
Oct  7 10:36:26 np0005473739 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [NOTICE]   (378692) : path to executable is /usr/sbin/haproxy
Oct  7 10:36:26 np0005473739 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [WARNING]  (378692) : Exiting Master process...
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.139 2 INFO nova.virt.libvirt.driver [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Instance destroyed successfully.#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.140 2 DEBUG nova.objects.instance [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid df052dd5-fecd-4dd3-be36-4becc3f9f318 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:26 np0005473739 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [ALERT]    (378692) : Current worker (378694) exited with code 143 (Terminated)
Oct  7 10:36:26 np0005473739 neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7[378688]: [WARNING]  (378692) : All workers exited. Exiting... (0)
Oct  7 10:36:26 np0005473739 systemd[1]: libpod-5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484.scope: Deactivated successfully.
Oct  7 10:36:26 np0005473739 podman[382402]: 2025-10-07 14:36:26.149911313 +0000 UTC m=+0.074318808 container died 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.157 2 DEBUG nova.virt.libvirt.vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:34:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:34:54Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.157 2 DEBUG nova.network.os_vif_util [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.206", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.158 2 DEBUG nova.network.os_vif_util [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.158 2 DEBUG os_vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.160 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72db4fd3-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.161 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.165 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.167 2 INFO os_vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4d:17:d0,bridge_name='br-int',has_traffic_filtering=True,id=72db4fd3-8171-42af-9801-69a061614ccc,network=Network(c899e05d-224c-44fe-8294-eaece58d7fe7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72db4fd3-81')#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.168 2 DEBUG nova.virt.libvirt.vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:34:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1627574273',display_name='tempest-TestGettingAddress-server-1627574273',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1627574273',id=110,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGIYaaAZ3mQQOmXfLr5nXkn0csta2VsL7z1H/QuubVfDg8QPWl9eHSGopum69HA1IoBxD3DQY/rTuk4WLD+K24Gc8aq8YeLYSN5doNVdfpGL1Yp0u83VadTWmbR4HQIaCA==',key_name='tempest-TestGettingAddress-321335689',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:34:54Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-yvllxh4l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:34:54Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=df052dd5-fecd-4dd3-be36-4becc3f9f318,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.168 2 DEBUG nova.network.os_vif_util [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.169 2 DEBUG nova.network.os_vif_util [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.170 2 DEBUG os_vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.171 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.172 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap87dbfe27-94, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.175 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.179 2 INFO os_vif [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:44:cb:8e,bridge_name='br-int',has_traffic_filtering=True,id=87dbfe27-9436-4e21-a648-df77ddfec6ca,network=Network(673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap87dbfe27-94')#033[00m
Oct  7 10:36:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484-userdata-shm.mount: Deactivated successfully.
Oct  7 10:36:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-325f40c5397e88b9b38207d1a08b9557bb0033598dc986796ff5fd8d9abcf075-merged.mount: Deactivated successfully.
Oct  7 10:36:26 np0005473739 podman[382402]: 2025-10-07 14:36:26.209815862 +0000 UTC m=+0.134223357 container cleanup 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 10:36:26 np0005473739 systemd[1]: libpod-conmon-5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484.scope: Deactivated successfully.
Oct  7 10:36:26 np0005473739 podman[382470]: 2025-10-07 14:36:26.281844588 +0000 UTC m=+0.048145408 container remove 5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.288 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[72b2b75c-5c14-483f-9d68-889d32374dc7]: (4, ('Tue Oct  7 02:36:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7 (5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484)\n5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484\nTue Oct  7 02:36:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7 (5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484)\n5f99e41dc464b7dd80718934e93b7ce6c1aaf7f266126bc5bdaa6019b7c3a484\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.290 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[33e0d967-973e-46c6-b9a8-c95e3b122a67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.291 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc899e05d-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.293 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 kernel: tapc899e05d-20: left promiscuous mode
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.301 2 DEBUG nova.compute.manager [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-unplugged-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.301 2 DEBUG oslo_concurrency.lockutils [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.302 2 DEBUG oslo_concurrency.lockutils [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.302 2 DEBUG oslo_concurrency.lockutils [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.302 2 DEBUG nova.compute.manager [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-unplugged-72db4fd3-8171-42af-9801-69a061614ccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.302 2 DEBUG nova.compute.manager [req-f0750cb4-48b5-4531-88ab-4cc16256da87 req-72d3bed9-2544-4a9f-9c8b-b1c33e5e135d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-unplugged-72db4fd3-8171-42af-9801-69a061614ccc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.314 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5854d2-5140-4a2a-8ab1-863c70d74688]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2220: 305 pgs: 305 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 955 KiB/s rd, 3.6 MiB/s wr, 128 op/s
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.343 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0cfc4d10-d442-4257-bb2e-06ece0d6de6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.344 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cb55cfa6-e883-48c1-a655-5df307bad6b1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.365 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0fcacd82-1fef-44ca-babd-079d561365c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827512, 'reachable_time': 29611, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382485, 'error': None, 'target': 'ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 systemd[1]: run-netns-ovnmeta\x2dc899e05d\x2d224c\x2d44fe\x2d8294\x2deaece58d7fe7.mount: Deactivated successfully.
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.371 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c899e05d-224c-44fe-8294-eaece58d7fe7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.371 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[649ab640-cbdd-4332-819c-432af7305cc6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.372 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 87dbfe27-9436-4e21-a648-df77ddfec6ca in datapath 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 unbound from our chassis#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.373 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.374 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9bf760ff-af3a-476f-bea1-3d3396165f13]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.374 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 namespace which is not needed anymore#033[00m
Oct  7 10:36:26 np0005473739 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [NOTICE]   (378834) : haproxy version is 2.8.14-c23fe91
Oct  7 10:36:26 np0005473739 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [NOTICE]   (378834) : path to executable is /usr/sbin/haproxy
Oct  7 10:36:26 np0005473739 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [WARNING]  (378834) : Exiting Master process...
Oct  7 10:36:26 np0005473739 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [ALERT]    (378834) : Current worker (378836) exited with code 143 (Terminated)
Oct  7 10:36:26 np0005473739 neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3[378830]: [WARNING]  (378834) : All workers exited. Exiting... (0)
Oct  7 10:36:26 np0005473739 systemd[1]: libpod-4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d.scope: Deactivated successfully.
Oct  7 10:36:26 np0005473739 podman[382504]: 2025-10-07 14:36:26.544148026 +0000 UTC m=+0.068705807 container died 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:36:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d-userdata-shm.mount: Deactivated successfully.
Oct  7 10:36:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-fd9610b23e7fa9be83c256b0b81e796a0e4842e582d51c70137e3b1797edeeac-merged.mount: Deactivated successfully.
Oct  7 10:36:26 np0005473739 podman[382504]: 2025-10-07 14:36:26.606437761 +0000 UTC m=+0.130995512 container cleanup 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:36:26 np0005473739 systemd[1]: libpod-conmon-4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d.scope: Deactivated successfully.
Oct  7 10:36:26 np0005473739 podman[382534]: 2025-10-07 14:36:26.682312468 +0000 UTC m=+0.054653851 container remove 4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37c1573c-5026-4a70-b1c0-de509aa3b25a]: (4, ('Tue Oct  7 02:36:26 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 (4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d)\n4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d\nTue Oct  7 02:36:26 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 (4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d)\n4555fb5ca796213b42ce2c841980dcb60b195b9e4b4fe7257be553eec5764d1d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.689 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a933206-1e5c-4d16-ba94-9085e80dd8a9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.690 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap673dd6ba-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 kernel: tap673dd6ba-40: left promiscuous mode
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.695 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.697 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9b3785b1-f039-4fe7-9747-87e943b41d52]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.722 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b23552dc-3a51-4f1e-a172-4617b146c5fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.723 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4bd2764-efdc-46af-a0ca-1792dacbcda7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.730 2 INFO nova.virt.libvirt.driver [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Deleting instance files /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318_del#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.731 2 INFO nova.virt.libvirt.driver [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Deletion of /var/lib/nova/instances/df052dd5-fecd-4dd3-be36-4becc3f9f318_del complete#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.738 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[06e492cd-b183-4f23-993d-b33faf5f228d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 827713, 'reachable_time': 25724, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382549, 'error': None, 'target': 'ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.740 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:36:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:26.740 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[38609ac3-3e93-4bfb-bad8-059ec253ec3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.749 2 DEBUG nova.network.neutron [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.793 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.794 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance network_info: |[{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.796 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Start _get_guest_xml network_info=[{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.800 2 WARNING nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.804 2 DEBUG nova.virt.libvirt.host [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.805 2 DEBUG nova.virt.libvirt.host [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.808 2 DEBUG nova.virt.libvirt.host [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.809 2 DEBUG nova.virt.libvirt.host [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.809 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.809 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.810 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.810 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.810 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.810 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.811 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.811 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.811 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.811 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.811 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.812 2 DEBUG nova.virt.hardware [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.814 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.860 2 INFO nova.compute.manager [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.861 2 DEBUG oslo.service.loopingcall [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.861 2 DEBUG nova.compute.manager [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:36:26 np0005473739 nova_compute[259550]: 2025-10-07 14:36:26.862 2 DEBUG nova.network.neutron [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:36:27 np0005473739 systemd[1]: run-netns-ovnmeta\x2d673dd6ba\x2d4e32\x2d4ca4\x2db9a1\x2d97a20a5e17b3.mount: Deactivated successfully.
Oct  7 10:36:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:36:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3904870245' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.277 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.300 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.303 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:36:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275341985' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.756 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.758 2 DEBUG nova.virt.libvirt.vif [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:36:20Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.758 2 DEBUG nova.network.os_vif_util [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.758 2 DEBUG nova.network.os_vif_util [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.759 2 DEBUG nova.objects.instance [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.818 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  <uuid>88e378df-94f3-4a3e-89e1-62f6de052e9d</uuid>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  <name>instance-00000072</name>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-1302175134</nova:name>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:36:26</nova:creationTime>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <nova:port uuid="68807b3e-505c-41ea-96e4-f03b329e4c69">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <entry name="serial">88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <entry name="uuid">88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:aa:2b:a2"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <target dev="tap68807b3e-50"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log" append="off"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:36:27 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:36:27 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:36:27 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:36:27 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.819 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Preparing to wait for external event network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.819 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.819 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.819 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.820 2 DEBUG nova.virt.libvirt.vif [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:36:20Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.820 2 DEBUG nova.network.os_vif_util [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.820 2 DEBUG nova.network.os_vif_util [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.821 2 DEBUG os_vif [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.822 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap68807b3e-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap68807b3e-50, col_values=(('external_ids', {'iface-id': '68807b3e-505c-41ea-96e4-f03b329e4c69', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:aa:2b:a2', 'vm-uuid': '88e378df-94f3-4a3e-89e1-62f6de052e9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:27 np0005473739 NetworkManager[44949]: <info>  [1759847787.8266] manager: (tap68807b3e-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/477)
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.831 2 INFO os_vif [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50')#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.912 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updated VIF entry in instance network info cache for port 72db4fd3-8171-42af-9801-69a061614ccc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.913 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "72db4fd3-8171-42af-9801-69a061614ccc", "address": "fa:16:3e:4d:17:d0", "network": {"id": "c899e05d-224c-44fe-8294-eaece58d7fe7", "bridge": "br-int", "label": "tempest-network-smoke--978281850", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72db4fd3-81", "ovs_interfaceid": "72db4fd3-8171-42af-9801-69a061614ccc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:27 np0005473739 podman[382615]: 2025-10-07 14:36:27.921804718 +0000 UTC m=+0.058153326 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  7 10:36:27 np0005473739 podman[382616]: 2025-10-07 14:36:27.927747216 +0000 UTC m=+0.061695049 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3)
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.945 2 DEBUG nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-unplugged-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.945 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.945 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.945 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.946 2 DEBUG nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-unplugged-87dbfe27-9436-4e21-a648-df77ddfec6ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.946 2 DEBUG nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-unplugged-87dbfe27-9436-4e21-a648-df77ddfec6ca for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.946 2 DEBUG nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.946 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.946 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.947 2 DEBUG oslo_concurrency.lockutils [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.947 2 DEBUG nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.947 2 WARNING nova.compute.manager [req-98697422-fa8a-4c1a-b7ab-f86b1ed3abbb req-b348d841-4339-4e88-8c2a-e19ab23d3912 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received unexpected event network-vif-plugged-87dbfe27-9436-4e21-a648-df77ddfec6ca for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:36:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.990 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-df052dd5-fecd-4dd3-be36-4becc3f9f318" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.990 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.991 2 DEBUG nova.compute.manager [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing instance network info cache due to event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.991 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.991 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.991 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.996 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.997 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.997 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:aa:2b:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:36:27 np0005473739 nova_compute[259550]: 2025-10-07 14:36:27.997 2 INFO nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Using config drive#033[00m
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.020 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:36:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2221: 305 pgs: 305 active+clean; 213 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 942 KiB/s rd, 3.6 MiB/s wr, 108 op/s
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.419 2 DEBUG nova.compute.manager [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.419 2 DEBUG oslo_concurrency.lockutils [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.419 2 DEBUG oslo_concurrency.lockutils [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.419 2 DEBUG oslo_concurrency.lockutils [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.420 2 DEBUG nova.compute.manager [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] No waiting events found dispatching network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.420 2 WARNING nova.compute.manager [req-ade8b577-0fba-4459-a69e-89b71189f1b7 req-03a5140a-f7ae-4d85-85e7-c242c28cbb12 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received unexpected event network-vif-plugged-72db4fd3-8171-42af-9801-69a061614ccc for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.812 2 INFO nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Creating config drive at /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config#033[00m
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.818 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprjdyl1vw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.960 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprjdyl1vw" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.985 2 DEBUG nova.storage.rbd_utils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:36:28 np0005473739 nova_compute[259550]: 2025-10-07 14:36:28.988 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:29 np0005473739 nova_compute[259550]: 2025-10-07 14:36:29.546 2 DEBUG oslo_concurrency.processutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config 88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.558s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:29 np0005473739 nova_compute[259550]: 2025-10-07 14:36:29.548 2 INFO nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Deleting local config drive /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/disk.config because it was imported into RBD.#033[00m
Oct  7 10:36:29 np0005473739 kernel: tap68807b3e-50: entered promiscuous mode
Oct  7 10:36:29 np0005473739 nova_compute[259550]: 2025-10-07 14:36:29.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:29Z|01187|binding|INFO|Claiming lport 68807b3e-505c-41ea-96e4-f03b329e4c69 for this chassis.
Oct  7 10:36:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:29Z|01188|binding|INFO|68807b3e-505c-41ea-96e4-f03b329e4c69: Claiming fa:16:3e:aa:2b:a2 10.100.0.14
Oct  7 10:36:29 np0005473739 NetworkManager[44949]: <info>  [1759847789.6031] manager: (tap68807b3e-50): new Tun device (/org/freedesktop/NetworkManager/Devices/478)
Oct  7 10:36:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:29Z|01189|binding|INFO|Setting lport 68807b3e-505c-41ea-96e4-f03b329e4c69 ovn-installed in OVS
Oct  7 10:36:29 np0005473739 nova_compute[259550]: 2025-10-07 14:36:29.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.622 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:2b:a2 10.100.0.14'], port_security=['fa:16:3e:aa:2b:a2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '88e378df-94f3-4a3e-89e1-62f6de052e9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '49bffbb9-d898-4935-821f-8756d7f43377', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3993e5e4-e68a-46c5-a740-21f6d2b71498, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=68807b3e-505c-41ea-96e4-f03b329e4c69) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:36:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:29Z|01190|binding|INFO|Setting lport 68807b3e-505c-41ea-96e4-f03b329e4c69 up in Southbound
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.624 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 68807b3e-505c-41ea-96e4-f03b329e4c69 in datapath 4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 bound to our chassis#033[00m
Oct  7 10:36:29 np0005473739 systemd-udevd[382722]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.625 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38#033[00m
Oct  7 10:36:29 np0005473739 nova_compute[259550]: 2025-10-07 14:36:29.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:29 np0005473739 NetworkManager[44949]: <info>  [1759847789.6422] device (tap68807b3e-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:36:29 np0005473739 NetworkManager[44949]: <info>  [1759847789.6428] device (tap68807b3e-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.641 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a5543e53-b67a-48a9-95d6-1843b6aa45c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.643 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4e6cb0ef-51 in ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.645 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4e6cb0ef-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.645 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7515f796-e1bb-4e73-9076-4ab39b0b5267]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[87b2e94b-60b5-4713-b52e-0042b058a4f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 systemd-machined[214580]: New machine qemu-143-instance-00000072.
Oct  7 10:36:29 np0005473739 systemd[1]: Started Virtual Machine qemu-143-instance-00000072.
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.669 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[90813cdf-b575-465e-b71d-29d7771db929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.683 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[59fdc409-ae4f-47da-8a2c-6f0fe5860815]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.714 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dd9fc1d5-d74e-47e2-8458-76263b12a2ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 systemd-udevd[382728]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:36:29 np0005473739 NetworkManager[44949]: <info>  [1759847789.7234] manager: (tap4e6cb0ef-50): new Veth device (/org/freedesktop/NetworkManager/Devices/479)
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.724 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2d97bde4-a61a-47d5-9d6a-c12e5c2e7479]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.770 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[34b1bd61-9866-4794-b120-95a4395f8a0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.774 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[97129d73-2791-4b68-843f-5b2ea2b5550e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 NetworkManager[44949]: <info>  [1759847789.8015] device (tap4e6cb0ef-50): carrier: link connected
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.807 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5d3aa783-81c5-48f1-8e07-fc702a02c6c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.829 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[434329ed-6e3e-4cdc-b2f9-fde2591844dc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e6cb0ef-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:8c:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837336, 'reachable_time': 35350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 382758, 'error': None, 'target': 'ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.846 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8e110cdd-c2c3-4f7e-9d9c-791b3f70dfdc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec7:8c23'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 837336, 'tstamp': 837336}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 382759, 'error': None, 'target': 'ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.868 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e96fcf7f-d2e4-40f0-b155-f80806f2d47e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4e6cb0ef-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c7:8c:23'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 342], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837336, 'reachable_time': 35350, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 382760, 'error': None, 'target': 'ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.904 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45dcec8e-a823-4bd2-9a7c-44fb5df13ae1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.990 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e624348d-139d-4d5e-988c-5c1f86164b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.992 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e6cb0ef-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.992 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:36:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:29.992 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4e6cb0ef-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:29 np0005473739 nova_compute[259550]: 2025-10-07 14:36:29.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:29 np0005473739 NetworkManager[44949]: <info>  [1759847789.9951] manager: (tap4e6cb0ef-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/480)
Oct  7 10:36:29 np0005473739 kernel: tap4e6cb0ef-50: entered promiscuous mode
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:29.999 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:30.001 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4e6cb0ef-50, col_values=(('external_ids', {'iface-id': '0987c9ec-179e-425c-b303-601326f99ffa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:30Z|01191|binding|INFO|Releasing lport 0987c9ec-179e-425c-b303-601326f99ffa from this chassis (sb_readonly=0)
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:30.005 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:30.005 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69d25294-764a-4554-a796-3b6c297f53b2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:30.006 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38.pid.haproxy
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:36:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:30.007 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'env', 'PROCESS_TAG=haproxy-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.019 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.091 2 DEBUG nova.compute.manager [req-5d4f7c12-b1eb-4e55-9892-a4bbef3492f1 req-ce12cd9b-d60c-4d5e-a424-afcdc63588a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-deleted-72db4fd3-8171-42af-9801-69a061614ccc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.092 2 INFO nova.compute.manager [req-5d4f7c12-b1eb-4e55-9892-a4bbef3492f1 req-ce12cd9b-d60c-4d5e-a424-afcdc63588a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Neutron deleted interface 72db4fd3-8171-42af-9801-69a061614ccc; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.092 2 DEBUG nova.network.neutron [req-5d4f7c12-b1eb-4e55-9892-a4bbef3492f1 req-ce12cd9b-d60c-4d5e-a424-afcdc63588a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [{"id": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "address": "fa:16:3e:44:cb:8e", "network": {"id": "673dd6ba-4e32-4ca4-b9a1-97a20a5e17b3", "bridge": "br-int", "label": "tempest-network-smoke--797712187", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe44:cb8e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap87dbfe27-94", "ovs_interfaceid": "87dbfe27-9436-4e21-a648-df77ddfec6ca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.171 2 DEBUG nova.compute.manager [req-5d4f7c12-b1eb-4e55-9892-a4bbef3492f1 req-ce12cd9b-d60c-4d5e-a424-afcdc63588a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Detach interface failed, port_id=72db4fd3-8171-42af-9801-69a061614ccc, reason: Instance df052dd5-fecd-4dd3-be36-4becc3f9f318 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.286 2 DEBUG nova.network.neutron [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.322 2 INFO nova.compute.manager [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Took 3.46 seconds to deallocate network for instance.#033[00m
Oct  7 10:36:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2222: 305 pgs: 305 active+clean; 152 MiB data, 824 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.6 MiB/s wr, 165 op/s
Oct  7 10:36:30 np0005473739 podman[382833]: 2025-10-07 14:36:30.383080504 +0000 UTC m=+0.074093121 container create c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:36:30 np0005473739 systemd[1]: Started libpod-conmon-c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b.scope.
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.432 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.432 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:30 np0005473739 podman[382833]: 2025-10-07 14:36:30.338999496 +0000 UTC m=+0.030012133 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:36:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:36:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ecad819b79e80b6f43b313371589ac92f170aa571899affb8f76f01510b64cf5/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:36:30 np0005473739 podman[382833]: 2025-10-07 14:36:30.4708623 +0000 UTC m=+0.161875097 container init c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  7 10:36:30 np0005473739 podman[382833]: 2025-10-07 14:36:30.476624344 +0000 UTC m=+0.167636961 container start c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.490 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847790.4896224, 88e378df-94f3-4a3e-89e1-62f6de052e9d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.490 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] VM Started (Lifecycle Event)#033[00m
Oct  7 10:36:30 np0005473739 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [NOTICE]   (382852) : New worker (382854) forked
Oct  7 10:36:30 np0005473739 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [NOTICE]   (382852) : Loading success.
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.539 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.543 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847790.490437, 88e378df-94f3-4a3e-89e1-62f6de052e9d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.543 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.555 2 DEBUG oslo_concurrency.processutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.594 2 DEBUG nova.compute.manager [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.595 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.595 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.595 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.595 2 DEBUG nova.compute.manager [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Processing event network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.596 2 DEBUG nova.compute.manager [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.596 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.596 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.596 2 DEBUG oslo_concurrency.lockutils [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.596 2 DEBUG nova.compute.manager [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] No waiting events found dispatching network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.596 2 WARNING nova.compute.manager [req-c19b0c76-9d94-4d3a-b1ac-77c9c48e0d36 req-da3e25a1-b349-44c5-ac1d-52fededb74d8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received unexpected event network-vif-plugged-68807b3e-505c-41ea-96e4-f03b329e4c69 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.597 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.606 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.614 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.622 2 INFO nova.virt.libvirt.driver [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance spawned successfully.#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.623 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.626 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updated VIF entry in instance network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.627 2 DEBUG nova.network.neutron [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.630 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847790.6151118, 88e378df-94f3-4a3e-89e1-62f6de052e9d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.631 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.815 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.816 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.817 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.817 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.817 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.818 2 DEBUG nova.virt.libvirt.driver [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.821 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.822 2 DEBUG oslo_concurrency.lockutils [req-6f756510-b932-4d04-9b16-c252df979e14 req-d8b5e807-f455-49ad-90e5-44d68c4b6de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:36:30 np0005473739 nova_compute[259550]: 2025-10-07 14:36:30.827 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:36:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:36:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4013827441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:36:31 np0005473739 nova_compute[259550]: 2025-10-07 14:36:31.032 2 DEBUG oslo_concurrency.processutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:31 np0005473739 nova_compute[259550]: 2025-10-07 14:36:31.037 2 DEBUG nova.compute.provider_tree [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:36:31 np0005473739 nova_compute[259550]: 2025-10-07 14:36:31.041 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:36:31 np0005473739 nova_compute[259550]: 2025-10-07 14:36:31.166 2 DEBUG nova.scheduler.client.report [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:36:31 np0005473739 nova_compute[259550]: 2025-10-07 14:36:31.226 2 INFO nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Took 10.78 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:36:31 np0005473739 nova_compute[259550]: 2025-10-07 14:36:31.227 2 DEBUG nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:36:31 np0005473739 nova_compute[259550]: 2025-10-07 14:36:31.347 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.915s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:31 np0005473739 nova_compute[259550]: 2025-10-07 14:36:31.436 2 INFO nova.scheduler.client.report [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance df052dd5-fecd-4dd3-be36-4becc3f9f318#033[00m
Oct  7 10:36:31 np0005473739 nova_compute[259550]: 2025-10-07 14:36:31.500 2 INFO nova.compute.manager [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Took 13.97 seconds to build instance.#033[00m
Oct  7 10:36:31 np0005473739 nova_compute[259550]: 2025-10-07 14:36:31.601 2 DEBUG oslo_concurrency.lockutils [None req-f3928c2d-9af1-4360-b720-4d379037d626 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:31 np0005473739 nova_compute[259550]: 2025-10-07 14:36:31.882 2 DEBUG oslo_concurrency.lockutils [None req-e65aedf2-d89e-42af-b182-1c9f79febcf6 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df052dd5-fecd-4dd3-be36-4becc3f9f318" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.008s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2223: 305 pgs: 305 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 143 op/s
Oct  7 10:36:32 np0005473739 nova_compute[259550]: 2025-10-07 14:36:32.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006966362041739923 of space, bias 1.0, pg target 0.2089908612521977 quantized to 32 (current 32)
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:36:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:36:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:36:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1711557554' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:36:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:36:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1711557554' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:36:32 np0005473739 nova_compute[259550]: 2025-10-07 14:36:32.862 2 DEBUG nova.compute.manager [req-a095efcd-855e-4fb2-b116-5e8ac53eeb8c req-153f54ee-4bac-4deb-8aec-a743e736eb41 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Received event network-vif-deleted-87dbfe27-9436-4e21-a648-df77ddfec6ca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:32 np0005473739 nova_compute[259550]: 2025-10-07 14:36:32.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2224: 305 pgs: 305 active+clean; 134 MiB data, 810 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 1.8 MiB/s wr, 191 op/s
Oct  7 10:36:35 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  7 10:36:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:35Z|00129|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:84:dc:c4 10.100.0.11
Oct  7 10:36:35 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:35Z|00130|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:84:dc:c4 10.100.0.11
Oct  7 10:36:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2225: 305 pgs: 305 active+clean; 148 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.5 MiB/s wr, 202 op/s
Oct  7 10:36:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:36.781 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=36, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=35) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:36:36 np0005473739 nova_compute[259550]: 2025-10-07 14:36:36.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:36.784 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:36:37 np0005473739 nova_compute[259550]: 2025-10-07 14:36:37.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:37 np0005473739 nova_compute[259550]: 2025-10-07 14:36:37.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2226: 305 pgs: 305 active+clean; 148 MiB data, 821 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.2 MiB/s wr, 150 op/s
Oct  7 10:36:40 np0005473739 nova_compute[259550]: 2025-10-07 14:36:40.050 2 DEBUG nova.compute.manager [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:40 np0005473739 nova_compute[259550]: 2025-10-07 14:36:40.051 2 DEBUG nova.compute.manager [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing instance network info cache due to event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:36:40 np0005473739 nova_compute[259550]: 2025-10-07 14:36:40.051 2 DEBUG oslo_concurrency.lockutils [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:36:40 np0005473739 nova_compute[259550]: 2025-10-07 14:36:40.052 2 DEBUG oslo_concurrency.lockutils [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:36:40 np0005473739 nova_compute[259550]: 2025-10-07 14:36:40.052 2 DEBUG nova.network.neutron [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:36:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2227: 305 pgs: 305 active+clean; 167 MiB data, 853 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 2.1 MiB/s wr, 192 op/s
Oct  7 10:36:41 np0005473739 nova_compute[259550]: 2025-10-07 14:36:41.131 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847786.1303074, df052dd5-fecd-4dd3-be36-4becc3f9f318 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:36:41 np0005473739 nova_compute[259550]: 2025-10-07 14:36:41.132 2 INFO nova.compute.manager [-] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:36:41 np0005473739 nova_compute[259550]: 2025-10-07 14:36:41.218 2 DEBUG nova.compute.manager [None req-32884433-5cb7-4bbd-aa64-8a2c5fd690f2 - - - - - -] [instance: df052dd5-fecd-4dd3-be36-4becc3f9f318] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:36:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:41Z|01192|binding|INFO|Releasing lport 0987c9ec-179e-425c-b303-601326f99ffa from this chassis (sb_readonly=0)
Oct  7 10:36:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:41Z|01193|binding|INFO|Releasing lport 9b384c3a-3862-48f5-9fa0-a30ca1bb30cc from this chassis (sb_readonly=0)
Oct  7 10:36:41 np0005473739 nova_compute[259550]: 2025-10-07 14:36:41.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:41 np0005473739 nova_compute[259550]: 2025-10-07 14:36:41.786 2 INFO nova.compute.manager [None req-fa4f1ef0-f045-4664-846a-2f196b772149 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Get console output#033[00m
Oct  7 10:36:41 np0005473739 nova_compute[259550]: 2025-10-07 14:36:41.790 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:36:41 np0005473739 nova_compute[259550]: 2025-10-07 14:36:41.900 2 DEBUG nova.network.neutron [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updated VIF entry in instance network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:36:41 np0005473739 nova_compute[259550]: 2025-10-07 14:36:41.901 2 DEBUG nova.network.neutron [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:42 np0005473739 podman[382886]: 2025-10-07 14:36:42.080424931 +0000 UTC m=+0.062229593 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:36:42 np0005473739 podman[382887]: 2025-10-07 14:36:42.109075607 +0000 UTC m=+0.087903250 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  7 10:36:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2228: 305 pgs: 305 active+clean; 167 MiB data, 861 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 141 op/s
Oct  7 10:36:42 np0005473739 nova_compute[259550]: 2025-10-07 14:36:42.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:42 np0005473739 nova_compute[259550]: 2025-10-07 14:36:42.869 2 DEBUG oslo_concurrency.lockutils [req-eda01b30-8284-42ed-8e6e-64c9ddc16b2c req-7246d0f6-a68c-432a-9833-76833c6b3316 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:36:42 np0005473739 nova_compute[259550]: 2025-10-07 14:36:42.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:43.786 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '36'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:43 np0005473739 nova_compute[259550]: 2025-10-07 14:36:43.785 2 DEBUG nova.compute.manager [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:43 np0005473739 nova_compute[259550]: 2025-10-07 14:36:43.786 2 DEBUG nova.compute.manager [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing instance network info cache due to event network-changed-84baaee6-3f89-4d61-aaea-507e46e65618. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:36:43 np0005473739 nova_compute[259550]: 2025-10-07 14:36:43.786 2 DEBUG oslo_concurrency.lockutils [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:36:43 np0005473739 nova_compute[259550]: 2025-10-07 14:36:43.786 2 DEBUG oslo_concurrency.lockutils [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:36:43 np0005473739 nova_compute[259550]: 2025-10-07 14:36:43.786 2 DEBUG nova.network.neutron [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Refreshing network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:36:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:44Z|00131|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:aa:2b:a2 10.100.0.14
Oct  7 10:36:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:44Z|00132|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:aa:2b:a2 10.100.0.14
Oct  7 10:36:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2229: 305 pgs: 305 active+clean; 199 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 4.3 MiB/s wr, 189 op/s
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.565 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.565 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.565 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.566 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.566 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.567 2 INFO nova.compute.manager [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Terminating instance#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.568 2 DEBUG nova.compute.manager [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:36:44 np0005473739 kernel: tap84baaee6-3f (unregistering): left promiscuous mode
Oct  7 10:36:44 np0005473739 NetworkManager[44949]: <info>  [1759847804.6260] device (tap84baaee6-3f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:36:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:44Z|01194|binding|INFO|Releasing lport 84baaee6-3f89-4d61-aaea-507e46e65618 from this chassis (sb_readonly=0)
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.641 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:44Z|01195|binding|INFO|Setting lport 84baaee6-3f89-4d61-aaea-507e46e65618 down in Southbound
Oct  7 10:36:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:44Z|01196|binding|INFO|Removing iface tap84baaee6-3f ovn-installed in OVS
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:44 np0005473739 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Deactivated successfully.
Oct  7 10:36:44 np0005473739 systemd[1]: machine-qemu\x2d142\x2dinstance\x2d00000071.scope: Consumed 13.621s CPU time.
Oct  7 10:36:44 np0005473739 systemd-machined[214580]: Machine qemu-142-instance-00000071 terminated.
Oct  7 10:36:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:44.732 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:84:dc:c4 10.100.0.11'], port_security=['fa:16:3e:84:dc:c4 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '97d6fa92-e050-4cce-a368-3065ace6996b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=51e8a86b-883d-4b18-ac5a-841f1cc14111, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=84baaee6-3f89-4d61-aaea-507e46e65618) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:36:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:44.733 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 84baaee6-3f89-4d61-aaea-507e46e65618 in datapath 002ae4bb-0f71-4b57-99ae-0bfd304fb458 unbound from our chassis#033[00m
Oct  7 10:36:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:44.734 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 002ae4bb-0f71-4b57-99ae-0bfd304fb458, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:36:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:44.735 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e764fa07-52b4-4d8d-99c8-ca359b12c3f0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:44.735 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 namespace which is not needed anymore#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.805 2 INFO nova.virt.libvirt.driver [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Instance destroyed successfully.#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.805 2 DEBUG nova.objects.instance [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:44 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [NOTICE]   (382364) : haproxy version is 2.8.14-c23fe91
Oct  7 10:36:44 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [NOTICE]   (382364) : path to executable is /usr/sbin/haproxy
Oct  7 10:36:44 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [WARNING]  (382364) : Exiting Master process...
Oct  7 10:36:44 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [ALERT]    (382364) : Current worker (382366) exited with code 143 (Terminated)
Oct  7 10:36:44 np0005473739 neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458[382360]: [WARNING]  (382364) : All workers exited. Exiting... (0)
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.889 2 DEBUG nova.virt.libvirt.vif [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:35:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-2036986690',display_name='tempest-TestNetworkAdvancedServerOps-server-2036986690',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-2036986690',id=113,image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJWZftNkY/zVHCXSaics6F8ZT1EfbDe33oEET2vK0gP7Xemq47ftAGCjttUGmoEEk2tSMFrst5lmpCcJn9l+9uZ/tfDJY40RL0sC5x3TjuIWq3RsYwCZVLYWuqz+xMOnKw==',key_name='tempest-TestNetworkAdvancedServerOps-1798249152',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-ha5sygpa',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='d37bdf89-ce37-478a-af4d-2b9cd0435b79',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:24Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.890 2 DEBUG nova.network.os_vif_util [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.232", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.890 2 DEBUG nova.network.os_vif_util [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:44 np0005473739 systemd[1]: libpod-e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823.scope: Deactivated successfully.
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.891 2 DEBUG os_vif [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:36:44 np0005473739 conmon[382360]: conmon e8169b3facf70ab175b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823.scope/container/memory.events
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.892 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.893 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap84baaee6-3f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:44 np0005473739 podman[382966]: 2025-10-07 14:36:44.898158093 +0000 UTC m=+0.055131355 container died e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.899 2 INFO os_vif [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:84:dc:c4,bridge_name='br-int',has_traffic_filtering=True,id=84baaee6-3f89-4d61-aaea-507e46e65618,network=Network(002ae4bb-0f71-4b57-99ae-0bfd304fb458),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap84baaee6-3f')#033[00m
Oct  7 10:36:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823-userdata-shm.mount: Deactivated successfully.
Oct  7 10:36:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay-24b9ad01db3205e9317c12e398a06f873e540a0ad996975fae67ff29c1925f78-merged.mount: Deactivated successfully.
Oct  7 10:36:44 np0005473739 podman[382966]: 2025-10-07 14:36:44.980378249 +0000 UTC m=+0.137351491 container cleanup e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:44 np0005473739 nova_compute[259550]: 2025-10-07 14:36:44.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 10:36:44 np0005473739 systemd[1]: libpod-conmon-e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823.scope: Deactivated successfully.
Oct  7 10:36:45 np0005473739 nova_compute[259550]: 2025-10-07 14:36:45.015 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 10:36:45 np0005473739 nova_compute[259550]: 2025-10-07 14:36:45.016 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:45 np0005473739 nova_compute[259550]: 2025-10-07 14:36:45.016 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 10:36:45 np0005473739 podman[383016]: 2025-10-07 14:36:45.062595156 +0000 UTC m=+0.059989964 container remove e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:36:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.069 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[290a118d-1396-46c9-aaca-f3d7cf6782c2]: (4, ('Tue Oct  7 02:36:44 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 (e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823)\ne8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823\nTue Oct  7 02:36:44 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 (e8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823)\ne8169b3facf70ab175b185a78c72a37d49779b3892721d11128aaeaf7fbd8823\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.072 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[90dd67a1-946c-4c59-ae41-b6c581d80369]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.073 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap002ae4bb-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:45 np0005473739 nova_compute[259550]: 2025-10-07 14:36:45.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:45 np0005473739 kernel: tap002ae4bb-00: left promiscuous mode
Oct  7 10:36:45 np0005473739 nova_compute[259550]: 2025-10-07 14:36:45.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.096 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8b42331d-b60b-4469-9e4c-f1d9378255c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.131 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0031962e-9fb2-4ab0-aae4-f6e0e478f0ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.133 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2955fa94-0e9a-4cb5-9608-178a6b832237]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.151 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3cb6c0ec-0233-47c1-a4f5-c301d0e8ef5e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 836628, 'reachable_time': 35248, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383031, 'error': None, 'target': 'ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.155 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-002ae4bb-0f71-4b57-99ae-0bfd304fb458 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:36:45 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:45.155 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[0203227d-8505-4f9b-b278-92e6672e3773]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:45 np0005473739 systemd[1]: run-netns-ovnmeta\x2d002ae4bb\x2d0f71\x2d4b57\x2d99ae\x2d0bfd304fb458.mount: Deactivated successfully.
Oct  7 10:36:45 np0005473739 nova_compute[259550]: 2025-10-07 14:36:45.490 2 INFO nova.virt.libvirt.driver [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deleting instance files /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_del#033[00m
Oct  7 10:36:45 np0005473739 nova_compute[259550]: 2025-10-07 14:36:45.491 2 INFO nova.virt.libvirt.driver [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deletion of /var/lib/nova/instances/e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb_del complete#033[00m
Oct  7 10:36:45 np0005473739 nova_compute[259550]: 2025-10-07 14:36:45.619 2 INFO nova.compute.manager [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Took 1.05 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:36:45 np0005473739 nova_compute[259550]: 2025-10-07 14:36:45.619 2 DEBUG oslo.service.loopingcall [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:36:45 np0005473739 nova_compute[259550]: 2025-10-07 14:36:45.619 2 DEBUG nova.compute.manager [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:36:45 np0005473739 nova_compute[259550]: 2025-10-07 14:36:45.620 2 DEBUG nova.network.neutron [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:36:46 np0005473739 nova_compute[259550]: 2025-10-07 14:36:46.170 2 DEBUG nova.network.neutron [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updated VIF entry in instance network info cache for port 84baaee6-3f89-4d61-aaea-507e46e65618. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:36:46 np0005473739 nova_compute[259550]: 2025-10-07 14:36:46.171 2 DEBUG nova.network.neutron [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updating instance_info_cache with network_info: [{"id": "84baaee6-3f89-4d61-aaea-507e46e65618", "address": "fa:16:3e:84:dc:c4", "network": {"id": "002ae4bb-0f71-4b57-99ae-0bfd304fb458", "bridge": "br-int", "label": "tempest-network-smoke--646189904", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap84baaee6-3f", "ovs_interfaceid": "84baaee6-3f89-4d61-aaea-507e46e65618", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:46 np0005473739 nova_compute[259550]: 2025-10-07 14:36:46.176 2 DEBUG nova.compute.manager [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:46 np0005473739 nova_compute[259550]: 2025-10-07 14:36:46.176 2 DEBUG oslo_concurrency.lockutils [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:46 np0005473739 nova_compute[259550]: 2025-10-07 14:36:46.176 2 DEBUG oslo_concurrency.lockutils [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:46 np0005473739 nova_compute[259550]: 2025-10-07 14:36:46.177 2 DEBUG oslo_concurrency.lockutils [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:46 np0005473739 nova_compute[259550]: 2025-10-07 14:36:46.177 2 DEBUG nova.compute.manager [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:36:46 np0005473739 nova_compute[259550]: 2025-10-07 14:36:46.177 2 DEBUG nova.compute.manager [req-0202701a-570a-4672-b2d7-a4c60066eddf req-2eca5971-91d3-4589-b0f8-224e0f25b612 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-unplugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:36:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2230: 305 pgs: 305 active+clean; 163 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 1011 KiB/s rd, 4.3 MiB/s wr, 158 op/s
Oct  7 10:36:46 np0005473739 nova_compute[259550]: 2025-10-07 14:36:46.555 2 DEBUG oslo_concurrency.lockutils [req-fc7c72ca-e209-4462-984a-426257699671 req-a61096d1-5d60-4890-8dbb-d36ad865c830 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:36:47 np0005473739 nova_compute[259550]: 2025-10-07 14:36:47.273 2 DEBUG nova.network.neutron [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:47 np0005473739 nova_compute[259550]: 2025-10-07 14:36:47.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:47 np0005473739 nova_compute[259550]: 2025-10-07 14:36:47.462 2 INFO nova.compute.manager [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Took 1.84 seconds to deallocate network for instance.#033[00m
Oct  7 10:36:47 np0005473739 nova_compute[259550]: 2025-10-07 14:36:47.767 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:47 np0005473739 nova_compute[259550]: 2025-10-07 14:36:47.768 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:48 np0005473739 nova_compute[259550]: 2025-10-07 14:36:48.112 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2231: 305 pgs: 305 active+clean; 163 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 605 KiB/s rd, 3.1 MiB/s wr, 132 op/s
Oct  7 10:36:48 np0005473739 nova_compute[259550]: 2025-10-07 14:36:48.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:48 np0005473739 nova_compute[259550]: 2025-10-07 14:36:48.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:48 np0005473739 nova_compute[259550]: 2025-10-07 14:36:48.981 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:36:49 np0005473739 nova_compute[259550]: 2025-10-07 14:36:49.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:49 np0005473739 nova_compute[259550]: 2025-10-07 14:36:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:50 np0005473739 nova_compute[259550]: 2025-10-07 14:36:50.150 2 DEBUG nova.compute.manager [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:50 np0005473739 nova_compute[259550]: 2025-10-07 14:36:50.150 2 DEBUG oslo_concurrency.lockutils [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:50 np0005473739 nova_compute[259550]: 2025-10-07 14:36:50.151 2 DEBUG oslo_concurrency.lockutils [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:50 np0005473739 nova_compute[259550]: 2025-10-07 14:36:50.151 2 DEBUG oslo_concurrency.lockutils [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:50 np0005473739 nova_compute[259550]: 2025-10-07 14:36:50.151 2 DEBUG nova.compute.manager [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] No waiting events found dispatching network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:36:50 np0005473739 nova_compute[259550]: 2025-10-07 14:36:50.151 2 WARNING nova.compute.manager [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received unexpected event network-vif-plugged-84baaee6-3f89-4d61-aaea-507e46e65618 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:36:50 np0005473739 nova_compute[259550]: 2025-10-07 14:36:50.152 2 DEBUG nova.compute.manager [req-ee39f3bb-878f-4716-8773-dc121d132f88 req-496c8766-8c1f-415b-9f17-ba6733612ede 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Received event network-vif-deleted-84baaee6-3f89-4d61-aaea-507e46e65618 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2232: 305 pgs: 305 active+clean; 121 MiB data, 869 MiB used, 59 GiB / 60 GiB avail; 609 KiB/s rd, 3.1 MiB/s wr, 139 op/s
Oct  7 10:36:50 np0005473739 nova_compute[259550]: 2025-10-07 14:36:50.704 2 DEBUG oslo_concurrency.processutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:36:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2870104604' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:36:51 np0005473739 nova_compute[259550]: 2025-10-07 14:36:51.143 2 DEBUG oslo_concurrency.processutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:51 np0005473739 nova_compute[259550]: 2025-10-07 14:36:51.149 2 DEBUG nova.compute.provider_tree [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:36:51 np0005473739 nova_compute[259550]: 2025-10-07 14:36:51.226 2 DEBUG nova.scheduler.client.report [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:36:51 np0005473739 nova_compute[259550]: 2025-10-07 14:36:51.322 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 3.554s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:51 np0005473739 nova_compute[259550]: 2025-10-07 14:36:51.384 2 INFO nova.compute.manager [None req-eb712208-a59c-44e0-81fe-e2d656b88698 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Get console output#033[00m
Oct  7 10:36:51 np0005473739 nova_compute[259550]: 2025-10-07 14:36:51.386 2 INFO nova.scheduler.client.report [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Deleted allocations for instance e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb#033[00m
Oct  7 10:36:51 np0005473739 nova_compute[259550]: 2025-10-07 14:36:51.393 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:36:51 np0005473739 nova_compute[259550]: 2025-10-07 14:36:51.640 2 DEBUG oslo_concurrency.lockutils [None req-8c3c2716-20fd-4022-975d-cd43e1e25681 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.075s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:51 np0005473739 nova_compute[259550]: 2025-10-07 14:36:51.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2233: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 385 KiB/s rd, 2.2 MiB/s wr, 98 op/s
Oct  7 10:36:52 np0005473739 nova_compute[259550]: 2025-10-07 14:36:52.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:36:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:36:52 np0005473739 nova_compute[259550]: 2025-10-07 14:36:52.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:54 np0005473739 nova_compute[259550]: 2025-10-07 14:36:54.248 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:54 np0005473739 nova_compute[259550]: 2025-10-07 14:36:54.249 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:54 np0005473739 nova_compute[259550]: 2025-10-07 14:36:54.249 2 DEBUG nova.objects.instance [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'flavor' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2234: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 347 KiB/s rd, 2.2 MiB/s wr, 92 op/s
Oct  7 10:36:54 np0005473739 nova_compute[259550]: 2025-10-07 14:36:54.770 2 DEBUG nova.objects.instance [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_requests' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:36:54 np0005473739 nova_compute[259550]: 2025-10-07 14:36:54.786 2 DEBUG nova.network.neutron [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:36:54 np0005473739 nova_compute[259550]: 2025-10-07 14:36:54.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:55 np0005473739 nova_compute[259550]: 2025-10-07 14:36:55.067 2 DEBUG nova.policy [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:36:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:55Z|01197|binding|INFO|Releasing lport 0987c9ec-179e-425c-b303-601326f99ffa from this chassis (sb_readonly=0)
Oct  7 10:36:55 np0005473739 nova_compute[259550]: 2025-10-07 14:36:55.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:55 np0005473739 nova_compute[259550]: 2025-10-07 14:36:55.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:56 np0005473739 nova_compute[259550]: 2025-10-07 14:36:56.015 2 DEBUG nova.network.neutron [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Successfully created port: 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:36:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2235: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 26 KiB/s wr, 37 op/s
Oct  7 10:36:56 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 5b66d0e2-a07a-49e8-af52-fd83b17e16dc does not exist
Oct  7 10:36:56 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev aa7bf53d-831a-4f3b-a601-dfc22a5428da does not exist
Oct  7 10:36:56 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ef6bbd86-0508-4a64-95a7-56e42cf97f42 does not exist
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:36:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:36:56 np0005473739 podman[383325]: 2025-10-07 14:36:56.886821323 +0000 UTC m=+0.038430248 container create 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507)
Oct  7 10:36:56 np0005473739 systemd[1]: Started libpod-conmon-045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574.scope.
Oct  7 10:36:56 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:36:56 np0005473739 podman[383325]: 2025-10-07 14:36:56.964217901 +0000 UTC m=+0.115826876 container init 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 10:36:56 np0005473739 podman[383325]: 2025-10-07 14:36:56.870837126 +0000 UTC m=+0.022446071 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:36:56 np0005473739 podman[383325]: 2025-10-07 14:36:56.971563808 +0000 UTC m=+0.123172743 container start 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:36:56 np0005473739 podman[383325]: 2025-10-07 14:36:56.975696468 +0000 UTC m=+0.127305403 container attach 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:36:56 np0005473739 quizzical_golick[383341]: 167 167
Oct  7 10:36:56 np0005473739 systemd[1]: libpod-045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574.scope: Deactivated successfully.
Oct  7 10:36:56 np0005473739 podman[383325]: 2025-10-07 14:36:56.977186198 +0000 UTC m=+0.128795133 container died 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 10:36:57 np0005473739 systemd[1]: var-lib-containers-storage-overlay-769e7cdae2427f47e94fb1708964917c9b8741f991173f9b40e7245ba6798491-merged.mount: Deactivated successfully.
Oct  7 10:36:57 np0005473739 podman[383325]: 2025-10-07 14:36:57.019901489 +0000 UTC m=+0.171510414 container remove 045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_golick, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:36:57 np0005473739 systemd[1]: libpod-conmon-045669c3af6db06adddbd034bbf903b9eaaf72d8f2dbd16ef98af0d672155574.scope: Deactivated successfully.
Oct  7 10:36:57 np0005473739 podman[383365]: 2025-10-07 14:36:57.183684795 +0000 UTC m=+0.041860100 container create fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 10:36:57 np0005473739 systemd[1]: Started libpod-conmon-fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa.scope.
Oct  7 10:36:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:36:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:36:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:36:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:36:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:36:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:36:57 np0005473739 podman[383365]: 2025-10-07 14:36:57.163793294 +0000 UTC m=+0.021968609 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:36:57 np0005473739 podman[383365]: 2025-10-07 14:36:57.268464781 +0000 UTC m=+0.126640106 container init fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 10:36:57 np0005473739 podman[383365]: 2025-10-07 14:36:57.276826574 +0000 UTC m=+0.135001889 container start fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:36:57 np0005473739 nova_compute[259550]: 2025-10-07 14:36:57.277 2 DEBUG nova.network.neutron [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Successfully updated port: 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:36:57 np0005473739 podman[383365]: 2025-10-07 14:36:57.281504549 +0000 UTC m=+0.139679854 container attach fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:36:57 np0005473739 nova_compute[259550]: 2025-10-07 14:36:57.337 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:36:57 np0005473739 nova_compute[259550]: 2025-10-07 14:36:57.338 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:36:57 np0005473739 nova_compute[259550]: 2025-10-07 14:36:57.338 2 DEBUG nova.network.neutron [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:36:57 np0005473739 nova_compute[259550]: 2025-10-07 14:36:57.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:57 np0005473739 nova_compute[259550]: 2025-10-07 14:36:57.402 2 DEBUG nova.compute.manager [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-changed-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:36:57 np0005473739 nova_compute[259550]: 2025-10-07 14:36:57.403 2 DEBUG nova.compute.manager [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing instance network info cache due to event network-changed-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:36:57 np0005473739 nova_compute[259550]: 2025-10-07 14:36:57.403 2 DEBUG oslo_concurrency.lockutils [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:36:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:36:58 np0005473739 podman[383389]: 2025-10-07 14:36:58.09026215 +0000 UTC m=+0.070657030 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:36:58 np0005473739 podman[383391]: 2025-10-07 14:36:58.11983906 +0000 UTC m=+0.099004317 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:36:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2236: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 4.1 KiB/s rd, 12 KiB/s wr, 7 op/s
Oct  7 10:36:58 np0005473739 compassionate_gates[383382]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:36:58 np0005473739 compassionate_gates[383382]: --> relative data size: 1.0
Oct  7 10:36:58 np0005473739 compassionate_gates[383382]: --> All data devices are unavailable
Oct  7 10:36:58 np0005473739 systemd[1]: libpod-fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa.scope: Deactivated successfully.
Oct  7 10:36:58 np0005473739 podman[383365]: 2025-10-07 14:36:58.398615129 +0000 UTC m=+1.256790444 container died fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 10:36:58 np0005473739 systemd[1]: libpod-fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa.scope: Consumed 1.047s CPU time.
Oct  7 10:36:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay-befdcde83218d98f4d57db52b16b5edff2eb4a7f36351704b4d4d9ed9844db93-merged.mount: Deactivated successfully.
Oct  7 10:36:58 np0005473739 podman[383365]: 2025-10-07 14:36:58.468387453 +0000 UTC m=+1.326562758 container remove fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_gates, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:36:58 np0005473739 systemd[1]: libpod-conmon-fb2cca0798f67e0ef0f5f4ff474f22370ae28d4371547370a281d339e7f2fafa.scope: Deactivated successfully.
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.018 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.018 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:36:59 np0005473739 podman[383601]: 2025-10-07 14:36:59.126399976 +0000 UTC m=+0.039385334 container create ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:36:59 np0005473739 systemd[1]: Started libpod-conmon-ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd.scope.
Oct  7 10:36:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:36:59 np0005473739 podman[383601]: 2025-10-07 14:36:59.194398332 +0000 UTC m=+0.107383710 container init ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:36:59 np0005473739 podman[383601]: 2025-10-07 14:36:59.204146683 +0000 UTC m=+0.117132041 container start ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 10:36:59 np0005473739 podman[383601]: 2025-10-07 14:36:59.110373597 +0000 UTC m=+0.023358985 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:36:59 np0005473739 podman[383601]: 2025-10-07 14:36:59.207827841 +0000 UTC m=+0.120813219 container attach ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:36:59 np0005473739 elastic_bartik[383617]: 167 167
Oct  7 10:36:59 np0005473739 systemd[1]: libpod-ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd.scope: Deactivated successfully.
Oct  7 10:36:59 np0005473739 podman[383601]: 2025-10-07 14:36:59.211205901 +0000 UTC m=+0.124191289 container died ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.212 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.213 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:36:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4d20b9d558944f4a674bd08156c4e2afb7e3625288eb856259adffd6e036a5e6-merged.mount: Deactivated successfully.
Oct  7 10:36:59 np0005473739 podman[383601]: 2025-10-07 14:36:59.247482591 +0000 UTC m=+0.160467949 container remove ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_bartik, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 10:36:59 np0005473739 systemd[1]: libpod-conmon-ea01a104fd7f901c70d37169a99d36271db8d0fccdb0afa2e2fbd1927ab1d7cd.scope: Deactivated successfully.
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.340 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.340 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.340 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.340 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.341 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.415 2 DEBUG nova.network.neutron [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:36:59 np0005473739 podman[383643]: 2025-10-07 14:36:59.419837256 +0000 UTC m=+0.043503163 container create d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:36:59 np0005473739 systemd[1]: Started libpod-conmon-d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7.scope.
Oct  7 10:36:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:36:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e8819c438856d4b1ab9917b5376f805e07b6816f43e741fd94b01c74e0cc97/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:36:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e8819c438856d4b1ab9917b5376f805e07b6816f43e741fd94b01c74e0cc97/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:36:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e8819c438856d4b1ab9917b5376f805e07b6816f43e741fd94b01c74e0cc97/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:36:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/59e8819c438856d4b1ab9917b5376f805e07b6816f43e741fd94b01c74e0cc97/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:36:59 np0005473739 podman[383643]: 2025-10-07 14:36:59.487735701 +0000 UTC m=+0.111401608 container init d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 10:36:59 np0005473739 podman[383643]: 2025-10-07 14:36:59.496665739 +0000 UTC m=+0.120331646 container start d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 10:36:59 np0005473739 podman[383643]: 2025-10-07 14:36:59.402180765 +0000 UTC m=+0.025846702 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:36:59 np0005473739 podman[383643]: 2025-10-07 14:36:59.500238615 +0000 UTC m=+0.123904522 container attach d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.603 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.605 2 DEBUG oslo_concurrency.lockutils [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.605 2 DEBUG nova.network.neutron [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing network info cache for port 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.608 2 DEBUG nova.virt.libvirt.vif [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:31Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.609 2 DEBUG nova.network.os_vif_util [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.610 2 DEBUG nova.network.os_vif_util [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.610 2 DEBUG os_vif [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.611 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.612 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap444fd1ba-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.616 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap444fd1ba-91, col_values=(('external_ids', {'iface-id': '444fd1ba-91db-40d4-95c0-a6ec1fb6ce49', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7c:26:98', 'vm-uuid': '88e378df-94f3-4a3e-89e1-62f6de052e9d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:59 np0005473739 NetworkManager[44949]: <info>  [1759847819.6192] manager: (tap444fd1ba-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/481)
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.629 2 INFO os_vif [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91')#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.630 2 DEBUG nova.virt.libvirt.vif [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:31Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.630 2 DEBUG nova.network.os_vif_util [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.631 2 DEBUG nova.network.os_vif_util [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.634 2 DEBUG nova.virt.libvirt.guest [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] attach device xml: <interface type="ethernet">
Oct  7 10:36:59 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:7c:26:98"/>
Oct  7 10:36:59 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:36:59 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:36:59 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:36:59 np0005473739 nova_compute[259550]:  <target dev="tap444fd1ba-91"/>
Oct  7 10:36:59 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:36:59 np0005473739 nova_compute[259550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  7 10:36:59 np0005473739 kernel: tap444fd1ba-91: entered promiscuous mode
Oct  7 10:36:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:59Z|01198|binding|INFO|Claiming lport 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 for this chassis.
Oct  7 10:36:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:59Z|01199|binding|INFO|444fd1ba-91db-40d4-95c0-a6ec1fb6ce49: Claiming fa:16:3e:7c:26:98 10.100.0.19
Oct  7 10:36:59 np0005473739 NetworkManager[44949]: <info>  [1759847819.6524] manager: (tap444fd1ba-91): new Tun device (/org/freedesktop/NetworkManager/Devices/482)
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:59 np0005473739 systemd-udevd[383690]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:36:59 np0005473739 NetworkManager[44949]: <info>  [1759847819.7087] device (tap444fd1ba-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:36:59 np0005473739 NetworkManager[44949]: <info>  [1759847819.7094] device (tap444fd1ba-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:36:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:59Z|01200|binding|INFO|Setting lport 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 ovn-installed in OVS
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.803 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847804.8026242, e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.803 2 INFO nova.compute.manager [-] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:36:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:36:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4218141925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:36:59 np0005473739 nova_compute[259550]: 2025-10-07 14:36:59.828 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:36:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:36:59Z|01201|binding|INFO|Setting lport 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 up in Southbound
Oct  7 10:36:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.957 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:26:98 10.100.0.19'], port_security=['fa:16:3e:7c:26:98 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '88e378df-94f3-4a3e-89e1-62f6de052e9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74cd8327-fb9a-441d-85b6-cebb9054477c, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:36:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.958 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 in datapath 03ab5118-02e3-4fcc-b38c-4eebc4305f96 bound to our chassis#033[00m
Oct  7 10:36:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.959 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 03ab5118-02e3-4fcc-b38c-4eebc4305f96#033[00m
Oct  7 10:36:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.972 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[93384757-a4cb-4542-b49e-c0adf7a00871]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.974 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap03ab5118-01 in ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:36:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.976 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap03ab5118-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:36:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.976 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3912fe76-80d6-4d34-904b-8638557bc10a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.977 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4397a9fa-72a0-46f1-ae45-979d69de7dbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:36:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:36:59.991 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[dad96626-44b0-46ef-a7f8-4014d5472b6b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.004 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb8405c-06d2-42e4-b1ed-1a10d8af8e39]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.031 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5313bf07-c787-461a-b35b-542fb21c6034]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.045 2 DEBUG nova.compute.manager [None req-63b4a422-df20-4405-9615-e136ca15591c - - - - - -] [instance: e2d9ce4e-7bc6-423a-a46b-85ae8d7dc6bb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:00 np0005473739 NetworkManager[44949]: <info>  [1759847820.0481] manager: (tap03ab5118-00): new Veth device (/org/freedesktop/NetworkManager/Devices/483)
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.047 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[325ee812-1667-421d-8a57-7d203a348acf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 systemd-udevd[383692]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.070 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.070 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.071 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.087 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[96cb4703-8b41-448b-aac6-3fbbb54705a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.091 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[93a41c5f-a1f7-4005-9678-00ef6ccd517e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 NetworkManager[44949]: <info>  [1759847820.1162] device (tap03ab5118-00): carrier: link connected
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.122 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[76af2131-b9c0-49f3-bf47-ad8a674e8f01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.139 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[05007246-41d1-4644-93e2-f60a83a6cd55]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03ab5118-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:0a:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840368, 'reachable_time': 38819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 383722, 'error': None, 'target': 'ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.153 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[39de37fb-708c-4acd-84c7-4296f811a356]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee5:a8b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 840368, 'tstamp': 840368}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 383723, 'error': None, 'target': 'ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.171 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4b524dce-1525-4e00-9f47-ddd45493f0bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap03ab5118-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e5:0a:8b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 345], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840368, 'reachable_time': 38819, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 383724, 'error': None, 'target': 'ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.202 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3686ea7-2c8a-446b-a1f4-65003fb2060b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.258 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fc1efb27-724c-4568-a89b-27e24077af87]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.260 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03ab5118-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.260 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.260 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap03ab5118-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:00 np0005473739 kernel: tap03ab5118-00: entered promiscuous mode
Oct  7 10:37:00 np0005473739 NetworkManager[44949]: <info>  [1759847820.2644] manager: (tap03ab5118-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/484)
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.267 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap03ab5118-00, col_values=(('external_ids', {'iface-id': '6498794a-d0b4-4dcb-87cf-e1a202b8ef2f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:00Z|01202|binding|INFO|Releasing lport 6498794a-d0b4-4dcb-87cf-e1a202b8ef2f from this chassis (sb_readonly=1)
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.274 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/03ab5118-02e3-4fcc-b38c-4eebc4305f96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/03ab5118-02e3-4fcc-b38c-4eebc4305f96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.275 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5cbd0418-b3d0-43c9-a0e0-a7884892648d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.276 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-03ab5118-02e3-4fcc-b38c-4eebc4305f96
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/03ab5118-02e3-4fcc-b38c-4eebc4305f96.pid.haproxy
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 03ab5118-02e3-4fcc-b38c-4eebc4305f96
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:37:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:00.277 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'env', 'PROCESS_TAG=haproxy-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/03ab5118-02e3-4fcc-b38c-4eebc4305f96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]: {
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:    "0": [
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:        {
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "devices": [
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "/dev/loop3"
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            ],
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_name": "ceph_lv0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_size": "21470642176",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "name": "ceph_lv0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "tags": {
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.cluster_name": "ceph",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.crush_device_class": "",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.encrypted": "0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.osd_id": "0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.type": "block",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.vdo": "0"
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            },
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "type": "block",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "vg_name": "ceph_vg0"
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:        }
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:    ],
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:    "1": [
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:        {
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "devices": [
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "/dev/loop4"
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            ],
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_name": "ceph_lv1",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_size": "21470642176",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "name": "ceph_lv1",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "tags": {
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.cluster_name": "ceph",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.crush_device_class": "",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.encrypted": "0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.osd_id": "1",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.type": "block",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.vdo": "0"
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            },
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "type": "block",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "vg_name": "ceph_vg1"
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:        }
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:    ],
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:    "2": [
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:        {
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "devices": [
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "/dev/loop5"
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            ],
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_name": "ceph_lv2",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_size": "21470642176",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "name": "ceph_lv2",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "tags": {
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.cluster_name": "ceph",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.crush_device_class": "",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.encrypted": "0",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.osd_id": "2",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.type": "block",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:                "ceph.vdo": "0"
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            },
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "type": "block",
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:            "vg_name": "ceph_vg2"
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:        }
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]:    ]
Oct  7 10:37:00 np0005473739 affectionate_matsumoto[383660]: }
Oct  7 10:37:00 np0005473739 systemd[1]: libpod-d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7.scope: Deactivated successfully.
Oct  7 10:37:00 np0005473739 podman[383643]: 2025-10-07 14:37:00.327060877 +0000 UTC m=+0.950726804 container died d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:37:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2237: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 4.1 KiB/s rd, 17 KiB/s wr, 7 op/s
Oct  7 10:37:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-59e8819c438856d4b1ab9917b5376f805e07b6816f43e741fd94b01c74e0cc97-merged.mount: Deactivated successfully.
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.482 2 DEBUG nova.virt.libvirt.driver [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.483 2 DEBUG nova.virt.libvirt.driver [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.484 2 DEBUG nova.virt.libvirt.driver [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:aa:2b:a2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.484 2 DEBUG nova.virt.libvirt.driver [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:7c:26:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:37:00 np0005473739 podman[383643]: 2025-10-07 14:37:00.691789264 +0000 UTC m=+1.315455171 container remove d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=affectionate_matsumoto, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:37:00 np0005473739 systemd[1]: libpod-conmon-d19724c9232564570c41a096d40603ed7b312d7b6ba92448eebdbc038040c1e7.scope: Deactivated successfully.
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.730 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.730 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000072 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:37:00 np0005473739 podman[383770]: 2025-10-07 14:37:00.716255207 +0000 UTC m=+0.102400657 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.828 2 DEBUG nova.virt.libvirt.guest [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:37:00 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:  <nova:name>tempest-TestNetworkBasicOps-server-1302175134</nova:name>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:37:00</nova:creationTime>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:37:00 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:    <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:    <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:    <nova:port uuid="68807b3e-505c-41ea-96e4-f03b329e4c69">
Oct  7 10:37:00 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:    <nova:port uuid="444fd1ba-91db-40d4-95c0-a6ec1fb6ce49">
Oct  7 10:37:00 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:37:00 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:37:00 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:37:00 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.916 2 DEBUG oslo_concurrency.lockutils [None req-885d935a-f2d1-49df-a5d4-fa3ce8dfabbd 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 6.667s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.954 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.955 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3558MB free_disk=59.94276428222656GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.955 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.956 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.968 2 DEBUG nova.compute.manager [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.968 2 DEBUG oslo_concurrency.lockutils [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.969 2 DEBUG oslo_concurrency.lockutils [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.969 2 DEBUG oslo_concurrency.lockutils [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.969 2 DEBUG nova.compute.manager [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] No waiting events found dispatching network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:37:00 np0005473739 nova_compute[259550]: 2025-10-07 14:37:00.969 2 WARNING nova.compute.manager [req-2ab9a041-44db-45f1-93e8-301b371da448 req-74e91406-3242-450c-b344-66c460797140 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received unexpected event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:37:01 np0005473739 podman[383770]: 2025-10-07 14:37:01.117475498 +0000 UTC m=+0.503620958 container create 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:01 np0005473739 systemd[1]: Started libpod-conmon-859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064.scope.
Oct  7 10:37:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:37:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d2c041b6e969b41f63435c0c8bb7d38859ca24d5708706e9cc97d1edb64a1a9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:37:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:01Z|00133|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7c:26:98 10.100.0.19
Oct  7 10:37:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:01Z|00134|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7c:26:98 10.100.0.19
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.250 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 88e378df-94f3-4a3e-89e1-62f6de052e9d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.250 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.250 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:37:01 np0005473739 podman[383770]: 2025-10-07 14:37:01.275740647 +0000 UTC m=+0.661886127 container init 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:37:01 np0005473739 podman[383770]: 2025-10-07 14:37:01.286039482 +0000 UTC m=+0.672184932 container start 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  7 10:37:01 np0005473739 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [NOTICE]   (383920) : New worker (383922) forked
Oct  7 10:37:01 np0005473739 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [NOTICE]   (383920) : Loading success.
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.349 2 DEBUG nova.network.neutron [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updated VIF entry in instance network info cache for port 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.350 2 DEBUG nova.network.neutron [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.353 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.390 2 DEBUG oslo_concurrency.lockutils [req-fb70c293-6c32-4d8a-9654-758c24c5eeb6 req-5f4a0a30-4358-4557-9978-51a38d279392 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:01 np0005473739 podman[383948]: 2025-10-07 14:37:01.465762285 +0000 UTC m=+0.024816155 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:37:01 np0005473739 podman[383948]: 2025-10-07 14:37:01.603182277 +0000 UTC m=+0.162236127 container create a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 10:37:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:37:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1548020635' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:37:01 np0005473739 systemd[1]: Started libpod-conmon-a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896.scope.
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.794 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.441s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.801 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:37:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.821 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:37:01 np0005473739 podman[383948]: 2025-10-07 14:37:01.831181279 +0000 UTC m=+0.390235139 container init a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:37:01 np0005473739 podman[383948]: 2025-10-07 14:37:01.838414332 +0000 UTC m=+0.397468182 container start a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 10:37:01 np0005473739 pedantic_shannon[383984]: 167 167
Oct  7 10:37:01 np0005473739 systemd[1]: libpod-a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896.scope: Deactivated successfully.
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.872 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:37:01 np0005473739 nova_compute[259550]: 2025-10-07 14:37:01.872 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.917s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:01 np0005473739 podman[383948]: 2025-10-07 14:37:01.880538248 +0000 UTC m=+0.439592098 container attach a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:37:01 np0005473739 podman[383948]: 2025-10-07 14:37:01.881581025 +0000 UTC m=+0.440634885 container died a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 10:37:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-615f2add1e595ba7c3c03a716290e7a2380edea0329954e9116706c1d9708266-merged.mount: Deactivated successfully.
Oct  7 10:37:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2238: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 16 KiB/s wr, 0 op/s
Oct  7 10:37:02 np0005473739 podman[383948]: 2025-10-07 14:37:02.347622028 +0000 UTC m=+0.906675878 container remove a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_shannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:37:02 np0005473739 nova_compute[259550]: 2025-10-07 14:37:02.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:02 np0005473739 systemd[1]: libpod-conmon-a6cca9de0d596dc89006e6df92f7c197f7605ea08fa206290deec469d596a896.scope: Deactivated successfully.
Oct  7 10:37:02 np0005473739 podman[384008]: 2025-10-07 14:37:02.531919093 +0000 UTC m=+0.025539373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:37:02 np0005473739 podman[384008]: 2025-10-07 14:37:02.761828196 +0000 UTC m=+0.255448446 container create 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:37:02 np0005473739 systemd[1]: Started libpod-conmon-679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0.scope.
Oct  7 10:37:02 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:37:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a16923cb6bc6c3e76209de583a4a02fa2189b29325c23217d50e7ec68793e23/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:37:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a16923cb6bc6c3e76209de583a4a02fa2189b29325c23217d50e7ec68793e23/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:37:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a16923cb6bc6c3e76209de583a4a02fa2189b29325c23217d50e7ec68793e23/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:37:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a16923cb6bc6c3e76209de583a4a02fa2189b29325c23217d50e7ec68793e23/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:37:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:03 np0005473739 podman[384008]: 2025-10-07 14:37:03.009303989 +0000 UTC m=+0.502924249 container init 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 10:37:03 np0005473739 podman[384008]: 2025-10-07 14:37:03.020011905 +0000 UTC m=+0.513632165 container start 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.053 2 DEBUG nova.compute.manager [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.053 2 DEBUG oslo_concurrency.lockutils [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.054 2 DEBUG oslo_concurrency.lockutils [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.054 2 DEBUG oslo_concurrency.lockutils [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.055 2 DEBUG nova.compute.manager [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] No waiting events found dispatching network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.055 2 WARNING nova.compute.manager [req-93ee5714-b22a-4b6d-94d2-2b0d302b258e req-db5ddbf8-2122-4b6c-9e52-5a1fbad16cd9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received unexpected event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:37:03 np0005473739 podman[384008]: 2025-10-07 14:37:03.138731977 +0000 UTC m=+0.632352257 container attach 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.383 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.384 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.414 2 DEBUG nova.objects.instance [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'flavor' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.442 2 DEBUG nova.virt.libvirt.vif [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:31Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.443 2 DEBUG nova.network.os_vif_util [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.444 2 DEBUG nova.network.os_vif_util [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.449 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.452 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.454 2 DEBUG nova.virt.libvirt.driver [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Attempting to detach device tap444fd1ba-91 from instance 88e378df-94f3-4a3e-89e1-62f6de052e9d from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.455 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:7c:26:98"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <target dev="tap444fd1ba-91"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.630 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.635 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface>not found in domain: <domain type='kvm' id='143'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <name>instance-00000072</name>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <uuid>88e378df-94f3-4a3e-89e1-62f6de052e9d</uuid>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:name>tempest-TestNetworkBasicOps-server-1302175134</nova:name>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:37:00</nova:creationTime>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:port uuid="68807b3e-505c-41ea-96e4-f03b329e4c69">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:port uuid="444fd1ba-91db-40d4-95c0-a6ec1fb6ce49">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='serial'>88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='uuid'>88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk' index='2'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config' index='1'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:aa:2b:a2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target dev='tap68807b3e-50'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:7c:26:98'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target dev='tap444fd1ba-91'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='net1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log' append='off'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/0'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log' append='off'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c220,c590</label>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c220,c590</imagelabel>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.635 2 INFO nova.virt.libvirt.driver [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully detached device tap444fd1ba-91 from instance 88e378df-94f3-4a3e-89e1-62f6de052e9d from the persistent domain config.#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.636 2 DEBUG nova.virt.libvirt.driver [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] (1/8): Attempting to detach device tap444fd1ba-91 with device alias net1 from instance 88e378df-94f3-4a3e-89e1-62f6de052e9d from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.636 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:7c:26:98"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <target dev="tap444fd1ba-91"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  7 10:37:03 np0005473739 kernel: tap444fd1ba-91 (unregistering): left promiscuous mode
Oct  7 10:37:03 np0005473739 NetworkManager[44949]: <info>  [1759847823.6956] device (tap444fd1ba-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:37:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:03Z|01203|binding|INFO|Releasing lport 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 from this chassis (sb_readonly=0)
Oct  7 10:37:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:03Z|01204|binding|INFO|Setting lport 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 down in Southbound
Oct  7 10:37:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:03Z|01205|binding|INFO|Removing iface tap444fd1ba-91 ovn-installed in OVS
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:03.720 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:26:98 10.100.0.19'], port_security=['fa:16:3e:7c:26:98 10.100.0.19'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.19/28', 'neutron:device_id': '88e378df-94f3-4a3e-89e1-62f6de052e9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=74cd8327-fb9a-441d-85b6-cebb9054477c, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:37:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:03.721 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 in datapath 03ab5118-02e3-4fcc-b38c-4eebc4305f96 unbound from our chassis#033[00m
Oct  7 10:37:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:03.722 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 03ab5118-02e3-4fcc-b38c-4eebc4305f96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:37:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:03.724 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[530517da-55cc-4793-af44-7b175a46ef99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:03.728 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96 namespace which is not needed anymore#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.730 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759847823.72822, 88e378df-94f3-4a3e-89e1-62f6de052e9d => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.731 2 DEBUG nova.virt.libvirt.driver [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Start waiting for the detach event from libvirt for device tap444fd1ba-91 with device alias net1 for instance 88e378df-94f3-4a3e-89e1-62f6de052e9d _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.731 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.740 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:7c:26:98"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap444fd1ba-91"/></interface>not found in domain: <domain type='kvm' id='143'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <name>instance-00000072</name>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <uuid>88e378df-94f3-4a3e-89e1-62f6de052e9d</uuid>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:name>tempest-TestNetworkBasicOps-server-1302175134</nova:name>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:37:00</nova:creationTime>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:port uuid="68807b3e-505c-41ea-96e4-f03b329e4c69">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:port uuid="444fd1ba-91db-40d4-95c0-a6ec1fb6ce49">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.19" ipVersion="4"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='serial'>88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='uuid'>88e378df-94f3-4a3e-89e1-62f6de052e9d</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk' index='2'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/88e378df-94f3-4a3e-89e1-62f6de052e9d_disk.config' index='1'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:aa:2b:a2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target dev='tap68807b3e-50'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log' append='off'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/0'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d/console.log' append='off'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c220,c590</label>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c220,c590</imagelabel>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.740 2 INFO nova.virt.libvirt.driver [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully detached device tap444fd1ba-91 from instance 88e378df-94f3-4a3e-89e1-62f6de052e9d from the live domain config.
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.741 2 DEBUG nova.virt.libvirt.vif [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:31Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.741 2 DEBUG nova.network.os_vif_util [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "address": "fa:16:3e:7c:26:98", "network": {"id": "03ab5118-02e3-4fcc-b38c-4eebc4305f96", "bridge": "br-int", "label": "tempest-network-smoke--828460640", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.19", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap444fd1ba-91", "ovs_interfaceid": "444fd1ba-91db-40d4-95c0-a6ec1fb6ce49", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.741 2 DEBUG nova.network.os_vif_util [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.742 2 DEBUG os_vif [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.745 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap444fd1ba-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.753 2 INFO os_vif [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7c:26:98,bridge_name='br-int',has_traffic_filtering=True,id=444fd1ba-91db-40d4-95c0-a6ec1fb6ce49,network=Network(03ab5118-02e3-4fcc-b38c-4eebc4305f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap444fd1ba-91')#033[00m
Oct  7 10:37:03 np0005473739 nova_compute[259550]: 2025-10-07 14:37:03.753 2 DEBUG nova.virt.libvirt.guest [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:name>tempest-TestNetworkBasicOps-server-1302175134</nova:name>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:37:03</nova:creationTime>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    <nova:port uuid="68807b3e-505c-41ea-96e4-f03b329e4c69">
Oct  7 10:37:03 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:37:03 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:37:03 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:37:03 np0005473739 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [NOTICE]   (383920) : haproxy version is 2.8.14-c23fe91
Oct  7 10:37:03 np0005473739 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [NOTICE]   (383920) : path to executable is /usr/sbin/haproxy
Oct  7 10:37:03 np0005473739 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [WARNING]  (383920) : Exiting Master process...
Oct  7 10:37:03 np0005473739 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [ALERT]    (383920) : Current worker (383922) exited with code 143 (Terminated)
Oct  7 10:37:03 np0005473739 neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96[383901]: [WARNING]  (383920) : All workers exited. Exiting... (0)
Oct  7 10:37:03 np0005473739 systemd[1]: libpod-859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064.scope: Deactivated successfully.
Oct  7 10:37:03 np0005473739 podman[384060]: 2025-10-07 14:37:03.913419637 +0000 UTC m=+0.094072425 container died 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]: {
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "osd_id": 2,
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "type": "bluestore"
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:    },
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "osd_id": 1,
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "type": "bluestore"
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:    },
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "osd_id": 0,
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:        "type": "bluestore"
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]:    }
Oct  7 10:37:04 np0005473739 hardcore_proskuriakova[384024]: }
Oct  7 10:37:04 np0005473739 systemd[1]: libpod-679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0.scope: Deactivated successfully.
Oct  7 10:37:04 np0005473739 systemd[1]: libpod-679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0.scope: Consumed 1.002s CPU time.
Oct  7 10:37:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064-userdata-shm.mount: Deactivated successfully.
Oct  7 10:37:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0d2c041b6e969b41f63435c0c8bb7d38859ca24d5708706e9cc97d1edb64a1a9-merged.mount: Deactivated successfully.
Oct  7 10:37:04 np0005473739 podman[384008]: 2025-10-07 14:37:04.231223529 +0000 UTC m=+1.724843809 container died 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  7 10:37:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2239: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 8.3 KiB/s rd, 17 KiB/s wr, 1 op/s
Oct  7 10:37:04 np0005473739 podman[384060]: 2025-10-07 14:37:04.455677486 +0000 UTC m=+0.636330274 container cleanup 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:37:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1a16923cb6bc6c3e76209de583a4a02fa2189b29325c23217d50e7ec68793e23-merged.mount: Deactivated successfully.
Oct  7 10:37:04 np0005473739 podman[384108]: 2025-10-07 14:37:04.54901104 +0000 UTC m=+0.483930822 container remove 679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_proskuriakova, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef)
Oct  7 10:37:04 np0005473739 systemd[1]: libpod-conmon-679d4f008c0203d8382d6753d7f3567ef51153a5e2509cb4301c95f7a9c33dd0.scope: Deactivated successfully.
Oct  7 10:37:04 np0005473739 podman[384124]: 2025-10-07 14:37:04.584016536 +0000 UTC m=+0.100579469 container remove 859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:37:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:37:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:37:04 np0005473739 systemd[1]: libpod-conmon-859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064.scope: Deactivated successfully.
Oct  7 10:37:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:37:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.593 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[79cd9b62-7da0-4eac-940d-dd90a86027d4]: (4, ('Tue Oct  7 02:37:03 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96 (859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064)\n859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064\nTue Oct  7 02:37:04 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96 (859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064)\n859c51d60fc5da1a4753378f637fd63cc126b9227d6b045bae4e305eef87f064\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12a94bdb-246f-4956-8101-c681c3567787]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.599 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap03ab5118-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:04 np0005473739 nova_compute[259550]: 2025-10-07 14:37:04.600 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:37:04 np0005473739 kernel: tap03ab5118-00: left promiscuous mode
Oct  7 10:37:04 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 60a689a7-0fcf-408b-b45c-9149a3c42c42 does not exist
Oct  7 10:37:04 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3cdfc09d-b100-4cb4-b766-923c33efc095 does not exist
Oct  7 10:37:04 np0005473739 nova_compute[259550]: 2025-10-07 14:37:04.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.623 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[190c7520-d1d3-4d43-9bed-6474852b790f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.653 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d0d3340-cf65-4892-a3ae-a5622833ddde]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.654 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[55bb0203-d826-44bb-ae54-f6597aebfaac]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.673 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4d162cd-0705-430b-9b1c-9caa5367ff9d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 840358, 'reachable_time': 26257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384162, 'error': None, 'target': 'ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:04 np0005473739 systemd[1]: run-netns-ovnmeta\x2d03ab5118\x2d02e3\x2d4fcc\x2db38c\x2d4eebc4305f96.mount: Deactivated successfully.
Oct  7 10:37:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.679 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-03ab5118-02e3-4fcc-b38c-4eebc4305f96 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:37:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:04.679 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3e5bb309-8432-4c3b-87e2-b759ee163c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:05 np0005473739 nova_compute[259550]: 2025-10-07 14:37:05.180 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:05 np0005473739 nova_compute[259550]: 2025-10-07 14:37:05.199 2 DEBUG nova.compute.manager [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-unplugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:05 np0005473739 nova_compute[259550]: 2025-10-07 14:37:05.200 2 DEBUG oslo_concurrency.lockutils [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:05 np0005473739 nova_compute[259550]: 2025-10-07 14:37:05.200 2 DEBUG oslo_concurrency.lockutils [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:05 np0005473739 nova_compute[259550]: 2025-10-07 14:37:05.201 2 DEBUG oslo_concurrency.lockutils [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:05 np0005473739 nova_compute[259550]: 2025-10-07 14:37:05.201 2 DEBUG nova.compute.manager [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] No waiting events found dispatching network-vif-unplugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:37:05 np0005473739 nova_compute[259550]: 2025-10-07 14:37:05.201 2 WARNING nova.compute.manager [req-70c9f191-5e02-4400-b306-fca84fdacda6 req-bd350b39-650f-481f-89a8-457b299864c6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received unexpected event network-vif-unplugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:37:05 np0005473739 nova_compute[259550]: 2025-10-07 14:37:05.452 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:05 np0005473739 nova_compute[259550]: 2025-10-07 14:37:05.453 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:05 np0005473739 nova_compute[259550]: 2025-10-07 14:37:05.454 2 DEBUG nova.network.neutron [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:37:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:37:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:37:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2240: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct  7 10:37:06 np0005473739 nova_compute[259550]: 2025-10-07 14:37:06.642 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:37:06 np0005473739 nova_compute[259550]: 2025-10-07 14:37:06.682 2 INFO nova.network.neutron [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Port 444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  7 10:37:06 np0005473739 nova_compute[259550]: 2025-10-07 14:37:06.683 2 DEBUG nova.network.neutron [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:06 np0005473739 nova_compute[259550]: 2025-10-07 14:37:06.705 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:06 np0005473739 nova_compute[259550]: 2025-10-07 14:37:06.737 2 DEBUG oslo_concurrency.lockutils [None req-4406f2f5-644f-4261-b3bc-a9755c57d755 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-88e378df-94f3-4a3e-89e1-62f6de052e9d-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 3.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:07 np0005473739 nova_compute[259550]: 2025-10-07 14:37:07.292 2 DEBUG nova.compute.manager [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:07 np0005473739 nova_compute[259550]: 2025-10-07 14:37:07.292 2 DEBUG oslo_concurrency.lockutils [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:07 np0005473739 nova_compute[259550]: 2025-10-07 14:37:07.293 2 DEBUG oslo_concurrency.lockutils [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:07 np0005473739 nova_compute[259550]: 2025-10-07 14:37:07.293 2 DEBUG oslo_concurrency.lockutils [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:07 np0005473739 nova_compute[259550]: 2025-10-07 14:37:07.293 2 DEBUG nova.compute.manager [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] No waiting events found dispatching network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:37:07 np0005473739 nova_compute[259550]: 2025-10-07 14:37:07.293 2 WARNING nova.compute.manager [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received unexpected event network-vif-plugged-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:37:07 np0005473739 nova_compute[259550]: 2025-10-07 14:37:07.294 2 DEBUG nova.compute.manager [req-c2428730-5ef3-486f-b9c9-fdf5d47430fd req-55617eae-fb7f-48d4-8c6d-c9f256e37f6c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-deleted-444fd1ba-91db-40d4-95c0-a6ec1fb6ce49 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:07 np0005473739 nova_compute[259550]: 2025-10-07 14:37:07.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:07Z|01206|binding|INFO|Releasing lport 0987c9ec-179e-425c-b303-601326f99ffa from this chassis (sb_readonly=0)
Oct  7 10:37:07 np0005473739 nova_compute[259550]: 2025-10-07 14:37:07.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.152 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.153 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.153 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.154 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.154 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.155 2 INFO nova.compute.manager [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Terminating instance#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.156 2 DEBUG nova.compute.manager [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2241: 305 pgs: 305 active+clean; 121 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 5.7 KiB/s wr, 1 op/s
Oct  7 10:37:08 np0005473739 kernel: tap68807b3e-50 (unregistering): left promiscuous mode
Oct  7 10:37:08 np0005473739 NetworkManager[44949]: <info>  [1759847828.3882] device (tap68807b3e-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:08Z|01207|binding|INFO|Releasing lport 68807b3e-505c-41ea-96e4-f03b329e4c69 from this chassis (sb_readonly=0)
Oct  7 10:37:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:08Z|01208|binding|INFO|Setting lport 68807b3e-505c-41ea-96e4-f03b329e4c69 down in Southbound
Oct  7 10:37:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:08Z|01209|binding|INFO|Removing iface tap68807b3e-50 ovn-installed in OVS
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.403 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.417 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:aa:2b:a2 10.100.0.14'], port_security=['fa:16:3e:aa:2b:a2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '88e378df-94f3-4a3e-89e1-62f6de052e9d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '49bffbb9-d898-4935-821f-8756d7f43377', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=3993e5e4-e68a-46c5-a740-21f6d2b71498, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=68807b3e-505c-41ea-96e4-f03b329e4c69) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.419 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 68807b3e-505c-41ea-96e4-f03b329e4c69 in datapath 4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 unbound from our chassis#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.420 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.422 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be6da688-ccfc-4bef-af0c-3609466696f3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.423 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 namespace which is not needed anymore#033[00m
Oct  7 10:37:08 np0005473739 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Deactivated successfully.
Oct  7 10:37:08 np0005473739 systemd[1]: machine-qemu\x2d143\x2dinstance\x2d00000072.scope: Consumed 13.973s CPU time.
Oct  7 10:37:08 np0005473739 systemd-machined[214580]: Machine qemu-143-instance-00000072 terminated.
Oct  7 10:37:08 np0005473739 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [NOTICE]   (382852) : haproxy version is 2.8.14-c23fe91
Oct  7 10:37:08 np0005473739 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [NOTICE]   (382852) : path to executable is /usr/sbin/haproxy
Oct  7 10:37:08 np0005473739 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [WARNING]  (382852) : Exiting Master process...
Oct  7 10:37:08 np0005473739 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [ALERT]    (382852) : Current worker (382854) exited with code 143 (Terminated)
Oct  7 10:37:08 np0005473739 neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38[382848]: [WARNING]  (382852) : All workers exited. Exiting... (0)
Oct  7 10:37:08 np0005473739 systemd[1]: libpod-c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b.scope: Deactivated successfully.
Oct  7 10:37:08 np0005473739 conmon[382848]: conmon c5e808b5b8a3e3272d23 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b.scope/container/memory.events
Oct  7 10:37:08 np0005473739 podman[384214]: 2025-10-07 14:37:08.584235033 +0000 UTC m=+0.056494791 container died c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.594 2 INFO nova.virt.libvirt.driver [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance destroyed successfully.#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.601 2 DEBUG nova.objects.instance [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 88e378df-94f3-4a3e-89e1-62f6de052e9d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.616 2 DEBUG nova.virt.libvirt.vif [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:36:14Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1302175134',display_name='tempest-TestNetworkBasicOps-server-1302175134',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1302175134',id=114,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNWpQUaf47IgvjMJ1PfiJ3hBWwgl5SRzpVWmxMmvbTcZSrsJXs5xD5XgPEXcyNNmm528IJ5KaOIt3CjVQWo+4ASv0OE74+2GIAZPXgv6ybnOW8r7u3cAdnaHk7c6RMmT5A==',key_name='tempest-TestNetworkBasicOps-677021137',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:36:31Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-xkm7y880',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:36:31Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=88e378df-94f3-4a3e-89e1-62f6de052e9d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.617 2 DEBUG nova.network.os_vif_util [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.617 2 DEBUG nova.network.os_vif_util [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.618 2 DEBUG os_vif [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.620 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap68807b3e-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b-userdata-shm.mount: Deactivated successfully.
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.626 2 INFO os_vif [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:aa:2b:a2,bridge_name='br-int',has_traffic_filtering=True,id=68807b3e-505c-41ea-96e4-f03b329e4c69,network=Network(4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap68807b3e-50')#033[00m
Oct  7 10:37:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ecad819b79e80b6f43b313371589ac92f170aa571899affb8f76f01510b64cf5-merged.mount: Deactivated successfully.
Oct  7 10:37:08 np0005473739 podman[384214]: 2025-10-07 14:37:08.6354093 +0000 UTC m=+0.107669058 container cleanup c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:37:08 np0005473739 systemd[1]: libpod-conmon-c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b.scope: Deactivated successfully.
Oct  7 10:37:08 np0005473739 podman[384270]: 2025-10-07 14:37:08.705249037 +0000 UTC m=+0.045757744 container remove c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.711 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[510ea67f-0f46-4644-b129-e774e881e958]: (4, ('Tue Oct  7 02:37:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 (c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b)\nc5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b\nTue Oct  7 02:37:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 (c5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b)\nc5e808b5b8a3e3272d23a5624538b1e0946dda9630e2b20a23c41a520b28ad3b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.713 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d5a81322-b8bf-4afe-a209-76ab6bf89ac0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.714 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4e6cb0ef-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:08 np0005473739 kernel: tap4e6cb0ef-50: left promiscuous mode
Oct  7 10:37:08 np0005473739 nova_compute[259550]: 2025-10-07 14:37:08.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c70bd45e-1c4b-420f-aa89-0c54be5b79fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.760 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c9fdad02-a6ca-4fda-ba63-e5e95005a580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.761 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b1b3b81-05ab-4372-a24c-1c0687b12e48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.778 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[832b3bdf-a3f0-47a7-907b-9427cc03ebd1]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 837327, 'reachable_time': 35570, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384289, 'error': None, 'target': 'ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:08 np0005473739 systemd[1]: run-netns-ovnmeta\x2d4e6cb0ef\x2d5d1e\x2d4cf2\x2da50c\x2d8f2d313d6a38.mount: Deactivated successfully.
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.781 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:37:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:08.781 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[0b8df40e-b48a-46a1-82f4-d9068db78d3c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:09 np0005473739 nova_compute[259550]: 2025-10-07 14:37:09.050 2 INFO nova.virt.libvirt.driver [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Deleting instance files /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d_del#033[00m
Oct  7 10:37:09 np0005473739 nova_compute[259550]: 2025-10-07 14:37:09.051 2 INFO nova.virt.libvirt.driver [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Deletion of /var/lib/nova/instances/88e378df-94f3-4a3e-89e1-62f6de052e9d_del complete#033[00m
Oct  7 10:37:09 np0005473739 nova_compute[259550]: 2025-10-07 14:37:09.105 2 INFO nova.compute.manager [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Took 0.95 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:37:09 np0005473739 nova_compute[259550]: 2025-10-07 14:37:09.106 2 DEBUG oslo.service.loopingcall [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:37:09 np0005473739 nova_compute[259550]: 2025-10-07 14:37:09.107 2 DEBUG nova.compute.manager [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:37:09 np0005473739 nova_compute[259550]: 2025-10-07 14:37:09.107 2 DEBUG nova.network.neutron [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:37:09 np0005473739 nova_compute[259550]: 2025-10-07 14:37:09.368 2 DEBUG nova.compute.manager [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:09 np0005473739 nova_compute[259550]: 2025-10-07 14:37:09.369 2 DEBUG nova.compute.manager [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing instance network info cache due to event network-changed-68807b3e-505c-41ea-96e4-f03b329e4c69. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:37:09 np0005473739 nova_compute[259550]: 2025-10-07 14:37:09.369 2 DEBUG oslo_concurrency.lockutils [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:09 np0005473739 nova_compute[259550]: 2025-10-07 14:37:09.370 2 DEBUG oslo_concurrency.lockutils [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:09 np0005473739 nova_compute[259550]: 2025-10-07 14:37:09.370 2 DEBUG nova.network.neutron [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Refreshing network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:37:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2242: 305 pgs: 305 active+clean; 87 MiB data, 844 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 10 KiB/s wr, 6 op/s
Oct  7 10:37:10 np0005473739 nova_compute[259550]: 2025-10-07 14:37:10.394 2 DEBUG nova.network.neutron [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:10 np0005473739 nova_compute[259550]: 2025-10-07 14:37:10.496 2 INFO nova.compute.manager [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Took 1.39 seconds to deallocate network for instance.#033[00m
Oct  7 10:37:10 np0005473739 nova_compute[259550]: 2025-10-07 14:37:10.643 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:10 np0005473739 nova_compute[259550]: 2025-10-07 14:37:10.644 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:10 np0005473739 nova_compute[259550]: 2025-10-07 14:37:10.707 2 DEBUG oslo_concurrency.processutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:37:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1908663976' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.207 2 DEBUG oslo_concurrency.processutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.213 2 DEBUG nova.compute.provider_tree [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.250 2 DEBUG nova.scheduler.client.report [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.295 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.322 2 INFO nova.scheduler.client.report [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 88e378df-94f3-4a3e-89e1-62f6de052e9d#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.410 2 DEBUG nova.network.neutron [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updated VIF entry in instance network info cache for port 68807b3e-505c-41ea-96e4-f03b329e4c69. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.411 2 DEBUG nova.network.neutron [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Updating instance_info_cache with network_info: [{"id": "68807b3e-505c-41ea-96e4-f03b329e4c69", "address": "fa:16:3e:aa:2b:a2", "network": {"id": "4e6cb0ef-5d1e-4cf2-a50c-8f2d313d6a38", "bridge": "br-int", "label": "tempest-network-smoke--227424757", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap68807b3e-50", "ovs_interfaceid": "68807b3e-505c-41ea-96e4-f03b329e4c69", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.514 2 DEBUG oslo_concurrency.lockutils [None req-f734e958-302a-4d00-9e12-1f7f2d194921 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "88e378df-94f3-4a3e-89e1-62f6de052e9d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.361s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.516 2 DEBUG oslo_concurrency.lockutils [req-b81d33ef-1bb3-45f3-aeb4-400d4b4eaaf7 req-ab284870-2b1d-4795-887f-14a5832d3773 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-88e378df-94f3-4a3e-89e1-62f6de052e9d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.612 2 DEBUG nova.compute.manager [req-cf0dec61-a615-4549-8230-944933ce62b6 req-5ecbb06d-1e01-409b-b002-6ff81fb2c970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Received event network-vif-deleted-68807b3e-505c-41ea-96e4-f03b329e4c69 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.612 2 INFO nova.compute.manager [req-cf0dec61-a615-4549-8230-944933ce62b6 req-5ecbb06d-1e01-409b-b002-6ff81fb2c970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Neutron deleted interface 68807b3e-505c-41ea-96e4-f03b329e4c69; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.613 2 DEBUG nova.network.neutron [req-cf0dec61-a615-4549-8230-944933ce62b6 req-5ecbb06d-1e01-409b-b002-6ff81fb2c970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.615 2 DEBUG nova.compute.manager [req-cf0dec61-a615-4549-8230-944933ce62b6 req-5ecbb06d-1e01-409b-b002-6ff81fb2c970 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Detach interface failed, port_id=68807b3e-505c-41ea-96e4-f03b329e4c69, reason: Instance 88e378df-94f3-4a3e-89e1-62f6de052e9d could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:37:11 np0005473739 nova_compute[259550]: 2025-10-07 14:37:11.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2243: 305 pgs: 305 active+clean; 82 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 5.6 KiB/s wr, 15 op/s
Oct  7 10:37:12 np0005473739 nova_compute[259550]: 2025-10-07 14:37:12.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:13 np0005473739 podman[384313]: 2025-10-07 14:37:13.067405867 +0000 UTC m=+0.055241848 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, 
org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:37:13 np0005473739 podman[384314]: 2025-10-07 14:37:13.105201997 +0000 UTC m=+0.089480782 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:37:13 np0005473739 nova_compute[259550]: 2025-10-07 14:37:13.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #105. Immutable memtables: 0.
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.713293) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 61] Flushing memtable with next log file: 105
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833713382, "job": 61, "event": "flush_started", "num_memtables": 1, "num_entries": 2065, "num_deletes": 251, "total_data_size": 3350444, "memory_usage": 3398096, "flush_reason": "Manual Compaction"}
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 61] Level-0 flush table #106: started
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833732613, "cf_name": "default", "job": 61, "event": "table_file_creation", "file_number": 106, "file_size": 3272563, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 45115, "largest_seqno": 47179, "table_properties": {"data_size": 3263250, "index_size": 5807, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19190, "raw_average_key_size": 20, "raw_value_size": 3244683, "raw_average_value_size": 3415, "num_data_blocks": 258, "num_entries": 950, "num_filter_entries": 950, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847618, "oldest_key_time": 1759847618, "file_creation_time": 1759847833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 106, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 61] Flush lasted 19368 microseconds, and 7265 cpu microseconds.
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.732670) [db/flush_job.cc:967] [default] [JOB 61] Level-0 flush table #106: 3272563 bytes OK
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.732690) [db/memtable_list.cc:519] [default] Level-0 commit table #106 started
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.734030) [db/memtable_list.cc:722] [default] Level-0 commit table #106: memtable #1 done
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.734046) EVENT_LOG_v1 {"time_micros": 1759847833734040, "job": 61, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.734064) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 61] Try to delete WAL files size 3341762, prev total WAL file size 3341762, number of live WAL files 2.
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000102.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.735171) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034323637' seq:72057594037927935, type:22 .. '7061786F730034353139' seq:0, type:0; will stop at (end)
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 62] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 61 Base level 0, inputs: [106(3195KB)], [104(8520KB)]
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833735207, "job": 62, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [106], "files_L6": [104], "score": -1, "input_data_size": 11997194, "oldest_snapshot_seqno": -1}
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 62] Generated table #107: 6999 keys, 10308845 bytes, temperature: kUnknown
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833803382, "cf_name": "default", "job": 62, "event": "table_file_creation", "file_number": 107, "file_size": 10308845, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10260891, "index_size": 29346, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17541, "raw_key_size": 180440, "raw_average_key_size": 25, "raw_value_size": 10134444, "raw_average_value_size": 1447, "num_data_blocks": 1155, "num_entries": 6999, "num_filter_entries": 6999, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847833, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 107, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.803685) [db/compaction/compaction_job.cc:1663] [default] [JOB 62] Compacted 1@0 + 1@6 files to L6 => 10308845 bytes
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.805072) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 175.7 rd, 151.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.3 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(6.8) write-amplify(3.2) OK, records in: 7513, records dropped: 514 output_compression: NoCompression
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.805107) EVENT_LOG_v1 {"time_micros": 1759847833805083, "job": 62, "event": "compaction_finished", "compaction_time_micros": 68278, "compaction_time_cpu_micros": 27658, "output_level": 6, "num_output_files": 1, "total_output_size": 10308845, "num_input_records": 7513, "num_output_records": 6999, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000106.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833805891, "job": 62, "event": "table_file_deletion", "file_number": 106}
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000104.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847833807634, "job": 62, "event": "table_file_deletion", "file_number": 104}
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.735095) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.807679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.807684) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.807686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.807687) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:13 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:13.807688) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2244: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 6.2 KiB/s wr, 28 op/s
Oct  7 10:37:15 np0005473739 nova_compute[259550]: 2025-10-07 14:37:15.895 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2245: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Oct  7 10:37:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:16.842 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7c:a2:29 2001:db8:0:1:f816:3eff:fe7c:a229 2001:db8::f816:3eff:fe7c:a229'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe7c:a229/64 2001:db8::f816:3eff:fe7c:a229/64', 'neutron:device_id': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fceb8d2-9d2a-45b9-beb8-73d518298477, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ec328f15-1843-4594-8d39-b0d2d9796360) old=Port_Binding(mac=['fa:16:3e:7c:a2:29 2001:db8::f816:3eff:fe7c:a229'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe7c:a229/64', 'neutron:device_id': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:37:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:16.844 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ec328f15-1843-4594-8d39-b0d2d9796360 in datapath 4c956141-6a21-499d-99b1-885d1a2972f7 updated#033[00m
Oct  7 10:37:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:16.845 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c956141-6a21-499d-99b1-885d1a2972f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:37:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:16.845 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f808a6b-d63e-48f8-8b9e-34394e36eaec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:17 np0005473739 nova_compute[259550]: 2025-10-07 14:37:17.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2246: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Oct  7 10:37:18 np0005473739 nova_compute[259550]: 2025-10-07 14:37:18.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:19 np0005473739 nova_compute[259550]: 2025-10-07 14:37:19.792 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:19 np0005473739 nova_compute[259550]: 2025-10-07 14:37:19.793 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:19 np0005473739 nova_compute[259550]: 2025-10-07 14:37:19.935 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:37:20 np0005473739 nova_compute[259550]: 2025-10-07 14:37:20.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:20 np0005473739 nova_compute[259550]: 2025-10-07 14:37:20.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2247: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 5.2 KiB/s wr, 27 op/s
Oct  7 10:37:20 np0005473739 nova_compute[259550]: 2025-10-07 14:37:20.485 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:20 np0005473739 nova_compute[259550]: 2025-10-07 14:37:20.486 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:20 np0005473739 nova_compute[259550]: 2025-10-07 14:37:20.497 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:37:20 np0005473739 nova_compute[259550]: 2025-10-07 14:37:20.497 2 INFO nova.compute.claims [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:37:21 np0005473739 nova_compute[259550]: 2025-10-07 14:37:21.074 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:37:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2945680067' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:37:21 np0005473739 nova_compute[259550]: 2025-10-07 14:37:21.537 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:21 np0005473739 nova_compute[259550]: 2025-10-07 14:37:21.543 2 DEBUG nova.compute.provider_tree [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:37:21 np0005473739 nova_compute[259550]: 2025-10-07 14:37:21.637 2 DEBUG nova.scheduler.client.report [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:37:21 np0005473739 nova_compute[259550]: 2025-10-07 14:37:21.897 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.411s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:21 np0005473739 nova_compute[259550]: 2025-10-07 14:37:21.898 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.119 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.120 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.221 2 INFO nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.296 2 DEBUG nova.policy [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c505d04148e44b8b93ceab0e3cedef4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2248: 305 pgs: 305 active+clean; 41 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 597 B/s wr, 22 op/s
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.438 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:37:22
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data', 'images', '.mgr', 'default.rgw.meta', 'backups', 'default.rgw.log', 'vms']
Oct  7 10:37:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.759 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.761 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.761 2 INFO nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Creating image(s)#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.783 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.804 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.825 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.829 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.906 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.907 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.908 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.909 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.933 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:22 np0005473739 nova_compute[259550]: 2025-10-07 14:37:22.939 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e64a021-390b-4a0c-bb4c-75a19f274777_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:37:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:37:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:37:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:37:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:37:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:37:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:37:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:37:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:37:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.423 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Successfully created port: 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.523 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4e64a021-390b-4a0c-bb4c-75a19f274777_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.584s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.588 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.621 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847828.593345, 88e378df-94f3-4a3e-89e1-62f6de052e9d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.621 2 INFO nova.compute.manager [-] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.664 2 DEBUG nova.compute.manager [None req-4b741e53-f5a4-4c44-981e-2dad65ab448f - - - - - -] [instance: 88e378df-94f3-4a3e-89e1-62f6de052e9d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.761 2 DEBUG nova.objects.instance [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.818 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.818 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Ensure instance console log exists: /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.819 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.819 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:23 np0005473739 nova_compute[259550]: 2025-10-07 14:37:23.819 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2249: 305 pgs: 305 active+clean; 66 MiB data, 800 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 793 KiB/s wr, 36 op/s
Oct  7 10:37:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2250: 305 pgs: 305 active+clean; 88 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:37:27 np0005473739 nova_compute[259550]: 2025-10-07 14:37:27.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:27 np0005473739 nova_compute[259550]: 2025-10-07 14:37:27.725 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Successfully updated port: 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:37:27 np0005473739 nova_compute[259550]: 2025-10-07 14:37:27.727 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:27 np0005473739 nova_compute[259550]: 2025-10-07 14:37:27.727 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:27 np0005473739 nova_compute[259550]: 2025-10-07 14:37:27.844 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:27 np0005473739 nova_compute[259550]: 2025-10-07 14:37:27.845 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:27 np0005473739 nova_compute[259550]: 2025-10-07 14:37:27.845 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:37:27 np0005473739 nova_compute[259550]: 2025-10-07 14:37:27.896 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:37:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:28 np0005473739 nova_compute[259550]: 2025-10-07 14:37:28.113 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:28 np0005473739 nova_compute[259550]: 2025-10-07 14:37:28.114 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:28 np0005473739 nova_compute[259550]: 2025-10-07 14:37:28.119 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:37:28 np0005473739 nova_compute[259550]: 2025-10-07 14:37:28.119 2 INFO nova.compute.claims [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:37:28 np0005473739 nova_compute[259550]: 2025-10-07 14:37:28.151 2 DEBUG nova.compute.manager [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:28 np0005473739 nova_compute[259550]: 2025-10-07 14:37:28.152 2 DEBUG nova.compute.manager [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing instance network info cache due to event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:37:28 np0005473739 nova_compute[259550]: 2025-10-07 14:37:28.152 2 DEBUG oslo_concurrency.lockutils [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2251: 305 pgs: 305 active+clean; 88 MiB data, 812 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:37:28 np0005473739 nova_compute[259550]: 2025-10-07 14:37:28.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:28 np0005473739 nova_compute[259550]: 2025-10-07 14:37:28.647 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:28 np0005473739 nova_compute[259550]: 2025-10-07 14:37:28.701 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:37:29 np0005473739 podman[384566]: 2025-10-07 14:37:29.067902246 +0000 UTC m=+0.054692733 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:37:29 np0005473739 podman[384565]: 2025-10-07 14:37:29.06805627 +0000 UTC m=+0.058171745 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:37:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:37:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2404864662' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.142 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.147 2 DEBUG nova.compute.provider_tree [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.217 2 DEBUG nova.scheduler.client.report [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.294 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.295 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.530 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.530 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.696 2 INFO nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.743 2 DEBUG nova.policy [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.747 2 DEBUG nova.network.neutron [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.880 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.972 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.972 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance network_info: |[{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.973 2 DEBUG oslo_concurrency.lockutils [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.974 2 DEBUG nova.network.neutron [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.979 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start _get_guest_xml network_info=[{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.989 2 WARNING nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.995 2 DEBUG nova.virt.libvirt.host [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.996 2 DEBUG nova.virt.libvirt.host [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:37:29 np0005473739 nova_compute[259550]: 2025-10-07 14:37:29.999 2 DEBUG nova.virt.libvirt.host [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.000 2 DEBUG nova.virt.libvirt.host [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.000 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.001 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.001 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.002 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.002 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.002 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.003 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.003 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.003 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.004 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.004 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.005 2 DEBUG nova.virt.hardware [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.009 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.203 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.205 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.205 2 INFO nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Creating image(s)#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.228 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.249 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.273 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.276 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.349 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.350 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.350 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.350 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2252: 305 pgs: 305 active+clean; 88 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.372 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.376 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:37:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/492944341' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.478 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.506 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.510 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.701 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.764 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.865 2 DEBUG nova.objects.instance [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 23c0ce36-9e34-4a73-9f99-3b79f8623238 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.923 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.923 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Ensure instance console log exists: /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.924 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.924 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.924 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:37:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1830871132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.997 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.999 2 DEBUG nova.virt.libvirt.vif [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:22Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:37:30 np0005473739 nova_compute[259550]: 2025-10-07 14:37:30.999 2 DEBUG nova.network.os_vif_util [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.000 2 DEBUG nova.network.os_vif_util [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.001 2 DEBUG nova.objects.instance [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.123 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  <uuid>4e64a021-390b-4a0c-bb4c-75a19f274777</uuid>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  <name>instance-00000073</name>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1824501664</nova:name>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:37:29</nova:creationTime>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <nova:port uuid="6e86ce79-9f1b-4e53-8ae5-918e8402b8c6">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <entry name="serial">4e64a021-390b-4a0c-bb4c-75a19f274777</entry>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <entry name="uuid">4e64a021-390b-4a0c-bb4c-75a19f274777</entry>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4e64a021-390b-4a0c-bb4c-75a19f274777_disk">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:6d:38:87"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <target dev="tap6e86ce79-9f"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/console.log" append="off"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:37:31 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:37:31 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:37:31 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:37:31 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.123 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Preparing to wait for external event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.124 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.124 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.124 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.125 2 DEBUG nova.virt.libvirt.vif [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:22Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.125 2 DEBUG nova.network.os_vif_util [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.126 2 DEBUG nova.network.os_vif_util [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.126 2 DEBUG os_vif [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.128 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e86ce79-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e86ce79-9f, col_values=(('external_ids', {'iface-id': '6e86ce79-9f1b-4e53-8ae5-918e8402b8c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:38:87', 'vm-uuid': '4e64a021-390b-4a0c-bb4c-75a19f274777'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:31 np0005473739 NetworkManager[44949]: <info>  [1759847851.1370] manager: (tap6e86ce79-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/485)
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.141 2 INFO os_vif [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f')#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.241 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Successfully created port: 72a26c5e-6ceb-4ee6-b79b-d105eac8054b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.284 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.284 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.285 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:6d:38:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.285 2 INFO nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Using config drive#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.307 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.772 2 DEBUG nova.network.neutron [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updated VIF entry in instance network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.773 2 DEBUG nova.network.neutron [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.839 2 DEBUG oslo_concurrency.lockutils [req-154eaec5-94f4-47ae-98ad-4a090934bc55 req-65e7a00d-ac6a-4f78-a063-a7f5ec57aa79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.919 2 INFO nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Creating config drive at /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config#033[00m
Oct  7 10:37:31 np0005473739 nova_compute[259550]: 2025-10-07 14:37:31.928 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpst8jjpv0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:32 np0005473739 nova_compute[259550]: 2025-10-07 14:37:32.071 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpst8jjpv0" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:32 np0005473739 nova_compute[259550]: 2025-10-07 14:37:32.097 2 DEBUG nova.storage.rbd_utils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:32 np0005473739 nova_compute[259550]: 2025-10-07 14:37:32.100 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config 4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:32 np0005473739 nova_compute[259550]: 2025-10-07 14:37:32.331 2 DEBUG oslo_concurrency.processutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config 4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.231s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:32 np0005473739 nova_compute[259550]: 2025-10-07 14:37:32.333 2 INFO nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Deleting local config drive /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/disk.config because it was imported into RBD.#033[00m
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2253: 305 pgs: 305 active+clean; 102 MiB data, 818 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 2.4 MiB/s wr, 28 op/s
Oct  7 10:37:32 np0005473739 kernel: tap6e86ce79-9f: entered promiscuous mode
Oct  7 10:37:32 np0005473739 NetworkManager[44949]: <info>  [1759847852.3821] manager: (tap6e86ce79-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/486)
Oct  7 10:37:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:32Z|01210|binding|INFO|Claiming lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for this chassis.
Oct  7 10:37:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:32Z|01211|binding|INFO|6e86ce79-9f1b-4e53-8ae5-918e8402b8c6: Claiming fa:16:3e:6d:38:87 10.100.0.6
Oct  7 10:37:32 np0005473739 systemd-udevd[384903]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:37:32 np0005473739 nova_compute[259550]: 2025-10-07 14:37:32.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:32 np0005473739 nova_compute[259550]: 2025-10-07 14:37:32.420 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:32 np0005473739 NetworkManager[44949]: <info>  [1759847852.4364] device (tap6e86ce79-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:37:32 np0005473739 NetworkManager[44949]: <info>  [1759847852.4375] device (tap6e86ce79-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:37:32 np0005473739 systemd-machined[214580]: New machine qemu-144-instance-00000073.
Oct  7 10:37:32 np0005473739 systemd[1]: Started Virtual Machine qemu-144-instance-00000073.
Oct  7 10:37:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:32Z|01212|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 ovn-installed in OVS
Oct  7 10:37:32 np0005473739 nova_compute[259550]: 2025-10-07 14:37:32.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0004631311250569773 of space, bias 1.0, pg target 0.1389393375170932 quantized to 32 (current 32)
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:37:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:37:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:32Z|01213|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 up in Southbound
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.650 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:38:87 10.100.0.6'], port_security=['fa:16:3e:6d:38:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e64a021-390b-4a0c-bb4c-75a19f274777', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28efb734-0152-4914-9f31-b818d894be70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c3ab1e78-0aa6-4d11-8104-a075342a0333', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3cdfe5f-0d4a-4d58-ba3f-aac15c533467, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.651 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 in datapath 28efb734-0152-4914-9f31-b818d894be70 bound to our chassis#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.654 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28efb734-0152-4914-9f31-b818d894be70#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.669 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f56df2f-3e63-4bbc-b884-9c21e2949c21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.670 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28efb734-01 in ovnmeta-28efb734-0152-4914-9f31-b818d894be70 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.672 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28efb734-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.672 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ea9900ef-1da8-4689-9656-c2d350c3fa36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.674 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6af87485-a98e-407d-9dac-68edb8da40ec]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.695 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a91ca161-be92-4436-9376-a1c50f4f41ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:37:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2507582309' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:37:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:37:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2507582309' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.723 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e9313090-55fe-4608-9a6a-3bf8184c38ea]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.761 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b781cfb3-bd0d-4ead-8c9b-d296e568c3ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.767 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2368fe16-eb3c-40fb-ac66-b1946e96bb4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 NetworkManager[44949]: <info>  [1759847852.7678] manager: (tap28efb734-00): new Veth device (/org/freedesktop/NetworkManager/Devices/487)
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.802 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ef85c2df-b0fb-48db-a64a-331b8353a234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.805 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc021f3-03fa-4083-b61c-c60912d8c2c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 NetworkManager[44949]: <info>  [1759847852.8307] device (tap28efb734-00): carrier: link connected
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.839 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c895d9e9-2b87-4d15-ad11-abd90dfb8158]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.856 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9ba897c5-7a0b-46fe-ba3e-346ef0c3659c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28efb734-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843639, 'reachable_time': 32519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 384980, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.874 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d022c278-bcac-4d18-a3ee-ccd2e3a2b71f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:b7c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 843639, 'tstamp': 843639}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 384981, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.893 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b54082a1-1d7b-495f-bdd5-93d2d44cde70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28efb734-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 348], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843639, 'reachable_time': 32519, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 384983, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.923 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ebea8767-071e-4d75-8c1b-eb081db872c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.982 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[039a3a74-2f64-4b91-85f7-ad00f8407512]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.983 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28efb734-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.984 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.984 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28efb734-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:32 np0005473739 nova_compute[259550]: 2025-10-07 14:37:32.985 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:32 np0005473739 NetworkManager[44949]: <info>  [1759847852.9864] manager: (tap28efb734-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/488)
Oct  7 10:37:32 np0005473739 kernel: tap28efb734-00: entered promiscuous mode
Oct  7 10:37:32 np0005473739 nova_compute[259550]: 2025-10-07 14:37:32.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:32.988 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28efb734-00, col_values=(('external_ids', {'iface-id': '54dfc7fb-c548-460a-8b73-708819b15ca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:32 np0005473739 nova_compute[259550]: 2025-10-07 14:37:32.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:32Z|01214|binding|INFO|Releasing lport 54dfc7fb-c548-460a-8b73-708819b15ca2 from this chassis (sb_readonly=0)
Oct  7 10:37:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:33.005 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:33.006 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[57384803-7219-4929-997d-e9dd0ec0a4d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:33.006 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-28efb734-0152-4914-9f31-b818d894be70
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 28efb734-0152-4914-9f31-b818d894be70
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:37:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:33.007 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'env', 'PROCESS_TAG=haproxy-28efb734-0152-4914-9f31-b818d894be70', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28efb734-0152-4914-9f31-b818d894be70.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.284 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847853.2841039, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.285 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Started (Lifecycle Event)#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.314 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Successfully created port: 1a175a4f-4ef8-469e-b213-5f8d404858c8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.358 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.362 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847853.2843251, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.362 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:37:33 np0005473739 podman[385014]: 2025-10-07 14:37:33.374403928 +0000 UTC m=+0.049051722 container create ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:37:33 np0005473739 systemd[1]: Started libpod-conmon-ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef.scope.
Oct  7 10:37:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:37:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a358810f9f8251b9e86fcc8b171db7901750b59cf1dbd689c2d91dd47284bb72/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.432 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.436 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:37:33 np0005473739 podman[385014]: 2025-10-07 14:37:33.346544204 +0000 UTC m=+0.021191978 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:37:33 np0005473739 podman[385014]: 2025-10-07 14:37:33.456572084 +0000 UTC m=+0.131219858 container init ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:37:33 np0005473739 podman[385014]: 2025-10-07 14:37:33.461890116 +0000 UTC m=+0.136537870 container start ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  7 10:37:33 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [NOTICE]   (385033) : New worker (385035) forked
Oct  7 10:37:33 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [NOTICE]   (385033) : Loading success.
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.505 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.650 2 DEBUG nova.compute.manager [req-d5588f08-81dd-4eb7-9ac9-41524cb1f9f3 req-e624ab32-7c6d-4fb9-ac9e-d58da43ba186 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.651 2 DEBUG oslo_concurrency.lockutils [req-d5588f08-81dd-4eb7-9ac9-41524cb1f9f3 req-e624ab32-7c6d-4fb9-ac9e-d58da43ba186 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.651 2 DEBUG oslo_concurrency.lockutils [req-d5588f08-81dd-4eb7-9ac9-41524cb1f9f3 req-e624ab32-7c6d-4fb9-ac9e-d58da43ba186 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.652 2 DEBUG oslo_concurrency.lockutils [req-d5588f08-81dd-4eb7-9ac9-41524cb1f9f3 req-e624ab32-7c6d-4fb9-ac9e-d58da43ba186 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.652 2 DEBUG nova.compute.manager [req-d5588f08-81dd-4eb7-9ac9-41524cb1f9f3 req-e624ab32-7c6d-4fb9-ac9e-d58da43ba186 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Processing event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.653 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.658 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847853.6574464, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.659 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.661 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.667 2 INFO nova.virt.libvirt.driver [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance spawned successfully.#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.668 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.718 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.722 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.775 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.776 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.776 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.777 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.777 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.778 2 DEBUG nova.virt.libvirt.driver [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:33 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 10:37:33 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.794 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.992 2 INFO nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Took 11.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:37:33 np0005473739 nova_compute[259550]: 2025-10-07 14:37:33.993 2 DEBUG nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:34 np0005473739 nova_compute[259550]: 2025-10-07 14:37:34.102 2 INFO nova.compute.manager [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Took 13.69 seconds to build instance.#033[00m
Oct  7 10:37:34 np0005473739 nova_compute[259550]: 2025-10-07 14:37:34.128 2 DEBUG oslo_concurrency.lockutils [None req-1978056f-66cc-468b-8e52-896cc482bee1 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.335s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2254: 305 pgs: 305 active+clean; 134 MiB data, 833 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 60 op/s
Oct  7 10:37:34 np0005473739 nova_compute[259550]: 2025-10-07 14:37:34.847 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Successfully updated port: 72a26c5e-6ceb-4ee6-b79b-d105eac8054b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.773 2 DEBUG nova.compute.manager [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.774 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.774 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.775 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.775 2 DEBUG nova.compute.manager [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.775 2 WARNING nova.compute.manager [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.775 2 DEBUG nova.compute.manager [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.776 2 DEBUG nova.compute.manager [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing instance network info cache due to event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.776 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.776 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.777 2 DEBUG nova.network.neutron [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing network info cache for port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:37:35 np0005473739 nova_compute[259550]: 2025-10-07 14:37:35.952 2 DEBUG nova.network.neutron [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:37:36 np0005473739 nova_compute[259550]: 2025-10-07 14:37:36.132 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Successfully updated port: 1a175a4f-4ef8-469e-b213-5f8d404858c8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:37:36 np0005473739 nova_compute[259550]: 2025-10-07 14:37:36.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:36 np0005473739 nova_compute[259550]: 2025-10-07 14:37:36.168 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:36 np0005473739 nova_compute[259550]: 2025-10-07 14:37:36.343 2 DEBUG nova.network.neutron [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2255: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.8 MiB/s wr, 74 op/s
Oct  7 10:37:36 np0005473739 nova_compute[259550]: 2025-10-07 14:37:36.403 2 DEBUG oslo_concurrency.lockutils [req-2a8f3250-d51c-44fd-94d6-040feb7785ef req-89b59035-5ecc-44f5-b85a-48e13d8a59b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:36 np0005473739 nova_compute[259550]: 2025-10-07 14:37:36.404 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:36 np0005473739 nova_compute[259550]: 2025-10-07 14:37:36.404 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:37:36 np0005473739 nova_compute[259550]: 2025-10-07 14:37:36.630 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:37:36 np0005473739 nova_compute[259550]: 2025-10-07 14:37:36.939 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:36.941 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=37, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=36) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:37:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:36.943 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:37:37 np0005473739 NetworkManager[44949]: <info>  [1759847857.3690] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/489)
Oct  7 10:37:37 np0005473739 NetworkManager[44949]: <info>  [1759847857.3699] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/490)
Oct  7 10:37:37 np0005473739 nova_compute[259550]: 2025-10-07 14:37:37.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:37 np0005473739 nova_compute[259550]: 2025-10-07 14:37:37.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:37Z|01215|binding|INFO|Releasing lport 54dfc7fb-c548-460a-8b73-708819b15ca2 from this chassis (sb_readonly=0)
Oct  7 10:37:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:37Z|01216|binding|INFO|Releasing lport 54dfc7fb-c548-460a-8b73-708819b15ca2 from this chassis (sb_readonly=0)
Oct  7 10:37:37 np0005473739 nova_compute[259550]: 2025-10-07 14:37:37.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:38 np0005473739 nova_compute[259550]: 2025-10-07 14:37:38.144 2 DEBUG nova.compute.manager [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-changed-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:38 np0005473739 nova_compute[259550]: 2025-10-07 14:37:38.144 2 DEBUG nova.compute.manager [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing instance network info cache due to event network-changed-1a175a4f-4ef8-469e-b213-5f8d404858c8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:37:38 np0005473739 nova_compute[259550]: 2025-10-07 14:37:38.144 2 DEBUG oslo_concurrency.lockutils [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2256: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.203 2 DEBUG nova.network.neutron [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.237 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.238 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance network_info: |[{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.239 2 DEBUG oslo_concurrency.lockutils [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.240 2 DEBUG nova.network.neutron [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing network info cache for port 1a175a4f-4ef8-469e-b213-5f8d404858c8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.248 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Start _get_guest_xml network_info=[{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.253 2 WARNING nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.261 2 DEBUG nova.virt.libvirt.host [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.262 2 DEBUG nova.virt.libvirt.host [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.265 2 DEBUG nova.virt.libvirt.host [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.265 2 DEBUG nova.virt.libvirt.host [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.266 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.266 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.267 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.267 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.267 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.267 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.267 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.268 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.268 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.268 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.268 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.269 2 DEBUG nova.virt.hardware [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.272 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:37:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3486510496' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.766 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.787 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:39 np0005473739 nova_compute[259550]: 2025-10-07 14:37:39.792 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:37:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3144698385' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.254 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.256 2 DEBUG nova.virt.libvirt.vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:29Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.256 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.257 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.258 2 DEBUG nova.virt.libvirt.vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:29Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.258 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.259 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.260 2 DEBUG nova.objects.instance [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 23c0ce36-9e34-4a73-9f99-3b79f8623238 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.302 2 DEBUG nova.compute.manager [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.302 2 DEBUG nova.compute.manager [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing instance network info cache due to event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.302 2 DEBUG oslo_concurrency.lockutils [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.303 2 DEBUG oslo_concurrency.lockutils [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.303 2 DEBUG nova.network.neutron [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:37:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2257: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.418 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  <uuid>23c0ce36-9e34-4a73-9f99-3b79f8623238</uuid>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  <name>instance-00000074</name>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-78341121</nova:name>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:37:39</nova:creationTime>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <nova:port uuid="72a26c5e-6ceb-4ee6-b79b-d105eac8054b">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <nova:port uuid="1a175a4f-4ef8-469e-b213-5f8d404858c8">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe72:c44a" ipVersion="6"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe72:c44a" ipVersion="6"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <entry name="serial">23c0ce36-9e34-4a73-9f99-3b79f8623238</entry>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <entry name="uuid">23c0ce36-9e34-4a73-9f99-3b79f8623238</entry>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/23c0ce36-9e34-4a73-9f99-3b79f8623238_disk">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:dc:bc:87"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <target dev="tap72a26c5e-6c"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:72:c4:4a"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <target dev="tap1a175a4f-4e"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/console.log" append="off"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:37:40 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:37:40 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:37:40 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:37:40 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.419 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Preparing to wait for external event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.420 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.420 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.420 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.421 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Preparing to wait for external event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.421 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.421 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.421 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.422 2 DEBUG nova.virt.libvirt.vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:29Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.422 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.423 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.423 2 DEBUG os_vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.424 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.425 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.428 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap72a26c5e-6c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.428 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap72a26c5e-6c, col_values=(('external_ids', {'iface-id': '72a26c5e-6ceb-4ee6-b79b-d105eac8054b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:bc:87', 'vm-uuid': '23c0ce36-9e34-4a73-9f99-3b79f8623238'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:40 np0005473739 NetworkManager[44949]: <info>  [1759847860.4306] manager: (tap72a26c5e-6c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/491)
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.433 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.438 2 INFO os_vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c')#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.439 2 DEBUG nova.virt.libvirt.vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:29Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.439 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.440 2 DEBUG nova.network.os_vif_util [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.440 2 DEBUG os_vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.441 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.443 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1a175a4f-4e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.444 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1a175a4f-4e, col_values=(('external_ids', {'iface-id': '1a175a4f-4ef8-469e-b213-5f8d404858c8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:c4:4a', 'vm-uuid': '23c0ce36-9e34-4a73-9f99-3b79f8623238'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:40 np0005473739 NetworkManager[44949]: <info>  [1759847860.4463] manager: (tap1a175a4f-4e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/492)
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.452 2 INFO os_vif [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e')#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.718 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.718 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.719 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:dc:bc:87, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.719 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:72:c4:4a, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.719 2 INFO nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Using config drive#033[00m
Oct  7 10:37:40 np0005473739 nova_compute[259550]: 2025-10-07 14:37:40.737 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:41 np0005473739 nova_compute[259550]: 2025-10-07 14:37:41.741 2 INFO nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Creating config drive at /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config#033[00m
Oct  7 10:37:41 np0005473739 nova_compute[259550]: 2025-10-07 14:37:41.745 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiyj0dzqo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:41 np0005473739 nova_compute[259550]: 2025-10-07 14:37:41.886 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpiyj0dzqo" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:41 np0005473739 nova_compute[259550]: 2025-10-07 14:37:41.907 2 DEBUG nova.storage.rbd_utils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:41 np0005473739 nova_compute[259550]: 2025-10-07 14:37:41.916 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.000 2 DEBUG nova.network.neutron [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updated VIF entry in instance network info cache for port 1a175a4f-4ef8-469e-b213-5f8d404858c8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.001 2 DEBUG nova.network.neutron [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.162 2 DEBUG oslo_concurrency.lockutils [req-d06a0eca-d38e-4d8e-88ca-4f05fb543a0c req-cc44f78d-13b2-4346-b820-89c34b15aba4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2258: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.594 2 DEBUG nova.network.neutron [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updated VIF entry in instance network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.595 2 DEBUG nova.network.neutron [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.750 2 DEBUG oslo_concurrency.lockutils [req-a7af1ba0-7f6b-48c2-855f-31d644bd6c33 req-7ff242c1-0e74-499f-b68d-aa8c900c2421 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.819 2 DEBUG oslo_concurrency.processutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config 23c0ce36-9e34-4a73-9f99-3b79f8623238_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.903s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.820 2 INFO nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Deleting local config drive /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238/disk.config because it was imported into RBD.#033[00m
Oct  7 10:37:42 np0005473739 kernel: tap72a26c5e-6c: entered promiscuous mode
Oct  7 10:37:42 np0005473739 NetworkManager[44949]: <info>  [1759847862.8873] manager: (tap72a26c5e-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/493)
Oct  7 10:37:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:42Z|01217|binding|INFO|Claiming lport 72a26c5e-6ceb-4ee6-b79b-d105eac8054b for this chassis.
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:42Z|01218|binding|INFO|72a26c5e-6ceb-4ee6-b79b-d105eac8054b: Claiming fa:16:3e:dc:bc:87 10.100.0.12
Oct  7 10:37:42 np0005473739 NetworkManager[44949]: <info>  [1759847862.9032] manager: (tap1a175a4f-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/494)
Oct  7 10:37:42 np0005473739 systemd-udevd[385185]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:37:42 np0005473739 kernel: tap1a175a4f-4e: entered promiscuous mode
Oct  7 10:37:42 np0005473739 systemd-udevd[385186]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:37:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:42Z|01219|binding|INFO|Setting lport 72a26c5e-6ceb-4ee6-b79b-d105eac8054b ovn-installed in OVS
Oct  7 10:37:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:42Z|01220|if_status|INFO|Dropped 7 log messages in last 122 seconds (most recently, 122 seconds ago) due to excessive rate
Oct  7 10:37:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:42Z|01221|if_status|INFO|Not updating pb chassis for 1a175a4f-4ef8-469e-b213-5f8d404858c8 now as sb is readonly
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.923 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:42 np0005473739 NetworkManager[44949]: <info>  [1759847862.9349] device (tap72a26c5e-6c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:37:42 np0005473739 NetworkManager[44949]: <info>  [1759847862.9368] device (tap72a26c5e-6c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:37:42 np0005473739 NetworkManager[44949]: <info>  [1759847862.9386] device (tap1a175a4f-4e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:37:42 np0005473739 NetworkManager[44949]: <info>  [1759847862.9399] device (tap1a175a4f-4e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:42Z|01222|binding|INFO|Claiming lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 for this chassis.
Oct  7 10:37:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:42Z|01223|binding|INFO|1a175a4f-4ef8-469e-b213-5f8d404858c8: Claiming fa:16:3e:72:c4:4a 2001:db8:0:1:f816:3eff:fe72:c44a 2001:db8::f816:3eff:fe72:c44a
Oct  7 10:37:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:42Z|01224|binding|INFO|Setting lport 72a26c5e-6ceb-4ee6-b79b-d105eac8054b up in Southbound
Oct  7 10:37:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:42.945 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '37'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:42.948 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:bc:87 10.100.0.12'], port_security=['fa:16:3e:dc:bc:87 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '23c0ce36-9e34-4a73-9f99-3b79f8623238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dfb73c9-a89b-4659-8761-7d887493b39b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=169b3722-7b9a-4733-8efb-f5bd5c71aacf, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=72a26c5e-6ceb-4ee6-b79b-d105eac8054b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:37:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:42.949 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b in datapath 5dfb73c9-a89b-4659-8761-7d887493b39b bound to our chassis#033[00m
Oct  7 10:37:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:42Z|01225|binding|INFO|Setting lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 ovn-installed in OVS
Oct  7 10:37:42 np0005473739 nova_compute[259550]: 2025-10-07 14:37:42.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:42 np0005473739 systemd-machined[214580]: New machine qemu-145-instance-00000074.
Oct  7 10:37:42 np0005473739 systemd[1]: Started Virtual Machine qemu-145-instance-00000074.
Oct  7 10:37:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.002 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5dfb73c9-a89b-4659-8761-7d887493b39b#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.014 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab071713-26a3-47f5-a354-1508d7b6595f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.027 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5dfb73c9-a1 in ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.029 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5dfb73c9-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.029 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3e27be6f-c426-4cad-8027-2c6d9a056a9d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.030 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[808ea68f-8025-4922-8951-8364259d61c4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:43Z|01226|binding|INFO|Setting lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 up in Southbound
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.032 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:c4:4a 2001:db8:0:1:f816:3eff:fe72:c44a 2001:db8::f816:3eff:fe72:c44a'], port_security=['fa:16:3e:72:c4:4a 2001:db8:0:1:f816:3eff:fe72:c44a 2001:db8::f816:3eff:fe72:c44a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe72:c44a/64 2001:db8::f816:3eff:fe72:c44a/64', 'neutron:device_id': '23c0ce36-9e34-4a73-9f99-3b79f8623238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fceb8d2-9d2a-45b9-beb8-73d518298477, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1a175a4f-4ef8-469e-b213-5f8d404858c8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.049 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e654d743-3b2b-45cf-aadc-28a246f2615e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.074 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2466e32f-9f0a-4920-9c6c-c73c58ff6141]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.109 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0004c75d-d8e8-46f7-8037-ea11f3e5d29f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 NetworkManager[44949]: <info>  [1759847863.1184] manager: (tap5dfb73c9-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/495)
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.119 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a60130f-9c2c-4d69-897c-61a1ff60ddb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.160 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d0ba2a46-108e-4831-8df5-57e6c82e8779]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.164 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dabb4c6f-985a-4930-9176-db28c7127e65]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 NetworkManager[44949]: <info>  [1759847863.1920] device (tap5dfb73c9-a0): carrier: link connected
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.200 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fa99dd88-cfd9-4f30-b8fe-e4215a9301f1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 podman[385211]: 2025-10-07 14:37:43.219269568 +0000 UTC m=+0.060680342 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.221 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5bdf6152-6003-40f7-bff3-6ca69c58b16d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5dfb73c9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:44:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 200, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 200, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844675, 'reachable_time': 21117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 172, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 172, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385253, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.239 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4e3ebbcd-3af1-4c84-8fe3-7c77eb2d4144]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee7:4411'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844675, 'tstamp': 844675}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385260, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.256 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7024e23f-23c1-40cb-9734-3973353bb3b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5dfb73c9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:44:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 200, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 200, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844675, 'reachable_time': 21117, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 172, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 172, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385265, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 podman[385217]: 2025-10-07 14:37:43.277053822 +0000 UTC m=+0.118500927 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.282 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8eef5421-a272-4ff9-918f-52f7cab64802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.294 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.294 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.339 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b48da5d8-c84d-4502-895a-020c47cadc61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.340 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dfb73c9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.340 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.341 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5dfb73c9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:43 np0005473739 NetworkManager[44949]: <info>  [1759847863.3431] manager: (tap5dfb73c9-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/496)
Oct  7 10:37:43 np0005473739 kernel: tap5dfb73c9-a0: entered promiscuous mode
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.344 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5dfb73c9-a0, col_values=(('external_ids', {'iface-id': '30dd4552-fdd6-4d17-af87-77adcec53278'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:43Z|01227|binding|INFO|Releasing lport 30dd4552-fdd6-4d17-af87-77adcec53278 from this chassis (sb_readonly=0)
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.360 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5dfb73c9-a89b-4659-8761-7d887493b39b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5dfb73c9-a89b-4659-8761-7d887493b39b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.361 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d909bccd-1a4f-477e-9f47-025bc0bef358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.361 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-5dfb73c9-a89b-4659-8761-7d887493b39b
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/5dfb73c9-a89b-4659-8761-7d887493b39b.pid.haproxy
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 5dfb73c9-a89b-4659-8761-7d887493b39b
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:37:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:43.362 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'env', 'PROCESS_TAG=haproxy-5dfb73c9-a89b-4659-8761-7d887493b39b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5dfb73c9-a89b-4659-8761-7d887493b39b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.390 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.512 2 DEBUG nova.compute.manager [req-b9f9918f-1d53-41bd-bdac-7e4e17a96f2c req-892c5a0a-550c-4999-a1cb-fea70fcdc9cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.513 2 DEBUG oslo_concurrency.lockutils [req-b9f9918f-1d53-41bd-bdac-7e4e17a96f2c req-892c5a0a-550c-4999-a1cb-fea70fcdc9cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.513 2 DEBUG oslo_concurrency.lockutils [req-b9f9918f-1d53-41bd-bdac-7e4e17a96f2c req-892c5a0a-550c-4999-a1cb-fea70fcdc9cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.514 2 DEBUG oslo_concurrency.lockutils [req-b9f9918f-1d53-41bd-bdac-7e4e17a96f2c req-892c5a0a-550c-4999-a1cb-fea70fcdc9cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.514 2 DEBUG nova.compute.manager [req-b9f9918f-1d53-41bd-bdac-7e4e17a96f2c req-892c5a0a-550c-4999-a1cb-fea70fcdc9cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Processing event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.540 2 DEBUG nova.compute.manager [req-a7488ecf-3063-4b59-8496-da200abe81fc req-21772a0d-1aa4-4e3d-890b-5d5259f284d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.541 2 DEBUG oslo_concurrency.lockutils [req-a7488ecf-3063-4b59-8496-da200abe81fc req-21772a0d-1aa4-4e3d-890b-5d5259f284d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.541 2 DEBUG oslo_concurrency.lockutils [req-a7488ecf-3063-4b59-8496-da200abe81fc req-21772a0d-1aa4-4e3d-890b-5d5259f284d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.541 2 DEBUG oslo_concurrency.lockutils [req-a7488ecf-3063-4b59-8496-da200abe81fc req-21772a0d-1aa4-4e3d-890b-5d5259f284d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.542 2 DEBUG nova.compute.manager [req-a7488ecf-3063-4b59-8496-da200abe81fc req-21772a0d-1aa4-4e3d-890b-5d5259f284d1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Processing event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.546 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.547 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.556 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.556 2 INFO nova.compute.claims [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:37:43 np0005473739 nova_compute[259550]: 2025-10-07 14:37:43.740 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:43 np0005473739 podman[385340]: 2025-10-07 14:37:43.729742228 +0000 UTC m=+0.022341648 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:37:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:37:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2630454737' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:37:44 np0005473739 podman[385340]: 2025-10-07 14:37:44.182846956 +0000 UTC m=+0.475446366 container create c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.206 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.213 2 DEBUG nova.compute.provider_tree [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.262 2 DEBUG nova.scheduler.client.report [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.281 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847864.281388, 23c0ce36-9e34-4a73-9f99-3b79f8623238 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.282 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] VM Started (Lifecycle Event)#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.284 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.285 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.286 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.290 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.294 2 INFO nova.virt.libvirt.driver [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance spawned successfully.#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.294 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.299 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.303 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:37:44 np0005473739 systemd[1]: Started libpod-conmon-c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69.scope.
Oct  7 10:37:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:37:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b6c393a5ba60ac8c97a1f1c72230373f0ef69c5c9c49d0c8f32905833660b064/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:37:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2259: 305 pgs: 305 active+clean; 134 MiB data, 840 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 106 op/s
Oct  7 10:37:44 np0005473739 podman[385340]: 2025-10-07 14:37:44.38625769 +0000 UTC m=+0.678857110 container init c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:37:44 np0005473739 podman[385340]: 2025-10-07 14:37:44.392400504 +0000 UTC m=+0.684999914 container start c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:37:44 np0005473739 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [NOTICE]   (385383) : New worker (385385) forked
Oct  7 10:37:44 np0005473739 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [NOTICE]   (385383) : Loading success.
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.452 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1a175a4f-4ef8-469e-b213-5f8d404858c8 in datapath 4c956141-6a21-499d-99b1-885d1a2972f7 unbound from our chassis#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.457 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c956141-6a21-499d-99b1-885d1a2972f7#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6edcebf0-2045-4d17-a0db-6590c2ecd13a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.471 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.471 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4c956141-61 in ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.471 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847864.2819824, 23c0ce36-9e34-4a73-9f99-3b79f8623238 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.472 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.472 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4c956141-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.473 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c75fde23-0aa9-4920-b8a2-4cde4eb970e3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.473 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e090b79f-0d5b-4535-a6ad-0e7476b620cf]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.477 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.477 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.478 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.478 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.479 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.479 2 DEBUG nova.virt.libvirt.driver [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.484 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.484 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.487 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[14befe37-04c4-4abf-93b6-4fdf4bf5b0df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.501 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[263f76b1-53a5-43e0-bf67-b1e98fb6bb5f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.529 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a5cfad-e652-4594-bc45-add80244923a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.535 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[762fd037-3a32-48e7-a089-0b493f4e706d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 NetworkManager[44949]: <info>  [1759847864.5364] manager: (tap4c956141-60): new Veth device (/org/freedesktop/NetworkManager/Devices/497)
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.551 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.555 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847864.2872374, 23c0ce36-9e34-4a73-9f99-3b79f8623238 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.556 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.567 2 INFO nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.578 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6f8fdaea-5b58-4219-aaeb-179b520d80fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.582 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[294f7f2d-952b-486f-a442-74011726b887]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.589 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.592 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:37:44 np0005473739 NetworkManager[44949]: <info>  [1759847864.6053] device (tap4c956141-60): carrier: link connected
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.611 2 INFO nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Took 14.41 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.612 2 DEBUG nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.611 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5872c887-c037-4b65-ab7d-0fb80ea12b17]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.627 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.632 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[485d541b-1980-44d0-92b2-5f6cd203d709]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c956141-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:a2:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844816, 'reachable_time': 27638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385404, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.650 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e650f099-de47-4862-befa-7213f7dc18b2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe7c:a229'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844816, 'tstamp': 844816}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385405, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.666 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.667 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[49f0f459-2618-4544-a385-bab667464f0b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c956141-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:a2:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844816, 'reachable_time': 27638, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385406, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.701 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6fdd7179-77bc-43f7-a8a2-ee630f2a8d54]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.718 2 INFO nova.compute.manager [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Took 16.63 seconds to build instance.#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.734 2 DEBUG nova.policy [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.752 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0ea6c730-5def-4d6a-a45c-fa094f9c3b01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.753 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c956141-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.753 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.754 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c956141-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:44 np0005473739 NetworkManager[44949]: <info>  [1759847864.7723] manager: (tap4c956141-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/498)
Oct  7 10:37:44 np0005473739 kernel: tap4c956141-60: entered promiscuous mode
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.776 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c956141-60, col_values=(('external_ids', {'iface-id': 'ec328f15-1843-4594-8d39-b0d2d9796360'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:44Z|01228|binding|INFO|Releasing lport ec328f15-1843-4594-8d39-b0d2d9796360 from this chassis (sb_readonly=0)
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.777 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.779 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4c956141-6a21-499d-99b1-885d1a2972f7.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4c956141-6a21-499d-99b1-885d1a2972f7.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.779 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.780 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ae399c8a-7d8f-45e9-b673-5a9ced2bfd70]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.781 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-4c956141-6a21-499d-99b1-885d1a2972f7
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/4c956141-6a21-499d-99b1-885d1a2972f7.pid.haproxy
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 4c956141-6a21-499d-99b1-885d1a2972f7
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:37:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:44.782 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'env', 'PROCESS_TAG=haproxy-4c956141-6a21-499d-99b1-885d1a2972f7', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4c956141-6a21-499d-99b1-885d1a2972f7.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.918 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.919 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.919 2 INFO nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Creating image(s)#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.939 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.964 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.988 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:44 np0005473739 nova_compute[259550]: 2025-10-07 14:37:44.992 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.031 2 DEBUG oslo_concurrency.lockutils [None req-82b686f9-9582-4dae-8275-3db5bfd84547 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.087 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.089 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.089 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.090 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.121 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.128 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c0eb8730-2b26-4cc0-8a9c-019688db568f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:45 np0005473739 podman[385507]: 2025-10-07 14:37:45.185394244 +0000 UTC m=+0.057593481 container create 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:37:45 np0005473739 systemd[1]: Started libpod-conmon-9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44.scope.
Oct  7 10:37:45 np0005473739 podman[385507]: 2025-10-07 14:37:45.15494105 +0000 UTC m=+0.027140317 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:37:45 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:37:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a337889f89fdae52e14dc13504ac936b7577e6b7adfb2aa670a8f40235199adf/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:37:45 np0005473739 podman[385507]: 2025-10-07 14:37:45.293590735 +0000 UTC m=+0.165790002 container init 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001)
Oct  7 10:37:45 np0005473739 podman[385507]: 2025-10-07 14:37:45.300106889 +0000 UTC m=+0.172306126 container start 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct  7 10:37:45 np0005473739 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [NOTICE]   (385548) : New worker (385550) forked
Oct  7 10:37:45 np0005473739 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [NOTICE]   (385548) : Loading success.
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.452 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c0eb8730-2b26-4cc0-8a9c-019688db568f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.324s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.519 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.604 2 DEBUG nova.compute.manager [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.605 2 DEBUG oslo_concurrency.lockutils [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.605 2 DEBUG oslo_concurrency.lockutils [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.605 2 DEBUG oslo_concurrency.lockutils [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.606 2 DEBUG nova.compute.manager [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.606 2 WARNING nova.compute.manager [req-2c264bad-a964-41b1-bc8c-aecbf3ec5a3f req-138d55b3-201c-4bf1-a3eb-f78b6b3f8fdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received unexpected event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b for instance with vm_state active and task_state None.#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.611 2 DEBUG nova.objects.instance [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid c0eb8730-2b26-4cc0-8a9c-019688db568f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.628 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.628 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Ensure instance console log exists: /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.629 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.629 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.630 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.674 2 DEBUG nova.compute.manager [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.675 2 DEBUG oslo_concurrency.lockutils [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.675 2 DEBUG oslo_concurrency.lockutils [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.675 2 DEBUG oslo_concurrency.lockutils [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.675 2 DEBUG nova.compute.manager [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.676 2 WARNING nova.compute.manager [req-ae0982d4-a1b5-48bb-9946-c45af7236a92 req-11ec5634-e044-48e3-9eb1-223e46535d61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received unexpected event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:37:45 np0005473739 nova_compute[259550]: 2025-10-07 14:37:45.779 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Successfully created port: 1c96bebd-0f68-48d9-9bab-486d6e56cb4e _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:37:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2260: 305 pgs: 305 active+clean; 156 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 1.4 MiB/s wr, 108 op/s
Oct  7 10:37:46 np0005473739 nova_compute[259550]: 2025-10-07 14:37:46.969 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Successfully updated port: 1c96bebd-0f68-48d9-9bab-486d6e56cb4e _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:37:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:47Z|00135|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:38:87 10.100.0.6
Oct  7 10:37:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:47Z|00136|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:38:87 10.100.0.6
Oct  7 10:37:47 np0005473739 nova_compute[259550]: 2025-10-07 14:37:47.025 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:47 np0005473739 nova_compute[259550]: 2025-10-07 14:37:47.026 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:47 np0005473739 nova_compute[259550]: 2025-10-07 14:37:47.026 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:37:47 np0005473739 nova_compute[259550]: 2025-10-07 14:37:47.236 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:37:47 np0005473739 nova_compute[259550]: 2025-10-07 14:37:47.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:47 np0005473739 nova_compute[259550]: 2025-10-07 14:37:47.779 2 DEBUG nova.compute.manager [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:47 np0005473739 nova_compute[259550]: 2025-10-07 14:37:47.780 2 DEBUG nova.compute.manager [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing instance network info cache due to event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:37:47 np0005473739 nova_compute[259550]: 2025-10-07 14:37:47.780 2 DEBUG oslo_concurrency.lockutils [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:47 np0005473739 nova_compute[259550]: 2025-10-07 14:37:47.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:37:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.329 2 DEBUG nova.network.neutron [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updating instance_info_cache with network_info: [{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2261: 305 pgs: 305 active+clean; 156 MiB data, 842 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.4 MiB/s wr, 70 op/s
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.438 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.439 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Instance network_info: |[{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.440 2 DEBUG oslo_concurrency.lockutils [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.440 2 DEBUG nova.network.neutron [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.442 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Start _get_guest_xml network_info=[{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.448 2 WARNING nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.455 2 DEBUG nova.virt.libvirt.host [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.456 2 DEBUG nova.virt.libvirt.host [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.461 2 DEBUG nova.virt.libvirt.host [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.462 2 DEBUG nova.virt.libvirt.host [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.462 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.462 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.463 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.463 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.464 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.464 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.464 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.464 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.465 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.465 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.465 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.465 2 DEBUG nova.virt.hardware [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.469 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.807 2 DEBUG nova.compute.manager [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.808 2 DEBUG nova.compute.manager [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing instance network info cache due to event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.809 2 DEBUG oslo_concurrency.lockutils [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.810 2 DEBUG oslo_concurrency.lockutils [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.810 2 DEBUG nova.network.neutron [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing network info cache for port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:37:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:37:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2831155830' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.912 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.931 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:48 np0005473739 nova_compute[259550]: 2025-10-07 14:37:48.935 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:37:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2552038153' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.367 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.371 2 DEBUG nova.virt.libvirt.vif [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-528486164',display_name='tempest-TestNetworkBasicOps-server-528486164',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-528486164',id=117,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQxueVciqThLEf3QSXvOwVT+5hy1ZzCF3GFJVm8JMkN6QiflWo55jpLsNYky8mEXwzLpOQbQ3bLKW+JqUogdrrbbB9pgVzSI20yKTEUP5ZPdTnLVnqTqbVW/n6Yq8HTpg==',key_name='tempest-TestNetworkBasicOps-1556177030',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-aayr7mmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:44Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=c0eb8730-2b26-4cc0-8a9c-019688db568f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.372 2 DEBUG nova.network.os_vif_util [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.373 2 DEBUG nova.network.os_vif_util [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.375 2 DEBUG nova.objects.instance [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid c0eb8730-2b26-4cc0-8a9c-019688db568f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.565 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  <uuid>c0eb8730-2b26-4cc0-8a9c-019688db568f</uuid>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  <name>instance-00000075</name>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-528486164</nova:name>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:37:48</nova:creationTime>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <nova:port uuid="1c96bebd-0f68-48d9-9bab-486d6e56cb4e">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <entry name="serial">c0eb8730-2b26-4cc0-8a9c-019688db568f</entry>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <entry name="uuid">c0eb8730-2b26-4cc0-8a9c-019688db568f</entry>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c0eb8730-2b26-4cc0-8a9c-019688db568f_disk">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:46:76:3f"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <target dev="tap1c96bebd-0f"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/console.log" append="off"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:37:49 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:37:49 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:37:49 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:37:49 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.571 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Preparing to wait for external event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.571 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.572 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.572 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.573 2 DEBUG nova.virt.libvirt.vif [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-528486164',display_name='tempest-TestNetworkBasicOps-server-528486164',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-528486164',id=117,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQxueVciqThLEf3QSXvOwVT+5hy1ZzCF3GFJVm8JMkN6QiflWo55jpLsNYky8mEXwzLpOQbQ3bLKW+JqUogdrrbbB9pgVzSI20yKTEUP5ZPdTnLVnqTqbVW/n6Yq8HTpg==',key_name='tempest-TestNetworkBasicOps-1556177030',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-aayr7mmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:37:44Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=c0eb8730-2b26-4cc0-8a9c-019688db568f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.573 2 DEBUG nova.network.os_vif_util [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.575 2 DEBUG nova.network.os_vif_util [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.575 2 DEBUG os_vif [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.576 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.579 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1c96bebd-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1c96bebd-0f, col_values=(('external_ids', {'iface-id': '1c96bebd-0f68-48d9-9bab-486d6e56cb4e', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:46:76:3f', 'vm-uuid': 'c0eb8730-2b26-4cc0-8a9c-019688db568f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:49 np0005473739 NetworkManager[44949]: <info>  [1759847869.5824] manager: (tap1c96bebd-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/499)
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.589 2 INFO os_vif [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f')#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.720 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.721 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.721 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:46:76:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.721 2 INFO nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Using config drive#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.739 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:37:49 np0005473739 nova_compute[259550]: 2025-10-07 14:37:49.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.085 2 DEBUG nova.network.neutron [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updated VIF entry in instance network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.086 2 DEBUG nova.network.neutron [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updating instance_info_cache with network_info: [{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.143 2 DEBUG oslo_concurrency.lockutils [req-f4d33a72-af81-4318-a945-7beb00f3b1f0 req-a06070b0-56f8-4f1f-a747-d71a74cfa0f1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.160 2 INFO nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Creating config drive at /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.164 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpro2in8z9 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.304 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpro2in8z9" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.328 2 DEBUG nova.storage.rbd_utils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.331 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:37:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2262: 305 pgs: 305 active+clean; 210 MiB data, 871 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 182 op/s
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.405 2 DEBUG nova.network.neutron [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updated VIF entry in instance network info cache for port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.406 2 DEBUG nova.network.neutron [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.612 2 DEBUG oslo_concurrency.lockutils [req-59a27177-fb7e-4880-957a-5cf0cca6528a req-c2fc33ce-a770-46d3-ab7a-8b63f6b3342f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.755 2 DEBUG oslo_concurrency.processutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config c0eb8730-2b26-4cc0-8a9c-019688db568f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.424s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.756 2 INFO nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Deleting local config drive /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f/disk.config because it was imported into RBD.#033[00m
Oct  7 10:37:50 np0005473739 NetworkManager[44949]: <info>  [1759847870.8165] manager: (tap1c96bebd-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/500)
Oct  7 10:37:50 np0005473739 kernel: tap1c96bebd-0f: entered promiscuous mode
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:50Z|01229|binding|INFO|Claiming lport 1c96bebd-0f68-48d9-9bab-486d6e56cb4e for this chassis.
Oct  7 10:37:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:50Z|01230|binding|INFO|1c96bebd-0f68-48d9-9bab-486d6e56cb4e: Claiming fa:16:3e:46:76:3f 10.100.0.13
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.854 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:76:3f 10.100.0.13'], port_security=['fa:16:3e:46:76:3f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c0eb8730-2b26-4cc0-8a9c-019688db568f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ddf5ea1d-9677-4e1c-8a3a-0f9543177677', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be472fff-888c-4934-b0b7-07dc4319f2eb, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1c96bebd-0f68-48d9-9bab-486d6e56cb4e) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.855 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e in datapath 4bd15a72-ce65-4737-b705-4b2b86d3a32a bound to our chassis#033[00m
Oct  7 10:37:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:50Z|01231|binding|INFO|Setting lport 1c96bebd-0f68-48d9-9bab-486d6e56cb4e ovn-installed in OVS
Oct  7 10:37:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:50Z|01232|binding|INFO|Setting lport 1c96bebd-0f68-48d9-9bab-486d6e56cb4e up in Southbound
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.857 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bd15a72-ce65-4737-b705-4b2b86d3a32a#033[00m
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:50 np0005473739 systemd-machined[214580]: New machine qemu-146-instance-00000075.
Oct  7 10:37:50 np0005473739 systemd-udevd[385767]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.871 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eb8be62b-5375-4086-b8ab-6c5905b9467a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.872 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4bd15a72-c1 in ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:37:50 np0005473739 systemd[1]: Started Virtual Machine qemu-146-instance-00000075.
Oct  7 10:37:50 np0005473739 NetworkManager[44949]: <info>  [1759847870.8759] device (tap1c96bebd-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.874 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4bd15a72-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.874 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1d686052-eee5-486b-834e-182c62b8a465]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:50 np0005473739 NetworkManager[44949]: <info>  [1759847870.8777] device (tap1c96bebd-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.877 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ef8b66e0-be24-4587-ba80-89260ae816b7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.894 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3b043d58-f6d3-4a4e-b4a5-a1084d981f55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.911 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99f7dd8d-126e-42eb-9677-32c089502826]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.939 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[31c45dc9-b8b0-4135-bf15-83a454b90bcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.944 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[812f676e-c81f-4770-ade3-75b45f084046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:50 np0005473739 systemd-udevd[385770]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:37:50 np0005473739 NetworkManager[44949]: <info>  [1759847870.9500] manager: (tap4bd15a72-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/501)
Oct  7 10:37:50 np0005473739 nova_compute[259550]: 2025-10-07 14:37:50.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.989 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf9de0b-c3f1-4002-9c31-4f64f82b96d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:50.993 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd33759-e5a4-4ce7-a59f-f76dc9de16d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:51 np0005473739 NetworkManager[44949]: <info>  [1759847871.0401] device (tap4bd15a72-c0): carrier: link connected
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.042 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5723b261-1955-4a5d-9f51-3f6394ce9d07]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.063 2 DEBUG nova.compute.manager [req-50b6a5ab-3c3b-4159-b981-53dbef0691ce req-1434d2ec-09d7-4e04-be83-4fddc9c0eac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.064 2 DEBUG oslo_concurrency.lockutils [req-50b6a5ab-3c3b-4159-b981-53dbef0691ce req-1434d2ec-09d7-4e04-be83-4fddc9c0eac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.064 2 DEBUG oslo_concurrency.lockutils [req-50b6a5ab-3c3b-4159-b981-53dbef0691ce req-1434d2ec-09d7-4e04-be83-4fddc9c0eac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.063 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[158d9750-10a7-40d2-a3dc-85afa0910dc0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bd15a72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:db:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845459, 'reachable_time': 38757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385800, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.064 2 DEBUG oslo_concurrency.lockutils [req-50b6a5ab-3c3b-4159-b981-53dbef0691ce req-1434d2ec-09d7-4e04-be83-4fddc9c0eac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.065 2 DEBUG nova.compute.manager [req-50b6a5ab-3c3b-4159-b981-53dbef0691ce req-1434d2ec-09d7-4e04-be83-4fddc9c0eac6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Processing event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.084 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[19d4b464-7856-46a2-8b9a-74a26d2d2fe6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe11:db12'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845459, 'tstamp': 845459}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 385801, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.106 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1f4ea7c8-a6f9-4768-a252-d7faa7b32a52]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bd15a72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:db:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845459, 'reachable_time': 38757, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 385802, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.153 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2bd86e8-c556-46e2-a373-df35ed7281c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.211 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f4ce2e8-41df-42c4-bc86-d79e0af96c85]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.218 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bd15a72-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.218 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.219 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bd15a72-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.220 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:51 np0005473739 NetworkManager[44949]: <info>  [1759847871.2213] manager: (tap4bd15a72-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/502)
Oct  7 10:37:51 np0005473739 kernel: tap4bd15a72-c0: entered promiscuous mode
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.232 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bd15a72-c0, col_values=(('external_ids', {'iface-id': '818ca059-c8de-4f85-a524-8f47c8fbf780'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:51Z|01233|binding|INFO|Releasing lport 818ca059-c8de-4f85-a524-8f47c8fbf780 from this chassis (sb_readonly=0)
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.235 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4bd15a72-ce65-4737-b705-4b2b86d3a32a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4bd15a72-ce65-4737-b705-4b2b86d3a32a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.248 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[07b39efb-b1a3-4f7e-8916-dadab92f3db0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.249 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-4bd15a72-ce65-4737-b705-4b2b86d3a32a
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/4bd15a72-ce65-4737-b705-4b2b86d3a32a.pid.haproxy
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 4bd15a72-ce65-4737-b705-4b2b86d3a32a
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:37:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:51.250 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'env', 'PROCESS_TAG=haproxy-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4bd15a72-ce65-4737-b705-4b2b86d3a32a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:37:51 np0005473739 podman[385874]: 2025-10-07 14:37:51.615792616 +0000 UTC m=+0.046816031 container create 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:37:51 np0005473739 systemd[1]: Started libpod-conmon-70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c.scope.
Oct  7 10:37:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:37:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad6812d5d6ccd623c1c8db2e48873105455dd3e4c1a071313da6b9108e14c4dd/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:37:51 np0005473739 podman[385874]: 2025-10-07 14:37:51.591107297 +0000 UTC m=+0.022130752 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:37:51 np0005473739 podman[385874]: 2025-10-07 14:37:51.700552791 +0000 UTC m=+0.131576226 container init 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:37:51 np0005473739 podman[385874]: 2025-10-07 14:37:51.707109466 +0000 UTC m=+0.138132881 container start 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:37:51 np0005473739 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [NOTICE]   (385893) : New worker (385895) forked
Oct  7 10:37:51 np0005473739 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [NOTICE]   (385893) : Loading success.
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.910 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.911 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847871.9107678, c0eb8730-2b26-4cc0-8a9c-019688db568f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.912 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] VM Started (Lifecycle Event)#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.932 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.935 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.939 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.941 2 INFO nova.virt.libvirt.driver [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Instance spawned successfully.#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.942 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.956 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.956 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847871.9108791, c0eb8730-2b26-4cc0-8a9c-019688db568f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.956 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.963 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.964 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.964 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.965 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.965 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.966 2 DEBUG nova.virt.libvirt.driver [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.971 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.974 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847871.9153705, c0eb8730-2b26-4cc0-8a9c-019688db568f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:37:51 np0005473739 nova_compute[259550]: 2025-10-07 14:37:51.974 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:37:52 np0005473739 nova_compute[259550]: 2025-10-07 14:37:52.011 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:52 np0005473739 nova_compute[259550]: 2025-10-07 14:37:52.014 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:37:52 np0005473739 nova_compute[259550]: 2025-10-07 14:37:52.029 2 INFO nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Took 7.11 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:37:52 np0005473739 nova_compute[259550]: 2025-10-07 14:37:52.030 2 DEBUG nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:52 np0005473739 nova_compute[259550]: 2025-10-07 14:37:52.039 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:37:52 np0005473739 nova_compute[259550]: 2025-10-07 14:37:52.088 2 INFO nova.compute.manager [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Took 8.57 seconds to build instance.#033[00m
Oct  7 10:37:52 np0005473739 nova_compute[259550]: 2025-10-07 14:37:52.105 2 DEBUG oslo_concurrency.lockutils [None req-20ffaa2b-3285-4c12-b85d-264b7fdc2c47 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2263: 305 pgs: 305 active+clean; 213 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct  7 10:37:52 np0005473739 nova_compute[259550]: 2025-10-07 14:37:52.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:37:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:37:52 np0005473739 nova_compute[259550]: 2025-10-07 14:37:52.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:37:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #108. Immutable memtables: 0.
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.002264) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 63] Flushing memtable with next log file: 108
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873002336, "job": 63, "event": "flush_started", "num_memtables": 1, "num_entries": 564, "num_deletes": 255, "total_data_size": 585179, "memory_usage": 597096, "flush_reason": "Manual Compaction"}
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 63] Level-0 flush table #109: started
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873007340, "cf_name": "default", "job": 63, "event": "table_file_creation", "file_number": 109, "file_size": 579927, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47180, "largest_seqno": 47743, "table_properties": {"data_size": 576855, "index_size": 1044, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6984, "raw_average_key_size": 18, "raw_value_size": 570728, "raw_average_value_size": 1505, "num_data_blocks": 47, "num_entries": 379, "num_filter_entries": 379, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847834, "oldest_key_time": 1759847834, "file_creation_time": 1759847873, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 109, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 63] Flush lasted 5125 microseconds, and 2862 cpu microseconds.
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.007395) [db/flush_job.cc:967] [default] [JOB 63] Level-0 flush table #109: 579927 bytes OK
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.007420) [db/memtable_list.cc:519] [default] Level-0 commit table #109 started
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.009329) [db/memtable_list.cc:722] [default] Level-0 commit table #109: memtable #1 done
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.009353) EVENT_LOG_v1 {"time_micros": 1759847873009346, "job": 63, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.009372) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 63] Try to delete WAL files size 582008, prev total WAL file size 582008, number of live WAL files 2.
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000105.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.009971) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0031373631' seq:72057594037927935, type:22 .. '6C6F676D0032303132' seq:0, type:0; will stop at (end)
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 64] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 63 Base level 0, inputs: [109(566KB)], [107(10067KB)]
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873010015, "job": 64, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [109], "files_L6": [107], "score": -1, "input_data_size": 10888772, "oldest_snapshot_seqno": -1}
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 64] Generated table #110: 6860 keys, 10765465 bytes, temperature: kUnknown
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873083487, "cf_name": "default", "job": 64, "event": "table_file_creation", "file_number": 110, "file_size": 10765465, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10717401, "index_size": 29840, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17157, "raw_key_size": 178498, "raw_average_key_size": 26, "raw_value_size": 10592311, "raw_average_value_size": 1544, "num_data_blocks": 1173, "num_entries": 6860, "num_filter_entries": 6860, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847873, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 110, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.083882) [db/compaction/compaction_job.cc:1663] [default] [JOB 64] Compacted 1@0 + 1@6 files to L6 => 10765465 bytes
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.085680) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 147.7 rd, 146.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.6, 9.8 +0.0 blob) out(10.3 +0.0 blob), read-write-amplify(37.3) write-amplify(18.6) OK, records in: 7378, records dropped: 518 output_compression: NoCompression
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.085700) EVENT_LOG_v1 {"time_micros": 1759847873085692, "job": 64, "event": "compaction_finished", "compaction_time_micros": 73699, "compaction_time_cpu_micros": 28345, "output_level": 6, "num_output_files": 1, "total_output_size": 10765465, "num_input_records": 7378, "num_output_records": 6860, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000109.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873086474, "job": 64, "event": "table_file_deletion", "file_number": 109}
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000107.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847873089047, "job": 64, "event": "table_file_deletion", "file_number": 107}
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.009823) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.089167) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.089172) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.089173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.089175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:37:53.089176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:37:53 np0005473739 nova_compute[259550]: 2025-10-07 14:37:53.131 2 DEBUG nova.compute.manager [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:53 np0005473739 nova_compute[259550]: 2025-10-07 14:37:53.132 2 DEBUG oslo_concurrency.lockutils [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:53 np0005473739 nova_compute[259550]: 2025-10-07 14:37:53.132 2 DEBUG oslo_concurrency.lockutils [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:53 np0005473739 nova_compute[259550]: 2025-10-07 14:37:53.133 2 DEBUG oslo_concurrency.lockutils [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:53 np0005473739 nova_compute[259550]: 2025-10-07 14:37:53.133 2 DEBUG nova.compute.manager [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] No waiting events found dispatching network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:37:53 np0005473739 nova_compute[259550]: 2025-10-07 14:37:53.133 2 WARNING nova.compute.manager [req-02b53d50-5cc8-421c-96af-c431ef650fb2 req-90107a9e-b85b-4bc6-ad6e-ade36ea47451 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received unexpected event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e for instance with vm_state active and task_state None.#033[00m
Oct  7 10:37:53 np0005473739 nova_compute[259550]: 2025-10-07 14:37:53.937 2 INFO nova.compute.manager [None req-f5b37fd7-aad1-440f-be72-695b56954266 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Get console output#033[00m
Oct  7 10:37:53 np0005473739 nova_compute[259550]: 2025-10-07 14:37:53.942 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:37:53 np0005473739 nova_compute[259550]: 2025-10-07 14:37:53.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:37:54 np0005473739 nova_compute[259550]: 2025-10-07 14:37:54.183 2 DEBUG oslo_concurrency.lockutils [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:54 np0005473739 nova_compute[259550]: 2025-10-07 14:37:54.184 2 DEBUG oslo_concurrency.lockutils [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" acquired by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:54 np0005473739 nova_compute[259550]: 2025-10-07 14:37:54.184 2 DEBUG nova.compute.manager [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:54 np0005473739 nova_compute[259550]: 2025-10-07 14:37:54.189 2 DEBUG nova.compute.manager [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Stopping instance; current vm_state: active, current task_state: powering-off, current DB power_state: 1, current VM power_state: 1 do_stop_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3338#033[00m
Oct  7 10:37:54 np0005473739 nova_compute[259550]: 2025-10-07 14:37:54.190 2 DEBUG nova.objects.instance [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'flavor' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:54 np0005473739 nova_compute[259550]: 2025-10-07 14:37:54.218 2 DEBUG nova.virt.libvirt.driver [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:37:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2264: 305 pgs: 305 active+clean; 214 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 3.3 MiB/s rd, 3.9 MiB/s wr, 208 op/s
Oct  7 10:37:54 np0005473739 nova_compute[259550]: 2025-10-07 14:37:54.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2265: 305 pgs: 305 active+clean; 214 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 4.2 MiB/s rd, 3.9 MiB/s wr, 231 op/s
Oct  7 10:37:56 np0005473739 kernel: tap6e86ce79-9f (unregistering): left promiscuous mode
Oct  7 10:37:56 np0005473739 NetworkManager[44949]: <info>  [1759847876.5507] device (tap6e86ce79-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:37:56 np0005473739 nova_compute[259550]: 2025-10-07 14:37:56.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:56Z|01234|binding|INFO|Releasing lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 from this chassis (sb_readonly=0)
Oct  7 10:37:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:56Z|01235|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 down in Southbound
Oct  7 10:37:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:56Z|01236|binding|INFO|Removing iface tap6e86ce79-9f ovn-installed in OVS
Oct  7 10:37:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.586 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:38:87 10.100.0.6'], port_security=['fa:16:3e:6d:38:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e64a021-390b-4a0c-bb4c-75a19f274777', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28efb734-0152-4914-9f31-b818d894be70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c3ab1e78-0aa6-4d11-8104-a075342a0333', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3cdfe5f-0d4a-4d58-ba3f-aac15c533467, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:37:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.587 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 in datapath 28efb734-0152-4914-9f31-b818d894be70 unbound from our chassis#033[00m
Oct  7 10:37:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.589 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28efb734-0152-4914-9f31-b818d894be70, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:37:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.590 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f8c61eb6-b60a-4f15-b75b-8b824676b230]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.590 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28efb734-0152-4914-9f31-b818d894be70 namespace which is not needed anymore#033[00m
Oct  7 10:37:56 np0005473739 nova_compute[259550]: 2025-10-07 14:37:56.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:56 np0005473739 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct  7 10:37:56 np0005473739 systemd[1]: machine-qemu\x2d144\x2dinstance\x2d00000073.scope: Consumed 13.834s CPU time.
Oct  7 10:37:56 np0005473739 systemd-machined[214580]: Machine qemu-144-instance-00000073 terminated.
Oct  7 10:37:56 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [NOTICE]   (385033) : haproxy version is 2.8.14-c23fe91
Oct  7 10:37:56 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [NOTICE]   (385033) : path to executable is /usr/sbin/haproxy
Oct  7 10:37:56 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [WARNING]  (385033) : Exiting Master process...
Oct  7 10:37:56 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [WARNING]  (385033) : Exiting Master process...
Oct  7 10:37:56 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [ALERT]    (385033) : Current worker (385035) exited with code 143 (Terminated)
Oct  7 10:37:56 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[385029]: [WARNING]  (385033) : All workers exited. Exiting... (0)
Oct  7 10:37:56 np0005473739 systemd[1]: libpod-ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef.scope: Deactivated successfully.
Oct  7 10:37:56 np0005473739 podman[385926]: 2025-10-07 14:37:56.774861269 +0000 UTC m=+0.083640117 container died ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:37:56 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef-userdata-shm.mount: Deactivated successfully.
Oct  7 10:37:56 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a358810f9f8251b9e86fcc8b171db7901750b59cf1dbd689c2d91dd47284bb72-merged.mount: Deactivated successfully.
Oct  7 10:37:56 np0005473739 podman[385926]: 2025-10-07 14:37:56.870006641 +0000 UTC m=+0.178785489 container cleanup ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:37:56 np0005473739 systemd[1]: libpod-conmon-ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef.scope: Deactivated successfully.
Oct  7 10:37:56 np0005473739 podman[385963]: 2025-10-07 14:37:56.989452972 +0000 UTC m=+0.093679434 container remove ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:37:56 np0005473739 nova_compute[259550]: 2025-10-07 14:37:56.989 2 DEBUG nova.compute.manager [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:56 np0005473739 nova_compute[259550]: 2025-10-07 14:37:56.991 2 DEBUG oslo_concurrency.lockutils [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:37:56 np0005473739 nova_compute[259550]: 2025-10-07 14:37:56.991 2 DEBUG oslo_concurrency.lockutils [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:37:56 np0005473739 nova_compute[259550]: 2025-10-07 14:37:56.991 2 DEBUG oslo_concurrency.lockutils [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:56 np0005473739 nova_compute[259550]: 2025-10-07 14:37:56.991 2 DEBUG nova.compute.manager [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:37:56 np0005473739 nova_compute[259550]: 2025-10-07 14:37:56.992 2 WARNING nova.compute.manager [req-548e80f5-41e4-42aa-aadf-3deaa74f0323 req-ef86e87c-1878-47c4-82e8-2df0829309ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state active and task_state powering-off.#033[00m
Oct  7 10:37:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:56.998 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e15ae09b-5b4d-49bc-97c9-2f6dfeacd899]: (4, ('Tue Oct  7 02:37:56 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70 (ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef)\nff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef\nTue Oct  7 02:37:56 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70 (ff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef)\nff32523d6814ee925b44a8ddfe89e0f396b730a1658995b812e1da8a31daefef\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.000 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0eb03fc6-0ca7-4896-a0fd-4479cadcba8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.002 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28efb734-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:37:57 np0005473739 nova_compute[259550]: 2025-10-07 14:37:57.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:57 np0005473739 kernel: tap28efb734-00: left promiscuous mode
Oct  7 10:37:57 np0005473739 nova_compute[259550]: 2025-10-07 14:37:57.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.035 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cd79608f-063b-43a5-bd79-b0ec937cc722]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.070 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99644421-7d72-4695-8977-760d4afb416e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.072 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[13aa0d89-e436-4889-a241-9f2f32b54ba2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.098 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37b37bab-efee-4563-886e-832d0432c288]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 843632, 'reachable_time': 34076, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 385984, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:57 np0005473739 systemd[1]: run-netns-ovnmeta\x2d28efb734\x2d0152\x2d4914\x2d9f31\x2db818d894be70.mount: Deactivated successfully.
Oct  7 10:37:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.102 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28efb734-0152-4914-9f31-b818d894be70 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:37:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:37:57.103 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a0795a74-28db-49ae-82e8-4e5600fdec08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:37:57 np0005473739 nova_compute[259550]: 2025-10-07 14:37:57.237 2 INFO nova.virt.libvirt.driver [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance shutdown successfully after 3 seconds.#033[00m
Oct  7 10:37:57 np0005473739 nova_compute[259550]: 2025-10-07 14:37:57.242 2 INFO nova.virt.libvirt.driver [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance destroyed successfully.#033[00m
Oct  7 10:37:57 np0005473739 nova_compute[259550]: 2025-10-07 14:37:57.243 2 DEBUG nova.objects.instance [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:57 np0005473739 nova_compute[259550]: 2025-10-07 14:37:57.264 2 DEBUG nova.compute.manager [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:37:57 np0005473739 nova_compute[259550]: 2025-10-07 14:37:57.331 2 DEBUG oslo_concurrency.lockutils [None req-547d42b1-1960-4dc9-8287-2ce86110eb9d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" "released" by "nova.compute.manager.ComputeManager.stop_instance.<locals>.do_stop_instance" :: held 3.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:37:57 np0005473739 nova_compute[259550]: 2025-10-07 14:37:57.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:57Z|00137|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:bc:87 10.100.0.12
Oct  7 10:37:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:37:57Z|00138|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:bc:87 10.100.0.12
Oct  7 10:37:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:37:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2266: 305 pgs: 305 active+clean; 214 MiB data, 887 MiB used, 59 GiB / 60 GiB avail; 3.6 MiB/s rd, 2.5 MiB/s wr, 197 op/s
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.164 2 DEBUG nova.compute.manager [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.165 2 DEBUG nova.compute.manager [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing instance network info cache due to event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.165 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.165 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.166 2 DEBUG nova.network.neutron [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.318 2 INFO nova.compute.manager [None req-29e9f3a0-2a3b-49d3-849b-5f74be629d4f 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Get console output#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.693 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'flavor' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.733 2 DEBUG oslo_concurrency.lockutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.734 2 DEBUG oslo_concurrency.lockutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.734 2 DEBUG nova.network.neutron [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.734 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'info_cache' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:37:59 np0005473739 nova_compute[259550]: 2025-10-07 14:37:59.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:38:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:00.071 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:00.071 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:00.072 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:00 np0005473739 podman[385985]: 2025-10-07 14:38:00.080019254 +0000 UTC m=+0.068492991 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:38:00 np0005473739 podman[385986]: 2025-10-07 14:38:00.099475233 +0000 UTC m=+0.086770599 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3)
Oct  7 10:38:00 np0005473739 nova_compute[259550]: 2025-10-07 14:38:00.150 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:38:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2267: 305 pgs: 305 active+clean; 235 MiB data, 900 MiB used, 59 GiB / 60 GiB avail; 3.8 MiB/s rd, 3.7 MiB/s wr, 250 op/s
Oct  7 10:38:00 np0005473739 nova_compute[259550]: 2025-10-07 14:38:00.868 2 DEBUG nova.network.neutron [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updated VIF entry in instance network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:38:00 np0005473739 nova_compute[259550]: 2025-10-07 14:38:00.869 2 DEBUG nova.network.neutron [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updating instance_info_cache with network_info: [{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.192 2 DEBUG nova.network.neutron [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.277 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.278 2 DEBUG nova.compute.manager [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.278 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.278 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.279 2 DEBUG oslo_concurrency.lockutils [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.279 2 DEBUG nova.compute.manager [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.279 2 WARNING nova.compute.manager [req-5418cc3e-9e1f-44ce-8c7b-0511b546921d req-c012d073-1238-4c12-b3fa-f45e80593e09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state stopped and task_state None.#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.362 2 DEBUG oslo_concurrency.lockutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.364 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.364 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.364 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.446 2 INFO nova.virt.libvirt.driver [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance destroyed successfully.#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.447 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.691 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.820 2 DEBUG nova.virt.libvirt.vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:57Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.821 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.822 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.822 2 DEBUG os_vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.824 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e86ce79-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.829 2 INFO os_vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f')#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.835 2 DEBUG nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start _get_guest_xml network_info=[{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.839 2 WARNING nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.846 2 DEBUG nova.virt.libvirt.host [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.847 2 DEBUG nova.virt.libvirt.host [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.850 2 DEBUG nova.virt.libvirt.host [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.851 2 DEBUG nova.virt.libvirt.host [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.852 2 DEBUG nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.852 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.853 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.853 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.853 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.854 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.854 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.854 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.855 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.855 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.855 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.856 2 DEBUG nova.virt.hardware [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:38:01 np0005473739 nova_compute[259550]: 2025-10-07 14:38:01.856 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:38:02 np0005473739 nova_compute[259550]: 2025-10-07 14:38:02.121 2 DEBUG oslo_concurrency.processutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2268: 305 pgs: 305 active+clean; 246 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 2.2 MiB/s wr, 151 op/s
Oct  7 10:38:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:38:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1481707439' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:38:02 np0005473739 nova_compute[259550]: 2025-10-07 14:38:02.598 2 DEBUG oslo_concurrency.processutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:02 np0005473739 nova_compute[259550]: 2025-10-07 14:38:02.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:02 np0005473739 nova_compute[259550]: 2025-10-07 14:38:02.634 2 DEBUG oslo_concurrency.processutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:38:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/690858199' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.109 2 DEBUG oslo_concurrency.processutils [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.111 2 DEBUG nova.virt.libvirt.vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:57Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.112 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.112 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.114 2 DEBUG nova.objects.instance [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.130 2 DEBUG nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  <uuid>4e64a021-390b-4a0c-bb4c-75a19f274777</uuid>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  <name>instance-00000073</name>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-1824501664</nova:name>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:38:01</nova:creationTime>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <nova:port uuid="6e86ce79-9f1b-4e53-8ae5-918e8402b8c6">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <entry name="serial">4e64a021-390b-4a0c-bb4c-75a19f274777</entry>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <entry name="uuid">4e64a021-390b-4a0c-bb4c-75a19f274777</entry>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4e64a021-390b-4a0c-bb4c-75a19f274777_disk">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4e64a021-390b-4a0c-bb4c-75a19f274777_disk.config">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:6d:38:87"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <target dev="tap6e86ce79-9f"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777/console.log" append="off"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <input type="keyboard" bus="usb"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:38:03 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:38:03 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:38:03 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:38:03 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.132 2 DEBUG nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.132 2 DEBUG nova.virt.libvirt.driver [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.133 2 DEBUG nova.virt.libvirt.vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=4,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=<?>,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='powering-on',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:57Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='stopped') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.134 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.134 2 DEBUG nova.network.os_vif_util [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.135 2 DEBUG os_vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.136 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.138 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6e86ce79-9f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.139 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6e86ce79-9f, col_values=(('external_ids', {'iface-id': '6e86ce79-9f1b-4e53-8ae5-918e8402b8c6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:38:87', 'vm-uuid': '4e64a021-390b-4a0c-bb4c-75a19f274777'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:03 np0005473739 NetworkManager[44949]: <info>  [1759847883.2268] manager: (tap6e86ce79-9f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/503)
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.234 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.237 2 INFO os_vif [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f')#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.251 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.251 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.252 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.272 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.273 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.273 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.273 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.273 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:03 np0005473739 kernel: tap6e86ce79-9f: entered promiscuous mode
Oct  7 10:38:03 np0005473739 NetworkManager[44949]: <info>  [1759847883.3207] manager: (tap6e86ce79-9f): new Tun device (/org/freedesktop/NetworkManager/Devices/504)
Oct  7 10:38:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:03Z|01237|binding|INFO|Claiming lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for this chassis.
Oct  7 10:38:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:03Z|01238|binding|INFO|6e86ce79-9f1b-4e53-8ae5-918e8402b8c6: Claiming fa:16:3e:6d:38:87 10.100.0.6
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.330 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:38:87 10.100.0.6'], port_security=['fa:16:3e:6d:38:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e64a021-390b-4a0c-bb4c-75a19f274777', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28efb734-0152-4914-9f31-b818d894be70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'c3ab1e78-0aa6-4d11-8104-a075342a0333', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.195'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3cdfe5f-0d4a-4d58-ba3f-aac15c533467, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.331 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 in datapath 28efb734-0152-4914-9f31-b818d894be70 bound to our chassis#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.332 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 28efb734-0152-4914-9f31-b818d894be70#033[00m
Oct  7 10:38:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:03Z|01239|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 ovn-installed in OVS
Oct  7 10:38:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:03Z|01240|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 up in Southbound
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.346 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c092479-0f87-4e3c-88e2-9beb76ece58e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.347 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap28efb734-01 in ovnmeta-28efb734-0152-4914-9f31-b818d894be70 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:38:03 np0005473739 systemd-udevd[386102]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.349 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap28efb734-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.350 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[101265fd-f328-48ae-b298-2b115f28b66c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.353 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a479bd2a-6855-4a0e-8ca7-03d98c4c09f9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 systemd-machined[214580]: New machine qemu-147-instance-00000073.
Oct  7 10:38:03 np0005473739 NetworkManager[44949]: <info>  [1759847883.3633] device (tap6e86ce79-9f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:38:03 np0005473739 NetworkManager[44949]: <info>  [1759847883.3643] device (tap6e86ce79-9f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.366 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[51d3288d-642c-4227-a5c4-1a3813ec9135]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 systemd[1]: Started Virtual Machine qemu-147-instance-00000073.
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.384 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8bbe98a1-e611-48af-bc22-0f3197741aae]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.537 2 DEBUG nova.compute.manager [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.538 2 DEBUG oslo_concurrency.lockutils [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.538 2 DEBUG oslo_concurrency.lockutils [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.540 2 DEBUG oslo_concurrency.lockutils [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.541 2 DEBUG nova.compute.manager [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.542 2 WARNING nova.compute.manager [req-af292fe6-7eae-46c2-8117-af75a2fddd50 req-ef01f250-3d5e-4301-a57e-ff29eccc6798 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state stopped and task_state powering-on.#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.668 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4b048e41-7be6-44e8-b3fe-ad20371219e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 NetworkManager[44949]: <info>  [1759847883.6759] manager: (tap28efb734-00): new Veth device (/org/freedesktop/NetworkManager/Devices/505)
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.677 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c1b2dce-7441-42e5-aa9a-b4b299769037]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.727 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6f17e931-5dae-4347-9b26-b4bd941c2b5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.731 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c1842400-40f3-4611-b3db-644741d992d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 NetworkManager[44949]: <info>  [1759847883.7571] device (tap28efb734-00): carrier: link connected
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.763 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[02b56e85-4433-44a5-b582-0f0f848f4cc0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:38:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4191226254' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.792 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3462ffbc-353d-4ace-9c64-244c3ebc9182]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28efb734-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 846732, 'reachable_time': 20712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 386155, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.809 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.809 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de8c342b-9e50-4d1e-81cd-1c2991367513]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe71:b7c0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 846732, 'tstamp': 846732}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 386157, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.829 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e64a243a-b47a-412d-9ff3-40392fa4acef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap28efb734-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:71:b7:c0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 357], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 846732, 'reachable_time': 20712, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 386158, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.871 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1777d21d-2e5e-4cff-9e5b-5c5d08b3c612]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.936 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bbe6ef2e-57f3-449d-8f5a-8a0c37ee32ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.937 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28efb734-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.937 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.938 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28efb734-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:03 np0005473739 kernel: tap28efb734-00: entered promiscuous mode
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:03 np0005473739 NetworkManager[44949]: <info>  [1759847883.9410] manager: (tap28efb734-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/506)
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.943 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap28efb734-00, col_values=(('external_ids', {'iface-id': '54dfc7fb-c548-460a-8b73-708819b15ca2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:03Z|01241|binding|INFO|Releasing lport 54dfc7fb-c548-460a-8b73-708819b15ca2 from this chassis (sb_readonly=0)
Oct  7 10:38:03 np0005473739 nova_compute[259550]: 2025-10-07 14:38:03.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.966 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.967 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23fd636c-ce59-451f-b69a-1dd7496a83aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.969 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-28efb734-0152-4914-9f31-b818d894be70
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/28efb734-0152-4914-9f31-b818d894be70.pid.haproxy
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 28efb734-0152-4914-9f31-b818d894be70
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  7 10:38:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:03.970 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'env', 'PROCESS_TAG=haproxy-28efb734-0152-4914-9f31-b818d894be70', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/28efb734-0152-4914-9f31-b818d894be70.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  7 10:38:04 np0005473739 nova_compute[259550]: 2025-10-07 14:38:04.234 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:38:04 np0005473739 nova_compute[259550]: 2025-10-07 14:38:04.234 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:38:04 np0005473739 nova_compute[259550]: 2025-10-07 14:38:04.239 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:38:04 np0005473739 nova_compute[259550]: 2025-10-07 14:38:04.240 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000073 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:38:04 np0005473739 nova_compute[259550]: 2025-10-07 14:38:04.245 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:38:04 np0005473739 nova_compute[259550]: 2025-10-07 14:38:04.246 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:38:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2269: 305 pgs: 305 active+clean; 252 MiB data, 895 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.0 MiB/s wr, 159 op/s
Oct  7 10:38:04 np0005473739 podman[386190]: 2025-10-07 14:38:04.432909704 +0000 UTC m=+0.064169955 container create 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:38:04 np0005473739 systemd[1]: Started libpod-conmon-0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240.scope.
Oct  7 10:38:04 np0005473739 podman[386190]: 2025-10-07 14:38:04.400828348 +0000 UTC m=+0.032088629 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:38:04 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:38:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2bdfd20c34e8e463ac3bd8f75949ae3c20d5cfcc6b96b2e99a9b1bbbdec631c1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:04 np0005473739 nova_compute[259550]: 2025-10-07 14:38:04.513 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 10:38:04 np0005473739 nova_compute[259550]: 2025-10-07 14:38:04.515 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3335MB free_disk=59.876399993896484GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  7 10:38:04 np0005473739 nova_compute[259550]: 2025-10-07 14:38:04.515 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:38:04 np0005473739 nova_compute[259550]: 2025-10-07 14:38:04.515 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:38:04 np0005473739 podman[386190]: 2025-10-07 14:38:04.528728045 +0000 UTC m=+0.159988316 container init 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:38:04 np0005473739 podman[386190]: 2025-10-07 14:38:04.535309431 +0000 UTC m=+0.166569682 container start 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Oct  7 10:38:04 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [NOTICE]   (386209) : New worker (386211) forked
Oct  7 10:38:04 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [NOTICE]   (386209) : Loading success.
Oct  7 10:38:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:04Z|00139|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:46:76:3f 10.100.0.13
Oct  7 10:38:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:04Z|00140|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:46:76:3f 10.100.0.13
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.378 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 4e64a021-390b-4a0c-bb4c-75a19f274777 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.379 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847885.3781834, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.379 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Resumed (Lifecycle Event)
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.381 2 DEBUG nova.compute.manager [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.384 2 INFO nova.virt.libvirt.driver [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance rebooted successfully.
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.384 2 DEBUG nova.compute.manager [None req-39209d07-d7c6-4222-bdbb-ffdb6dc5aa3d 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.532 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.536 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: stopped, current task_state: powering-on, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:38:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0f6b1e7a-4755-4e93-b739-78f2e2d967b6 does not exist
Oct  7 10:38:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e7596c5-248f-4641-bf37-05e5942e1788 does not exist
Oct  7 10:38:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 95565ae4-8791-427e-885a-8a0608957652 does not exist
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.684 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] During sync_power_state the instance has a pending task (powering-on). Skip.
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.684 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847885.3808265, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.684 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Started (Lifecycle Event)
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:38:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.756 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 4e64a021-390b-4a0c-bb4c-75a19f274777 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.757 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 23c0ce36-9e34-4a73-9f99-3b79f8623238 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.757 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance c0eb8730-2b26-4cc0-8a9c-019688db568f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.757 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.757 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.839 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.894 2 DEBUG nova.compute.manager [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.895 2 DEBUG oslo_concurrency.lockutils [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.896 2 DEBUG oslo_concurrency.lockutils [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.896 2 DEBUG oslo_concurrency.lockutils [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.896 2 DEBUG nova.compute.manager [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.897 2 WARNING nova.compute.manager [req-3a938757-ee18-4e94-a4cb-63068c3c469e req-d5c1925c-d7a5-4e04-b509-094d385a5687 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state active and task_state None.
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.898 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:38:05 np0005473739 nova_compute[259550]: 2025-10-07 14:38:05.904 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:38:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:38:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:38:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:38:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:38:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2843257544' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:38:06 np0005473739 podman[386557]: 2025-10-07 14:38:06.347919324 +0000 UTC m=+0.092606595 container create 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:38:06 np0005473739 nova_compute[259550]: 2025-10-07 14:38:06.364 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:38:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2270: 305 pgs: 305 active+clean; 267 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 4.0 MiB/s wr, 129 op/s
Oct  7 10:38:06 np0005473739 nova_compute[259550]: 2025-10-07 14:38:06.372 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:38:06 np0005473739 podman[386557]: 2025-10-07 14:38:06.278661904 +0000 UTC m=+0.023349195 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:38:06 np0005473739 nova_compute[259550]: 2025-10-07 14:38:06.391 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:38:06 np0005473739 systemd[1]: Started libpod-conmon-5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d.scope.
Oct  7 10:38:06 np0005473739 nova_compute[259550]: 2025-10-07 14:38:06.422 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  7 10:38:06 np0005473739 nova_compute[259550]: 2025-10-07 14:38:06.422 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.907s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:38:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:38:06 np0005473739 podman[386557]: 2025-10-07 14:38:06.464660774 +0000 UTC m=+0.209348075 container init 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:38:06 np0005473739 podman[386557]: 2025-10-07 14:38:06.471453175 +0000 UTC m=+0.216140446 container start 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:38:06 np0005473739 nervous_chatelet[386575]: 167 167
Oct  7 10:38:06 np0005473739 systemd[1]: libpod-5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d.scope: Deactivated successfully.
Oct  7 10:38:06 np0005473739 conmon[386575]: conmon 5ac6d37017f3527d7211 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d.scope/container/memory.events
Oct  7 10:38:06 np0005473739 podman[386557]: 2025-10-07 14:38:06.499373151 +0000 UTC m=+0.244060432 container attach 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 10:38:06 np0005473739 podman[386557]: 2025-10-07 14:38:06.49968706 +0000 UTC m=+0.244374331 container died 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:38:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-df55622f615ab0c411ea86f6cb8876ea52ecb4e6afe83845133ae6bdef8c975e-merged.mount: Deactivated successfully.
Oct  7 10:38:06 np0005473739 podman[386557]: 2025-10-07 14:38:06.608673552 +0000 UTC m=+0.353360823 container remove 5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_chatelet, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:38:06 np0005473739 systemd[1]: libpod-conmon-5ac6d37017f3527d72117e8d97287e050281deb5b83af78eb519e2152584af1d.scope: Deactivated successfully.
Oct  7 10:38:06 np0005473739 podman[386599]: 2025-10-07 14:38:06.818810366 +0000 UTC m=+0.046230636 container create b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 10:38:06 np0005473739 systemd[1]: Started libpod-conmon-b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7.scope.
Oct  7 10:38:06 np0005473739 podman[386599]: 2025-10-07 14:38:06.800636651 +0000 UTC m=+0.028056951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:38:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:38:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:06 np0005473739 podman[386599]: 2025-10-07 14:38:06.930311466 +0000 UTC m=+0.157731756 container init b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default)
Oct  7 10:38:06 np0005473739 podman[386599]: 2025-10-07 14:38:06.938225478 +0000 UTC m=+0.165645768 container start b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 10:38:06 np0005473739 podman[386599]: 2025-10-07 14:38:06.941830214 +0000 UTC m=+0.169250514 container attach b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:38:07 np0005473739 nova_compute[259550]: 2025-10-07 14:38:07.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:08 np0005473739 brave_jemison[386615]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:38:08 np0005473739 brave_jemison[386615]: --> relative data size: 1.0
Oct  7 10:38:08 np0005473739 brave_jemison[386615]: --> All data devices are unavailable
Oct  7 10:38:08 np0005473739 systemd[1]: libpod-b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7.scope: Deactivated successfully.
Oct  7 10:38:08 np0005473739 systemd[1]: libpod-b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7.scope: Consumed 1.140s CPU time.
Oct  7 10:38:08 np0005473739 podman[386599]: 2025-10-07 14:38:08.147467669 +0000 UTC m=+1.374887939 container died b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:38:08 np0005473739 nova_compute[259550]: 2025-10-07 14:38:08.153 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:38:08 np0005473739 nova_compute[259550]: 2025-10-07 14:38:08.178 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:38:08 np0005473739 nova_compute[259550]: 2025-10-07 14:38:08.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6757baf4744f99843625abc21a51b0f10ffc781de3918ca49ccaef97a181eb38-merged.mount: Deactivated successfully.
Oct  7 10:38:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2271: 305 pgs: 305 active+clean; 267 MiB data, 906 MiB used, 59 GiB / 60 GiB avail; 428 KiB/s rd, 4.0 MiB/s wr, 99 op/s
Oct  7 10:38:08 np0005473739 podman[386599]: 2025-10-07 14:38:08.483408585 +0000 UTC m=+1.710828845 container remove b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_jemison, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 10:38:08 np0005473739 systemd[1]: libpod-conmon-b69e94e708f1143b4dc2ebe2f922d157fe42a952b26b7e79e26896921f72d8c7.scope: Deactivated successfully.
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.097 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.098 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.131 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:38:09 np0005473739 podman[386798]: 2025-10-07 14:38:09.211869072 +0000 UTC m=+0.042170918 container create d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.240 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.240 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.248 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.248 2 INFO nova.compute.claims [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:38:09 np0005473739 systemd[1]: Started libpod-conmon-d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a.scope.
Oct  7 10:38:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:38:09 np0005473739 podman[386798]: 2025-10-07 14:38:09.195168096 +0000 UTC m=+0.025469962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:38:09 np0005473739 podman[386798]: 2025-10-07 14:38:09.310197159 +0000 UTC m=+0.140499025 container init d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:38:09 np0005473739 podman[386798]: 2025-10-07 14:38:09.317075383 +0000 UTC m=+0.147377229 container start d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:38:09 np0005473739 podman[386798]: 2025-10-07 14:38:09.320813322 +0000 UTC m=+0.151115178 container attach d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:38:09 np0005473739 gracious_carver[386815]: 167 167
Oct  7 10:38:09 np0005473739 systemd[1]: libpod-d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a.scope: Deactivated successfully.
Oct  7 10:38:09 np0005473739 podman[386798]: 2025-10-07 14:38:09.323489244 +0000 UTC m=+0.153791090 container died d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:38:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-eb89a884e12e2cb6ee72a606a5beb4c0b3d7a71f1f32a4a85d99f4a47e3576d0-merged.mount: Deactivated successfully.
Oct  7 10:38:09 np0005473739 podman[386798]: 2025-10-07 14:38:09.383847637 +0000 UTC m=+0.214149483 container remove d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_carver, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:38:09 np0005473739 systemd[1]: libpod-conmon-d71d576389540dd14b0fbac7d85aca5f9567e9b9e46f2484e029b249e643db0a.scope: Deactivated successfully.
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.493 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:09 np0005473739 podman[386840]: 2025-10-07 14:38:09.603134937 +0000 UTC m=+0.052851444 container create a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 10:38:09 np0005473739 systemd[1]: Started libpod-conmon-a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04.scope.
Oct  7 10:38:09 np0005473739 podman[386840]: 2025-10-07 14:38:09.577682167 +0000 UTC m=+0.027398704 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:38:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:38:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad9b428a9f8fecd22fcd82c55e7d3a56f5d89b530a01a03841c8a7d8201fd6d6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad9b428a9f8fecd22fcd82c55e7d3a56f5d89b530a01a03841c8a7d8201fd6d6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad9b428a9f8fecd22fcd82c55e7d3a56f5d89b530a01a03841c8a7d8201fd6d6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ad9b428a9f8fecd22fcd82c55e7d3a56f5d89b530a01a03841c8a7d8201fd6d6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:09 np0005473739 podman[386840]: 2025-10-07 14:38:09.70204138 +0000 UTC m=+0.151757917 container init a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 10:38:09 np0005473739 podman[386840]: 2025-10-07 14:38:09.710165776 +0000 UTC m=+0.159882283 container start a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 10:38:09 np0005473739 podman[386840]: 2025-10-07 14:38:09.714491443 +0000 UTC m=+0.164207980 container attach a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default)
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.772 2 INFO nova.compute.manager [None req-8b0ee77c-b5c0-413c-bbf2-c43ba7718c3b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Get console output#033[00m
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.780 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:38:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:38:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3163642183' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.969 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:09 np0005473739 nova_compute[259550]: 2025-10-07 14:38:09.978 2 DEBUG nova.compute.provider_tree [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.016 2 DEBUG nova.scheduler.client.report [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.054 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.814s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.055 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.103 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.104 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.131 2 INFO nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.153 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.242 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.244 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.244 2 INFO nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Creating image(s)
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.272 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.310 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.355 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.359 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:38:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2272: 305 pgs: 305 active+clean; 279 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 4.3 MiB/s wr, 165 op/s
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.475 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.116s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.476 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.477 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.477 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.504 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.512 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]: {
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:    "0": [
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:        {
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "devices": [
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "/dev/loop3"
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            ],
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_name": "ceph_lv0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_size": "21470642176",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "name": "ceph_lv0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "tags": {
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.cluster_name": "ceph",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.crush_device_class": "",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.encrypted": "0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.osd_id": "0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.type": "block",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.vdo": "0"
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            },
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "type": "block",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "vg_name": "ceph_vg0"
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:        }
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:    ],
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:    "1": [
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:        {
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "devices": [
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "/dev/loop4"
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            ],
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_name": "ceph_lv1",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_size": "21470642176",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "name": "ceph_lv1",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "tags": {
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.cluster_name": "ceph",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.crush_device_class": "",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.encrypted": "0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.osd_id": "1",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.type": "block",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.vdo": "0"
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            },
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "type": "block",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "vg_name": "ceph_vg1"
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:        }
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:    ],
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:    "2": [
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:        {
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "devices": [
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "/dev/loop5"
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            ],
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_name": "ceph_lv2",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_size": "21470642176",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "name": "ceph_lv2",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "tags": {
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.cluster_name": "ceph",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.crush_device_class": "",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.encrypted": "0",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.osd_id": "2",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.type": "block",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:                "ceph.vdo": "0"
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            },
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "type": "block",
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:            "vg_name": "ceph_vg2"
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:        }
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]:    ]
Oct  7 10:38:10 np0005473739 trusting_banzai[386876]: }
Oct  7 10:38:10 np0005473739 systemd[1]: libpod-a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04.scope: Deactivated successfully.
Oct  7 10:38:10 np0005473739 podman[386840]: 2025-10-07 14:38:10.659222576 +0000 UTC m=+1.108939093 container died a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:38:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ad9b428a9f8fecd22fcd82c55e7d3a56f5d89b530a01a03841c8a7d8201fd6d6-merged.mount: Deactivated successfully.
Oct  7 10:38:10 np0005473739 podman[386840]: 2025-10-07 14:38:10.731540149 +0000 UTC m=+1.181256656 container remove a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_banzai, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.739 2 DEBUG nova.policy [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:38:10 np0005473739 systemd[1]: libpod-conmon-a35fc9271608224225c7650760a8c0077f3c217aabd7b786adea00fb5cf7ac04.scope: Deactivated successfully.
Oct  7 10:38:10 np0005473739 nova_compute[259550]: 2025-10-07 14:38:10.945 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:38:11 np0005473739 nova_compute[259550]: 2025-10-07 14:38:11.017 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:38:11 np0005473739 nova_compute[259550]: 2025-10-07 14:38:11.114 2 DEBUG nova.objects.instance [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 4c61749b-b18d-4fbe-b99c-90e15ced9469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:38:11 np0005473739 nova_compute[259550]: 2025-10-07 14:38:11.158 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:38:11 np0005473739 nova_compute[259550]: 2025-10-07 14:38:11.159 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Ensure instance console log exists: /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:38:11 np0005473739 nova_compute[259550]: 2025-10-07 14:38:11.159 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:38:11 np0005473739 nova_compute[259550]: 2025-10-07 14:38:11.160 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:38:11 np0005473739 nova_compute[259550]: 2025-10-07 14:38:11.160 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:38:11 np0005473739 podman[387205]: 2025-10-07 14:38:11.384098985 +0000 UTC m=+0.040215135 container create a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:38:11 np0005473739 systemd[1]: Started libpod-conmon-a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979.scope.
Oct  7 10:38:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:38:11 np0005473739 podman[387205]: 2025-10-07 14:38:11.365962961 +0000 UTC m=+0.022079141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:38:11 np0005473739 podman[387205]: 2025-10-07 14:38:11.461730529 +0000 UTC m=+0.117846709 container init a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 10:38:11 np0005473739 podman[387205]: 2025-10-07 14:38:11.469868237 +0000 UTC m=+0.125984387 container start a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:38:11 np0005473739 nice_herschel[387221]: 167 167
Oct  7 10:38:11 np0005473739 systemd[1]: libpod-a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979.scope: Deactivated successfully.
Oct  7 10:38:11 np0005473739 podman[387205]: 2025-10-07 14:38:11.476623038 +0000 UTC m=+0.132739208 container attach a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 10:38:11 np0005473739 podman[387205]: 2025-10-07 14:38:11.477224714 +0000 UTC m=+0.133340864 container died a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 10:38:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a5abfc1ab273f3a11bd4dbbcd78e5cd262ce9e46d1441f07dd377d82b193ed81-merged.mount: Deactivated successfully.
Oct  7 10:38:11 np0005473739 podman[387205]: 2025-10-07 14:38:11.512825175 +0000 UTC m=+0.168941325 container remove a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nice_herschel, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:38:11 np0005473739 systemd[1]: libpod-conmon-a4290636f2a45c21335c69a2a28b1f1c43ecba0ae6691ec6173abcff61a38979.scope: Deactivated successfully.
Oct  7 10:38:11 np0005473739 podman[387247]: 2025-10-07 14:38:11.7367965 +0000 UTC m=+0.043703159 container create d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:38:11 np0005473739 systemd[1]: Started libpod-conmon-d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0.scope.
Oct  7 10:38:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:38:11 np0005473739 podman[387247]: 2025-10-07 14:38:11.718317806 +0000 UTC m=+0.025224445 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:38:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5568478be26ef2a3b0c9fd811b8bd875471c54e4e444b60d1a673eba2b806468/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5568478be26ef2a3b0c9fd811b8bd875471c54e4e444b60d1a673eba2b806468/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5568478be26ef2a3b0c9fd811b8bd875471c54e4e444b60d1a673eba2b806468/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5568478be26ef2a3b0c9fd811b8bd875471c54e4e444b60d1a673eba2b806468/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:38:11 np0005473739 podman[387247]: 2025-10-07 14:38:11.827047721 +0000 UTC m=+0.133954390 container init d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:38:11 np0005473739 podman[387247]: 2025-10-07 14:38:11.835297261 +0000 UTC m=+0.142203900 container start d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:38:11 np0005473739 podman[387247]: 2025-10-07 14:38:11.839008141 +0000 UTC m=+0.145914790 container attach d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 10:38:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2273: 305 pgs: 305 active+clean; 279 MiB data, 921 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 3.1 MiB/s wr, 147 op/s
Oct  7 10:38:12 np0005473739 nova_compute[259550]: 2025-10-07 14:38:12.598 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Successfully created port: ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:38:12 np0005473739 nova_compute[259550]: 2025-10-07 14:38:12.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:12 np0005473739 strange_turing[387264]: {
Oct  7 10:38:12 np0005473739 strange_turing[387264]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "osd_id": 2,
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "type": "bluestore"
Oct  7 10:38:12 np0005473739 strange_turing[387264]:    },
Oct  7 10:38:12 np0005473739 strange_turing[387264]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "osd_id": 1,
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "type": "bluestore"
Oct  7 10:38:12 np0005473739 strange_turing[387264]:    },
Oct  7 10:38:12 np0005473739 strange_turing[387264]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "osd_id": 0,
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:38:12 np0005473739 strange_turing[387264]:        "type": "bluestore"
Oct  7 10:38:12 np0005473739 strange_turing[387264]:    }
Oct  7 10:38:12 np0005473739 strange_turing[387264]: }
Oct  7 10:38:12 np0005473739 systemd[1]: libpod-d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0.scope: Deactivated successfully.
Oct  7 10:38:12 np0005473739 systemd[1]: libpod-d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0.scope: Consumed 1.004s CPU time.
Oct  7 10:38:12 np0005473739 podman[387247]: 2025-10-07 14:38:12.846293615 +0000 UTC m=+1.153200254 container died d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 10:38:12 np0005473739 nova_compute[259550]: 2025-10-07 14:38:12.910 2 DEBUG nova.compute.manager [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:12 np0005473739 nova_compute[259550]: 2025-10-07 14:38:12.912 2 DEBUG nova.compute.manager [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing instance network info cache due to event network-changed-1c96bebd-0f68-48d9-9bab-486d6e56cb4e. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:38:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5568478be26ef2a3b0c9fd811b8bd875471c54e4e444b60d1a673eba2b806468-merged.mount: Deactivated successfully.
Oct  7 10:38:12 np0005473739 nova_compute[259550]: 2025-10-07 14:38:12.912 2 DEBUG oslo_concurrency.lockutils [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:38:12 np0005473739 nova_compute[259550]: 2025-10-07 14:38:12.913 2 DEBUG oslo_concurrency.lockutils [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:38:12 np0005473739 nova_compute[259550]: 2025-10-07 14:38:12.913 2 DEBUG nova.network.neutron [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Refreshing network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:38:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:13 np0005473739 podman[387247]: 2025-10-07 14:38:13.038038849 +0000 UTC m=+1.344945488 container remove d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=strange_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:38:13 np0005473739 systemd[1]: libpod-conmon-d456728bb2ed7c87203f09f99c80a55e460ab62af36c44cc0f60a485b6b684c0.scope: Deactivated successfully.
Oct  7 10:38:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:38:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:38:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:38:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:38:13 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 19b7a85f-fa64-4dc5-b708-924760ea4793 does not exist
Oct  7 10:38:13 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cf005306-4ca7-4f9c-ae49-64ea8ef05575 does not exist
Oct  7 10:38:13 np0005473739 nova_compute[259550]: 2025-10-07 14:38:13.284 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:13 np0005473739 podman[387360]: 2025-10-07 14:38:13.392076609 +0000 UTC m=+0.083385488 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent)
Oct  7 10:38:13 np0005473739 podman[387361]: 2025-10-07 14:38:13.40595602 +0000 UTC m=+0.094076934 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:38:13 np0005473739 nova_compute[259550]: 2025-10-07 14:38:13.674 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Successfully created port: 1ee1b68d-6081-4e61-a797-c2e41ac53a29 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:38:14 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:38:14 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:38:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2274: 305 pgs: 305 active+clean; 310 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 161 op/s
Oct  7 10:38:15 np0005473739 nova_compute[259550]: 2025-10-07 14:38:15.885 2 DEBUG nova.network.neutron [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updated VIF entry in instance network info cache for port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:38:15 np0005473739 nova_compute[259550]: 2025-10-07 14:38:15.887 2 DEBUG nova.network.neutron [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updating instance_info_cache with network_info: [{"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:15 np0005473739 nova_compute[259550]: 2025-10-07 14:38:15.964 2 DEBUG oslo_concurrency.lockutils [req-748ba35f-0669-4183-91be-c1cf9b015ef7 req-66bb247d-7cc7-4671-b45c-3cde8a2e3490 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c0eb8730-2b26-4cc0-8a9c-019688db568f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:16 np0005473739 nova_compute[259550]: 2025-10-07 14:38:16.345 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Successfully updated port: ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:38:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2275: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 144 op/s
Oct  7 10:38:16 np0005473739 nova_compute[259550]: 2025-10-07 14:38:16.498 2 DEBUG nova.compute.manager [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:16 np0005473739 nova_compute[259550]: 2025-10-07 14:38:16.499 2 DEBUG nova.compute.manager [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing instance network info cache due to event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:38:16 np0005473739 nova_compute[259550]: 2025-10-07 14:38:16.500 2 DEBUG oslo_concurrency.lockutils [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:38:16 np0005473739 nova_compute[259550]: 2025-10-07 14:38:16.500 2 DEBUG oslo_concurrency.lockutils [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:38:16 np0005473739 nova_compute[259550]: 2025-10-07 14:38:16.501 2 DEBUG nova.network.neutron [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing network info cache for port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:38:16 np0005473739 nova_compute[259550]: 2025-10-07 14:38:16.790 2 DEBUG nova.network.neutron [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:38:17 np0005473739 nova_compute[259550]: 2025-10-07 14:38:17.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:17 np0005473739 nova_compute[259550]: 2025-10-07 14:38:17.657 2 DEBUG nova.network.neutron [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:17 np0005473739 nova_compute[259550]: 2025-10-07 14:38:17.759 2 DEBUG oslo_concurrency.lockutils [req-ebeb8ec0-a850-440e-a0cf-f63df1435500 req-b8c86ae9-39e2-48de-815b-53a81cc28095 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:17 np0005473739 nova_compute[259550]: 2025-10-07 14:38:17.811 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Successfully updated port: 1ee1b68d-6081-4e61-a797-c2e41ac53a29 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:38:17 np0005473739 nova_compute[259550]: 2025-10-07 14:38:17.876 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:38:17 np0005473739 nova_compute[259550]: 2025-10-07 14:38:17.877 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:38:17 np0005473739 nova_compute[259550]: 2025-10-07 14:38:17.878 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:38:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:18 np0005473739 nova_compute[259550]: 2025-10-07 14:38:18.074 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:38:18 np0005473739 nova_compute[259550]: 2025-10-07 14:38:18.288 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2276: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct  7 10:38:18 np0005473739 nova_compute[259550]: 2025-10-07 14:38:18.601 2 DEBUG nova.compute.manager [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-changed-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:18 np0005473739 nova_compute[259550]: 2025-10-07 14:38:18.602 2 DEBUG nova.compute.manager [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing instance network info cache due to event network-changed-1ee1b68d-6081-4e61-a797-c2e41ac53a29. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:38:18 np0005473739 nova_compute[259550]: 2025-10-07 14:38:18.602 2 DEBUG oslo_concurrency.lockutils [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:38:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2277: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 128 op/s
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.074 2 DEBUG nova.network.neutron [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.352 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.352 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance network_info: |[{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.353 2 DEBUG oslo_concurrency.lockutils [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.353 2 DEBUG nova.network.neutron [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing network info cache for port 1ee1b68d-6081-4e61-a797-c2e41ac53a29 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.357 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Start _get_guest_xml network_info=[{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.362 2 WARNING nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.370 2 DEBUG nova.virt.libvirt.host [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.371 2 DEBUG nova.virt.libvirt.host [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.374 2 DEBUG nova.virt.libvirt.host [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.374 2 DEBUG nova.virt.libvirt.host [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.375 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.375 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.375 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.376 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.376 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.376 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.376 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.377 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.377 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.377 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.377 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.377 2 DEBUG nova.virt.hardware [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.380 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:38:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/407483761' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.899 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.937 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:38:21 np0005473739 nova_compute[259550]: 2025-10-07 14:38:21.944 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2278: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 1.8 MiB/s wr, 63 op/s
Oct  7 10:38:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:38:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451410457' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.418 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.419 2 DEBUG nova.virt.libvirt.vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:10Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.420 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.420 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.421 2 DEBUG nova.virt.libvirt.vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:10Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.421 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.422 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.423 2 DEBUG nova.objects.instance [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 4c61749b-b18d-4fbe-b99c-90e15ced9469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.475 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.475 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.556 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  <uuid>4c61749b-b18d-4fbe-b99c-90e15ced9469</uuid>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  <name>instance-00000076</name>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-1890244122</nova:name>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:38:21</nova:creationTime>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <nova:port uuid="ad5643dc-1b43-4ffa-b380-8ee6fbac98fe">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <nova:port uuid="1ee1b68d-6081-4e61-a797-c2e41ac53a29">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe32:23b" ipVersion="6"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe32:23b" ipVersion="6"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <entry name="serial">4c61749b-b18d-4fbe-b99c-90e15ced9469</entry>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <entry name="uuid">4c61749b-b18d-4fbe-b99c-90e15ced9469</entry>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4c61749b-b18d-4fbe-b99c-90e15ced9469_disk">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:f4:e8:e8"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <target dev="tapad5643dc-1b"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:32:02:3b"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <target dev="tap1ee1b68d-60"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/console.log" append="off"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:38:22 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:38:22 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:38:22 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:38:22 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.558 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Preparing to wait for external event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.558 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.558 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.559 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.559 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Preparing to wait for external event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.559 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.559 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.560 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.560 2 DEBUG nova.virt.libvirt.vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:10Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.560 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.561 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.561 2 DEBUG os_vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.562 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.563 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.565 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapad5643dc-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.566 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapad5643dc-1b, col_values=(('external_ids', {'iface-id': 'ad5643dc-1b43-4ffa-b380-8ee6fbac98fe', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f4:e8:e8', 'vm-uuid': '4c61749b-b18d-4fbe-b99c-90e15ced9469'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:22 np0005473739 NetworkManager[44949]: <info>  [1759847902.5688] manager: (tapad5643dc-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/507)
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.577 2 INFO os_vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b')#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.578 2 DEBUG nova.virt.libvirt.vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:10Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.579 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.579 2 DEBUG nova.network.os_vif_util [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.580 2 DEBUG os_vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.580 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.581 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.583 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap1ee1b68d-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.583 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap1ee1b68d-60, col_values=(('external_ids', {'iface-id': '1ee1b68d-6081-4e61-a797-c2e41ac53a29', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:02:3b', 'vm-uuid': '4c61749b-b18d-4fbe-b99c-90e15ced9469'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:22 np0005473739 NetworkManager[44949]: <info>  [1759847902.5854] manager: (tap1ee1b68d-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/508)
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.596 2 INFO os_vif [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60')#033[00m
Oct  7 10:38:22 np0005473739 nova_compute[259550]: 2025-10-07 14:38:22.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:38:22
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'images', 'backups', 'vms', 'volumes', 'default.rgw.meta', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data']
Oct  7 10:38:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:38:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:22Z|00141|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:38:87 10.100.0.6
Oct  7 10:38:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:38:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:38:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:38:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:38:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:38:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:38:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:38:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:38:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:38:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:38:23 np0005473739 nova_compute[259550]: 2025-10-07 14:38:23.079 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:38:23 np0005473739 nova_compute[259550]: 2025-10-07 14:38:23.314 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:38:23 np0005473739 nova_compute[259550]: 2025-10-07 14:38:23.314 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:38:23 np0005473739 nova_compute[259550]: 2025-10-07 14:38:23.315 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:f4:e8:e8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:38:23 np0005473739 nova_compute[259550]: 2025-10-07 14:38:23.315 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:32:02:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:38:23 np0005473739 nova_compute[259550]: 2025-10-07 14:38:23.316 2 INFO nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Using config drive#033[00m
Oct  7 10:38:23 np0005473739 nova_compute[259550]: 2025-10-07 14:38:23.336 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:38:23 np0005473739 nova_compute[259550]: 2025-10-07 14:38:23.725 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:23 np0005473739 nova_compute[259550]: 2025-10-07 14:38:23.725 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:23 np0005473739 nova_compute[259550]: 2025-10-07 14:38:23.732 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:38:23 np0005473739 nova_compute[259550]: 2025-10-07 14:38:23.732 2 INFO nova.compute.claims [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:38:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2279: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 331 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct  7 10:38:24 np0005473739 nova_compute[259550]: 2025-10-07 14:38:24.742 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:38:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1304747458' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:38:25 np0005473739 nova_compute[259550]: 2025-10-07 14:38:25.190 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:25 np0005473739 nova_compute[259550]: 2025-10-07 14:38:25.195 2 DEBUG nova.compute.provider_tree [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:38:25 np0005473739 nova_compute[259550]: 2025-10-07 14:38:25.279 2 DEBUG nova.scheduler.client.report [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:38:25 np0005473739 nova_compute[259550]: 2025-10-07 14:38:25.410 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.685s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:25 np0005473739 nova_compute[259550]: 2025-10-07 14:38:25.411 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:38:25 np0005473739 nova_compute[259550]: 2025-10-07 14:38:25.619 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:38:25 np0005473739 nova_compute[259550]: 2025-10-07 14:38:25.620 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:38:25 np0005473739 nova_compute[259550]: 2025-10-07 14:38:25.764 2 DEBUG nova.network.neutron [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updated VIF entry in instance network info cache for port 1ee1b68d-6081-4e61-a797-c2e41ac53a29. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:38:25 np0005473739 nova_compute[259550]: 2025-10-07 14:38:25.764 2 DEBUG nova.network.neutron [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:25 np0005473739 nova_compute[259550]: 2025-10-07 14:38:25.768 2 DEBUG nova.policy [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.018 2 DEBUG oslo_concurrency.lockutils [req-28d83d1b-369a-4dc4-a0f1-fec923ca2416 req-764cc7f8-180f-4206-a585-94ff01a054f3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.023 2 INFO nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.113 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.329 2 INFO nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Creating config drive at /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.334 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9487p5s3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2280: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 529 KiB/s rd, 679 KiB/s wr, 44 op/s
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.514 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp9487p5s3" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.540 2 DEBUG nova.storage.rbd_utils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.544 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.579 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.581 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.581 2 INFO nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Creating image(s)#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.607 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.636 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.660 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.665 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.737 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.738 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.739 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.739 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.761 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:38:26 np0005473739 nova_compute[259550]: 2025-10-07 14:38:26.764 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:27 np0005473739 nova_compute[259550]: 2025-10-07 14:38:27.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:27 np0005473739 nova_compute[259550]: 2025-10-07 14:38:27.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2281: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 524 KiB/s rd, 16 KiB/s wr, 42 op/s
Oct  7 10:38:28 np0005473739 nova_compute[259550]: 2025-10-07 14:38:28.718 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Successfully created port: 134818f1-9848-45e7-ac27-c5290f58f87f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:38:28 np0005473739 nova_compute[259550]: 2025-10-07 14:38:28.788 2 INFO nova.compute.manager [None req-96acce94-cb44-4df9-a5c9-a72d4d0867de 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Get console output#033[00m
Oct  7 10:38:28 np0005473739 nova_compute[259550]: 2025-10-07 14:38:28.796 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:38:29 np0005473739 nova_compute[259550]: 2025-10-07 14:38:29.345 2 DEBUG oslo_concurrency.processutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config 4c61749b-b18d-4fbe-b99c-90e15ced9469_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.801s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:29 np0005473739 nova_compute[259550]: 2025-10-07 14:38:29.345 2 INFO nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Deleting local config drive /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469/disk.config because it was imported into RBD.#033[00m
Oct  7 10:38:29 np0005473739 NetworkManager[44949]: <info>  [1759847909.3952] manager: (tapad5643dc-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/509)
Oct  7 10:38:29 np0005473739 kernel: tapad5643dc-1b: entered promiscuous mode
Oct  7 10:38:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:29Z|01242|binding|INFO|Claiming lport ad5643dc-1b43-4ffa-b380-8ee6fbac98fe for this chassis.
Oct  7 10:38:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:29Z|01243|binding|INFO|ad5643dc-1b43-4ffa-b380-8ee6fbac98fe: Claiming fa:16:3e:f4:e8:e8 10.100.0.14
Oct  7 10:38:29 np0005473739 nova_compute[259550]: 2025-10-07 14:38:29.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:29 np0005473739 NetworkManager[44949]: <info>  [1759847909.4184] manager: (tap1ee1b68d-60): new Tun device (/org/freedesktop/NetworkManager/Devices/510)
Oct  7 10:38:29 np0005473739 kernel: tap1ee1b68d-60: entered promiscuous mode
Oct  7 10:38:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:29Z|01244|binding|INFO|Setting lport ad5643dc-1b43-4ffa-b380-8ee6fbac98fe ovn-installed in OVS
Oct  7 10:38:29 np0005473739 nova_compute[259550]: 2025-10-07 14:38:29.422 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:29Z|01245|if_status|INFO|Dropped 5 log messages in last 47 seconds (most recently, 47 seconds ago) due to excessive rate
Oct  7 10:38:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:29Z|01246|if_status|INFO|Not updating pb chassis for 1ee1b68d-6081-4e61-a797-c2e41ac53a29 now as sb is readonly
Oct  7 10:38:29 np0005473739 systemd-udevd[387660]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:38:29 np0005473739 systemd-udevd[387659]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:38:29 np0005473739 nova_compute[259550]: 2025-10-07 14:38:29.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:29 np0005473739 NetworkManager[44949]: <info>  [1759847909.4573] device (tapad5643dc-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:38:29 np0005473739 NetworkManager[44949]: <info>  [1759847909.4580] device (tap1ee1b68d-60): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:38:29 np0005473739 systemd-machined[214580]: New machine qemu-148-instance-00000076.
Oct  7 10:38:29 np0005473739 NetworkManager[44949]: <info>  [1759847909.4586] device (tapad5643dc-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:38:29 np0005473739 NetworkManager[44949]: <info>  [1759847909.4590] device (tap1ee1b68d-60): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:38:29 np0005473739 systemd[1]: Started Virtual Machine qemu-148-instance-00000076.
Oct  7 10:38:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:29Z|01247|binding|INFO|Claiming lport 1ee1b68d-6081-4e61-a797-c2e41ac53a29 for this chassis.
Oct  7 10:38:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:29Z|01248|binding|INFO|1ee1b68d-6081-4e61-a797-c2e41ac53a29: Claiming fa:16:3e:32:02:3b 2001:db8:0:1:f816:3eff:fe32:23b 2001:db8::f816:3eff:fe32:23b
Oct  7 10:38:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:29Z|01249|binding|INFO|Setting lport ad5643dc-1b43-4ffa-b380-8ee6fbac98fe up in Southbound
Oct  7 10:38:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:29Z|01250|binding|INFO|Setting lport 1ee1b68d-6081-4e61-a797-c2e41ac53a29 ovn-installed in OVS
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.644 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e8:e8 10.100.0.14'], port_security=['fa:16:3e:f4:e8:e8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4c61749b-b18d-4fbe-b99c-90e15ced9469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dfb73c9-a89b-4659-8761-7d887493b39b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=169b3722-7b9a-4733-8efb-f5bd5c71aacf, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.645 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe in datapath 5dfb73c9-a89b-4659-8761-7d887493b39b bound to our chassis#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.647 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5dfb73c9-a89b-4659-8761-7d887493b39b#033[00m
Oct  7 10:38:29 np0005473739 nova_compute[259550]: 2025-10-07 14:38:29.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.662 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4996dd5-7867-4aa4-918c-f721005635df]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.691 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0ae6de16-8a32-4a37-ad34-d07537643677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.694 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9cf7e92d-b805-4137-ba04-28db463dfacc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.723 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a0023808-9a1d-46da-aa40-77c0880d12e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.742 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[88ec0337-7d8e-4d86-8b75-9995a1a91805]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5dfb73c9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:44:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 5, 'rx_bytes': 986, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 5, 'rx_bytes': 986, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844675, 'reachable_time': 28533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 9, 'inoctets': 776, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 9, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 776, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 9, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387683, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.751 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:02:3b 2001:db8:0:1:f816:3eff:fe32:23b 2001:db8::f816:3eff:fe32:23b'], port_security=['fa:16:3e:32:02:3b 2001:db8:0:1:f816:3eff:fe32:23b 2001:db8::f816:3eff:fe32:23b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe32:23b/64 2001:db8::f816:3eff:fe32:23b/64', 'neutron:device_id': '4c61749b-b18d-4fbe-b99c-90e15ced9469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fceb8d2-9d2a-45b9-beb8-73d518298477, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1ee1b68d-6081-4e61-a797-c2e41ac53a29) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:38:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:29Z|01251|binding|INFO|Setting lport 1ee1b68d-6081-4e61-a797-c2e41ac53a29 up in Southbound
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.764 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c41e03e-09e5-4d22-91e3-db8da9a8ce93]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5dfb73c9-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844686, 'tstamp': 844686}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387693, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5dfb73c9-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844689, 'tstamp': 844689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387693, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.767 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dfb73c9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:29 np0005473739 nova_compute[259550]: 2025-10-07 14:38:29.785 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 3.021s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.812 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5dfb73c9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.812 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.813 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5dfb73c9-a0, col_values=(('external_ids', {'iface-id': '30dd4552-fdd6-4d17-af87-77adcec53278'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.813 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.815 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1ee1b68d-6081-4e61-a797-c2e41ac53a29 in datapath 4c956141-6a21-499d-99b1-885d1a2972f7 bound to our chassis#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.816 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c956141-6a21-499d-99b1-885d1a2972f7#033[00m
Oct  7 10:38:29 np0005473739 nova_compute[259550]: 2025-10-07 14:38:29.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.835 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b4a8770-2476-40a9-87db-ff3033a4f2e6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.867 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7397394a-edf9-413e-8f4c-ba235748524a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.870 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[047e9ab7-f4bf-4196-b07c-78d7dcc9a6bf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 nova_compute[259550]: 2025-10-07 14:38:29.903 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.907 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9ec4dd48-3d59-4efb-a8f9-507c25217ee4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.929 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[500bf550-9925-4c64-87e4-e617c877b27b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c956141-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:a2:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844816, 'reachable_time': 18374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387764, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.944 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f770757c-d0e8-4ee9-8823-7865bdcf92ef]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4c956141-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844830, 'tstamp': 844830}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 387774, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.946 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c956141-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c956141-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c956141-60, col_values=(('external_ids', {'iface-id': 'ec328f15-1843-4594-8d39-b0d2d9796360'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:29.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2282: 305 pgs: 305 active+clean; 337 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 533 KiB/s rd, 623 KiB/s wr, 59 op/s
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.576 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847910.5753996, 4c61749b-b18d-4fbe-b99c-90e15ced9469 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.576 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] VM Started (Lifecycle Event)#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.730 2 DEBUG nova.objects.instance [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 716d82da-745f-43ca-a7fa-38f02d3e5dc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.750 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.754 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847910.5755756, 4c61749b-b18d-4fbe-b99c-90e15ced9469 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.754 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.979 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.980 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Ensure instance console log exists: /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.980 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.981 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.981 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.986 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:38:30 np0005473739 nova_compute[259550]: 2025-10-07 14:38:30.989 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:38:31 np0005473739 podman[387801]: 2025-10-07 14:38:31.089725319 +0000 UTC m=+0.064499684 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:38:31 np0005473739 podman[387800]: 2025-10-07 14:38:31.0897494 +0000 UTC m=+0.064922486 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:38:31 np0005473739 nova_compute[259550]: 2025-10-07 14:38:31.547 2 DEBUG nova.compute.manager [req-9e7e1f96-5789-411a-9926-bf3af1ec4fc3 req-d508dfb5-bdd4-48c7-b519-77584f14f3cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:31 np0005473739 nova_compute[259550]: 2025-10-07 14:38:31.547 2 DEBUG oslo_concurrency.lockutils [req-9e7e1f96-5789-411a-9926-bf3af1ec4fc3 req-d508dfb5-bdd4-48c7-b519-77584f14f3cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:31 np0005473739 nova_compute[259550]: 2025-10-07 14:38:31.547 2 DEBUG oslo_concurrency.lockutils [req-9e7e1f96-5789-411a-9926-bf3af1ec4fc3 req-d508dfb5-bdd4-48c7-b519-77584f14f3cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:31 np0005473739 nova_compute[259550]: 2025-10-07 14:38:31.548 2 DEBUG oslo_concurrency.lockutils [req-9e7e1f96-5789-411a-9926-bf3af1ec4fc3 req-d508dfb5-bdd4-48c7-b519-77584f14f3cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:31 np0005473739 nova_compute[259550]: 2025-10-07 14:38:31.548 2 DEBUG nova.compute.manager [req-9e7e1f96-5789-411a-9926-bf3af1ec4fc3 req-d508dfb5-bdd4-48c7-b519-77584f14f3cd 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Processing event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:38:31 np0005473739 nova_compute[259550]: 2025-10-07 14:38:31.668 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:38:31 np0005473739 nova_compute[259550]: 2025-10-07 14:38:31.684 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Successfully updated port: 134818f1-9848-45e7-ac27-c5290f58f87f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.039 2 DEBUG nova.compute.manager [req-9e2a0c00-e925-4a36-bec9-b8d08a68359d req-a75c543f-387c-479a-a45c-cb0e4b56efe1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.040 2 DEBUG oslo_concurrency.lockutils [req-9e2a0c00-e925-4a36-bec9-b8d08a68359d req-a75c543f-387c-479a-a45c-cb0e4b56efe1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.040 2 DEBUG oslo_concurrency.lockutils [req-9e2a0c00-e925-4a36-bec9-b8d08a68359d req-a75c543f-387c-479a-a45c-cb0e4b56efe1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.040 2 DEBUG oslo_concurrency.lockutils [req-9e2a0c00-e925-4a36-bec9-b8d08a68359d req-a75c543f-387c-479a-a45c-cb0e4b56efe1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.040 2 DEBUG nova.compute.manager [req-9e2a0c00-e925-4a36-bec9-b8d08a68359d req-a75c543f-387c-479a-a45c-cb0e4b56efe1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Processing event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.041 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.050 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.051 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847912.0503786, 4c61749b-b18d-4fbe-b99c-90e15ced9469 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.051 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.056 2 INFO nova.virt.libvirt.driver [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance spawned successfully.#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.057 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2283: 305 pgs: 305 active+clean; 355 MiB data, 959 MiB used, 59 GiB / 60 GiB avail; 538 KiB/s rd, 1.4 MiB/s wr, 67 op/s
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.559 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.565 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.570 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.571 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.572 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.572 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.573 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.573 2 DEBUG nova.virt.libvirt.driver [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002973883041935274 of space, bias 1.0, pg target 0.8921649125805822 quantized to 32 (current 32)
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:38:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:38:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298582817' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:38:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:38:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3298582817' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.764 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.765 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.765 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.919 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.969 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.969 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.970 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.970 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.970 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.972 2 INFO nova.compute.manager [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Terminating instance#033[00m
Oct  7 10:38:32 np0005473739 nova_compute[259550]: 2025-10-07 14:38:32.973 2 DEBUG nova.compute.manager [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:38:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.061 2 INFO nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Took 22.82 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.062 2 DEBUG nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.114 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:38:33 np0005473739 kernel: tap6e86ce79-9f (unregistering): left promiscuous mode
Oct  7 10:38:33 np0005473739 NetworkManager[44949]: <info>  [1759847913.1660] device (tap6e86ce79-9f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:33Z|01252|binding|INFO|Releasing lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 from this chassis (sb_readonly=0)
Oct  7 10:38:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:33Z|01253|binding|INFO|Setting lport 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 down in Southbound
Oct  7 10:38:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:33Z|01254|binding|INFO|Removing iface tap6e86ce79-9f ovn-installed in OVS
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.177 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:33 np0005473739 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000073.scope: Deactivated successfully.
Oct  7 10:38:33 np0005473739 systemd[1]: machine-qemu\x2d147\x2dinstance\x2d00000073.scope: Consumed 19.088s CPU time.
Oct  7 10:38:33 np0005473739 systemd-machined[214580]: Machine qemu-147-instance-00000073 terminated.
Oct  7 10:38:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:33.350 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:38:87 10.100.0.6'], port_security=['fa:16:3e:6d:38:87 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '4e64a021-390b-4a0c-bb4c-75a19f274777', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-28efb734-0152-4914-9f31-b818d894be70', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'c3ab1e78-0aa6-4d11-8104-a075342a0333', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=d3cdfe5f-0d4a-4d58-ba3f-aac15c533467, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:38:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:33.351 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 in datapath 28efb734-0152-4914-9f31-b818d894be70 unbound from our chassis#033[00m
Oct  7 10:38:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:33.353 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 28efb734-0152-4914-9f31-b818d894be70, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:38:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:33.354 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[40f86114-61ba-4678-a1de-d57a1a3bee1d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:33.354 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-28efb734-0152-4914-9f31-b818d894be70 namespace which is not needed anymore#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.399 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.405 2 INFO nova.virt.libvirt.driver [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Instance destroyed successfully.#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.405 2 DEBUG nova.objects.instance [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid 4e64a021-390b-4a0c-bb4c-75a19f274777 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:38:33 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [NOTICE]   (386209) : haproxy version is 2.8.14-c23fe91
Oct  7 10:38:33 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [NOTICE]   (386209) : path to executable is /usr/sbin/haproxy
Oct  7 10:38:33 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [WARNING]  (386209) : Exiting Master process...
Oct  7 10:38:33 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [WARNING]  (386209) : Exiting Master process...
Oct  7 10:38:33 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [ALERT]    (386209) : Current worker (386211) exited with code 143 (Terminated)
Oct  7 10:38:33 np0005473739 neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70[386205]: [WARNING]  (386209) : All workers exited. Exiting... (0)
Oct  7 10:38:33 np0005473739 systemd[1]: libpod-0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240.scope: Deactivated successfully.
Oct  7 10:38:33 np0005473739 podman[387868]: 2025-10-07 14:38:33.533734345 +0000 UTC m=+0.082837695 container died 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 10:38:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240-userdata-shm.mount: Deactivated successfully.
Oct  7 10:38:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2bdfd20c34e8e463ac3bd8f75949ae3c20d5cfcc6b96b2e99a9b1bbbdec631c1-merged.mount: Deactivated successfully.
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.699 2 INFO nova.compute.manager [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Took 24.48 seconds to build instance.#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.702 2 DEBUG nova.virt.libvirt.vif [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:18Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-1824501664',display_name='tempest-TestNetworkAdvancedServerOps-server-1824501664',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-1824501664',id=115,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBZnJzpjSv/TU/ynLhehpWtOh3Ok15/bj8hZqv14d9GetFxMUNAsBy1sPCK8k7EXv+srEQ5zJiIUEWZ8pm1FRF0+OuDCiKL8OPwrGm2N566RfHl82V8uvcba1igoHu/qSA==',key_name='tempest-TestNetworkAdvancedServerOps-112753023',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-zrj0040b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:38:05Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=4e64a021-390b-4a0c-bb4c-75a19f274777,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.702 2 DEBUG nova.network.os_vif_util [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.703 2 DEBUG nova.network.os_vif_util [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.704 2 DEBUG os_vif [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6e86ce79-9f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.713 2 INFO os_vif [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:38:87,bridge_name='br-int',has_traffic_filtering=True,id=6e86ce79-9f1b-4e53-8ae5-918e8402b8c6,network=Network(28efb734-0152-4914-9f31-b818d894be70),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6e86ce79-9f')#033[00m
Oct  7 10:38:33 np0005473739 podman[387868]: 2025-10-07 14:38:33.887196499 +0000 UTC m=+0.436299839 container cleanup 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:38:33 np0005473739 systemd[1]: libpod-conmon-0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240.scope: Deactivated successfully.
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.962 2 DEBUG oslo_concurrency.lockutils [None req-35eb79aa-197b-49b1-b0cd-4a234ae1a7eb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.864s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.966 2 DEBUG nova.compute.manager [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.967 2 DEBUG nova.compute.manager [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing instance network info cache due to event network-changed-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.967 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.967 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:38:33 np0005473739 nova_compute[259550]: 2025-10-07 14:38:33.968 2 DEBUG nova.network.neutron [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Refreshing network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:38:34 np0005473739 podman[387917]: 2025-10-07 14:38:34.281126455 +0000 UTC m=+0.359982020 container remove 0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:38:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.289 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f00cc3dd-e221-4e40-8980-7ca3d4272997]: (4, ('Tue Oct  7 02:38:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70 (0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240)\n0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240\nTue Oct  7 02:38:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-28efb734-0152-4914-9f31-b818d894be70 (0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240)\n0941fbaaf6eaa8b46472459c92a295c7ae55ab5e503e9a8d02c2b991e80f8240\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:34 np0005473739 nova_compute[259550]: 2025-10-07 14:38:34.291 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-changed-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.291 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c118bf27-fc96-4250-8130-c75138316fa6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.293 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28efb734-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:34 np0005473739 kernel: tap28efb734-00: left promiscuous mode
Oct  7 10:38:34 np0005473739 nova_compute[259550]: 2025-10-07 14:38:34.294 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Refreshing instance network info cache due to event network-changed-134818f1-9848-45e7-ac27-c5290f58f87f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:38:34 np0005473739 nova_compute[259550]: 2025-10-07 14:38:34.300 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:38:34 np0005473739 nova_compute[259550]: 2025-10-07 14:38:34.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:34 np0005473739 nova_compute[259550]: 2025-10-07 14:38:34.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.314 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e82d568f-88a3-4382-a5d3-d7898179c2e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.348 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f181f26-31c3-4b86-84d5-17daeee0c616]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.349 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a036f543-1a3a-4339-9952-fd3c9912c4a3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.369 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e17c6e04-83e0-4426-bf0e-0730ab920d68]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 846722, 'reachable_time': 37603, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 387931, 'error': None, 'target': 'ovnmeta-28efb734-0152-4914-9f31-b818d894be70', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:34 np0005473739 systemd[1]: run-netns-ovnmeta\x2d28efb734\x2d0152\x2d4914\x2d9f31\x2db818d894be70.mount: Deactivated successfully.
Oct  7 10:38:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.373 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-28efb734-0152-4914-9f31-b818d894be70 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:38:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:34.374 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[06c56437-afde-4141-8c43-e6d18ca086f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2284: 305 pgs: 305 active+clean; 374 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 126 op/s
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.733 2 DEBUG nova.network.neutron [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updating instance_info_cache with network_info: [{"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.833 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.833 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Instance network_info: |[{"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.834 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.834 2 DEBUG nova.network.neutron [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Refreshing network info cache for port 134818f1-9848-45e7-ac27-c5290f58f87f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.837 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Start _get_guest_xml network_info=[{"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.840 2 WARNING nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.845 2 DEBUG nova.virt.libvirt.host [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.846 2 DEBUG nova.virt.libvirt.host [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.850 2 DEBUG nova.virt.libvirt.host [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.850 2 DEBUG nova.virt.libvirt.host [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.851 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.851 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.852 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.852 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.853 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.853 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.854 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.854 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.854 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.854 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.855 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.855 2 DEBUG nova.virt.hardware [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:38:35 np0005473739 nova_compute[259550]: 2025-10-07 14:38:35.858 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.060 2 DEBUG nova.network.neutron [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updated VIF entry in instance network info cache for port 6e86ce79-9f1b-4e53-8ae5-918e8402b8c6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.064 2 DEBUG nova.network.neutron [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [{"id": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "address": "fa:16:3e:6d:38:87", "network": {"id": "28efb734-0152-4914-9f31-b818d894be70", "bridge": "br-int", "label": "tempest-network-smoke--1032426150", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6e86ce79-9f", "ovs_interfaceid": "6e86ce79-9f1b-4e53-8ae5-918e8402b8c6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.196 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4e64a021-390b-4a0c-bb4c-75a19f274777" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.197 2 DEBUG nova.compute.manager [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.197 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.198 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.198 2 DEBUG oslo_concurrency.lockutils [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.199 2 DEBUG nova.compute.manager [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] No waiting events found dispatching network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.199 2 WARNING nova.compute.manager [req-8258c082-4601-4ee3-9ffd-70abf724dda8 req-48890f21-2502-491a-a5e7-8dc6d59a4be3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received unexpected event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe for instance with vm_state active and task_state None.#033[00m
Oct  7 10:38:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:38:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2203997159' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.313 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.337 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.341 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2285: 305 pgs: 305 active+clean; 345 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 121 op/s
Oct  7 10:38:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:38:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2764461806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.816 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.819 2 DEBUG nova.virt.libvirt.vif [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-533021491',display_name='tempest-TestNetworkBasicOps-server-533021491',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-533021491',id=119,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFiLtVtON/CvAsHcf1ZXHYh3rc03vthPyQgHr1JmpZrgfX90g8u0AoXxhQvx7zNCW+RZBDYY9qzlgStzxh1Af0F+ohdhBY+PBrQ7zmCH8Cgi0yMJpOyuiddZB1OxGzpRsA==',key_name='tempest-TestNetworkBasicOps-1904256776',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-vph0u7ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:26Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=716d82da-745f-43ca-a7fa-38f02d3e5dc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.819 2 DEBUG nova.network.os_vif_util [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.820 2 DEBUG nova.network.os_vif_util [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.822 2 DEBUG nova.objects.instance [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 716d82da-745f-43ca-a7fa-38f02d3e5dc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.843 2 DEBUG nova.compute.manager [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.844 2 DEBUG oslo_concurrency.lockutils [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.844 2 DEBUG oslo_concurrency.lockutils [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.845 2 DEBUG oslo_concurrency.lockutils [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.846 2 DEBUG nova.compute.manager [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.846 2 WARNING nova.compute.manager [req-257844b9-e11c-4781-bcda-558a7f24fb3d req-52a789b8-de04-4d95-9e37-d2566b065dd0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received unexpected event network-vif-plugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.891 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  <uuid>716d82da-745f-43ca-a7fa-38f02d3e5dc3</uuid>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  <name>instance-00000077</name>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-533021491</nova:name>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:38:35</nova:creationTime>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <nova:port uuid="134818f1-9848-45e7-ac27-c5290f58f87f">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <entry name="serial">716d82da-745f-43ca-a7fa-38f02d3e5dc3</entry>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <entry name="uuid">716d82da-745f-43ca-a7fa-38f02d3e5dc3</entry>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:f6:cb:9f"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <target dev="tap134818f1-98"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/console.log" append="off"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:38:36 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:38:36 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:38:36 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:38:36 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.897 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Preparing to wait for external event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.897 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.897 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.898 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.899 2 DEBUG nova.virt.libvirt.vif [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-533021491',display_name='tempest-TestNetworkBasicOps-server-533021491',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-533021491',id=119,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFiLtVtON/CvAsHcf1ZXHYh3rc03vthPyQgHr1JmpZrgfX90g8u0AoXxhQvx7zNCW+RZBDYY9qzlgStzxh1Af0F+ohdhBY+PBrQ7zmCH8Cgi0yMJpOyuiddZB1OxGzpRsA==',key_name='tempest-TestNetworkBasicOps-1904256776',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-vph0u7ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:38:26Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=716d82da-745f-43ca-a7fa-38f02d3e5dc3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.899 2 DEBUG nova.network.os_vif_util [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.900 2 DEBUG nova.network.os_vif_util [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.900 2 DEBUG os_vif [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.902 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.904 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.907 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap134818f1-98, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.908 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap134818f1-98, col_values=(('external_ids', {'iface-id': '134818f1-9848-45e7-ac27-c5290f58f87f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:cb:9f', 'vm-uuid': '716d82da-745f-43ca-a7fa-38f02d3e5dc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:36 np0005473739 NetworkManager[44949]: <info>  [1759847916.9107] manager: (tap134818f1-98): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/511)
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:36 np0005473739 nova_compute[259550]: 2025-10-07 14:38:36.916 2 INFO os_vif [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98')#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.040 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.041 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.041 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:f6:cb:9f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.042 2 INFO nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Using config drive#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.062 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.141 2 INFO nova.virt.libvirt.driver [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Deleting instance files /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777_del#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.142 2 INFO nova.virt.libvirt.driver [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Deletion of /var/lib/nova/instances/4e64a021-390b-4a0c-bb4c-75a19f274777_del complete#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.447 2 INFO nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Creating config drive at /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.452 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzz7sht2v execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.489 2 DEBUG nova.network.neutron [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updated VIF entry in instance network info cache for port 134818f1-9848-45e7-ac27-c5290f58f87f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.490 2 DEBUG nova.network.neutron [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updating instance_info_cache with network_info: [{"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.497 2 INFO nova.compute.manager [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Took 4.52 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.498 2 DEBUG oslo.service.loopingcall [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.498 2 DEBUG nova.compute.manager [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.499 2 DEBUG nova.network.neutron [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.591 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.592 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.592 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.592 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.593 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.593 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] No waiting events found dispatching network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.593 2 WARNING nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received unexpected event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.594 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.594 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.594 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.595 2 DEBUG oslo_concurrency.lockutils [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.595 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] No waiting events found dispatching network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.595 2 DEBUG nova.compute.manager [req-526d1284-69de-40a2-9270-333f1b88eecd req-741d14f3-1d2d-4a07-bb6c-4e5b51715ab1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-unplugged-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.596 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpzz7sht2v" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.620 2 DEBUG nova.storage.rbd_utils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.625 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:37.691 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=38, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=37) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:38:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:37.692 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:38:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:37.692 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '38'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:37 np0005473739 nova_compute[259550]: 2025-10-07 14:38:37.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2286: 305 pgs: 305 active+clean; 345 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 107 op/s
Oct  7 10:38:38 np0005473739 nova_compute[259550]: 2025-10-07 14:38:38.389 2 DEBUG oslo_concurrency.processutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config 716d82da-745f-43ca-a7fa-38f02d3e5dc3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.765s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:38 np0005473739 nova_compute[259550]: 2025-10-07 14:38:38.391 2 INFO nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Deleting local config drive /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3/disk.config because it was imported into RBD.#033[00m
Oct  7 10:38:38 np0005473739 nova_compute[259550]: 2025-10-07 14:38:38.418 2 DEBUG nova.network.neutron [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:38 np0005473739 nova_compute[259550]: 2025-10-07 14:38:38.443 2 INFO nova.compute.manager [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Took 0.94 seconds to deallocate network for instance.#033[00m
Oct  7 10:38:38 np0005473739 kernel: tap134818f1-98: entered promiscuous mode
Oct  7 10:38:38 np0005473739 NetworkManager[44949]: <info>  [1759847918.4513] manager: (tap134818f1-98): new Tun device (/org/freedesktop/NetworkManager/Devices/512)
Oct  7 10:38:38 np0005473739 nova_compute[259550]: 2025-10-07 14:38:38.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:38Z|01255|binding|INFO|Claiming lport 134818f1-9848-45e7-ac27-c5290f58f87f for this chassis.
Oct  7 10:38:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:38Z|01256|binding|INFO|134818f1-9848-45e7-ac27-c5290f58f87f: Claiming fa:16:3e:f6:cb:9f 10.100.0.11
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.467 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:cb:9f 10.100.0.11'], port_security=['fa:16:3e:f6:cb:9f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '716d82da-745f-43ca-a7fa-38f02d3e5dc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd82525cf-6a8e-4dd1-ac1e-a6b578fadd19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be472fff-888c-4934-b0b7-07dc4319f2eb, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=134818f1-9848-45e7-ac27-c5290f58f87f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.469 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 134818f1-9848-45e7-ac27-c5290f58f87f in datapath 4bd15a72-ce65-4737-b705-4b2b86d3a32a bound to our chassis#033[00m
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.470 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bd15a72-ce65-4737-b705-4b2b86d3a32a#033[00m
Oct  7 10:38:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:38Z|01257|binding|INFO|Setting lport 134818f1-9848-45e7-ac27-c5290f58f87f ovn-installed in OVS
Oct  7 10:38:38 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:38Z|01258|binding|INFO|Setting lport 134818f1-9848-45e7-ac27-c5290f58f87f up in Southbound
Oct  7 10:38:38 np0005473739 nova_compute[259550]: 2025-10-07 14:38:38.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:38 np0005473739 systemd-machined[214580]: New machine qemu-149-instance-00000077.
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.489 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b87a0721-f8ab-4640-b33b-d1184bb321b3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:38 np0005473739 nova_compute[259550]: 2025-10-07 14:38:38.502 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:38 np0005473739 nova_compute[259550]: 2025-10-07 14:38:38.503 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:38 np0005473739 systemd[1]: Started Virtual Machine qemu-149-instance-00000077.
Oct  7 10:38:38 np0005473739 systemd-udevd[388072]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.519 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6ad208eb-48f0-450b-adb2-ca2b1035de42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.526 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f1f7d97c-fc01-40be-b366-6cc0e408368b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:38 np0005473739 NetworkManager[44949]: <info>  [1759847918.5311] device (tap134818f1-98): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:38:38 np0005473739 NetworkManager[44949]: <info>  [1759847918.5317] device (tap134818f1-98): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.561 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6fd0d6ba-c87a-4d50-8c2e-4965e111b5dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.580 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[801722f5-757c-47a5-be24-9f96d7787f76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bd15a72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:db:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845459, 'reachable_time': 31245, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388079, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.606 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b69f9e1-9b6b-47e2-94a1-64369220fdc0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bd15a72-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845473, 'tstamp': 845473}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388083, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bd15a72-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845476, 'tstamp': 845476}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388083, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.608 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bd15a72-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.611 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bd15a72-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.612 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:38 np0005473739 nova_compute[259550]: 2025-10-07 14:38:38.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.612 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bd15a72-c0, col_values=(('external_ids', {'iface-id': '818ca059-c8de-4f85-a524-8f47c8fbf780'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:38.613 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:38 np0005473739 nova_compute[259550]: 2025-10-07 14:38:38.625 2 DEBUG oslo_concurrency.processutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:38:38 np0005473739 nova_compute[259550]: 2025-10-07 14:38:38.942 2 DEBUG nova.compute.manager [req-343dea61-9918-475e-b8bd-40923cd72d77 req-6bd5f156-d2a4-42cb-92f5-9f7366047037 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Received event network-vif-deleted-6e86ce79-9f1b-4e53-8ae5-918e8402b8c6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.025 2 DEBUG nova.compute.manager [req-6dd79aa0-0520-4db1-893e-47ac8cca21b0 req-1ef2648d-f14a-48a1-8086-42d562c37a40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.026 2 DEBUG oslo_concurrency.lockutils [req-6dd79aa0-0520-4db1-893e-47ac8cca21b0 req-1ef2648d-f14a-48a1-8086-42d562c37a40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.026 2 DEBUG oslo_concurrency.lockutils [req-6dd79aa0-0520-4db1-893e-47ac8cca21b0 req-1ef2648d-f14a-48a1-8086-42d562c37a40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.026 2 DEBUG oslo_concurrency.lockutils [req-6dd79aa0-0520-4db1-893e-47ac8cca21b0 req-1ef2648d-f14a-48a1-8086-42d562c37a40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.026 2 DEBUG nova.compute.manager [req-6dd79aa0-0520-4db1-893e-47ac8cca21b0 req-1ef2648d-f14a-48a1-8086-42d562c37a40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Processing event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:38:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:38:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2840752674' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.120 2 DEBUG oslo_concurrency.processutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.495s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.126 2 DEBUG nova.compute.provider_tree [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.146 2 DEBUG nova.scheduler.client.report [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.167 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.190 2 INFO nova.scheduler.client.report [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Deleted allocations for instance 4e64a021-390b-4a0c-bb4c-75a19f274777#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.255 2 DEBUG oslo_concurrency.lockutils [None req-43625024-d563-49d9-942b-96fc888c82d7 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "4e64a021-390b-4a0c-bb4c-75a19f274777" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.286s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.436 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847919.4362261, 716d82da-745f-43ca-a7fa-38f02d3e5dc3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.437 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] VM Started (Lifecycle Event)#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.443 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.446 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.449 2 INFO nova.virt.libvirt.driver [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Instance spawned successfully.#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.452 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.456 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.459 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.480 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.480 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847919.4374018, 716d82da-745f-43ca-a7fa-38f02d3e5dc3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.480 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.485 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.486 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.486 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.487 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.487 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.488 2 DEBUG nova.virt.libvirt.driver [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.498 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.502 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847919.4459107, 716d82da-745f-43ca-a7fa-38f02d3e5dc3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.503 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.524 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.529 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.555 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.574 2 INFO nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Took 12.99 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.574 2 DEBUG nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.640 2 INFO nova.compute.manager [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Took 16.27 seconds to build instance.#033[00m
Oct  7 10:38:39 np0005473739 nova_compute[259550]: 2025-10-07 14:38:39.681 2 DEBUG oslo_concurrency.lockutils [None req-61bd21ad-f176-4303-86fe-21031f34d95f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2287: 305 pgs: 305 active+clean; 293 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.033 2 DEBUG nova.compute.manager [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.033 2 DEBUG nova.compute.manager [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing instance network info cache due to event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.034 2 DEBUG oslo_concurrency.lockutils [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.034 2 DEBUG oslo_concurrency.lockutils [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.035 2 DEBUG nova.network.neutron [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing network info cache for port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.116 2 DEBUG nova.compute.manager [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.117 2 DEBUG oslo_concurrency.lockutils [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.117 2 DEBUG oslo_concurrency.lockutils [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.118 2 DEBUG oslo_concurrency.lockutils [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.119 2 DEBUG nova.compute.manager [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] No waiting events found dispatching network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.121 2 WARNING nova.compute.manager [req-0b92e191-ff58-4c40-9d1b-8621e9a46517 req-1b659032-24c5-418a-8071-f0f75655e033 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received unexpected event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f for instance with vm_state active and task_state None.#033[00m
Oct  7 10:38:41 np0005473739 nova_compute[259550]: 2025-10-07 14:38:41.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2288: 305 pgs: 305 active+clean; 293 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 1.2 MiB/s wr, 138 op/s
Oct  7 10:38:42 np0005473739 nova_compute[259550]: 2025-10-07 14:38:42.404 2 DEBUG nova.network.neutron [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updated VIF entry in instance network info cache for port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:38:42 np0005473739 nova_compute[259550]: 2025-10-07 14:38:42.404 2 DEBUG nova.network.neutron [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:42 np0005473739 nova_compute[259550]: 2025-10-07 14:38:42.566 2 DEBUG oslo_concurrency.lockutils [req-54752178-f9eb-4b1d-87ec-68199fc409a2 req-fde99da6-10cf-414f-960a-489bee80f0fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:42 np0005473739 nova_compute[259550]: 2025-10-07 14:38:42.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:43Z|01259|binding|INFO|Releasing lport 30dd4552-fdd6-4d17-af87-77adcec53278 from this chassis (sb_readonly=0)
Oct  7 10:38:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:43Z|01260|binding|INFO|Releasing lport 818ca059-c8de-4f85-a524-8f47c8fbf780 from this chassis (sb_readonly=0)
Oct  7 10:38:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:43Z|01261|binding|INFO|Releasing lport ec328f15-1843-4594-8d39-b0d2d9796360 from this chassis (sb_readonly=0)
Oct  7 10:38:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:43 np0005473739 nova_compute[259550]: 2025-10-07 14:38:43.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:43 np0005473739 nova_compute[259550]: 2025-10-07 14:38:43.352 2 DEBUG nova.compute.manager [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-changed-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:43 np0005473739 nova_compute[259550]: 2025-10-07 14:38:43.353 2 DEBUG nova.compute.manager [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Refreshing instance network info cache due to event network-changed-134818f1-9848-45e7-ac27-c5290f58f87f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:38:43 np0005473739 nova_compute[259550]: 2025-10-07 14:38:43.353 2 DEBUG oslo_concurrency.lockutils [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:38:43 np0005473739 nova_compute[259550]: 2025-10-07 14:38:43.353 2 DEBUG oslo_concurrency.lockutils [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:38:43 np0005473739 nova_compute[259550]: 2025-10-07 14:38:43.354 2 DEBUG nova.network.neutron [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Refreshing network info cache for port 134818f1-9848-45e7-ac27-c5290f58f87f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:38:44 np0005473739 podman[388149]: 2025-10-07 14:38:44.073447549 +0000 UTC m=+0.057027035 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Oct  7 10:38:44 np0005473739 podman[388150]: 2025-10-07 14:38:44.106970805 +0000 UTC m=+0.090561721 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.build-date=20251001, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:38:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2289: 305 pgs: 305 active+clean; 293 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 3.5 MiB/s rd, 394 KiB/s wr, 167 op/s
Oct  7 10:38:44 np0005473739 nova_compute[259550]: 2025-10-07 14:38:44.865 2 DEBUG nova.network.neutron [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updated VIF entry in instance network info cache for port 134818f1-9848-45e7-ac27-c5290f58f87f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:38:44 np0005473739 nova_compute[259550]: 2025-10-07 14:38:44.866 2 DEBUG nova.network.neutron [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updating instance_info_cache with network_info: [{"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:38:44 np0005473739 nova_compute[259550]: 2025-10-07 14:38:44.886 2 DEBUG oslo_concurrency.lockutils [req-73a6bd6a-6a6b-419c-8c40-eb8770a2bfdd req-2f08f53d-5cf8-4284-8773-8f6fd73910e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-716d82da-745f-43ca-a7fa-38f02d3e5dc3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:38:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2290: 305 pgs: 305 active+clean; 301 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 608 KiB/s wr, 127 op/s
Oct  7 10:38:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:46Z|00142|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f4:e8:e8 10.100.0.14
Oct  7 10:38:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:46Z|00143|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f4:e8:e8 10.100.0.14
Oct  7 10:38:46 np0005473739 nova_compute[259550]: 2025-10-07 14:38:46.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:47 np0005473739 nova_compute[259550]: 2025-10-07 14:38:47.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:47 np0005473739 nova_compute[259550]: 2025-10-07 14:38:47.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #111. Immutable memtables: 0.
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.052977) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 65] Flushing memtable with next log file: 111
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928053042, "job": 65, "event": "flush_started", "num_memtables": 1, "num_entries": 712, "num_deletes": 250, "total_data_size": 838832, "memory_usage": 851520, "flush_reason": "Manual Compaction"}
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 65] Level-0 flush table #112: started
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928057165, "cf_name": "default", "job": 65, "event": "table_file_creation", "file_number": 112, "file_size": 554084, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 47744, "largest_seqno": 48455, "table_properties": {"data_size": 550893, "index_size": 1035, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8623, "raw_average_key_size": 20, "raw_value_size": 544206, "raw_average_value_size": 1308, "num_data_blocks": 46, "num_entries": 416, "num_filter_entries": 416, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847874, "oldest_key_time": 1759847874, "file_creation_time": 1759847928, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 112, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 65] Flush lasted 4245 microseconds, and 2423 cpu microseconds.
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.057230) [db/flush_job.cc:967] [default] [JOB 65] Level-0 flush table #112: 554084 bytes OK
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.057247) [db/memtable_list.cc:519] [default] Level-0 commit table #112 started
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.059050) [db/memtable_list.cc:722] [default] Level-0 commit table #112: memtable #1 done
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.059065) EVENT_LOG_v1 {"time_micros": 1759847928059061, "job": 65, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.059081) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 65] Try to delete WAL files size 835134, prev total WAL file size 835134, number of live WAL files 2.
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000108.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.059830) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740031373533' seq:72057594037927935, type:22 .. '6D6772737461740032303034' seq:0, type:0; will stop at (end)
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 66] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 65 Base level 0, inputs: [112(541KB)], [110(10MB)]
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928059961, "job": 66, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [112], "files_L6": [110], "score": -1, "input_data_size": 11319549, "oldest_snapshot_seqno": -1}
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 66] Generated table #113: 6786 keys, 8288376 bytes, temperature: kUnknown
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928106430, "cf_name": "default", "job": 66, "event": "table_file_creation", "file_number": 113, "file_size": 8288376, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8244933, "index_size": 25347, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17029, "raw_key_size": 177172, "raw_average_key_size": 26, "raw_value_size": 8125236, "raw_average_value_size": 1197, "num_data_blocks": 987, "num_entries": 6786, "num_filter_entries": 6786, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759847928, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 113, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.106660) [db/compaction/compaction_job.cc:1663] [default] [JOB 66] Compacted 1@0 + 1@6 files to L6 => 8288376 bytes
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.108831) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 243.3 rd, 178.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(35.4) write-amplify(15.0) OK, records in: 7276, records dropped: 490 output_compression: NoCompression
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.108849) EVENT_LOG_v1 {"time_micros": 1759847928108841, "job": 66, "event": "compaction_finished", "compaction_time_micros": 46528, "compaction_time_cpu_micros": 20787, "output_level": 6, "num_output_files": 1, "total_output_size": 8288376, "num_input_records": 7276, "num_output_records": 6786, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000112.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928109051, "job": 66, "event": "table_file_deletion", "file_number": 112}
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000110.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759847928110755, "job": 66, "event": "table_file_deletion", "file_number": 110}
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.059682) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.110846) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.110851) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.110853) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.110854) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:38:48 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:38:48.110855) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:38:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2291: 305 pgs: 305 active+clean; 301 MiB data, 917 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 608 KiB/s wr, 103 op/s
Oct  7 10:38:48 np0005473739 nova_compute[259550]: 2025-10-07 14:38:48.402 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847913.4021986, 4e64a021-390b-4a0c-bb4c-75a19f274777 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:38:48 np0005473739 nova_compute[259550]: 2025-10-07 14:38:48.403 2 INFO nova.compute.manager [-] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:38:48 np0005473739 nova_compute[259550]: 2025-10-07 14:38:48.425 2 DEBUG nova.compute.manager [None req-3a2b95e7-11a4-4f43-be0e-e1a893d322d6 - - - - - -] [instance: 4e64a021-390b-4a0c-bb4c-75a19f274777] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:38:49 np0005473739 nova_compute[259550]: 2025-10-07 14:38:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:38:49 np0005473739 nova_compute[259550]: 2025-10-07 14:38:49.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:38:50 np0005473739 nova_compute[259550]: 2025-10-07 14:38:50.113 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2292: 305 pgs: 305 active+clean; 324 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 155 op/s
Oct  7 10:38:51 np0005473739 nova_compute[259550]: 2025-10-07 14:38:51.916 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:51 np0005473739 nova_compute[259550]: 2025-10-07 14:38:51.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:38:51 np0005473739 nova_compute[259550]: 2025-10-07 14:38:51.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:38:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2293: 305 pgs: 305 active+clean; 326 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.2 MiB/s wr, 135 op/s
Oct  7 10:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:38:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:38:52 np0005473739 nova_compute[259550]: 2025-10-07 14:38:52.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:53 np0005473739 nova_compute[259550]: 2025-10-07 14:38:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:38:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2294: 305 pgs: 305 active+clean; 328 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.5 MiB/s wr, 117 op/s
Oct  7 10:38:54 np0005473739 nova_compute[259550]: 2025-10-07 14:38:54.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:38:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:55Z|00144|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:cb:9f 10.100.0.11
Oct  7 10:38:55 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:55Z|00145|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:cb:9f 10.100.0.11
Oct  7 10:38:56 np0005473739 nova_compute[259550]: 2025-10-07 14:38:56.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2295: 305 pgs: 305 active+clean; 346 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 789 KiB/s rd, 3.4 MiB/s wr, 106 op/s
Oct  7 10:38:56 np0005473739 nova_compute[259550]: 2025-10-07 14:38:56.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:57 np0005473739 nova_compute[259550]: 2025-10-07 14:38:57.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:38:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2296: 305 pgs: 305 active+clean; 346 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 372 KiB/s rd, 2.8 MiB/s wr, 87 op/s
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.479 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.479 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.480 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.480 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.480 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.482 2 INFO nova.compute.manager [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Terminating instance#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.483 2 DEBUG nova.compute.manager [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:38:59 np0005473739 kernel: tapad5643dc-1b (unregistering): left promiscuous mode
Oct  7 10:38:59 np0005473739 NetworkManager[44949]: <info>  [1759847939.5489] device (tapad5643dc-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.551 2 DEBUG nova.compute.manager [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.552 2 DEBUG nova.compute.manager [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing instance network info cache due to event network-changed-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.552 2 DEBUG oslo_concurrency.lockutils [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.552 2 DEBUG oslo_concurrency.lockutils [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.553 2 DEBUG nova.network.neutron [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Refreshing network info cache for port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:38:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:59Z|01262|binding|INFO|Releasing lport ad5643dc-1b43-4ffa-b380-8ee6fbac98fe from this chassis (sb_readonly=0)
Oct  7 10:38:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:59Z|01263|binding|INFO|Setting lport ad5643dc-1b43-4ffa-b380-8ee6fbac98fe down in Southbound
Oct  7 10:38:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:59Z|01264|binding|INFO|Removing iface tapad5643dc-1b ovn-installed in OVS
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.563 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f4:e8:e8 10.100.0.14'], port_security=['fa:16:3e:f4:e8:e8 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '4c61749b-b18d-4fbe-b99c-90e15ced9469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dfb73c9-a89b-4659-8761-7d887493b39b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=169b3722-7b9a-4733-8efb-f5bd5c71aacf, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.564 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe in datapath 5dfb73c9-a89b-4659-8761-7d887493b39b unbound from our chassis#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.566 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5dfb73c9-a89b-4659-8761-7d887493b39b#033[00m
Oct  7 10:38:59 np0005473739 kernel: tap1ee1b68d-60 (unregistering): left promiscuous mode
Oct  7 10:38:59 np0005473739 NetworkManager[44949]: <info>  [1759847939.5794] device (tap1ee1b68d-60): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.586 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aea3755c-ca36-449f-8b35-31c39f125872]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:59Z|01265|binding|INFO|Releasing lport 1ee1b68d-6081-4e61-a797-c2e41ac53a29 from this chassis (sb_readonly=0)
Oct  7 10:38:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:59Z|01266|binding|INFO|Setting lport 1ee1b68d-6081-4e61-a797-c2e41ac53a29 down in Southbound
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:38:59Z|01267|binding|INFO|Removing iface tap1ee1b68d-60 ovn-installed in OVS
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.596 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:02:3b 2001:db8:0:1:f816:3eff:fe32:23b 2001:db8::f816:3eff:fe32:23b'], port_security=['fa:16:3e:32:02:3b 2001:db8:0:1:f816:3eff:fe32:23b 2001:db8::f816:3eff:fe32:23b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe32:23b/64 2001:db8::f816:3eff:fe32:23b/64', 'neutron:device_id': '4c61749b-b18d-4fbe-b99c-90e15ced9469', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fceb8d2-9d2a-45b9-beb8-73d518298477, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1ee1b68d-6081-4e61-a797-c2e41ac53a29) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.623 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9f710601-8b8e-4402-8df7-b6b3da296755]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.626 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c88f373f-7005-4629-93aa-8870ede9da47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Deactivated successfully.
Oct  7 10:38:59 np0005473739 systemd[1]: machine-qemu\x2d148\x2dinstance\x2d00000076.scope: Consumed 16.182s CPU time.
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.656 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f2e8047b-82fd-4d4a-87ea-f5d2e480adf8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 systemd-machined[214580]: Machine qemu-148-instance-00000076 terminated.
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.676 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7b62aaaf-ed90-4b60-a2b4-b17743d2258e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5dfb73c9-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e7:44:11'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 1070, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 13, 'tx_packets': 7, 'rx_bytes': 1070, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 351], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844675, 'reachable_time': 28533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 9, 'inoctets': 776, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 9, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 776, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 9, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388206, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.697 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[192ce812-2c15-4323-8e5c-f19d6419ac71]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5dfb73c9-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844686, 'tstamp': 844686}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388207, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5dfb73c9-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844689, 'tstamp': 844689}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388207, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.699 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dfb73c9-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 systemd-udevd[388194]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:38:59 np0005473739 NetworkManager[44949]: <info>  [1759847939.7048] manager: (tapad5643dc-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/513)
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5dfb73c9-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5dfb73c9-a0, col_values=(('external_ids', {'iface-id': '30dd4552-fdd6-4d17-af87-77adcec53278'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.711 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1ee1b68d-6081-4e61-a797-c2e41ac53a29 in datapath 4c956141-6a21-499d-99b1-885d1a2972f7 unbound from our chassis#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.713 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4c956141-6a21-499d-99b1-885d1a2972f7#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.728 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[403c237e-295d-47d8-9bd6-6c90c146633d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.740 2 INFO nova.virt.libvirt.driver [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Instance destroyed successfully.#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.741 2 DEBUG nova.objects.instance [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 4c61749b-b18d-4fbe-b99c-90e15ced9469 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.761 2 DEBUG nova.virt.libvirt.vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:38:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:38:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.761 2 DEBUG nova.network.os_vif_util [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.762 2 DEBUG nova.network.os_vif_util [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.762 2 DEBUG os_vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.761 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d85100a1-286f-47d7-b01b-2da0d139254c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.764 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapad5643dc-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.764 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b9062e0a-d319-42c9-9b81-766746073aa2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.779 2 INFO os_vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f4:e8:e8,bridge_name='br-int',has_traffic_filtering=True,id=ad5643dc-1b43-4ffa-b380-8ee6fbac98fe,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapad5643dc-1b')#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.780 2 DEBUG nova.virt.libvirt.vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:38:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1890244122',display_name='tempest-TestGettingAddress-server-1890244122',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1890244122',id=118,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:38:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-c20dq06d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:38:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=4c61749b-b18d-4fbe-b99c-90e15ced9469,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.780 2 DEBUG nova.network.os_vif_util [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.781 2 DEBUG nova.network.os_vif_util [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.781 2 DEBUG os_vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.782 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.782 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1ee1b68d-60, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.787 2 INFO os_vif [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:02:3b,bridge_name='br-int',has_traffic_filtering=True,id=1ee1b68d-6081-4e61-a797-c2e41ac53a29,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1ee1b68d-60')#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.796 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[94ddd5dd-d2b5-4dcf-83c6-fba1863c0edd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.817 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c37fa6c-5f81-4674-8448-7577373a6ce4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4c956141-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:7c:a2:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 352], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844816, 'reachable_time': 18374, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388242, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.839 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[26a7a116-475c-48e4-bc04-ebfcbe7ead03]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4c956141-61'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 844830, 'tstamp': 844830}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388251, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.841 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c956141-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.844 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.845 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4c956141-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.845 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.845 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4c956141-60, col_values=(('external_ids', {'iface-id': 'ec328f15-1843-4594-8d39-b0d2d9796360'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:38:59 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:38:59.846 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:38:59 np0005473739 nova_compute[259550]: 2025-10-07 14:38:59.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:39:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:00.071 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:00.072 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:00.073 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.229 2 INFO nova.virt.libvirt.driver [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Deleting instance files /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469_del#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.231 2 INFO nova.virt.libvirt.driver [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Deletion of /var/lib/nova/instances/4c61749b-b18d-4fbe-b99c-90e15ced9469_del complete#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.361 2 DEBUG nova.compute.manager [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-unplugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.362 2 DEBUG oslo_concurrency.lockutils [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.362 2 DEBUG oslo_concurrency.lockutils [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.362 2 DEBUG oslo_concurrency.lockutils [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.363 2 DEBUG nova.compute.manager [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] No waiting events found dispatching network-vif-unplugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.363 2 DEBUG nova.compute.manager [req-e1fc23a5-e688-47cd-9570-15de44cf6337 req-fff567e4-b075-4a75-bff7-06799a3a0eb4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-unplugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:39:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2297: 305 pgs: 305 active+clean; 359 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 511 KiB/s rd, 3.7 MiB/s wr, 117 op/s
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.418 2 INFO nova.compute.manager [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Took 0.94 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.419 2 DEBUG oslo.service.loopingcall [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.419 2 DEBUG nova.compute.manager [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.420 2 DEBUG nova.network.neutron [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.531 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.532 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.559 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.572 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.573 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.573 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.640 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.641 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.659 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.660 2 INFO nova.compute.claims [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.848 2 DEBUG nova.scheduler.client.report [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.866 2 DEBUG nova.scheduler.client.report [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.867 2 DEBUG nova.compute.provider_tree [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.879 2 DEBUG nova.scheduler.client.report [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 10:39:00 np0005473739 nova_compute[259550]: 2025-10-07 14:39:00.899 2 DEBUG nova.scheduler.client.report [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.031 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.375 2 INFO nova.compute.manager [None req-5d792702-e5e6-4f04-8201-e88990916fbb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Get console output#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.389 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:39:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:39:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3067318881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.504 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.510 2 DEBUG nova.compute.provider_tree [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.524 2 DEBUG nova.scheduler.client.report [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.549 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.908s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.550 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.605 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.605 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.618 2 DEBUG nova.compute.manager [req-8944b6a3-06bc-48b3-97ae-be52d9d39394 req-2ff25bca-d09e-4fbe-881b-8fde83fbd6db 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-deleted-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.619 2 INFO nova.compute.manager [req-8944b6a3-06bc-48b3-97ae-be52d9d39394 req-2ff25bca-d09e-4fbe-881b-8fde83fbd6db 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Neutron deleted interface 1ee1b68d-6081-4e61-a797-c2e41ac53a29; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.619 2 DEBUG nova.network.neutron [req-8944b6a3-06bc-48b3-97ae-be52d9d39394 req-2ff25bca-d09e-4fbe-881b-8fde83fbd6db 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.248", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.631 2 INFO nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.652 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.655 2 DEBUG nova.compute.manager [req-8944b6a3-06bc-48b3-97ae-be52d9d39394 req-2ff25bca-d09e-4fbe-881b-8fde83fbd6db 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Detach interface failed, port_id=1ee1b68d-6081-4e61-a797-c2e41ac53a29, reason: Instance 4c61749b-b18d-4fbe-b99c-90e15ced9469 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.716 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.716 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.717 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.717 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.717 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.718 2 INFO nova.compute.manager [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Terminating instance#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.719 2 DEBUG nova.compute.manager [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.722 2 DEBUG nova.network.neutron [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updated VIF entry in instance network info cache for port ad5643dc-1b43-4ffa-b380-8ee6fbac98fe. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.722 2 DEBUG nova.network.neutron [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [{"id": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "address": "fa:16:3e:f4:e8:e8", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapad5643dc-1b", "ovs_interfaceid": "ad5643dc-1b43-4ffa-b380-8ee6fbac98fe", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "address": "fa:16:3e:32:02:3b", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": 
{"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe32:23b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1ee1b68d-60", "ovs_interfaceid": "1ee1b68d-6081-4e61-a797-c2e41ac53a29", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:01 np0005473739 kernel: tap134818f1-98 (unregistering): left promiscuous mode
Oct  7 10:39:01 np0005473739 NetworkManager[44949]: <info>  [1759847941.7813] device (tap134818f1-98): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:39:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:01Z|01268|binding|INFO|Releasing lport 134818f1-9848-45e7-ac27-c5290f58f87f from this chassis (sb_readonly=0)
Oct  7 10:39:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:01Z|01269|binding|INFO|Setting lport 134818f1-9848-45e7-ac27-c5290f58f87f down in Southbound
Oct  7 10:39:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:01Z|01270|binding|INFO|Removing iface tap134818f1-98 ovn-installed in OVS
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.787 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.791 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.793 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.793 2 INFO nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Creating image(s)#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.815 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:cb:9f 10.100.0.11'], port_security=['fa:16:3e:f6:cb:9f 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '716d82da-745f-43ca-a7fa-38f02d3e5dc3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd82525cf-6a8e-4dd1-ac1e-a6b578fadd19', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.174'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be472fff-888c-4934-b0b7-07dc4319f2eb, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=134818f1-9848-45e7-ac27-c5290f58f87f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.824 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 134818f1-9848-45e7-ac27-c5290f58f87f in datapath 4bd15a72-ce65-4737-b705-4b2b86d3a32a unbound from our chassis#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.826 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4bd15a72-ce65-4737-b705-4b2b86d3a32a#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.845 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8901374c-c5d6-4f29-bebe-03e9ba562b66]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.865 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:01 np0005473739 podman[388279]: 2025-10-07 14:39:01.882420873 +0000 UTC m=+0.060912909 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.885 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3f905200-0dc7-4bd8-aab4-c59b53723882]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:01 np0005473739 podman[388287]: 2025-10-07 14:39:01.889705927 +0000 UTC m=+0.067291218 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.888 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[12f9b98f-5645-4c88-aa91-94cac70ff620]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:01 np0005473739 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000077.scope: Deactivated successfully.
Oct  7 10:39:01 np0005473739 systemd[1]: machine-qemu\x2d149\x2dinstance\x2d00000077.scope: Consumed 16.868s CPU time.
Oct  7 10:39:01 np0005473739 systemd-machined[214580]: Machine qemu-149-instance-00000077 terminated.
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.910 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.922 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[996cbed4-195c-4d89-87a7-d335d6bc9560]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.939 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.945 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[db5da720-6c2d-4846-90e0-7d506ffe244d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4bd15a72-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:11:db:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 354], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845459, 'reachable_time': 31245, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388372, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.946 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.965 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[08e46c0f-04c2-438b-81e2-964103c303ab]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap4bd15a72-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845473, 'tstamp': 845473}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388386, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap4bd15a72-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 845476, 'tstamp': 845476}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388386, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.967 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bd15a72-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4bd15a72-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4bd15a72-c0, col_values=(('external_ids', {'iface-id': '818ca059-c8de-4f85-a524-8f47c8fbf780'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:01.973 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.988 2 DEBUG nova.policy [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5c505d04148e44b8b93ceab0e3cedef4', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:39:01 np0005473739 nova_compute[259550]: 2025-10-07 14:39:01.997 2 DEBUG oslo_concurrency.lockutils [req-760c94e7-ffba-4e80-8171-62ec9414ea0f req-84e1636a-7ec7-4b5b-bcf0-5c9014780280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4c61749b-b18d-4fbe-b99c-90e15ced9469" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.003 2 INFO nova.virt.libvirt.driver [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Instance destroyed successfully.#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.003 2 DEBUG nova.objects.instance [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 716d82da-745f-43ca-a7fa-38f02d3e5dc3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.025 2 DEBUG nova.virt.libvirt.vif [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:38:20Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-533021491',display_name='tempest-TestNetworkBasicOps-server-533021491',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-533021491',id=119,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFiLtVtON/CvAsHcf1ZXHYh3rc03vthPyQgHr1JmpZrgfX90g8u0AoXxhQvx7zNCW+RZBDYY9qzlgStzxh1Af0F+ohdhBY+PBrQ7zmCH8Cgi0yMJpOyuiddZB1OxGzpRsA==',key_name='tempest-TestNetworkBasicOps-1904256776',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:38:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-vph0u7ke',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:38:39Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=716d82da-745f-43ca-a7fa-38f02d3e5dc3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.026 2 DEBUG nova.network.os_vif_util [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "134818f1-9848-45e7-ac27-c5290f58f87f", "address": "fa:16:3e:f6:cb:9f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.174", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap134818f1-98", "ovs_interfaceid": "134818f1-9848-45e7-ac27-c5290f58f87f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.027 2 DEBUG nova.network.os_vif_util [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.027 2 DEBUG os_vif [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.029 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap134818f1-98, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.030 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.031 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.032 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.033 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.054 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.057 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.097 2 INFO os_vif [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:cb:9f,bridge_name='br-int',has_traffic_filtering=True,id=134818f1-9848-45e7-ac27-c5290f58f87f,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap134818f1-98')#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.183 2 DEBUG nova.network.neutron [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.201 2 INFO nova.compute.manager [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Took 1.78 seconds to deallocate network for instance.#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.259 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.260 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2298: 305 pgs: 305 active+clean; 334 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.2 MiB/s wr, 75 op/s
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.415 2 DEBUG oslo_concurrency.processutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.452 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.395s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.517 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] resizing rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.646 2 DEBUG nova.objects.instance [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'migration_context' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.659 2 DEBUG nova.compute.manager [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.660 2 DEBUG oslo_concurrency.lockutils [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.660 2 DEBUG oslo_concurrency.lockutils [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.660 2 DEBUG oslo_concurrency.lockutils [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.661 2 DEBUG nova.compute.manager [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] No waiting events found dispatching network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.661 2 WARNING nova.compute.manager [req-df21b932-4596-457b-917c-1cff2a2f14a9 req-d9f130bf-35d8-47a6-b86c-526e230d43ff 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received unexpected event network-vif-plugged-1ee1b68d-6081-4e61-a797-c2e41ac53a29 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.671 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.672 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Ensure instance console log exists: /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.672 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.673 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.673 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.716 2 INFO nova.virt.libvirt.driver [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Deleting instance files /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3_del#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.717 2 INFO nova.virt.libvirt.driver [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Deletion of /var/lib/nova/instances/716d82da-745f-43ca-a7fa-38f02d3e5dc3_del complete#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.743 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Successfully created port: 660a78e9-3d3f-4949-88f4-3cad47b74229 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.774 2 INFO nova.compute.manager [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Took 1.05 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.775 2 DEBUG oslo.service.loopingcall [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.775 2 DEBUG nova.compute.manager [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.775 2 DEBUG nova.network.neutron [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:39:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:39:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2559127699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.917 2 DEBUG oslo_concurrency.processutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.923 2 DEBUG nova.compute.provider_tree [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:39:02 np0005473739 nova_compute[259550]: 2025-10-07 14:39:02.997 2 DEBUG nova.scheduler.client.report [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:39:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.057 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.798s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.089 2 INFO nova.scheduler.client.report [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 4c61749b-b18d-4fbe-b99c-90e15ced9469#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.155 2 DEBUG oslo_concurrency.lockutils [None req-f5aa1156-df2d-442c-a679-d2a6a8fd3bda d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.611 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Successfully updated port: 660a78e9-3d3f-4949-88f4-3cad47b74229 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.659 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.659 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.659 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.735 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.735 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.735 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.736 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4c61749b-b18d-4fbe-b99c-90e15ced9469-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.736 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] No waiting events found dispatching network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.736 2 WARNING nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received unexpected event network-vif-plugged-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.736 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Received event network-vif-deleted-ad5643dc-1b43-4ffa-b380-8ee6fbac98fe external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.736 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-unplugged-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.736 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] No waiting events found dispatching network-vif-unplugged-134818f1-9848-45e7-ac27-c5290f58f87f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-unplugged-134818f1-9848-45e7-ac27-c5290f58f87f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.737 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.738 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.738 2 DEBUG oslo_concurrency.lockutils [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.738 2 DEBUG nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] No waiting events found dispatching network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.738 2 WARNING nova.compute.manager [req-aa67a9bc-f679-4217-a1e9-cf5364ad414e req-b2567807-a2bd-4dc5-9762-02f815f419d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received unexpected event network-vif-plugged-134818f1-9848-45e7-ac27-c5290f58f87f for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.739 2 DEBUG nova.network.neutron [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.756 2 INFO nova.compute.manager [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Took 0.98 seconds to deallocate network for instance.#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.812 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.812 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.878 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.909 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.910 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.910 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.922 2 DEBUG oslo_concurrency.processutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.958 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:39:03 np0005473739 nova_compute[259550]: 2025-10-07 14:39:03.962 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:39:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1155700411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:39:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2299: 305 pgs: 305 active+clean; 277 MiB data, 938 MiB used, 59 GiB / 60 GiB avail; 342 KiB/s rd, 3.7 MiB/s wr, 136 op/s
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.399 2 DEBUG oslo_concurrency.processutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.408 2 DEBUG nova.compute.provider_tree [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.425 2 DEBUG nova.scheduler.client.report [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.503 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.506 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.544s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.506 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.506 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.507 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.747 2 INFO nova.scheduler.client.report [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 716d82da-745f-43ca-a7fa-38f02d3e5dc3#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.754 2 DEBUG nova.network.neutron [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.762 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.762 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.762 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.762 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.762 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.764 2 INFO nova.compute.manager [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Terminating instance#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.764 2 DEBUG nova.compute.manager [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.858 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.858 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance network_info: |[{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.861 2 DEBUG nova.compute.manager [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.861 2 DEBUG nova.compute.manager [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing instance network info cache due to event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.861 2 DEBUG oslo_concurrency.lockutils [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.862 2 DEBUG oslo_concurrency.lockutils [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.862 2 DEBUG nova.network.neutron [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.867 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Start _get_guest_xml network_info=[{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.873 2 WARNING nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.879 2 DEBUG nova.virt.libvirt.host [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.880 2 DEBUG nova.virt.libvirt.host [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.883 2 DEBUG nova.virt.libvirt.host [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.883 2 DEBUG nova.virt.libvirt.host [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.884 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.884 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.884 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.885 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.885 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.885 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.885 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.886 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.886 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.886 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.886 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.887 2 DEBUG nova.virt.hardware [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:39:04 np0005473739 kernel: tap72a26c5e-6c (unregistering): left promiscuous mode
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.891 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:04 np0005473739 NetworkManager[44949]: <info>  [1759847944.9042] device (tap72a26c5e-6c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:39:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:04Z|01271|binding|INFO|Releasing lport 72a26c5e-6ceb-4ee6-b79b-d105eac8054b from this chassis (sb_readonly=0)
Oct  7 10:39:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:04Z|01272|binding|INFO|Setting lport 72a26c5e-6ceb-4ee6-b79b-d105eac8054b down in Southbound
Oct  7 10:39:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:04Z|01273|binding|INFO|Removing iface tap72a26c5e-6c ovn-installed in OVS
Oct  7 10:39:04 np0005473739 kernel: tap1a175a4f-4e (unregistering): left promiscuous mode
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:04 np0005473739 NetworkManager[44949]: <info>  [1759847944.9347] device (tap1a175a4f-4e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.944 2 DEBUG oslo_concurrency.lockutils [None req-a419a005-4190-4e10-ae61-c9b24052ef3c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "716d82da-745f-43ca-a7fa-38f02d3e5dc3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.227s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:04Z|01274|binding|INFO|Releasing lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 from this chassis (sb_readonly=1)
Oct  7 10:39:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:04Z|01275|binding|INFO|Removing iface tap1a175a4f-4e ovn-installed in OVS
Oct  7 10:39:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:04Z|01276|if_status|INFO|Dropped 4 log messages in last 175 seconds (most recently, 175 seconds ago) due to excessive rate
Oct  7 10:39:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:04Z|01277|if_status|INFO|Not setting lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 down as sb is readonly
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.945 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.947 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:04Z|01278|binding|INFO|Setting lport 1a175a4f-4ef8-469e-b213-5f8d404858c8 down in Southbound
Oct  7 10:39:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.951 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:bc:87 10.100.0.12'], port_security=['fa:16:3e:dc:bc:87 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '23c0ce36-9e34-4a73-9f99-3b79f8623238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5dfb73c9-a89b-4659-8761-7d887493b39b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=169b3722-7b9a-4733-8efb-f5bd5c71aacf, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=72a26c5e-6ceb-4ee6-b79b-d105eac8054b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:39:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.952 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b in datapath 5dfb73c9-a89b-4659-8761-7d887493b39b unbound from our chassis#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.953 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5dfb73c9-a89b-4659-8761-7d887493b39b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:39:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.954 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fdd78ae3-b9d2-4420-9fab-f1521bc661f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:39:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2174935593' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:39:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.954 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b namespace which is not needed anymore#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:04.969 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:c4:4a 2001:db8:0:1:f816:3eff:fe72:c44a 2001:db8::f816:3eff:fe72:c44a'], port_security=['fa:16:3e:72:c4:4a 2001:db8:0:1:f816:3eff:fe72:c44a 2001:db8::f816:3eff:fe72:c44a'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe72:c44a/64 2001:db8::f816:3eff:fe72:c44a/64', 'neutron:device_id': '23c0ce36-9e34-4a73-9f99-3b79f8623238', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4c956141-6a21-499d-99b1-885d1a2972f7', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd07d1fc3-3bf7-4ca6-a994-01f8bb5c5bd0', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=2fceb8d2-9d2a-45b9-beb8-73d518298477, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1a175a4f-4ef8-469e-b213-5f8d404858c8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:39:04 np0005473739 nova_compute[259550]: 2025-10-07 14:39:04.992 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:05 np0005473739 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Deactivated successfully.
Oct  7 10:39:05 np0005473739 systemd[1]: machine-qemu\x2d145\x2dinstance\x2d00000074.scope: Consumed 17.604s CPU time.
Oct  7 10:39:05 np0005473739 systemd-machined[214580]: Machine qemu-145-instance-00000074 terminated.
Oct  7 10:39:05 np0005473739 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [NOTICE]   (385383) : haproxy version is 2.8.14-c23fe91
Oct  7 10:39:05 np0005473739 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [NOTICE]   (385383) : path to executable is /usr/sbin/haproxy
Oct  7 10:39:05 np0005473739 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [WARNING]  (385383) : Exiting Master process...
Oct  7 10:39:05 np0005473739 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [ALERT]    (385383) : Current worker (385385) exited with code 143 (Terminated)
Oct  7 10:39:05 np0005473739 neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b[385379]: [WARNING]  (385383) : All workers exited. Exiting... (0)
Oct  7 10:39:05 np0005473739 systemd[1]: libpod-c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69.scope: Deactivated successfully.
Oct  7 10:39:05 np0005473739 podman[388617]: 2025-10-07 14:39:05.094429479 +0000 UTC m=+0.049032261 container died c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:39:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69-userdata-shm.mount: Deactivated successfully.
Oct  7 10:39:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b6c393a5ba60ac8c97a1f1c72230373f0ef69c5c9c49d0c8f32905833660b064-merged.mount: Deactivated successfully.
Oct  7 10:39:05 np0005473739 podman[388617]: 2025-10-07 14:39:05.14387896 +0000 UTC m=+0.098481732 container cleanup c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:39:05 np0005473739 systemd[1]: libpod-conmon-c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69.scope: Deactivated successfully.
Oct  7 10:39:05 np0005473739 NetworkManager[44949]: <info>  [1759847945.1853] manager: (tap72a26c5e-6c): new Tun device (/org/freedesktop/NetworkManager/Devices/514)
Oct  7 10:39:05 np0005473739 systemd-udevd[388588]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:39:05 np0005473739 NetworkManager[44949]: <info>  [1759847945.2101] manager: (tap1a175a4f-4e): new Tun device (/org/freedesktop/NetworkManager/Devices/515)
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.231 2 INFO nova.virt.libvirt.driver [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance destroyed successfully.#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.232 2 DEBUG nova.objects.instance [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 23c0ce36-9e34-4a73-9f99-3b79f8623238 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:05 np0005473739 podman[388664]: 2025-10-07 14:39:05.237794789 +0000 UTC m=+0.070990818 container remove c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.251 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cc7b7b3a-079e-42f8-87fc-39958f621d57]: (4, ('Tue Oct  7 02:39:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b (c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69)\nc1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69\nTue Oct  7 02:39:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b (c1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69)\nc1009abb4a5566e9c26dd04a37026e4a93b26b948fc1a147d1e481bf7bdd5d69\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f1092d0-5d20-46a8-9e76-5daee8cd9f3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.254 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5dfb73c9-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.256 2 DEBUG nova.virt.libvirt.vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:44Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.257 2 DEBUG nova.network.os_vif_util [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:39:05 np0005473739 kernel: tap5dfb73c9-a0: left promiscuous mode
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.258 2 DEBUG nova.network.os_vif_util [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.258 2 DEBUG os_vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.262 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.264 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap72a26c5e-6c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.265 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.279 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.282 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[74d90dd7-b96f-49f1-91ad-571e2d7b95fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.290 2 INFO os_vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:bc:87,bridge_name='br-int',has_traffic_filtering=True,id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b,network=Network(5dfb73c9-a89b-4659-8761-7d887493b39b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap72a26c5e-6c')#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.291 2 DEBUG nova.virt.libvirt.vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-78341121',display_name='tempest-TestGettingAddress-server-78341121',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-78341121',id=116,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBg6AyFvNKr/NvRYPKGBJa3VbWSwFRfb26o3hZvXnfFfn7X4aQ1Q/jbXkbYghujGDgpxLHnRRd1MC5kNQi4K1LdembiiRr0OaQYyRwa6iwfYghMLezefmo7mghpgI87HBQ==',key_name='tempest-TestGettingAddress-1662333012',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-g1800drv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:44Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=23c0ce36-9e34-4a73-9f99-3b79f8623238,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.291 2 DEBUG nova.network.os_vif_util [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.292 2 DEBUG nova.network.os_vif_util [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.292 2 DEBUG os_vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.294 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1a175a4f-4e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.300 2 INFO os_vif [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:c4:4a,bridge_name='br-int',has_traffic_filtering=True,id=1a175a4f-4ef8-469e-b213-5f8d404858c8,network=Network(4c956141-6a21-499d-99b1-885d1a2972f7),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1a175a4f-4e')#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.309 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[692e3887-a170-440a-9ee2-6b746a551ab7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.310 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ab747b-1148-4022-b47b-3ef783af7e5a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.333 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[afa5bcf4-2faa-43b8-b330-c80e7b89252d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844666, 'reachable_time': 38561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388705, 'error': None, 'target': 'ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 systemd[1]: run-netns-ovnmeta\x2d5dfb73c9\x2da89b\x2d4659\x2d8761\x2d7d887493b39b.mount: Deactivated successfully.
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.339 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5dfb73c9-a89b-4659-8761-7d887493b39b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.339 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[9d0d9b63-87ee-4339-9950-48d5b3c576e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.340 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1a175a4f-4ef8-469e-b213-5f8d404858c8 in datapath 4c956141-6a21-499d-99b1-885d1a2972f7 unbound from our chassis#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.342 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4c956141-6a21-499d-99b1-885d1a2972f7, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.343 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c57ef16e-ea64-47a9-9adb-c7c04276c60b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.343 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7 namespace which is not needed anymore#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.368 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.369 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000074 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.373 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.373 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000075 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:39:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:39:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3618825100' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.486 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.596s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:05 np0005473739 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [NOTICE]   (385548) : haproxy version is 2.8.14-c23fe91
Oct  7 10:39:05 np0005473739 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [NOTICE]   (385548) : path to executable is /usr/sbin/haproxy
Oct  7 10:39:05 np0005473739 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [ALERT]    (385548) : Current worker (385550) exited with code 143 (Terminated)
Oct  7 10:39:05 np0005473739 neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7[385541]: [WARNING]  (385548) : All workers exited. Exiting... (0)
Oct  7 10:39:05 np0005473739 systemd[1]: libpod-9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44.scope: Deactivated successfully.
Oct  7 10:39:05 np0005473739 podman[388726]: 2025-10-07 14:39:05.513719692 +0000 UTC m=+0.064259688 container died 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.515 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.525 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44-userdata-shm.mount: Deactivated successfully.
Oct  7 10:39:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a337889f89fdae52e14dc13504ac936b7577e6b7adfb2aa670a8f40235199adf-merged.mount: Deactivated successfully.
Oct  7 10:39:05 np0005473739 podman[388726]: 2025-10-07 14:39:05.568602938 +0000 UTC m=+0.119142914 container cleanup 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:39:05 np0005473739 systemd[1]: libpod-conmon-9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44.scope: Deactivated successfully.
Oct  7 10:39:05 np0005473739 podman[388775]: 2025-10-07 14:39:05.635840555 +0000 UTC m=+0.045606329 container remove 9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.642 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1280b056-f388-4c36-a2b4-9c90d5393c53]: (4, ('Tue Oct  7 02:39:05 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7 (9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44)\n9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44\nTue Oct  7 02:39:05 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7 (9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44)\n9ef58150e40b91322c436c1cc925238697c69a140b3c296764ab490cdd2a6f44\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e2606eb8-344d-4f06-b389-8ecd12402eaa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.645 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4c956141-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:05 np0005473739 kernel: tap4c956141-60: left promiscuous mode
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.730 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[73276e92-e1d3-4bf4-b0c9-1697b7830209]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.756 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e348980f-3fbe-45e3-b378-865bbdd0f5c5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.758 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f4628ed-effe-4bdf-ab89-b2b8721a5a61]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.775 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[49813131-7cfe-4db4-a340-8b62489c314f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 844808, 'reachable_time': 17355, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388809, 'error': None, 'target': 'ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.778 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4c956141-6a21-499d-99b1-885d1a2972f7 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:39:05 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:05.778 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[4993563d-3cb7-4fad-9ac1-d682db8c48b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.783 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.785 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3472MB free_disk=59.820438385009766GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.785 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.786 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.853 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Received event network-vif-deleted-134818f1-9848-45e7-ac27-c5290f58f87f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.853 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.853 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing instance network info cache due to event network-changed-72a26c5e-6ceb-4ee6-b79b-d105eac8054b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.854 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.854 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.854 2 DEBUG nova.network.neutron [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Refreshing network info cache for port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.889 2 INFO nova.virt.libvirt.driver [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Deleting instance files /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238_del#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.890 2 INFO nova.virt.libvirt.driver [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Deletion of /var/lib/nova/instances/23c0ce36-9e34-4a73-9f99-3b79f8623238_del complete#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.893 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 23c0ce36-9e34-4a73-9f99-3b79f8623238 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.893 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance c0eb8730-2b26-4cc0-8a9c-019688db568f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.893 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.893 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.893 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:39:05 np0005473739 nova_compute[259550]: 2025-10-07 14:39:05.975 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:39:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2958821824' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.027 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.029 2 DEBUG nova.virt.libvirt.vif [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-335561064',display_name='tempest-TestNetworkAdvancedServerOps-server-335561064',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-335561064',id=120,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKd+JvDcSqw5OJ03cvElecW4OwXxUWcOza7ZaEaEf+FC3qoJlFBHob9pK59ESmk17iW8KZuFczVLCS9KQd4bG4fRY/MC1LsIJiiL5MRoGeETGGZgzRCRibwIbIrlPnNS2Q==',key_name='tempest-TestNetworkAdvancedServerOps-452416595',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-kg93fcgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:01Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=9996c6b8-7d50-42b8-9617-2a1ae7d36d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.030 2 DEBUG nova.network.os_vif_util [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.031 2 DEBUG nova.network.os_vif_util [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.032 2 DEBUG nova.objects.instance [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.078 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  <uuid>9996c6b8-7d50-42b8-9617-2a1ae7d36d30</uuid>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  <name>instance-00000078</name>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkAdvancedServerOps-server-335561064</nova:name>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:39:04</nova:creationTime>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <nova:user uuid="5c505d04148e44b8b93ceab0e3cedef4">tempest-TestNetworkAdvancedServerOps-316338420-project-member</nova:user>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <nova:project uuid="74c80c1e3c7c4a0dbf1c602d301618a7">tempest-TestNetworkAdvancedServerOps-316338420</nova:project>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <nova:port uuid="660a78e9-3d3f-4949-88f4-3cad47b74229">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <entry name="serial">9996c6b8-7d50-42b8-9617-2a1ae7d36d30</entry>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <entry name="uuid">9996c6b8-7d50-42b8-9617-2a1ae7d36d30</entry>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:b2:2c:15"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <target dev="tap660a78e9-3d"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/console.log" append="off"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:39:06 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:39:06 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:39:06 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:39:06 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.080 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Preparing to wait for external event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.080 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.080 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.080 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.081 2 DEBUG nova.virt.libvirt.vif [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-335561064',display_name='tempest-TestNetworkAdvancedServerOps-server-335561064',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-335561064',id=120,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKd+JvDcSqw5OJ03cvElecW4OwXxUWcOza7ZaEaEf+FC3qoJlFBHob9pK59ESmk17iW8KZuFczVLCS9KQd4bG4fRY/MC1LsIJiiL5MRoGeETGGZgzRCRibwIbIrlPnNS2Q==',key_name='tempest-TestNetworkAdvancedServerOps-452416595',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-kg93fcgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:01Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=9996c6b8-7d50-42b8-9617-2a1ae7d36d30,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.081 2 DEBUG nova.network.os_vif_util [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.082 2 DEBUG nova.network.os_vif_util [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.082 2 DEBUG os_vif [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.083 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.084 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.087 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap660a78e9-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.087 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap660a78e9-3d, col_values=(('external_ids', {'iface-id': '660a78e9-3d3f-4949-88f4-3cad47b74229', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:2c:15', 'vm-uuid': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:06 np0005473739 NetworkManager[44949]: <info>  [1759847946.0896] manager: (tap660a78e9-3d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/516)
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.090 2 INFO nova.compute.manager [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Took 1.33 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.091 2 DEBUG oslo.service.loopingcall [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.091 2 DEBUG nova.compute.manager [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.092 2 DEBUG nova.network.neutron [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.095 2 INFO os_vif [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d')#033[00m
Oct  7 10:39:06 np0005473739 systemd[1]: run-netns-ovnmeta\x2d4c956141\x2d6a21\x2d499d\x2d99b1\x2d885d1a2972f7.mount: Deactivated successfully.
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.207 2 DEBUG nova.network.neutron [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updated VIF entry in instance network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.207 2 DEBUG nova.network.neutron [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.324 2 DEBUG oslo_concurrency.lockutils [req-d662a34e-5ed1-418f-8626-09212690bc75 req-61afcf9a-64e0-4b1a-9b69-d9c70abd9e4d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.328 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.328 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.328 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] No VIF found with MAC fa:16:3e:b2:2c:15, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.329 2 INFO nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Using config drive#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.346 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2300: 305 pgs: 305 active+clean; 221 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 337 KiB/s rd, 3.6 MiB/s wr, 141 op/s
Oct  7 10:39:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:39:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2690316980' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.420 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.425 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.446 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.475 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.476 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.690s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.995 2 DEBUG nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-unplugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-unplugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-unplugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.996 2 DEBUG nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.997 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.997 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.997 2 DEBUG oslo_concurrency.lockutils [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.997 2 DEBUG nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:06 np0005473739 nova_compute[259550]: 2025-10-07 14:39:06.998 2 WARNING nova.compute.manager [req-df089b1a-c797-4531-94db-4ee2ccafbdbf req-19115de2-1026-4148-a465-ec30ea41f384 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received unexpected event network-vif-plugged-72a26c5e-6ceb-4ee6-b79b-d105eac8054b for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.027 2 INFO nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Creating config drive at /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.032 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57sdmp_3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.179 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp57sdmp_3" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.202 2 DEBUG nova.storage.rbd_utils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] rbd image 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.206 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.369 2 DEBUG oslo_concurrency.processutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config 9996c6b8-7d50-42b8-9617-2a1ae7d36d30_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.370 2 INFO nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Deleting local config drive /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30/disk.config because it was imported into RBD.#033[00m
Oct  7 10:39:07 np0005473739 kernel: tap660a78e9-3d: entered promiscuous mode
Oct  7 10:39:07 np0005473739 NetworkManager[44949]: <info>  [1759847947.4173] manager: (tap660a78e9-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/517)
Oct  7 10:39:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:07Z|01279|binding|INFO|Claiming lport 660a78e9-3d3f-4949-88f4-3cad47b74229 for this chassis.
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.418 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:07Z|01280|binding|INFO|660a78e9-3d3f-4949-88f4-3cad47b74229: Claiming fa:16:3e:b2:2c:15 10.100.0.7
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.425 2 DEBUG nova.network.neutron [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:07 np0005473739 NetworkManager[44949]: <info>  [1759847947.4312] device (tap660a78e9-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:39:07 np0005473739 NetworkManager[44949]: <info>  [1759847947.4320] device (tap660a78e9-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.430 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:2c:15 10.100.0.7'], port_security=['fa:16:3e:b2:2c:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '2', 'neutron:security_group_ids': '0d4d4d28-c7c9-4239-b81d-776c0fee085d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df54412f-162f-42ea-b08c-2dcfe769ac96, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=660a78e9-3d3f-4949-88f4-3cad47b74229) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.431 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 660a78e9-3d3f-4949-88f4-3cad47b74229 in datapath d58f5a01-ad0a-4168-95c9-fc8189cce054 bound to our chassis#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.432 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d58f5a01-ad0a-4168-95c9-fc8189cce054#033[00m
Oct  7 10:39:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:07Z|01281|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 ovn-installed in OVS
Oct  7 10:39:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:07Z|01282|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 up in Southbound
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.444 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20c38a93-300d-413f-a9a0-99639fdd09c8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.446 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd58f5a01-a1 in ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.448 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd58f5a01-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.448 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2315f0a-4cf5-4ed7-9f52-8ff85969f951]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.449 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[da1c7e4d-e93f-4db5-9542-46f49c700fac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.455 2 INFO nova.compute.manager [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Took 1.36 seconds to deallocate network for instance.#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.463 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f13e5305-4951-4281-89a9-5e23a2096c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 systemd-machined[214580]: New machine qemu-150-instance-00000078.
Oct  7 10:39:07 np0005473739 systemd[1]: Started Virtual Machine qemu-150-instance-00000078.
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.485 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd25ea9f-3b9a-4bc2-8cf4-fe01cd1bfe21]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.510 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.510 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.515 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6acb7aba-805c-48af-a2a6-1df8b6d82f05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.520 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[328acd5a-b7a1-4a35-8da8-924bafb34f21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 NetworkManager[44949]: <info>  [1759847947.5216] manager: (tapd58f5a01-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/518)
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.544 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.544 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.545 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.545 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.546 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.547 2 INFO nova.compute.manager [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Terminating instance#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.547 2 DEBUG nova.compute.manager [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.550 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.564 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ccae067a-0d4f-444f-9bd6-a6234cb7b7b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.568 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2ec4ba2a-070d-4cfd-8ee9-26b5e57c3ac3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 NetworkManager[44949]: <info>  [1759847947.5943] device (tapd58f5a01-a0): carrier: link connected
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.600 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0c551d91-f5eb-4ac0-9d12-15fbc922bf69]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.614 2 DEBUG oslo_concurrency.processutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:07 np0005473739 kernel: tap1c96bebd-0f (unregistering): left promiscuous mode
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[68f988fb-4a21-4f96-8f77-f73548dd9b42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd58f5a01-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:0d:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 368], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853115, 'reachable_time': 35738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 388937, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 NetworkManager[44949]: <info>  [1759847947.6289] device (tap1c96bebd-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:39:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:07Z|01283|binding|INFO|Releasing lport 1c96bebd-0f68-48d9-9bab-486d6e56cb4e from this chassis (sb_readonly=0)
Oct  7 10:39:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:07Z|01284|binding|INFO|Setting lport 1c96bebd-0f68-48d9-9bab-486d6e56cb4e down in Southbound
Oct  7 10:39:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:07Z|01285|binding|INFO|Removing iface tap1c96bebd-0f ovn-installed in OVS
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.640 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43f7033a-987a-409f-b93d-7cba014852be]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:dbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 853115, 'tstamp': 853115}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 388939, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.648 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:46:76:3f 10.100.0.13'], port_security=['fa:16:3e:46:76:3f 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c0eb8730-2b26-4cc0-8a9c-019688db568f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ddf5ea1d-9677-4e1c-8a3a-0f9543177677', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=be472fff-888c-4934-b0b7-07dc4319f2eb, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=1c96bebd-0f68-48d9-9bab-486d6e56cb4e) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.653 2 DEBUG nova.network.neutron [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updated VIF entry in instance network info cache for port 72a26c5e-6ceb-4ee6-b79b-d105eac8054b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.654 2 DEBUG nova.network.neutron [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Updating instance_info_cache with network_info: [{"id": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "address": "fa:16:3e:dc:bc:87", "network": {"id": "5dfb73c9-a89b-4659-8761-7d887493b39b", "bridge": "br-int", "label": "tempest-network-smoke--586589201", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap72a26c5e-6c", "ovs_interfaceid": "72a26c5e-6ceb-4ee6-b79b-d105eac8054b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "address": "fa:16:3e:72:c4:4a", "network": {"id": "4c956141-6a21-499d-99b1-885d1a2972f7", "bridge": "br-int", "label": "tempest-network-smoke--1404885451", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe72:c44a", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1a175a4f-4e", "ovs_interfaceid": "1a175a4f-4ef8-469e-b213-5f8d404858c8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.661 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67ae5a27-10c2-4423-8e41-65071c94caa2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd58f5a01-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:0d:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 368], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853115, 'reachable_time': 35738, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 388943, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.684 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-23c0ce36-9e34-4a73-9f99-3b79f8623238" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.685 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-unplugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.685 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.686 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.686 2 DEBUG oslo_concurrency.lockutils [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.686 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-unplugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.687 2 DEBUG nova.compute.manager [req-1661f7fb-4f67-4d0c-b9ca-fab2c9e9f917 req-f7613431-2868-4611-bf6e-415b45c398b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-unplugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.691 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c3774608-39c5-4db9-960c-cc5ac9b6f89b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000075.scope: Deactivated successfully.
Oct  7 10:39:07 np0005473739 systemd[1]: machine-qemu\x2d146\x2dinstance\x2d00000075.scope: Consumed 16.025s CPU time.
Oct  7 10:39:07 np0005473739 systemd-machined[214580]: Machine qemu-146-instance-00000075 terminated.
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.758 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab083be-ad9b-44e5-911d-27931fd62d8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.759 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd58f5a01-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.759 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.760 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd58f5a01-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 NetworkManager[44949]: <info>  [1759847947.7623] manager: (tapd58f5a01-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/519)
Oct  7 10:39:07 np0005473739 kernel: tapd58f5a01-a0: entered promiscuous mode
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.770 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd58f5a01-a0, col_values=(('external_ids', {'iface-id': 'd4b37819-62c7-42bc-af8c-24aff0a13de9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:07Z|01286|binding|INFO|Releasing lport d4b37819-62c7-42bc-af8c-24aff0a13de9 from this chassis (sb_readonly=0)
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.783 2 INFO nova.virt.libvirt.driver [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Instance destroyed successfully.#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.784 2 DEBUG nova.objects.instance [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid c0eb8730-2b26-4cc0-8a9c-019688db568f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.790 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.795 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.796 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee80a20f-3c41-499e-9a63-0c03266b0df1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.797 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-d58f5a01-ad0a-4168-95c9-fc8189cce054
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID d58f5a01-ad0a-4168-95c9-fc8189cce054
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:39:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:07.800 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'env', 'PROCESS_TAG=haproxy-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d58f5a01-ad0a-4168-95c9-fc8189cce054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.813 2 DEBUG nova.virt.libvirt.vif [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:37:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-528486164',display_name='tempest-TestNetworkBasicOps-server-528486164',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-528486164',id=117,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHQxueVciqThLEf3QSXvOwVT+5hy1ZzCF3GFJVm8JMkN6QiflWo55jpLsNYky8mEXwzLpOQbQ3bLKW+JqUogdrrbbB9pgVzSI20yKTEUP5ZPdTnLVnqTqbVW/n6Yq8HTpg==',key_name='tempest-TestNetworkBasicOps-1556177030',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:37:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-aayr7mmd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:37:52Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=c0eb8730-2b26-4cc0-8a9c-019688db568f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.813 2 DEBUG nova.network.os_vif_util [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "address": "fa:16:3e:46:76:3f", "network": {"id": "4bd15a72-ce65-4737-b705-4b2b86d3a32a", "bridge": "br-int", "label": "tempest-network-smoke--586820372", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap1c96bebd-0f", "ovs_interfaceid": "1c96bebd-0f68-48d9-9bab-486d6e56cb4e", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.814 2 DEBUG nova.network.os_vif_util [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.814 2 DEBUG os_vif [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.816 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap1c96bebd-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.820 2 INFO os_vif [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:46:76:3f,bridge_name='br-int',has_traffic_filtering=True,id=1c96bebd-0f68-48d9-9bab-486d6e56cb4e,network=Network(4bd15a72-ce65-4737-b705-4b2b86d3a32a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap1c96bebd-0f')#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.954 2 DEBUG nova.compute.manager [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.954 2 DEBUG oslo_concurrency.lockutils [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.955 2 DEBUG oslo_concurrency.lockutils [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.955 2 DEBUG oslo_concurrency.lockutils [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.955 2 DEBUG nova.compute.manager [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] No waiting events found dispatching network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:07 np0005473739 nova_compute[259550]: 2025-10-07 14:39:07.956 2 WARNING nova.compute.manager [req-93ca9911-cce1-487b-8663-d5ccf6c667eb req-d3522e30-9ba6-48bc-ac8e-689eadd9fab2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received unexpected event network-vif-plugged-1a175a4f-4ef8-469e-b213-5f8d404858c8 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:39:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:39:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1901233399' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.092 2 DEBUG oslo_concurrency.processutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.101 2 DEBUG nova.compute.provider_tree [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.119 2 DEBUG nova.scheduler.client.report [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.143 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.633s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.170 2 INFO nova.scheduler.client.report [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 23c0ce36-9e34-4a73-9f99-3b79f8623238#033[00m
Oct  7 10:39:08 np0005473739 podman[389025]: 2025-10-07 14:39:08.188228236 +0000 UTC m=+0.058401491 container create 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  7 10:39:08 np0005473739 systemd[1]: Started libpod-conmon-4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f.scope.
Oct  7 10:39:08 np0005473739 podman[389025]: 2025-10-07 14:39:08.158339398 +0000 UTC m=+0.028512683 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.260 2 DEBUG oslo_concurrency.lockutils [None req-cd90099c-e0ef-42c1-bda7-860a3a742604 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "23c0ce36-9e34-4a73-9f99-3b79f8623238" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.498s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:39:08 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01b59d4dc6edabb37a3c0b5853d080b83aedff571bda483335c1c14eb6b156ce/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.293 2 INFO nova.virt.libvirt.driver [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Deleting instance files /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f_del#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.294 2 INFO nova.virt.libvirt.driver [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Deletion of /var/lib/nova/instances/c0eb8730-2b26-4cc0-8a9c-019688db568f_del complete#033[00m
Oct  7 10:39:08 np0005473739 podman[389025]: 2025-10-07 14:39:08.299813678 +0000 UTC m=+0.169986953 container init 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:39:08 np0005473739 podman[389025]: 2025-10-07 14:39:08.307476352 +0000 UTC m=+0.177649607 container start 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:39:08 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [NOTICE]   (389086) : New worker (389088) forked
Oct  7 10:39:08 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [NOTICE]   (389086) : Loading success.
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.364 2 INFO nova.compute.manager [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Took 0.82 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.364 2 DEBUG oslo.service.loopingcall [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.365 2 DEBUG nova.compute.manager [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.365 2 DEBUG nova.network.neutron [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.367 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 1c96bebd-0f68-48d9-9bab-486d6e56cb4e in datapath 4bd15a72-ce65-4737-b705-4b2b86d3a32a unbound from our chassis#033[00m
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.368 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4bd15a72-ce65-4737-b705-4b2b86d3a32a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.369 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd84780d-4767-4377-bdf6-c008843f9818]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.370 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a namespace which is not needed anymore#033[00m
Oct  7 10:39:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2301: 305 pgs: 305 active+clean; 221 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 195 KiB/s rd, 2.7 MiB/s wr, 115 op/s
Oct  7 10:39:08 np0005473739 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [NOTICE]   (385893) : haproxy version is 2.8.14-c23fe91
Oct  7 10:39:08 np0005473739 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [NOTICE]   (385893) : path to executable is /usr/sbin/haproxy
Oct  7 10:39:08 np0005473739 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [WARNING]  (385893) : Exiting Master process...
Oct  7 10:39:08 np0005473739 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [WARNING]  (385893) : Exiting Master process...
Oct  7 10:39:08 np0005473739 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [ALERT]    (385893) : Current worker (385895) exited with code 143 (Terminated)
Oct  7 10:39:08 np0005473739 neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a[385889]: [WARNING]  (385893) : All workers exited. Exiting... (0)
Oct  7 10:39:08 np0005473739 systemd[1]: libpod-70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c.scope: Deactivated successfully.
Oct  7 10:39:08 np0005473739 podman[389114]: 2025-10-07 14:39:08.498313292 +0000 UTC m=+0.044634224 container died 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:39:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c-userdata-shm.mount: Deactivated successfully.
Oct  7 10:39:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ad6812d5d6ccd623c1c8db2e48873105455dd3e4c1a071313da6b9108e14c4dd-merged.mount: Deactivated successfully.
Oct  7 10:39:08 np0005473739 podman[389114]: 2025-10-07 14:39:08.538919637 +0000 UTC m=+0.085240579 container cleanup 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:39:08 np0005473739 systemd[1]: libpod-conmon-70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c.scope: Deactivated successfully.
Oct  7 10:39:08 np0005473739 podman[389140]: 2025-10-07 14:39:08.594465411 +0000 UTC m=+0.037703699 container remove 70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.600 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c624ba16-3e24-41eb-a5ba-a024ddbcfb64]: (4, ('Tue Oct  7 02:39:08 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a (70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c)\n70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c\nTue Oct  7 02:39:08 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a (70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c)\n70965e34b2320eaa4b7c128ff83b8d3636f240af02bd2edc706757f01b79176c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.602 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92a09337-3818-400d-a3db-d28e26202b99]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.604 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4bd15a72-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:08 np0005473739 kernel: tap4bd15a72-c0: left promiscuous mode
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.622 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[62858df8-3f23-402c-8d4c-2bb9e8ffa113]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.652 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61c967b2-b8cf-4877-b2bb-d80ce6b04219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.653 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4c7ee560-862b-4dce-91d9-1196eda918d8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.670 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d64f3c82-443b-4641-b8a7-1259ba9be6f5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 845449, 'reachable_time': 36288, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 389156, 'error': None, 'target': 'ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:08 np0005473739 systemd[1]: run-netns-ovnmeta\x2d4bd15a72\x2dce65\x2d4737\x2db705\x2d4b2b86d3a32a.mount: Deactivated successfully.
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.675 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4bd15a72-ce65-4737-b705-4b2b86d3a32a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:39:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:08.675 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[8e3405ef-f000-4259-a944-3e71d380e63a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.722 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847948.7225485, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.723 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Started (Lifecycle Event)#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.751 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.756 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847948.7226646, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.756 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.781 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.784 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:39:08 np0005473739 nova_compute[259550]: 2025-10-07 14:39:08.813 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.104 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-deleted-72a26c5e-6ceb-4ee6-b79b-d105eac8054b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.104 2 INFO nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Neutron deleted interface 72a26c5e-6ceb-4ee6-b79b-d105eac8054b; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.105 2 DEBUG nova.network.neutron [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.108 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Detach interface failed, port_id=72a26c5e-6ceb-4ee6-b79b-d105eac8054b, reason: Instance 23c0ce36-9e34-4a73-9f99-3b79f8623238 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.108 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Received event network-vif-deleted-1a175a4f-4ef8-469e-b213-5f8d404858c8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.108 2 INFO nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Neutron deleted interface 1a175a4f-4ef8-469e-b213-5f8d404858c8; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.108 2 DEBUG nova.network.neutron [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.110 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Detach interface failed, port_id=1a175a4f-4ef8-469e-b213-5f8d404858c8, reason: Instance 23c0ce36-9e34-4a73-9f99-3b79f8623238 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.111 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.111 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.111 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.111 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.112 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Processing event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.112 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.112 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.112 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.113 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.113 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.113 2 WARNING nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.113 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-unplugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.113 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.114 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.114 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.114 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] No waiting events found dispatching network-vif-unplugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.114 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-unplugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.115 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.115 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.115 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.115 2 DEBUG oslo_concurrency.lockutils [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.115 2 DEBUG nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] No waiting events found dispatching network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.116 2 WARNING nova.compute.manager [req-9b2d6234-9ab1-4506-a1cc-ef0756c07ef3 req-9c838ca8-1d3c-4210-92ae-506b0c631512 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received unexpected event network-vif-plugged-1c96bebd-0f68-48d9-9bab-486d6e56cb4e for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.116 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.121 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847949.1215708, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.122 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.124 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.129 2 INFO nova.virt.libvirt.driver [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance spawned successfully.#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.129 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.143 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.147 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.156 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.157 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.157 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.158 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.158 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.159 2 DEBUG nova.virt.libvirt.driver [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.174 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.225 2 INFO nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Took 7.43 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.225 2 DEBUG nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.260 2 DEBUG nova.network.neutron [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.293 2 INFO nova.compute.manager [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Took 0.93 seconds to deallocate network for instance.#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.297 2 INFO nova.compute.manager [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Took 8.68 seconds to build instance.#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.325 2 DEBUG oslo_concurrency.lockutils [None req-feb4c08b-f303-4646-845d-686e2c66d95a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.793s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.346 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.346 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.408 2 DEBUG oslo_concurrency.processutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:39:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/712875872' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.884 2 DEBUG oslo_concurrency.processutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.890 2 DEBUG nova.compute.provider_tree [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.909 2 DEBUG nova.scheduler.client.report [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.937 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.590s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:09 np0005473739 nova_compute[259550]: 2025-10-07 14:39:09.965 2 INFO nova.scheduler.client.report [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance c0eb8730-2b26-4cc0-8a9c-019688db568f#033[00m
Oct  7 10:39:10 np0005473739 nova_compute[259550]: 2025-10-07 14:39:10.050 2 DEBUG nova.compute.manager [req-55dc7877-2261-4ea7-a597-abf7b7c40a52 req-96287f0f-7370-4ea9-a842-9a2297023a57 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Received event network-vif-deleted-1c96bebd-0f68-48d9-9bab-486d6e56cb4e external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:10 np0005473739 nova_compute[259550]: 2025-10-07 14:39:10.054 2 DEBUG oslo_concurrency.lockutils [None req-dfb0c87e-1c03-48c4-a96b-8a2f80935224 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "c0eb8730-2b26-4cc0-8a9c-019688db568f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.509s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2302: 305 pgs: 305 active+clean; 120 MiB data, 857 MiB used, 59 GiB / 60 GiB avail; 236 KiB/s rd, 2.7 MiB/s wr, 172 op/s
Oct  7 10:39:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2303: 305 pgs: 305 active+clean; 88 MiB data, 839 MiB used, 59 GiB / 60 GiB avail; 334 KiB/s rd, 1.8 MiB/s wr, 157 op/s
Oct  7 10:39:12 np0005473739 nova_compute[259550]: 2025-10-07 14:39:12.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:12 np0005473739 nova_compute[259550]: 2025-10-07 14:39:12.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:13 np0005473739 nova_compute[259550]: 2025-10-07 14:39:13.979 2 DEBUG nova.compute.manager [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:13 np0005473739 nova_compute[259550]: 2025-10-07 14:39:13.980 2 DEBUG nova.compute.manager [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing instance network info cache due to event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:39:13 np0005473739 nova_compute[259550]: 2025-10-07 14:39:13.981 2 DEBUG oslo_concurrency.lockutils [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:13 np0005473739 nova_compute[259550]: 2025-10-07 14:39:13.981 2 DEBUG oslo_concurrency.lockutils [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:39:13 np0005473739 nova_compute[259550]: 2025-10-07 14:39:13.981 2 DEBUG nova.network.neutron [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:39:14 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 301df201-07a7-4802-aa9b-dfe4c00d46e6 does not exist
Oct  7 10:39:14 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 486d70ff-2fe2-404d-b9f4-9b7a5ec177bd does not exist
Oct  7 10:39:14 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8e0c25fa-72a6-4f2e-8929-91495baa104a does not exist
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:39:14 np0005473739 podman[389333]: 2025-10-07 14:39:14.295397072 +0000 UTC m=+0.097822665 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  7 10:39:14 np0005473739 podman[389334]: 2025-10-07 14:39:14.314738899 +0000 UTC m=+0.118270382 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:39:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2304: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 203 op/s
Oct  7 10:39:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:14Z|01287|binding|INFO|Releasing lport d4b37819-62c7-42bc-af8c-24aff0a13de9 from this chassis (sb_readonly=0)
Oct  7 10:39:14 np0005473739 nova_compute[259550]: 2025-10-07 14:39:14.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:14 np0005473739 nova_compute[259550]: 2025-10-07 14:39:14.737 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847939.7338557, 4c61749b-b18d-4fbe-b99c-90e15ced9469 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:14 np0005473739 nova_compute[259550]: 2025-10-07 14:39:14.737 2 INFO nova.compute.manager [-] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:39:14 np0005473739 podman[389486]: 2025-10-07 14:39:14.745656944 +0000 UTC m=+0.055228348 container create dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:39:14 np0005473739 nova_compute[259550]: 2025-10-07 14:39:14.759 2 DEBUG nova.compute.manager [None req-4445251c-ed84-4f0a-b298-0e5f5bb17d8f - - - - - -] [instance: 4c61749b-b18d-4fbe-b99c-90e15ced9469] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:14 np0005473739 systemd[1]: Started libpod-conmon-dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af.scope.
Oct  7 10:39:14 np0005473739 podman[389486]: 2025-10-07 14:39:14.713156245 +0000 UTC m=+0.022727669 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:39:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:39:14 np0005473739 podman[389486]: 2025-10-07 14:39:14.832802612 +0000 UTC m=+0.142374046 container init dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:39:14 np0005473739 podman[389486]: 2025-10-07 14:39:14.840979741 +0000 UTC m=+0.150551145 container start dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:39:14 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:39:14 np0005473739 podman[389486]: 2025-10-07 14:39:14.84505496 +0000 UTC m=+0.154626364 container attach dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:39:14 np0005473739 systemd[1]: libpod-dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af.scope: Deactivated successfully.
Oct  7 10:39:14 np0005473739 pedantic_albattani[389502]: 167 167
Oct  7 10:39:14 np0005473739 conmon[389502]: conmon dd5d648c8679d7914df4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af.scope/container/memory.events
Oct  7 10:39:14 np0005473739 podman[389486]: 2025-10-07 14:39:14.852505648 +0000 UTC m=+0.162077062 container died dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 10:39:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f32ad818dbb666c16848bd7f5e709158b85f6e13fa80a067b6c6fa079e227a29-merged.mount: Deactivated successfully.
Oct  7 10:39:14 np0005473739 podman[389486]: 2025-10-07 14:39:14.895858347 +0000 UTC m=+0.205429751 container remove dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_albattani, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:39:14 np0005473739 systemd[1]: libpod-conmon-dd5d648c8679d7914df4bf5df976ec104b730cabbb24202e15cb35960a8082af.scope: Deactivated successfully.
Oct  7 10:39:15 np0005473739 podman[389528]: 2025-10-07 14:39:15.06701245 +0000 UTC m=+0.042445675 container create 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 10:39:15 np0005473739 systemd[1]: Started libpod-conmon-0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653.scope.
Oct  7 10:39:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:39:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:15 np0005473739 podman[389528]: 2025-10-07 14:39:15.142781434 +0000 UTC m=+0.118214679 container init 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2)
Oct  7 10:39:15 np0005473739 podman[389528]: 2025-10-07 14:39:15.051546527 +0000 UTC m=+0.026979782 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:39:15 np0005473739 podman[389528]: 2025-10-07 14:39:15.15382168 +0000 UTC m=+0.129254905 container start 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:39:15 np0005473739 podman[389528]: 2025-10-07 14:39:15.157753305 +0000 UTC m=+0.133186530 container attach 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 10:39:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:15Z|01288|binding|INFO|Releasing lport d4b37819-62c7-42bc-af8c-24aff0a13de9 from this chassis (sb_readonly=0)
Oct  7 10:39:15 np0005473739 nova_compute[259550]: 2025-10-07 14:39:15.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:15 np0005473739 nova_compute[259550]: 2025-10-07 14:39:15.451 2 DEBUG nova.network.neutron [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updated VIF entry in instance network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:39:15 np0005473739 nova_compute[259550]: 2025-10-07 14:39:15.452 2 DEBUG nova.network.neutron [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:15 np0005473739 nova_compute[259550]: 2025-10-07 14:39:15.568 2 DEBUG oslo_concurrency.lockutils [req-47c6fba4-358f-45e6-9e52-d4b4633b0ec2 req-c4cc3d5d-268a-41f2-96e5-54f07392d971 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:16 np0005473739 stupefied_jang[389545]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:39:16 np0005473739 stupefied_jang[389545]: --> relative data size: 1.0
Oct  7 10:39:16 np0005473739 stupefied_jang[389545]: --> All data devices are unavailable
Oct  7 10:39:16 np0005473739 systemd[1]: libpod-0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653.scope: Deactivated successfully.
Oct  7 10:39:16 np0005473739 podman[389528]: 2025-10-07 14:39:16.281589384 +0000 UTC m=+1.257022619 container died 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:39:16 np0005473739 systemd[1]: libpod-0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653.scope: Consumed 1.071s CPU time.
Oct  7 10:39:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-893295a505c4232f37e535a88154641c9eb43195044f52e1584bd19e7737c091-merged.mount: Deactivated successfully.
Oct  7 10:39:16 np0005473739 podman[389528]: 2025-10-07 14:39:16.398459247 +0000 UTC m=+1.373892482 container remove 0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_jang, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 10:39:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2305: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 254 KiB/s wr, 141 op/s
Oct  7 10:39:16 np0005473739 systemd[1]: libpod-conmon-0ef845608e50ce20d31a6f4362fdaa2537e402f58f5a98b9c5a53f3711e43653.scope: Deactivated successfully.
Oct  7 10:39:16 np0005473739 nova_compute[259550]: 2025-10-07 14:39:16.991 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847941.9615107, 716d82da-745f-43ca-a7fa-38f02d3e5dc3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:16 np0005473739 nova_compute[259550]: 2025-10-07 14:39:16.992 2 INFO nova.compute.manager [-] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:39:17 np0005473739 podman[389726]: 2025-10-07 14:39:17.015897565 +0000 UTC m=+0.039659630 container create 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 10:39:17 np0005473739 systemd[1]: Started libpod-conmon-8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819.scope.
Oct  7 10:39:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:39:17 np0005473739 podman[389726]: 2025-10-07 14:39:17.079564977 +0000 UTC m=+0.103327062 container init 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:39:17 np0005473739 nova_compute[259550]: 2025-10-07 14:39:17.087 2 DEBUG nova.compute.manager [None req-ea369d32-26b9-4218-8fec-3b6a3f087bb9 - - - - - -] [instance: 716d82da-745f-43ca-a7fa-38f02d3e5dc3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:17 np0005473739 podman[389726]: 2025-10-07 14:39:17.091384532 +0000 UTC m=+0.115146597 container start 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:39:17 np0005473739 reverent_mestorf[389742]: 167 167
Oct  7 10:39:17 np0005473739 podman[389726]: 2025-10-07 14:39:16.999392334 +0000 UTC m=+0.023154429 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:39:17 np0005473739 podman[389726]: 2025-10-07 14:39:17.095236795 +0000 UTC m=+0.118998880 container attach 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:39:17 np0005473739 systemd[1]: libpod-8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819.scope: Deactivated successfully.
Oct  7 10:39:17 np0005473739 podman[389726]: 2025-10-07 14:39:17.096806837 +0000 UTC m=+0.120568902 container died 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:39:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3a9059c762456d477cdba9d0f8855084f60d6402cceff09c737cdae94a7f9c00-merged.mount: Deactivated successfully.
Oct  7 10:39:17 np0005473739 podman[389726]: 2025-10-07 14:39:17.132604253 +0000 UTC m=+0.156366318 container remove 8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_mestorf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:39:17 np0005473739 systemd[1]: libpod-conmon-8d51d76ec799ad8a91be1b38e0c77ed0b6addbdc462a56c33af09f796089b819.scope: Deactivated successfully.
Oct  7 10:39:17 np0005473739 podman[389767]: 2025-10-07 14:39:17.308951065 +0000 UTC m=+0.039964018 container create 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:39:17 np0005473739 systemd[1]: Started libpod-conmon-7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2.scope.
Oct  7 10:39:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:39:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2704a1e4829bad48fba06d42ea8e84149ee3f6843a890b05b08d5d4ca884b62/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2704a1e4829bad48fba06d42ea8e84149ee3f6843a890b05b08d5d4ca884b62/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2704a1e4829bad48fba06d42ea8e84149ee3f6843a890b05b08d5d4ca884b62/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c2704a1e4829bad48fba06d42ea8e84149ee3f6843a890b05b08d5d4ca884b62/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:17 np0005473739 podman[389767]: 2025-10-07 14:39:17.291825258 +0000 UTC m=+0.022838231 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:39:17 np0005473739 podman[389767]: 2025-10-07 14:39:17.3940953 +0000 UTC m=+0.125108273 container init 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:39:17 np0005473739 podman[389767]: 2025-10-07 14:39:17.400249226 +0000 UTC m=+0.131262179 container start 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 10:39:17 np0005473739 podman[389767]: 2025-10-07 14:39:17.405519316 +0000 UTC m=+0.136532289 container attach 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:39:17 np0005473739 nova_compute[259550]: 2025-10-07 14:39:17.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:17 np0005473739 nova_compute[259550]: 2025-10-07 14:39:17.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:18 np0005473739 nova_compute[259550]: 2025-10-07 14:39:18.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]: {
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:    "0": [
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:        {
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "devices": [
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "/dev/loop3"
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            ],
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_name": "ceph_lv0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_size": "21470642176",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "name": "ceph_lv0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "tags": {
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.cluster_name": "ceph",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.crush_device_class": "",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.encrypted": "0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.osd_id": "0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.type": "block",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.vdo": "0"
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            },
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "type": "block",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "vg_name": "ceph_vg0"
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:        }
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:    ],
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:    "1": [
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:        {
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "devices": [
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "/dev/loop4"
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            ],
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_name": "ceph_lv1",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_size": "21470642176",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "name": "ceph_lv1",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "tags": {
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.cluster_name": "ceph",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.crush_device_class": "",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.encrypted": "0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.osd_id": "1",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.type": "block",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.vdo": "0"
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            },
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "type": "block",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "vg_name": "ceph_vg1"
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:        }
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:    ],
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:    "2": [
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:        {
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "devices": [
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "/dev/loop5"
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            ],
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_name": "ceph_lv2",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_size": "21470642176",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "name": "ceph_lv2",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "tags": {
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.cluster_name": "ceph",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.crush_device_class": "",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.encrypted": "0",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.osd_id": "2",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.type": "block",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:                "ceph.vdo": "0"
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            },
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "type": "block",
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:            "vg_name": "ceph_vg2"
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:        }
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]:    ]
Oct  7 10:39:18 np0005473739 unruffled_benz[389783]: }
Oct  7 10:39:18 np0005473739 systemd[1]: libpod-7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2.scope: Deactivated successfully.
Oct  7 10:39:18 np0005473739 podman[389792]: 2025-10-07 14:39:18.278999865 +0000 UTC m=+0.025485141 container died 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:39:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c2704a1e4829bad48fba06d42ea8e84149ee3f6843a890b05b08d5d4ca884b62-merged.mount: Deactivated successfully.
Oct  7 10:39:18 np0005473739 podman[389792]: 2025-10-07 14:39:18.357219036 +0000 UTC m=+0.103704282 container remove 7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_benz, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:39:18 np0005473739 systemd[1]: libpod-conmon-7db1e370d6a5823465b440054059505a35d79bc0321c01a4fbd1af79c9063dd2.scope: Deactivated successfully.
Oct  7 10:39:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2306: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 128 op/s
Oct  7 10:39:18 np0005473739 podman[389949]: 2025-10-07 14:39:18.958022909 +0000 UTC m=+0.043426781 container create a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 10:39:19 np0005473739 systemd[1]: Started libpod-conmon-a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0.scope.
Oct  7 10:39:19 np0005473739 podman[389949]: 2025-10-07 14:39:18.937383668 +0000 UTC m=+0.022787580 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:39:19 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:39:19 np0005473739 podman[389949]: 2025-10-07 14:39:19.048835326 +0000 UTC m=+0.134239208 container init a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:39:19 np0005473739 podman[389949]: 2025-10-07 14:39:19.057483917 +0000 UTC m=+0.142887799 container start a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 10:39:19 np0005473739 xenodochial_colden[389965]: 167 167
Oct  7 10:39:19 np0005473739 systemd[1]: libpod-a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0.scope: Deactivated successfully.
Oct  7 10:39:19 np0005473739 conmon[389965]: conmon a3ceb52df5aee41106c2 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0.scope/container/memory.events
Oct  7 10:39:19 np0005473739 podman[389949]: 2025-10-07 14:39:19.063418225 +0000 UTC m=+0.148822137 container attach a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 10:39:19 np0005473739 podman[389949]: 2025-10-07 14:39:19.063657582 +0000 UTC m=+0.149061464 container died a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:39:19 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8e913db734ec3b265293b124f4e8efb6aede0a0b2feeb1c34a5f95323b6f02bc-merged.mount: Deactivated successfully.
Oct  7 10:39:19 np0005473739 podman[389949]: 2025-10-07 14:39:19.101195275 +0000 UTC m=+0.186599157 container remove a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_colden, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:39:19 np0005473739 systemd[1]: libpod-conmon-a3ceb52df5aee41106c29cc91181eeaa01565e053d2384f633842f6eef3022a0.scope: Deactivated successfully.
Oct  7 10:39:19 np0005473739 podman[389988]: 2025-10-07 14:39:19.265734792 +0000 UTC m=+0.042435795 container create 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:39:19 np0005473739 systemd[1]: Started libpod-conmon-30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a.scope.
Oct  7 10:39:19 np0005473739 podman[389988]: 2025-10-07 14:39:19.249103617 +0000 UTC m=+0.025804640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:39:19 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:39:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76890074084b2bfa3fa145fcc03a2051d83f0ecb710a7b11561d18fc57025cbb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76890074084b2bfa3fa145fcc03a2051d83f0ecb710a7b11561d18fc57025cbb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76890074084b2bfa3fa145fcc03a2051d83f0ecb710a7b11561d18fc57025cbb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/76890074084b2bfa3fa145fcc03a2051d83f0ecb710a7b11561d18fc57025cbb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:19 np0005473739 podman[389988]: 2025-10-07 14:39:19.364072869 +0000 UTC m=+0.140773892 container init 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 10:39:19 np0005473739 podman[389988]: 2025-10-07 14:39:19.372369401 +0000 UTC m=+0.149070404 container start 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:39:19 np0005473739 podman[389988]: 2025-10-07 14:39:19.375642378 +0000 UTC m=+0.152343411 container attach 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 10:39:20 np0005473739 nova_compute[259550]: 2025-10-07 14:39:20.227 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847945.2260573, 23c0ce36-9e34-4a73-9f99-3b79f8623238 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:20 np0005473739 nova_compute[259550]: 2025-10-07 14:39:20.229 2 INFO nova.compute.manager [-] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:39:20 np0005473739 nova_compute[259550]: 2025-10-07 14:39:20.248 2 DEBUG nova.compute.manager [None req-28d03074-648a-422c-9e4f-659eaeb94ae7 - - - - - -] [instance: 23c0ce36-9e34-4a73-9f99-3b79f8623238] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:20 np0005473739 gallant_galois[390005]: {
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "osd_id": 2,
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "type": "bluestore"
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:    },
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "osd_id": 1,
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "type": "bluestore"
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:    },
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "osd_id": 0,
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:        "type": "bluestore"
Oct  7 10:39:20 np0005473739 gallant_galois[390005]:    }
Oct  7 10:39:20 np0005473739 gallant_galois[390005]: }
Oct  7 10:39:20 np0005473739 systemd[1]: libpod-30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a.scope: Deactivated successfully.
Oct  7 10:39:20 np0005473739 podman[389988]: 2025-10-07 14:39:20.394429851 +0000 UTC m=+1.171130854 container died 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 10:39:20 np0005473739 systemd[1]: libpod-30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a.scope: Consumed 1.017s CPU time.
Oct  7 10:39:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2307: 305 pgs: 305 active+clean; 88 MiB data, 826 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 23 KiB/s wr, 128 op/s
Oct  7 10:39:20 np0005473739 systemd[1]: var-lib-containers-storage-overlay-76890074084b2bfa3fa145fcc03a2051d83f0ecb710a7b11561d18fc57025cbb-merged.mount: Deactivated successfully.
Oct  7 10:39:20 np0005473739 podman[389988]: 2025-10-07 14:39:20.505119068 +0000 UTC m=+1.281820071 container remove 30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_galois, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 10:39:20 np0005473739 systemd[1]: libpod-conmon-30f11fb5b946b5486fabef9e2590b3ad42b5ca1223251b7d89fabf969b96a65a.scope: Deactivated successfully.
Oct  7 10:39:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:39:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:39:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:39:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:39:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 182b7e8e-9c5a-42fa-afd3-253559fe7a9f does not exist
Oct  7 10:39:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ebcd22f1-cc67-4cd1-8748-edf02989e4bf does not exist
Oct  7 10:39:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:39:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2308: 305 pgs: 305 active+clean; 94 MiB data, 845 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 635 KiB/s wr, 73 op/s
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:39:22
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', '.mgr', 'backups', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'images', 'vms', 'default.rgw.meta', 'cephfs.cephfs.data', 'cephfs.cephfs.meta']
Oct  7 10:39:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:39:22 np0005473739 nova_compute[259550]: 2025-10-07 14:39:22.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:22 np0005473739 nova_compute[259550]: 2025-10-07 14:39:22.782 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847947.7813118, c0eb8730-2b26-4cc0-8a9c-019688db568f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:22 np0005473739 nova_compute[259550]: 2025-10-07 14:39:22.783 2 INFO nova.compute.manager [-] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:39:22 np0005473739 nova_compute[259550]: 2025-10-07 14:39:22.804 2 DEBUG nova.compute.manager [None req-af434e86-1269-440b-a259-1b8b941a780f - - - - - -] [instance: c0eb8730-2b26-4cc0-8a9c-019688db568f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:22 np0005473739 nova_compute[259550]: 2025-10-07 14:39:22.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:22Z|00146|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b2:2c:15 10.100.0.7
Oct  7 10:39:22 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:22Z|00147|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b2:2c:15 10.100.0.7
Oct  7 10:39:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:39:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:39:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:39:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:39:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:39:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:39:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:39:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:39:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:39:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:39:23 np0005473739 nova_compute[259550]: 2025-10-07 14:39:23.994 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2309: 305 pgs: 305 active+clean; 119 MiB data, 860 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.1 MiB/s wr, 118 op/s
Oct  7 10:39:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2310: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  7 10:39:27 np0005473739 nova_compute[259550]: 2025-10-07 14:39:27.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:27 np0005473739 nova_compute[259550]: 2025-10-07 14:39:27.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:28 np0005473739 nova_compute[259550]: 2025-10-07 14:39:28.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2311: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  7 10:39:29 np0005473739 nova_compute[259550]: 2025-10-07 14:39:29.372 2 INFO nova.compute.manager [None req-954a3246-54c1-4410-91ce-fddd675c832a 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Get console output#033[00m
Oct  7 10:39:29 np0005473739 nova_compute[259550]: 2025-10-07 14:39:29.377 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:39:29 np0005473739 nova_compute[259550]: 2025-10-07 14:39:29.658 2 DEBUG nova.objects.instance [None req-462ecbaf-4ade-404b-8366-ff48d46e7ea0 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:29 np0005473739 nova_compute[259550]: 2025-10-07 14:39:29.696 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847969.6958537, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:29 np0005473739 nova_compute[259550]: 2025-10-07 14:39:29.696 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:39:29 np0005473739 nova_compute[259550]: 2025-10-07 14:39:29.724 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:29 np0005473739 nova_compute[259550]: 2025-10-07 14:39:29.729 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:39:29 np0005473739 nova_compute[259550]: 2025-10-07 14:39:29.754 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] During sync_power_state the instance has a pending task (suspending). Skip.#033[00m
Oct  7 10:39:30 np0005473739 kernel: tap660a78e9-3d (unregistering): left promiscuous mode
Oct  7 10:39:30 np0005473739 NetworkManager[44949]: <info>  [1759847970.4000] device (tap660a78e9-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:39:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2312: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:39:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:30Z|01289|binding|INFO|Releasing lport 660a78e9-3d3f-4949-88f4-3cad47b74229 from this chassis (sb_readonly=0)
Oct  7 10:39:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:30Z|01290|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 down in Southbound
Oct  7 10:39:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:30Z|01291|binding|INFO|Removing iface tap660a78e9-3d ovn-installed in OVS
Oct  7 10:39:30 np0005473739 nova_compute[259550]: 2025-10-07 14:39:30.413 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.427 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:2c:15 10.100.0.7'], port_security=['fa:16:3e:b2:2c:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '4', 'neutron:security_group_ids': '0d4d4d28-c7c9-4239-b81d-776c0fee085d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df54412f-162f-42ea-b08c-2dcfe769ac96, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=660a78e9-3d3f-4949-88f4-3cad47b74229) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.428 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 660a78e9-3d3f-4949-88f4-3cad47b74229 in datapath d58f5a01-ad0a-4168-95c9-fc8189cce054 unbound from our chassis#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.429 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d58f5a01-ad0a-4168-95c9-fc8189cce054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.430 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[040411a7-484f-4d1f-8a9c-ea861b651ba4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.430 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 namespace which is not needed anymore#033[00m
Oct  7 10:39:30 np0005473739 nova_compute[259550]: 2025-10-07 14:39:30.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:30 np0005473739 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct  7 10:39:30 np0005473739 systemd[1]: machine-qemu\x2d150\x2dinstance\x2d00000078.scope: Consumed 13.793s CPU time.
Oct  7 10:39:30 np0005473739 systemd-machined[214580]: Machine qemu-150-instance-00000078 terminated.
Oct  7 10:39:30 np0005473739 nova_compute[259550]: 2025-10-07 14:39:30.568 2 DEBUG nova.compute.manager [None req-462ecbaf-4ade-404b-8366-ff48d46e7ea0 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:30 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [NOTICE]   (389086) : haproxy version is 2.8.14-c23fe91
Oct  7 10:39:30 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [NOTICE]   (389086) : path to executable is /usr/sbin/haproxy
Oct  7 10:39:30 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [WARNING]  (389086) : Exiting Master process...
Oct  7 10:39:30 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [WARNING]  (389086) : Exiting Master process...
Oct  7 10:39:30 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [ALERT]    (389086) : Current worker (389088) exited with code 143 (Terminated)
Oct  7 10:39:30 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[389081]: [WARNING]  (389086) : All workers exited. Exiting... (0)
Oct  7 10:39:30 np0005473739 systemd[1]: libpod-4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f.scope: Deactivated successfully.
Oct  7 10:39:30 np0005473739 podman[390128]: 2025-10-07 14:39:30.600756638 +0000 UTC m=+0.056865561 container died 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:39:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f-userdata-shm.mount: Deactivated successfully.
Oct  7 10:39:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-01b59d4dc6edabb37a3c0b5853d080b83aedff571bda483335c1c14eb6b156ce-merged.mount: Deactivated successfully.
Oct  7 10:39:30 np0005473739 podman[390128]: 2025-10-07 14:39:30.715146804 +0000 UTC m=+0.171255727 container cleanup 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:39:30 np0005473739 systemd[1]: libpod-conmon-4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f.scope: Deactivated successfully.
Oct  7 10:39:30 np0005473739 podman[390168]: 2025-10-07 14:39:30.780909861 +0000 UTC m=+0.043313139 container remove 4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.787 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf04536-c899-4ea9-9e47-ef5a753517ec]: (4, ('Tue Oct  7 02:39:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 (4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f)\n4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f\nTue Oct  7 02:39:30 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 (4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f)\n4ccefd75d321621eaa0a5b2abe220ef44f78582c43a4e999b9b47b6236572f8f\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.788 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f84dfae-9cc5-4e85-93ef-15cd21bc5c2b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.789 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd58f5a01-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:30 np0005473739 nova_compute[259550]: 2025-10-07 14:39:30.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:30 np0005473739 kernel: tapd58f5a01-a0: left promiscuous mode
Oct  7 10:39:30 np0005473739 nova_compute[259550]: 2025-10-07 14:39:30.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.813 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fc709d7c-8569-4354-8452-f59ad9b0e5a8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.860 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a6cbf562-8a19-4cc3-a48b-eec6ae653311]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.862 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dac76122-8d1d-405b-bd00-90b37c67bbe4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.876 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3621bd85-f3d3-4b34-9d1f-98412963574e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 853107, 'reachable_time': 16460, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390187, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:30 np0005473739 systemd[1]: run-netns-ovnmeta\x2dd58f5a01\x2dad0a\x2d4168\x2d95c9\x2dfc8189cce054.mount: Deactivated successfully.
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.879 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:39:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:30.879 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f1fad9e4-d758-44f2-bad4-f7f4eb3550b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:31 np0005473739 nova_compute[259550]: 2025-10-07 14:39:31.068 2 DEBUG nova.compute.manager [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:31 np0005473739 nova_compute[259550]: 2025-10-07 14:39:31.068 2 DEBUG oslo_concurrency.lockutils [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:31 np0005473739 nova_compute[259550]: 2025-10-07 14:39:31.069 2 DEBUG oslo_concurrency.lockutils [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:31 np0005473739 nova_compute[259550]: 2025-10-07 14:39:31.069 2 DEBUG oslo_concurrency.lockutils [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:31 np0005473739 nova_compute[259550]: 2025-10-07 14:39:31.069 2 DEBUG nova.compute.manager [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:31 np0005473739 nova_compute[259550]: 2025-10-07 14:39:31.069 2 WARNING nova.compute.manager [req-a9f320a9-189a-4331-b37f-4dae906650ae req-8f1a4ac8-145a-4a54-ad2c-6978a5ef9ce9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state suspended and task_state None.#033[00m
Oct  7 10:39:32 np0005473739 podman[390188]: 2025-10-07 14:39:32.079029097 +0000 UTC m=+0.067197247 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:39:32 np0005473739 podman[390189]: 2025-10-07 14:39:32.099061613 +0000 UTC m=+0.084377616 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2313: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007587643257146578 of space, bias 1.0, pg target 0.22762929771439736 quantized to 32 (current 32)
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:39:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:39:32 np0005473739 nova_compute[259550]: 2025-10-07 14:39:32.709 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:39:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3128308685' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:39:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:39:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3128308685' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:39:32 np0005473739 nova_compute[259550]: 2025-10-07 14:39:32.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:32 np0005473739 nova_compute[259550]: 2025-10-07 14:39:32.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.002 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.002 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.021 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:39:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.115 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.116 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.124 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.124 2 INFO nova.compute.claims [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.140 2 DEBUG nova.compute.manager [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.140 2 DEBUG oslo_concurrency.lockutils [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.141 2 DEBUG oslo_concurrency.lockutils [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.141 2 DEBUG oslo_concurrency.lockutils [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.141 2 DEBUG nova.compute.manager [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.141 2 WARNING nova.compute.manager [req-b6bd0cad-761c-4c51-b1e3-b2896696921c req-cb810b17-2b8c-448f-bbfd-b49188b36cb9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state suspended and task_state None.#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.249 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:39:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1275762732' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.728 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.736 2 DEBUG nova.compute.provider_tree [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.795 2 DEBUG nova.scheduler.client.report [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.830 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.830 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.898 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.899 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.916 2 INFO nova.compute.manager [None req-723895d9-aead-47cf-8e1b-c3dd486a5a47 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Get console output#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.944 2 INFO nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:39:33 np0005473739 nova_compute[259550]: 2025-10-07 14:39:33.976 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.058 2 DEBUG nova.policy [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.072 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.074 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.075 2 INFO nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Creating image(s)#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.101 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.123 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.146 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.150 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.212 2 INFO nova.compute.manager [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Resuming#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.213 2 DEBUG nova.objects.instance [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'flavor' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.226 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.227 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.227 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.228 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.253 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.258 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d0b10640-5492-4d8f-8b94-a49a15b6e702_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.313 2 DEBUG oslo_concurrency.lockutils [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.314 2 DEBUG oslo_concurrency.lockutils [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquired lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.314 2 DEBUG nova.network.neutron [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:39:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2314: 305 pgs: 305 active+clean; 121 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 1.5 MiB/s wr, 60 op/s
Oct  7 10:39:34 np0005473739 nova_compute[259550]: 2025-10-07 14:39:34.963 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Successfully created port: b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:39:35 np0005473739 nova_compute[259550]: 2025-10-07 14:39:35.949 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Successfully updated port: b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:39:35 np0005473739 nova_compute[259550]: 2025-10-07 14:39:35.968 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:35 np0005473739 nova_compute[259550]: 2025-10-07 14:39:35.969 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:39:35 np0005473739 nova_compute[259550]: 2025-10-07 14:39:35.969 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.011 2 DEBUG nova.network.neutron [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.033 2 DEBUG nova.compute.manager [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.034 2 DEBUG nova.compute.manager [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing instance network info cache due to event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.034 2 DEBUG oslo_concurrency.lockutils [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.043 2 DEBUG oslo_concurrency.lockutils [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Releasing lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.049 2 DEBUG nova.virt.libvirt.vif [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-335561064',display_name='tempest-TestNetworkAdvancedServerOps-server-335561064',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-335561064',id=120,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKd+JvDcSqw5OJ03cvElecW4OwXxUWcOza7ZaEaEf+FC3qoJlFBHob9pK59ESmk17iW8KZuFczVLCS9KQd4bG4fRY/MC1LsIJiiL5MRoGeETGGZgzRCRibwIbIrlPnNS2Q==',key_name='tempest-TestNetworkAdvancedServerOps-452416595',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-kg93fcgl',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:30Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=9996c6b8-7d50-42b8-9617-2a1ae7d36d30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.049 2 DEBUG nova.network.os_vif_util [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.050 2 DEBUG nova.network.os_vif_util [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.050 2 DEBUG os_vif [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.051 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.052 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap660a78e9-3d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.055 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap660a78e9-3d, col_values=(('external_ids', {'iface-id': '660a78e9-3d3f-4949-88f4-3cad47b74229', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b2:2c:15', 'vm-uuid': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.056 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.056 2 INFO os_vif [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d')#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.133 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.376 2 DEBUG nova.objects.instance [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'numa_topology' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2315: 305 pgs: 305 active+clean; 121 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 17 KiB/s wr, 12 op/s
Oct  7 10:39:36 np0005473739 NetworkManager[44949]: <info>  [1759847976.4606] manager: (tap660a78e9-3d): new Tun device (/org/freedesktop/NetworkManager/Devices/520)
Oct  7 10:39:36 np0005473739 kernel: tap660a78e9-3d: entered promiscuous mode
Oct  7 10:39:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:36Z|01292|binding|INFO|Claiming lport 660a78e9-3d3f-4949-88f4-3cad47b74229 for this chassis.
Oct  7 10:39:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:36Z|01293|binding|INFO|660a78e9-3d3f-4949-88f4-3cad47b74229: Claiming fa:16:3e:b2:2c:15 10.100.0.7
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.480 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:2c:15 10.100.0.7'], port_security=['fa:16:3e:b2:2c:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '5', 'neutron:security_group_ids': '0d4d4d28-c7c9-4239-b81d-776c0fee085d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df54412f-162f-42ea-b08c-2dcfe769ac96, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=660a78e9-3d3f-4949-88f4-3cad47b74229) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.481 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 660a78e9-3d3f-4949-88f4-3cad47b74229 in datapath d58f5a01-ad0a-4168-95c9-fc8189cce054 bound to our chassis#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.482 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d58f5a01-ad0a-4168-95c9-fc8189cce054#033[00m
Oct  7 10:39:36 np0005473739 systemd-udevd[390359]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:39:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:36Z|01294|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 ovn-installed in OVS
Oct  7 10:39:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:36Z|01295|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 up in Southbound
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.494 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[67e69aa0-2238-444b-90df-12c441eb5507]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.495 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd58f5a01-a1 in ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.496 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.497 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd58f5a01-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.497 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[36498ff0-5416-4bce-a918-662d6fb972a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 NetworkManager[44949]: <info>  [1759847976.4986] device (tap660a78e9-3d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:39:36 np0005473739 NetworkManager[44949]: <info>  [1759847976.4996] device (tap660a78e9-3d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.499 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d83c422f-f7e4-4973-94f1-02ddca082680]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 systemd-machined[214580]: New machine qemu-151-instance-00000078.
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.511 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[25539b9f-bde6-4955-aa41-5f5a245f80dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 systemd[1]: Started Virtual Machine qemu-151-instance-00000078.
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.536 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ffd02652-f0ea-4488-9a15-b4a28eb91f3d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.571 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8a61d55f-46cf-4534-a097-26a7401eb544]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.579 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ac64f27c-751e-4aec-90a7-3ffb3dccf24e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 NetworkManager[44949]: <info>  [1759847976.5823] manager: (tapd58f5a01-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/521)
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.619 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[598f20db-8d58-4579-ad89-6a698d1e1798]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.624 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[74a0db51-e036-4715-bbec-40f10f5a6d37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 NetworkManager[44949]: <info>  [1759847976.6497] device (tapd58f5a01-a0): carrier: link connected
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.658 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4a56341a-e0c7-42c6-a396-aeb6e1d9c2b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.682 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d9683c70-7560-48c7-8f78-2050c1cc196d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd58f5a01-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:0d:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856021, 'reachable_time': 34211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390393, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.727 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[240f7898-0575-4b2e-8d70-054d46bdeb6c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feef:dbf'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856021, 'tstamp': 856021}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390394, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.751 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[02cbc6d6-000f-4ed7-9b8f-fd6a8c101a7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd58f5a01-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ef:0d:bf'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 372], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856021, 'reachable_time': 34211, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390395, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.791 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e2341460-8c34-4318-af8c-f7fe285c184d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.862 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0ee5c566-95d7-479d-b855-0357cd156f2a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.863 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd58f5a01-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.863 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.864 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd58f5a01-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:36 np0005473739 kernel: tapd58f5a01-a0: entered promiscuous mode
Oct  7 10:39:36 np0005473739 NetworkManager[44949]: <info>  [1759847976.8678] manager: (tapd58f5a01-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/522)
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.871 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd58f5a01-a0, col_values=(('external_ids', {'iface-id': 'd4b37819-62c7-42bc-af8c-24aff0a13de9'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:36Z|01296|binding|INFO|Releasing lport d4b37819-62c7-42bc-af8c-24aff0a13de9 from this chassis (sb_readonly=0)
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.876 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.877 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[200981b6-c982-44cf-b4f9-8667a43eae21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.877 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-d58f5a01-ad0a-4168-95c9-fc8189cce054
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/d58f5a01-ad0a-4168-95c9-fc8189cce054.pid.haproxy
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID d58f5a01-ad0a-4168-95c9-fc8189cce054
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:39:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:36.878 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'env', 'PROCESS_TAG=haproxy-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d58f5a01-ad0a-4168-95c9-fc8189cce054.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:36 np0005473739 nova_compute[259550]: 2025-10-07 14:39:36.952 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:37 np0005473739 nova_compute[259550]: 2025-10-07 14:39:37.265 2 DEBUG nova.network.neutron [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:37 np0005473739 nova_compute[259550]: 2025-10-07 14:39:37.287 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:37 np0005473739 nova_compute[259550]: 2025-10-07 14:39:37.288 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Instance network_info: |[{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:39:37 np0005473739 nova_compute[259550]: 2025-10-07 14:39:37.289 2 DEBUG oslo_concurrency.lockutils [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:39:37 np0005473739 nova_compute[259550]: 2025-10-07 14:39:37.289 2 DEBUG nova.network.neutron [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:39:37 np0005473739 podman[390434]: 2025-10-07 14:39:37.221043095 +0000 UTC m=+0.026305964 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:39:37 np0005473739 nova_compute[259550]: 2025-10-07 14:39:37.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:37 np0005473739 nova_compute[259550]: 2025-10-07 14:39:37.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.135 2 DEBUG nova.compute.manager [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.136 2 DEBUG oslo_concurrency.lockutils [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.136 2 DEBUG oslo_concurrency.lockutils [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.137 2 DEBUG oslo_concurrency.lockutils [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.137 2 DEBUG nova.compute.manager [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.137 2 WARNING nova.compute.manager [req-b91264df-7c5a-4c61-8099-6beded35caff req-c3553de3-0fde-486c-9c11-7c5b8d95b74f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state suspended and task_state resuming.#033[00m
Oct  7 10:39:38 np0005473739 podman[390434]: 2025-10-07 14:39:38.226993874 +0000 UTC m=+1.032256713 container create 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.317 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 d0b10640-5492-4d8f-8b94-a49a15b6e702_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 4.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:38 np0005473739 systemd[1]: Started libpod-conmon-8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b.scope.
Oct  7 10:39:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:39:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dfe4b90e1f92652e9319c7c056a0787abaa2c6f38b64b25f838b359a30878577/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:38.387 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=39, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=38) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.394 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:39:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2316: 305 pgs: 305 active+clean; 121 MiB data, 864 MiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 14 KiB/s wr, 11 op/s
Oct  7 10:39:38 np0005473739 podman[390434]: 2025-10-07 14:39:38.427560244 +0000 UTC m=+1.232823113 container init 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001)
Oct  7 10:39:38 np0005473739 podman[390434]: 2025-10-07 14:39:38.433107782 +0000 UTC m=+1.238370621 container start 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:39:38 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [NOTICE]   (390542) : New worker (390544) forked
Oct  7 10:39:38 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [NOTICE]   (390542) : Loading success.
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.490 2 DEBUG nova.compute.manager [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.491 2 DEBUG nova.objects.instance [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.493 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.494 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847978.4487634, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.498 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Started (Lifecycle Event)#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.526 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.527 2 INFO nova.virt.libvirt.driver [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance running successfully.#033[00m
Oct  7 10:39:38 np0005473739 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.531 2 DEBUG nova.virt.libvirt.guest [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.531 2 DEBUG nova.compute.manager [None req-e828ce2d-2ce9-4628-9bbd-7ee081c89c04 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.532 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:39:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:38.557 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.580 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.581 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847978.452041, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.582 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.632 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.637 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.773 2 DEBUG nova.objects.instance [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.790 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.790 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Ensure instance console log exists: /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.791 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.791 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.791 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.793 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Start _get_guest_xml network_info=[{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.815 2 WARNING nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.820 2 DEBUG nova.virt.libvirt.host [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.821 2 DEBUG nova.virt.libvirt.host [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.827 2 DEBUG nova.virt.libvirt.host [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.828 2 DEBUG nova.virt.libvirt.host [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.829 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.829 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.829 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.830 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.830 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.830 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.830 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.830 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.831 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.831 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.831 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.831 2 DEBUG nova.virt.hardware [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.834 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.995 2 DEBUG nova.network.neutron [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated VIF entry in instance network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:39:38 np0005473739 nova_compute[259550]: 2025-10-07 14:39:38.996 2 DEBUG nova.network.neutron [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.013 2 DEBUG oslo_concurrency.lockutils [req-f3b25241-4ff8-491d-9729-899b3fc3a09a req-99adfd41-d668-44be-aec0-6e06527cc51c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:39.042 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6e:a0:12 2001:db8:0:1:f816:3eff:fe6e:a012 2001:db8::f816:3eff:fe6e:a012'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe6e:a012/64 2001:db8::f816:3eff:fe6e:a012/64', 'neutron:device_id': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133edfbf-d913-46a7-a148-1a6e26213678, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=763708cd-58bb-4680-a4f7-042aa711a366) old=Port_Binding(mac=['fa:16:3e:6e:a0:12 2001:db8::f816:3eff:fe6e:a012'], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe6e:a012/64', 'neutron:device_id': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 
'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:39:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:39.044 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port 763708cd-58bb-4680-a4f7-042aa711a366 in datapath abe90ba0-a518-4cef-a49b-de57485faec5 updated#033[00m
Oct  7 10:39:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:39.045 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abe90ba0-a518-4cef-a49b-de57485faec5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:39:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:39.046 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[791503b8-003b-4fb2-b22c-424b7db7ae26]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:39:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4047354148' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.323 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.348 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.360 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.479 2 INFO nova.compute.manager [None req-3e94abce-e60d-4417-902b-0379256134ea 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Get console output#033[00m
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.486 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:39:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:39:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1598505080' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.878 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.880 2 DEBUG nova.virt.libvirt.vif [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:34Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.880 2 DEBUG nova.network.os_vif_util [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.881 2 DEBUG nova.network.os_vif_util [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.882 2 DEBUG nova.objects.instance [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.901 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  <uuid>d0b10640-5492-4d8f-8b94-a49a15b6e702</uuid>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  <name>instance-00000079</name>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-666494646</nova:name>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:39:38</nova:creationTime>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <nova:port uuid="b7bf5de8-3ba0-43cd-a839-d8812cbe4276">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <entry name="serial">d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <entry name="uuid">d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:6d:ee:73"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <target dev="tapb7bf5de8-3b"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log" append="off"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:39:39 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:39:39 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:39:39 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:39:39 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.902 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Preparing to wait for external event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.903 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.903 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.903 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.904 2 DEBUG nova.virt.libvirt.vif [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:34Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.904 2 DEBUG nova.network.os_vif_util [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.905 2 DEBUG nova.network.os_vif_util [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.905 2 DEBUG os_vif [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.906 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7bf5de8-3b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.909 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb7bf5de8-3b, col_values=(('external_ids', {'iface-id': 'b7bf5de8-3ba0-43cd-a839-d8812cbe4276', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:6d:ee:73', 'vm-uuid': 'd0b10640-5492-4d8f-8b94-a49a15b6e702'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.911 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:39:39 np0005473739 NetworkManager[44949]: <info>  [1759847979.9132] manager: (tapb7bf5de8-3b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/523)
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.919 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:39:39 np0005473739 nova_compute[259550]: 2025-10-07 14:39:39.920 2 INFO os_vif [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b')
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.011 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.012 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.013 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:6d:ee:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.013 2 INFO nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Using config drive
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.031 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.232 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.235 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.236 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.236 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.236 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.238 2 INFO nova.compute.manager [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Terminating instance
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.239 2 DEBUG nova.compute.manager [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.317 2 DEBUG nova.compute.manager [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.318 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.318 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.319 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.319 2 DEBUG nova.compute.manager [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.319 2 WARNING nova.compute.manager [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state active and task_state deleting.
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.320 2 DEBUG nova.compute.manager [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.320 2 DEBUG nova.compute.manager [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing instance network info cache due to event network-changed-660a78e9-3d3f-4949-88f4-3cad47b74229. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.320 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.320 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.320 2 DEBUG nova.network.neutron [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Refreshing network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:39:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2317: 305 pgs: 305 active+clean; 158 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.7 MiB/s wr, 30 op/s
Oct  7 10:39:40 np0005473739 kernel: tap660a78e9-3d (unregistering): left promiscuous mode
Oct  7 10:39:40 np0005473739 NetworkManager[44949]: <info>  [1759847980.4367] device (tap660a78e9-3d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:40Z|01297|binding|INFO|Releasing lport 660a78e9-3d3f-4949-88f4-3cad47b74229 from this chassis (sb_readonly=0)
Oct  7 10:39:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:40Z|01298|binding|INFO|Setting lport 660a78e9-3d3f-4949-88f4-3cad47b74229 down in Southbound
Oct  7 10:39:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:40Z|01299|binding|INFO|Removing iface tap660a78e9-3d ovn-installed in OVS
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.460 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b2:2c:15 10.100.0.7'], port_security=['fa:16:3e:b2:2c:15 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9996c6b8-7d50-42b8-9617-2a1ae7d36d30', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '74c80c1e3c7c4a0dbf1c602d301618a7', 'neutron:revision_number': '6', 'neutron:security_group_ids': '0d4d4d28-c7c9-4239-b81d-776c0fee085d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=df54412f-162f-42ea-b08c-2dcfe769ac96, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=660a78e9-3d3f-4949-88f4-3cad47b74229) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.461 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 660a78e9-3d3f-4949-88f4-3cad47b74229 in datapath d58f5a01-ad0a-4168-95c9-fc8189cce054 unbound from our chassis#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.462 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d58f5a01-ad0a-4168-95c9-fc8189cce054, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.463 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ed2cd4e-dc4d-4b4c-aa36-e56e46229b7d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.463 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 namespace which is not needed anymore#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:40 np0005473739 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Deactivated successfully.
Oct  7 10:39:40 np0005473739 systemd[1]: machine-qemu\x2d151\x2dinstance\x2d00000078.scope: Consumed 1.305s CPU time.
Oct  7 10:39:40 np0005473739 systemd-machined[214580]: Machine qemu-151-instance-00000078 terminated.
Oct  7 10:39:40 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [NOTICE]   (390542) : haproxy version is 2.8.14-c23fe91
Oct  7 10:39:40 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [NOTICE]   (390542) : path to executable is /usr/sbin/haproxy
Oct  7 10:39:40 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [WARNING]  (390542) : Exiting Master process...
Oct  7 10:39:40 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [ALERT]    (390542) : Current worker (390544) exited with code 143 (Terminated)
Oct  7 10:39:40 np0005473739 neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054[390491]: [WARNING]  (390542) : All workers exited. Exiting... (0)
Oct  7 10:39:40 np0005473739 systemd[1]: libpod-8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b.scope: Deactivated successfully.
Oct  7 10:39:40 np0005473739 conmon[390491]: conmon 8c28b3fe46c43f2fb200 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b.scope/container/memory.events
Oct  7 10:39:40 np0005473739 podman[390680]: 2025-10-07 14:39:40.608396086 +0000 UTC m=+0.065643205 container died 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.680 2 INFO nova.virt.libvirt.driver [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Instance destroyed successfully.#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.681 2 DEBUG nova.objects.instance [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lazy-loading 'resources' on Instance uuid 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.698 2 DEBUG nova.virt.libvirt.vif [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:38:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkAdvancedServerOps-server-335561064',display_name='tempest-TestNetworkAdvancedServerOps-server-335561064',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkadvancedserverops-server-335561064',id=120,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKd+JvDcSqw5OJ03cvElecW4OwXxUWcOza7ZaEaEf+FC3qoJlFBHob9pK59ESmk17iW8KZuFczVLCS9KQd4bG4fRY/MC1LsIJiiL5MRoGeETGGZgzRCRibwIbIrlPnNS2Q==',key_name='tempest-TestNetworkAdvancedServerOps-452416595',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='74c80c1e3c7c4a0dbf1c602d301618a7',ramdisk_id='',reservation_id='r-kg93fcgl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkAdvancedServerOps-316338420',owner_user_name='tempest-TestNetworkAdvancedServerOps-316338420-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:38Z,user_data=None,user_id='5c505d04148e44b8b93ceab0e3cedef4',uuid=9996c6b8-7d50-42b8-9617-2a1ae7d36d30,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.699 2 DEBUG nova.network.os_vif_util [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converting VIF {"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.699 2 DEBUG nova.network.os_vif_util [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.700 2 DEBUG os_vif [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:39:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b-userdata-shm.mount: Deactivated successfully.
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.705 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap660a78e9-3d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-dfe4b90e1f92652e9319c7c056a0787abaa2c6f38b64b25f838b359a30878577-merged.mount: Deactivated successfully.
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.710 2 INFO nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Creating config drive at /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.716 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp16ro6gwy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:40 np0005473739 podman[390680]: 2025-10-07 14:39:40.723231084 +0000 UTC m=+0.180478203 container cleanup 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:39:40 np0005473739 systemd[1]: libpod-conmon-8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b.scope: Deactivated successfully.
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.765 2 INFO os_vif [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b2:2c:15,bridge_name='br-int',has_traffic_filtering=True,id=660a78e9-3d3f-4949-88f4-3cad47b74229,network=Network(d58f5a01-ad0a-4168-95c9-fc8189cce054),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap660a78e9-3d')#033[00m
Oct  7 10:39:40 np0005473739 podman[390726]: 2025-10-07 14:39:40.786668019 +0000 UTC m=+0.042271130 container remove 8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.792 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f57c1ef-2aa9-46bb-920e-a226dbb1dd67]: (4, ('Tue Oct  7 02:39:40 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 (8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b)\n8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b\nTue Oct  7 02:39:40 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 (8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b)\n8c28b3fe46c43f2fb200c231da1238c4a3a69f97d94da80031e0a0c402ea506b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.794 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[863f9766-9ffb-4765-94a1-e57e04db7c75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.794 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd58f5a01-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:40 np0005473739 kernel: tapd58f5a01-a0: left promiscuous mode
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.818 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fd233dc-9da3-49a4-a26e-a11adccd8b1f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.845 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fb350ac3-3ac4-430f-ba29-450193ca3860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.847 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d498baef-fb08-4ffe-af60-53d713a5b510]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.861 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp16ro6gwy" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.866 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[252cd932-8f22-4593-9033-c46e9842c953]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856012, 'reachable_time': 26775, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390765, 'error': None, 'target': 'ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:40 np0005473739 systemd[1]: run-netns-ovnmeta\x2dd58f5a01\x2dad0a\x2d4168\x2d95c9\x2dfc8189cce054.mount: Deactivated successfully.
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.871 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d58f5a01-ad0a-4168-95c9-fc8189cce054 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:39:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:40.871 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3c554c3f-358d-4a60-b5e5-f747f17565ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.888 2 DEBUG nova.storage.rbd_utils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:40 np0005473739 nova_compute[259550]: 2025-10-07 14:39:40.891 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.157 2 DEBUG oslo_concurrency.processutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.265s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.158 2 INFO nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Deleting local config drive /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/disk.config because it was imported into RBD.#033[00m
Oct  7 10:39:41 np0005473739 kernel: tapb7bf5de8-3b: entered promiscuous mode
Oct  7 10:39:41 np0005473739 systemd-udevd[390661]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:39:41 np0005473739 NetworkManager[44949]: <info>  [1759847981.2338] manager: (tapb7bf5de8-3b): new Tun device (/org/freedesktop/NetworkManager/Devices/524)
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.234 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:41Z|01300|binding|INFO|Claiming lport b7bf5de8-3ba0-43cd-a839-d8812cbe4276 for this chassis.
Oct  7 10:39:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:41Z|01301|binding|INFO|b7bf5de8-3ba0-43cd-a839-d8812cbe4276: Claiming fa:16:3e:6d:ee:73 10.100.0.7
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.245 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:ee:73 10.100.0.7'], port_security=['fa:16:3e:6d:ee:73 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd0b10640-5492-4d8f-8b94-a49a15b6e702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '8da4ff39-44c8-491e-a635-6be0569feae9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1c4ac2-8721-482c-b415-fdc398b5953a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b7bf5de8-3ba0-43cd-a839-d8812cbe4276) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.246 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b7bf5de8-3ba0-43cd-a839-d8812cbe4276 in datapath 580b59e0-70f8-44c3-a35f-9c4f88691f96 bound to our chassis#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.248 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 580b59e0-70f8-44c3-a35f-9c4f88691f96#033[00m
Oct  7 10:39:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:41Z|01302|binding|INFO|Setting lport b7bf5de8-3ba0-43cd-a839-d8812cbe4276 ovn-installed in OVS
Oct  7 10:39:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:41Z|01303|binding|INFO|Setting lport b7bf5de8-3ba0-43cd-a839-d8812cbe4276 up in Southbound
Oct  7 10:39:41 np0005473739 NetworkManager[44949]: <info>  [1759847981.2556] device (tapb7bf5de8-3b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:39:41 np0005473739 NetworkManager[44949]: <info>  [1759847981.2568] device (tapb7bf5de8-3b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.264 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4b91eddd-f0c6-467f-9d55-099a5911c3c2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.265 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap580b59e0-71 in ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.267 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap580b59e0-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.267 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c0819d43-6232-4e9a-b10e-612f699a3f5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[849029c1-f830-4d9a-975d-0efa7810f07b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 systemd-machined[214580]: New machine qemu-152-instance-00000079.
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.289 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[fd2fbff2-b8e2-4599-b134-9a611b4f9345]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 systemd[1]: Started Virtual Machine qemu-152-instance-00000079.
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.306 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a7b9994-2adc-4f95-a1f0-cef9cf96c728]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.335 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[15de7b4a-ba76-4e24-850a-8b79f849090a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.342 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c6d6ca82-9936-424e-a301-16868858d3af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 NetworkManager[44949]: <info>  [1759847981.3468] manager: (tap580b59e0-70): new Veth device (/org/freedesktop/NetworkManager/Devices/525)
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.383 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dcbf89f9-adb0-4da7-9be1-de47dc408c09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.386 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6b488e5b-9f02-47f1-a1a9-afcf8b49b6fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.410 2 INFO nova.virt.libvirt.driver [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Deleting instance files /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30_del#033[00m
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.410 2 INFO nova.virt.libvirt.driver [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Deletion of /var/lib/nova/instances/9996c6b8-7d50-42b8-9617-2a1ae7d36d30_del complete#033[00m
Oct  7 10:39:41 np0005473739 NetworkManager[44949]: <info>  [1759847981.4134] device (tap580b59e0-70): carrier: link connected
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.417 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4dcce059-eeec-4faf-875d-fd978fa92138]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.434 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6df63b47-940c-4fb3-ad39-a5792b03ef4d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580b59e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:9a:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856497, 'reachable_time': 26637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 390848, 'error': None, 'target': 'ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.451 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[646d5704-4425-4eec-826b-0dcbc23253e6]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:9a05'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 856497, 'tstamp': 856497}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 390849, 'error': None, 'target': 'ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.469 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41c2be65-b51d-406a-9e5b-802441cef51d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap580b59e0-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:9a:05'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 375], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856497, 'reachable_time': 26637, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 390850, 'error': None, 'target': 'ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.479 2 INFO nova.compute.manager [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Took 1.24 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.479 2 DEBUG oslo.service.loopingcall [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.480 2 DEBUG nova.compute.manager [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.480 2 DEBUG nova.network.neutron [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.504 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f70170a-52e1-46ce-a804-6c444a1ea16b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.567 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f5a1626e-abd8-4a35-9e4d-b73c5caeb30d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.569 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580b59e0-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.569 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.570 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap580b59e0-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:41 np0005473739 NetworkManager[44949]: <info>  [1759847981.5727] manager: (tap580b59e0-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/526)
Oct  7 10:39:41 np0005473739 kernel: tap580b59e0-70: entered promiscuous mode
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.576 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap580b59e0-70, col_values=(('external_ids', {'iface-id': '406dfedc-cd29-46b7-9b91-7b006ecd582c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:41Z|01304|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct  7 10:39:41 np0005473739 nova_compute[259550]: 2025-10-07 14:39:41.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.596 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/580b59e0-70f8-44c3-a35f-9c4f88691f96.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/580b59e0-70f8-44c3-a35f-9c4f88691f96.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.597 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bb2c9d25-dcea-46c0-85ba-de123b7a9a9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.598 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-580b59e0-70f8-44c3-a35f-9c4f88691f96
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/580b59e0-70f8-44c3-a35f-9c4f88691f96.pid.haproxy
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 580b59e0-70f8-44c3-a35f-9c4f88691f96
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:39:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:41.600 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'env', 'PROCESS_TAG=haproxy-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/580b59e0-70f8-44c3-a35f-9c4f88691f96.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:39:41 np0005473739 podman[390917]: 2025-10-07 14:39:41.985967906 +0000 UTC m=+0.045807496 container create 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:39:42 np0005473739 systemd[1]: Started libpod-conmon-150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352.scope.
Oct  7 10:39:42 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:39:42 np0005473739 podman[390917]: 2025-10-07 14:39:41.962757125 +0000 UTC m=+0.022596745 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:39:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ea80f1f94cd7157cc848e1780ed5de7036ed725f787faa3cc6a7f92f3662579a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:39:42 np0005473739 podman[390917]: 2025-10-07 14:39:42.087979331 +0000 UTC m=+0.147818931 container init 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 10:39:42 np0005473739 podman[390917]: 2025-10-07 14:39:42.093976831 +0000 UTC m=+0.153816431 container start 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:39:42 np0005473739 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [NOTICE]   (390942) : New worker (390944) forked
Oct  7 10:39:42 np0005473739 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [NOTICE]   (390942) : Loading success.
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.202 2 DEBUG nova.network.neutron [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updated VIF entry in instance network info cache for port 660a78e9-3d3f-4949-88f4-3cad47b74229. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.202 2 DEBUG nova.network.neutron [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [{"id": "660a78e9-3d3f-4949-88f4-3cad47b74229", "address": "fa:16:3e:b2:2c:15", "network": {"id": "d58f5a01-ad0a-4168-95c9-fc8189cce054", "bridge": "br-int", "label": "tempest-network-smoke--1063469934", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "74c80c1e3c7c4a0dbf1c602d301618a7", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap660a78e9-3d", "ovs_interfaceid": "660a78e9-3d3f-4949-88f4-3cad47b74229", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.225 2 DEBUG oslo_concurrency.lockutils [req-4626f88d-031e-424a-8f07-9c04e65c4757 req-21bd238f-1322-49bc-8048-4f6a60b4ac7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9996c6b8-7d50-42b8-9617-2a1ae7d36d30" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.284 2 DEBUG nova.network.neutron [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.305 2 INFO nova.compute.manager [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Took 0.82 seconds to deallocate network for instance.#033[00m
Oct  7 10:39:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2318: 305 pgs: 305 active+clean; 167 MiB data, 884 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.444 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847982.4439216, d0b10640-5492-4d8f-8b94-a49a15b6e702 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.444 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] VM Started (Lifecycle Event)#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.505 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.509 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847982.444058, d0b10640-5492-4d8f-8b94-a49a15b6e702 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.510 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.546 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.550 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.552 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.553 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.573 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.639 2 DEBUG oslo_concurrency.processutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.806 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.806 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.807 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.807 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.807 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.807 2 WARNING nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-unplugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.807 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.808 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.808 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.808 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.808 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] No waiting events found dispatching network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.808 2 WARNING nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received unexpected event network-vif-plugged-660a78e9-3d3f-4949-88f4-3cad47b74229 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.808 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Processing event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.809 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.810 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.810 2 DEBUG oslo_concurrency.lockutils [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.810 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.810 2 WARNING nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.810 2 DEBUG nova.compute.manager [req-b6f961a5-f22f-4c96-809f-18998fad2c8b req-ca207f77-608a-48b1-a50a-442a7ce0ca52 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Received event network-vif-deleted-660a78e9-3d3f-4949-88f4-3cad47b74229 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.811 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.815 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759847982.8145022, d0b10640-5492-4d8f-8b94-a49a15b6e702 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.815 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.817 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.826 2 INFO nova.virt.libvirt.driver [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Instance spawned successfully.#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.826 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.835 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.840 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.849 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.850 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.850 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.851 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.851 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.851 2 DEBUG nova.virt.libvirt.driver [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.908 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.942 2 INFO nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Took 8.87 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:39:42 np0005473739 nova_compute[259550]: 2025-10-07 14:39:42.942 2 DEBUG nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:43 np0005473739 nova_compute[259550]: 2025-10-07 14:39:43.018 2 INFO nova.compute.manager [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Took 9.93 seconds to build instance.#033[00m
Oct  7 10:39:43 np0005473739 nova_compute[259550]: 2025-10-07 14:39:43.046 2 DEBUG oslo_concurrency.lockutils [None req-bfacfb1e-bc8a-4f0a-b7e8-2c4125a94315 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.043s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:39:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207620396' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:39:43 np0005473739 nova_compute[259550]: 2025-10-07 14:39:43.115 2 DEBUG oslo_concurrency.processutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:43 np0005473739 nova_compute[259550]: 2025-10-07 14:39:43.122 2 DEBUG nova.compute.provider_tree [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:39:43 np0005473739 nova_compute[259550]: 2025-10-07 14:39:43.146 2 DEBUG nova.scheduler.client.report [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:39:43 np0005473739 nova_compute[259550]: 2025-10-07 14:39:43.174 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.621s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:43 np0005473739 nova_compute[259550]: 2025-10-07 14:39:43.209 2 INFO nova.scheduler.client.report [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Deleted allocations for instance 9996c6b8-7d50-42b8-9617-2a1ae7d36d30#033[00m
Oct  7 10:39:43 np0005473739 nova_compute[259550]: 2025-10-07 14:39:43.290 2 DEBUG oslo_concurrency.lockutils [None req-813fecde-e17c-4037-b512-2e9ce488ae3e 5c505d04148e44b8b93ceab0e3cedef4 74c80c1e3c7c4a0dbf1c602d301618a7 - - default default] Lock "9996c6b8-7d50-42b8-9617-2a1ae7d36d30" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2319: 305 pgs: 305 active+clean; 103 MiB data, 847 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.8 MiB/s wr, 104 op/s
Oct  7 10:39:44 np0005473739 nova_compute[259550]: 2025-10-07 14:39:44.712 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:44 np0005473739 nova_compute[259550]: 2025-10-07 14:39:44.712 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:44 np0005473739 nova_compute[259550]: 2025-10-07 14:39:44.731 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:39:44 np0005473739 nova_compute[259550]: 2025-10-07 14:39:44.788 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:44 np0005473739 nova_compute[259550]: 2025-10-07 14:39:44.789 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:44 np0005473739 nova_compute[259550]: 2025-10-07 14:39:44.796 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:39:44 np0005473739 nova_compute[259550]: 2025-10-07 14:39:44.796 2 INFO nova.compute.claims [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:39:44 np0005473739 nova_compute[259550]: 2025-10-07 14:39:44.927 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:45 np0005473739 podman[390976]: 2025-10-07 14:39:45.074164454 +0000 UTC m=+0.061106454 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  7 10:39:45 np0005473739 podman[390977]: 2025-10-07 14:39:45.115781735 +0000 UTC m=+0.102700195 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 10:39:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:39:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4228171555' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.397 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.404 2 DEBUG nova.compute.provider_tree [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.424 2 DEBUG nova.scheduler.client.report [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.444 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.655s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.444 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.506 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.506 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.562 2 INFO nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.645 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.684 2 DEBUG nova.policy [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.773 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.774 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.774 2 INFO nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Creating image(s)#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.797 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.825 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.852 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.856 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.935 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.936 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.937 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.937 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.961 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:39:45 np0005473739 nova_compute[259550]: 2025-10-07 14:39:45.965 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.334 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.369s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.399 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:39:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2320: 305 pgs: 305 active+clean; 88 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 120 op/s
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.513 2 DEBUG nova.objects.instance [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.535 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.535 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Ensure instance console log exists: /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.536 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.536 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.536 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.765 2 DEBUG nova.compute.manager [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.766 2 DEBUG nova.compute.manager [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing instance network info cache due to event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.766 2 DEBUG oslo_concurrency.lockutils [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.766 2 DEBUG oslo_concurrency.lockutils [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:39:46 np0005473739 nova_compute[259550]: 2025-10-07 14:39:46.766 2 DEBUG nova.network.neutron [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:39:47 np0005473739 nova_compute[259550]: 2025-10-07 14:39:47.826 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2321: 305 pgs: 305 active+clean; 88 MiB data, 837 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.8 MiB/s wr, 109 op/s
Oct  7 10:39:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:39:48.561 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '39'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:39:48 np0005473739 nova_compute[259550]: 2025-10-07 14:39:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:39:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2322: 305 pgs: 305 active+clean; 115 MiB data, 849 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.7 MiB/s wr, 137 op/s
Oct  7 10:39:50 np0005473739 nova_compute[259550]: 2025-10-07 14:39:50.576 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Successfully created port: ff77de20-1280-4c30-941d-c53ed7efcbe8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:39:50 np0005473739 nova_compute[259550]: 2025-10-07 14:39:50.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:50 np0005473739 nova_compute[259550]: 2025-10-07 14:39:50.771 2 DEBUG nova.network.neutron [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated VIF entry in instance network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:39:50 np0005473739 nova_compute[259550]: 2025-10-07 14:39:50.772 2 DEBUG nova.network.neutron [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:50Z|01305|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct  7 10:39:50 np0005473739 nova_compute[259550]: 2025-10-07 14:39:50.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:50 np0005473739 nova_compute[259550]: 2025-10-07 14:39:50.886 2 DEBUG oslo_concurrency.lockutils [req-4d2c267e-7566-4433-889f-872c9b0386de req-8e7f5549-f085-416a-8dae-a871cdecfd08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:51 np0005473739 nova_compute[259550]: 2025-10-07 14:39:51.777 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Successfully created port: a40ec757-407d-4375-b756-d4fb8f5664b4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:39:51 np0005473739 nova_compute[259550]: 2025-10-07 14:39:51.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:39:51 np0005473739 nova_compute[259550]: 2025-10-07 14:39:51.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:39:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2323: 305 pgs: 305 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.9 MiB/s wr, 130 op/s
Oct  7 10:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:39:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:39:52 np0005473739 nova_compute[259550]: 2025-10-07 14:39:52.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:53 np0005473739 nova_compute[259550]: 2025-10-07 14:39:53.476 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Successfully updated port: ff77de20-1280-4c30-941d-c53ed7efcbe8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:39:53 np0005473739 nova_compute[259550]: 2025-10-07 14:39:53.743 2 DEBUG nova.compute.manager [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:53 np0005473739 nova_compute[259550]: 2025-10-07 14:39:53.743 2 DEBUG nova.compute.manager [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing instance network info cache due to event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:39:53 np0005473739 nova_compute[259550]: 2025-10-07 14:39:53.744 2 DEBUG oslo_concurrency.lockutils [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:53 np0005473739 nova_compute[259550]: 2025-10-07 14:39:53.744 2 DEBUG oslo_concurrency.lockutils [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:39:53 np0005473739 nova_compute[259550]: 2025-10-07 14:39:53.744 2 DEBUG nova.network.neutron [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing network info cache for port ff77de20-1280-4c30-941d-c53ed7efcbe8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:39:53 np0005473739 nova_compute[259550]: 2025-10-07 14:39:53.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:39:53 np0005473739 nova_compute[259550]: 2025-10-07 14:39:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:39:54 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #48. Immutable memtables: 5.
Oct  7 10:39:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2324: 305 pgs: 305 active+clean; 134 MiB data, 859 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 128 op/s
Oct  7 10:39:54 np0005473739 nova_compute[259550]: 2025-10-07 14:39:54.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:39:54 np0005473739 nova_compute[259550]: 2025-10-07 14:39:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:39:55 np0005473739 nova_compute[259550]: 2025-10-07 14:39:55.109 2 DEBUG nova.network.neutron [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:39:55 np0005473739 nova_compute[259550]: 2025-10-07 14:39:55.678 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759847980.6769834, 9996c6b8-7d50-42b8-9617-2a1ae7d36d30 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:39:55 np0005473739 nova_compute[259550]: 2025-10-07 14:39:55.679 2 INFO nova.compute.manager [-] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:39:55 np0005473739 nova_compute[259550]: 2025-10-07 14:39:55.709 2 DEBUG nova.compute.manager [None req-096cba56-4f96-4827-baa2-dabd8615c138 - - - - - -] [instance: 9996c6b8-7d50-42b8-9617-2a1ae7d36d30] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:39:55 np0005473739 nova_compute[259550]: 2025-10-07 14:39:55.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:56 np0005473739 nova_compute[259550]: 2025-10-07 14:39:56.033 2 DEBUG nova.network.neutron [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:39:56 np0005473739 nova_compute[259550]: 2025-10-07 14:39:56.055 2 DEBUG oslo_concurrency.lockutils [req-92c90d1d-7c98-4180-9209-6c545e2e7b9b req-8c06700b-fdbb-4a7b-8f76-a27c7b085cdc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:39:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:56Z|00148|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:6d:ee:73 10.100.0.7
Oct  7 10:39:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:39:56Z|00149|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:6d:ee:73 10.100.0.7
Oct  7 10:39:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2325: 305 pgs: 305 active+clean; 145 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 709 KiB/s rd, 2.2 MiB/s wr, 86 op/s
Oct  7 10:39:57 np0005473739 nova_compute[259550]: 2025-10-07 14:39:57.007 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Successfully updated port: a40ec757-407d-4375-b756-d4fb8f5664b4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:39:57 np0005473739 nova_compute[259550]: 2025-10-07 14:39:57.148 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:57 np0005473739 nova_compute[259550]: 2025-10-07 14:39:57.148 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:39:57 np0005473739 nova_compute[259550]: 2025-10-07 14:39:57.148 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:39:57 np0005473739 nova_compute[259550]: 2025-10-07 14:39:57.184 2 DEBUG nova.compute.manager [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-changed-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:39:57 np0005473739 nova_compute[259550]: 2025-10-07 14:39:57.184 2 DEBUG nova.compute.manager [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing instance network info cache due to event network-changed-a40ec757-407d-4375-b756-d4fb8f5664b4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:39:57 np0005473739 nova_compute[259550]: 2025-10-07 14:39:57.185 2 DEBUG oslo_concurrency.lockutils [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:39:57 np0005473739 nova_compute[259550]: 2025-10-07 14:39:57.714 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:39:57 np0005473739 nova_compute[259550]: 2025-10-07 14:39:57.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:39:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:39:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2326: 305 pgs: 305 active+clean; 145 MiB data, 863 MiB used, 59 GiB / 60 GiB avail; 522 KiB/s rd, 2.2 MiB/s wr, 70 op/s
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.046 2 DEBUG nova.network.neutron [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.067 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.068 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance network_info: |[{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.069 2 DEBUG oslo_concurrency.lockutils [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.069 2 DEBUG nova.network.neutron [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing network info cache for port a40ec757-407d-4375-b756-d4fb8f5664b4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:40:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:00.072 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:00.073 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:00.073 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.075 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Start _get_guest_xml network_info=[{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.082 2 WARNING nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.098 2 DEBUG nova.virt.libvirt.host [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.099 2 DEBUG nova.virt.libvirt.host [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.111 2 DEBUG nova.virt.libvirt.host [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.112 2 DEBUG nova.virt.libvirt.host [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.113 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.113 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.114 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.114 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.114 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.114 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.115 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.115 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.115 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.115 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.116 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.116 2 DEBUG nova.virt.hardware [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.120 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2327: 305 pgs: 305 active+clean; 167 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 701 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct  7 10:40:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:40:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/723887806' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.578 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.608 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.614 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:40:00 np0005473739 nova_compute[259550]: 2025-10-07 14:40:00.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.014 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:40:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:40:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3252103378' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.053 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.438s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.054 2 DEBUG nova.virt.libvirt.vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.055 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.056 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.057 2 DEBUG nova.virt.libvirt.vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.057 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.058 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.059 2 DEBUG nova.objects.instance [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.077 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  <uuid>91ef3edf-0b1e-4a6d-8ef1-af2687c58b74</uuid>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  <name>instance-0000007a</name>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-1628698219</nova:name>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:40:00</nova:creationTime>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <nova:port uuid="ff77de20-1280-4c30-941d-c53ed7efcbe8">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <nova:port uuid="a40ec757-407d-4375-b756-d4fb8f5664b4">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe37:32f2" ipVersion="6"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe37:32f2" ipVersion="6"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <entry name="serial">91ef3edf-0b1e-4a6d-8ef1-af2687c58b74</entry>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <entry name="uuid">91ef3edf-0b1e-4a6d-8ef1-af2687c58b74</entry>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:09:b5:2f"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <target dev="tapff77de20-12"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:37:32:f2"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <target dev="tapa40ec757-40"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/console.log" append="off"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:40:01 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:40:01 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:40:01 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:40:01 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.078 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Preparing to wait for external event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.078 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.079 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.079 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.079 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Preparing to wait for external event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.080 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.080 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.080 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.081 2 DEBUG nova.virt.libvirt.vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.081 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.082 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.083 2 DEBUG os_vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.084 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.084 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.092 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapff77de20-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.093 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapff77de20-12, col_values=(('external_ids', {'iface-id': 'ff77de20-1280-4c30-941d-c53ed7efcbe8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:b5:2f', 'vm-uuid': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:01 np0005473739 NetworkManager[44949]: <info>  [1759848001.0955] manager: (tapff77de20-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/527)
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.106 2 INFO os_vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12')#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.106 2 DEBUG nova.virt.libvirt.vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:39:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.107 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.107 2 DEBUG nova.network.os_vif_util [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.108 2 DEBUG os_vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.109 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.110 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa40ec757-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.111 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa40ec757-40, col_values=(('external_ids', {'iface-id': 'a40ec757-407d-4375-b756-d4fb8f5664b4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:37:32:f2', 'vm-uuid': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:01 np0005473739 NetworkManager[44949]: <info>  [1759848001.1127] manager: (tapa40ec757-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/528)
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.124 2 INFO os_vif [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40')#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.181 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.182 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.182 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:09:b5:2f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.183 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:37:32:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.183 2 INFO nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Using config drive#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.210 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.720 2 INFO nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Creating config drive at /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.725 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4hesnxv2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.865 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4hesnxv2" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.895 2 DEBUG nova.storage.rbd_utils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:01 np0005473739 nova_compute[259550]: 2025-10-07 14:40:01.898 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.089 2 DEBUG oslo_concurrency.processutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.191s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.090 2 INFO nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Deleting local config drive /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74/disk.config because it was imported into RBD.#033[00m
Oct  7 10:40:02 np0005473739 NetworkManager[44949]: <info>  [1759848002.1388] manager: (tapff77de20-12): new Tun device (/org/freedesktop/NetworkManager/Devices/529)
Oct  7 10:40:02 np0005473739 kernel: tapff77de20-12: entered promiscuous mode
Oct  7 10:40:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:02Z|01306|binding|INFO|Claiming lport ff77de20-1280-4c30-941d-c53ed7efcbe8 for this chassis.
Oct  7 10:40:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:02Z|01307|binding|INFO|ff77de20-1280-4c30-941d-c53ed7efcbe8: Claiming fa:16:3e:09:b5:2f 10.100.0.12
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.183 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.188 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:b5:2f 10.100.0.12'], port_security=['fa:16:3e:09:b5:2f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b59ffdd2-4285-47f2-a931-fca691d1c031', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c45d6c4-6a15-4c19-8f6b-673e9b96b82d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ff77de20-1280-4c30-941d-c53ed7efcbe8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.189 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ff77de20-1280-4c30-941d-c53ed7efcbe8 in datapath b59ffdd2-4285-47f2-a931-fca691d1c031 bound to our chassis#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.190 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b59ffdd2-4285-47f2-a931-fca691d1c031#033[00m
Oct  7 10:40:02 np0005473739 NetworkManager[44949]: <info>  [1759848002.1924] manager: (tapa40ec757-40): new Tun device (/org/freedesktop/NetworkManager/Devices/530)
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:02 np0005473739 kernel: tapa40ec757-40: entered promiscuous mode
Oct  7 10:40:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:02Z|01308|binding|INFO|Setting lport ff77de20-1280-4c30-941d-c53ed7efcbe8 ovn-installed in OVS
Oct  7 10:40:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:02Z|01309|binding|INFO|Setting lport ff77de20-1280-4c30-941d-c53ed7efcbe8 up in Southbound
Oct  7 10:40:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:02Z|01310|if_status|INFO|Dropped 5 log messages in last 92 seconds (most recently, 92 seconds ago) due to excessive rate
Oct  7 10:40:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:02Z|01311|if_status|INFO|Not updating pb chassis for a40ec757-407d-4375-b756-d4fb8f5664b4 now as sb is readonly
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.204 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[59b9c16c-ec6b-4e47-b689-913434805153]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.207 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb59ffdd2-41 in ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.210 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb59ffdd2-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.210 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c90a952-70e4-44e6-96a1-56df6f1551c1]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:02Z|01312|binding|INFO|Claiming lport a40ec757-407d-4375-b756-d4fb8f5664b4 for this chassis.
Oct  7 10:40:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:02Z|01313|binding|INFO|a40ec757-407d-4375-b756-d4fb8f5664b4: Claiming fa:16:3e:37:32:f2 2001:db8:0:1:f816:3eff:fe37:32f2 2001:db8::f816:3eff:fe37:32f2
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.212 2 DEBUG nova.network.neutron [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updated VIF entry in instance network info cache for port a40ec757-407d-4375-b756-d4fb8f5664b4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.212 2 DEBUG nova.network.neutron [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.211 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad72db80-3736-4c38-91a4-9d7151450d9e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:02Z|01314|binding|INFO|Setting lport a40ec757-407d-4375-b756-d4fb8f5664b4 ovn-installed in OVS
Oct  7 10:40:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:02Z|01315|binding|INFO|Setting lport a40ec757-407d-4375-b756-d4fb8f5664b4 up in Southbound
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.229 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.233 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:32:f2 2001:db8:0:1:f816:3eff:fe37:32f2 2001:db8::f816:3eff:fe37:32f2'], port_security=['fa:16:3e:37:32:f2 2001:db8:0:1:f816:3eff:fe37:32f2 2001:db8::f816:3eff:fe37:32f2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe37:32f2/64 2001:db8::f816:3eff:fe37:32f2/64', 'neutron:device_id': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133edfbf-d913-46a7-a148-1a6e26213678, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=a40ec757-407d-4375-b756-d4fb8f5664b4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.236 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[89e36837-3f59-4977-93ac-3f8eed6c7c29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 systemd-machined[214580]: New machine qemu-153-instance-0000007a.
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.244 2 DEBUG oslo_concurrency.lockutils [req-07a94f41-a735-43cb-bf9b-fd9519e0f39e req-22834135-a575-4f9d-a2a5-38e18a93ecf8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:02 np0005473739 systemd[1]: Started Virtual Machine qemu-153-instance-0000007a.
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.263 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[031c57a4-dd35-4bcc-b71c-fd9c46159151]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 podman[391349]: 2025-10-07 14:40:02.277421049 +0000 UTC m=+0.060404095 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  7 10:40:02 np0005473739 systemd-udevd[391393]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:40:02 np0005473739 systemd-udevd[391394]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:40:02 np0005473739 podman[391345]: 2025-10-07 14:40:02.293462778 +0000 UTC m=+0.078010285 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Oct  7 10:40:02 np0005473739 NetworkManager[44949]: <info>  [1759848002.2966] device (tapff77de20-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:40:02 np0005473739 NetworkManager[44949]: <info>  [1759848002.2977] device (tapff77de20-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:40:02 np0005473739 NetworkManager[44949]: <info>  [1759848002.2983] device (tapa40ec757-40): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:40:02 np0005473739 NetworkManager[44949]: <info>  [1759848002.2989] device (tapa40ec757-40): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.300 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c1acfe7c-f3cb-4b44-8d7e-6ba179c0bd98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 NetworkManager[44949]: <info>  [1759848002.3059] manager: (tapb59ffdd2-40): new Veth device (/org/freedesktop/NetworkManager/Devices/531)
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.305 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dbf43075-9cbe-4819-9de6-64d128bdeede]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.346 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2db6aabc-032e-4484-ba70-47907b4eb920]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.351 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[76a092af-cb77-445f-9377-69e9a40c1976]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 NetworkManager[44949]: <info>  [1759848002.3779] device (tapb59ffdd2-40): carrier: link connected
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.385 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cc434b14-c275-450f-8f6e-9fa60d8b91e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.404 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[15849579-cb03-4e80-b385-45ef148d9057]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb59ffdd2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:3d:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858594, 'reachable_time': 15330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391423, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.423 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d36d09af-0662-4aa1-b8f0-d03b72ca8262]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:3d8f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858594, 'tstamp': 858594}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391424, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2328: 305 pgs: 305 active+clean; 167 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.9 MiB/s wr, 71 op/s
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.444 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2b0680c2-a9e8-4a2f-b5e5-3fb4b433feb7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb59ffdd2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:3d:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858594, 'reachable_time': 15330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391425, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.475 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17e149e0-0d2d-41ad-bb0b-13e1599bf6fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1a6e517-4cd6-433f-8595-0b6aa2abf7de]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.542 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb59ffdd2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb59ffdd2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:02 np0005473739 NetworkManager[44949]: <info>  [1759848002.5457] manager: (tapb59ffdd2-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/532)
Oct  7 10:40:02 np0005473739 kernel: tapb59ffdd2-40: entered promiscuous mode
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.553 2 DEBUG nova.compute.manager [req-7b20a5f8-2c46-4ed6-b3b1-428dfa3eae74 req-a7195039-6367-4d65-92a1-658d9b47c9f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.553 2 DEBUG oslo_concurrency.lockutils [req-7b20a5f8-2c46-4ed6-b3b1-428dfa3eae74 req-a7195039-6367-4d65-92a1-658d9b47c9f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.554 2 DEBUG oslo_concurrency.lockutils [req-7b20a5f8-2c46-4ed6-b3b1-428dfa3eae74 req-a7195039-6367-4d65-92a1-658d9b47c9f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.554 2 DEBUG oslo_concurrency.lockutils [req-7b20a5f8-2c46-4ed6-b3b1-428dfa3eae74 req-a7195039-6367-4d65-92a1-658d9b47c9f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.554 2 DEBUG nova.compute.manager [req-7b20a5f8-2c46-4ed6-b3b1-428dfa3eae74 req-a7195039-6367-4d65-92a1-658d9b47c9f9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Processing event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.555 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb59ffdd2-40, col_values=(('external_ids', {'iface-id': '4cc97c0a-633b-48bc-94c4-6f8ac1f61c66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:02Z|01316|binding|INFO|Releasing lport 4cc97c0a-633b-48bc-94c4-6f8ac1f61c66 from this chassis (sb_readonly=0)
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.560 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b59ffdd2-4285-47f2-a931-fca691d1c031.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b59ffdd2-4285-47f2-a931-fca691d1c031.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.561 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f15eeb4e-56f9-4eb3-a9d8-914d1d49c6d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.562 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-b59ffdd2-4285-47f2-a931-fca691d1c031
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/b59ffdd2-4285-47f2-a931-fca691d1c031.pid.haproxy
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID b59ffdd2-4285-47f2-a931-fca691d1c031
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:40:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:02.563 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'env', 'PROCESS_TAG=haproxy-b59ffdd2-4285-47f2-a931-fca691d1c031', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b59ffdd2-4285-47f2-a931-fca691d1c031.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.681 2 INFO nova.compute.manager [None req-42ecc973-ca36-49eb-816a-e1ba07af0f68 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Get console output#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.692 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:02 np0005473739 nova_compute[259550]: 2025-10-07 14:40:02.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.007 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.007 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:03 np0005473739 podman[391499]: 2025-10-07 14:40:02.922506686 +0000 UTC m=+0.021806734 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:40:03 np0005473739 podman[391499]: 2025-10-07 14:40:03.032752372 +0000 UTC m=+0.132052389 container create 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:40:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:03 np0005473739 systemd[1]: Started libpod-conmon-07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1.scope.
Oct  7 10:40:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:40:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25645b5d35c32237e95d967c51a9e71c44938bb7f9072a90ac733ca580ab77f3/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:03 np0005473739 podman[391499]: 2025-10-07 14:40:03.1546644 +0000 UTC m=+0.253964437 container init 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:40:03 np0005473739 podman[391499]: 2025-10-07 14:40:03.160950967 +0000 UTC m=+0.260250984 container start 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:40:03 np0005473739 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [NOTICE]   (391537) : New worker (391541) forked
Oct  7 10:40:03 np0005473739 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [NOTICE]   (391537) : Loading success.
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.224 161536 INFO neutron.agent.ovn.metadata.agent [-] Port a40ec757-407d-4375-b756-d4fb8f5664b4 in datapath abe90ba0-a518-4cef-a49b-de57485faec5 unbound from our chassis#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.226 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abe90ba0-a518-4cef-a49b-de57485faec5#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.239 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf2dca0-1171-45a8-949a-a5cd3751c735]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.240 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapabe90ba0-a1 in ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.242 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapabe90ba0-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.243 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dee9ac3b-9c5a-4c47-a1f1-31b877db7e20]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.245 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c0e1d4ef-adaf-4629-b311-558eff1436ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.260 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[aa2f2bb6-6cdf-40cc-970e-a0f708f5789a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.272 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f462fb46-7b87-4353-afdc-de76964ec627]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.302 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e79f9301-355b-4f1c-8f60-3ea8a54decbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 NetworkManager[44949]: <info>  [1759848003.3093] manager: (tapabe90ba0-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/533)
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.313 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f90168cc-6077-4517-887d-bf09a75b24e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.345 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b20215ed-2447-431b-8835-2ccfee384228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.348 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c60635db-4576-459a-b3fa-b0371aa7265f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.361 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848003.360391, 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.361 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] VM Started (Lifecycle Event)#033[00m
Oct  7 10:40:03 np0005473739 NetworkManager[44949]: <info>  [1759848003.3726] device (tapabe90ba0-a0): carrier: link connected
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.378 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4ae20115-5b08-4751-81e3-64bb941043c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.380 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.409 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848003.3606946, 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.409 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.416 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[864d5108-b036-48ad-a66f-ebff4807995f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabe90ba0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:a0:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858693, 'reachable_time': 40982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391560, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.425 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.428 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.430 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0cd3b273-737a-4f83-9aed-2b8119fd5403]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe6e:a012'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858693, 'tstamp': 858693}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391561, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.448 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.450 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b11404f0-73be-4eb5-85e4-645eb8926387]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabe90ba0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:a0:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858693, 'reachable_time': 40982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391562, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.477 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[94f8a9d4-a36f-463e-b5a8-8fcc57b36752]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:40:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/737997735' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.519 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5759d9a5-09cc-4335-b7db-51fee7c9b297]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.521 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabe90ba0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.522 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.522 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabe90ba0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:03 np0005473739 kernel: tapabe90ba0-a0: entered promiscuous mode
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.524 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:03 np0005473739 NetworkManager[44949]: <info>  [1759848003.5253] manager: (tapabe90ba0-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/534)
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.528 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabe90ba0-a0, col_values=(('external_ids', {'iface-id': '763708cd-58bb-4680-a4f7-042aa711a366'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:03Z|01317|binding|INFO|Releasing lport 763708cd-58bb-4680-a4f7-042aa711a366 from this chassis (sb_readonly=0)
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.533 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.526s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.551 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/abe90ba0-a518-4cef-a49b-de57485faec5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/abe90ba0-a518-4cef-a49b-de57485faec5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.553 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17c4d571-fa97-4d72-b683-c4aba625bf38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.554 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-abe90ba0-a518-4cef-a49b-de57485faec5
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/abe90ba0-a518-4cef-a49b-de57485faec5.pid.haproxy
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID abe90ba0-a518-4cef-a49b-de57485faec5
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:40:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:03.555 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'env', 'PROCESS_TAG=haproxy-abe90ba0-a518-4cef-a49b-de57485faec5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/abe90ba0-a518-4cef-a49b-de57485faec5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.640 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.640 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.644 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.645 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.828 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.829 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3373MB free_disk=59.92213821411133GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.829 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.830 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.926 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance d0b10640-5492-4d8f-8b94-a49a15b6e702 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.927 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.927 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:40:03 np0005473739 nova_compute[259550]: 2025-10-07 14:40:03.927 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:40:03 np0005473739 podman[391595]: 2025-10-07 14:40:03.968583668 +0000 UTC m=+0.048242760 container create edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:40:04 np0005473739 systemd[1]: Started libpod-conmon-edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9.scope.
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.019 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:40:04 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:40:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c49969737f58c490492c852bd2ea4f9f98b17080447bba92e02b629fbfab611a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:04 np0005473739 podman[391595]: 2025-10-07 14:40:03.947606518 +0000 UTC m=+0.027265640 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:40:04 np0005473739 podman[391595]: 2025-10-07 14:40:04.047291451 +0000 UTC m=+0.126950563 container init edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 10:40:04 np0005473739 podman[391595]: 2025-10-07 14:40:04.054119534 +0000 UTC m=+0.133778626 container start edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:40:04 np0005473739 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [NOTICE]   (391615) : New worker (391617) forked
Oct  7 10:40:04 np0005473739 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [NOTICE]   (391615) : Loading success.
Oct  7 10:40:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2329: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Oct  7 10:40:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:40:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1417907105' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.462 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.467 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.483 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.506 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.507 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.677s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.655 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.655 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.656 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.656 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.656 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No event matching network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 in dict_keys([('network-vif-plugged', 'a40ec757-407d-4375-b756-d4fb8f5664b4')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.656 2 WARNING nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received unexpected event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 for instance with vm_state building and task_state spawning.
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.656 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.656 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Processing event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.657 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.658 2 DEBUG oslo_concurrency.lockutils [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.658 2 DEBUG nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No waiting events found dispatching network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.658 2 WARNING nova.compute.manager [req-0e81ecfa-0db9-4c5a-a113-24eead3f7890 req-cbda7392-e9f1-4171-a089-e16ea7deec37 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received unexpected event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 for instance with vm_state building and task_state spawning.
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.659 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.663 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848004.6631212, 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.663 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] VM Resumed (Lifecycle Event)
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.666 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.670 2 INFO nova.virt.libvirt.driver [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance spawned successfully.
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.670 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.688 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.695 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.699 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.699 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.700 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.700 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.700 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.701 2 DEBUG nova.virt.libvirt.driver [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.730 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.761 2 INFO nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Took 18.99 seconds to spawn the instance on the hypervisor.
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.762 2 DEBUG nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.825 2 INFO nova.compute.manager [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Took 20.06 seconds to build instance.
Oct  7 10:40:04 np0005473739 nova_compute[259550]: 2025-10-07 14:40:04.842 2 DEBUG oslo_concurrency.lockutils [None req-5687ea17-97cc-4847-832b-f07c50cf40b7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 20.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:06 np0005473739 nova_compute[259550]: 2025-10-07 14:40:06.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:40:06 np0005473739 nova_compute[259550]: 2025-10-07 14:40:06.239 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-None" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:06 np0005473739 nova_compute[259550]: 2025-10-07 14:40:06.239 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-None" acquired by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:06 np0005473739 nova_compute[259550]: 2025-10-07 14:40:06.240 2 DEBUG nova.objects.instance [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'flavor' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:40:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2330: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 891 KiB/s rd, 2.2 MiB/s wr, 88 op/s
Oct  7 10:40:06 np0005473739 nova_compute[259550]: 2025-10-07 14:40:06.855 2 DEBUG nova.objects.instance [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_requests' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:40:06 np0005473739 nova_compute[259550]: 2025-10-07 14:40:06.867 2 DEBUG nova.network.neutron [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:40:07 np0005473739 nova_compute[259550]: 2025-10-07 14:40:07.116 2 DEBUG nova.policy [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:40:07 np0005473739 nova_compute[259550]: 2025-10-07 14:40:07.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:40:07 np0005473739 nova_compute[259550]: 2025-10-07 14:40:07.856 2 DEBUG nova.network.neutron [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Successfully created port: 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:40:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:08 np0005473739 nova_compute[259550]: 2025-10-07 14:40:08.289 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:08 np0005473739 nova_compute[259550]: 2025-10-07 14:40:08.290 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:08 np0005473739 nova_compute[259550]: 2025-10-07 14:40:08.323 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:40:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2331: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 794 KiB/s rd, 1.7 MiB/s wr, 57 op/s
Oct  7 10:40:08 np0005473739 nova_compute[259550]: 2025-10-07 14:40:08.501 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:08 np0005473739 nova_compute[259550]: 2025-10-07 14:40:08.501 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:08 np0005473739 nova_compute[259550]: 2025-10-07 14:40:08.502 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:40:08 np0005473739 nova_compute[259550]: 2025-10-07 14:40:08.510 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:40:08 np0005473739 nova_compute[259550]: 2025-10-07 14:40:08.511 2 INFO nova.compute.claims [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:40:08 np0005473739 nova_compute[259550]: 2025-10-07 14:40:08.531 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:40:08 np0005473739 nova_compute[259550]: 2025-10-07 14:40:08.919 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:40:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:40:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/860519083' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.409 2 DEBUG nova.network.neutron [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Successfully updated port: 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.426 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.432 2 DEBUG nova.compute.provider_tree [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.463 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.463 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.464 2 DEBUG nova.network.neutron [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.472 2 DEBUG nova.scheduler.client.report [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.528 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.027s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.529 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.591 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.592 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.618 2 INFO nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.639 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.736 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.738 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.739 2 INFO nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Creating image(s)
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.764 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.793 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.824 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.827 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.871 2 DEBUG nova.compute.manager [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-changed-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.872 2 DEBUG nova.compute.manager [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing instance network info cache due to event network-changed-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.872 2 DEBUG oslo_concurrency.lockutils [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.918 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.919 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.920 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.920 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.940 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.943 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:40:09 np0005473739 nova_compute[259550]: 2025-10-07 14:40:09.990 2 DEBUG nova.policy [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '57b4a09a91e94b4c8417e522a9a10496', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.306 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.363s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.345 2 DEBUG nova.compute.manager [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.346 2 DEBUG nova.compute.manager [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing instance network info cache due to event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.346 2 DEBUG oslo_concurrency.lockutils [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.346 2 DEBUG oslo_concurrency.lockutils [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.347 2 DEBUG nova.network.neutron [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing network info cache for port ff77de20-1280-4c30-941d-c53ed7efcbe8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.395 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] resizing rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:40:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2332: 305 pgs: 305 active+clean; 167 MiB data, 902 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 97 op/s
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.490 2 DEBUG nova.objects.instance [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'migration_context' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.605 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.605 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Ensure instance console log exists: /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.605 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.606 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:10 np0005473739 nova_compute[259550]: 2025-10-07 14:40:10.606 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:11 np0005473739 nova_compute[259550]: 2025-10-07 14:40:11.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:40:11 np0005473739 nova_compute[259550]: 2025-10-07 14:40:11.463 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Successfully created port: 4eb92b42-4298-4d8e-8455-b37a2972c583 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:40:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2333: 305 pgs: 305 active+clean; 170 MiB data, 903 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 123 KiB/s wr, 77 op/s
Oct  7 10:40:12 np0005473739 nova_compute[259550]: 2025-10-07 14:40:12.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:40:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.276 2 DEBUG nova.network.neutron [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updated VIF entry in instance network info cache for port ff77de20-1280-4c30-941d-c53ed7efcbe8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.277 2 DEBUG nova.network.neutron [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.296 2 DEBUG oslo_concurrency.lockutils [req-7f845283-f045-4462-a4c6-3324b3259ae2 req-ad129684-5e11-47f6-891b-90b2c0c427e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.310 2 DEBUG nova.network.neutron [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": 
"2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.327 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.328 2 DEBUG oslo_concurrency.lockutils [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.329 2 DEBUG nova.network.neutron [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing network info cache for port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.332 2 DEBUG nova.virt.libvirt.vif [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:42Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.332 2 DEBUG nova.network.os_vif_util [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.333 2 DEBUG nova.network.os_vif_util [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.334 2 DEBUG os_vif [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.335 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.338 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap359e9c20-ec, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.339 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap359e9c20-ec, col_values=(('external_ids', {'iface-id': '359e9c20-ec4d-4bc9-bfc1-93f3464bf09b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:75:78', 'vm-uuid': 'd0b10640-5492-4d8f-8b94-a49a15b6e702'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:13 np0005473739 NetworkManager[44949]: <info>  [1759848013.3419] manager: (tap359e9c20-ec): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/535)
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.349 2 INFO os_vif [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec')#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.350 2 DEBUG nova.virt.libvirt.vif [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:42Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.350 2 DEBUG nova.network.os_vif_util [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.351 2 DEBUG nova.network.os_vif_util [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.355 2 DEBUG nova.virt.libvirt.guest [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] attach device xml: <interface type="ethernet">
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:98:75:78"/>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <target dev="tap359e9c20-ec"/>
Oct  7 10:40:13 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:40:13 np0005473739 nova_compute[259550]: attach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:339#033[00m
Oct  7 10:40:13 np0005473739 kernel: tap359e9c20-ec: entered promiscuous mode
Oct  7 10:40:13 np0005473739 NetworkManager[44949]: <info>  [1759848013.3672] manager: (tap359e9c20-ec): new Tun device (/org/freedesktop/NetworkManager/Devices/536)
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:13Z|01318|binding|INFO|Claiming lport 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b for this chassis.
Oct  7 10:40:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:13Z|01319|binding|INFO|359e9c20-ec4d-4bc9-bfc1-93f3464bf09b: Claiming fa:16:3e:98:75:78 10.100.0.23
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.389 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:75:78 10.100.0.23'], port_security=['fa:16:3e:98:75:78 10.100.0.23'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'd0b10640-5492-4d8f-8b94-a49a15b6e702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4af57759-0100-4ebb-81fb-af43dd151fff, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.390 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b in datapath cae6e154-5797-4df5-a9e8-545cc6ed0188 bound to our chassis#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.392 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cae6e154-5797-4df5-a9e8-545cc6ed0188#033[00m
Oct  7 10:40:13 np0005473739 systemd-udevd[391842]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.410 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[907144ef-4e05-4051-a7eb-db065d335514]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.413 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcae6e154-51 in ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:40:13 np0005473739 NetworkManager[44949]: <info>  [1759848013.4156] device (tap359e9c20-ec): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:40:13 np0005473739 NetworkManager[44949]: <info>  [1759848013.4167] device (tap359e9c20-ec): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.414 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcae6e154-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.414 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[278d0e7e-f498-48ef-9da4-78659ddd019e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.419 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[894c8ca5-a711-49b6-94d6-df98765ed8cc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:13Z|01320|binding|INFO|Setting lport 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b ovn-installed in OVS
Oct  7 10:40:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:13Z|01321|binding|INFO|Setting lport 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b up in Southbound
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.436 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[309d8a80-a4b6-4587-bcf5-36d0414b925b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.451 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bed1ee1f-9e70-4f62-a921-e1384b0af8a8]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.477 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Successfully updated port: 4eb92b42-4298-4d8e-8455-b37a2972c583 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.482 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[35de56b9-7626-4ffe-b9e7-1c1b63686ddb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 systemd-udevd[391844]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:40:13 np0005473739 NetworkManager[44949]: <info>  [1759848013.4894] manager: (tapcae6e154-50): new Veth device (/org/freedesktop/NetworkManager/Devices/537)
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.490 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be9b78e3-180d-4e5f-98a5-bcf98c94fc51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.534 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[64cc0a56-9f08-4509-8f0d-08820ac65678]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.538 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9200b11a-269c-4f14-a274-b6972360a0b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 NetworkManager[44949]: <info>  [1759848013.5618] device (tapcae6e154-50): carrier: link connected
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.565 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.565 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquired lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.565 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.567 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3b50c694-20e1-4b7c-837c-9929b73c758c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.586 2 DEBUG nova.virt.libvirt.driver [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.587 2 DEBUG nova.virt.libvirt.driver [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.588 2 DEBUG nova.virt.libvirt.driver [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:6d:ee:73, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.588 2 DEBUG nova.virt.libvirt.driver [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:98:75:78, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.594 2 DEBUG nova.compute.manager [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-changed-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.594 2 DEBUG nova.compute.manager [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Refreshing instance network info cache due to event network-changed-4eb92b42-4298-4d8e-8455-b37a2972c583. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.595 2 DEBUG oslo_concurrency.lockutils [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.596 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a182ef1-9279-4a01-9cc5-c1ab3fee0d44]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcae6e154-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:47:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859712, 'reachable_time': 43552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 391868, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.617 2 DEBUG nova.virt.libvirt.guest [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <nova:name>tempest-TestNetworkBasicOps-server-666494646</nova:name>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:40:13</nova:creationTime>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:40:13 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:    <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:    <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:    <nova:port uuid="b7bf5de8-3ba0-43cd-a839-d8812cbe4276">
Oct  7 10:40:13 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:    <nova:port uuid="359e9c20-ec4d-4bc9-bfc1-93f3464bf09b">
Oct  7 10:40:13 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:40:13 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:40:13 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:40:13 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.619 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[37cbebd8-6509-4532-ab84-5c5649807691]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec9:47d6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 859712, 'tstamp': 859712}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 391869, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.645 2 DEBUG oslo_concurrency.lockutils [None req-4704dbc1-fb87-4146-ba05-3714119a0db8 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-None" "released" by "nova.compute.manager.ComputeManager.attach_interface.<locals>.do_attach_interface" :: held 7.405s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.647 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[23722516-8cc4-4175-b335-6b8b9ce5f4bf]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcae6e154-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:47:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859712, 'reachable_time': 43552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 391870, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.683 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3e21d1c1-7140-4c89-89c1-57c6ba25de0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.741 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eddb9b43-ad8a-40eb-bb85-04c4a80a37e9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.743 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcae6e154-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.743 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.743 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcae6e154-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:13 np0005473739 NetworkManager[44949]: <info>  [1759848013.7457] manager: (tapcae6e154-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/538)
Oct  7 10:40:13 np0005473739 kernel: tapcae6e154-50: entered promiscuous mode
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.747 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.749 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcae6e154-50, col_values=(('external_ids', {'iface-id': '795a08c5-66c3-453c-a5db-19a02c166ab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:13Z|01322|binding|INFO|Releasing lport 795a08c5-66c3-453c-a5db-19a02c166ab7 from this chassis (sb_readonly=0)
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.758 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.768 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.771 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cae6e154-5797-4df5-a9e8-545cc6ed0188.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cae6e154-5797-4df5-a9e8-545cc6ed0188.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.772 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[28395a54-f7cd-474e-8707-e1c6de6ba3fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.772 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-cae6e154-5797-4df5-a9e8-545cc6ed0188
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/cae6e154-5797-4df5-a9e8-545cc6ed0188.pid.haproxy
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID cae6e154-5797-4df5-a9e8-545cc6ed0188
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:40:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:13.773 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'env', 'PROCESS_TAG=haproxy-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cae6e154-5797-4df5-a9e8-545cc6ed0188.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.845 2 DEBUG nova.compute.manager [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.845 2 DEBUG oslo_concurrency.lockutils [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.846 2 DEBUG oslo_concurrency.lockutils [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.846 2 DEBUG oslo_concurrency.lockutils [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.846 2 DEBUG nova.compute.manager [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:13 np0005473739 nova_compute[259550]: 2025-10-07 14:40:13.846 2 WARNING nova.compute.manager [req-6c23bbed-c91d-46a4-a38d-49aff723be6f req-82d042e2-6eab-4e66-9357-834148a315df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b for instance with vm_state active and task_state None.#033[00m
Oct  7 10:40:14 np0005473739 podman[391902]: 2025-10-07 14:40:14.127937671 +0000 UTC m=+0.063843718 container create 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:40:14 np0005473739 systemd[1]: Started libpod-conmon-397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e.scope.
Oct  7 10:40:14 np0005473739 podman[391902]: 2025-10-07 14:40:14.093889666 +0000 UTC m=+0.029795763 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:40:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:40:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1a41d1aa38f8abe0fef319a4a94399d666f0b1094fa534bf42c28aa629de3903/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:14 np0005473739 podman[391902]: 2025-10-07 14:40:14.213744951 +0000 UTC m=+0.149651018 container init 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 10:40:14 np0005473739 podman[391902]: 2025-10-07 14:40:14.219071103 +0000 UTC m=+0.154977160 container start 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:40:14 np0005473739 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [NOTICE]   (391921) : New worker (391923) forked
Oct  7 10:40:14 np0005473739 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [NOTICE]   (391921) : Loading success.
Oct  7 10:40:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2334: 305 pgs: 305 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 101 op/s
Oct  7 10:40:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:15Z|00150|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:75:78 10.100.0.23
Oct  7 10:40:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:15Z|00151|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:75:78 10.100.0.23
Oct  7 10:40:15 np0005473739 nova_compute[259550]: 2025-10-07 14:40:15.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:15.823 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=40, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=39) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:15.825 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:40:15 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:15.829 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '40'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:15 np0005473739 nova_compute[259550]: 2025-10-07 14:40:15.969 2 DEBUG nova.compute.manager [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:15 np0005473739 nova_compute[259550]: 2025-10-07 14:40:15.970 2 DEBUG oslo_concurrency.lockutils [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:15 np0005473739 nova_compute[259550]: 2025-10-07 14:40:15.970 2 DEBUG oslo_concurrency.lockutils [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:15 np0005473739 nova_compute[259550]: 2025-10-07 14:40:15.970 2 DEBUG oslo_concurrency.lockutils [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:15 np0005473739 nova_compute[259550]: 2025-10-07 14:40:15.970 2 DEBUG nova.compute.manager [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:15 np0005473739 nova_compute[259550]: 2025-10-07 14:40:15.971 2 WARNING nova.compute.manager [req-e5b6afc2-74aa-4f37-bc7b-69d473eafdd1 req-6f42d036-5b71-453d-b687-0b007e5dcf05 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b for instance with vm_state active and task_state None.#033[00m
Oct  7 10:40:16 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  7 10:40:16 np0005473739 podman[391932]: 2025-10-07 14:40:16.082074553 +0000 UTC m=+0.067049234 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:40:16 np0005473739 podman[391933]: 2025-10-07 14:40:16.121412308 +0000 UTC m=+0.102708850 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct  7 10:40:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2335: 305 pgs: 305 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 93 op/s
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.481 2 DEBUG nova.network.neutron [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updating instance_info_cache with network_info: [{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.504 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Releasing lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.505 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance network_info: |[{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.505 2 DEBUG oslo_concurrency.lockutils [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.505 2 DEBUG nova.network.neutron [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Refreshing network info cache for port 4eb92b42-4298-4d8e-8455-b37a2972c583 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.508 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Start _get_guest_xml network_info=[{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.512 2 WARNING nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.518 2 DEBUG nova.virt.libvirt.host [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.519 2 DEBUG nova.virt.libvirt.host [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.525 2 DEBUG nova.virt.libvirt.host [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.526 2 DEBUG nova.virt.libvirt.host [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.527 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.528 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.529 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.530 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.530 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.530 2 DEBUG nova.virt.hardware [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.533 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.578 2 DEBUG nova.network.neutron [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated VIF entry in instance network info cache for port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.579 2 DEBUG nova.network.neutron [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.599 2 DEBUG oslo_concurrency.lockutils [req-f0193ea0-cf5e-4255-9626-8729241b5ab0 req-b604bacb-e228-42e0-b1ff-241e32e91f45 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:40:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2554996707' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:40:16 np0005473739 nova_compute[259550]: 2025-10-07 14:40:16.989 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.016 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.022 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:40:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/320746890' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.485 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.487 2 DEBUG nova.virt.libvirt.vif [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1564405251',display_name='tempest-TestServerAdvancedOps-server-1564405251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1564405251',id=123,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c1c9329037b74c90a5df2b4a0e0afe75',ramdisk_id='',reservation_id='r-wo40rctq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1206359101',owner_user_name='tempest-TestServerAdvancedOps-1
206359101-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:09Z,user_data=None,user_id='57b4a09a91e94b4c8417e522a9a10496',uuid=0ec8fffb-cb39-4dd3-88b8-41467b24be13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.488 2 DEBUG nova.network.os_vif_util [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converting VIF {"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.489 2 DEBUG nova.network.os_vif_util [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.490 2 DEBUG nova.objects.instance [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.505 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  <uuid>0ec8fffb-cb39-4dd3-88b8-41467b24be13</uuid>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  <name>instance-0000007b</name>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestServerAdvancedOps-server-1564405251</nova:name>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:40:16</nova:creationTime>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <nova:user uuid="57b4a09a91e94b4c8417e522a9a10496">tempest-TestServerAdvancedOps-1206359101-project-member</nova:user>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <nova:project uuid="c1c9329037b74c90a5df2b4a0e0afe75">tempest-TestServerAdvancedOps-1206359101</nova:project>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <nova:port uuid="4eb92b42-4298-4d8e-8455-b37a2972c583">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <entry name="serial">0ec8fffb-cb39-4dd3-88b8-41467b24be13</entry>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <entry name="uuid">0ec8fffb-cb39-4dd3-88b8-41467b24be13</entry>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:ee:3b:52"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <target dev="tap4eb92b42-42"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/console.log" append="off"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:40:17 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:40:17 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:40:17 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:40:17 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.506 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Preparing to wait for external event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.506 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.506 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.506 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.507 2 DEBUG nova.virt.libvirt.vif [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1564405251',display_name='tempest-TestServerAdvancedOps-server-1564405251',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1564405251',id=123,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='c1c9329037b74c90a5df2b4a0e0afe75',ramdisk_id='',reservation_id='r-wo40rctq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerAdvancedOps-1206359101',owner_user_name='tempest-TestServerAdv
ancedOps-1206359101-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:09Z,user_data=None,user_id='57b4a09a91e94b4c8417e522a9a10496',uuid=0ec8fffb-cb39-4dd3-88b8-41467b24be13,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.508 2 DEBUG nova.network.os_vif_util [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converting VIF {"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.508 2 DEBUG nova.network.os_vif_util [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.509 2 DEBUG os_vif [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.510 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.511 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.514 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4eb92b42-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.515 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4eb92b42-42, col_values=(('external_ids', {'iface-id': '4eb92b42-4298-4d8e-8455-b37a2972c583', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:3b:52', 'vm-uuid': '0ec8fffb-cb39-4dd3-88b8-41467b24be13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:17 np0005473739 NetworkManager[44949]: <info>  [1759848017.5178] manager: (tap4eb92b42-42): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/539)
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.525 2 INFO os_vif [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42')#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.624 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.625 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.625 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] No VIF found with MAC fa:16:3e:ee:3b:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.625 2 INFO nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Using config drive#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.652 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:17 np0005473739 nova_compute[259550]: 2025-10-07 14:40:17.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:18Z|00152|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:b5:2f 10.100.0.12
Oct  7 10:40:18 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:18Z|00153|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:b5:2f 10.100.0.12
Oct  7 10:40:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2336: 305 pgs: 305 active+clean; 213 MiB data, 923 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct  7 10:40:18 np0005473739 nova_compute[259550]: 2025-10-07 14:40:18.725 2 INFO nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Creating config drive at /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config#033[00m
Oct  7 10:40:18 np0005473739 nova_compute[259550]: 2025-10-07 14:40:18.736 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpesn4y3rw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:18 np0005473739 nova_compute[259550]: 2025-10-07 14:40:18.886 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpesn4y3rw" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:18 np0005473739 nova_compute[259550]: 2025-10-07 14:40:18.915 2 DEBUG nova.storage.rbd_utils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] rbd image 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:18 np0005473739 nova_compute[259550]: 2025-10-07 14:40:18.919 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.029 2 DEBUG nova.network.neutron [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updated VIF entry in instance network info cache for port 4eb92b42-4298-4d8e-8455-b37a2972c583. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.030 2 DEBUG nova.network.neutron [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updating instance_info_cache with network_info: [{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.046 2 DEBUG oslo_concurrency.lockutils [req-eac82111-1332-4b4e-8208-197f9b581bee req-8cb9e5d2-c597-402a-8a5d-7ce064ab58ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.092 2 DEBUG oslo_concurrency.processutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config 0ec8fffb-cb39-4dd3-88b8-41467b24be13_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.092 2 INFO nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Deleting local config drive /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13/disk.config because it was imported into RBD.#033[00m
Oct  7 10:40:19 np0005473739 kernel: tap4eb92b42-42: entered promiscuous mode
Oct  7 10:40:19 np0005473739 NetworkManager[44949]: <info>  [1759848019.1477] manager: (tap4eb92b42-42): new Tun device (/org/freedesktop/NetworkManager/Devices/540)
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:19Z|01323|binding|INFO|Claiming lport 4eb92b42-4298-4d8e-8455-b37a2972c583 for this chassis.
Oct  7 10:40:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:19Z|01324|binding|INFO|4eb92b42-4298-4d8e-8455-b37a2972c583: Claiming fa:16:3e:ee:3b:52 10.100.0.4
Oct  7 10:40:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:19.160 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:19.162 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 bound to our chassis#033[00m
Oct  7 10:40:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:19.163 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  7 10:40:19 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:19.164 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e757823b-d62d-4c95-9464-f4817068cf43]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:19 np0005473739 systemd-machined[214580]: New machine qemu-154-instance-0000007b.
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:19 np0005473739 systemd-udevd[392110]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:40:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:19Z|01325|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 ovn-installed in OVS
Oct  7 10:40:19 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:19Z|01326|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 up in Southbound
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:19 np0005473739 systemd[1]: Started Virtual Machine qemu-154-instance-0000007b.
Oct  7 10:40:19 np0005473739 NetworkManager[44949]: <info>  [1759848019.2179] device (tap4eb92b42-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:40:19 np0005473739 NetworkManager[44949]: <info>  [1759848019.2190] device (tap4eb92b42-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.944 2 DEBUG nova.compute.manager [req-84b76b99-7afb-408f-adbf-f973d773c85a req-ab593593-3287-4fd9-b15d-26a55687fb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.944 2 DEBUG oslo_concurrency.lockutils [req-84b76b99-7afb-408f-adbf-f973d773c85a req-ab593593-3287-4fd9-b15d-26a55687fb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.944 2 DEBUG oslo_concurrency.lockutils [req-84b76b99-7afb-408f-adbf-f973d773c85a req-ab593593-3287-4fd9-b15d-26a55687fb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.945 2 DEBUG oslo_concurrency.lockutils [req-84b76b99-7afb-408f-adbf-f973d773c85a req-ab593593-3287-4fd9-b15d-26a55687fb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.945 2 DEBUG nova.compute.manager [req-84b76b99-7afb-408f-adbf-f973d773c85a req-ab593593-3287-4fd9-b15d-26a55687fb50 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Processing event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.964 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848019.964502, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.965 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Started (Lifecycle Event)
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.967 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.972 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.976 2 INFO nova.virt.libvirt.driver [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance spawned successfully.
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.976 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.983 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:40:19 np0005473739 nova_compute[259550]: 2025-10-07 14:40:19.986 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.002 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.002 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.003 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.003 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.003 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.004 2 DEBUG nova.virt.libvirt.driver [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.008 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.008 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848019.9653676, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.009 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Paused (Lifecycle Event)
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.034 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.038 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848019.9695575, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.038 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Resumed (Lifecycle Event)
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.060 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.063 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.074 2 INFO nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Took 10.34 seconds to spawn the instance on the hypervisor.
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.074 2 DEBUG nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.245 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.282 2 INFO nova.compute.manager [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Took 11.80 seconds to build instance.
Oct  7 10:40:20 np0005473739 nova_compute[259550]: 2025-10-07 14:40:20.311 2 DEBUG oslo_concurrency.lockutils [None req-a91f3a57-f6a9-420e-99db-c291564c0968 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.022s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2337: 305 pgs: 305 active+clean; 237 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 3.4 MiB/s wr, 109 op/s
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:40:21 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 40ec1401-c665-46d2-8502-4453c0a3e91d does not exist
Oct  7 10:40:21 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 2ef472cc-49a8-4483-8789-7acfd3609929 does not exist
Oct  7 10:40:21 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8de346ad-bce1-42af-94be-eadab2bc2db1 does not exist
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:40:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.026 2 DEBUG nova.compute.manager [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.027 2 DEBUG oslo_concurrency.lockutils [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.028 2 DEBUG oslo_concurrency.lockutils [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.028 2 DEBUG oslo_concurrency.lockutils [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.028 2 DEBUG nova.compute.manager [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.028 2 WARNING nova.compute.manager [req-bcad2b0d-8728-4b1a-b916-c59d7c3d156a req-df6493bb-ef61-4fb7-81c5-fd5899d9ef74 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state active and task_state None.
Oct  7 10:40:22 np0005473739 podman[392432]: 2025-10-07 14:40:22.205641929 +0000 UTC m=+0.039823869 container create fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:40:22 np0005473739 systemd[1]: Started libpod-conmon-fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a.scope.
Oct  7 10:40:22 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:40:22 np0005473739 podman[392432]: 2025-10-07 14:40:22.187780775 +0000 UTC m=+0.021962735 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:40:22 np0005473739 podman[392432]: 2025-10-07 14:40:22.292425375 +0000 UTC m=+0.126607315 container init fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:40:22 np0005473739 podman[392432]: 2025-10-07 14:40:22.299192625 +0000 UTC m=+0.133374555 container start fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:40:22 np0005473739 podman[392432]: 2025-10-07 14:40:22.302326238 +0000 UTC m=+0.136508198 container attach fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:40:22 np0005473739 objective_roentgen[392448]: 167 167
Oct  7 10:40:22 np0005473739 systemd[1]: libpod-fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a.scope: Deactivated successfully.
Oct  7 10:40:22 np0005473739 podman[392432]: 2025-10-07 14:40:22.304176128 +0000 UTC m=+0.138358058 container died fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:40:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:40:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:40:22 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:40:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-617ab9fd97dfe0c8a47757def16a21685a4c7070b49f3d43037667d1b4efc11b-merged.mount: Deactivated successfully.
Oct  7 10:40:22 np0005473739 podman[392432]: 2025-10-07 14:40:22.364883982 +0000 UTC m=+0.199065912 container remove fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_roentgen, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:40:22 np0005473739 systemd[1]: libpod-conmon-fe817f064b534249b4e60e15f3f2431b442326cd0ab35ea3b436968b4c80465a.scope: Deactivated successfully.
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2338: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 598 KiB/s rd, 3.9 MiB/s wr, 101 op/s
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:40:22 np0005473739 podman[392475]: 2025-10-07 14:40:22.589491762 +0000 UTC m=+0.043294562 container create 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 10:40:22 np0005473739 systemd[1]: Started libpod-conmon-1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a.scope.
Oct  7 10:40:22 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:40:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:22 np0005473739 podman[392475]: 2025-10-07 14:40:22.572603322 +0000 UTC m=+0.026406142 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:40:22 np0005473739 podman[392475]: 2025-10-07 14:40:22.686971893 +0000 UTC m=+0.140774703 container init 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:40:22 np0005473739 podman[392475]: 2025-10-07 14:40:22.697008519 +0000 UTC m=+0.150811319 container start 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 10:40:22 np0005473739 podman[392475]: 2025-10-07 14:40:22.703957464 +0000 UTC m=+0.157760364 container attach 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:40:22
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'vms', 'backups', '.mgr', 'images']
Oct  7 10:40:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.905 2 DEBUG nova.objects.instance [None req-828e63df-bb74-43ea-9e90-4889af12f1dd 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.936 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848022.9367168, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.937 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Paused (Lifecycle Event)
Oct  7 10:40:22 np0005473739 nova_compute[259550]: 2025-10-07 14:40:22.993 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:40:23 np0005473739 nova_compute[259550]: 2025-10-07 14:40:23.000 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:40:23 np0005473739 nova_compute[259550]: 2025-10-07 14:40:23.034 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (suspending). Skip.
Oct  7 10:40:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:40:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:40:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:40:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:40:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:40:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:40:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:40:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:40:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:40:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:40:23 np0005473739 kernel: tap4eb92b42-42 (unregistering): left promiscuous mode
Oct  7 10:40:23 np0005473739 NetworkManager[44949]: <info>  [1759848023.4525] device (tap4eb92b42-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:40:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:23Z|01327|binding|INFO|Releasing lport 4eb92b42-4298-4d8e-8455-b37a2972c583 from this chassis (sb_readonly=0)
Oct  7 10:40:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:23Z|01328|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 down in Southbound
Oct  7 10:40:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:23Z|01329|binding|INFO|Removing iface tap4eb92b42-42 ovn-installed in OVS
Oct  7 10:40:23 np0005473739 nova_compute[259550]: 2025-10-07 14:40:23.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:23 np0005473739 nova_compute[259550]: 2025-10-07 14:40:23.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:23 np0005473739 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct  7 10:40:23 np0005473739 systemd[1]: machine-qemu\x2d154\x2dinstance\x2d0000007b.scope: Consumed 3.824s CPU time.
Oct  7 10:40:23 np0005473739 systemd-machined[214580]: Machine qemu-154-instance-0000007b terminated.
Oct  7 10:40:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:23.543 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:23.544 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 unbound from our chassis#033[00m
Oct  7 10:40:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:23.545 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  7 10:40:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:23.546 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7fdc5ca9-be2e-4215-8d88-68685a2422b9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:23 np0005473739 nova_compute[259550]: 2025-10-07 14:40:23.642 2 DEBUG nova.compute.manager [None req-828e63df-bb74-43ea-9e90-4889af12f1dd 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:23 np0005473739 competent_diffie[392492]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:40:23 np0005473739 competent_diffie[392492]: --> relative data size: 1.0
Oct  7 10:40:23 np0005473739 competent_diffie[392492]: --> All data devices are unavailable
Oct  7 10:40:23 np0005473739 systemd[1]: libpod-1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a.scope: Deactivated successfully.
Oct  7 10:40:23 np0005473739 systemd[1]: libpod-1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a.scope: Consumed 1.028s CPU time.
Oct  7 10:40:23 np0005473739 podman[392475]: 2025-10-07 14:40:23.794747518 +0000 UTC m=+1.248550318 container died 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:40:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-fe8c995ae45cad174a8f52d0616796f82ad90dc9c2bb0d84b9b2789f19a41f32-merged.mount: Deactivated successfully.
Oct  7 10:40:23 np0005473739 podman[392475]: 2025-10-07 14:40:23.901687791 +0000 UTC m=+1.355490591 container remove 1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_diffie, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 10:40:23 np0005473739 systemd[1]: libpod-conmon-1e736a9abd9563b0a3f269e1fab7db8af32cca5768c60ab4c42081923db50d0a.scope: Deactivated successfully.
Oct  7 10:40:24 np0005473739 nova_compute[259550]: 2025-10-07 14:40:24.219 2 DEBUG nova.compute.manager [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:24 np0005473739 nova_compute[259550]: 2025-10-07 14:40:24.219 2 DEBUG oslo_concurrency.lockutils [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:24 np0005473739 nova_compute[259550]: 2025-10-07 14:40:24.220 2 DEBUG oslo_concurrency.lockutils [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:24 np0005473739 nova_compute[259550]: 2025-10-07 14:40:24.220 2 DEBUG oslo_concurrency.lockutils [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:24 np0005473739 nova_compute[259550]: 2025-10-07 14:40:24.220 2 DEBUG nova.compute.manager [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:24 np0005473739 nova_compute[259550]: 2025-10-07 14:40:24.220 2 WARNING nova.compute.manager [req-cb411942-ffb7-4311-96d8-23bae217dac7 req-3c10df1a-3fcf-4568-9a49-c4f33eedb72c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state suspended and task_state None.#033[00m
Oct  7 10:40:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2339: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.8 MiB/s wr, 163 op/s
Oct  7 10:40:24 np0005473739 podman[392694]: 2025-10-07 14:40:24.514769804 +0000 UTC m=+0.020012763 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:40:24 np0005473739 podman[392694]: 2025-10-07 14:40:24.62560979 +0000 UTC m=+0.130852739 container create 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 10:40:24 np0005473739 systemd[1]: Started libpod-conmon-9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122.scope.
Oct  7 10:40:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:40:24 np0005473739 podman[392694]: 2025-10-07 14:40:24.763549987 +0000 UTC m=+0.268792966 container init 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:40:24 np0005473739 podman[392694]: 2025-10-07 14:40:24.771620501 +0000 UTC m=+0.276863460 container start 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 10:40:24 np0005473739 priceless_joliot[392710]: 167 167
Oct  7 10:40:24 np0005473739 systemd[1]: libpod-9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122.scope: Deactivated successfully.
Oct  7 10:40:24 np0005473739 podman[392694]: 2025-10-07 14:40:24.777023155 +0000 UTC m=+0.282266104 container attach 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:40:24 np0005473739 conmon[392710]: conmon 9e0788ffb871bc16e730 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122.scope/container/memory.events
Oct  7 10:40:24 np0005473739 podman[392694]: 2025-10-07 14:40:24.77872604 +0000 UTC m=+0.283969009 container died 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:40:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-79527061b0b371aa6d469abf3493c4f3ff812c9d1ab538942e2b0417b4a2f60a-merged.mount: Deactivated successfully.
Oct  7 10:40:24 np0005473739 podman[392694]: 2025-10-07 14:40:24.823160422 +0000 UTC m=+0.328403371 container remove 9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_joliot, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:40:24 np0005473739 systemd[1]: libpod-conmon-9e0788ffb871bc16e73028703b8aa207668ddcdc7648a380ad2ccf1b1f0ac122.scope: Deactivated successfully.
Oct  7 10:40:25 np0005473739 podman[392732]: 2025-10-07 14:40:25.018582216 +0000 UTC m=+0.046095337 container create 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 10:40:25 np0005473739 systemd[1]: Started libpod-conmon-8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592.scope.
Oct  7 10:40:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:40:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85235eeafa7e982f95872cb38ad0268d8f576fd23b5ba52c6c5c78ef9fbdde14/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:25 np0005473739 podman[392732]: 2025-10-07 14:40:24.998681177 +0000 UTC m=+0.026194318 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:40:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85235eeafa7e982f95872cb38ad0268d8f576fd23b5ba52c6c5c78ef9fbdde14/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85235eeafa7e982f95872cb38ad0268d8f576fd23b5ba52c6c5c78ef9fbdde14/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85235eeafa7e982f95872cb38ad0268d8f576fd23b5ba52c6c5c78ef9fbdde14/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:25 np0005473739 podman[392732]: 2025-10-07 14:40:25.107028548 +0000 UTC m=+0.134541689 container init 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:40:25 np0005473739 podman[392732]: 2025-10-07 14:40:25.114079855 +0000 UTC m=+0.141592976 container start 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:40:25 np0005473739 podman[392732]: 2025-10-07 14:40:25.117456774 +0000 UTC m=+0.144969895 container attach 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.361 2 INFO nova.compute.manager [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Resuming#033[00m
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.363 2 DEBUG nova.objects.instance [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'flavor' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.441 2 DEBUG oslo_concurrency.lockutils [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.442 2 DEBUG oslo_concurrency.lockutils [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquired lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.442 2 DEBUG nova.network.neutron [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.646 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.646 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.750 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.864 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.865 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.875 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:40:25 np0005473739 nova_compute[259550]: 2025-10-07 14:40:25.875 2 INFO nova.compute.claims [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:40:25 np0005473739 peaceful_agnesi[392749]: {
Oct  7 10:40:25 np0005473739 peaceful_agnesi[392749]:    "0": [
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:        {
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "devices": [
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "/dev/loop3"
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            ],
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_name": "ceph_lv0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_size": "21470642176",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "name": "ceph_lv0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "tags": {
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.cluster_name": "ceph",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.crush_device_class": "",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.encrypted": "0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.osd_id": "0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.type": "block",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.vdo": "0"
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            },
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "type": "block",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "vg_name": "ceph_vg0"
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:        }
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:    ],
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:    "1": [
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:        {
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "devices": [
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "/dev/loop4"
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            ],
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_name": "ceph_lv1",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_size": "21470642176",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "name": "ceph_lv1",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "tags": {
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.cluster_name": "ceph",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.crush_device_class": "",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.encrypted": "0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.osd_id": "1",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.type": "block",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.vdo": "0"
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            },
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "type": "block",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "vg_name": "ceph_vg1"
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:        }
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:    ],
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:    "2": [
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:        {
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "devices": [
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "/dev/loop5"
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            ],
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_name": "ceph_lv2",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_size": "21470642176",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "name": "ceph_lv2",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "tags": {
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.cluster_name": "ceph",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.crush_device_class": "",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.encrypted": "0",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.osd_id": "2",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.type": "block",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:                "ceph.vdo": "0"
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            },
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "type": "block",
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:            "vg_name": "ceph_vg2"
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:        }
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]:    ]
Oct  7 10:40:26 np0005473739 peaceful_agnesi[392749]: }
Oct  7 10:40:26 np0005473739 systemd[1]: libpod-8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592.scope: Deactivated successfully.
Oct  7 10:40:26 np0005473739 podman[392732]: 2025-10-07 14:40:26.028448701 +0000 UTC m=+1.055961842 container died 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:40:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-85235eeafa7e982f95872cb38ad0268d8f576fd23b5ba52c6c5c78ef9fbdde14-merged.mount: Deactivated successfully.
Oct  7 10:40:26 np0005473739 podman[392732]: 2025-10-07 14:40:26.083634577 +0000 UTC m=+1.111147698 container remove 8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_agnesi, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 10:40:26 np0005473739 systemd[1]: libpod-conmon-8d1dfdc7a268868bf03ebf92b2f4518043da68f590173e66b48ffd0f4515a592.scope: Deactivated successfully.
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.110 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.323 2 DEBUG nova.compute.manager [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.324 2 DEBUG oslo_concurrency.lockutils [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.325 2 DEBUG oslo_concurrency.lockutils [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.325 2 DEBUG oslo_concurrency.lockutils [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.325 2 DEBUG nova.compute.manager [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.325 2 WARNING nova.compute.manager [req-7fe8927b-a801-407e-8874-0f7d739b50e0 req-1f069625-369f-49a9-b6dd-0e165c5d68b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state suspended and task_state resuming.#033[00m
Oct  7 10:40:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2340: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.474 2 DEBUG nova.network.neutron [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updating instance_info_cache with network_info: [{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.514 2 DEBUG oslo_concurrency.lockutils [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Releasing lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.524 2 DEBUG nova.virt.libvirt.vif [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1564405251',display_name='tempest-TestServerAdvancedOps-server-1564405251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1564405251',id=123,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c1c9329037b74c90a5df2b4a0e0afe75',ramdisk_id='',reservation_id='r-wo40rctq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1206359101',owner_user_name='tempest-TestServerAdvancedOps-1206359101-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:23Z,user_data=None,user_id='57b4a09a91e94b4c8417e522a9a10496',uuid=0ec8fffb-cb39-4dd3-88b8-41467b24be13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.525 2 DEBUG nova.network.os_vif_util [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converting VIF {"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.526 2 DEBUG nova.network.os_vif_util [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.527 2 DEBUG os_vif [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.529 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.536 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:40:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3533922842' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4eb92b42-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.540 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4eb92b42-42, col_values=(('external_ids', {'iface-id': '4eb92b42-4298-4d8e-8455-b37a2972c583', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:3b:52', 'vm-uuid': '0ec8fffb-cb39-4dd3-88b8-41467b24be13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.541 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.542 2 INFO os_vif [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42')#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.561 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.564 2 DEBUG nova.objects.instance [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.568 2 DEBUG nova.compute.provider_tree [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.718 2 DEBUG nova.scheduler.client.report [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:40:26 np0005473739 podman[392935]: 2025-10-07 14:40:26.721474302 +0000 UTC m=+0.047618617 container create 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 10:40:26 np0005473739 kernel: tap4eb92b42-42: entered promiscuous mode
Oct  7 10:40:26 np0005473739 NetworkManager[44949]: <info>  [1759848026.7554] manager: (tap4eb92b42-42): new Tun device (/org/freedesktop/NetworkManager/Devices/541)
Oct  7 10:40:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:26Z|01330|binding|INFO|Claiming lport 4eb92b42-4298-4d8e-8455-b37a2972c583 for this chassis.
Oct  7 10:40:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:26Z|01331|binding|INFO|4eb92b42-4298-4d8e-8455-b37a2972c583: Claiming fa:16:3e:ee:3b:52 10.100.0.4
Oct  7 10:40:26 np0005473739 systemd[1]: Started libpod-conmon-23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4.scope.
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:26 np0005473739 podman[392935]: 2025-10-07 14:40:26.696787245 +0000 UTC m=+0.022931590 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:40:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:26Z|01332|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 ovn-installed in OVS
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.799 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:26Z|01333|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 up in Southbound
Oct  7 10:40:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:26.803 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:26.804 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 bound to our chassis#033[00m
Oct  7 10:40:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:26.805 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  7 10:40:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:26.806 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0e8053eb-c940-4b5c-9718-48e541af15d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.809 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.944s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.809 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:40:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:40:26 np0005473739 systemd-machined[214580]: New machine qemu-155-instance-0000007b.
Oct  7 10:40:26 np0005473739 systemd[1]: Started Virtual Machine qemu-155-instance-0000007b.
Oct  7 10:40:26 np0005473739 podman[392935]: 2025-10-07 14:40:26.829506663 +0000 UTC m=+0.155650978 container init 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 10:40:26 np0005473739 systemd-udevd[392968]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:40:26 np0005473739 podman[392935]: 2025-10-07 14:40:26.837604978 +0000 UTC m=+0.163749273 container start 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 10:40:26 np0005473739 podman[392935]: 2025-10-07 14:40:26.8410561 +0000 UTC m=+0.167200405 container attach 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 10:40:26 np0005473739 hopeful_kepler[392961]: 167 167
Oct  7 10:40:26 np0005473739 systemd[1]: libpod-23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4.scope: Deactivated successfully.
Oct  7 10:40:26 np0005473739 podman[392935]: 2025-10-07 14:40:26.844267385 +0000 UTC m=+0.170411670 container died 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 10:40:26 np0005473739 NetworkManager[44949]: <info>  [1759848026.8498] device (tap4eb92b42-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:40:26 np0005473739 NetworkManager[44949]: <info>  [1759848026.8505] device (tap4eb92b42-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:40:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e1c9e90ccfec0d5c3df0bc0e8757c3e784c66ec1758d81131d7216d57d7dbbd7-merged.mount: Deactivated successfully.
Oct  7 10:40:26 np0005473739 podman[392935]: 2025-10-07 14:40:26.885667245 +0000 UTC m=+0.211811530 container remove 23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_kepler, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:40:26 np0005473739 systemd[1]: libpod-conmon-23bb8058790dc5be24fd63c16f5026e1dbccb8ac991d364e91398e224e7f35a4.scope: Deactivated successfully.
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.996 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:40:26 np0005473739 nova_compute[259550]: 2025-10-07 14:40:26.998 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.073 2 INFO nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:40:27 np0005473739 podman[392997]: 2025-10-07 14:40:27.084009968 +0000 UTC m=+0.042340267 container create ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.107 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:40:27 np0005473739 systemd[1]: Started libpod-conmon-ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e.scope.
Oct  7 10:40:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:40:27 np0005473739 podman[392997]: 2025-10-07 14:40:27.067179661 +0000 UTC m=+0.025509980 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:40:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd8683a5568bdd3ef510ff17f274d64d91e371fee26c12674fff2d0e9bea400/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd8683a5568bdd3ef510ff17f274d64d91e371fee26c12674fff2d0e9bea400/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd8683a5568bdd3ef510ff17f274d64d91e371fee26c12674fff2d0e9bea400/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0bd8683a5568bdd3ef510ff17f274d64d91e371fee26c12674fff2d0e9bea400/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:40:27 np0005473739 podman[392997]: 2025-10-07 14:40:27.174749709 +0000 UTC m=+0.133080028 container init ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:40:27 np0005473739 podman[392997]: 2025-10-07 14:40:27.183662777 +0000 UTC m=+0.141993076 container start ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 10:40:27 np0005473739 podman[392997]: 2025-10-07 14:40:27.188071144 +0000 UTC m=+0.146401463 container attach ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.203 2 DEBUG nova.policy [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.282 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.284 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.285 2 INFO nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Creating image(s)#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.304 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.326 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.349 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.353 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.426 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.427 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.428 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.428 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.447 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.450 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ca309873-104c-4cd4-a609-686d61823e0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.877 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 0ec8fffb-cb39-4dd3-88b8-41467b24be13 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.878 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848027.877388, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.879 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Started (Lifecycle Event)#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.894 2 DEBUG nova.compute.manager [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.895 2 DEBUG nova.objects.instance [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.915 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 ca309873-104c-4cd4-a609-686d61823e0f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.951 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.988 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:40:27 np0005473739 nova_compute[259550]: 2025-10-07 14:40:27.995 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.033 2 INFO nova.virt.libvirt.driver [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance running successfully.#033[00m
Oct  7 10:40:28 np0005473739 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.039 2 DEBUG nova.virt.libvirt.guest [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.039 2 DEBUG nova.compute.manager [None req-d1342708-bd97-459d-bdc6-07b705d79fb7 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.229 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.229 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848027.8832893, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.230 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]: {
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "osd_id": 2,
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "type": "bluestore"
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:    },
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "osd_id": 1,
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "type": "bluestore"
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:    },
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "osd_id": 0,
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:        "type": "bluestore"
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]:    }
Oct  7 10:40:28 np0005473739 nostalgic_pare[393014]: }
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.281 2 DEBUG nova.objects.instance [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid ca309873-104c-4cd4-a609-686d61823e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:28 np0005473739 systemd[1]: libpod-ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e.scope: Deactivated successfully.
Oct  7 10:40:28 np0005473739 systemd[1]: libpod-ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e.scope: Consumed 1.061s CPU time.
Oct  7 10:40:28 np0005473739 podman[392997]: 2025-10-07 14:40:28.29590921 +0000 UTC m=+1.254239509 container died ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.314 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.316 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:40:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0bd8683a5568bdd3ef510ff17f274d64d91e371fee26c12674fff2d0e9bea400-merged.mount: Deactivated successfully.
Oct  7 10:40:28 np0005473739 podman[392997]: 2025-10-07 14:40:28.351128279 +0000 UTC m=+1.309458578 container remove ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_pare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:40:28 np0005473739 systemd[1]: libpod-conmon-ebe6b29d0c48373c4365b74b624386e363ed1503caf87eea9ac192e8d2a9ef6e.scope: Deactivated successfully.
Oct  7 10:40:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:40:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:40:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:40:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:40:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d0e02ef3-a706-48ae-adcb-1c8bbd1a7483 does not exist
Oct  7 10:40:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b9e03ee4-b2cf-4ffb-91dc-7367fb323ae3 does not exist
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.433 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.433 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Ensure instance console log exists: /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.434 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.434 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.435 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2341: 305 pgs: 305 active+clean; 246 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.905 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Successfully created port: bcef16bc-28e8-4e97-a50b-7625a1917ee5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.963 2 DEBUG nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.964 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.964 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.964 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.964 2 DEBUG nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.965 2 WARNING nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state active and task_state None.
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.965 2 DEBUG nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.965 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.965 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.965 2 DEBUG oslo_concurrency.lockutils [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.965 2 DEBUG nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:40:28 np0005473739 nova_compute[259550]: 2025-10-07 14:40:28.966 2 WARNING nova.compute.manager [req-dbe2530d-d5a7-4ce0-9620-8e75eed1ab9a req-451b2d4c-1171-436f-827d-7bb0c5f00659 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state active and task_state None.
Oct  7 10:40:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:40:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.445 2 DEBUG nova.objects.instance [None req-b626a2ac-b30c-4f1f-84d0-77740664bffe 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.471 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848029.4714851, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.472 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Paused (Lifecycle Event)
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.500 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.505 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: active, current task_state: suspending, current DB power_state: 1, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.543 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (suspending). Skip.
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.702 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.702 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.719 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.840 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.840 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.857 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.857 2 INFO nova.compute.claims [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:40:29 np0005473739 nova_compute[259550]: 2025-10-07 14:40:29.993 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Successfully updated port: bcef16bc-28e8-4e97-a50b-7625a1917ee5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:40:30 np0005473739 kernel: tap4eb92b42-42 (unregistering): left promiscuous mode
Oct  7 10:40:30 np0005473739 NetworkManager[44949]: <info>  [1759848030.0034] device (tap4eb92b42-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:40:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:30Z|01334|binding|INFO|Releasing lport 4eb92b42-4298-4d8e-8455-b37a2972c583 from this chassis (sb_readonly=0)
Oct  7 10:40:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:30Z|01335|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 down in Southbound
Oct  7 10:40:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:30Z|01336|binding|INFO|Removing iface tap4eb92b42-42 ovn-installed in OVS
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:40:30 np0005473739 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct  7 10:40:30 np0005473739 systemd[1]: machine-qemu\x2d155\x2dinstance\x2d0000007b.scope: Consumed 2.520s CPU time.
Oct  7 10:40:30 np0005473739 systemd-machined[214580]: Machine qemu-155-instance-0000007b terminated.
Oct  7 10:40:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:30.093 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:40:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:30.094 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 unbound from our chassis
Oct  7 10:40:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:30.095 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599
Oct  7 10:40:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:30.096 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1da09992-9f05-4094-a7eb-c68ba0323da7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.152 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.153 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.153 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.155 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.203 2 DEBUG nova.compute.manager [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-changed-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.204 2 DEBUG nova.compute.manager [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Refreshing instance network info cache due to event network-changed-bcef16bc-28e8-4e97-a50b-7625a1917ee5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.204 2 DEBUG oslo_concurrency.lockutils [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.206 2 DEBUG nova.compute.manager [None req-b626a2ac-b30c-4f1f-84d0-77740664bffe 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.356 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:40:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2342: 305 pgs: 305 active+clean; 259 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.0 MiB/s wr, 156 op/s
Oct  7 10:40:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:40:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2889561579' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.631 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.638 2 DEBUG nova.compute.provider_tree [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.656 2 DEBUG nova.scheduler.client.report [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.759 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.919s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.760 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.811 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.811 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.851 2 INFO nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:40:30 np0005473739 nova_compute[259550]: 2025-10-07 14:40:30.887 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.038 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.039 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.040 2 INFO nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Creating image(s)
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.062 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.088 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.112 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.117 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.165 2 DEBUG nova.policy [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.206 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.207 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.208 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.209 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.234 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.238 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 52ccd902-898f-4809-a231-be5760626c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.491 2 DEBUG nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.491 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.492 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.492 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.492 2 DEBUG nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.493 2 WARNING nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state suspended and task_state None.#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.493 2 DEBUG nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.493 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.493 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.494 2 DEBUG oslo_concurrency.lockutils [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.494 2 DEBUG nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.494 2 WARNING nova.compute.manager [req-2b21d0a9-76fd-4b18-afa8-d03253e26c35 req-ae42a21a-2145-40c6-8321-f19d01ddfe27 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state suspended and task_state None.#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.533 2 DEBUG nova.network.neutron [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Updating instance_info_cache with network_info: [{"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.581 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.582 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Instance network_info: |[{"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.582 2 DEBUG oslo_concurrency.lockutils [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.582 2 DEBUG nova.network.neutron [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Refreshing network info cache for port bcef16bc-28e8-4e97-a50b-7625a1917ee5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.586 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Start _get_guest_xml network_info=[{"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.590 2 WARNING nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.597 2 DEBUG nova.virt.libvirt.host [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.597 2 DEBUG nova.virt.libvirt.host [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.602 2 DEBUG nova.virt.libvirt.host [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.603 2 DEBUG nova.virt.libvirt.host [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.603 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.603 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.604 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.604 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.604 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.604 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.605 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.605 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.605 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.605 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.605 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.606 2 DEBUG nova.virt.hardware [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.609 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.681 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 52ccd902-898f-4809-a231-be5760626c2c_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.739 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.839 2 DEBUG nova.objects.instance [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 52ccd902-898f-4809-a231-be5760626c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.855 2 INFO nova.compute.manager [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Resuming#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.856 2 DEBUG nova.objects.instance [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'flavor' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.904 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.904 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Ensure instance console log exists: /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.905 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.905 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:31 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.905 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:31.999 2 DEBUG oslo_concurrency.lockutils [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.000 2 DEBUG oslo_concurrency.lockutils [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquired lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.000 2 DEBUG nova.network.neutron [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:40:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:40:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4244256990' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.106 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.127 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.131 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2343: 305 pgs: 305 active+clean; 293 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 2.3 MiB/s wr, 133 op/s
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:40:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2906102693' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.614 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.616 2 DEBUG nova.virt.libvirt.vif [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2008963535',display_name='tempest-TestNetworkBasicOps-server-2008963535',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2008963535',id=124,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxD9h4ZkkgrlueJ/+Le33LVwl4afSGXrv2n9VWfucsKy4uVMm2emVKU0zzzyJ1CeXnsYkd5eNJ5QzJC0SrQJgU7HtUo0DaaWNwjsm6StKgjUKnYtMfa+OUzVPQdsT6WdA==',key_name='tempest-TestNetworkBasicOps-266700111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-0d2v23l1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:27Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=ca309873-104c-4cd4-a609-686d61823e0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.616 2 DEBUG nova.network.os_vif_util [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.617 2 DEBUG nova.network.os_vif_util [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.619 2 DEBUG nova.objects.instance [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid ca309873-104c-4cd4-a609-686d61823e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0022150551255127955 of space, bias 1.0, pg target 0.6645165376538387 quantized to 32 (current 32)
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:40:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.668 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  <uuid>ca309873-104c-4cd4-a609-686d61823e0f</uuid>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  <name>instance-0000007c</name>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-2008963535</nova:name>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:40:31</nova:creationTime>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <nova:port uuid="bcef16bc-28e8-4e97-a50b-7625a1917ee5">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.27" ipVersion="4"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <entry name="serial">ca309873-104c-4cd4-a609-686d61823e0f</entry>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <entry name="uuid">ca309873-104c-4cd4-a609-686d61823e0f</entry>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ca309873-104c-4cd4-a609-686d61823e0f_disk">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/ca309873-104c-4cd4-a609-686d61823e0f_disk.config">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:17:5f:df"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <target dev="tapbcef16bc-28"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/console.log" append="off"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:40:32 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:40:32 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:40:32 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:40:32 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.669 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Preparing to wait for external event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.669 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.670 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.670 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.671 2 DEBUG nova.virt.libvirt.vif [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2008963535',display_name='tempest-TestNetworkBasicOps-server-2008963535',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2008963535',id=124,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxD9h4ZkkgrlueJ/+Le33LVwl4afSGXrv2n9VWfucsKy4uVMm2emVKU0zzzyJ1CeXnsYkd5eNJ5QzJC0SrQJgU7HtUo0DaaWNwjsm6StKgjUKnYtMfa+OUzVPQdsT6WdA==',key_name='tempest-TestNetworkBasicOps-266700111',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-0d2v23l1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:27Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=ca309873-104c-4cd4-a609-686d61823e0f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.671 2 DEBUG nova.network.os_vif_util [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.671 2 DEBUG nova.network.os_vif_util [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.672 2 DEBUG os_vif [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.673 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbcef16bc-28, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbcef16bc-28, col_values=(('external_ids', {'iface-id': 'bcef16bc-28e8-4e97-a50b-7625a1917ee5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:17:5f:df', 'vm-uuid': 'ca309873-104c-4cd4-a609-686d61823e0f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:32 np0005473739 NetworkManager[44949]: <info>  [1759848032.6803] manager: (tapbcef16bc-28): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/542)
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.690 2 INFO os_vif [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28')#033[00m
Oct  7 10:40:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:40:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2819913526' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:40:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:40:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2819913526' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:40:32 np0005473739 podman[393592]: 2025-10-07 14:40:32.794109344 +0000 UTC m=+0.066211840 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:40:32 np0005473739 podman[393591]: 2025-10-07 14:40:32.815741299 +0000 UTC m=+0.088217626 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.833 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.833 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.833 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:17:5f:df, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.833 2 INFO nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Using config drive#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.854 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:32 np0005473739 nova_compute[259550]: 2025-10-07 14:40:32.995 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Successfully created port: b4567457-8da4-42a7-b4c0-42724b2c0bc9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:40:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.364 2 INFO nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Creating config drive at /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.369 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnqm9b6xh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.457 2 DEBUG nova.network.neutron [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updating instance_info_cache with network_info: [{"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.485 2 DEBUG oslo_concurrency.lockutils [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Releasing lock "refresh_cache-0ec8fffb-cb39-4dd3-88b8-41467b24be13" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.491 2 DEBUG nova.virt.libvirt.vif [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1564405251',display_name='tempest-TestServerAdvancedOps-server-1564405251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1564405251',id=123,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='c1c9329037b74c90a5df2b4a0e0afe75',ramdisk_id='',reservation_id='r-wo40rctq',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',old_vm_state='active',owner_project_name='tempest-TestServerAdvancedOps-1206359101',owner_user_name='tempest-TestServerAdvancedOps-1206359101-project-member'},tags=<?>,task_state='resuming',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:30Z,user_data=None,user_id='57b4a09a91e94b4c8417e522a9a10496',uuid=0ec8fffb-cb39-4dd3-88b8-41467b24be13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='suspended') vif={"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.492 2 DEBUG nova.network.os_vif_util [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converting VIF {"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.493 2 DEBUG nova.network.os_vif_util [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.493 2 DEBUG os_vif [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.494 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.495 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.498 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.498 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4eb92b42-42, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.499 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap4eb92b42-42, col_values=(('external_ids', {'iface-id': '4eb92b42-4298-4d8e-8455-b37a2972c583', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:ee:3b:52', 'vm-uuid': '0ec8fffb-cb39-4dd3-88b8-41467b24be13'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.500 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.500 2 INFO os_vif [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42')#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.523 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnqm9b6xh" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.551 2 DEBUG nova.storage.rbd_utils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image ca309873-104c-4cd4-a609-686d61823e0f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.555 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config ca309873-104c-4cd4-a609-686d61823e0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.599 2 DEBUG nova.objects.instance [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'numa_topology' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:33 np0005473739 NetworkManager[44949]: <info>  [1759848033.6646] manager: (tap4eb92b42-42): new Tun device (/org/freedesktop/NetworkManager/Devices/543)
Oct  7 10:40:33 np0005473739 kernel: tap4eb92b42-42: entered promiscuous mode
Oct  7 10:40:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:33Z|01337|binding|INFO|Claiming lport 4eb92b42-4298-4d8e-8455-b37a2972c583 for this chassis.
Oct  7 10:40:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:33Z|01338|binding|INFO|4eb92b42-4298-4d8e-8455-b37a2972c583: Claiming fa:16:3e:ee:3b:52 10.100.0.4
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:33.677 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '7', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:33.678 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 bound to our chassis#033[00m
Oct  7 10:40:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:33.678 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  7 10:40:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:33.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dea77ccd-2179-4a5c-90f9-023865dfe416]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.691 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:33Z|01339|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 ovn-installed in OVS
Oct  7 10:40:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:33Z|01340|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 up in Southbound
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:33 np0005473739 systemd-udevd[393700]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:40:33 np0005473739 systemd-machined[214580]: New machine qemu-156-instance-0000007b.
Oct  7 10:40:33 np0005473739 NetworkManager[44949]: <info>  [1759848033.7219] device (tap4eb92b42-42): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:40:33 np0005473739 NetworkManager[44949]: <info>  [1759848033.7231] device (tap4eb92b42-42): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:40:33 np0005473739 systemd[1]: Started Virtual Machine qemu-156-instance-0000007b.
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.828 2 DEBUG nova.network.neutron [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Updated VIF entry in instance network info cache for port bcef16bc-28e8-4e97-a50b-7625a1917ee5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.830 2 DEBUG nova.network.neutron [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Updating instance_info_cache with network_info: [{"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.861 2 DEBUG oslo_concurrency.lockutils [req-bdcb78ff-f42e-4976-ac4c-0fd788f0450b req-5ba7b957-53c3-44ca-bff1-a10d2d7947c9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-ca309873-104c-4cd4-a609-686d61823e0f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.882 2 DEBUG nova.compute.manager [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.882 2 DEBUG oslo_concurrency.lockutils [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.882 2 DEBUG oslo_concurrency.lockutils [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.882 2 DEBUG oslo_concurrency.lockutils [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.883 2 DEBUG nova.compute.manager [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.883 2 WARNING nova.compute.manager [req-0f4fa94b-e01b-42ad-be07-c09af409f5ce req-8e345064-f568-4420-897f-5457f4624986 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state suspended and task_state resuming.#033[00m
Oct  7 10:40:33 np0005473739 nova_compute[259550]: 2025-10-07 14:40:33.895 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Successfully created port: d721f926-d7b0-4e02-bf84-b45b7c5df102 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.218 2 DEBUG oslo_concurrency.processutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config ca309873-104c-4cd4-a609-686d61823e0f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.662s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.218 2 INFO nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Deleting local config drive /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f/disk.config because it was imported into RBD.#033[00m
Oct  7 10:40:34 np0005473739 kernel: tapbcef16bc-28: entered promiscuous mode
Oct  7 10:40:34 np0005473739 systemd-udevd[393702]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:40:34 np0005473739 NetworkManager[44949]: <info>  [1759848034.2689] manager: (tapbcef16bc-28): new Tun device (/org/freedesktop/NetworkManager/Devices/544)
Oct  7 10:40:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:34Z|01341|binding|INFO|Claiming lport bcef16bc-28e8-4e97-a50b-7625a1917ee5 for this chassis.
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:34Z|01342|binding|INFO|bcef16bc-28e8-4e97-a50b-7625a1917ee5: Claiming fa:16:3e:17:5f:df 10.100.0.27
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.280 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:5f:df 10.100.0.27'], port_security=['fa:16:3e:17:5f:df 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'ca309873-104c-4cd4-a609-686d61823e0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '84380b93-9bd8-46c6-8ce7-0eb2636c568f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4af57759-0100-4ebb-81fb-af43dd151fff, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=bcef16bc-28e8-4e97-a50b-7625a1917ee5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.282 161536 INFO neutron.agent.ovn.metadata.agent [-] Port bcef16bc-28e8-4e97-a50b-7625a1917ee5 in datapath cae6e154-5797-4df5-a9e8-545cc6ed0188 bound to our chassis#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.283 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cae6e154-5797-4df5-a9e8-545cc6ed0188#033[00m
Oct  7 10:40:34 np0005473739 NetworkManager[44949]: <info>  [1759848034.2849] device (tapbcef16bc-28): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:40:34 np0005473739 NetworkManager[44949]: <info>  [1759848034.2857] device (tapbcef16bc-28): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:40:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:34Z|01343|binding|INFO|Setting lport bcef16bc-28e8-4e97-a50b-7625a1917ee5 ovn-installed in OVS
Oct  7 10:40:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:34Z|01344|binding|INFO|Setting lport bcef16bc-28e8-4e97-a50b-7625a1917ee5 up in Southbound
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.304 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a1a2372f-7fbd-411b-9d9b-6c889729fe1e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:34 np0005473739 systemd-machined[214580]: New machine qemu-157-instance-0000007c.
Oct  7 10:40:34 np0005473739 systemd[1]: Started Virtual Machine qemu-157-instance-0000007c.
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.332 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a3e237-7900-45c9-8fe6-f82985f4a140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.336 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[98e3bcf4-1845-49fc-b6b0-a74e9fc051bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.363 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7b3c62dc-0a22-4d94-952d-d7b05cb25fd3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.381 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b46b2b31-cf96-444b-9566-2666327a67b6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcae6e154-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:47:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 6, 'rx_bytes': 916, 'tx_bytes': 444, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859712, 'reachable_time': 43552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 393780, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.399 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cebaaaf5-2147-4300-a6ae-52c8d114b414]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcae6e154-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 859727, 'tstamp': 859727}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393782, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tapcae6e154-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 859730, 'tstamp': 859730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 393782, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.401 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcae6e154-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.404 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.405 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcae6e154-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.406 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcae6e154-50, col_values=(('external_ids', {'iface-id': '795a08c5-66c3-453c-a5db-19a02c166ab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:34.407 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2344: 305 pgs: 305 active+clean; 320 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 128 op/s
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.710 2 DEBUG nova.virt.libvirt.host [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Removed pending event for 0ec8fffb-cb39-4dd3-88b8-41467b24be13 due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.711 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848034.7098725, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.711 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Started (Lifecycle Event)#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.731 2 DEBUG nova.compute.manager [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.732 2 DEBUG nova.objects.instance [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.750 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.761 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Started"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.766 2 INFO nova.virt.libvirt.driver [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance running successfully.#033[00m
Oct  7 10:40:34 np0005473739 virtqemud[259430]: argument unsupported: QEMU guest agent is not configured
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.769 2 DEBUG nova.virt.libvirt.guest [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Failed to set time: agent not configured sync_guest_time /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:200#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.769 2 DEBUG nova.compute.manager [None req-260611a6-3bb2-46e8-9de2-56899509c2b8 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.803 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] During sync_power_state the instance has a pending task (resuming). Skip.#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.803 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848034.7165442, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.804 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.831 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:34 np0005473739 nova_compute[259550]: 2025-10-07 14:40:34.835 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: suspended, current task_state: resuming, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:40:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:40:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 10K writes, 49K keys, 10K commit groups, 1.0 writes per commit group, ingest: 0.06 GB, 0.02 MB/s#012Cumulative WAL: 10K writes, 10K syncs, 1.00 writes per sync, written: 0.06 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1394 writes, 6291 keys, 1394 commit groups, 1.0 writes per commit group, ingest: 8.82 MB, 0.01 MB/s#012Interval WAL: 1394 writes, 1394 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     73.5      0.80              0.18        33    0.024       0      0       0.0       0.0#012  L6      1/0    7.90 MB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   4.4    138.7    115.7      2.21              0.73        32    0.069    185K    17K       0.0       0.0#012 Sum      1/0    7.90 MB   0.0      0.3     0.1      0.2       0.3      0.1       0.0   5.4    101.9    104.5      3.01              0.91        65    0.046    185K    17K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   7.3    103.0    104.2      0.49              0.14        10    0.049     36K   2557       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.2       0.2      0.0       0.0   0.0    138.7    115.7      2.21              0.73        32    0.069    185K    17K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     73.7      0.79              0.18        32    0.025       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4200.0 total, 600.0 interval#012Flush(GB): cumulative 0.057, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.31 GB write, 0.07 MB/s write, 0.30 GB read, 0.07 MB/s read, 3.0 seconds#012Interval compaction: 0.05 GB write, 0.08 MB/s write, 0.05 GB read, 0.08 MB/s read, 0.5 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 33.96 MB table_size: 0 occupancy: 18446744073709551615 collections: 8 last_copies: 0 last_secs: 0.000311 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2215,32.58 MB,10.7185%) FilterBlock(66,523.80 KB,0.168263%) IndexBlock(66,888.89 KB,0.285545%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  7 10:40:35 np0005473739 nova_compute[259550]: 2025-10-07 14:40:35.278 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848035.278009, ca309873-104c-4cd4-a609-686d61823e0f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:35 np0005473739 nova_compute[259550]: 2025-10-07 14:40:35.279 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] VM Started (Lifecycle Event)#033[00m
Oct  7 10:40:35 np0005473739 nova_compute[259550]: 2025-10-07 14:40:35.306 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:35 np0005473739 nova_compute[259550]: 2025-10-07 14:40:35.311 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848035.2782521, ca309873-104c-4cd4-a609-686d61823e0f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:35 np0005473739 nova_compute[259550]: 2025-10-07 14:40:35.311 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:40:35 np0005473739 nova_compute[259550]: 2025-10-07 14:40:35.336 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:35 np0005473739 nova_compute[259550]: 2025-10-07 14:40:35.340 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:40:35 np0005473739 nova_compute[259550]: 2025-10-07 14:40:35.359 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:40:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2345: 305 pgs: 305 active+clean; 339 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 67 KiB/s rd, 3.6 MiB/s wr, 63 op/s
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.850 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.851 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.851 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.851 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.851 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.852 2 WARNING nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.852 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.852 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.852 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.852 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.853 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Processing event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.853 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.853 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.853 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.854 2 DEBUG oslo_concurrency.lockutils [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.854 2 DEBUG nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] No waiting events found dispatching network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.854 2 WARNING nova.compute.manager [req-c4f8ddc1-de44-43c8-be96-8bdbe8b79b18 req-5693fcc4-be66-4909-b672-5e3fe4d42e9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received unexpected event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.855 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.858 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848036.858216, ca309873-104c-4cd4-a609-686d61823e0f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.858 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.860 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.867 2 INFO nova.virt.libvirt.driver [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Instance spawned successfully.#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.868 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.898 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.904 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.909 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.910 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.911 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.911 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.911 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.912 2 DEBUG nova.virt.libvirt.driver [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:36 np0005473739 nova_compute[259550]: 2025-10-07 14:40:36.988 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:40:37 np0005473739 nova_compute[259550]: 2025-10-07 14:40:37.036 2 INFO nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Took 9.75 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:40:37 np0005473739 nova_compute[259550]: 2025-10-07 14:40:37.036 2 DEBUG nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:37 np0005473739 nova_compute[259550]: 2025-10-07 14:40:37.118 2 INFO nova.compute.manager [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Took 11.29 seconds to build instance.#033[00m
Oct  7 10:40:37 np0005473739 nova_compute[259550]: 2025-10-07 14:40:37.151 2 DEBUG oslo_concurrency.lockutils [None req-784b0f8d-c53c-4f0d-8de3-f9685cae0694 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.505s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:37 np0005473739 nova_compute[259550]: 2025-10-07 14:40:37.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:37 np0005473739 nova_compute[259550]: 2025-10-07 14:40:37.731 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Successfully updated port: b4567457-8da4-42a7-b4c0-42724b2c0bc9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:40:37 np0005473739 nova_compute[259550]: 2025-10-07 14:40:37.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2346: 305 pgs: 305 active+clean; 339 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 3.6 MiB/s wr, 62 op/s
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.238 2 DEBUG nova.compute.manager [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.239 2 DEBUG nova.compute.manager [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing instance network info cache due to event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.240 2 DEBUG oslo_concurrency.lockutils [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.240 2 DEBUG oslo_concurrency.lockutils [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.241 2 DEBUG nova.network.neutron [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing network info cache for port b4567457-8da4-42a7-b4c0-42724b2c0bc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.491 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.491 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.492 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.492 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.493 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.494 2 INFO nova.compute.manager [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Terminating instance#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.495 2 DEBUG nova.compute.manager [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.517 2 DEBUG nova.network.neutron [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:40:39 np0005473739 kernel: tap4eb92b42-42 (unregistering): left promiscuous mode
Oct  7 10:40:39 np0005473739 NetworkManager[44949]: <info>  [1759848039.5507] device (tap4eb92b42-42): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:40:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:39Z|01345|binding|INFO|Releasing lport 4eb92b42-4298-4d8e-8455-b37a2972c583 from this chassis (sb_readonly=0)
Oct  7 10:40:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:39Z|01346|binding|INFO|Setting lport 4eb92b42-4298-4d8e-8455-b37a2972c583 down in Southbound
Oct  7 10:40:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:39Z|01347|binding|INFO|Removing iface tap4eb92b42-42 ovn-installed in OVS
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.565 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:39.598 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:ee:3b:52 10.100.0.4'], port_security=['fa:16:3e:ee:3b:52 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '0ec8fffb-cb39-4dd3-88b8-41467b24be13', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'c1c9329037b74c90a5df2b4a0e0afe75', 'neutron:revision_number': '8', 'neutron:security_group_ids': 'b1762bba-55ab-4449-bafb-3376afdfdbc4', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a8112f49-4ba8-42c2-9619-6731a09d69e4, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=4eb92b42-4298-4d8e-8455-b37a2972c583) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:39.599 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 4eb92b42-4298-4d8e-8455-b37a2972c583 in datapath 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 unbound from our chassis#033[00m
Oct  7 10:40:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:39.601 161536 DEBUG neutron.agent.ovn.metadata.agent [-] There is no metadata port for network 70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8 or it has no MAC or IP addresses configured, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:599#033[00m
Oct  7 10:40:39 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:39.602 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20be9b0c-9b38-483e-a464-d90af4051565]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:39 np0005473739 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Deactivated successfully.
Oct  7 10:40:39 np0005473739 systemd[1]: machine-qemu\x2d156\x2dinstance\x2d0000007b.scope: Consumed 5.501s CPU time.
Oct  7 10:40:39 np0005473739 systemd-machined[214580]: Machine qemu-156-instance-0000007b terminated.
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.730 2 INFO nova.virt.libvirt.driver [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Instance destroyed successfully.#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.730 2 DEBUG nova.objects.instance [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lazy-loading 'resources' on Instance uuid 0ec8fffb-cb39-4dd3-88b8-41467b24be13 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.755 2 DEBUG nova.virt.libvirt.vif [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:07Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerAdvancedOps-server-1564405251',display_name='tempest-TestServerAdvancedOps-server-1564405251',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserveradvancedops-server-1564405251',id=123,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='c1c9329037b74c90a5df2b4a0e0afe75',ramdisk_id='',reservation_id='r-wo40rctq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerAdvancedOps-1206359101',owner_user_name='tempest-TestServerAdvancedOps-1206359101-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:34Z,user_data=None,user_id='57b4a09a91e94b4c8417e522a9a10496',uuid=0ec8fffb-cb39-4dd3-88b8-41467b24be13,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.756 2 DEBUG nova.network.os_vif_util [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converting VIF {"id": "4eb92b42-4298-4d8e-8455-b37a2972c583", "address": "fa:16:3e:ee:3b:52", "network": {"id": "70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8", "bridge": "br-int", "label": "tempest-TestServerAdvancedOps-2107376542-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": false}}], "meta": {"injected": false, "tenant_id": "c1c9329037b74c90a5df2b4a0e0afe75", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap4eb92b42-42", "ovs_interfaceid": "4eb92b42-4298-4d8e-8455-b37a2972c583", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.756 2 DEBUG nova.network.os_vif_util [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.758 2 DEBUG os_vif [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.760 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4eb92b42-42, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:39 np0005473739 nova_compute[259550]: 2025-10-07 14:40:39.765 2 INFO os_vif [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:ee:3b:52,bridge_name='br-int',has_traffic_filtering=True,id=4eb92b42-4298-4d8e-8455-b37a2972c583,network=Network(70b3fc65-5527-4cbc-9bcb-a7e77d4e40f8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap4eb92b42-42')#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.088 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Successfully updated port: d721f926-d7b0-4e02-bf84-b45b7c5df102 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.117 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.124 2 DEBUG nova.network.neutron [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.143 2 DEBUG oslo_concurrency.lockutils [req-d8a63f8e-602b-47e7-a3d6-de213cea4894 req-ec251655-2f9a-4cdc-9692-9dd868178e02 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.143 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.144 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.204 2 INFO nova.virt.libvirt.driver [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Deleting instance files /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13_del#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.205 2 INFO nova.virt.libvirt.driver [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Deletion of /var/lib/nova/instances/0ec8fffb-cb39-4dd3-88b8-41467b24be13_del complete#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.247 2 DEBUG nova.compute.manager [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.248 2 DEBUG oslo_concurrency.lockutils [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.249 2 DEBUG oslo_concurrency.lockutils [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.249 2 DEBUG oslo_concurrency.lockutils [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.250 2 DEBUG nova.compute.manager [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.250 2 DEBUG nova.compute.manager [req-8396bc7b-88ed-43a8-98d1-1ea12affa1c4 req-edf743ad-ca13-4c07-b5bb-9809c148d406 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-unplugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.276 2 INFO nova.compute.manager [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.277 2 DEBUG oslo.service.loopingcall [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.278 2 DEBUG nova.compute.manager [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.278 2 DEBUG nova.network.neutron [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:40:40 np0005473739 nova_compute[259550]: 2025-10-07 14:40:40.392 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:40:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2347: 305 pgs: 305 active+clean; 339 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 977 KiB/s rd, 3.6 MiB/s wr, 104 op/s
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.016 2 DEBUG nova.network.neutron [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.034 2 INFO nova.compute.manager [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Took 0.76 seconds to deallocate network for instance.#033[00m
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.078 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.079 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.216 2 DEBUG oslo_concurrency.processutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.340 2 DEBUG nova.compute.manager [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-changed-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.340 2 DEBUG nova.compute.manager [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing instance network info cache due to event network-changed-d721f926-d7b0-4e02-bf84-b45b7c5df102. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.341 2 DEBUG oslo_concurrency.lockutils [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:40:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:40:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/392849799' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.716 2 DEBUG oslo_concurrency.processutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.721 2 DEBUG nova.compute.provider_tree [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.755 2 DEBUG nova.scheduler.client.report [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.784 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.705s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:41 np0005473739 nova_compute[259550]: 2025-10-07 14:40:41.867 2 INFO nova.scheduler.client.report [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Deleted allocations for instance 0ec8fffb-cb39-4dd3-88b8-41467b24be13#033[00m
Oct  7 10:40:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2348: 305 pgs: 305 active+clean; 336 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 2.7 MiB/s wr, 124 op/s
Oct  7 10:40:42 np0005473739 nova_compute[259550]: 2025-10-07 14:40:42.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.031 2 DEBUG nova.compute.manager [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.032 2 DEBUG oslo_concurrency.lockutils [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.032 2 DEBUG oslo_concurrency.lockutils [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.032 2 DEBUG oslo_concurrency.lockutils [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.033 2 DEBUG nova.compute.manager [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] No waiting events found dispatching network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.033 2 WARNING nova.compute.manager [req-24320672-abbe-47e5-84ea-8aefc7bc1070 req-a5dca7f1-4402-4821-8e7b-9a293f22ee7d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received unexpected event network-vif-plugged-4eb92b42-4298-4d8e-8455-b37a2972c583 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:40:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.094 2 DEBUG oslo_concurrency.lockutils [None req-7d6e0ee7-1e46-42b3-b66f-ce8ee4becdc5 57b4a09a91e94b4c8417e522a9a10496 c1c9329037b74c90a5df2b4a0e0afe75 - - default default] Lock "0ec8fffb-cb39-4dd3-88b8-41467b24be13" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.310 2 DEBUG nova.network.neutron [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", 
"type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.333 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.333 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance network_info: |[{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", 
"version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.334 2 DEBUG oslo_concurrency.lockutils [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.334 2 DEBUG nova.network.neutron [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing network info cache for port d721f926-d7b0-4e02-bf84-b45b7c5df102 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.339 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Start _get_guest_xml network_info=[{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": 
"gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.346 2 WARNING nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.355 2 DEBUG nova.virt.libvirt.host [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.356 2 DEBUG nova.virt.libvirt.host [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.360 2 DEBUG nova.virt.libvirt.host [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.361 2 DEBUG nova.virt.libvirt.host [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.362 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.362 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.362 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.363 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.363 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.363 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.363 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.364 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.364 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.364 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.365 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.365 2 DEBUG nova.virt.hardware [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.369 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:43Z|01348|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct  7 10:40:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:43Z|01349|binding|INFO|Releasing lport 795a08c5-66c3-453c-a5db-19a02c166ab7 from this chassis (sb_readonly=0)
Oct  7 10:40:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:43Z|01350|binding|INFO|Releasing lport 4cc97c0a-633b-48bc-94c4-6f8ac1f61c66 from this chassis (sb_readonly=0)
Oct  7 10:40:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:43Z|01351|binding|INFO|Releasing lport 763708cd-58bb-4680-a4f7-042aa711a366 from this chassis (sb_readonly=0)
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:40:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2441973241' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.835 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.866 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:43 np0005473739 nova_compute[259550]: 2025-10-07 14:40:43.872 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:40:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1045589483' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.328 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.330 2 DEBUG nova.virt.libvirt.vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:30Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.331 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.332 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.333 2 DEBUG nova.virt.libvirt.vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:30Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.333 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.334 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.335 2 DEBUG nova.objects.instance [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 52ccd902-898f-4809-a231-be5760626c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.351 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  <uuid>52ccd902-898f-4809-a231-be5760626c2c</uuid>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  <name>instance-0000007d</name>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-1539930206</nova:name>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:40:43</nova:creationTime>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <nova:port uuid="b4567457-8da4-42a7-b4c0-42724b2c0bc9">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <nova:port uuid="d721f926-d7b0-4e02-bf84-b45b7c5df102">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe15:f003" ipVersion="6"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe15:f003" ipVersion="6"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <entry name="serial">52ccd902-898f-4809-a231-be5760626c2c</entry>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <entry name="uuid">52ccd902-898f-4809-a231-be5760626c2c</entry>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/52ccd902-898f-4809-a231-be5760626c2c_disk">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/52ccd902-898f-4809-a231-be5760626c2c_disk.config">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:09:cf:e1"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <target dev="tapb4567457-8d"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:15:f0:03"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <target dev="tapd721f926-d7"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/console.log" append="off"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:40:44 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:40:44 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:40:44 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:40:44 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.352 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Preparing to wait for external event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.353 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.353 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.353 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.353 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Preparing to wait for external event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.354 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.354 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.354 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.355 2 DEBUG nova.virt.libvirt.vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:30Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.355 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.356 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.356 2 DEBUG os_vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.357 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb4567457-8d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.360 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb4567457-8d, col_values=(('external_ids', {'iface-id': 'b4567457-8da4-42a7-b4c0-42724b2c0bc9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:cf:e1', 'vm-uuid': '52ccd902-898f-4809-a231-be5760626c2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:44 np0005473739 NetworkManager[44949]: <info>  [1759848044.3631] manager: (tapb4567457-8d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/545)
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.369 2 INFO os_vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d')#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.370 2 DEBUG nova.virt.libvirt.vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:40:30Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.370 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.371 2 DEBUG nova.network.os_vif_util [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.371 2 DEBUG os_vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.372 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.376 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd721f926-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.377 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd721f926-d7, col_values=(('external_ids', {'iface-id': 'd721f926-d7b0-4e02-bf84-b45b7c5df102', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:15:f0:03', 'vm-uuid': '52ccd902-898f-4809-a231-be5760626c2c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:44 np0005473739 NetworkManager[44949]: <info>  [1759848044.3808] manager: (tapd721f926-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/546)
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.387 2 INFO os_vif [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7')#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.440 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.440 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.440 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:09:cf:e1, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.441 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:15:f0:03, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.441 2 INFO nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Using config drive#033[00m
Oct  7 10:40:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2349: 305 pgs: 305 active+clean; 293 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 132 op/s
Oct  7 10:40:44 np0005473739 nova_compute[259550]: 2025-10-07 14:40:44.467 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:45 np0005473739 nova_compute[259550]: 2025-10-07 14:40:45.745 2 INFO nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Creating config drive at /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config#033[00m
Oct  7 10:40:45 np0005473739 nova_compute[259550]: 2025-10-07 14:40:45.750 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqlkmi5v1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:45 np0005473739 nova_compute[259550]: 2025-10-07 14:40:45.895 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqlkmi5v1" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:45 np0005473739 nova_compute[259550]: 2025-10-07 14:40:45.927 2 DEBUG nova.storage.rbd_utils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 52ccd902-898f-4809-a231-be5760626c2c_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:40:45 np0005473739 nova_compute[259550]: 2025-10-07 14:40:45.931 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config 52ccd902-898f-4809-a231-be5760626c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.088 2 DEBUG oslo_concurrency.processutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config 52ccd902-898f-4809-a231-be5760626c2c_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.157s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.089 2 INFO nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Deleting local config drive /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c/disk.config because it was imported into RBD.#033[00m
Oct  7 10:40:46 np0005473739 NetworkManager[44949]: <info>  [1759848046.1398] manager: (tapb4567457-8d): new Tun device (/org/freedesktop/NetworkManager/Devices/547)
Oct  7 10:40:46 np0005473739 kernel: tapb4567457-8d: entered promiscuous mode
Oct  7 10:40:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:46Z|01352|binding|INFO|Claiming lport b4567457-8da4-42a7-b4c0-42724b2c0bc9 for this chassis.
Oct  7 10:40:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:46Z|01353|binding|INFO|b4567457-8da4-42a7-b4c0-42724b2c0bc9: Claiming fa:16:3e:09:cf:e1 10.100.0.6
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:46 np0005473739 NetworkManager[44949]: <info>  [1759848046.1634] manager: (tapd721f926-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/548)
Oct  7 10:40:46 np0005473739 kernel: tapd721f926-d7: entered promiscuous mode
Oct  7 10:40:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:46Z|01354|binding|INFO|Setting lport b4567457-8da4-42a7-b4c0-42724b2c0bc9 ovn-installed in OVS
Oct  7 10:40:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:46Z|01355|if_status|INFO|Not updating pb chassis for d721f926-d7b0-4e02-bf84-b45b7c5df102 now as sb is readonly
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:46 np0005473739 systemd-udevd[394042]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:40:46 np0005473739 systemd-udevd[394043]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:46 np0005473739 NetworkManager[44949]: <info>  [1759848046.2035] device (tapd721f926-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:40:46 np0005473739 NetworkManager[44949]: <info>  [1759848046.2042] device (tapd721f926-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:40:46 np0005473739 NetworkManager[44949]: <info>  [1759848046.2064] device (tapb4567457-8d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:40:46 np0005473739 NetworkManager[44949]: <info>  [1759848046.2072] device (tapb4567457-8d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.206 2 DEBUG nova.network.neutron [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updated VIF entry in instance network info cache for port d721f926-d7b0-4e02-bf84-b45b7c5df102. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.207 2 DEBUG nova.network.neutron [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:46 np0005473739 systemd-machined[214580]: New machine qemu-158-instance-0000007d.
Oct  7 10:40:46 np0005473739 systemd[1]: Started Virtual Machine qemu-158-instance-0000007d.
Oct  7 10:40:46 np0005473739 podman[394019]: 2025-10-07 14:40:46.242737654 +0000 UTC m=+0.068595805 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent)
Oct  7 10:40:46 np0005473739 podman[394022]: 2025-10-07 14:40:46.278194866 +0000 UTC m=+0.106089600 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller)
Oct  7 10:40:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:46Z|01356|binding|INFO|Claiming lport d721f926-d7b0-4e02-bf84-b45b7c5df102 for this chassis.
Oct  7 10:40:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:46Z|01357|binding|INFO|d721f926-d7b0-4e02-bf84-b45b7c5df102: Claiming fa:16:3e:15:f0:03 2001:db8:0:1:f816:3eff:fe15:f003 2001:db8::f816:3eff:fe15:f003
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.366 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:cf:e1 10.100.0.6'], port_security=['fa:16:3e:09:cf:e1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '52ccd902-898f-4809-a231-be5760626c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b59ffdd2-4285-47f2-a931-fca691d1c031', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c45d6c4-6a15-4c19-8f6b-673e9b96b82d, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b4567457-8da4-42a7-b4c0-42724b2c0bc9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:46Z|01358|binding|INFO|Setting lport b4567457-8da4-42a7-b4c0-42724b2c0bc9 up in Southbound
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.368 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b4567457-8da4-42a7-b4c0-42724b2c0bc9 in datapath b59ffdd2-4285-47f2-a931-fca691d1c031 bound to our chassis#033[00m
Oct  7 10:40:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:46Z|01359|binding|INFO|Setting lport d721f926-d7b0-4e02-bf84-b45b7c5df102 ovn-installed in OVS
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.370 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b59ffdd2-4285-47f2-a931-fca691d1c031#033[00m
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.379 2 DEBUG oslo_concurrency.lockutils [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.379 2 DEBUG nova.compute.manager [req-7e832b63-6b8f-405b-bec7-6a1a6bfcd6ff req-de944dbc-4ff8-49cc-b116-e3563fa3cdcc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Received event network-vif-deleted-4eb92b42-4298-4d8e-8455-b37a2972c583 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.388 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[efcc201f-7490-4186-bfc7-415313c37d8a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.429 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ef775a81-f2a0-4c42-8390-3a0abc185185]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.441 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e47bf219-7b11-463e-ad5e-75b0a9bc6fc8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2350: 305 pgs: 305 active+clean; 293 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 613 KiB/s wr, 106 op/s
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.488 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:f0:03 2001:db8:0:1:f816:3eff:fe15:f003 2001:db8::f816:3eff:fe15:f003'], port_security=['fa:16:3e:15:f0:03 2001:db8:0:1:f816:3eff:fe15:f003 2001:db8::f816:3eff:fe15:f003'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe15:f003/64 2001:db8::f816:3eff:fe15:f003/64', 'neutron:device_id': '52ccd902-898f-4809-a231-be5760626c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133edfbf-d913-46a7-a148-1a6e26213678, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d721f926-d7b0-4e02-bf84-b45b7c5df102) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:40:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:46Z|01360|binding|INFO|Setting lport d721f926-d7b0-4e02-bf84-b45b7c5df102 up in Southbound
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.488 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1cb2d72a-6a97-4ef8-9ac7-186f3c570e7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.509 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7e3e0828-a576-491e-b565-1decb21aad28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb59ffdd2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:3d:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858594, 'reachable_time': 15330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394086, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.538 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa78ab2c-9136-41c5-87b5-bf566d6051ff]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb59ffdd2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858606, 'tstamp': 858606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394087, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb59ffdd2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858610, 'tstamp': 858610}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394087, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.540 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb59ffdd2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb59ffdd2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.544 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.545 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb59ffdd2-40, col_values=(('external_ids', {'iface-id': '4cc97c0a-633b-48bc-94c4-6f8ac1f61c66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.545 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.546 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d721f926-d7b0-4e02-bf84-b45b7c5df102 in datapath abe90ba0-a518-4cef-a49b-de57485faec5 unbound from our chassis#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.548 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abe90ba0-a518-4cef-a49b-de57485faec5#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.565 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4946aae1-6803-4541-a8cc-03cd4c5c09da]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.610 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9012511d-6456-48f6-af5c-f79cfc3fbdec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.614 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[397b0282-d830-493b-9d08-2b08bfbaf262]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.663 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[76533887-81cd-4218-b112-9dcaf7aaf669]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.689 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6467f772-b421-4b1b-8f10-0112330f4c42]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabe90ba0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:a0:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 23, 'tx_packets': 4, 'rx_bytes': 2146, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858693, 'reachable_time': 40982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 23, 'inoctets': 1824, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 23, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1824, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 23, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394093, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.709 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3549f328-f21d-4394-8e11-88f96ff42188]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabe90ba0-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858707, 'tstamp': 858707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394094, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabe90ba0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:46 np0005473739 nova_compute[259550]: 2025-10-07 14:40:46.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.714 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabe90ba0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.714 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.714 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabe90ba0-a0, col_values=(('external_ids', {'iface-id': '763708cd-58bb-4680-a4f7-042aa711a366'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:40:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:40:46.715 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:40:47 np0005473739 nova_compute[259550]: 2025-10-07 14:40:47.133 2 DEBUG nova.compute.manager [req-a8596417-718a-4142-b28c-8aa1378f081b req-b91c65b8-7e33-4863-a408-2329f57cd8b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:47 np0005473739 nova_compute[259550]: 2025-10-07 14:40:47.134 2 DEBUG oslo_concurrency.lockutils [req-a8596417-718a-4142-b28c-8aa1378f081b req-b91c65b8-7e33-4863-a408-2329f57cd8b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:47 np0005473739 nova_compute[259550]: 2025-10-07 14:40:47.134 2 DEBUG oslo_concurrency.lockutils [req-a8596417-718a-4142-b28c-8aa1378f081b req-b91c65b8-7e33-4863-a408-2329f57cd8b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:47 np0005473739 nova_compute[259550]: 2025-10-07 14:40:47.134 2 DEBUG oslo_concurrency.lockutils [req-a8596417-718a-4142-b28c-8aa1378f081b req-b91c65b8-7e33-4863-a408-2329f57cd8b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:47 np0005473739 nova_compute[259550]: 2025-10-07 14:40:47.135 2 DEBUG nova.compute.manager [req-a8596417-718a-4142-b28c-8aa1378f081b req-b91c65b8-7e33-4863-a408-2329f57cd8b2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Processing event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:40:47 np0005473739 nova_compute[259550]: 2025-10-07 14:40:47.239 2 DEBUG nova.compute.manager [req-bdf7f985-db8d-4fdd-90ff-07829bb8fe77 req-f4dcd3c2-a7a7-4608-8c39-b4c8d39ee6e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:47 np0005473739 nova_compute[259550]: 2025-10-07 14:40:47.239 2 DEBUG oslo_concurrency.lockutils [req-bdf7f985-db8d-4fdd-90ff-07829bb8fe77 req-f4dcd3c2-a7a7-4608-8c39-b4c8d39ee6e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:47 np0005473739 nova_compute[259550]: 2025-10-07 14:40:47.240 2 DEBUG oslo_concurrency.lockutils [req-bdf7f985-db8d-4fdd-90ff-07829bb8fe77 req-f4dcd3c2-a7a7-4608-8c39-b4c8d39ee6e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:47 np0005473739 nova_compute[259550]: 2025-10-07 14:40:47.240 2 DEBUG oslo_concurrency.lockutils [req-bdf7f985-db8d-4fdd-90ff-07829bb8fe77 req-f4dcd3c2-a7a7-4608-8c39-b4c8d39ee6e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:47 np0005473739 nova_compute[259550]: 2025-10-07 14:40:47.240 2 DEBUG nova.compute.manager [req-bdf7f985-db8d-4fdd-90ff-07829bb8fe77 req-f4dcd3c2-a7a7-4608-8c39-b4c8d39ee6e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Processing event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:40:47 np0005473739 nova_compute[259550]: 2025-10-07 14:40:47.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.014 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848048.013203, 52ccd902-898f-4809-a231-be5760626c2c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.014 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] VM Started (Lifecycle Event)#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.016 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance event wait completed in 0 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.021 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.024 2 INFO nova.virt.libvirt.driver [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance spawned successfully.#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.025 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.063 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.067 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:40:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.098 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.098 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.099 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.099 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.100 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.100 2 DEBUG nova.virt.libvirt.driver [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.117 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.118 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848048.0136638, 52ccd902-898f-4809-a231-be5760626c2c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.118 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.240 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.244 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848048.0197148, 52ccd902-898f-4809-a231-be5760626c2c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.245 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.316 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.319 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.359 2 INFO nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Took 17.32 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.360 2 DEBUG nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.415 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:40:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2351: 305 pgs: 305 active+clean; 293 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 16 KiB/s wr, 103 op/s
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.541 2 INFO nova.compute.manager [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Took 18.73 seconds to build instance.#033[00m
Oct  7 10:40:48 np0005473739 nova_compute[259550]: 2025-10-07 14:40:48.641 2 DEBUG oslo_concurrency.lockutils [None req-6f3f33f9-a82c-426b-bbef-886ac04a3e00 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.939s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.379 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.382 2 DEBUG nova.compute.manager [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.383 2 DEBUG oslo_concurrency.lockutils [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.383 2 DEBUG oslo_concurrency.lockutils [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.383 2 DEBUG oslo_concurrency.lockutils [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.384 2 DEBUG nova.compute.manager [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.384 2 WARNING nova.compute.manager [req-9e03c94c-5344-4e80-8e33-d889f88c2799 req-6b612930-d213-4d5d-bf98-76687c9f24a7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received unexpected event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.466 2 DEBUG nova.compute.manager [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.467 2 DEBUG oslo_concurrency.lockutils [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.467 2 DEBUG oslo_concurrency.lockutils [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.467 2 DEBUG oslo_concurrency.lockutils [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.467 2 DEBUG nova.compute.manager [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.468 2 WARNING nova.compute.manager [req-d4d0c819-2f98-4cd4-a8be-bea63284fe53 req-3ed75a6a-5d55-4f6e-a650-627af4570f7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received unexpected event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:40:49 np0005473739 nova_compute[259550]: 2025-10-07 14:40:49.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:40:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2352: 305 pgs: 305 active+clean; 293 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.4 MiB/s rd, 32 KiB/s wr, 126 op/s
Oct  7 10:40:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2353: 305 pgs: 305 active+clean; 294 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 570 KiB/s wr, 123 op/s
Oct  7 10:40:52 np0005473739 nova_compute[259550]: 2025-10-07 14:40:52.506 2 DEBUG nova.compute.manager [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:40:52 np0005473739 nova_compute[259550]: 2025-10-07 14:40:52.507 2 DEBUG nova.compute.manager [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing instance network info cache due to event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:40:52 np0005473739 nova_compute[259550]: 2025-10-07 14:40:52.507 2 DEBUG oslo_concurrency.lockutils [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:40:52 np0005473739 nova_compute[259550]: 2025-10-07 14:40:52.507 2 DEBUG oslo_concurrency.lockutils [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:40:52 np0005473739 nova_compute[259550]: 2025-10-07 14:40:52.507 2 DEBUG nova.network.neutron [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing network info cache for port b4567457-8da4-42a7-b4c0-42724b2c0bc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:40:52 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:52Z|00154|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:17:5f:df 10.100.0.27
Oct  7 10:40:52 np0005473739 ovn_controller[151684]: 2025-10-07T14:40:52Z|00155|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:17:5f:df 10.100.0.27
Oct  7 10:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:40:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:40:52 np0005473739 nova_compute[259550]: 2025-10-07 14:40:52.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:52 np0005473739 nova_compute[259550]: 2025-10-07 14:40:52.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:40:52 np0005473739 nova_compute[259550]: 2025-10-07 14:40:52.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #114. Immutable memtables: 0.
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.078395) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 67] Flushing memtable with next log file: 114
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053078477, "job": 67, "event": "flush_started", "num_memtables": 1, "num_entries": 1297, "num_deletes": 251, "total_data_size": 1920941, "memory_usage": 1946816, "flush_reason": "Manual Compaction"}
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 67] Level-0 flush table #115: started
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053104140, "cf_name": "default", "job": 67, "event": "table_file_creation", "file_number": 115, "file_size": 1890731, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 48456, "largest_seqno": 49752, "table_properties": {"data_size": 1884622, "index_size": 3376, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1669, "raw_key_size": 13241, "raw_average_key_size": 20, "raw_value_size": 1872280, "raw_average_value_size": 2828, "num_data_blocks": 151, "num_entries": 662, "num_filter_entries": 662, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759847929, "oldest_key_time": 1759847929, "file_creation_time": 1759848053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 115, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 67] Flush lasted 25796 microseconds, and 7005 cpu microseconds.
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.104196) [db/flush_job.cc:967] [default] [JOB 67] Level-0 flush table #115: 1890731 bytes OK
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.104221) [db/memtable_list.cc:519] [default] Level-0 commit table #115 started
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.106333) [db/memtable_list.cc:722] [default] Level-0 commit table #115: memtable #1 done
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.106348) EVENT_LOG_v1 {"time_micros": 1759848053106343, "job": 67, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.106366) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 67] Try to delete WAL files size 1915067, prev total WAL file size 1915067, number of live WAL files 2.
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000111.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.107319) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034353138' seq:72057594037927935, type:22 .. '7061786F730034373730' seq:0, type:0; will stop at (end)
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 68] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 67 Base level 0, inputs: [115(1846KB)], [113(8094KB)]
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053107383, "job": 68, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [115], "files_L6": [113], "score": -1, "input_data_size": 10179107, "oldest_snapshot_seqno": -1}
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 68] Generated table #116: 6934 keys, 8484003 bytes, temperature: kUnknown
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053152531, "cf_name": "default", "job": 68, "event": "table_file_creation", "file_number": 116, "file_size": 8484003, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8439498, "index_size": 26089, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 17349, "raw_key_size": 180986, "raw_average_key_size": 26, "raw_value_size": 8317148, "raw_average_value_size": 1199, "num_data_blocks": 1014, "num_entries": 6934, "num_filter_entries": 6934, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848053, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 116, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.152743) [db/compaction/compaction_job.cc:1663] [default] [JOB 68] Compacted 1@0 + 1@6 files to L6 => 8484003 bytes
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.154211) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 225.2 rd, 187.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.8, 7.9 +0.0 blob) out(8.1 +0.0 blob), read-write-amplify(9.9) write-amplify(4.5) OK, records in: 7448, records dropped: 514 output_compression: NoCompression
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.154230) EVENT_LOG_v1 {"time_micros": 1759848053154220, "job": 68, "event": "compaction_finished", "compaction_time_micros": 45210, "compaction_time_cpu_micros": 22136, "output_level": 6, "num_output_files": 1, "total_output_size": 8484003, "num_input_records": 7448, "num_output_records": 6934, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000115.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053154631, "job": 68, "event": "table_file_deletion", "file_number": 115}
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000113.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848053155977, "job": 68, "event": "table_file_deletion", "file_number": 113}
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.107225) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.156054) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.156059) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.156061) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.156062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:40:53 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:40:53.156064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:40:54 np0005473739 nova_compute[259550]: 2025-10-07 14:40:54.236 2 DEBUG nova.network.neutron [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updated VIF entry in instance network info cache for port b4567457-8da4-42a7-b4c0-42724b2c0bc9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:40:54 np0005473739 nova_compute[259550]: 2025-10-07 14:40:54.238 2 DEBUG nova.network.neutron [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:40:54 np0005473739 nova_compute[259550]: 2025-10-07 14:40:54.360 2 DEBUG oslo_concurrency.lockutils [req-6782ff18-c0ef-4b81-b2ff-785f18072ab4 req-07222c5f-9e90-4f36-926d-08350ec731f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:40:54 np0005473739 nova_compute[259550]: 2025-10-07 14:40:54.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2354: 305 pgs: 305 active+clean; 320 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.6 MiB/s wr, 125 op/s
Oct  7 10:40:54 np0005473739 nova_compute[259550]: 2025-10-07 14:40:54.728 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848039.7273135, 0ec8fffb-cb39-4dd3-88b8-41467b24be13 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:40:54 np0005473739 nova_compute[259550]: 2025-10-07 14:40:54.729 2 INFO nova.compute.manager [-] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:40:54 np0005473739 nova_compute[259550]: 2025-10-07 14:40:54.749 2 DEBUG nova.compute.manager [None req-ae463f8e-084d-40e5-8426-4b371a9ae537 - - - - - -] [instance: 0ec8fffb-cb39-4dd3-88b8-41467b24be13] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:40:54 np0005473739 nova_compute[259550]: 2025-10-07 14:40:54.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:40:54 np0005473739 nova_compute[259550]: 2025-10-07 14:40:54.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:40:55 np0005473739 nova_compute[259550]: 2025-10-07 14:40:55.989 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:40:56 np0005473739 nova_compute[259550]: 2025-10-07 14:40:56.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2355: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct  7 10:40:56 np0005473739 nova_compute[259550]: 2025-10-07 14:40:56.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:40:57 np0005473739 nova_compute[259550]: 2025-10-07 14:40:57.861 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:40:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:40:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2356: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct  7 10:40:59 np0005473739 nova_compute[259550]: 2025-10-07 14:40:59.384 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:00.073 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:00.074 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:00.075 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2357: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.1 MiB/s wr, 134 op/s
Oct  7 10:41:01 np0005473739 nova_compute[259550]: 2025-10-07 14:41:01.390 2 DEBUG nova.compute.manager [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-changed-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:01 np0005473739 nova_compute[259550]: 2025-10-07 14:41:01.391 2 DEBUG nova.compute.manager [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing instance network info cache due to event network-changed-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:41:01 np0005473739 nova_compute[259550]: 2025-10-07 14:41:01.392 2 DEBUG oslo_concurrency.lockutils [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:41:01 np0005473739 nova_compute[259550]: 2025-10-07 14:41:01.392 2 DEBUG oslo_concurrency.lockutils [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:41:01 np0005473739 nova_compute[259550]: 2025-10-07 14:41:01.393 2 DEBUG nova.network.neutron [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing network info cache for port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:41:01 np0005473739 nova_compute[259550]: 2025-10-07 14:41:01.494 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2358: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 2.2 MiB/s wr, 113 op/s
Oct  7 10:41:02 np0005473739 nova_compute[259550]: 2025-10-07 14:41:02.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:02 np0005473739 podman[394139]: 2025-10-07 14:41:02.951841068 +0000 UTC m=+0.060464788 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 10:41:02 np0005473739 podman[394138]: 2025-10-07 14:41:02.952612698 +0000 UTC m=+0.062555693 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 10:41:02 np0005473739 nova_compute[259550]: 2025-10-07 14:41:02.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:41:02 np0005473739 nova_compute[259550]: 2025-10-07 14:41:02.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:41:02 np0005473739 nova_compute[259550]: 2025-10-07 14:41:02.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:41:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:03 np0005473739 nova_compute[259550]: 2025-10-07 14:41:03.487 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:41:03 np0005473739 nova_compute[259550]: 2025-10-07 14:41:03.516 2 DEBUG nova.network.neutron [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated VIF entry in instance network info cache for port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:41:03 np0005473739 nova_compute[259550]: 2025-10-07 14:41:03.517 2 DEBUG nova.network.neutron [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": null, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:41:03 np0005473739 nova_compute[259550]: 2025-10-07 14:41:03.609 2 DEBUG oslo_concurrency.lockutils [req-45a94d74-8502-4442-8cb5-79a0dfd971ce req-41744434-1ee0-430e-a548-2301fc25e13f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:41:03 np0005473739 nova_compute[259550]: 2025-10-07 14:41:03.609 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:41:03 np0005473739 nova_compute[259550]: 2025-10-07 14:41:03.610 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:41:03 np0005473739 nova_compute[259550]: 2025-10-07 14:41:03.610 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:41:04 np0005473739 nova_compute[259550]: 2025-10-07 14:41:04.386 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2359: 305 pgs: 305 active+clean; 326 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 947 KiB/s rd, 1.6 MiB/s wr, 77 op/s
Oct  7 10:41:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:05Z|00156|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:cf:e1 10.100.0.6
Oct  7 10:41:05 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:05Z|00157|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:cf:e1 10.100.0.6
Oct  7 10:41:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2360: 305 pgs: 305 active+clean; 335 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 213 KiB/s rd, 1.3 MiB/s wr, 52 op/s
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.151 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.352 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.353 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.353 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.392 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.393 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.393 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.394 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.394 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:41:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:41:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/174453716' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.842 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.976 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.977 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.980 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.980 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007d as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.983 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.983 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.986 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:41:07 np0005473739 nova_compute[259550]: 2025-10-07 14:41:07.987 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000079 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:41:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:08 np0005473739 nova_compute[259550]: 2025-10-07 14:41:08.196 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:41:08 np0005473739 nova_compute[259550]: 2025-10-07 14:41:08.197 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2945MB free_disk=59.81833267211914GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:41:08 np0005473739 nova_compute[259550]: 2025-10-07 14:41:08.198 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:08 np0005473739 nova_compute[259550]: 2025-10-07 14:41:08.198 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2361: 305 pgs: 305 active+clean; 335 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 95 KiB/s rd, 814 KiB/s wr, 21 op/s
Oct  7 10:41:08 np0005473739 nova_compute[259550]: 2025-10-07 14:41:08.566 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance d0b10640-5492-4d8f-8b94-a49a15b6e702 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:41:08 np0005473739 nova_compute[259550]: 2025-10-07 14:41:08.566 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:41:08 np0005473739 nova_compute[259550]: 2025-10-07 14:41:08.566 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance ca309873-104c-4cd4-a609-686d61823e0f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:41:08 np0005473739 nova_compute[259550]: 2025-10-07 14:41:08.566 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 52ccd902-898f-4809-a231-be5760626c2c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:41:08 np0005473739 nova_compute[259550]: 2025-10-07 14:41:08.567 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:41:08 np0005473739 nova_compute[259550]: 2025-10-07 14:41:08.567 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:41:08 np0005473739 nova_compute[259550]: 2025-10-07 14:41:08.648 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:41:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:41:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/268461916' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:41:09 np0005473739 nova_compute[259550]: 2025-10-07 14:41:09.107 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:41:09 np0005473739 nova_compute[259550]: 2025-10-07 14:41:09.115 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:41:09 np0005473739 nova_compute[259550]: 2025-10-07 14:41:09.189 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:41:09 np0005473739 nova_compute[259550]: 2025-10-07 14:41:09.259 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:41:09 np0005473739 nova_compute[259550]: 2025-10-07 14:41:09.260 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:09 np0005473739 nova_compute[259550]: 2025-10-07 14:41:09.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:10 np0005473739 nova_compute[259550]: 2025-10-07 14:41:10.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2362: 305 pgs: 305 active+clean; 356 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  7 10:41:10 np0005473739 nova_compute[259550]: 2025-10-07 14:41:10.888 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:41:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2363: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.2 MiB/s wr, 65 op/s
Oct  7 10:41:12 np0005473739 nova_compute[259550]: 2025-10-07 14:41:12.878 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:14 np0005473739 nova_compute[259550]: 2025-10-07 14:41:14.391 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2364: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  7 10:41:16 np0005473739 nova_compute[259550]: 2025-10-07 14:41:16.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2365: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 311 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  7 10:41:17 np0005473739 podman[394221]: 2025-10-07 14:41:17.087364775 +0000 UTC m=+0.076802613 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 10:41:17 np0005473739 podman[394222]: 2025-10-07 14:41:17.104924601 +0000 UTC m=+0.092501399 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:41:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:17.468 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=41, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=40) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:41:17 np0005473739 nova_compute[259550]: 2025-10-07 14:41:17.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:17.469 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:41:17 np0005473739 nova_compute[259550]: 2025-10-07 14:41:17.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2366: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 230 KiB/s rd, 1.4 MiB/s wr, 44 op/s
Oct  7 10:41:19 np0005473739 nova_compute[259550]: 2025-10-07 14:41:19.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2367: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 230 KiB/s rd, 1.4 MiB/s wr, 46 op/s
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2368: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 97 KiB/s wr, 5 op/s
Oct  7 10:41:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:22.471 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '41'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:41:22
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'cephfs.cephfs.meta', 'backups', 'default.rgw.meta', 'default.rgw.control', 'images', 'default.rgw.log', '.rgw.root', 'volumes', '.mgr', 'cephfs.cephfs.data']
Oct  7 10:41:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:41:22 np0005473739 nova_compute[259550]: 2025-10-07 14:41:22.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:41:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:41:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:41:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:41:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:41:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:41:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:41:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:41:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:41:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:41:24 np0005473739 nova_compute[259550]: 2025-10-07 14:41:24.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2369: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s wr, 2 op/s
Oct  7 10:41:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2370: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 30 KiB/s wr, 3 op/s
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.516 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.517 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.590 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.697 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.698 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.705 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.705 2 INFO nova.compute.claims [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.798 2 DEBUG nova.compute.manager [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.799 2 DEBUG nova.compute.manager [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing instance network info cache due to event network-changed-b4567457-8da4-42a7-b4c0-42724b2c0bc9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.799 2 DEBUG oslo_concurrency.lockutils [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.799 2 DEBUG oslo_concurrency.lockutils [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.800 2 DEBUG nova.network.neutron [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Refreshing network info cache for port b4567457-8da4-42a7-b4c0-42724b2c0bc9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.888 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.946 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.946 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.946 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.947 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.947 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.948 2 INFO nova.compute.manager [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Terminating instance#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.949 2 DEBUG nova.compute.manager [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:41:27 np0005473739 nova_compute[259550]: 2025-10-07 14:41:27.954 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:41:28 np0005473739 kernel: tapb4567457-8d (unregistering): left promiscuous mode
Oct  7 10:41:28 np0005473739 NetworkManager[44949]: <info>  [1759848088.0148] device (tapb4567457-8d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:41:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:28Z|01361|binding|INFO|Releasing lport b4567457-8da4-42a7-b4c0-42724b2c0bc9 from this chassis (sb_readonly=0)
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.028 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:28Z|01362|binding|INFO|Setting lport b4567457-8da4-42a7-b4c0-42724b2c0bc9 down in Southbound
Oct  7 10:41:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:28Z|01363|binding|INFO|Removing iface tapb4567457-8d ovn-installed in OVS
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 kernel: tapd721f926-d7 (unregistering): left promiscuous mode
Oct  7 10:41:28 np0005473739 NetworkManager[44949]: <info>  [1759848088.0581] device (tapd721f926-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.060 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.067 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:28Z|01364|binding|INFO|Releasing lport d721f926-d7b0-4e02-bf84-b45b7c5df102 from this chassis (sb_readonly=1)
Oct  7 10:41:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:28Z|01365|binding|INFO|Removing iface tapd721f926-d7 ovn-installed in OVS
Oct  7 10:41:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:28Z|01366|if_status|INFO|Dropped 1 log messages in last 143 seconds (most recently, 143 seconds ago) due to excessive rate
Oct  7 10:41:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:28Z|01367|if_status|INFO|Not setting lport d721f926-d7b0-4e02-bf84-b45b7c5df102 down as sb is readonly
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:28Z|01368|binding|INFO|Setting lport d721f926-d7b0-4e02-bf84-b45b7c5df102 down in Southbound
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.120 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:cf:e1 10.100.0.6'], port_security=['fa:16:3e:09:cf:e1 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '52ccd902-898f-4809-a231-be5760626c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b59ffdd2-4285-47f2-a931-fca691d1c031', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c45d6c4-6a15-4c19-8f6b-673e9b96b82d, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b4567457-8da4-42a7-b4c0-42724b2c0bc9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.121 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b4567457-8da4-42a7-b4c0-42724b2c0bc9 in datapath b59ffdd2-4285-47f2-a931-fca691d1c031 unbound from our chassis#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.123 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b59ffdd2-4285-47f2-a931-fca691d1c031#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.140 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0dc3accc-1b6f-4e88-ac7d-f76d6136a82a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Deactivated successfully.
Oct  7 10:41:28 np0005473739 systemd[1]: machine-qemu\x2d158\x2dinstance\x2d0000007d.scope: Consumed 19.826s CPU time.
Oct  7 10:41:28 np0005473739 systemd-machined[214580]: Machine qemu-158-instance-0000007d terminated.
Oct  7 10:41:28 np0005473739 NetworkManager[44949]: <info>  [1759848088.1865] manager: (tapd721f926-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/549)
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.186 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[84517abc-d718-4100-b437-31e6e5f0e5d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.197 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bcf294a6-881d-4728-975d-3f578941d230]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.217 2 INFO nova.virt.libvirt.driver [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Instance destroyed successfully.#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.218 2 DEBUG nova.objects.instance [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 52ccd902-898f-4809-a231-be5760626c2c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.231 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f9f0fc57-082d-460c-abc7-e77505d6eb78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.249 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:15:f0:03 2001:db8:0:1:f816:3eff:fe15:f003 2001:db8::f816:3eff:fe15:f003'], port_security=['fa:16:3e:15:f0:03 2001:db8:0:1:f816:3eff:fe15:f003 2001:db8::f816:3eff:fe15:f003'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe15:f003/64 2001:db8::f816:3eff:fe15:f003/64', 'neutron:device_id': '52ccd902-898f-4809-a231-be5760626c2c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133edfbf-d913-46a7-a148-1a6e26213678, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d721f926-d7b0-4e02-bf84-b45b7c5df102) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.256 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[99daefb4-a9c0-4c12-89e2-1b2acd087f26]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb59ffdd2-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:3d:8f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 378], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858594, 'reachable_time': 15330, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394325, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.279 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfaf0621-1911-4418-a009-03eb85b581c0]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb59ffdd2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858606, 'tstamp': 858606}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394326, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb59ffdd2-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858610, 'tstamp': 858610}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394326, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.281 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb59ffdd2-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.293 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb59ffdd2-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.294 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.294 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb59ffdd2-40, col_values=(('external_ids', {'iface-id': '4cc97c0a-633b-48bc-94c4-6f8ac1f61c66'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.295 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.298 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d721f926-d7b0-4e02-bf84-b45b7c5df102 in datapath abe90ba0-a518-4cef-a49b-de57485faec5 unbound from our chassis#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.300 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network abe90ba0-a518-4cef-a49b-de57485faec5#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.316 2 DEBUG nova.virt.libvirt.vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:48Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.317 2 DEBUG nova.network.os_vif_util [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.246", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.318 2 DEBUG nova.network.os_vif_util [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.318 2 DEBUG os_vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.321 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb4567457-8d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.325 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.325 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2c384d88-a4f8-4e24-b008-c2ff019ecb05]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.348 2 INFO os_vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:cf:e1,bridge_name='br-int',has_traffic_filtering=True,id=b4567457-8da4-42a7-b4c0-42724b2c0bc9,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb4567457-8d')#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.349 2 DEBUG nova.virt.libvirt.vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1539930206',display_name='tempest-TestGettingAddress-server-1539930206',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1539930206',id=125,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:48Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-d9tm55rs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:48Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=52ccd902-898f-4809-a231-be5760626c2c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.349 2 DEBUG nova.network.os_vif_util [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.350 2 DEBUG nova.network.os_vif_util [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.351 2 DEBUG os_vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.354 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd721f926-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.358 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.360 2 INFO os_vif [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:15:f0:03,bridge_name='br-int',has_traffic_filtering=True,id=d721f926-d7b0-4e02-bf84-b45b7c5df102,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd721f926-d7')#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.373 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[dd86261e-9aa7-4c2e-b22c-a23c5908f580]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.380 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5800a5b2-a70f-47e5-bd55-8350041a32b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.417 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b2355c-b70e-4a8b-bf99-53e9c544cf78]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3177214505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.439 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.446 2 DEBUG nova.compute.provider_tree [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.448 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[300fcaa0-9638-4e15-a1d5-93c0929a8af9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapabe90ba0-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:6e:a0:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 38, 'tx_packets': 5, 'rx_bytes': 3460, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 379], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858693, 'reachable_time': 40982, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 38, 'inoctets': 2928, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 38, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2928, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 38, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 394355, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2371: 305 pgs: 305 active+clean; 359 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 KiB/s rd, 16 KiB/s wr, 2 op/s
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.469 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[409e4df0-21ea-43e5-b2ff-0f8cdc43c253]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapabe90ba0-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 858707, 'tstamp': 858707}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 394356, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.471 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabe90ba0-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.474 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapabe90ba0-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.474 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.474 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapabe90ba0-a0, col_values=(('external_ids', {'iface-id': '763708cd-58bb-4680-a4f7-042aa711a366'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:28.475 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.523 2 DEBUG nova.compute.manager [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-unplugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.523 2 DEBUG oslo_concurrency.lockutils [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.524 2 DEBUG oslo_concurrency.lockutils [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.524 2 DEBUG oslo_concurrency.lockutils [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.524 2 DEBUG nova.compute.manager [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-unplugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.524 2 DEBUG nova.compute.manager [req-c71abda6-9369-4b03-a579-a6e4682c4db2 req-a1b46bff-cfb2-4be1-a492-a1f8c7a63a78 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-unplugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.570 2 DEBUG nova.scheduler.client.report [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.656 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.958s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.658 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.734 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.736 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.823 2 INFO nova.virt.libvirt.driver [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Deleting instance files /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c_del#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.825 2 INFO nova.virt.libvirt.driver [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Deletion of /var/lib/nova/instances/52ccd902-898f-4809-a231-be5760626c2c_del complete#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.829 2 INFO nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:41:28 np0005473739 nova_compute[259550]: 2025-10-07 14:41:28.957 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.025 2 INFO nova.compute.manager [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Took 1.08 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.026 2 DEBUG oslo.service.loopingcall [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.026 2 DEBUG nova.compute.manager [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.026 2 DEBUG nova.network.neutron [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.161 2 DEBUG nova.policy [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '512332529ac84b3f9a7bb7d98d8577ac', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '3b215e149c484ed3a0d2130b82d65f6f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.415 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.416 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.417 2 INFO nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Creating image(s)#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.444 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.470 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.499 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.503 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.589 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.589 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.590 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.590 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.612 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.616 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:41:29 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 83322ff9-95b0-41ae-af30-adf6bb4a4c1c does not exist
Oct  7 10:41:29 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 295f4782-5c69-450b-a0a6-407f9be5abda does not exist
Oct  7 10:41:29 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e52f158f-fc83-4f3b-872c-a0d98360fed8 does not exist
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:41:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.704 2 DEBUG nova.network.neutron [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updated VIF entry in instance network info cache for port b4567457-8da4-42a7-b4c0-42724b2c0bc9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.705 2 DEBUG nova.network.neutron [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "address": "fa:16:3e:15:f0:03", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe15:f003", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd721f926-d7", "ovs_interfaceid": "d721f926-d7b0-4e02-bf84-b45b7c5df102", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.843 2 DEBUG oslo_concurrency.lockutils [req-6e24b557-052d-41b4-8458-0bf008b5d288 req-831a81eb-07d4-4776-97a3-9fddb1c59f6d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-52ccd902-898f-4809-a231-be5760626c2c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.910 2 DEBUG nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-unplugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.911 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.911 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.911 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.912 2 DEBUG nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-unplugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.913 2 DEBUG nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-unplugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.913 2 DEBUG nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.913 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.913 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.914 2 DEBUG oslo_concurrency.lockutils [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.914 2 DEBUG nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.915 2 WARNING nova.compute.manager [req-daee5948-654e-4890-ac6b-ced517ede349 req-6c4668d1-09d4-4060-9f28-2f0f26fbf169 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received unexpected event network-vif-plugged-d721f926-d7b0-4e02-bf84-b45b7c5df102 for instance with vm_state active and task_state deleting.
Oct  7 10:41:29 np0005473739 nova_compute[259550]: 2025-10-07 14:41:29.967 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.351s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.038 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] resizing rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.150 2 DEBUG nova.objects.instance [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lazy-loading 'migration_context' on Instance uuid 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.162 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.162 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Ensure instance console log exists: /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.163 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.163 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.164 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.193 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Successfully created port: feac1006-7556-4dd6-9691-bc886a9410f3 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:41:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:41:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:41:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:41:30 np0005473739 podman[394793]: 2025-10-07 14:41:30.278763936 +0000 UTC m=+0.046942988 container create c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:41:30 np0005473739 systemd[1]: Started libpod-conmon-c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1.scope.
Oct  7 10:41:30 np0005473739 podman[394793]: 2025-10-07 14:41:30.253534666 +0000 UTC m=+0.021713758 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:41:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:41:30 np0005473739 podman[394793]: 2025-10-07 14:41:30.375356964 +0000 UTC m=+0.143536066 container init c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 10:41:30 np0005473739 podman[394793]: 2025-10-07 14:41:30.385504553 +0000 UTC m=+0.153683605 container start c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:41:30 np0005473739 podman[394793]: 2025-10-07 14:41:30.389285114 +0000 UTC m=+0.157464156 container attach c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:41:30 np0005473739 objective_nash[394809]: 167 167
Oct  7 10:41:30 np0005473739 systemd[1]: libpod-c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1.scope: Deactivated successfully.
Oct  7 10:41:30 np0005473739 podman[394793]: 2025-10-07 14:41:30.393420584 +0000 UTC m=+0.161599636 container died c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:41:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2372: 305 pgs: 305 active+clean; 310 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 18 KiB/s wr, 26 op/s
Oct  7 10:41:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f968130948fe29a5884a2b78657abcaf1f35896689d67b5e9445fe5065593147-merged.mount: Deactivated successfully.
Oct  7 10:41:30 np0005473739 podman[394793]: 2025-10-07 14:41:30.499851293 +0000 UTC m=+0.268030305 container remove c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=objective_nash, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 10:41:30 np0005473739 systemd[1]: libpod-conmon-c640e358748bfa1bf0133609e7ab4ba87c361f790049e6966a9890670fcd94c1.scope: Deactivated successfully.
Oct  7 10:41:30 np0005473739 podman[394831]: 2025-10-07 14:41:30.719218924 +0000 UTC m=+0.047232267 container create 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 10:41:30 np0005473739 systemd[1]: Started libpod-conmon-1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e.scope.
Oct  7 10:41:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:41:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:30 np0005473739 podman[394831]: 2025-10-07 14:41:30.698337659 +0000 UTC m=+0.026351032 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:41:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:30 np0005473739 podman[394831]: 2025-10-07 14:41:30.810175862 +0000 UTC m=+0.138189295 container init 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.813 2 DEBUG nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.815 2 DEBUG oslo_concurrency.lockutils [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "52ccd902-898f-4809-a231-be5760626c2c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.815 2 DEBUG oslo_concurrency.lockutils [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.815 2 DEBUG oslo_concurrency.lockutils [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.815 2 DEBUG nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] No waiting events found dispatching network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.816 2 WARNING nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received unexpected event network-vif-plugged-b4567457-8da4-42a7-b4c0-42724b2c0bc9 for instance with vm_state active and task_state deleting.
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.817 2 DEBUG nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-deleted-d721f926-d7b0-4e02-bf84-b45b7c5df102 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.817 2 INFO nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Neutron deleted interface d721f926-d7b0-4e02-bf84-b45b7c5df102; detaching it from the instance and deleting it from the info cache
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.818 2 DEBUG nova.network.neutron [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [{"id": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "address": "fa:16:3e:09:cf:e1", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb4567457-8d", "ovs_interfaceid": "b4567457-8da4-42a7-b4c0-42724b2c0bc9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:41:30 np0005473739 podman[394831]: 2025-10-07 14:41:30.822059777 +0000 UTC m=+0.150073130 container start 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True)
Oct  7 10:41:30 np0005473739 podman[394831]: 2025-10-07 14:41:30.827870882 +0000 UTC m=+0.155884345 container attach 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.872 2 DEBUG nova.compute.manager [req-3d72168c-d1b2-416c-abaf-10b8bfa8290a req-3f028113-69eb-4890-bb47-457f8c1accf0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Detach interface failed, port_id=d721f926-d7b0-4e02-bf84-b45b7c5df102, reason: Instance 52ccd902-898f-4809-a231-be5760626c2c could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.913 2 DEBUG nova.network.neutron [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:41:30 np0005473739 nova_compute[259550]: 2025-10-07 14:41:30.950 2 INFO nova.compute.manager [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Took 1.92 seconds to deallocate network for instance.
Oct  7 10:41:31 np0005473739 nova_compute[259550]: 2025-10-07 14:41:31.046 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:31 np0005473739 nova_compute[259550]: 2025-10-07 14:41:31.046 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:31 np0005473739 nova_compute[259550]: 2025-10-07 14:41:31.159 2 DEBUG oslo_concurrency.processutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:41:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:41:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4174876285' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:41:31 np0005473739 nova_compute[259550]: 2025-10-07 14:41:31.671 2 DEBUG oslo_concurrency.processutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:41:31 np0005473739 nova_compute[259550]: 2025-10-07 14:41:31.680 2 DEBUG nova.compute.provider_tree [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:41:31 np0005473739 nova_compute[259550]: 2025-10-07 14:41:31.710 2 DEBUG nova.scheduler.client.report [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:41:31 np0005473739 nova_compute[259550]: 2025-10-07 14:41:31.900 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.853s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:31 np0005473739 nova_compute[259550]: 2025-10-07 14:41:31.959 2 INFO nova.scheduler.client.report [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 52ccd902-898f-4809-a231-be5760626c2c
Oct  7 10:41:31 np0005473739 unruffled_hellman[394848]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:41:31 np0005473739 unruffled_hellman[394848]: --> relative data size: 1.0
Oct  7 10:41:31 np0005473739 unruffled_hellman[394848]: --> All data devices are unavailable
Oct  7 10:41:32 np0005473739 systemd[1]: libpod-1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e.scope: Deactivated successfully.
Oct  7 10:41:32 np0005473739 systemd[1]: libpod-1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e.scope: Consumed 1.099s CPU time.
Oct  7 10:41:32 np0005473739 podman[394831]: 2025-10-07 14:41:32.004716823 +0000 UTC m=+1.332730176 container died 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:41:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5ea1d2fe034d0a3f6ada09dc6b5b5415dd1822e93e140fc95a0f29b6d420fe37-merged.mount: Deactivated successfully.
Oct  7 10:41:32 np0005473739 podman[394831]: 2025-10-07 14:41:32.068677842 +0000 UTC m=+1.396691195 container remove 1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=unruffled_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:41:32 np0005473739 systemd[1]: libpod-conmon-1108faaa4ab6e8514ec83a8c021ea4b89af2199ad3345d12eadf2ff348a0269e.scope: Deactivated successfully.
Oct  7 10:41:32 np0005473739 nova_compute[259550]: 2025-10-07 14:41:32.134 2 DEBUG oslo_concurrency.lockutils [None req-c2c508dc-2014-49fc-a9fc-d09fb6f2f9ba d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "52ccd902-898f-4809-a231-be5760626c2c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2373: 305 pgs: 305 active+clean; 299 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 1.3 MiB/s wr, 43 op/s
Oct  7 10:41:32 np0005473739 nova_compute[259550]: 2025-10-07 14:41:32.653 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Successfully updated port: feac1006-7556-4dd6-9691-bc886a9410f3 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.002530782989841766 of space, bias 1.0, pg target 0.7592348969525298 quantized to 32 (current 32)
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:41:32 np0005473739 nova_compute[259550]: 2025-10-07 14:41:32.669 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:41:32 np0005473739 nova_compute[259550]: 2025-10-07 14:41:32.669 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquired lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:41:32 np0005473739 nova_compute[259550]: 2025-10-07 14:41:32.669 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:41:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:41:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:41:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2872906395' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:41:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:41:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2872906395' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:41:32 np0005473739 podman[395052]: 2025-10-07 14:41:32.727083924 +0000 UTC m=+0.043345933 container create 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 10:41:32 np0005473739 systemd[1]: Started libpod-conmon-7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f.scope.
Oct  7 10:41:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:41:32 np0005473739 podman[395052]: 2025-10-07 14:41:32.708054618 +0000 UTC m=+0.024316647 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:41:32 np0005473739 nova_compute[259550]: 2025-10-07 14:41:32.804 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:41:32 np0005473739 podman[395052]: 2025-10-07 14:41:32.819032577 +0000 UTC m=+0.135294596 container init 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:41:32 np0005473739 podman[395052]: 2025-10-07 14:41:32.829247449 +0000 UTC m=+0.145509458 container start 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:41:32 np0005473739 podman[395052]: 2025-10-07 14:41:32.832722921 +0000 UTC m=+0.148984940 container attach 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:41:32 np0005473739 nervous_pasteur[395070]: 167 167
Oct  7 10:41:32 np0005473739 systemd[1]: libpod-7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f.scope: Deactivated successfully.
Oct  7 10:41:32 np0005473739 conmon[395070]: conmon 7b7d5f68b275c1ab96b0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f.scope/container/memory.events
Oct  7 10:41:32 np0005473739 podman[395052]: 2025-10-07 14:41:32.838446543 +0000 UTC m=+0.154708572 container died 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:41:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f8996353de1342cdf983d9cfe6944b22d0e8a2e16dc78ac661ca71afee977721-merged.mount: Deactivated successfully.
Oct  7 10:41:32 np0005473739 podman[395052]: 2025-10-07 14:41:32.877650325 +0000 UTC m=+0.193912334 container remove 7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:41:32 np0005473739 nova_compute[259550]: 2025-10-07 14:41:32.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:41:32 np0005473739 systemd[1]: libpod-conmon-7b7d5f68b275c1ab96b0d8a0f6e3693504f9721ae9c7b54162148a53befa4b7f.scope: Deactivated successfully.
Oct  7 10:41:32 np0005473739 nova_compute[259550]: 2025-10-07 14:41:32.897 2 DEBUG nova.compute.manager [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Received event network-vif-deleted-b4567457-8da4-42a7-b4c0-42724b2c0bc9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:41:32 np0005473739 nova_compute[259550]: 2025-10-07 14:41:32.897 2 DEBUG nova.compute.manager [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-changed-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:41:32 np0005473739 nova_compute[259550]: 2025-10-07 14:41:32.898 2 DEBUG nova.compute.manager [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Refreshing instance network info cache due to event network-changed-feac1006-7556-4dd6-9691-bc886a9410f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:41:32 np0005473739 nova_compute[259550]: 2025-10-07 14:41:32.898 2 DEBUG oslo_concurrency.lockutils [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:41:33 np0005473739 podman[395090]: 2025-10-07 14:41:33.084112364 +0000 UTC m=+0.069408577 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:41:33 np0005473739 podman[395089]: 2025-10-07 14:41:33.085019917 +0000 UTC m=+0.072174809 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:41:33 np0005473739 podman[395112]: 2025-10-07 14:41:33.089840796 +0000 UTC m=+0.047937365 container create 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:41:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:33 np0005473739 systemd[1]: Started libpod-conmon-1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b.scope.
Oct  7 10:41:33 np0005473739 podman[395112]: 2025-10-07 14:41:33.069878575 +0000 UTC m=+0.027975154 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:41:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:41:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff96983e00650ea1f78a36e9dd559d0fc4c54acd275b82cd13f55e92a238724/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff96983e00650ea1f78a36e9dd559d0fc4c54acd275b82cd13f55e92a238724/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff96983e00650ea1f78a36e9dd559d0fc4c54acd275b82cd13f55e92a238724/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fff96983e00650ea1f78a36e9dd559d0fc4c54acd275b82cd13f55e92a238724/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:33 np0005473739 podman[395112]: 2025-10-07 14:41:33.197036495 +0000 UTC m=+0.155133084 container init 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:41:33 np0005473739 podman[395112]: 2025-10-07 14:41:33.206672681 +0000 UTC m=+0.164769240 container start 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:41:33 np0005473739 podman[395112]: 2025-10-07 14:41:33.212256769 +0000 UTC m=+0.170353348 container attach 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.522 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.523 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.523 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.523 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.523 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.524 2 INFO nova.compute.manager [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Terminating instance
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.525 2 DEBUG nova.compute.manager [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.528 2 DEBUG nova.network.neutron [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updating instance_info_cache with network_info: [{"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.546 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Releasing lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.547 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Instance network_info: |[{"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.547 2 DEBUG oslo_concurrency.lockutils [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.548 2 DEBUG nova.network.neutron [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Refreshing network info cache for port feac1006-7556-4dd6-9691-bc886a9410f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.550 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Start _get_guest_xml network_info=[{"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.555 2 WARNING nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.565 2 DEBUG nova.virt.libvirt.host [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.568 2 DEBUG nova.virt.libvirt.host [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.574 2 DEBUG nova.virt.libvirt.host [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.577 2 DEBUG nova.virt.libvirt.host [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.577 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.577 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.578 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.578 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.579 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.579 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.579 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.580 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.582 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.582 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.584 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.584 2 DEBUG nova.virt.hardware [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.587 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:41:33 np0005473739 kernel: tapff77de20-12 (unregistering): left promiscuous mode
Oct  7 10:41:33 np0005473739 NetworkManager[44949]: <info>  [1759848093.6393] device (tapff77de20-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:41:33 np0005473739 kernel: tapa40ec757-40 (unregistering): left promiscuous mode
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:33Z|01369|binding|INFO|Releasing lport ff77de20-1280-4c30-941d-c53ed7efcbe8 from this chassis (sb_readonly=0)
Oct  7 10:41:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:33Z|01370|binding|INFO|Setting lport ff77de20-1280-4c30-941d-c53ed7efcbe8 down in Southbound
Oct  7 10:41:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:33Z|01371|binding|INFO|Removing iface tapff77de20-12 ovn-installed in OVS
Oct  7 10:41:33 np0005473739 NetworkManager[44949]: <info>  [1759848093.6605] device (tapa40ec757-40): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.664 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:b5:2f 10.100.0.12'], port_security=['fa:16:3e:09:b5:2f 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b59ffdd2-4285-47f2-a931-fca691d1c031', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9c45d6c4-6a15-4c19-8f6b-673e9b96b82d, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ff77de20-1280-4c30-941d-c53ed7efcbe8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:41:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.665 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ff77de20-1280-4c30-941d-c53ed7efcbe8 in datapath b59ffdd2-4285-47f2-a931-fca691d1c031 unbound from our chassis#033[00m
Oct  7 10:41:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.667 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b59ffdd2-4285-47f2-a931-fca691d1c031, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:41:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.672 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[519aca76-253d-43e3-9b6e-edcea27860af]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.672 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031 namespace which is not needed anymore#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:33Z|01372|binding|INFO|Releasing lport a40ec757-407d-4375-b756-d4fb8f5664b4 from this chassis (sb_readonly=0)
Oct  7 10:41:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:33Z|01373|binding|INFO|Setting lport a40ec757-407d-4375-b756-d4fb8f5664b4 down in Southbound
Oct  7 10:41:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:33Z|01374|binding|INFO|Removing iface tapa40ec757-40 ovn-installed in OVS
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.705 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:33.709 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:37:32:f2 2001:db8:0:1:f816:3eff:fe37:32f2 2001:db8::f816:3eff:fe37:32f2'], port_security=['fa:16:3e:37:32:f2 2001:db8:0:1:f816:3eff:fe37:32f2 2001:db8::f816:3eff:fe37:32f2'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8:0:1:f816:3eff:fe37:32f2/64 2001:db8::f816:3eff:fe37:32f2/64', 'neutron:device_id': '91ef3edf-0b1e-4a6d-8ef1-af2687c58b74', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-abe90ba0-a518-4cef-a49b-de57485faec5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3b63116b-3cbd-4da7-8f74-19fd26396505', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=133edfbf-d913-46a7-a148-1a6e26213678, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=a40ec757-407d-4375-b756-d4fb8f5664b4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:33 np0005473739 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007a.scope: Deactivated successfully.
Oct  7 10:41:33 np0005473739 systemd[1]: machine-qemu\x2d153\x2dinstance\x2d0000007a.scope: Consumed 16.547s CPU time.
Oct  7 10:41:33 np0005473739 systemd-machined[214580]: Machine qemu-153-instance-0000007a terminated.
Oct  7 10:41:33 np0005473739 NetworkManager[44949]: <info>  [1759848093.7590] manager: (tapa40ec757-40): new Tun device (/org/freedesktop/NetworkManager/Devices/550)
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.787 2 INFO nova.virt.libvirt.driver [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Instance destroyed successfully.#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.787 2 DEBUG nova.objects.instance [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.802 2 DEBUG nova.virt.libvirt.vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:04Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.802 2 DEBUG nova.network.os_vif_util [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.195", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.803 2 DEBUG nova.network.os_vif_util [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.804 2 DEBUG os_vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.806 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapff77de20-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.822 2 INFO os_vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:b5:2f,bridge_name='br-int',has_traffic_filtering=True,id=ff77de20-1280-4c30-941d-c53ed7efcbe8,network=Network(b59ffdd2-4285-47f2-a931-fca691d1c031),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapff77de20-12')#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.823 2 DEBUG nova.virt.libvirt.vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1628698219',display_name='tempest-TestGettingAddress-server-1628698219',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1628698219',id=122,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBELC4wtHp5lNiukcYT7L+JyRej/6+cxU7SHYcuIyVfyhWOP3LTnIbwG60ImYgM+1VHs977IYVbu1ek1Hx7HtB92z/tCU//lYC9gVzrAyKqdZUsCe9DAlSHy22ubZ09UQXQ==',key_name='tempest-TestGettingAddress-203621996',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fvhmi32z',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:04Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=91ef3edf-0b1e-4a6d-8ef1-af2687c58b74,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.823 2 DEBUG nova.network.os_vif_util [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.824 2 DEBUG nova.network.os_vif_util [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.824 2 DEBUG os_vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.826 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapa40ec757-40, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:33 np0005473739 nova_compute[259550]: 2025-10-07 14:41:33.831 2 INFO os_vif [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:37:32:f2,bridge_name='br-int',has_traffic_filtering=True,id=a40ec757-407d-4375-b756-d4fb8f5664b4,network=Network(abe90ba0-a518-4cef-a49b-de57485faec5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa40ec757-40')#033[00m
Oct  7 10:41:33 np0005473739 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [NOTICE]   (391537) : haproxy version is 2.8.14-c23fe91
Oct  7 10:41:33 np0005473739 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [NOTICE]   (391537) : path to executable is /usr/sbin/haproxy
Oct  7 10:41:33 np0005473739 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [ALERT]    (391537) : Current worker (391541) exited with code 143 (Terminated)
Oct  7 10:41:33 np0005473739 neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031[391515]: [WARNING]  (391537) : All workers exited. Exiting... (0)
Oct  7 10:41:33 np0005473739 systemd[1]: libpod-07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1.scope: Deactivated successfully.
Oct  7 10:41:33 np0005473739 conmon[391515]: conmon 07847417c8983cc8622e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1.scope/container/memory.events
Oct  7 10:41:33 np0005473739 podman[395223]: 2025-10-07 14:41:33.857696175 +0000 UTC m=+0.058614819 container died 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:41:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1-userdata-shm.mount: Deactivated successfully.
Oct  7 10:41:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-25645b5d35c32237e95d967c51a9e71c44938bb7f9072a90ac733ca580ab77f3-merged.mount: Deactivated successfully.
Oct  7 10:41:33 np0005473739 podman[395223]: 2025-10-07 14:41:33.954308003 +0000 UTC m=+0.155226647 container cleanup 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:41:33 np0005473739 systemd[1]: libpod-conmon-07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1.scope: Deactivated successfully.
Oct  7 10:41:34 np0005473739 podman[395273]: 2025-10-07 14:41:34.035774389 +0000 UTC m=+0.057416938 container remove 07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.043 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1a96e9d9-8c5f-4e6e-8422-c0ec73aea54e]: (4, ('Tue Oct  7 02:41:33 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031 (07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1)\n07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1\nTue Oct  7 02:41:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031 (07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1)\n07847417c8983cc8622e11cfd38d6b03ea79f380ecfe694164eaaf7b0b8b43e1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 confident_poincare[395151]: {
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:    "0": [
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:        {
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "devices": [
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "/dev/loop3"
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            ],
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_name": "ceph_lv0",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_size": "21470642176",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "name": "ceph_lv0",
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.046 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5ddf2007-a8d9-4d31-89ac-9f7d37cee8ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "tags": {
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.cluster_name": "ceph",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.crush_device_class": "",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.encrypted": "0",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.osd_id": "0",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.type": "block",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.vdo": "0"
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            },
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "type": "block",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "vg_name": "ceph_vg0"
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:        }
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:    ],
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:    "1": [
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:        {
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "devices": [
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "/dev/loop4"
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            ],
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_name": "ceph_lv1",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_size": "21470642176",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "name": "ceph_lv1",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "tags": {
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.cluster_name": "ceph",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.crush_device_class": "",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.encrypted": "0",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.osd_id": "1",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.type": "block",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.vdo": "0"
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            },
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "type": "block",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "vg_name": "ceph_vg1"
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:        }
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:    ],
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:    "2": [
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:        {
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "devices": [
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "/dev/loop5"
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            ],
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_name": "ceph_lv2",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_size": "21470642176",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "name": "ceph_lv2",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "tags": {
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.cluster_name": "ceph",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.crush_device_class": "",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.encrypted": "0",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.osd_id": "2",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.type": "block",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:                "ceph.vdo": "0"
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            },
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "type": "block",
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:            "vg_name": "ceph_vg2"
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:        }
Oct  7 10:41:34 np0005473739 confident_poincare[395151]:    ]
Oct  7 10:41:34 np0005473739 confident_poincare[395151]: }
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.047 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb59ffdd2-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:34 np0005473739 systemd[1]: libpod-1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b.scope: Deactivated successfully.
Oct  7 10:41:34 np0005473739 podman[395112]: 2025-10-07 14:41:34.089334423 +0000 UTC m=+1.047431032 container died 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:34 np0005473739 kernel: tapb59ffdd2-40: left promiscuous mode
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:41:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1893562222' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.115 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2332590-2c02-4ac1-b2ef-855565fc44e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-fff96983e00650ea1f78a36e9dd559d0fc4c54acd275b82cd13f55e92a238724-merged.mount: Deactivated successfully.
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.142 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8fb5894d-f521-4131-8a41-373a20d55f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.144 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95a163fa-69cb-4a0a-9a0c-3bc8af8b7753]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.151 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.564s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:41:34 np0005473739 podman[395112]: 2025-10-07 14:41:34.171447175 +0000 UTC m=+1.129543734 container remove 1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_poincare, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.172 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c263ca35-077d-4ac8-bde9-a681b993ebed]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858585, 'reachable_time': 16037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395303, 'error': None, 'target': 'ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 systemd[1]: run-netns-ovnmeta\x2db59ffdd2\x2d4285\x2d47f2\x2da931\x2dfca691d1c031.mount: Deactivated successfully.
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.177 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.180 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b59ffdd2-4285-47f2-a931-fca691d1c031 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.181 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c159f7fc-9567-4d38-a6e6-68579c18db86]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.182 161536 INFO neutron.agent.ovn.metadata.agent [-] Port a40ec757-407d-4375-b756-d4fb8f5664b4 in datapath abe90ba0-a518-4cef-a49b-de57485faec5 unbound from our chassis#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.184 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network abe90ba0-a518-4cef-a49b-de57485faec5, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:41:34 np0005473739 systemd[1]: libpod-conmon-1dd86d60b0efa6faeb01d23ead09f9bf94eaf27b1e3c9dc14815d11a9b06343b.scope: Deactivated successfully.
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.190 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[db1d773b-bc5b-4180-812d-e40bab00bc6c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.191 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5 namespace which is not needed anymore#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.191 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:41:34 np0005473739 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [NOTICE]   (391615) : haproxy version is 2.8.14-c23fe91
Oct  7 10:41:34 np0005473739 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [NOTICE]   (391615) : path to executable is /usr/sbin/haproxy
Oct  7 10:41:34 np0005473739 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [WARNING]  (391615) : Exiting Master process...
Oct  7 10:41:34 np0005473739 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [ALERT]    (391615) : Current worker (391617) exited with code 143 (Terminated)
Oct  7 10:41:34 np0005473739 neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5[391610]: [WARNING]  (391615) : All workers exited. Exiting... (0)
Oct  7 10:41:34 np0005473739 systemd[1]: libpod-edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9.scope: Deactivated successfully.
Oct  7 10:41:34 np0005473739 podman[395366]: 2025-10-07 14:41:34.343033456 +0000 UTC m=+0.047557425 container died edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:41:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9-userdata-shm.mount: Deactivated successfully.
Oct  7 10:41:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c49969737f58c490492c852bd2ea4f9f98b17080447bba92e02b629fbfab611a-merged.mount: Deactivated successfully.
Oct  7 10:41:34 np0005473739 podman[395366]: 2025-10-07 14:41:34.39550309 +0000 UTC m=+0.100027039 container cleanup edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  7 10:41:34 np0005473739 systemd[1]: libpod-conmon-edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9.scope: Deactivated successfully.
Oct  7 10:41:34 np0005473739 podman[395460]: 2025-10-07 14:41:34.462885631 +0000 UTC m=+0.044846033 container remove edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:41:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2374: 305 pgs: 305 active+clean; 326 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 60 op/s
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.469 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bb4375d3-80be-4e04-b4e9-bc56c50a0fff]: (4, ('Tue Oct  7 02:41:34 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5 (edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9)\nedf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9\nTue Oct  7 02:41:34 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5 (edf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9)\nedf956fabfe91580fd77e3be2e6b7c118c134b2615a2e3b7e85c286d298147c9\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.472 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e5427477-6fef-4d0c-b40a-7e32ed25861a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.475 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapabe90ba0-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:34 np0005473739 kernel: tapabe90ba0-a0: left promiscuous mode
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.488 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a6bf7237-d2a3-4f42-91bd-3a171ba2a2cc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.509 2 INFO nova.virt.libvirt.driver [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Deleting instance files /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_del#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.511 2 INFO nova.virt.libvirt.driver [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Deletion of /var/lib/nova/instances/91ef3edf-0b1e-4a6d-8ef1-af2687c58b74_del complete#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.523 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[734794a6-a803-4d16-8eb7-5489556ff1fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.524 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d382fb4c-65c5-43b2-a104-a11564641332]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3ab9f80-bd1d-4a94-8a52-6020a9a677a4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 858686, 'reachable_time': 29986, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395501, 'error': None, 'target': 'ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.544 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-abe90ba0-a518-4cef-a49b-de57485faec5 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:41:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:34.544 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[50318de0-16a6-46da-b20d-73ea3078f013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:34 np0005473739 systemd[1]: run-netns-ovnmeta\x2dabe90ba0\x2da518\x2d4cef\x2da49b\x2dde57485faec5.mount: Deactivated successfully.
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.559 2 INFO nova.compute.manager [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Took 1.03 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.559 2 DEBUG oslo.service.loopingcall [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.560 2 DEBUG nova.compute.manager [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.560 2 DEBUG nova.network.neutron [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:41:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:41:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1450717079' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.692 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.502s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.694 2 DEBUG nova.virt.libvirt.vif [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1671371669',display_name='tempest-TestServerBasicOps-server-1671371669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1671371669',id=126,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvFnIqTqfHxE6+s78OqW40EeX1Gc32/mDkbAeqCywn5GmNeJRkLzGwn+a0yP/7G4uTQPEHQaiqW/W1UjNqDLgiC69VXo+2gdbXJjgI/CEgIH3VE1qmFfhNZh8qFmO8jUg==',key_name='tempest-TestServerBasicOps-1893482168',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b215e149c484ed3a0d2130b82d65f6f',ramdisk_id='',reservation_id='r-q3r9pyxl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1198650773',owner_user_name='tempest-TestServerBasicOps-1198650773-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:41:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='512332529ac84b3f9a7bb7d98d8577ac',uuid=300a7ac9-5462-4db2-817f-07ea0b2d6aa6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.694 2 DEBUG nova.network.os_vif_util [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converting VIF {"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.695 2 DEBUG nova.network.os_vif_util [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.697 2 DEBUG nova.objects.instance [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lazy-loading 'pci_devices' on Instance uuid 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.720 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  <uuid>300a7ac9-5462-4db2-817f-07ea0b2d6aa6</uuid>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  <name>instance-0000007e</name>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestServerBasicOps-server-1671371669</nova:name>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:41:33</nova:creationTime>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <nova:user uuid="512332529ac84b3f9a7bb7d98d8577ac">tempest-TestServerBasicOps-1198650773-project-member</nova:user>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <nova:project uuid="3b215e149c484ed3a0d2130b82d65f6f">tempest-TestServerBasicOps-1198650773</nova:project>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <nova:port uuid="feac1006-7556-4dd6-9691-bc886a9410f3">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <entry name="serial">300a7ac9-5462-4db2-817f-07ea0b2d6aa6</entry>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <entry name="uuid">300a7ac9-5462-4db2-817f-07ea0b2d6aa6</entry>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:03:35:e2"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <target dev="tapfeac1006-75"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/console.log" append="off"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:41:34 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:41:34 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:41:34 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:41:34 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.721 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Preparing to wait for external event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.721 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.721 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.722 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.722 2 DEBUG nova.virt.libvirt.vif [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1671371669',display_name='tempest-TestServerBasicOps-server-1671371669',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1671371669',id=126,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvFnIqTqfHxE6+s78OqW40EeX1Gc32/mDkbAeqCywn5GmNeJRkLzGwn+a0yP/7G4uTQPEHQaiqW/W1UjNqDLgiC69VXo+2gdbXJjgI/CEgIH3VE1qmFfhNZh8qFmO8jUg==',key_name='tempest-TestServerBasicOps-1893482168',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='3b215e149c484ed3a0d2130b82d65f6f',ramdisk_id='',reservation_id='r-q3r9pyxl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-1198650773',owner_user_name='tempest-TestServerBasicOps-1198650773-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:41:29Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='512332529ac84b3f9a7bb7d98d8577ac',uuid=300a7ac9-5462-4db2-817f-07ea0b2d6aa6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, 
"meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.723 2 DEBUG nova.network.os_vif_util [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converting VIF {"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.724 2 DEBUG nova.network.os_vif_util [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.724 2 DEBUG os_vif [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.725 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.726 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.729 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfeac1006-75, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfeac1006-75, col_values=(('external_ids', {'iface-id': 'feac1006-7556-4dd6-9691-bc886a9410f3', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:03:35:e2', 'vm-uuid': '300a7ac9-5462-4db2-817f-07ea0b2d6aa6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:34 np0005473739 NetworkManager[44949]: <info>  [1759848094.7323] manager: (tapfeac1006-75): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/551)
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.744 2 INFO os_vif [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75')#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.797 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.797 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.798 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] No VIF found with MAC fa:16:3e:03:35:e2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.798 2 INFO nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Using config drive#033[00m
Oct  7 10:41:34 np0005473739 nova_compute[259550]: 2025-10-07 14:41:34.819 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:41:34 np0005473739 podman[395553]: 2025-10-07 14:41:34.856794112 +0000 UTC m=+0.039506201 container create c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:41:34 np0005473739 systemd[1]: Started libpod-conmon-c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645.scope.
Oct  7 10:41:34 np0005473739 podman[395553]: 2025-10-07 14:41:34.838106505 +0000 UTC m=+0.020818614 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:41:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:41:34 np0005473739 podman[395553]: 2025-10-07 14:41:34.963825317 +0000 UTC m=+0.146537426 container init c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:41:34 np0005473739 podman[395553]: 2025-10-07 14:41:34.973627257 +0000 UTC m=+0.156339346 container start c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 10:41:34 np0005473739 podman[395553]: 2025-10-07 14:41:34.978080145 +0000 UTC m=+0.160792254 container attach c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 10:41:34 np0005473739 dazzling_ritchie[395580]: 167 167
Oct  7 10:41:34 np0005473739 systemd[1]: libpod-c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645.scope: Deactivated successfully.
Oct  7 10:41:34 np0005473739 podman[395553]: 2025-10-07 14:41:34.979509813 +0000 UTC m=+0.162221902 container died c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:41:35 np0005473739 nova_compute[259550]: 2025-10-07 14:41:35.000 2 DEBUG nova.compute.manager [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:35 np0005473739 nova_compute[259550]: 2025-10-07 14:41:35.001 2 DEBUG nova.compute.manager [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing instance network info cache due to event network-changed-ff77de20-1280-4c30-941d-c53ed7efcbe8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:41:35 np0005473739 nova_compute[259550]: 2025-10-07 14:41:35.001 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:41:35 np0005473739 nova_compute[259550]: 2025-10-07 14:41:35.001 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:41:35 np0005473739 nova_compute[259550]: 2025-10-07 14:41:35.001 2 DEBUG nova.network.neutron [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Refreshing network info cache for port ff77de20-1280-4c30-941d-c53ed7efcbe8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:41:35 np0005473739 podman[395553]: 2025-10-07 14:41:35.018677585 +0000 UTC m=+0.201389674 container remove c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_ritchie, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:41:35 np0005473739 systemd[1]: libpod-conmon-c86420a75e491753c545f94ed8878fa5f345d487fc3c1e350edfe8446f3ec645.scope: Deactivated successfully.
Oct  7 10:41:35 np0005473739 podman[395605]: 2025-10-07 14:41:35.232702874 +0000 UTC m=+0.046091487 container create 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:41:35 np0005473739 systemd[1]: Started libpod-conmon-952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb.scope.
Oct  7 10:41:35 np0005473739 podman[395605]: 2025-10-07 14:41:35.213247486 +0000 UTC m=+0.026636149 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:41:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:41:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89e2c8fd018cee7186437e0a07400431786147ed3eeebb692941f12e7cbb8ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89e2c8fd018cee7186437e0a07400431786147ed3eeebb692941f12e7cbb8ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89e2c8fd018cee7186437e0a07400431786147ed3eeebb692941f12e7cbb8ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f89e2c8fd018cee7186437e0a07400431786147ed3eeebb692941f12e7cbb8ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:35 np0005473739 podman[395605]: 2025-10-07 14:41:35.338768513 +0000 UTC m=+0.152157136 container init 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 10:41:35 np0005473739 podman[395605]: 2025-10-07 14:41:35.350211257 +0000 UTC m=+0.163599870 container start 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:41:35 np0005473739 podman[395605]: 2025-10-07 14:41:35.354097791 +0000 UTC m=+0.167486404 container attach 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:41:35 np0005473739 nova_compute[259550]: 2025-10-07 14:41:35.749 2 INFO nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Creating config drive at /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config#033[00m
Oct  7 10:41:35 np0005473739 nova_compute[259550]: 2025-10-07 14:41:35.755 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw2hozw7n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:41:35 np0005473739 nova_compute[259550]: 2025-10-07 14:41:35.902 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpw2hozw7n" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:41:35 np0005473739 nova_compute[259550]: 2025-10-07 14:41:35.932 2 DEBUG nova.storage.rbd_utils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] rbd image 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:41:35 np0005473739 nova_compute[259550]: 2025-10-07 14:41:35.938 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.119 2 DEBUG oslo_concurrency.processutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config 300a7ac9-5462-4db2-817f-07ea0b2d6aa6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.120 2 INFO nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Deleting local config drive /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6/disk.config because it was imported into RBD.#033[00m
Oct  7 10:41:36 np0005473739 kernel: tapfeac1006-75: entered promiscuous mode
Oct  7 10:41:36 np0005473739 NetworkManager[44949]: <info>  [1759848096.1840] manager: (tapfeac1006-75): new Tun device (/org/freedesktop/NetworkManager/Devices/552)
Oct  7 10:41:36 np0005473739 systemd-udevd[395159]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:41:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:36Z|01375|binding|INFO|Claiming lport feac1006-7556-4dd6-9691-bc886a9410f3 for this chassis.
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.190 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:36Z|01376|binding|INFO|feac1006-7556-4dd6-9691-bc886a9410f3: Claiming fa:16:3e:03:35:e2 10.100.0.14
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.195 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:35:e2 10.100.0.14'], port_security=['fa:16:3e:03:35:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '300a7ac9-5462-4db2-817f-07ea0b2d6aa6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b215e149c484ed3a0d2130b82d65f6f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '2290bbe4-d1ef-40a3-bca7-dc73e3a190e7 4672fac1-ae90-46b4-8fd3-3fd3255b2962', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d118d88-a678-4c5f-9b83-025d93ef3b4c, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=feac1006-7556-4dd6-9691-bc886a9410f3) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.197 161536 INFO neutron.agent.ovn.metadata.agent [-] Port feac1006-7556-4dd6-9691-bc886a9410f3 in datapath 70c32d19-cc0d-46fc-a583-1be2bd26332c bound to our chassis#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.198 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 70c32d19-cc0d-46fc-a583-1be2bd26332c#033[00m
Oct  7 10:41:36 np0005473739 NetworkManager[44949]: <info>  [1759848096.2039] device (tapfeac1006-75): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:41:36 np0005473739 NetworkManager[44949]: <info>  [1759848096.2048] device (tapfeac1006-75): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:41:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:36Z|01377|binding|INFO|Setting lport feac1006-7556-4dd6-9691-bc886a9410f3 ovn-installed in OVS
Oct  7 10:41:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:36Z|01378|binding|INFO|Setting lport feac1006-7556-4dd6-9691-bc886a9410f3 up in Southbound
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.212 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[429f38f9-17a6-4b33-9914-6d5fbf00c6c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.213 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap70c32d19-c1 in ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.216 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap70c32d19-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.216 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ce7e6b5c-a577-4851-97ba-8124a46a9fdb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:36Z|01379|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct  7 10:41:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:36Z|01380|binding|INFO|Releasing lport 795a08c5-66c3-453c-a5db-19a02c166ab7 from this chassis (sb_readonly=0)
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.218 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[add2fdc5-6434-4fb4-9af7-c2dd03c4e9a5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 systemd-machined[214580]: New machine qemu-159-instance-0000007e.
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.236 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[63a25d81-9ff0-4913-b371-72000d70baff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 systemd[1]: Started Virtual Machine qemu-159-instance-0000007e.
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.255 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e155e182-5a40-41e1-b07f-ddc9cd412c42]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.289 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3957c9d2-cc72-44ee-818c-1fc2ccaf33a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 NetworkManager[44949]: <info>  [1759848096.3007] manager: (tap70c32d19-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/553)
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.298 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[485f42b9-b271-4a8b-adff-adc77706b549]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.335 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9f4d8dc8-d9a8-474d-a3ee-00d82ccdbc8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.342 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d4864174-92e2-464d-8ff4-33f0c3c5069e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]: {
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "osd_id": 2,
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "type": "bluestore"
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:    },
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "osd_id": 1,
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "type": "bluestore"
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:    },
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "osd_id": 0,
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:        "type": "bluestore"
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]:    }
Oct  7 10:41:36 np0005473739 heuristic_wright[395622]: }
Oct  7 10:41:36 np0005473739 NetworkManager[44949]: <info>  [1759848096.3708] device (tap70c32d19-c0): carrier: link connected
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.379 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[775769d5-b66b-43e1-b816-d3c429b90b73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 systemd[1]: libpod-952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb.scope: Deactivated successfully.
Oct  7 10:41:36 np0005473739 systemd[1]: libpod-952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb.scope: Consumed 1.014s CPU time.
Oct  7 10:41:36 np0005473739 podman[395605]: 2025-10-07 14:41:36.390746485 +0000 UTC m=+1.204135098 container died 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.401 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8445f361-4e62-4dce-ac6b-25d973595dd0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap70c32d19-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ac:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867993, 'reachable_time': 33172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395738, 'error': None, 'target': 'ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.419 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c96cb44-11ee-495d-8886-a3952b39b44d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef2:ace5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 867993, 'tstamp': 867993}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395741, 'error': None, 'target': 'ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f89e2c8fd018cee7186437e0a07400431786147ed3eeebb692941f12e7cbb8ce-merged.mount: Deactivated successfully.
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.436 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[728d5f15-3405-423c-a674-f1b071971a48]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap70c32d19-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f2:ac:e5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 220, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 396], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867993, 'reachable_time': 33172, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 192, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 192, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 395747, 'error': None, 'target': 'ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 podman[395605]: 2025-10-07 14:41:36.45038599 +0000 UTC m=+1.263774603 container remove 952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_wright, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 10:41:36 np0005473739 systemd[1]: libpod-conmon-952cf6b9ad8510c768d2e46128caa9ce1849edae95e813db18e4063d3fb08ccb.scope: Deactivated successfully.
Oct  7 10:41:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2375: 305 pgs: 305 active+clean; 297 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.8 MiB/s wr, 65 op/s
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.474 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d4691e52-eef7-4112-bf33-01b416488220]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.478 2 DEBUG nova.network.neutron [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updated VIF entry in instance network info cache for port feac1006-7556-4dd6-9691-bc886a9410f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.479 2 DEBUG nova.network.neutron [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updating instance_info_cache with network_info: [{"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:41:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:41:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:41:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:41:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:41:36 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e8a6b2c0-f4d8-4ca0-81d8-b529722e575c does not exist
Oct  7 10:41:36 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b11cf4a5-73cc-42e4-a931-d5d5be6f4cce does not exist
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.541 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0227e9aa-50c2-43a1-96d8-7eee044c9005]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70c32d19-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.543 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap70c32d19-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:36 np0005473739 kernel: tap70c32d19-c0: entered promiscuous mode
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:36 np0005473739 NetworkManager[44949]: <info>  [1759848096.5462] manager: (tap70c32d19-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/554)
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.556 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap70c32d19-c0, col_values=(('external_ids', {'iface-id': 'eb0d4e5a-943a-459a-bddc-3f4beed43512'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:36 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:36Z|01381|binding|INFO|Releasing lport eb0d4e5a-943a-459a-bddc-3f4beed43512 from this chassis (sb_readonly=0)
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.570 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/70c32d19-cc0d-46fc-a583-1be2bd26332c.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/70c32d19-cc0d-46fc-a583-1be2bd26332c.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.572 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[96bb96fb-59bd-438c-821c-c65e33a3f0ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.573 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-70c32d19-cc0d-46fc-a583-1be2bd26332c
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/70c32d19-cc0d-46fc-a583-1be2bd26332c.pid.haproxy
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 70c32d19-cc0d-46fc-a583-1be2bd26332c
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:36.576 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'env', 'PROCESS_TAG=haproxy-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/70c32d19-cc0d-46fc-a583-1be2bd26332c.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.628 2 DEBUG nova.compute.manager [req-6f0f322f-f951-4aa1-b22d-61dbbc16f44e req-e0e77955-b848-435c-bd07-f25e9827abb5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.629 2 DEBUG oslo_concurrency.lockutils [req-6f0f322f-f951-4aa1-b22d-61dbbc16f44e req-e0e77955-b848-435c-bd07-f25e9827abb5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.636 2 DEBUG oslo_concurrency.lockutils [req-6f0f322f-f951-4aa1-b22d-61dbbc16f44e req-e0e77955-b848-435c-bd07-f25e9827abb5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.007s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.636 2 DEBUG oslo_concurrency.lockutils [req-6f0f322f-f951-4aa1-b22d-61dbbc16f44e req-e0e77955-b848-435c-bd07-f25e9827abb5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.637 2 DEBUG nova.compute.manager [req-6f0f322f-f951-4aa1-b22d-61dbbc16f44e req-e0e77955-b848-435c-bd07-f25e9827abb5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Processing event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:41:36 np0005473739 nova_compute[259550]: 2025-10-07 14:41:36.718 2 DEBUG oslo_concurrency.lockutils [req-f434a5bf-c610-49a5-aedc-bc60d1c92442 req-1b8b6a13-87b0-4a24-b10f-84be82ce2d31 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:41:36 np0005473739 podman[395833]: 2025-10-07 14:41:36.970555996 +0000 UTC m=+0.048136750 container create 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:41:37 np0005473739 systemd[1]: Started libpod-conmon-106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710.scope.
Oct  7 10:41:37 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:41:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8da373f4ff1907c25e28cd2a4fb9ff763074cfeaac1f7943c05c71d9f726084b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:41:37 np0005473739 podman[395833]: 2025-10-07 14:41:36.945228663 +0000 UTC m=+0.022809437 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:41:37 np0005473739 podman[395833]: 2025-10-07 14:41:37.060339263 +0000 UTC m=+0.137920017 container init 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:41:37 np0005473739 podman[395833]: 2025-10-07 14:41:37.065691715 +0000 UTC m=+0.143272469 container start 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.071 2 DEBUG nova.network.neutron [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updated VIF entry in instance network info cache for port ff77de20-1280-4c30-941d-c53ed7efcbe8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.071 2 DEBUG nova.network.neutron [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [{"id": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "address": "fa:16:3e:09:b5:2f", "network": {"id": "b59ffdd2-4285-47f2-a931-fca691d1c031", "bridge": "br-int", "label": "tempest-network-smoke--942617684", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapff77de20-12", "ovs_interfaceid": "ff77de20-1280-4c30-941d-c53ed7efcbe8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], 
"gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.085 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.086 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.086 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.087 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.087 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No waiting events found dispatching network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.087 2 WARNING nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received unexpected event network-vif-plugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.087 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-unplugged-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.088 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.088 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.088 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.088 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No waiting events found dispatching network-vif-unplugged-a40ec757-407d-4375-b756-d4fb8f5664b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.089 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-unplugged-a40ec757-407d-4375-b756-d4fb8f5664b4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.089 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-deleted-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.089 2 INFO nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Neutron deleted interface ff77de20-1280-4c30-941d-c53ed7efcbe8; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.089 2 DEBUG nova.network.neutron [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [{"id": "a40ec757-407d-4375-b756-d4fb8f5664b4", "address": "fa:16:3e:37:32:f2", "network": {"id": "abe90ba0-a518-4cef-a49b-de57485faec5", "bridge": "br-int", "label": "tempest-network-smoke--2058464715", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe37:32f2", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa40ec757-40", "ovs_interfaceid": "a40ec757-407d-4375-b756-d4fb8f5664b4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.092 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.093 2 DEBUG nova.compute.manager [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-unplugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.093 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.093 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.094 2 DEBUG oslo_concurrency.lockutils [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.094 2 DEBUG nova.compute.manager [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No waiting events found dispatching network-vif-unplugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.094 2 DEBUG nova.compute.manager [req-f0082d4d-09e3-498c-af36-7aecad9f435b req-792be8c8-e89a-4ad9-826a-99fe89a3ee29 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-unplugged-ff77de20-1280-4c30-941d-c53ed7efcbe8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:41:37 np0005473739 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [NOTICE]   (395853) : New worker (395870) forked
Oct  7 10:41:37 np0005473739 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [NOTICE]   (395853) : Loading success.
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.110 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Detach interface failed, port_id=ff77de20-1280-4c30-941d-c53ed7efcbe8, reason: Instance 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.110 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.111 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.111 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.111 2 DEBUG oslo_concurrency.lockutils [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.111 2 DEBUG nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] No waiting events found dispatching network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.112 2 WARNING nova.compute.manager [req-bae559e8-bc77-4113-8d99-badf04f71647 req-b2868989-b758-444a-8f82-27d2ff6152a3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received unexpected event network-vif-plugged-a40ec757-407d-4375-b756-d4fb8f5664b4 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:41:37 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:41:37 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.658 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848097.6579847, 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.659 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] VM Started (Lifecycle Event)
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.663 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.667 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.672 2 INFO nova.virt.libvirt.driver [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Instance spawned successfully.
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.672 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.679 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.682 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.692 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.692 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.692 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.693 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.693 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.694 2 DEBUG nova.virt.libvirt.driver [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.701 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.701 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848097.6580865, 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.701 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] VM Paused (Lifecycle Event)
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.721 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.724 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848097.6661854, 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.724 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] VM Resumed (Lifecycle Event)
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.754 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.755 2 DEBUG nova.network.neutron [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.759 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.763 2 INFO nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Took 8.35 seconds to spawn the instance on the hypervisor.
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.763 2 DEBUG nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.775 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.785 2 INFO nova.compute.manager [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Took 3.22 seconds to deallocate network for instance.
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.833 2 INFO nova.compute.manager [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Took 10.16 seconds to build instance.
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.840 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.840 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.851 2 DEBUG oslo_concurrency.lockutils [None req-c11afba8-f936-411b-b659-1d23d6168c4c 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.334s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.894 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:41:37 np0005473739 nova_compute[259550]: 2025-10-07 14:41:37.939 2 DEBUG oslo_concurrency.processutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:41:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:41:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2904002790' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.412 2 DEBUG oslo_concurrency.processutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.419 2 DEBUG nova.compute.provider_tree [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.444 2 DEBUG nova.scheduler.client.report [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.467 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2376: 305 pgs: 305 active+clean; 297 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 64 op/s
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.509 2 INFO nova.scheduler.client.report [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.593 2 DEBUG oslo_concurrency.lockutils [None req-b64a258e-c54f-4bd7-b12b-601fd1ec006b d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "91ef3edf-0b1e-4a6d-8ef1-af2687c58b74" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.071s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.713 2 DEBUG nova.compute.manager [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.713 2 DEBUG oslo_concurrency.lockutils [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.713 2 DEBUG oslo_concurrency.lockutils [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.714 2 DEBUG oslo_concurrency.lockutils [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.714 2 DEBUG nova.compute.manager [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] No waiting events found dispatching network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:41:38 np0005473739 nova_compute[259550]: 2025-10-07 14:41:38.714 2 WARNING nova.compute.manager [req-a369222c-a993-4631-8849-8c384fc2c2c3 req-73f351da-ffb3-4aba-b56f-4c45177ebad6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received unexpected event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 for instance with vm_state active and task_state None.
Oct  7 10:41:39 np0005473739 nova_compute[259550]: 2025-10-07 14:41:39.199 2 DEBUG nova.compute.manager [req-f12688cb-0402-4293-bf7d-3f6ff477a92a req-9037a809-a6c5-4719-90c5-764a152ea27a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Received event network-vif-deleted-a40ec757-407d-4375-b756-d4fb8f5664b4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:41:39 np0005473739 nova_compute[259550]: 2025-10-07 14:41:39.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:41:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2377: 305 pgs: 305 active+clean; 246 MiB data, 976 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 1.8 MiB/s wr, 115 op/s
Oct  7 10:41:41 np0005473739 nova_compute[259550]: 2025-10-07 14:41:41.324 2 DEBUG nova.compute.manager [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-changed-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:41:41 np0005473739 nova_compute[259550]: 2025-10-07 14:41:41.324 2 DEBUG nova.compute.manager [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Refreshing instance network info cache due to event network-changed-feac1006-7556-4dd6-9691-bc886a9410f3. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:41:41 np0005473739 nova_compute[259550]: 2025-10-07 14:41:41.325 2 DEBUG oslo_concurrency.lockutils [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:41:41 np0005473739 nova_compute[259550]: 2025-10-07 14:41:41.325 2 DEBUG oslo_concurrency.lockutils [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:41:41 np0005473739 nova_compute[259550]: 2025-10-07 14:41:41.325 2 DEBUG nova.network.neutron [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Refreshing network info cache for port feac1006-7556-4dd6-9691-bc886a9410f3 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:41:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2378: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 136 op/s
Oct  7 10:41:42 np0005473739 nova_compute[259550]: 2025-10-07 14:41:42.674 2 DEBUG nova.network.neutron [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updated VIF entry in instance network info cache for port feac1006-7556-4dd6-9691-bc886a9410f3. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:41:42 np0005473739 nova_compute[259550]: 2025-10-07 14:41:42.674 2 DEBUG nova.network.neutron [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updating instance_info_cache with network_info: [{"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:41:42 np0005473739 nova_compute[259550]: 2025-10-07 14:41:42.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:41:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:43 np0005473739 nova_compute[259550]: 2025-10-07 14:41:43.109 2 DEBUG oslo_concurrency.lockutils [req-ca6707b5-d108-4d8c-9e6c-088f3b0b2f1d req-8417ffc4-9f5a-426e-a3ef-52df87841aa7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-300a7ac9-5462-4db2-817f-07ea0b2d6aa6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:41:43 np0005473739 nova_compute[259550]: 2025-10-07 14:41:43.213 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848088.210345, 52ccd902-898f-4809-a231-be5760626c2c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:41:43 np0005473739 nova_compute[259550]: 2025-10-07 14:41:43.213 2 INFO nova.compute.manager [-] [instance: 52ccd902-898f-4809-a231-be5760626c2c] VM Stopped (Lifecycle Event)
Oct  7 10:41:43 np0005473739 nova_compute[259550]: 2025-10-07 14:41:43.565 2 DEBUG nova.compute.manager [None req-f0759171-d8e3-4e2f-97b4-40a63f66c802 - - - - - -] [instance: 52ccd902-898f-4809-a231-be5760626c2c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:41:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2379: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 546 KiB/s wr, 118 op/s
Oct  7 10:41:44 np0005473739 nova_compute[259550]: 2025-10-07 14:41:44.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:41:45 np0005473739 nova_compute[259550]: 2025-10-07 14:41:45.901 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:41:45 np0005473739 nova_compute[259550]: 2025-10-07 14:41:45.944 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  7 10:41:45 np0005473739 nova_compute[259550]: 2025-10-07 14:41:45.945 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid ca309873-104c-4cd4-a609-686d61823e0f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  7 10:41:45 np0005473739 nova_compute[259550]: 2025-10-07 14:41:45.945 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  7 10:41:45 np0005473739 nova_compute[259550]: 2025-10-07 14:41:45.945 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:45 np0005473739 nova_compute[259550]: 2025-10-07 14:41:45.946 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:45 np0005473739 nova_compute[259550]: 2025-10-07 14:41:45.946 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:45 np0005473739 nova_compute[259550]: 2025-10-07 14:41:45.946 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "ca309873-104c-4cd4-a609-686d61823e0f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:45 np0005473739 nova_compute[259550]: 2025-10-07 14:41:45.949 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:45 np0005473739 nova_compute[259550]: 2025-10-07 14:41:45.949 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:46 np0005473739 nova_compute[259550]: 2025-10-07 14:41:46.005 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.059s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:46 np0005473739 nova_compute[259550]: 2025-10-07 14:41:46.041 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "ca309873-104c-4cd4-a609-686d61823e0f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.094s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:46 np0005473739 nova_compute[259550]: 2025-10-07 14:41:46.042 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.093s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2380: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 101 op/s
Oct  7 10:41:47 np0005473739 nova_compute[259550]: 2025-10-07 14:41:47.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:41:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:47Z|01382|binding|INFO|Releasing lport eb0d4e5a-943a-459a-bddc-3f4beed43512 from this chassis (sb_readonly=0)
Oct  7 10:41:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:47Z|01383|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct  7 10:41:47 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:47Z|01384|binding|INFO|Releasing lport 795a08c5-66c3-453c-a5db-19a02c166ab7 from this chassis (sb_readonly=0)
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.000 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.001 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.001 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.001 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.002 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.003 2 INFO nova.compute.manager [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Terminating instance
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.004 2 DEBUG nova.compute.manager [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:48 np0005473739 kernel: tapbcef16bc-28 (unregistering): left promiscuous mode
Oct  7 10:41:48 np0005473739 NetworkManager[44949]: <info>  [1759848108.0764] device (tapbcef16bc-28): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:41:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:48Z|01385|binding|INFO|Releasing lport bcef16bc-28e8-4e97-a50b-7625a1917ee5 from this chassis (sb_readonly=0)
Oct  7 10:41:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:48Z|01386|binding|INFO|Setting lport bcef16bc-28e8-4e97-a50b-7625a1917ee5 down in Southbound
Oct  7 10:41:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.091 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:48Z|01387|binding|INFO|Removing iface tapbcef16bc-28 ovn-installed in OVS
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:48 np0005473739 podman[395929]: 2025-10-07 14:41:48.125416776 +0000 UTC m=+0.094599305 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.129 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:17:5f:df 10.100.0.27'], port_security=['fa:16:3e:17:5f:df 10.100.0.27'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.27/28', 'neutron:device_id': 'ca309873-104c-4cd4-a609-686d61823e0f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '84380b93-9bd8-46c6-8ce7-0eb2636c568f', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4af57759-0100-4ebb-81fb-af43dd151fff, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=bcef16bc-28e8-4e97-a50b-7625a1917ee5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.131 161536 INFO neutron.agent.ovn.metadata.agent [-] Port bcef16bc-28e8-4e97-a50b-7625a1917ee5 in datapath cae6e154-5797-4df5-a9e8-545cc6ed0188 unbound from our chassis#033[00m
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.132 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cae6e154-5797-4df5-a9e8-545cc6ed0188#033[00m
Oct  7 10:41:48 np0005473739 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Deactivated successfully.
Oct  7 10:41:48 np0005473739 systemd[1]: machine-qemu\x2d157\x2dinstance\x2d0000007c.scope: Consumed 18.245s CPU time.
Oct  7 10:41:48 np0005473739 systemd-machined[214580]: Machine qemu-157-instance-0000007c terminated.
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.150 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[873e8872-dab7-4938-a60e-8a04465ea047]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:48 np0005473739 podman[395930]: 2025-10-07 14:41:48.150905875 +0000 UTC m=+0.117822314 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller)
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.181 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[72bb1012-f9c3-437e-9cca-331f12739947]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.184 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[183a4228-4fe6-4a01-a095-ae8ff54624a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.212 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[38777716-74f6-4277-a5f1-8e350bf58880]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.235 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[34971164-166e-4ff5-b556-303245e968e0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcae6e154-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c9:47:d6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 15, 'tx_packets': 8, 'rx_bytes': 1222, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 15, 'tx_packets': 8, 'rx_bytes': 1222, 'tx_bytes': 528, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 381], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859712, 'reachable_time': 43552, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 10, 'inoctets': 872, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 4, 'outoctets': 304, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 10, 'outmcastpkts': 4, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 872, 'outmcastoctets': 304, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 10, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 4, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 395984, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.243 2 INFO nova.virt.libvirt.driver [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Instance destroyed successfully.#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.244 2 DEBUG nova.objects.instance [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid ca309873-104c-4cd4-a609-686d61823e0f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[54a79886-9e9e-424f-ac7c-b9f59fc07249]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcae6e154-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 859727, 'tstamp': 859727}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395992, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.17'], ['IFA_LOCAL', '10.100.0.17'], ['IFA_BROADCAST', '10.100.0.31'], ['IFA_LABEL', 'tapcae6e154-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 859730, 'tstamp': 859730}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 395992, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.255 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcae6e154-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.261 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcae6e154-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.261 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.261 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcae6e154-50, col_values=(('external_ids', {'iface-id': '795a08c5-66c3-453c-a5db-19a02c166ab7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:48.262 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.280 2 DEBUG nova.virt.libvirt.vif [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:40:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-2008963535',display_name='tempest-TestNetworkBasicOps-server-2008963535',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-2008963535',id=124,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFxD9h4ZkkgrlueJ/+Le33LVwl4afSGXrv2n9VWfucsKy4uVMm2emVKU0zzzyJ1CeXnsYkd5eNJ5QzJC0SrQJgU7HtUo0DaaWNwjsm6StKgjUKnYtMfa+OUzVPQdsT6WdA==',key_name='tempest-TestNetworkBasicOps-266700111',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:40:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-0d2v23l1',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:40:37Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=ca309873-104c-4cd4-a609-686d61823e0f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.280 2 DEBUG nova.network.os_vif_util [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "address": "fa:16:3e:17:5f:df", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.27", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbcef16bc-28", "ovs_interfaceid": "bcef16bc-28e8-4e97-a50b-7625a1917ee5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.281 2 DEBUG nova.network.os_vif_util [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.281 2 DEBUG os_vif [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.283 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.283 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbcef16bc-28, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.287 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.289 2 INFO os_vif [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:17:5f:df,bridge_name='br-int',has_traffic_filtering=True,id=bcef16bc-28e8-4e97-a50b-7625a1917ee5,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbcef16bc-28')#033[00m
Oct  7 10:41:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2381: 305 pgs: 305 active+clean; 246 MiB data, 970 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 28 KiB/s wr, 95 op/s
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.781 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848093.7793257, 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.782 2 INFO nova.compute.manager [-] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.805 2 DEBUG nova.compute.manager [None req-235d8d94-1902-4ab5-9aaa-cf0411cf4dc0 - - - - - -] [instance: 91ef3edf-0b1e-4a6d-8ef1-af2687c58b74] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.929 2 INFO nova.virt.libvirt.driver [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Deleting instance files /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f_del#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.929 2 INFO nova.virt.libvirt.driver [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Deletion of /var/lib/nova/instances/ca309873-104c-4cd4-a609-686d61823e0f_del complete#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.989 2 INFO nova.compute.manager [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Took 0.98 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.990 2 DEBUG oslo.service.loopingcall [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.991 2 DEBUG nova.compute.manager [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:41:48 np0005473739 nova_compute[259550]: 2025-10-07 14:41:48.991 2 DEBUG nova.network.neutron [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:41:49 np0005473739 nova_compute[259550]: 2025-10-07 14:41:49.004 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 10:41:49 np0005473739 nova_compute[259550]: 2025-10-07 14:41:49.020 2 DEBUG nova.compute.manager [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-unplugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:49 np0005473739 nova_compute[259550]: 2025-10-07 14:41:49.020 2 DEBUG oslo_concurrency.lockutils [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:49 np0005473739 nova_compute[259550]: 2025-10-07 14:41:49.021 2 DEBUG oslo_concurrency.lockutils [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:49 np0005473739 nova_compute[259550]: 2025-10-07 14:41:49.021 2 DEBUG oslo_concurrency.lockutils [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:49 np0005473739 nova_compute[259550]: 2025-10-07 14:41:49.021 2 DEBUG nova.compute.manager [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] No waiting events found dispatching network-vif-unplugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:41:49 np0005473739 nova_compute[259550]: 2025-10-07 14:41:49.021 2 DEBUG nova.compute.manager [req-6b09c162-84cd-4589-b8d2-b7a2a76dd840 req-046c49d4-3e39-4745-89a7-2defd004f12c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-unplugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:41:50 np0005473739 nova_compute[259550]: 2025-10-07 14:41:50.004 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:41:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2382: 305 pgs: 305 active+clean; 211 MiB data, 958 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 649 KiB/s wr, 129 op/s
Oct  7 10:41:50 np0005473739 nova_compute[259550]: 2025-10-07 14:41:50.727 2 DEBUG nova.network.neutron [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:41:50 np0005473739 nova_compute[259550]: 2025-10-07 14:41:50.860 2 INFO nova.compute.manager [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Took 1.87 seconds to deallocate network for instance.#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.002 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.003 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.122 2 DEBUG oslo_concurrency.processutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:41:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:51Z|00158|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:03:35:e2 10.100.0.14
Oct  7 10:41:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:51Z|00159|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:03:35:e2 10.100.0.14
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.203 2 DEBUG nova.compute.manager [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.204 2 DEBUG oslo_concurrency.lockutils [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "ca309873-104c-4cd4-a609-686d61823e0f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.205 2 DEBUG oslo_concurrency.lockutils [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.205 2 DEBUG oslo_concurrency.lockutils [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.205 2 DEBUG nova.compute.manager [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] No waiting events found dispatching network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.205 2 WARNING nova.compute.manager [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received unexpected event network-vif-plugged-bcef16bc-28e8-4e97-a50b-7625a1917ee5 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.206 2 DEBUG nova.compute.manager [req-ddfb8fa8-4d25-40ab-aa10-47a8d218ee90 req-a07566ce-5511-4cc4-a7ca-f309bce4380f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Received event network-vif-deleted-bcef16bc-28e8-4e97-a50b-7625a1917ee5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:41:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2345765719' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.599 2 DEBUG oslo_concurrency.processutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.605 2 DEBUG nova.compute.provider_tree [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.700 2 DEBUG nova.scheduler.client.report [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.822 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.820s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:51 np0005473739 nova_compute[259550]: 2025-10-07 14:41:51.953 2 INFO nova.scheduler.client.report [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance ca309873-104c-4cd4-a609-686d61823e0f#033[00m
Oct  7 10:41:52 np0005473739 nova_compute[259550]: 2025-10-07 14:41:52.090 2 DEBUG oslo_concurrency.lockutils [None req-76cfd374-ac86-4054-b0ac-6d5aa16982fb 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "ca309873-104c-4cd4-a609-686d61823e0f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2383: 305 pgs: 305 active+clean; 187 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 1.3 MiB/s wr, 119 op/s
Oct  7 10:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:41:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:41:52 np0005473739 nova_compute[259550]: 2025-10-07 14:41:52.902 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.888 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.889 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b" acquired by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.918 2 DEBUG nova.objects.instance [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'flavor' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.954 2 DEBUG nova.virt.libvirt.vif [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:42Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.954 2 DEBUG nova.network.os_vif_util [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.955 2 DEBUG nova.network.os_vif_util [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.958 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.961 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.963 2 DEBUG nova.virt.libvirt.driver [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Attempting to detach device tap359e9c20-ec from instance d0b10640-5492-4d8f-8b94-a49a15b6e702 from the persistent domain config. _detach_from_persistent /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2487#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.963 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:98:75:78"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <target dev="tap359e9c20-ec"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:41:53 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.970 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257#033[00m
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.975 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface>not found in domain: <domain type='kvm' id='152'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <name>instance-00000079</name>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <uuid>d0b10640-5492-4d8f-8b94-a49a15b6e702</uuid>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <nova:name>tempest-TestNetworkBasicOps-server-666494646</nova:name>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:40:13</nova:creationTime>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <nova:port uuid="b7bf5de8-3ba0-43cd-a839-d8812cbe4276">
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <nova:port uuid="359e9c20-ec4d-4bc9-bfc1-93f3464bf09b">
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:41:53 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <entry name='serial'>d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <entry name='uuid'>d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk' index='2'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config' index='1'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:6d:ee:73'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target dev='tapb7bf5de8-3b'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:98:75:78'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target dev='tap359e9c20-ec'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='net1'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log' append='off'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/0'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log' append='off'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c211,c671</label>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c211,c671</imagelabel>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:41:53 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:41:53 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.977 2 INFO nova.virt.libvirt.driver [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully detached device tap359e9c20-ec from instance d0b10640-5492-4d8f-8b94-a49a15b6e702 from the persistent domain config.
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.977 2 DEBUG nova.virt.libvirt.driver [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] (1/8): Attempting to detach device tap359e9c20-ec with device alias net1 from instance d0b10640-5492-4d8f-8b94-a49a15b6e702 from the live domain config. _detach_from_live_with_retry /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2523
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.978 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] detach device xml: <interface type="ethernet">
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <mac address="fa:16:3e:98:75:78"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <model type="virtio"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <mtu size="1442"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]:  <target dev="tap359e9c20-ec"/>
Oct  7 10:41:53 np0005473739 nova_compute[259550]: </interface>
Oct  7 10:41:53 np0005473739 nova_compute[259550]: detach_device /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:465
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:41:53 np0005473739 nova_compute[259550]: 2025-10-07 14:41:53.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  7 10:41:54 np0005473739 kernel: tap359e9c20-ec (unregistering): left promiscuous mode
Oct  7 10:41:54 np0005473739 NetworkManager[44949]: <info>  [1759848114.0761] device (tap359e9c20-ec): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.085 2 DEBUG nova.virt.libvirt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Received event <DeviceRemovedEvent: 1759848114.085082, d0b10640-5492-4d8f-8b94-a49a15b6e702 => net1> from libvirt while the driver is waiting for it; dispatched. emit_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2370
Oct  7 10:41:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:54Z|01388|binding|INFO|Releasing lport 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b from this chassis (sb_readonly=0)
Oct  7 10:41:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:54Z|01389|binding|INFO|Setting lport 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b down in Southbound
Oct  7 10:41:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:54Z|01390|binding|INFO|Removing iface tap359e9c20-ec ovn-installed in OVS
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.088 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.089 2 DEBUG nova.virt.libvirt.driver [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Start waiting for the detach event from libvirt for device tap359e9c20-ec with device alias net1 for instance d0b10640-5492-4d8f-8b94-a49a15b6e702 _detach_from_live_and_wait_for_event /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:2599
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.090 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] looking for interface given config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface> get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:257
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.094 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] interface for config: <interface type="ethernet"><mac address="fa:16:3e:98:75:78"/><model type="virtio"/><driver name="vhost" rx_queue_size="512"/><mtu size="1442"/><target dev="tap359e9c20-ec"/></interface>not found in domain: <domain type='kvm' id='152'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <name>instance-00000079</name>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <uuid>d0b10640-5492-4d8f-8b94-a49a15b6e702</uuid>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1" xmlns:instance="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:name>tempest-TestNetworkBasicOps-server-666494646</nova:name>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:40:13</nova:creationTime>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:port uuid="b7bf5de8-3ba0-43cd-a839-d8812cbe4276">
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:port uuid="359e9c20-ec4d-4bc9-bfc1-93f3464bf09b">
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.23" ipVersion="4"/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:41:54 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <memory unit='KiB'>131072</memory>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <currentMemory unit='KiB'>131072</currentMemory>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <vcpu placement='static'>1</vcpu>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <resource>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <partition>/machine</partition>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </resource>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <sysinfo type='smbios'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <entry name='manufacturer'>RDO</entry>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <entry name='product'>OpenStack Compute</entry>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <entry name='version'>27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <entry name='serial'>d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <entry name='uuid'>d0b10640-5492-4d8f-8b94-a49a15b6e702</entry>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <entry name='family'>Virtual Machine</entry>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <type arch='x86_64' machine='pc-q35-rhel9.6.0'>hvm</type>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <boot dev='hd'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <smbios mode='sysinfo'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <vmcoreinfo state='on'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <cpu mode='custom' match='exact' check='full'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <model fallback='forbid'>EPYC-Rome</model>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <vendor>AMD</vendor>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <topology sockets='1' dies='1' clusters='1' cores='1' threads='1'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='x2apic'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc-deadline'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='hypervisor'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='tsc_adjust'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='spec-ctrl'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='stibp'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='arch-capabilities'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='ssbd'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='cmp_legacy'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='overflow-recov'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='succor'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='ibrs'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='amd-ssbd'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='virt-ssbd'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='disable' name='lbrv'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='disable' name='tsc-scale'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='disable' name='vmcb-clean'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='disable' name='flushbyasid'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pause-filter'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='disable' name='pfthreshold'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svme-addr-chk'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='lfence-always-serializing'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='rdctl-no'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='skip-l1dfl-vmentry'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='mds-no'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='pschange-mc-no'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='gds-no'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='rfds-no'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='disable' name='xsaves'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='disable' name='svm'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='require' name='topoext'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='disable' name='npt'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <feature policy='disable' name='nrip-save'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <clock offset='utc'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <timer name='pit' tickpolicy='delay'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <timer name='rtc' tickpolicy='catchup'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <timer name='hpet' present='no'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <on_poweroff>destroy</on_poweroff>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <on_reboot>restart</on_reboot>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <on_crash>destroy</on_crash>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <emulator>/usr/libexec/qemu-kvm</emulator>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <disk type='network' device='disk'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk' index='2'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target dev='vda' bus='virtio'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='virtio-disk0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <disk type='network' device='cdrom'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <driver name='qemu' type='raw' cache='none'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <auth username='openstack'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:        <secret type='ceph' uuid='82044f27-a8da-5b2a-a297-ff6afc620e1f'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <source protocol='rbd' name='vms/d0b10640-5492-4d8f-8b94-a49a15b6e702_disk.config' index='1'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:        <host name='192.168.122.100' port='6789'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target dev='sda' bus='sata'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <readonly/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='sata0-0-0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='0' model='pcie-root'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pcie.0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='1' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='1' port='0x10'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.1'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='2' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='2' port='0x11'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.2'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='3' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='3' port='0x12'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.3'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='4' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='4' port='0x13'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.4'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='5' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='5' port='0x14'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.5'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='6' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='6' port='0x15'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.6'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='7' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='7' port='0x16'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.7'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x6'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='8' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='8' port='0x17'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.8'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x7'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='9' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='9' port='0x18'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.9'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0' multifunction='on'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='10' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='10' port='0x19'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.10'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x1'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='11' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='11' port='0x1a'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.11'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x2'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='12' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='12' port='0x1b'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.12'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x3'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='13' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='13' port='0x1c'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.13'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x4'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='14' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='14' port='0x1d'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.14'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x5'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='15' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='15' port='0x1e'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.15'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x6'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='16' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='16' port='0x1f'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.16'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x7'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='17' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='17' port='0x20'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.17'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0' multifunction='on'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='18' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='18' port='0x21'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.18'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x1'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='19' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='19' port='0x22'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.19'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x2'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='20' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='20' port='0x23'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.20'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x3'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='21' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='21' port='0x24'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.21'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x4'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='22' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='22' port='0x25'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.22'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x5'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='23' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='23' port='0x26'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.23'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x6'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='24' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='24' port='0x27'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.24'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='25' model='pcie-root-port'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-root-port'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target chassis='25' port='0x28'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.25'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='pci' index='26' model='pcie-to-pci-bridge'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model name='pcie-pci-bridge'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='pci.26'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='usb' index='0' model='piix3-uhci'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='usb'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x1a' slot='0x01' function='0x0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <controller type='sata' index='0'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='ide'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x1f' function='0x2'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </controller>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <interface type='ethernet'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <mac address='fa:16:3e:6d:ee:73'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target dev='tapb7bf5de8-3b'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model type='virtio'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <driver name='vhost' rx_queue_size='512'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <mtu size='1442'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='net0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <serial type='pty'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log' append='off'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target type='isa-serial' port='0'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:        <model name='isa-serial'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      </target>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <console type='pty' tty='/dev/pts/0'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <source path='/dev/pts/0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <log file='/var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702/console.log' append='off'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <target type='serial' port='0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='serial0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </console>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <input type='tablet' bus='usb'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='input0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='usb' bus='0' port='1'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <input type='mouse' bus='ps2'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='input1'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <input type='keyboard' bus='ps2'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='input2'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </input>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <graphics type='vnc' port='5900' autoport='yes' listen='::0'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <listen type='address' address='::0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </graphics>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <audio id='1' type='none'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <model type='virtio' heads='1' primary='yes'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='video0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <watchdog model='itco' action='reset'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='watchdog0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </watchdog>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <memballoon model='virtio'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <stats period='10'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='balloon0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <rng model='virtio'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <backend model='random'>/dev/urandom</backend>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <alias name='rng0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <address type='pci' domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='selinux' relabel='yes'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <label>system_u:system_r:svirt_t:s0:c211,c671</label>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <imagelabel>system_u:object_r:svirt_image_t:s0:c211,c671</imagelabel>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <seclabel type='dynamic' model='dac' relabel='yes'>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <label>+107:+107</label>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <imagelabel>+107:+107</imagelabel>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </seclabel>
Oct  7 10:41:54 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:41:54 np0005473739 nova_compute[259550]: get_interface_by_cfg /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:282#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.096 2 INFO nova.virt.libvirt.driver [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully detached device tap359e9c20-ec from instance d0b10640-5492-4d8f-8b94-a49a15b6e702 from the live domain config.#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.097 2 DEBUG nova.virt.libvirt.vif [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=<?>,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=<?>,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state=None,terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:42Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": 
{}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.097 2 DEBUG nova.network.os_vif_util [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "address": "fa:16:3e:98:75:78", "network": {"id": "cae6e154-5797-4df5-a9e8-545cc6ed0188", "bridge": "br-int", "label": "tempest-network-smoke--1366118304", "subnets": [{"cidr": "10.100.0.16/28", "dns": [], "gateway": {"address": null, "type": "gateway", "version": null, "meta": {}}, "ips": [{"address": "10.100.0.23", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap359e9c20-ec", "ovs_interfaceid": "359e9c20-ec4d-4bc9-bfc1-93f3464bf09b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.098 2 DEBUG nova.network.os_vif_util [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.098 2 DEBUG os_vif [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.100 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.101 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap359e9c20-ec, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.104 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.106 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.108 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:75:78 10.100.0.23', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.23/28', 'neutron:device_id': 'd0b10640-5492-4d8f-8b94-a49a15b6e702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '5', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4af57759-0100-4ebb-81fb-af43dd151fff, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.108 2 INFO os_vif [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:75:78,bridge_name='br-int',has_traffic_filtering=True,id=359e9c20-ec4d-4bc9-bfc1-93f3464bf09b,network=Network(cae6e154-5797-4df5-a9e8-545cc6ed0188),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap359e9c20-ec')#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.109 2 DEBUG nova.virt.libvirt.guest [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] set metadata xml: <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:name>tempest-TestNetworkBasicOps-server-666494646</nova:name>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:creationTime>2025-10-07 14:41:54</nova:creationTime>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:flavor name="m1.nano">
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:memory>128</nova:memory>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:disk>1</nova:disk>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:swap>0</nova:swap>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:vcpus>1</nova:vcpus>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </nova:flavor>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:owner>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </nova:owner>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  <nova:ports>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    <nova:port uuid="b7bf5de8-3ba0-43cd-a839-d8812cbe4276">
Oct  7 10:41:54 np0005473739 nova_compute[259550]:      <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:    </nova:port>
Oct  7 10:41:54 np0005473739 nova_compute[259550]:  </nova:ports>
Oct  7 10:41:54 np0005473739 nova_compute[259550]: </nova:instance>
Oct  7 10:41:54 np0005473739 nova_compute[259550]: set_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py:359#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.110 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b in datapath cae6e154-5797-4df5-a9e8-545cc6ed0188 unbound from our chassis#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.111 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cae6e154-5797-4df5-a9e8-545cc6ed0188, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.111 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d14b758d-a9d7-4f6a-811b-4e6258a807c1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.112 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188 namespace which is not needed anymore#033[00m
Oct  7 10:41:54 np0005473739 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [NOTICE]   (391921) : haproxy version is 2.8.14-c23fe91
Oct  7 10:41:54 np0005473739 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [NOTICE]   (391921) : path to executable is /usr/sbin/haproxy
Oct  7 10:41:54 np0005473739 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [WARNING]  (391921) : Exiting Master process...
Oct  7 10:41:54 np0005473739 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [ALERT]    (391921) : Current worker (391923) exited with code 143 (Terminated)
Oct  7 10:41:54 np0005473739 neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188[391917]: [WARNING]  (391921) : All workers exited. Exiting... (0)
Oct  7 10:41:54 np0005473739 systemd[1]: libpod-397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e.scope: Deactivated successfully.
Oct  7 10:41:54 np0005473739 conmon[391917]: conmon 397322e542acc104d0b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e.scope/container/memory.events
Oct  7 10:41:54 np0005473739 podman[396061]: 2025-10-07 14:41:54.247087473 +0000 UTC m=+0.047789232 container died 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 10:41:54 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e-userdata-shm.mount: Deactivated successfully.
Oct  7 10:41:54 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1a41d1aa38f8abe0fef319a4a94399d666f0b1094fa534bf42c28aa629de3903-merged.mount: Deactivated successfully.
Oct  7 10:41:54 np0005473739 podman[396061]: 2025-10-07 14:41:54.292058578 +0000 UTC m=+0.092760337 container cleanup 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  7 10:41:54 np0005473739 systemd[1]: libpod-conmon-397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e.scope: Deactivated successfully.
Oct  7 10:41:54 np0005473739 podman[396087]: 2025-10-07 14:41:54.355788802 +0000 UTC m=+0.043153218 container remove 397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.361 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e22b5d0f-3f2c-418d-99d0-1d699311b0fd]: (4, ('Tue Oct  7 02:41:54 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188 (397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e)\n397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e\nTue Oct  7 02:41:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188 (397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e)\n397322e542acc104d0b1e9dfd27b51867395e09895f7cf8a6d0b71640bc5076e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.363 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[082e81f2-1e9a-484b-8417-bb7d47aa0d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.364 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcae6e154-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:54 np0005473739 kernel: tapcae6e154-50: left promiscuous mode
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.385 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de2ad207-e10d-4da2-8f6d-999a4d861dd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.401 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8f9db31a-f4e4-435d-a6b2-d03d8f5d16a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.403 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[71fc7d3f-58fe-4df5-af9b-f72b489ae88c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.423 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7383bd17-d7fb-488e-bd6e-b20882ed9ee4]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 859704, 'reachable_time': 22272, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396104, 'error': None, 'target': 'ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.426 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cae6e154-5797-4df5-a9e8-545cc6ed0188 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:41:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:54.426 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[03cd7af5-8532-4c65-853b-c4d1a274fb7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:54 np0005473739 systemd[1]: run-netns-ovnmeta\x2dcae6e154\x2d5797\x2d4df5\x2da9e8\x2d545cc6ed0188.mount: Deactivated successfully.
Oct  7 10:41:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2384: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.973 2 DEBUG nova.compute.manager [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-unplugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.974 2 DEBUG oslo_concurrency.lockutils [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.974 2 DEBUG oslo_concurrency.lockutils [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.974 2 DEBUG oslo_concurrency.lockutils [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.975 2 DEBUG nova.compute.manager [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-unplugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:41:54 np0005473739 nova_compute[259550]: 2025-10-07 14:41:54.975 2 WARNING nova.compute.manager [req-b9e0894b-3f56-4add-a90b-75ed98e115ae req-05c35d47-dafe-4685-a73d-5be40bcb3f83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-unplugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b for instance with vm_state active and task_state None.#033[00m
Oct  7 10:41:55 np0005473739 nova_compute[259550]: 2025-10-07 14:41:55.286 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:41:55 np0005473739 nova_compute[259550]: 2025-10-07 14:41:55.287 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:41:55 np0005473739 nova_compute[259550]: 2025-10-07 14:41:55.288 2 DEBUG nova.network.neutron [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:41:55 np0005473739 nova_compute[259550]: 2025-10-07 14:41:55.976 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:41:55 np0005473739 nova_compute[259550]: 2025-10-07 14:41:55.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:41:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2385: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 93 op/s
Oct  7 10:41:56 np0005473739 nova_compute[259550]: 2025-10-07 14:41:56.780 2 INFO nova.network.neutron [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Port 359e9c20-ec4d-4bc9-bfc1-93f3464bf09b from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  7 10:41:56 np0005473739 nova_compute[259550]: 2025-10-07 14:41:56.780 2 DEBUG nova.network.neutron [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:41:56 np0005473739 nova_compute[259550]: 2025-10-07 14:41:56.831 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:41:56 np0005473739 nova_compute[259550]: 2025-10-07 14:41:56.872 2 DEBUG oslo_concurrency.lockutils [None req-97c599ae-04c1-4f0a-bedc-897944b2d2aa 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "interface-d0b10640-5492-4d8f-8b94-a49a15b6e702-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b" "released" by "nova.compute.manager.ComputeManager.detach_interface.<locals>.do_detach_interface" :: held 2.983s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:56 np0005473739 nova_compute[259550]: 2025-10-07 14:41:56.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:41:56 np0005473739 nova_compute[259550]: 2025-10-07 14:41:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:41:56 np0005473739 nova_compute[259550]: 2025-10-07 14:41:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:41:56 np0005473739 nova_compute[259550]: 2025-10-07 14:41:56.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 10:41:57 np0005473739 nova_compute[259550]: 2025-10-07 14:41:57.071 2 DEBUG nova.compute.manager [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:57 np0005473739 nova_compute[259550]: 2025-10-07 14:41:57.071 2 DEBUG oslo_concurrency.lockutils [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:57 np0005473739 nova_compute[259550]: 2025-10-07 14:41:57.072 2 DEBUG oslo_concurrency.lockutils [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:57 np0005473739 nova_compute[259550]: 2025-10-07 14:41:57.072 2 DEBUG oslo_concurrency.lockutils [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:57 np0005473739 nova_compute[259550]: 2025-10-07 14:41:57.072 2 DEBUG nova.compute.manager [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:41:57 np0005473739 nova_compute[259550]: 2025-10-07 14:41:57.073 2 WARNING nova.compute.manager [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-plugged-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b for instance with vm_state active and task_state None.#033[00m
Oct  7 10:41:57 np0005473739 nova_compute[259550]: 2025-10-07 14:41:57.073 2 DEBUG nova.compute.manager [req-0b29dc83-b7d5-4cba-bbdc-3ca7622a601b req-21bd5bb5-67c4-4e64-815c-b4723873e9e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-deleted-359e9c20-ec4d-4bc9-bfc1-93f3464bf09b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:57Z|01391|binding|INFO|Releasing lport eb0d4e5a-943a-459a-bddc-3f4beed43512 from this chassis (sb_readonly=0)
Oct  7 10:41:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:57Z|01392|binding|INFO|Releasing lport 406dfedc-cd29-46b7-9b91-7b006ecd582c from this chassis (sb_readonly=0)
Oct  7 10:41:57 np0005473739 nova_compute[259550]: 2025-10-07 14:41:57.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:57 np0005473739 nova_compute[259550]: 2025-10-07 14:41:57.905 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:41:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2386: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.536 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.537 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.537 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.538 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.538 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.539 2 INFO nova.compute.manager [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Terminating instance#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.540 2 DEBUG nova.compute.manager [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:41:58 np0005473739 kernel: tapb7bf5de8-3b (unregistering): left promiscuous mode
Oct  7 10:41:58 np0005473739 NetworkManager[44949]: <info>  [1759848118.6134] device (tapb7bf5de8-3b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:58Z|01393|binding|INFO|Releasing lport b7bf5de8-3ba0-43cd-a839-d8812cbe4276 from this chassis (sb_readonly=0)
Oct  7 10:41:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:58Z|01394|binding|INFO|Setting lport b7bf5de8-3ba0-43cd-a839-d8812cbe4276 down in Southbound
Oct  7 10:41:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:41:58Z|01395|binding|INFO|Removing iface tapb7bf5de8-3b ovn-installed in OVS
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.640 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:6d:ee:73 10.100.0.7'], port_security=['fa:16:3e:6d:ee:73 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': 'd0b10640-5492-4d8f-8b94-a49a15b6e702', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '8da4ff39-44c8-491e-a635-6be0569feae9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0f1c4ac2-8721-482c-b415-fdc398b5953a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b7bf5de8-3ba0-43cd-a839-d8812cbe4276) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.641 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b7bf5de8-3ba0-43cd-a839-d8812cbe4276 in datapath 580b59e0-70f8-44c3-a35f-9c4f88691f96 unbound from our chassis#033[00m
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.643 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 580b59e0-70f8-44c3-a35f-9c4f88691f96, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[25fb8fb0-f778-4f80-b583-719bdf96f9c3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.644 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96 namespace which is not needed anymore#033[00m
Oct  7 10:41:58 np0005473739 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Deactivated successfully.
Oct  7 10:41:58 np0005473739 systemd[1]: machine-qemu\x2d152\x2dinstance\x2d00000079.scope: Consumed 19.721s CPU time.
Oct  7 10:41:58 np0005473739 systemd-machined[214580]: Machine qemu-152-instance-00000079 terminated.
Oct  7 10:41:58 np0005473739 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [NOTICE]   (390942) : haproxy version is 2.8.14-c23fe91
Oct  7 10:41:58 np0005473739 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [NOTICE]   (390942) : path to executable is /usr/sbin/haproxy
Oct  7 10:41:58 np0005473739 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [WARNING]  (390942) : Exiting Master process...
Oct  7 10:41:58 np0005473739 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [WARNING]  (390942) : Exiting Master process...
Oct  7 10:41:58 np0005473739 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [ALERT]    (390942) : Current worker (390944) exited with code 143 (Terminated)
Oct  7 10:41:58 np0005473739 neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96[390938]: [WARNING]  (390942) : All workers exited. Exiting... (0)
Oct  7 10:41:58 np0005473739 systemd[1]: libpod-150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352.scope: Deactivated successfully.
Oct  7 10:41:58 np0005473739 podman[396128]: 2025-10-07 14:41:58.779399164 +0000 UTC m=+0.047316909 container died 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.794 2 INFO nova.virt.libvirt.driver [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Instance destroyed successfully.#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.795 2 DEBUG nova.objects.instance [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid d0b10640-5492-4d8f-8b94-a49a15b6e702 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:41:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352-userdata-shm.mount: Deactivated successfully.
Oct  7 10:41:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ea80f1f94cd7157cc848e1780ed5de7036ed725f787faa3cc6a7f92f3662579a-merged.mount: Deactivated successfully.
Oct  7 10:41:58 np0005473739 podman[396128]: 2025-10-07 14:41:58.823224558 +0000 UTC m=+0.091142303 container cleanup 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.833 2 DEBUG nova.virt.libvirt.vif [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:39:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-666494646',display_name='tempest-TestNetworkBasicOps-server-666494646',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-666494646',id=121,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBK1spSCFixXNsUFx/gNeoHExW+DY1/4E+O5JhigGItAWYtVOtc4GVbv0L/rgo0glCGTWIkGxAFExfWpDWhQ8tY55XDjxFuD7v7bFZwlCzmx6XPgY1bFJ0yFMdc8TPdCuzA==',key_name='tempest-TestNetworkBasicOps-45009849',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:39:42Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-1mnwx928',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:39:42Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=d0b10640-5492-4d8f-8b94-a49a15b6e702,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.833 2 DEBUG nova.network.os_vif_util [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.834 2 DEBUG nova.network.os_vif_util [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.834 2 DEBUG os_vif [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.836 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7bf5de8-3b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:41:58 np0005473739 systemd[1]: libpod-conmon-150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352.scope: Deactivated successfully.
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.839 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.842 2 INFO os_vif [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:6d:ee:73,bridge_name='br-int',has_traffic_filtering=True,id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276,network=Network(580b59e0-70f8-44c3-a35f-9c4f88691f96),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb7bf5de8-3b')#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.884 2 DEBUG nova.compute.manager [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-unplugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.884 2 DEBUG oslo_concurrency.lockutils [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.885 2 DEBUG oslo_concurrency.lockutils [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.886 2 DEBUG oslo_concurrency.lockutils [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.887 2 DEBUG nova.compute.manager [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-unplugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.887 2 DEBUG nova.compute.manager [req-e6514af9-1a44-40af-9e4a-585dcbfea45f req-9bc2adff-875a-4155-bc24-8b10d0cc38ec 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-unplugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  7 10:41:58 np0005473739 podman[396167]: 2025-10-07 14:41:58.887411595 +0000 UTC m=+0.043929780 container remove 150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.897 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cf69e57d-73f3-4510-8142-8c5fa5748157]: (4, ('Tue Oct  7 02:41:58 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96 (150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352)\n150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352\nTue Oct  7 02:41:58 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96 (150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352)\n150c91d3edccd6d94adff252ff776085c02da1df8c9e96b87febc1db81826352\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.901 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42f585da-f860-4c78-a0f8-9955b5adb12d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.902 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap580b59e0-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:41:58 np0005473739 kernel: tap580b59e0-70: left promiscuous mode
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.907 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:41:58 np0005473739 nova_compute[259550]: 2025-10-07 14:41:58.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.925 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[424c26c8-037c-4603-8b9d-66ccf8469d16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.961 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[28ddae6e-e11c-496b-b4f9-e111c4065145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.962 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9c226ca5-8b2e-4c67-a4b9-142a719d5f18]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.978 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b1cf8a42-3960-42e0-8bf7-7c2ffd076ffc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 856489, 'reachable_time': 40316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396200, 'error': None, 'target': 'ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.981 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-580b59e0-70f8-44c3-a35f-9c4f88691f96 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  7 10:41:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:41:58.981 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[462adcf7-8b71-4a08-b9fb-1b723cc08aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:41:58 np0005473739 systemd[1]: run-netns-ovnmeta\x2d580b59e0\x2d70f8\x2d44c3\x2da35f\x2d9c4f88691f96.mount: Deactivated successfully.
Oct  7 10:41:59 np0005473739 nova_compute[259550]: 2025-10-07 14:41:59.297 2 INFO nova.virt.libvirt.driver [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Deleting instance files /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702_del
Oct  7 10:41:59 np0005473739 nova_compute[259550]: 2025-10-07 14:41:59.299 2 INFO nova.virt.libvirt.driver [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Deletion of /var/lib/nova/instances/d0b10640-5492-4d8f-8b94-a49a15b6e702_del complete
Oct  7 10:41:59 np0005473739 nova_compute[259550]: 2025-10-07 14:41:59.315 2 DEBUG nova.compute.manager [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:41:59 np0005473739 nova_compute[259550]: 2025-10-07 14:41:59.315 2 DEBUG nova.compute.manager [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing instance network info cache due to event network-changed-b7bf5de8-3ba0-43cd-a839-d8812cbe4276. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:41:59 np0005473739 nova_compute[259550]: 2025-10-07 14:41:59.316 2 DEBUG oslo_concurrency.lockutils [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:41:59 np0005473739 nova_compute[259550]: 2025-10-07 14:41:59.316 2 DEBUG oslo_concurrency.lockutils [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:41:59 np0005473739 nova_compute[259550]: 2025-10-07 14:41:59.317 2 DEBUG nova.network.neutron [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Refreshing network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:41:59 np0005473739 nova_compute[259550]: 2025-10-07 14:41:59.399 2 INFO nova.compute.manager [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Took 0.86 seconds to destroy the instance on the hypervisor.
Oct  7 10:41:59 np0005473739 nova_compute[259550]: 2025-10-07 14:41:59.400 2 DEBUG oslo.service.loopingcall [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:41:59 np0005473739 nova_compute[259550]: 2025-10-07 14:41:59.400 2 DEBUG nova.compute.manager [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:41:59 np0005473739 nova_compute[259550]: 2025-10-07 14:41:59.400 2 DEBUG nova.network.neutron [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:00.074 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:00.075 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:42:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:00.076 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:42:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2387: 305 pgs: 305 active+clean; 200 MiB data, 953 MiB used, 59 GiB / 60 GiB avail; 348 KiB/s rd, 2.1 MiB/s wr, 95 op/s
Oct  7 10:42:00 np0005473739 nova_compute[259550]: 2025-10-07 14:42:00.966 2 DEBUG nova.compute.manager [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:42:00 np0005473739 nova_compute[259550]: 2025-10-07 14:42:00.966 2 DEBUG oslo_concurrency.lockutils [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:42:00 np0005473739 nova_compute[259550]: 2025-10-07 14:42:00.966 2 DEBUG oslo_concurrency.lockutils [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:42:00 np0005473739 nova_compute[259550]: 2025-10-07 14:42:00.967 2 DEBUG oslo_concurrency.lockutils [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:42:00 np0005473739 nova_compute[259550]: 2025-10-07 14:42:00.967 2 DEBUG nova.compute.manager [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] No waiting events found dispatching network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:42:00 np0005473739 nova_compute[259550]: 2025-10-07 14:42:00.967 2 WARNING nova.compute.manager [req-735d5aa7-a26b-40ba-8dfa-48efbe7a6a3b req-438c40d3-fecf-4d39-a78a-6dc86b49e2ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received unexpected event network-vif-plugged-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 for instance with vm_state active and task_state deleting.
Oct  7 10:42:01 np0005473739 nova_compute[259550]: 2025-10-07 14:42:01.813 2 DEBUG nova.network.neutron [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:42:01 np0005473739 nova_compute[259550]: 2025-10-07 14:42:01.933 2 DEBUG nova.network.neutron [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updated VIF entry in instance network info cache for port b7bf5de8-3ba0-43cd-a839-d8812cbe4276. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:42:01 np0005473739 nova_compute[259550]: 2025-10-07 14:42:01.934 2 DEBUG nova.network.neutron [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [{"id": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "address": "fa:16:3e:6d:ee:73", "network": {"id": "580b59e0-70f8-44c3-a35f-9c4f88691f96", "bridge": "br-int", "label": "tempest-network-smoke--349911043", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb7bf5de8-3b", "ovs_interfaceid": "b7bf5de8-3ba0-43cd-a839-d8812cbe4276", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:42:01 np0005473739 nova_compute[259550]: 2025-10-07 14:42:01.996 2 DEBUG nova.compute.manager [req-e282eb86-0090-487d-9b1f-ab629f4ef9ed req-25158942-42c3-464a-aa12-64d21e797f1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Received event network-vif-deleted-b7bf5de8-3ba0-43cd-a839-d8812cbe4276 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:42:01 np0005473739 nova_compute[259550]: 2025-10-07 14:42:01.996 2 INFO nova.compute.manager [req-e282eb86-0090-487d-9b1f-ab629f4ef9ed req-25158942-42c3-464a-aa12-64d21e797f1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Neutron deleted interface b7bf5de8-3ba0-43cd-a839-d8812cbe4276; detaching it from the instance and deleting it from the info cache
Oct  7 10:42:01 np0005473739 nova_compute[259550]: 2025-10-07 14:42:01.996 2 DEBUG nova.network.neutron [req-e282eb86-0090-487d-9b1f-ab629f4ef9ed req-25158942-42c3-464a-aa12-64d21e797f1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:42:02 np0005473739 nova_compute[259550]: 2025-10-07 14:42:02.004 2 INFO nova.compute.manager [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Took 2.60 seconds to deallocate network for instance.
Oct  7 10:42:02 np0005473739 nova_compute[259550]: 2025-10-07 14:42:02.028 2 DEBUG nova.compute.manager [req-e282eb86-0090-487d-9b1f-ab629f4ef9ed req-25158942-42c3-464a-aa12-64d21e797f1a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Detach interface failed, port_id=b7bf5de8-3ba0-43cd-a839-d8812cbe4276, reason: Instance d0b10640-5492-4d8f-8b94-a49a15b6e702 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  7 10:42:02 np0005473739 nova_compute[259550]: 2025-10-07 14:42:02.051 2 DEBUG oslo_concurrency.lockutils [req-94ce6b3c-3421-4a86-90e3-4651a0d807f6 req-4db6cfee-bdb4-4a32-92f1-2b30070d9e7a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-d0b10640-5492-4d8f-8b94-a49a15b6e702" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:42:02 np0005473739 nova_compute[259550]: 2025-10-07 14:42:02.099 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:42:02 np0005473739 nova_compute[259550]: 2025-10-07 14:42:02.100 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:42:02 np0005473739 nova_compute[259550]: 2025-10-07 14:42:02.317 2 DEBUG oslo_concurrency.processutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:42:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2388: 305 pgs: 305 active+clean; 180 MiB data, 947 MiB used, 59 GiB / 60 GiB avail; 316 KiB/s rd, 1.5 MiB/s wr, 75 op/s
Oct  7 10:42:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:42:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976224580' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:42:02 np0005473739 nova_compute[259550]: 2025-10-07 14:42:02.764 2 DEBUG oslo_concurrency.processutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:42:02 np0005473739 nova_compute[259550]: 2025-10-07 14:42:02.772 2 DEBUG nova.compute.provider_tree [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:42:02 np0005473739 nova_compute[259550]: 2025-10-07 14:42:02.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:42:03 np0005473739 nova_compute[259550]: 2025-10-07 14:42:03.003 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:42:03 np0005473739 nova_compute[259550]: 2025-10-07 14:42:03.004 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  7 10:42:03 np0005473739 nova_compute[259550]: 2025-10-07 14:42:03.078 2 DEBUG nova.scheduler.client.report [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:42:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:03 np0005473739 nova_compute[259550]: 2025-10-07 14:42:03.240 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848108.2395742, ca309873-104c-4cd4-a609-686d61823e0f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:42:03 np0005473739 nova_compute[259550]: 2025-10-07 14:42:03.240 2 INFO nova.compute.manager [-] [instance: ca309873-104c-4cd4-a609-686d61823e0f] VM Stopped (Lifecycle Event)
Oct  7 10:42:03 np0005473739 nova_compute[259550]: 2025-10-07 14:42:03.291 2 DEBUG nova.compute.manager [None req-6126e4f9-ef14-4f5e-9eee-fa882e7065d6 - - - - - -] [instance: ca309873-104c-4cd4-a609-686d61823e0f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:42:03 np0005473739 nova_compute[259550]: 2025-10-07 14:42:03.306 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.206s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:42:03 np0005473739 nova_compute[259550]: 2025-10-07 14:42:03.325 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  7 10:42:03 np0005473739 nova_compute[259550]: 2025-10-07 14:42:03.351 2 INFO nova.scheduler.client.report [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance d0b10640-5492-4d8f-8b94-a49a15b6e702
Oct  7 10:42:03 np0005473739 nova_compute[259550]: 2025-10-07 14:42:03.464 2 DEBUG oslo_concurrency.lockutils [None req-a44689b5-a83c-4e3b-b7dd-ee6e35f6b721 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "d0b10640-5492-4d8f-8b94-a49a15b6e702" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.927s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:42:03 np0005473739 nova_compute[259550]: 2025-10-07 14:42:03.838 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:42:04 np0005473739 nova_compute[259550]: 2025-10-07 14:42:04.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:42:04 np0005473739 podman[396224]: 2025-10-07 14:42:04.090103183 +0000 UTC m=+0.072774915 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:42:04 np0005473739 podman[396225]: 2025-10-07 14:42:04.0971037 +0000 UTC m=+0.077569984 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:42:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2389: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 124 KiB/s rd, 872 KiB/s wr, 46 op/s
Oct  7 10:42:05 np0005473739 nova_compute[259550]: 2025-10-07 14:42:05.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.019 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.019 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:42:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2797654740' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:42:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2390: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.489 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.575 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.576 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.728 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.729 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3501MB free_disk=59.942710876464844GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.730 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.730 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.841 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.842 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.843 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:42:06 np0005473739 nova_compute[259550]: 2025-10-07 14:42:06.888 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:42:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2895425505' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:42:07 np0005473739 nova_compute[259550]: 2025-10-07 14:42:07.339 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:07 np0005473739 nova_compute[259550]: 2025-10-07 14:42:07.345 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:42:07 np0005473739 nova_compute[259550]: 2025-10-07 14:42:07.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:07 np0005473739 nova_compute[259550]: 2025-10-07 14:42:07.431 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:42:07 np0005473739 nova_compute[259550]: 2025-10-07 14:42:07.590 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:42:07 np0005473739 nova_compute[259550]: 2025-10-07 14:42:07.591 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.861s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:07 np0005473739 nova_compute[259550]: 2025-10-07 14:42:07.592 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:42:07 np0005473739 nova_compute[259550]: 2025-10-07 14:42:07.955 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2391: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 14 KiB/s wr, 28 op/s
Oct  7 10:42:08 np0005473739 nova_compute[259550]: 2025-10-07 14:42:08.840 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:09 np0005473739 nova_compute[259550]: 2025-10-07 14:42:09.612 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:42:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:10Z|01396|binding|INFO|Releasing lport eb0d4e5a-943a-459a-bddc-3f4beed43512 from this chassis (sb_readonly=0)
Oct  7 10:42:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2392: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Oct  7 10:42:10 np0005473739 nova_compute[259550]: 2025-10-07 14:42:10.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:10 np0005473739 nova_compute[259550]: 2025-10-07 14:42:10.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:42:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2393: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 4.2 KiB/s wr, 25 op/s
Oct  7 10:42:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:12.533 161642 DEBUG eventlet.wsgi.server [-] (161642) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct  7 10:42:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:12.535 161642 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0#015
Oct  7 10:42:12 np0005473739 ovn_metadata_agent[161531]: Accept: */*#015
Oct  7 10:42:12 np0005473739 ovn_metadata_agent[161531]: Connection: close#015
Oct  7 10:42:12 np0005473739 ovn_metadata_agent[161531]: Content-Type: text/plain#015
Oct  7 10:42:12 np0005473739 ovn_metadata_agent[161531]: Host: 169.254.169.254#015
Oct  7 10:42:12 np0005473739 ovn_metadata_agent[161531]: User-Agent: curl/7.84.0#015
Oct  7 10:42:12 np0005473739 ovn_metadata_agent[161531]: X-Forwarded-For: 10.100.0.14#015
Oct  7 10:42:12 np0005473739 ovn_metadata_agent[161531]: X-Ovn-Network-Id: 70c32d19-cc0d-46fc-a583-1be2bd26332c __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct  7 10:42:12 np0005473739 nova_compute[259550]: 2025-10-07 14:42:12.958 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:42:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.0 total, 600.0 interval#012Cumulative writes: 36K writes, 147K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.04 MB/s#012Cumulative WAL: 36K writes, 12K syncs, 2.82 writes per sync, written: 0.14 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4957 writes, 21K keys, 4957 commit groups, 1.0 writes per commit group, ingest: 24.44 MB, 0.04 MB/s#012Interval WAL: 4957 writes, 1908 syncs, 2.60 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:42:13 np0005473739 nova_compute[259550]: 2025-10-07 14:42:13.793 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848118.7921877, d0b10640-5492-4d8f-8b94-a49a15b6e702 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:42:13 np0005473739 nova_compute[259550]: 2025-10-07 14:42:13.793 2 INFO nova.compute.manager [-] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:42:13 np0005473739 nova_compute[259550]: 2025-10-07 14:42:13.842 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:13 np0005473739 nova_compute[259550]: 2025-10-07 14:42:13.940 2 DEBUG nova.compute.manager [None req-d4bcc752-126e-4887-947c-1b122386de4d - - - - - -] [instance: d0b10640-5492-4d8f-8b94-a49a15b6e702] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.000 161642 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct  7 10:42:14 np0005473739 haproxy-metadata-proxy-70c32d19-cc0d-46fc-a583-1be2bd26332c[395870]: 10.100.0.14:32962 [07/Oct/2025:14:42:12.532] listener listener/metadata 0/0/0/1468/1468 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.001 161642 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.4661798#033[00m
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.123 161642 DEBUG eventlet.wsgi.server [-] (161642) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.124 161642 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0#015
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: Accept: */*#015
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: Connection: close#015
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: Content-Length: 100#015
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: Content-Type: application/x-www-form-urlencoded#015
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: Host: 169.254.169.254#015
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: User-Agent: curl/7.84.0#015
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: X-Forwarded-For: 10.100.0.14#015
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: X-Ovn-Network-Id: 70c32d19-cc0d-46fc-a583-1be2bd26332c#015
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: #015
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Oct  7 10:42:14 np0005473739 nova_compute[259550]: 2025-10-07 14:42:14.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.407 161642 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Oct  7 10:42:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:14.407 161642 INFO eventlet.wsgi.server [-] 10.100.0.14,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2836077#033[00m
Oct  7 10:42:14 np0005473739 haproxy-metadata-proxy-70c32d19-cc0d-46fc-a583-1be2bd26332c[395870]: 10.100.0.14:32972 [07/Oct/2025:14:42:14.122] listener listener/metadata 0/0/0/285/285 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Oct  7 10:42:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2394: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 7.5 KiB/s rd, 2.8 KiB/s wr, 12 op/s
Oct  7 10:42:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2395: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 1023 B/s wr, 2 op/s
Oct  7 10:42:16 np0005473739 nova_compute[259550]: 2025-10-07 14:42:16.981 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:16 np0005473739 nova_compute[259550]: 2025-10-07 14:42:16.982 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:16 np0005473739 nova_compute[259550]: 2025-10-07 14:42:16.983 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:16 np0005473739 nova_compute[259550]: 2025-10-07 14:42:16.983 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:16 np0005473739 nova_compute[259550]: 2025-10-07 14:42:16.984 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:16 np0005473739 nova_compute[259550]: 2025-10-07 14:42:16.986 2 INFO nova.compute.manager [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Terminating instance#033[00m
Oct  7 10:42:16 np0005473739 nova_compute[259550]: 2025-10-07 14:42:16.988 2 DEBUG nova.compute.manager [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:42:17 np0005473739 kernel: tapfeac1006-75 (unregistering): left promiscuous mode
Oct  7 10:42:17 np0005473739 NetworkManager[44949]: <info>  [1759848137.0508] device (tapfeac1006-75): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:42:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:17Z|01397|binding|INFO|Releasing lport feac1006-7556-4dd6-9691-bc886a9410f3 from this chassis (sb_readonly=0)
Oct  7 10:42:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:17Z|01398|binding|INFO|Setting lport feac1006-7556-4dd6-9691-bc886a9410f3 down in Southbound
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:17Z|01399|binding|INFO|Removing iface tapfeac1006-75 ovn-installed in OVS
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.111 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.114 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:03:35:e2 10.100.0.14'], port_security=['fa:16:3e:03:35:e2 10.100.0.14'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28', 'neutron:device_id': '300a7ac9-5462-4db2-817f-07ea0b2d6aa6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '3b215e149c484ed3a0d2130b82d65f6f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '2290bbe4-d1ef-40a3-bca7-dc73e3a190e7 4672fac1-ae90-46b4-8fd3-3fd3255b2962', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.187'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=1d118d88-a678-4c5f-9b83-025d93ef3b4c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=feac1006-7556-4dd6-9691-bc886a9410f3) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.115 161536 INFO neutron.agent.ovn.metadata.agent [-] Port feac1006-7556-4dd6-9691-bc886a9410f3 in datapath 70c32d19-cc0d-46fc-a583-1be2bd26332c unbound from our chassis#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.116 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 70c32d19-cc0d-46fc-a583-1be2bd26332c, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:42:17 np0005473739 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Deactivated successfully.
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.117 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[06ac1956-6637-4ee7-9fbb-e5966fee2c96]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.118 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c namespace which is not needed anymore#033[00m
Oct  7 10:42:17 np0005473739 systemd[1]: machine-qemu\x2d159\x2dinstance\x2d0000007e.scope: Consumed 14.990s CPU time.
Oct  7 10:42:17 np0005473739 systemd-machined[214580]: Machine qemu-159-instance-0000007e terminated.
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.227 2 INFO nova.virt.libvirt.driver [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Instance destroyed successfully.#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.228 2 DEBUG nova.objects.instance [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lazy-loading 'resources' on Instance uuid 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:42:17 np0005473739 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [NOTICE]   (395853) : haproxy version is 2.8.14-c23fe91
Oct  7 10:42:17 np0005473739 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [NOTICE]   (395853) : path to executable is /usr/sbin/haproxy
Oct  7 10:42:17 np0005473739 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [WARNING]  (395853) : Exiting Master process...
Oct  7 10:42:17 np0005473739 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [ALERT]    (395853) : Current worker (395870) exited with code 143 (Terminated)
Oct  7 10:42:17 np0005473739 neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c[395849]: [WARNING]  (395853) : All workers exited. Exiting... (0)
Oct  7 10:42:17 np0005473739 systemd[1]: libpod-106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710.scope: Deactivated successfully.
Oct  7 10:42:17 np0005473739 podman[396333]: 2025-10-07 14:42:17.274724055 +0000 UTC m=+0.048898641 container died 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.293 2 DEBUG nova.virt.libvirt.vif [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:41:26Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1671371669',display_name='tempest-TestServerBasicOps-server-1671371669',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1671371669',id=126,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGvFnIqTqfHxE6+s78OqW40EeX1Gc32/mDkbAeqCywn5GmNeJRkLzGwn+a0yP/7G4uTQPEHQaiqW/W1UjNqDLgiC69VXo+2gdbXJjgI/CEgIH3VE1qmFfhNZh8qFmO8jUg==',key_name='tempest-TestServerBasicOps-1893482168',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:41:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='3b215e149c484ed3a0d2130b82d65f6f',ramdisk_id='',reservation_id='r-q3r9pyxl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-1198650773',owner_user_name='tempest-TestServerBasicOps-1198650773-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:42:14Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='512332529ac84b3f9a7bb7d98d8577ac',uuid=300a7ac9-5462-4db2-817f-07ea0b2d6aa6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": 
"fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.293 2 DEBUG nova.network.os_vif_util [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converting VIF {"id": "feac1006-7556-4dd6-9691-bc886a9410f3", "address": "fa:16:3e:03:35:e2", "network": {"id": "70c32d19-cc0d-46fc-a583-1be2bd26332c", "bridge": "br-int", "label": "tempest-TestServerBasicOps-488270039-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.187", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "3b215e149c484ed3a0d2130b82d65f6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfeac1006-75", "ovs_interfaceid": "feac1006-7556-4dd6-9691-bc886a9410f3", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.294 2 DEBUG nova.network.os_vif_util [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.294 2 DEBUG os_vif [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.296 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.296 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfeac1006-75, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.304 2 INFO os_vif [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:03:35:e2,bridge_name='br-int',has_traffic_filtering=True,id=feac1006-7556-4dd6-9691-bc886a9410f3,network=Network(70c32d19-cc0d-46fc-a583-1be2bd26332c),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfeac1006-75')#033[00m
Oct  7 10:42:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710-userdata-shm.mount: Deactivated successfully.
Oct  7 10:42:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8da373f4ff1907c25e28cd2a4fb9ff763074cfeaac1f7943c05c71d9f726084b-merged.mount: Deactivated successfully.
Oct  7 10:42:17 np0005473739 podman[396333]: 2025-10-07 14:42:17.328096413 +0000 UTC m=+0.102271009 container cleanup 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:42:17 np0005473739 systemd[1]: libpod-conmon-106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710.scope: Deactivated successfully.
Oct  7 10:42:17 np0005473739 podman[396390]: 2025-10-07 14:42:17.406178979 +0000 UTC m=+0.052628260 container remove 106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.411 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a95e3e25-d7b0-4723-bbe1-bbb878633aa2]: (4, ('Tue Oct  7 02:42:17 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c (106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710)\n106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710\nTue Oct  7 02:42:17 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c (106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710)\n106b6f3984b5c03be4f02fa741af42e0445756e9b7f38d7cfc4170b6839d0710\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.413 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[211b535f-f459-4363-8099-7ab29d711490]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.414 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap70c32d19-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.416 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:17 np0005473739 kernel: tap70c32d19-c0: left promiscuous mode
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.430 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.431 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2e53d13-3089-4fa5-86a1-4659072ff838]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.469 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[38e97fe1-be30-493c-ad57-27ef41bc713f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c9ae868-6964-42de-b223-06115a89f875]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.491 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4ffca0a-1345-44d4-87df-0e60f06cbffe]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 867984, 'reachable_time': 23033, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 396407, 'error': None, 'target': 'ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.493 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-70c32d19-cc0d-46fc-a583-1be2bd26332c deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:42:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:17.493 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a62a5ab5-5e25-4a0b-8f84-20a145733dfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:17 np0005473739 systemd[1]: run-netns-ovnmeta\x2d70c32d19\x2dcc0d\x2d46fc\x2da583\x2d1be2bd26332c.mount: Deactivated successfully.
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.787 2 INFO nova.virt.libvirt.driver [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Deleting instance files /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6_del#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.788 2 INFO nova.virt.libvirt.driver [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Deletion of /var/lib/nova/instances/300a7ac9-5462-4db2-817f-07ea0b2d6aa6_del complete#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.796 2 DEBUG nova.compute.manager [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-unplugged-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.796 2 DEBUG oslo_concurrency.lockutils [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.797 2 DEBUG oslo_concurrency.lockutils [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.798 2 DEBUG oslo_concurrency.lockutils [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.798 2 DEBUG nova.compute.manager [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] No waiting events found dispatching network-vif-unplugged-feac1006-7556-4dd6-9691-bc886a9410f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.798 2 DEBUG nova.compute.manager [req-098fee9d-62b0-4e1a-9134-2ac099fd8973 req-53bb1346-ff41-4b10-8524-febd1c15ad13 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-unplugged-feac1006-7556-4dd6-9691-bc886a9410f3 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:17 np0005473739 nova_compute[259550]: 2025-10-07 14:42:17.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:18 np0005473739 nova_compute[259550]: 2025-10-07 14:42:18.097 2 INFO nova.compute.manager [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Took 1.11 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:42:18 np0005473739 nova_compute[259550]: 2025-10-07 14:42:18.097 2 DEBUG oslo.service.loopingcall [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:42:18 np0005473739 nova_compute[259550]: 2025-10-07 14:42:18.098 2 DEBUG nova.compute.manager [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:42:18 np0005473739 nova_compute[259550]: 2025-10-07 14:42:18.098 2 DEBUG nova.network.neutron [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:42:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:42:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 42K writes, 162K keys, 42K commit groups, 1.0 writes per commit group, ingest: 0.16 GB, 0.04 MB/s#012Cumulative WAL: 42K writes, 15K syncs, 2.78 writes per sync, written: 0.16 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5937 writes, 24K keys, 5937 commit groups, 1.0 writes per commit group, ingest: 30.85 MB, 0.05 MB/s#012Interval WAL: 5937 writes, 2206 syncs, 2.69 writes per sync, written: 0.03 GB, 0.05 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:42:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2396: 305 pgs: 305 active+clean; 121 MiB data, 924 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 1023 B/s wr, 2 op/s
Oct  7 10:42:19 np0005473739 podman[396409]: 2025-10-07 14:42:19.066250355 +0000 UTC m=+0.054993664 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:42:19 np0005473739 podman[396410]: 2025-10-07 14:42:19.114020414 +0000 UTC m=+0.095987893 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller)
Oct  7 10:42:19 np0005473739 nova_compute[259550]: 2025-10-07 14:42:19.963 2 DEBUG nova.compute.manager [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:42:19 np0005473739 nova_compute[259550]: 2025-10-07 14:42:19.963 2 DEBUG oslo_concurrency.lockutils [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:19 np0005473739 nova_compute[259550]: 2025-10-07 14:42:19.963 2 DEBUG oslo_concurrency.lockutils [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:19 np0005473739 nova_compute[259550]: 2025-10-07 14:42:19.963 2 DEBUG oslo_concurrency.lockutils [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:19 np0005473739 nova_compute[259550]: 2025-10-07 14:42:19.964 2 DEBUG nova.compute.manager [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] No waiting events found dispatching network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:42:19 np0005473739 nova_compute[259550]: 2025-10-07 14:42:19.964 2 WARNING nova.compute.manager [req-b009c7f3-5f8f-448c-b839-66632e405636 req-94786c87-cea9-4527-abd3-68aa4373ec6f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received unexpected event network-vif-plugged-feac1006-7556-4dd6-9691-bc886a9410f3 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:42:20 np0005473739 nova_compute[259550]: 2025-10-07 14:42:20.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:20.201 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=42, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=41) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:42:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:20.202 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:42:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2397: 305 pgs: 305 active+clean; 77 MiB data, 901 MiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 5.5 KiB/s wr, 24 op/s
Oct  7 10:42:20 np0005473739 nova_compute[259550]: 2025-10-07 14:42:20.915 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:20 np0005473739 nova_compute[259550]: 2025-10-07 14:42:20.916 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.027 2 DEBUG nova.network.neutron [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.074 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.476 2 DEBUG nova.compute.manager [req-315b5268-5280-4f52-81b7-97d4dfc1d9c3 req-e80aee47-ac2a-4585-9f6d-1baa1656cd4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Received event network-vif-deleted-feac1006-7556-4dd6-9691-bc886a9410f3 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.477 2 INFO nova.compute.manager [req-315b5268-5280-4f52-81b7-97d4dfc1d9c3 req-e80aee47-ac2a-4585-9f6d-1baa1656cd4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Neutron deleted interface feac1006-7556-4dd6-9691-bc886a9410f3; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.477 2 DEBUG nova.network.neutron [req-315b5268-5280-4f52-81b7-97d4dfc1d9c3 req-e80aee47-ac2a-4585-9f6d-1baa1656cd4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.483 2 INFO nova.compute.manager [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Took 3.38 seconds to deallocate network for instance.#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.646 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.647 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.655 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.656 2 INFO nova.compute.claims [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.662 2 DEBUG nova.compute.manager [req-315b5268-5280-4f52-81b7-97d4dfc1d9c3 req-e80aee47-ac2a-4585-9f6d-1baa1656cd4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Detach interface failed, port_id=feac1006-7556-4dd6-9691-bc886a9410f3, reason: Instance 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:42:21 np0005473739 nova_compute[259550]: 2025-10-07 14:42:21.784 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:22 np0005473739 nova_compute[259550]: 2025-10-07 14:42:22.043 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:22 np0005473739 nova_compute[259550]: 2025-10-07 14:42:22.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:42:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4200.1 total, 600.0 interval#012Cumulative writes: 30K writes, 123K keys, 30K commit groups, 1.0 writes per commit group, ingest: 0.12 GB, 0.03 MB/s#012Cumulative WAL: 30K writes, 10K syncs, 2.84 writes per sync, written: 0.12 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5232 writes, 21K keys, 5232 commit groups, 1.0 writes per commit group, ingest: 25.78 MB, 0.04 MB/s#012Interval WAL: 5232 writes, 1997 syncs, 2.62 writes per sync, written: 0.03 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:42:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:42:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2032458394' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2398: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct  7 10:42:22 np0005473739 nova_compute[259550]: 2025-10-07 14:42:22.500 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:22 np0005473739 nova_compute[259550]: 2025-10-07 14:42:22.506 2 DEBUG nova.compute.provider_tree [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:42:22 np0005473739 nova_compute[259550]: 2025-10-07 14:42:22.587 2 DEBUG nova.scheduler.client.report [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:42:22
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'backups', 'default.rgw.control', 'cephfs.cephfs.data', 'default.rgw.meta', 'vms', '.rgw.root', 'cephfs.cephfs.meta', '.mgr', 'default.rgw.log', 'volumes']
Oct  7 10:42:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:42:22 np0005473739 nova_compute[259550]: 2025-10-07 14:42:22.963 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:42:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:42:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:42:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:42:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:42:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:42:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:42:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:42:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:42:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:42:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:23 np0005473739 nova_compute[259550]: 2025-10-07 14:42:23.261 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:23 np0005473739 nova_compute[259550]: 2025-10-07 14:42:23.263 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:42:23 np0005473739 nova_compute[259550]: 2025-10-07 14:42:23.265 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 1.481s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:23 np0005473739 nova_compute[259550]: 2025-10-07 14:42:23.336 2 DEBUG oslo_concurrency.processutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:23 np0005473739 nova_compute[259550]: 2025-10-07 14:42:23.448 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:42:23 np0005473739 nova_compute[259550]: 2025-10-07 14:42:23.449 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:42:23 np0005473739 nova_compute[259550]: 2025-10-07 14:42:23.596 2 INFO nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:42:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:42:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3688149553' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:42:23 np0005473739 nova_compute[259550]: 2025-10-07 14:42:23.767 2 DEBUG oslo_concurrency.processutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:23 np0005473739 nova_compute[259550]: 2025-10-07 14:42:23.775 2 DEBUG nova.compute.provider_tree [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:42:23 np0005473739 nova_compute[259550]: 2025-10-07 14:42:23.801 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:42:23 np0005473739 nova_compute[259550]: 2025-10-07 14:42:23.860 2 DEBUG nova.policy [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:42:24 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 10:42:24 np0005473739 nova_compute[259550]: 2025-10-07 14:42:24.104 2 DEBUG nova.scheduler.client.report [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:42:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:24.205 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '42'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2399: 305 pgs: 305 active+clean; 41 MiB data, 877 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 5.2 KiB/s wr, 30 op/s
Oct  7 10:42:24 np0005473739 nova_compute[259550]: 2025-10-07 14:42:24.589 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.324s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:24 np0005473739 nova_compute[259550]: 2025-10-07 14:42:24.723 2 INFO nova.scheduler.client.report [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Deleted allocations for instance 300a7ac9-5462-4db2-817f-07ea0b2d6aa6#033[00m
Oct  7 10:42:24 np0005473739 nova_compute[259550]: 2025-10-07 14:42:24.873 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:42:24 np0005473739 nova_compute[259550]: 2025-10-07 14:42:24.875 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:42:24 np0005473739 nova_compute[259550]: 2025-10-07 14:42:24.875 2 INFO nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Creating image(s)#033[00m
Oct  7 10:42:24 np0005473739 nova_compute[259550]: 2025-10-07 14:42:24.901 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:24 np0005473739 nova_compute[259550]: 2025-10-07 14:42:24.921 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:24 np0005473739 nova_compute[259550]: 2025-10-07 14:42:24.940 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:24 np0005473739 nova_compute[259550]: 2025-10-07 14:42:24.944 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.018 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.019 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.019 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.020 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.037 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.040 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 77918bef-8f72-4152-ac55-f4d4c98477ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.120 2 DEBUG oslo_concurrency.lockutils [None req-7c3af773-6681-47cf-99da-fe03045fb55f 512332529ac84b3f9a7bb7d98d8577ac 3b215e149c484ed3a0d2130b82d65f6f - - default default] Lock "300a7ac9-5462-4db2-817f-07ea0b2d6aa6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.123 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Successfully created port: c75fde3c-8461-4ed7-9c14-7f14f5794599 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.336 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 77918bef-8f72-4152-ac55-f4d4c98477ec_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.295s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.388 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.474 2 DEBUG nova.objects.instance [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 77918bef-8f72-4152-ac55-f4d4c98477ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.525 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.526 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Ensure instance console log exists: /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.526 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.527 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.527 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:25 np0005473739 nova_compute[259550]: 2025-10-07 14:42:25.546 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:26 np0005473739 nova_compute[259550]: 2025-10-07 14:42:26.163 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Successfully created port: dfe40ca6-700f-4101-8729-3d1ee103c5ea _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:42:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2400: 305 pgs: 305 active+clean; 62 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 746 KiB/s wr, 54 op/s
Oct  7 10:42:27 np0005473739 nova_compute[259550]: 2025-10-07 14:42:27.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:27 np0005473739 nova_compute[259550]: 2025-10-07 14:42:27.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:28 np0005473739 nova_compute[259550]: 2025-10-07 14:42:28.317 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Successfully updated port: c75fde3c-8461-4ed7-9c14-7f14f5794599 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:42:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2401: 305 pgs: 305 active+clean; 62 MiB data, 886 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 746 KiB/s wr, 52 op/s
Oct  7 10:42:28 np0005473739 nova_compute[259550]: 2025-10-07 14:42:28.633 2 DEBUG nova.compute.manager [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:42:28 np0005473739 nova_compute[259550]: 2025-10-07 14:42:28.634 2 DEBUG nova.compute.manager [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing instance network info cache due to event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:42:28 np0005473739 nova_compute[259550]: 2025-10-07 14:42:28.634 2 DEBUG oslo_concurrency.lockutils [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:42:28 np0005473739 nova_compute[259550]: 2025-10-07 14:42:28.635 2 DEBUG oslo_concurrency.lockutils [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:42:28 np0005473739 nova_compute[259550]: 2025-10-07 14:42:28.635 2 DEBUG nova.network.neutron [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing network info cache for port c75fde3c-8461-4ed7-9c14-7f14f5794599 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:42:29 np0005473739 nova_compute[259550]: 2025-10-07 14:42:29.141 2 DEBUG nova.network.neutron [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:42:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2402: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 1.8 MiB/s wr, 55 op/s
Oct  7 10:42:31 np0005473739 nova_compute[259550]: 2025-10-07 14:42:31.994 2 DEBUG nova.network.neutron [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.149 2 DEBUG oslo_concurrency.lockutils [req-8549bd28-55c8-420c-99db-f8372d38ac66 req-1560c9cc-1a42-4e34-8500-251f07b5c41a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.224 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848137.223097, 300a7ac9-5462-4db2-817f-07ea0b2d6aa6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.225 2 INFO nova.compute.manager [-] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.270 2 DEBUG nova.compute.manager [None req-82647f94-e50b-43af-948a-d851099614a6 - - - - - -] [instance: 300a7ac9-5462-4db2-817f-07ea0b2d6aa6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2403: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 33 op/s
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.637 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Successfully updated port: dfe40ca6-700f-4101-8729-3d1ee103c5ea _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.671 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.671 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.672 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0003459970412515465 of space, bias 1.0, pg target 0.10379911237546395 quantized to 32 (current 32)
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:42:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:42:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:42:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624504052' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:42:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:42:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624504052' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.724 2 DEBUG nova.compute.manager [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-changed-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.725 2 DEBUG nova.compute.manager [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing instance network info cache due to event network-changed-dfe40ca6-700f-4101-8729-3d1ee103c5ea. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.725 2 DEBUG oslo_concurrency.lockutils [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:42:32 np0005473739 nova_compute[259550]: 2025-10-07 14:42:32.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:33 np0005473739 nova_compute[259550]: 2025-10-07 14:42:33.055 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:42:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2404: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:42:35 np0005473739 podman[396664]: 2025-10-07 14:42:35.073382799 +0000 UTC m=+0.063088918 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 10:42:35 np0005473739 podman[396665]: 2025-10-07 14:42:35.075982278 +0000 UTC m=+0.056694437 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 10:42:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2405: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:42:37 np0005473739 nova_compute[259550]: 2025-10-07 14:42:37.329 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:42:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c6407494-f471-44f5-abe7-924bdd1c36d4 does not exist
Oct  7 10:42:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cc21e67f-afd9-4770-9e75-788bac4c7f59 does not exist
Oct  7 10:42:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 491cbff7-fe55-42a4-a792-ad6a4f6c8f1a does not exist
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:42:37 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:42:37 np0005473739 podman[396974]: 2025-10-07 14:42:37.9608702 +0000 UTC m=+0.038297409 container create 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 10:42:37 np0005473739 nova_compute[259550]: 2025-10-07 14:42:37.965 2 DEBUG nova.network.neutron [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:42:37 np0005473739 nova_compute[259550]: 2025-10-07 14:42:37.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:38 np0005473739 systemd[1]: Started libpod-conmon-92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e.scope.
Oct  7 10:42:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:42:38 np0005473739 podman[396974]: 2025-10-07 14:42:37.943724954 +0000 UTC m=+0.021152093 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.044 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.044 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance network_info: |[{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.044 2 DEBUG oslo_concurrency.lockutils [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.045 2 DEBUG nova.network.neutron [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing network info cache for port dfe40ca6-700f-4101-8729-3d1ee103c5ea _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.048 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Start _get_guest_xml network_info=[{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.056 2 WARNING nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:42:38 np0005473739 podman[396974]: 2025-10-07 14:42:38.062794099 +0000 UTC m=+0.140221238 container init 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.065 2 DEBUG nova.virt.libvirt.host [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.066 2 DEBUG nova.virt.libvirt.host [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.069 2 DEBUG nova.virt.libvirt.host [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.069 2 DEBUG nova.virt.libvirt.host [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.070 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.070 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.070 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.071 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.071 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:42:38 np0005473739 podman[396974]: 2025-10-07 14:42:38.071449479 +0000 UTC m=+0.148876598 container start 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.071 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.071 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.071 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.072 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.072 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.072 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.072 2 DEBUG nova.virt.hardware [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:42:38 np0005473739 podman[396974]: 2025-10-07 14:42:38.074686306 +0000 UTC m=+0.152113455 container attach 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.075 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:38 np0005473739 condescending_wing[396990]: 167 167
Oct  7 10:42:38 np0005473739 systemd[1]: libpod-92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e.scope: Deactivated successfully.
Oct  7 10:42:38 np0005473739 podman[396974]: 2025-10-07 14:42:38.078345912 +0000 UTC m=+0.155773051 container died 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:42:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f52cc011f3d1375c44dbf797c1e059d7dc1cc8937c6c3c7ba3f215f1c56a086e-merged.mount: Deactivated successfully.
Oct  7 10:42:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:38 np0005473739 podman[396974]: 2025-10-07 14:42:38.114525504 +0000 UTC m=+0.191952623 container remove 92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:42:38 np0005473739 systemd[1]: libpod-conmon-92288936e18a57dd3bdc9975175ac140dd142d4626ffee70b70d4b69e422663e.scope: Deactivated successfully.
Oct  7 10:42:38 np0005473739 podman[397034]: 2025-10-07 14:42:38.27164371 +0000 UTC m=+0.040698263 container create f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:42:38 np0005473739 systemd[1]: Started libpod-conmon-f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38.scope.
Oct  7 10:42:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:42:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:38 np0005473739 podman[397034]: 2025-10-07 14:42:38.251412243 +0000 UTC m=+0.020466796 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:42:38 np0005473739 podman[397034]: 2025-10-07 14:42:38.35402914 +0000 UTC m=+0.123083683 container init f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:42:38 np0005473739 podman[397034]: 2025-10-07 14:42:38.362997719 +0000 UTC m=+0.132052252 container start f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:42:38 np0005473739 podman[397034]: 2025-10-07 14:42:38.366357818 +0000 UTC m=+0.135412381 container attach f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:42:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2406: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 1.0 MiB/s wr, 3 op/s
Oct  7 10:42:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:42:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1623183904' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.524 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.549 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:38 np0005473739 nova_compute[259550]: 2025-10-07 14:42:38.555 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:42:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1234686973' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.023 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.026 2 DEBUG nova.virt.libvirt.vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.026 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.027 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.028 2 DEBUG nova.virt.libvirt.vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.028 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.028 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.029 2 DEBUG nova.objects.instance [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 77918bef-8f72-4152-ac55-f4d4c98477ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.059 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  <uuid>77918bef-8f72-4152-ac55-f4d4c98477ec</uuid>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  <name>instance-0000007f</name>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-649311904</nova:name>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:42:38</nova:creationTime>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <nova:port uuid="c75fde3c-8461-4ed7-9c14-7f14f5794599">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <nova:port uuid="dfe40ca6-700f-4101-8729-3d1ee103c5ea">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:feb4:c556" ipVersion="6"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <entry name="serial">77918bef-8f72-4152-ac55-f4d4c98477ec</entry>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <entry name="uuid">77918bef-8f72-4152-ac55-f4d4c98477ec</entry>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/77918bef-8f72-4152-ac55-f4d4c98477ec_disk">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:41:99:4d"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <target dev="tapc75fde3c-84"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:b4:c5:56"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <target dev="tapdfe40ca6-70"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/console.log" append="off"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:42:39 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:42:39 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:42:39 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:42:39 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.059 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Preparing to wait for external event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.059 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.060 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.060 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.060 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Preparing to wait for external event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.060 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.061 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.061 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.062 2 DEBUG nova.virt.libvirt.vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.062 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.063 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.063 2 DEBUG os_vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.064 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.065 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.070 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc75fde3c-84, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.070 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapc75fde3c-84, col_values=(('external_ids', {'iface-id': 'c75fde3c-8461-4ed7-9c14-7f14f5794599', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:41:99:4d', 'vm-uuid': '77918bef-8f72-4152-ac55-f4d4c98477ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:39 np0005473739 NetworkManager[44949]: <info>  [1759848159.1217] manager: (tapc75fde3c-84): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/555)
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.127 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.128 2 INFO os_vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84')#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.129 2 DEBUG nova.virt.libvirt.vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:24Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.129 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.130 2 DEBUG nova.network.os_vif_util [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.130 2 DEBUG os_vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.131 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.133 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapdfe40ca6-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.134 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapdfe40ca6-70, col_values=(('external_ids', {'iface-id': 'dfe40ca6-700f-4101-8729-3d1ee103c5ea', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b4:c5:56', 'vm-uuid': '77918bef-8f72-4152-ac55-f4d4c98477ec'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:39 np0005473739 NetworkManager[44949]: <info>  [1759848159.1358] manager: (tapdfe40ca6-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/556)
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.145 2 INFO os_vif [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70')#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.346 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.347 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.347 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:41:99:4d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.347 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:b4:c5:56, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.347 2 INFO nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Using config drive#033[00m
Oct  7 10:42:39 np0005473739 nova_compute[259550]: 2025-10-07 14:42:39.369 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:39 np0005473739 youthful_banach[397051]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:42:39 np0005473739 youthful_banach[397051]: --> relative data size: 1.0
Oct  7 10:42:39 np0005473739 youthful_banach[397051]: --> All data devices are unavailable
Oct  7 10:42:39 np0005473739 systemd[1]: libpod-f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38.scope: Deactivated successfully.
Oct  7 10:42:39 np0005473739 podman[397145]: 2025-10-07 14:42:39.443136799 +0000 UTC m=+0.023360962 container died f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:42:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1609536243f5907642ea0fbaf5393fd0c4eb692897cea7686dd213659b8af1a9-merged.mount: Deactivated successfully.
Oct  7 10:42:39 np0005473739 podman[397145]: 2025-10-07 14:42:39.496014285 +0000 UTC m=+0.076238428 container remove f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_banach, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 10:42:39 np0005473739 systemd[1]: libpod-conmon-f5e13393f8b0e2e26cf841ca8b59fea7d93b6c5927f926006a125b71493b8c38.scope: Deactivated successfully.
Oct  7 10:42:40 np0005473739 podman[397300]: 2025-10-07 14:42:40.100274586 +0000 UTC m=+0.036666036 container create e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 10:42:40 np0005473739 systemd[1]: Started libpod-conmon-e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287.scope.
Oct  7 10:42:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:42:40 np0005473739 podman[397300]: 2025-10-07 14:42:40.176221785 +0000 UTC m=+0.112613255 container init e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 10:42:40 np0005473739 podman[397300]: 2025-10-07 14:42:40.085827172 +0000 UTC m=+0.022218642 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:42:40 np0005473739 podman[397300]: 2025-10-07 14:42:40.183581941 +0000 UTC m=+0.119973391 container start e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 10:42:40 np0005473739 podman[397300]: 2025-10-07 14:42:40.186895748 +0000 UTC m=+0.123287198 container attach e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:42:40 np0005473739 nervous_ride[397316]: 167 167
Oct  7 10:42:40 np0005473739 systemd[1]: libpod-e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287.scope: Deactivated successfully.
Oct  7 10:42:40 np0005473739 podman[397300]: 2025-10-07 14:42:40.188676956 +0000 UTC m=+0.125068406 container died e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:42:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-63888330e78b18e36c98a32ab4360bd59558bb10a031344ab59a0136d8573f8d-merged.mount: Deactivated successfully.
Oct  7 10:42:40 np0005473739 podman[397300]: 2025-10-07 14:42:40.226194723 +0000 UTC m=+0.162586163 container remove e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_ride, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:42:40 np0005473739 systemd[1]: libpod-conmon-e6e634be7376922d52e4a745f6390fe29284bfd5de166c57081deb972cbb7287.scope: Deactivated successfully.
Oct  7 10:42:40 np0005473739 nova_compute[259550]: 2025-10-07 14:42:40.311 2 INFO nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Creating config drive at /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config#033[00m
Oct  7 10:42:40 np0005473739 nova_compute[259550]: 2025-10-07 14:42:40.315 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpswympl1y execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:40 np0005473739 podman[397340]: 2025-10-07 14:42:40.42021147 +0000 UTC m=+0.053562994 container create 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:42:40 np0005473739 systemd[1]: Started libpod-conmon-5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b.scope.
Oct  7 10:42:40 np0005473739 nova_compute[259550]: 2025-10-07 14:42:40.472 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpswympl1y" returned: 0 in 0.156s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:42:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7218345ed887c4ee5ff2fd1906f9fb689fd9f8e938ab1a22d9eb411dda401dd3/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7218345ed887c4ee5ff2fd1906f9fb689fd9f8e938ab1a22d9eb411dda401dd3/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7218345ed887c4ee5ff2fd1906f9fb689fd9f8e938ab1a22d9eb411dda401dd3/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7218345ed887c4ee5ff2fd1906f9fb689fd9f8e938ab1a22d9eb411dda401dd3/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:40 np0005473739 podman[397340]: 2025-10-07 14:42:40.398065451 +0000 UTC m=+0.031416995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:42:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2407: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1023 B/s rd, 1.0 MiB/s wr, 3 op/s
Oct  7 10:42:40 np0005473739 nova_compute[259550]: 2025-10-07 14:42:40.504 2 DEBUG nova.storage.rbd_utils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:40 np0005473739 podman[397340]: 2025-10-07 14:42:40.507167101 +0000 UTC m=+0.140518655 container init 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 10:42:40 np0005473739 nova_compute[259550]: 2025-10-07 14:42:40.510 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config 77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:40 np0005473739 podman[397340]: 2025-10-07 14:42:40.514002363 +0000 UTC m=+0.147353887 container start 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:42:40 np0005473739 podman[397340]: 2025-10-07 14:42:40.517257759 +0000 UTC m=+0.150609313 container attach 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:42:40 np0005473739 nova_compute[259550]: 2025-10-07 14:42:40.644 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:40 np0005473739 nova_compute[259550]: 2025-10-07 14:42:40.711 2 DEBUG oslo_concurrency.processutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config 77918bef-8f72-4152-ac55-f4d4c98477ec_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:40 np0005473739 nova_compute[259550]: 2025-10-07 14:42:40.712 2 INFO nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Deleting local config drive /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec/disk.config because it was imported into RBD.#033[00m
Oct  7 10:42:40 np0005473739 NetworkManager[44949]: <info>  [1759848160.7867] manager: (tapc75fde3c-84): new Tun device (/org/freedesktop/NetworkManager/Devices/557)
Oct  7 10:42:40 np0005473739 NetworkManager[44949]: <info>  [1759848160.8131] manager: (tapdfe40ca6-70): new Tun device (/org/freedesktop/NetworkManager/Devices/558)
Oct  7 10:42:40 np0005473739 systemd-udevd[397411]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:42:40 np0005473739 systemd-udevd[397412]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:42:40 np0005473739 systemd-machined[214580]: New machine qemu-160-instance-0000007f.
Oct  7 10:42:40 np0005473739 kernel: tapdfe40ca6-70: entered promiscuous mode
Oct  7 10:42:40 np0005473739 NetworkManager[44949]: <info>  [1759848160.8603] device (tapdfe40ca6-70): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:42:40 np0005473739 kernel: tapc75fde3c-84: entered promiscuous mode
Oct  7 10:42:40 np0005473739 nova_compute[259550]: 2025-10-07 14:42:40.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:40 np0005473739 NetworkManager[44949]: <info>  [1759848160.8654] device (tapc75fde3c-84): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:42:40 np0005473739 NetworkManager[44949]: <info>  [1759848160.8675] device (tapdfe40ca6-70): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:42:40 np0005473739 NetworkManager[44949]: <info>  [1759848160.8687] device (tapc75fde3c-84): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:42:40 np0005473739 systemd[1]: Started Virtual Machine qemu-160-instance-0000007f.
Oct  7 10:42:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:40Z|01400|binding|INFO|Claiming lport c75fde3c-8461-4ed7-9c14-7f14f5794599 for this chassis.
Oct  7 10:42:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:40Z|01401|binding|INFO|c75fde3c-8461-4ed7-9c14-7f14f5794599: Claiming fa:16:3e:41:99:4d 10.100.0.4
Oct  7 10:42:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:40Z|01402|binding|INFO|Claiming lport dfe40ca6-700f-4101-8729-3d1ee103c5ea for this chassis.
Oct  7 10:42:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:40Z|01403|binding|INFO|dfe40ca6-700f-4101-8729-3d1ee103c5ea: Claiming fa:16:3e:b4:c5:56 2001:db8::f816:3eff:feb4:c556
Oct  7 10:42:40 np0005473739 nova_compute[259550]: 2025-10-07 14:42:40.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:40Z|01404|binding|INFO|Setting lport c75fde3c-8461-4ed7-9c14-7f14f5794599 ovn-installed in OVS
Oct  7 10:42:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:40Z|01405|binding|INFO|Setting lport dfe40ca6-700f-4101-8729-3d1ee103c5ea ovn-installed in OVS
Oct  7 10:42:40 np0005473739 nova_compute[259550]: 2025-10-07 14:42:40.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:40Z|01406|binding|INFO|Setting lport c75fde3c-8461-4ed7-9c14-7f14f5794599 up in Southbound
Oct  7 10:42:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:40Z|01407|binding|INFO|Setting lport dfe40ca6-700f-4101-8729-3d1ee103c5ea up in Southbound
Oct  7 10:42:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:40.989 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:99:4d 10.100.0.4'], port_security=['fa:16:3e:41:99:4d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '77918bef-8f72-4152-ac55-f4d4c98477ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262bb8b3-c881-4ed3-8240-c435878fb605, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c75fde3c-8461-4ed7-9c14-7f14f5794599) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:42:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:40.990 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:c5:56 2001:db8::f816:3eff:feb4:c556'], port_security=['fa:16:3e:b4:c5:56 2001:db8::f816:3eff:feb4:c556'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb4:c556/64', 'neutron:device_id': '77918bef-8f72-4152-ac55-f4d4c98477ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36c3782d-89d5-4d69-ae40-86969f172913, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=dfe40ca6-700f-4101-8729-3d1ee103c5ea) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:42:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:40.991 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c75fde3c-8461-4ed7-9c14-7f14f5794599 in datapath bb059ee7-3091-491e-8da2-c9bd1da0f922 bound to our chassis#033[00m
Oct  7 10:42:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:40.992 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb059ee7-3091-491e-8da2-c9bd1da0f922#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.004 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d3dbb756-c930-4760-8995-d5ce64926df4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.004 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapbb059ee7-31 in ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.006 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapbb059ee7-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.006 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[56c42539-a8a6-486a-89ca-5e57355fc671]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.007 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6cb9cee6-8f55-4387-85d9-ac9e3a7e3bbd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.018 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a8f79e-1435-452f-b11c-06f6d8e7df8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.040 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[89c8372d-7a18-48b2-a4d3-d181a727b429]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.067 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[af4c3791-aa4f-4573-b0be-c1e7997c6aca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.074 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fccc7af3-d26e-4dee-adf0-04ad546d7bda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 NetworkManager[44949]: <info>  [1759848161.0755] manager: (tapbb059ee7-30): new Veth device (/org/freedesktop/NetworkManager/Devices/559)
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.106 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[57782a0d-d7ac-497e-83bf-50570beb8234]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.108 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2c74eaa4-f514-4c0b-ade8-b49f09855cf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 NetworkManager[44949]: <info>  [1759848161.1315] device (tapbb059ee7-30): carrier: link connected
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.136 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[aea42d31-9407-4e3b-a3d4-66e17df74f12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.154 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dfbd8ebb-3893-46a3-9e8c-d8be1d6f894d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb059ee7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:3b:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874469, 'reachable_time': 30371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397450, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.170 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c854e420-85f2-465d-a4b6-9351d1e8af24]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe59:3bd9'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874469, 'tstamp': 874469}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397451, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.187 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[12a41738-11d6-403d-8ca2-b966c24017b0]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb059ee7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:3b:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874469, 'reachable_time': 30371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 397452, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.224 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[93bdc264-e0b0-4e18-ae7f-b5f4355cffa9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.283 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[390691e1-7075-4540-95e5-314d99144cdf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.285 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb059ee7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.285 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.286 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb059ee7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:41 np0005473739 nova_compute[259550]: 2025-10-07 14:42:41.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:41 np0005473739 kernel: tapbb059ee7-30: entered promiscuous mode
Oct  7 10:42:41 np0005473739 NetworkManager[44949]: <info>  [1759848161.3328] manager: (tapbb059ee7-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/560)
Oct  7 10:42:41 np0005473739 nova_compute[259550]: 2025-10-07 14:42:41.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.333 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb059ee7-30, col_values=(('external_ids', {'iface-id': '770f7899-3ca8-4bf3-9f06-6b8c25522fc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:41Z|01408|binding|INFO|Releasing lport 770f7899-3ca8-4bf3-9f06-6b8c25522fc3 from this chassis (sb_readonly=0)
Oct  7 10:42:41 np0005473739 nova_compute[259550]: 2025-10-07 14:42:41.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.350 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/bb059ee7-3091-491e-8da2-c9bd1da0f922.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/bb059ee7-3091-491e-8da2-c9bd1da0f922.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.351 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b6920d02-f04b-49f6-a62a-77042591cc43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.352 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-bb059ee7-3091-491e-8da2-c9bd1da0f922
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/bb059ee7-3091-491e-8da2-c9bd1da0f922.pid.haproxy
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID bb059ee7-3091-491e-8da2-c9bd1da0f922
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.353 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'env', 'PROCESS_TAG=haproxy-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/bb059ee7-3091-491e-8da2-c9bd1da0f922.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]: {
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:    "0": [
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:        {
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "devices": [
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "/dev/loop3"
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            ],
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_name": "ceph_lv0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_size": "21470642176",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "name": "ceph_lv0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "tags": {
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.cluster_name": "ceph",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.crush_device_class": "",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.encrypted": "0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.osd_id": "0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.type": "block",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.vdo": "0"
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            },
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "type": "block",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "vg_name": "ceph_vg0"
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:        }
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:    ],
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:    "1": [
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:        {
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "devices": [
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "/dev/loop4"
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            ],
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_name": "ceph_lv1",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_size": "21470642176",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "name": "ceph_lv1",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "tags": {
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.cluster_name": "ceph",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.crush_device_class": "",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.encrypted": "0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.osd_id": "1",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.type": "block",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.vdo": "0"
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            },
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "type": "block",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "vg_name": "ceph_vg1"
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:        }
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:    ],
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:    "2": [
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:        {
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "devices": [
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "/dev/loop5"
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            ],
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_name": "ceph_lv2",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_size": "21470642176",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "name": "ceph_lv2",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "tags": {
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.cluster_name": "ceph",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.crush_device_class": "",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.encrypted": "0",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.osd_id": "2",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.type": "block",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:                "ceph.vdo": "0"
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            },
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "type": "block",
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:            "vg_name": "ceph_vg2"
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:        }
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]:    ]
Oct  7 10:42:41 np0005473739 gifted_meninsky[397358]: }
Oct  7 10:42:41 np0005473739 nova_compute[259550]: 2025-10-07 14:42:41.421 2 DEBUG nova.network.neutron [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updated VIF entry in instance network info cache for port dfe40ca6-700f-4101-8729-3d1ee103c5ea. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:42:41 np0005473739 nova_compute[259550]: 2025-10-07 14:42:41.421 2 DEBUG nova.network.neutron [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:42:41 np0005473739 systemd[1]: libpod-5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b.scope: Deactivated successfully.
Oct  7 10:42:41 np0005473739 podman[397340]: 2025-10-07 14:42:41.45366263 +0000 UTC m=+1.087014154 container died 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:42:41 np0005473739 nova_compute[259550]: 2025-10-07 14:42:41.455 2 DEBUG oslo_concurrency.lockutils [req-b199f553-9872-4c1d-90bd-446c399d768e req-b9854a6f-0c54-4dd8-b838-ec520b5d22cb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:42:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7218345ed887c4ee5ff2fd1906f9fb689fd9f8e938ab1a22d9eb411dda401dd3-merged.mount: Deactivated successfully.
Oct  7 10:42:41 np0005473739 podman[397340]: 2025-10-07 14:42:41.507608824 +0000 UTC m=+1.140960348 container remove 5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_meninsky, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:42:41 np0005473739 systemd[1]: libpod-conmon-5788c0e0651eef109486e689a7ced7f0cb51d0f5388ef17d2b9768494ee06a2b.scope: Deactivated successfully.
Oct  7 10:42:41 np0005473739 podman[397590]: 2025-10-07 14:42:41.739422195 +0000 UTC m=+0.048380696 container create be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:42:41 np0005473739 systemd[1]: Started libpod-conmon-be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d.scope.
Oct  7 10:42:41 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:42:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aade3835522dc3103f13522e66cfc9c234f96ccd29b80c7bd03443b1f8c3128d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:41 np0005473739 podman[397590]: 2025-10-07 14:42:41.713118266 +0000 UTC m=+0.022076767 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:42:41 np0005473739 podman[397590]: 2025-10-07 14:42:41.81372754 +0000 UTC m=+0.122686071 container init be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:42:41 np0005473739 podman[397590]: 2025-10-07 14:42:41.819197886 +0000 UTC m=+0.128156387 container start be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 10:42:41 np0005473739 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [NOTICE]   (397660) : New worker (397662) forked
Oct  7 10:42:41 np0005473739 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [NOTICE]   (397660) : Loading success.
Oct  7 10:42:41 np0005473739 nova_compute[259550]: 2025-10-07 14:42:41.872 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848161.872121, 77918bef-8f72-4152-ac55-f4d4c98477ec => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:42:41 np0005473739 nova_compute[259550]: 2025-10-07 14:42:41.873 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] VM Started (Lifecycle Event)#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.890 161536 INFO neutron.agent.ovn.metadata.agent [-] Port dfe40ca6-700f-4101-8729-3d1ee103c5ea in datapath e6e769bc-2b33-4210-8062-fbc8d16f9127 unbound from our chassis#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.892 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e6e769bc-2b33-4210-8062-fbc8d16f9127#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.905 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b7beac18-84d9-467c-8a48-960f970ae49b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.906 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tape6e769bc-21 in ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.908 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tape6e769bc-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.908 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[414498b8-dcc4-45a5-a9cf-d33b5d75904b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.910 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3fb50d-5105-4989-a263-5d76460ce24b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.924 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b16feba9-5156-452b-8964-6de5f1b4ed24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.936 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4e81cf62-61a2-48f9-bf81-882a3006ed2f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.964 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2c5d4c27-f837-484c-89db-d9498dbd75d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:41.970 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3566648b-e272-42bf-b864-73ef6d7b3dad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:41 np0005473739 NetworkManager[44949]: <info>  [1759848161.9718] manager: (tape6e769bc-20): new Veth device (/org/freedesktop/NetworkManager/Devices/561)
Oct  7 10:42:41 np0005473739 systemd-udevd[397437]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.001 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7ef4014a-c60f-498e-ad19-80c2885f4c75]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.004 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[578fb7f2-63e8-4235-9d55-ae678c362058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:42 np0005473739 nova_compute[259550]: 2025-10-07 14:42:42.018 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:42:42 np0005473739 nova_compute[259550]: 2025-10-07 14:42:42.022 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848161.8722425, 77918bef-8f72-4152-ac55-f4d4c98477ec => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:42:42 np0005473739 nova_compute[259550]: 2025-10-07 14:42:42.023 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:42:42 np0005473739 NetworkManager[44949]: <info>  [1759848162.0322] device (tape6e769bc-20): carrier: link connected
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.037 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5c33574e-b314-457c-9be4-23d87f063b48]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.055 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[174e3ffc-bd0e-4947-b75f-ba1263fc2a0c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6e769bc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ce:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874559, 'reachable_time': 36097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 397719, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.075 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[907bbd52-e615-40bd-bd60-932696f980d5]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe70:ce9b'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874559, 'tstamp': 874559}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 397722, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.091 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c105ec6-f59b-420e-8a0b-788fa0b48450]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6e769bc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ce:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874559, 'reachable_time': 36097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 397725, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:42 np0005473739 podman[397723]: 2025-10-07 14:42:42.121487391 +0000 UTC m=+0.036046550 container create 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.121 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4cb43d3c-877c-4383-a047-f22d4060f989]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.153 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69f72715-d162-46c2-8b09-4617bd4491f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.155 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6e769bc-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.155 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.155 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6e769bc-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:42 np0005473739 systemd[1]: Started libpod-conmon-822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc.scope.
Oct  7 10:42:42 np0005473739 NetworkManager[44949]: <info>  [1759848162.1578] manager: (tape6e769bc-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/562)
Oct  7 10:42:42 np0005473739 nova_compute[259550]: 2025-10-07 14:42:42.157 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:42 np0005473739 kernel: tape6e769bc-20: entered promiscuous mode
Oct  7 10:42:42 np0005473739 nova_compute[259550]: 2025-10-07 14:42:42.160 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.161 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape6e769bc-20, col_values=(('external_ids', {'iface-id': '21cca283-5f07-4e28-8ee2-a58e7356e156'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:42Z|01409|binding|INFO|Releasing lport 21cca283-5f07-4e28-8ee2-a58e7356e156 from this chassis (sb_readonly=0)
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.164 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/e6e769bc-2b33-4210-8062-fbc8d16f9127.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/e6e769bc-2b33-4210-8062-fbc8d16f9127.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:42:42 np0005473739 nova_compute[259550]: 2025-10-07 14:42:42.162 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.165 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[068b878d-1f95-4878-a686-1329f0890c51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.165 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-e6e769bc-2b33-4210-8062-fbc8d16f9127
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/e6e769bc-2b33-4210-8062-fbc8d16f9127.pid.haproxy
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID e6e769bc-2b33-4210-8062-fbc8d16f9127
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:42:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:42.166 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'env', 'PROCESS_TAG=haproxy-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/e6e769bc-2b33-4210-8062-fbc8d16f9127.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:42:42 np0005473739 nova_compute[259550]: 2025-10-07 14:42:42.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:42 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:42:42 np0005473739 podman[397723]: 2025-10-07 14:42:42.106341708 +0000 UTC m=+0.020900867 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:42:42 np0005473739 podman[397723]: 2025-10-07 14:42:42.214907384 +0000 UTC m=+0.129466543 container init 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:42:42 np0005473739 podman[397723]: 2025-10-07 14:42:42.223750499 +0000 UTC m=+0.138309638 container start 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:42:42 np0005473739 podman[397723]: 2025-10-07 14:42:42.227217311 +0000 UTC m=+0.141776470 container attach 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 10:42:42 np0005473739 keen_kepler[397744]: 167 167
Oct  7 10:42:42 np0005473739 systemd[1]: libpod-822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc.scope: Deactivated successfully.
Oct  7 10:42:42 np0005473739 conmon[397744]: conmon 822ed29485fcf00478ab <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc.scope/container/memory.events
Oct  7 10:42:42 np0005473739 podman[397723]: 2025-10-07 14:42:42.232023319 +0000 UTC m=+0.146582458 container died 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 10:42:42 np0005473739 nova_compute[259550]: 2025-10-07 14:42:42.245 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:42:42 np0005473739 nova_compute[259550]: 2025-10-07 14:42:42.248 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:42:42 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cf47402c6d7a6b56f542f30045e449afb376d866fcd9ecd99a267203d8d19875-merged.mount: Deactivated successfully.
Oct  7 10:42:42 np0005473739 podman[397723]: 2025-10-07 14:42:42.26819677 +0000 UTC m=+0.182755909 container remove 822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_kepler, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:42:42 np0005473739 systemd[1]: libpod-conmon-822ed29485fcf00478abd3b619a36f8bd38bd5d3032a6c068261fbbe54aedffc.scope: Deactivated successfully.
Oct  7 10:42:42 np0005473739 podman[397774]: 2025-10-07 14:42:42.470393864 +0000 UTC m=+0.053023610 container create 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:42:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2408: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 2.1 KiB/s rd, 170 B/s wr, 2 op/s
Oct  7 10:42:42 np0005473739 systemd[1]: Started libpod-conmon-9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b.scope.
Oct  7 10:42:42 np0005473739 nova_compute[259550]: 2025-10-07 14:42:42.523 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:42:42 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:42:42 np0005473739 podman[397804]: 2025-10-07 14:42:42.538788923 +0000 UTC m=+0.054796568 container create 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:42:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da13c56bcffd84141303230b59ba381972a80db1b722dc37b92ecb830a647ac/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:42 np0005473739 podman[397774]: 2025-10-07 14:42:42.448193584 +0000 UTC m=+0.030823390 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:42:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da13c56bcffd84141303230b59ba381972a80db1b722dc37b92ecb830a647ac/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da13c56bcffd84141303230b59ba381972a80db1b722dc37b92ecb830a647ac/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7da13c56bcffd84141303230b59ba381972a80db1b722dc37b92ecb830a647ac/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:42 np0005473739 podman[397774]: 2025-10-07 14:42:42.565278687 +0000 UTC m=+0.147908453 container init 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:42:42 np0005473739 podman[397774]: 2025-10-07 14:42:42.573420773 +0000 UTC m=+0.156050529 container start 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:42:42 np0005473739 podman[397774]: 2025-10-07 14:42:42.576986958 +0000 UTC m=+0.159616734 container attach 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:42:42 np0005473739 systemd[1]: Started libpod-conmon-6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109.scope.
Oct  7 10:42:42 np0005473739 podman[397804]: 2025-10-07 14:42:42.509838093 +0000 UTC m=+0.025845768 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:42:42 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:42:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/975fa376099815760edd3ac270db67c4e9de90d314e2f36412b98c8c8fb0e428/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:42 np0005473739 podman[397804]: 2025-10-07 14:42:42.626917465 +0000 UTC m=+0.142925130 container init 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true)
Oct  7 10:42:42 np0005473739 podman[397804]: 2025-10-07 14:42:42.632003741 +0000 UTC m=+0.148011386 container start 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Oct  7 10:42:42 np0005473739 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [NOTICE]   (397831) : New worker (397833) forked
Oct  7 10:42:42 np0005473739 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [NOTICE]   (397831) : Loading success.
Oct  7 10:42:42 np0005473739 nova_compute[259550]: 2025-10-07 14:42:42.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]: {
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "osd_id": 2,
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "type": "bluestore"
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:    },
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "osd_id": 1,
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "type": "bluestore"
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:    },
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "osd_id": 0,
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:        "type": "bluestore"
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]:    }
Oct  7 10:42:43 np0005473739 agitated_grothendieck[397820]: }
Oct  7 10:42:43 np0005473739 systemd[1]: libpod-9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b.scope: Deactivated successfully.
Oct  7 10:42:43 np0005473739 podman[397774]: 2025-10-07 14:42:43.526851706 +0000 UTC m=+1.109481452 container died 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:42:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7da13c56bcffd84141303230b59ba381972a80db1b722dc37b92ecb830a647ac-merged.mount: Deactivated successfully.
Oct  7 10:42:43 np0005473739 podman[397774]: 2025-10-07 14:42:43.589784149 +0000 UTC m=+1.172413895 container remove 9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_grothendieck, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:42:43 np0005473739 systemd[1]: libpod-conmon-9477ae4fab86e43e4ad295f51887e909aa1b408d5121acaaa98a707a5da5661b.scope: Deactivated successfully.
Oct  7 10:42:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:42:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:42:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:42:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:42:43 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d5d97e10-25a7-441a-991f-906f8f7bbff2 does not exist
Oct  7 10:42:43 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 5651c734-f50e-4b2e-9418-2d87fc85e6ff does not exist
Oct  7 10:42:43 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:42:43 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.069 2 DEBUG nova.compute.manager [req-667d0dd1-0afa-43ab-8cdf-f4a23011019b req-067479b2-03f3-4185-99b1-501ae13dc18b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.071 2 DEBUG oslo_concurrency.lockutils [req-667d0dd1-0afa-43ab-8cdf-f4a23011019b req-067479b2-03f3-4185-99b1-501ae13dc18b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.071 2 DEBUG oslo_concurrency.lockutils [req-667d0dd1-0afa-43ab-8cdf-f4a23011019b req-067479b2-03f3-4185-99b1-501ae13dc18b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.071 2 DEBUG oslo_concurrency.lockutils [req-667d0dd1-0afa-43ab-8cdf-f4a23011019b req-067479b2-03f3-4185-99b1-501ae13dc18b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.071 2 DEBUG nova.compute.manager [req-667d0dd1-0afa-43ab-8cdf-f4a23011019b req-067479b2-03f3-4185-99b1-501ae13dc18b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Processing event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.175 2 DEBUG nova.compute.manager [req-5423ede3-8197-4dd5-b1ef-ea4aef2b559e req-e4b461ff-1aad-412f-91bb-7273b7e7c242 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.176 2 DEBUG oslo_concurrency.lockutils [req-5423ede3-8197-4dd5-b1ef-ea4aef2b559e req-e4b461ff-1aad-412f-91bb-7273b7e7c242 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.177 2 DEBUG oslo_concurrency.lockutils [req-5423ede3-8197-4dd5-b1ef-ea4aef2b559e req-e4b461ff-1aad-412f-91bb-7273b7e7c242 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.177 2 DEBUG oslo_concurrency.lockutils [req-5423ede3-8197-4dd5-b1ef-ea4aef2b559e req-e4b461ff-1aad-412f-91bb-7273b7e7c242 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.177 2 DEBUG nova.compute.manager [req-5423ede3-8197-4dd5-b1ef-ea4aef2b559e req-e4b461ff-1aad-412f-91bb-7273b7e7c242 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Processing event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.178 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance event wait completed in 2 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.185 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848164.1854818, 77918bef-8f72-4152-ac55-f4d4c98477ec => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.186 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.189 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.193 2 INFO nova.virt.libvirt.driver [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance spawned successfully.#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.193 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.287 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.288 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.288 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.289 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.289 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.289 2 DEBUG nova.virt.libvirt.driver [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.294 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.297 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.449 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:42:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2409: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 3.6 KiB/s rd, 12 KiB/s wr, 5 op/s
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.835 2 INFO nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Took 19.96 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:42:44 np0005473739 nova_compute[259550]: 2025-10-07 14:42:44.835 2 DEBUG nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:42:45 np0005473739 nova_compute[259550]: 2025-10-07 14:42:45.282 2 INFO nova.compute.manager [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Took 23.68 seconds to build instance.#033[00m
Oct  7 10:42:45 np0005473739 nova_compute[259550]: 2025-10-07 14:42:45.912 2 DEBUG oslo_concurrency.lockutils [None req-241ee689-9a34-42d0-b61d-d1ae2d82f958 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 24.996s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.366 2 DEBUG nova.compute.manager [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.369 2 DEBUG oslo_concurrency.lockutils [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.370 2 DEBUG oslo_concurrency.lockutils [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.371 2 DEBUG oslo_concurrency.lockutils [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.371 2 DEBUG nova.compute.manager [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.372 2 WARNING nova.compute.manager [req-39e61412-9ff9-4d58-b5c3-e037400d15c4 req-f1936fb8-8d98-4546-85c6-17984e521d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received unexpected event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea for instance with vm_state active and task_state None.#033[00m
Oct  7 10:42:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2410: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 873 KiB/s rd, 12 KiB/s wr, 37 op/s
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.604 2 DEBUG nova.compute.manager [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.605 2 DEBUG oslo_concurrency.lockutils [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.605 2 DEBUG oslo_concurrency.lockutils [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.605 2 DEBUG oslo_concurrency.lockutils [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.606 2 DEBUG nova.compute.manager [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:42:46 np0005473739 nova_compute[259550]: 2025-10-07 14:42:46.606 2 WARNING nova.compute.manager [req-b62648d1-1e07-483e-8da3-5a98e5b13a5d req-e745df93-ca9a-46c0-85e6-751e8048ea1f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received unexpected event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:42:47 np0005473739 nova_compute[259550]: 2025-10-07 14:42:47.533 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:47 np0005473739 nova_compute[259550]: 2025-10-07 14:42:47.534 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:47 np0005473739 nova_compute[259550]: 2025-10-07 14:42:47.588 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:42:47 np0005473739 nova_compute[259550]: 2025-10-07 14:42:47.785 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:47 np0005473739 nova_compute[259550]: 2025-10-07 14:42:47.785 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:47 np0005473739 nova_compute[259550]: 2025-10-07 14:42:47.797 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:42:47 np0005473739 nova_compute[259550]: 2025-10-07 14:42:47.797 2 INFO nova.compute.claims [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:42:48 np0005473739 nova_compute[259550]: 2025-10-07 14:42:48.004 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:48 np0005473739 nova_compute[259550]: 2025-10-07 14:42:48.022 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:42:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2089006158' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:42:48 np0005473739 nova_compute[259550]: 2025-10-07 14:42:48.481 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:48 np0005473739 nova_compute[259550]: 2025-10-07 14:42:48.492 2 DEBUG nova.compute.provider_tree [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:42:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2411: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 873 KiB/s rd, 12 KiB/s wr, 37 op/s
Oct  7 10:42:48 np0005473739 nova_compute[259550]: 2025-10-07 14:42:48.547 2 DEBUG nova.scheduler.client.report [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:42:48 np0005473739 nova_compute[259550]: 2025-10-07 14:42:48.610 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.824s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:48 np0005473739 nova_compute[259550]: 2025-10-07 14:42:48.611 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:42:48 np0005473739 nova_compute[259550]: 2025-10-07 14:42:48.750 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:42:48 np0005473739 nova_compute[259550]: 2025-10-07 14:42:48.751 2 DEBUG nova.network.neutron [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:42:48 np0005473739 nova_compute[259550]: 2025-10-07 14:42:48.945 2 INFO nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:42:49 np0005473739 nova_compute[259550]: 2025-10-07 14:42:49.141 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:49 np0005473739 nova_compute[259550]: 2025-10-07 14:42:49.146 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:42:49 np0005473739 nova_compute[259550]: 2025-10-07 14:42:49.914 2 DEBUG nova.policy [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:42:49 np0005473739 nova_compute[259550]: 2025-10-07 14:42:49.971 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:42:49 np0005473739 nova_compute[259550]: 2025-10-07 14:42:49.975 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:42:49 np0005473739 nova_compute[259550]: 2025-10-07 14:42:49.976 2 INFO nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Creating image(s)#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.022 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:50 np0005473739 podman[397957]: 2025-10-07 14:42:50.078991633 +0000 UTC m=+0.064675970 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.081 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.117 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.122 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:50 np0005473739 podman[397971]: 2025-10-07 14:42:50.128116519 +0000 UTC m=+0.109972683 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.206 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.208 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.209 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.209 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.234 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.238 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4954e98d-461f-46e8-9f41-c81930c02cff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2412: 305 pgs: 305 active+clean; 88 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.575 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 4954e98d-461f-46e8-9f41-c81930c02cff_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.645 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.739 2 DEBUG nova.objects.instance [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 4954e98d-461f-46e8-9f41-c81930c02cff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.760 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.761 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Ensure instance console log exists: /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.761 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.761 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.762 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:50 np0005473739 NetworkManager[44949]: <info>  [1759848170.7719] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/563)
Oct  7 10:42:50 np0005473739 NetworkManager[44949]: <info>  [1759848170.7730] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/564)
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:50Z|01410|binding|INFO|Releasing lport 21cca283-5f07-4e28-8ee2-a58e7356e156 from this chassis (sb_readonly=0)
Oct  7 10:42:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:50Z|01411|binding|INFO|Releasing lport 770f7899-3ca8-4bf3-9f06-6b8c25522fc3 from this chassis (sb_readonly=0)
Oct  7 10:42:50 np0005473739 nova_compute[259550]: 2025-10-07 14:42:50.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:51 np0005473739 nova_compute[259550]: 2025-10-07 14:42:51.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:42:52 np0005473739 nova_compute[259550]: 2025-10-07 14:42:52.173 2 DEBUG nova.network.neutron [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Successfully updated port: 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:42:52 np0005473739 nova_compute[259550]: 2025-10-07 14:42:52.192 2 DEBUG nova.compute.manager [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:42:52 np0005473739 nova_compute[259550]: 2025-10-07 14:42:52.193 2 DEBUG nova.compute.manager [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing instance network info cache due to event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:42:52 np0005473739 nova_compute[259550]: 2025-10-07 14:42:52.193 2 DEBUG oslo_concurrency.lockutils [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:42:52 np0005473739 nova_compute[259550]: 2025-10-07 14:42:52.193 2 DEBUG oslo_concurrency.lockutils [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:42:52 np0005473739 nova_compute[259550]: 2025-10-07 14:42:52.193 2 DEBUG nova.network.neutron [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing network info cache for port c75fde3c-8461-4ed7-9c14-7f14f5794599 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:42:52 np0005473739 nova_compute[259550]: 2025-10-07 14:42:52.257 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:42:52 np0005473739 nova_compute[259550]: 2025-10-07 14:42:52.257 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:42:52 np0005473739 nova_compute[259550]: 2025-10-07 14:42:52.258 2 DEBUG nova.network.neutron [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:42:52 np0005473739 nova_compute[259550]: 2025-10-07 14:42:52.440 2 DEBUG nova.network.neutron [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:42:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2413: 305 pgs: 305 active+clean; 110 MiB data, 898 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 696 KiB/s wr, 75 op/s
Oct  7 10:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:42:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:42:53 np0005473739 nova_compute[259550]: 2025-10-07 14:42:53.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.278 2 DEBUG nova.network.neutron [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updating instance_info_cache with network_info: [{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.301 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.302 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Instance network_info: |[{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.304 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Start _get_guest_xml network_info=[{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.309 2 WARNING nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.316 2 DEBUG nova.virt.libvirt.host [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.317 2 DEBUG nova.virt.libvirt.host [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.321 2 DEBUG nova.compute.manager [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.322 2 DEBUG nova.compute.manager [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Refreshing instance network info cache due to event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.322 2 DEBUG oslo_concurrency.lockutils [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.322 2 DEBUG oslo_concurrency.lockutils [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.323 2 DEBUG nova.network.neutron [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Refreshing network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.324 2 DEBUG nova.virt.libvirt.host [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.325 2 DEBUG nova.virt.libvirt.host [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.325 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.325 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.326 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.326 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.326 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.327 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.327 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.327 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.327 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.327 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.328 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.328 2 DEBUG nova.virt.hardware [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.331 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2414: 305 pgs: 305 active+clean; 134 MiB data, 916 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct  7 10:42:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:42:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2255972073' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.841 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.510s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.862 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.866 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.908 2 DEBUG nova.network.neutron [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updated VIF entry in instance network info cache for port c75fde3c-8461-4ed7-9c14-7f14f5794599. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.909 2 DEBUG nova.network.neutron [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:42:54 np0005473739 nova_compute[259550]: 2025-10-07 14:42:54.939 2 DEBUG oslo_concurrency.lockutils [req-d97a65f6-0de9-4cb0-98cc-41529dbf2476 req-e6c272ab-b28a-4a69-89b8-dbc87086b08b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:42:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:42:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/338211284' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.396 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.397 2 DEBUG nova.virt.libvirt.vif [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-20568441',display_name='tempest-TestNetworkBasicOps-server-20568441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-20568441',id=128,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLqq1NDEWVa3Pi0bJGoGA/5QwojsWmG4PsNzBryv1MHFYQv3a/HZ2jG+uXsf9gU2YDMUeSFxRaiZcRUccSi6nnsGaKDlAkF25zSznVEcPGfGqTa/G6c3Yyx8D7jl5gKmBw==',key_name='tempest-TestNetworkBasicOps-562171512',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-tdpabr3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:49Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=4954e98d-461f-46e8-9f41-c81930c02cff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.398 2 DEBUG nova.network.os_vif_util [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.398 2 DEBUG nova.network.os_vif_util [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.399 2 DEBUG nova.objects.instance [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 4954e98d-461f-46e8-9f41-c81930c02cff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.458 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  <uuid>4954e98d-461f-46e8-9f41-c81930c02cff</uuid>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  <name>instance-00000080</name>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-20568441</nova:name>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:42:54</nova:creationTime>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <nova:port uuid="8e464ebe-1be0-4a3a-b8df-c088ec663aa2">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <entry name="serial">4954e98d-461f-46e8-9f41-c81930c02cff</entry>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <entry name="uuid">4954e98d-461f-46e8-9f41-c81930c02cff</entry>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4954e98d-461f-46e8-9f41-c81930c02cff_disk">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/4954e98d-461f-46e8-9f41-c81930c02cff_disk.config">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:b7:10:26"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <target dev="tap8e464ebe-1b"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/console.log" append="off"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:42:55 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:42:55 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:42:55 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:42:55 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.459 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Preparing to wait for external event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.459 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.460 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.460 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.461 2 DEBUG nova.virt.libvirt.vif [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:42:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-20568441',display_name='tempest-TestNetworkBasicOps-server-20568441',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-20568441',id=128,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLqq1NDEWVa3Pi0bJGoGA/5QwojsWmG4PsNzBryv1MHFYQv3a/HZ2jG+uXsf9gU2YDMUeSFxRaiZcRUccSi6nnsGaKDlAkF25zSznVEcPGfGqTa/G6c3Yyx8D7jl5gKmBw==',key_name='tempest-TestNetworkBasicOps-562171512',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-tdpabr3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:42:49Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=4954e98d-461f-46e8-9f41-c81930c02cff,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.461 2 DEBUG nova.network.os_vif_util [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.461 2 DEBUG nova.network.os_vif_util [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.462 2 DEBUG os_vif [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.463 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.467 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e464ebe-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.468 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e464ebe-1b, col_values=(('external_ids', {'iface-id': '8e464ebe-1be0-4a3a-b8df-c088ec663aa2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:10:26', 'vm-uuid': '4954e98d-461f-46e8-9f41-c81930c02cff'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:55 np0005473739 NetworkManager[44949]: <info>  [1759848175.4704] manager: (tap8e464ebe-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/565)
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.481 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.482 2 INFO os_vif [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b')#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.550 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.550 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.550 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:b7:10:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.551 2 INFO nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Using config drive#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.576 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:42:55 np0005473739 nova_compute[259550]: 2025-10-07 14:42:55.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.199 2 INFO nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Creating config drive at /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.205 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtw0xk2z execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.300 2 DEBUG nova.network.neutron [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updated VIF entry in instance network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.301 2 DEBUG nova.network.neutron [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updating instance_info_cache with network_info: [{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.346 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpdtw0xk2z" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.373 2 DEBUG nova.storage.rbd_utils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 4954e98d-461f-46e8-9f41-c81930c02cff_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.376 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config 4954e98d-461f-46e8-9f41-c81930c02cff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.414 2 DEBUG oslo_concurrency.lockutils [req-b9a26612-e34a-47b0-98e4-cacb9dbb163c req-8cfdd93f-bc79-4155-8433-59557446b6d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:42:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2415: 305 pgs: 305 active+clean; 144 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 2.9 MiB/s wr, 113 op/s
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.569 2 DEBUG oslo_concurrency.processutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config 4954e98d-461f-46e8-9f41-c81930c02cff_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.570 2 INFO nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Deleting local config drive /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff/disk.config because it was imported into RBD.#033[00m
Oct  7 10:42:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:56Z|00160|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:41:99:4d 10.100.0.4
Oct  7 10:42:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:56Z|00161|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:41:99:4d 10.100.0.4
Oct  7 10:42:56 np0005473739 NetworkManager[44949]: <info>  [1759848176.6332] manager: (tap8e464ebe-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/566)
Oct  7 10:42:56 np0005473739 kernel: tap8e464ebe-1b: entered promiscuous mode
Oct  7 10:42:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:56Z|01412|binding|INFO|Claiming lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for this chassis.
Oct  7 10:42:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:56Z|01413|binding|INFO|8e464ebe-1be0-4a3a-b8df-c088ec663aa2: Claiming fa:16:3e:b7:10:26 10.100.0.12
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:56Z|01414|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 ovn-installed in OVS
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:56Z|01415|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 up in Southbound
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.656 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:10:26 10.100.0.12'], port_security=['fa:16:3e:b7:10:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4954e98d-461f-46e8-9f41-c81930c02cff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aea4318-48a4-451b-b36b-e364946c1859', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d04921d-dca2-4310-91f6-b9ea410f8b20, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8e464ebe-1be0-4a3a-b8df-c088ec663aa2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.658 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 in datapath 7aea4318-48a4-451b-b36b-e364946c1859 bound to our chassis#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.659 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7aea4318-48a4-451b-b36b-e364946c1859#033[00m
Oct  7 10:42:56 np0005473739 systemd-udevd[398302]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.673 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f80f9fac-7564-49c4-9a69-30c3067820c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.676 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7aea4318-41 in ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.679 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7aea4318-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a6a49a3-1f1a-4358-a6e8-ba42dcaa101a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.680 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ff08458-b7dd-4e22-9e7e-1867e7694f36]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 NetworkManager[44949]: <info>  [1759848176.6909] device (tap8e464ebe-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:42:56 np0005473739 NetworkManager[44949]: <info>  [1759848176.6921] device (tap8e464ebe-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:42:56 np0005473739 systemd-machined[214580]: New machine qemu-161-instance-00000080.
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.695 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6cd73192-d170-4295-aff5-22d50cfa9d56]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 systemd[1]: Started Virtual Machine qemu-161-instance-00000080.
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.721 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cab42816-0ad3-42a4-a52f-b73c6b68811f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.754 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[73311364-256b-4cbb-af64-833bd0b2b3fd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 NetworkManager[44949]: <info>  [1759848176.7604] manager: (tap7aea4318-40): new Veth device (/org/freedesktop/NetworkManager/Devices/567)
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.759 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bb150253-42db-48c7-8c50-9bf55a2de0a4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.797 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3ecac053-6828-41d5-9d48-6741c9ee18c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.801 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7c3b9af2-bc65-4903-9e62-996fb66c831e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 NetworkManager[44949]: <info>  [1759848176.8292] device (tap7aea4318-40): carrier: link connected
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.834 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[242185cc-6a53-4ba0-940e-914ecbdddcb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.850 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[47b66eaa-4033-40d2-9b68-8b2a9db10f3a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aea4318-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876039, 'reachable_time': 27562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398336, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.866 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4bb3268c-af1d-4868-884d-9e06c04f4870]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:73fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 876039, 'tstamp': 876039}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 398337, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.885 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[89fda78e-02e0-4c7f-be68-86b003959428]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aea4318-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 110, 'tx_bytes': 180, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 405], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876039, 'reachable_time': 27562, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 152, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 152, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 398338, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.917 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[75d10307-1d77-4b65-9754-0ceb2c3ed8c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.985 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f2ca810a-de01-4683-930d-6efb1c388c5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.986 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aea4318-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.987 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.987 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7aea4318-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.989 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:56 np0005473739 NetworkManager[44949]: <info>  [1759848176.9898] manager: (tap7aea4318-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/568)
Oct  7 10:42:56 np0005473739 kernel: tap7aea4318-40: entered promiscuous mode
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:56.995 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7aea4318-40, col_values=(('external_ids', {'iface-id': '51e9d814-5370-44c9-809c-78ed73cc4b32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:42:56 np0005473739 nova_compute[259550]: 2025-10-07 14:42:56.996 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:56 np0005473739 ovn_controller[151684]: 2025-10-07T14:42:56Z|01416|binding|INFO|Releasing lport 51e9d814-5370-44c9-809c-78ed73cc4b32 from this chassis (sb_readonly=0)
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:57.011 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:57.012 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[56fb61c2-4bf9-4300-8669-b9641d57df9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:57.013 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-7aea4318-48a4-451b-b36b-e364946c1859
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 7aea4318-48a4-451b-b36b-e364946c1859
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Oct  7 10:42:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:42:57.015 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'env', 'PROCESS_TAG=haproxy-7aea4318-48a4-451b-b36b-e364946c1859', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7aea4318-48a4-451b-b36b-e364946c1859.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.093 2 DEBUG nova.compute.manager [req-790f80a7-43cb-4530-b51b-3619da2da70e req-eda356a4-ecd6-422b-b82d-17dffad07565 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.093 2 DEBUG oslo_concurrency.lockutils [req-790f80a7-43cb-4530-b51b-3619da2da70e req-eda356a4-ecd6-422b-b82d-17dffad07565 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.093 2 DEBUG oslo_concurrency.lockutils [req-790f80a7-43cb-4530-b51b-3619da2da70e req-eda356a4-ecd6-422b-b82d-17dffad07565 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.094 2 DEBUG oslo_concurrency.lockutils [req-790f80a7-43cb-4530-b51b-3619da2da70e req-eda356a4-ecd6-422b-b82d-17dffad07565 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.094 2 DEBUG nova.compute.manager [req-790f80a7-43cb-4530-b51b-3619da2da70e req-eda356a4-ecd6-422b-b82d-17dffad07565 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Processing event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  7 10:42:57 np0005473739 podman[398412]: 2025-10-07 14:42:57.391100542 +0000 UTC m=+0.047490493 container create 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:42:57 np0005473739 systemd[1]: Started libpod-conmon-5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d.scope.
Oct  7 10:42:57 np0005473739 podman[398412]: 2025-10-07 14:42:57.368044519 +0000 UTC m=+0.024434480 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:42:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:42:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/26fbd25cec77d0b294d26659ee0c2598b49ebe94e1d14db13af23b421b9aafa8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:42:57 np0005473739 podman[398412]: 2025-10-07 14:42:57.50239517 +0000 UTC m=+0.158785141 container init 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:42:57 np0005473739 podman[398412]: 2025-10-07 14:42:57.508272586 +0000 UTC m=+0.164662527 container start 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:42:57 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [NOTICE]   (398431) : New worker (398433) forked
Oct  7 10:42:57 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [NOTICE]   (398431) : Loading success.
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.757 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848177.7567456, 4954e98d-461f-46e8-9f41-c81930c02cff => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.757 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] VM Started (Lifecycle Event)
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.759 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.763 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.766 2 INFO nova.virt.libvirt.driver [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Instance spawned successfully.
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.766 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.784 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.787 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.805 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.805 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.805 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.806 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.806 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.806 2 DEBUG nova.virt.libvirt.driver [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.845 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.845 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848177.7575202, 4954e98d-461f-46e8-9f41-c81930c02cff => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.845 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] VM Paused (Lifecycle Event)
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.899 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.902 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848177.7619922, 4954e98d-461f-46e8-9f41-c81930c02cff => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.902 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] VM Resumed (Lifecycle Event)
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.907 2 INFO nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Took 7.94 seconds to spawn the instance on the hypervisor.
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.907 2 DEBUG nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.965 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.968 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:42:57 np0005473739 nova_compute[259550]: 2025-10-07 14:42:57.990 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:42:58 np0005473739 nova_compute[259550]: 2025-10-07 14:42:58.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:42:58 np0005473739 nova_compute[259550]: 2025-10-07 14:42:58.010 2 INFO nova.compute.manager [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Took 10.25 seconds to build instance.
Oct  7 10:42:58 np0005473739 nova_compute[259550]: 2025-10-07 14:42:58.029 2 DEBUG oslo_concurrency.lockutils [None req-ed42b2c2-7876-4bb6-8efd-d80e1e0ac94c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.495s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:42:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:42:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2416: 305 pgs: 305 active+clean; 144 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 2.9 MiB/s wr, 81 op/s
Oct  7 10:42:58 np0005473739 nova_compute[259550]: 2025-10-07 14:42:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:42:59 np0005473739 nova_compute[259550]: 2025-10-07 14:42:59.171 2 DEBUG nova.compute.manager [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:42:59 np0005473739 nova_compute[259550]: 2025-10-07 14:42:59.172 2 DEBUG oslo_concurrency.lockutils [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:42:59 np0005473739 nova_compute[259550]: 2025-10-07 14:42:59.172 2 DEBUG oslo_concurrency.lockutils [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:42:59 np0005473739 nova_compute[259550]: 2025-10-07 14:42:59.172 2 DEBUG oslo_concurrency.lockutils [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:42:59 np0005473739 nova_compute[259550]: 2025-10-07 14:42:59.173 2 DEBUG nova.compute.manager [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] No waiting events found dispatching network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:42:59 np0005473739 nova_compute[259550]: 2025-10-07 14:42:59.173 2 WARNING nova.compute.manager [req-b2aaaca2-f376-44d1-b245-1e7b34114b3d req-985302ab-e698-4c75-bf91-f13163ae8a61 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received unexpected event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with vm_state active and task_state None.
Oct  7 10:43:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:00.074 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:00.075 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:00.076 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:00 np0005473739 nova_compute[259550]: 2025-10-07 14:43:00.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:43:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2417: 305 pgs: 305 active+clean; 166 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 2.5 MiB/s rd, 3.9 MiB/s wr, 155 op/s
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.400 2 DEBUG nova.compute.manager [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.401 2 DEBUG nova.compute.manager [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Refreshing instance network info cache due to event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.401 2 DEBUG oslo_concurrency.lockutils [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.401 2 DEBUG oslo_concurrency.lockutils [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.402 2 DEBUG nova.network.neutron [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Refreshing network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.574 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.575 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.575 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.575 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.576 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.577 2 INFO nova.compute.manager [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Terminating instance
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.578 2 DEBUG nova.compute.manager [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:43:01 np0005473739 kernel: tap8e464ebe-1b (unregistering): left promiscuous mode
Oct  7 10:43:01 np0005473739 NetworkManager[44949]: <info>  [1759848181.6470] device (tap8e464ebe-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:43:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:01Z|01417|binding|INFO|Releasing lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 from this chassis (sb_readonly=0)
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:43:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:01Z|01418|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 down in Southbound
Oct  7 10:43:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:01Z|01419|binding|INFO|Removing iface tap8e464ebe-1b ovn-installed in OVS
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:43:01 np0005473739 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Deactivated successfully.
Oct  7 10:43:01 np0005473739 systemd[1]: machine-qemu\x2d161\x2dinstance\x2d00000080.scope: Consumed 4.911s CPU time.
Oct  7 10:43:01 np0005473739 systemd-machined[214580]: Machine qemu-161-instance-00000080 terminated.
Oct  7 10:43:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:01.729 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:10:26 10.100.0.12'], port_security=['fa:16:3e:b7:10:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '4954e98d-461f-46e8-9f41-c81930c02cff', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aea4318-48a4-451b-b36b-e364946c1859', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d04921d-dca2-4310-91f6-b9ea410f8b20, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8e464ebe-1be0-4a3a-b8df-c088ec663aa2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:43:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:01.730 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 in datapath 7aea4318-48a4-451b-b36b-e364946c1859 unbound from our chassis
Oct  7 10:43:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:01.732 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aea4318-48a4-451b-b36b-e364946c1859, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  7 10:43:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:01.733 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b0a4e504-8d0e-4d7d-be64-f2b660b0aeb4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:43:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:01.734 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 namespace which is not needed anymore
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.816 2 INFO nova.virt.libvirt.driver [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Instance destroyed successfully.
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.817 2 DEBUG nova.objects.instance [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 4954e98d-461f-46e8-9f41-c81930c02cff obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:43:01 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [NOTICE]   (398431) : haproxy version is 2.8.14-c23fe91
Oct  7 10:43:01 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [NOTICE]   (398431) : path to executable is /usr/sbin/haproxy
Oct  7 10:43:01 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [WARNING]  (398431) : Exiting Master process...
Oct  7 10:43:01 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [WARNING]  (398431) : Exiting Master process...
Oct  7 10:43:01 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [ALERT]    (398431) : Current worker (398433) exited with code 143 (Terminated)
Oct  7 10:43:01 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[398427]: [WARNING]  (398431) : All workers exited. Exiting... (0)
Oct  7 10:43:01 np0005473739 systemd[1]: libpod-5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d.scope: Deactivated successfully.
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.868 2 DEBUG nova.virt.libvirt.vif [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:42:44Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-20568441',display_name='tempest-TestNetworkBasicOps-server-20568441',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-20568441',id=128,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLqq1NDEWVa3Pi0bJGoGA/5QwojsWmG4PsNzBryv1MHFYQv3a/HZ2jG+uXsf9gU2YDMUeSFxRaiZcRUccSi6nnsGaKDlAkF25zSznVEcPGfGqTa/G6c3Yyx8D7jl5gKmBw==',key_name='tempest-TestNetworkBasicOps-562171512',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:42:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-tdpabr3h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:42:57Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=4954e98d-461f-46e8-9f41-c81930c02cff,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.869 2 DEBUG nova.network.os_vif_util [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.870 2 DEBUG nova.network.os_vif_util [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.870 2 DEBUG os_vif [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.872 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.872 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e464ebe-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:01 np0005473739 podman[398474]: 2025-10-07 14:43:01.87440322 +0000 UTC m=+0.045223044 container died 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:01 np0005473739 nova_compute[259550]: 2025-10-07 14:43:01.879 2 INFO os_vif [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b')#033[00m
Oct  7 10:43:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d-userdata-shm.mount: Deactivated successfully.
Oct  7 10:43:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-26fbd25cec77d0b294d26659ee0c2598b49ebe94e1d14db13af23b421b9aafa8-merged.mount: Deactivated successfully.
Oct  7 10:43:01 np0005473739 podman[398474]: 2025-10-07 14:43:01.927374268 +0000 UTC m=+0.098194102 container cleanup 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:43:01 np0005473739 systemd[1]: libpod-conmon-5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d.scope: Deactivated successfully.
Oct  7 10:43:01 np0005473739 podman[398523]: 2025-10-07 14:43:01.992243082 +0000 UTC m=+0.042534791 container remove 5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:43:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.000 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2765b280-13c8-4146-8cdf-564e5ebb4cd9]: (4, ('Tue Oct  7 02:43:01 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 (5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d)\n5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d\nTue Oct  7 02:43:01 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 (5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d)\n5a6fe4400dce962a65a7395758c8ccdd4e9b374afc8cf0d243b8a439d1fddc7d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.003 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e3da848b-7ecf-4584-bb58-2eea4086b907]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.004 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aea4318-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:02 np0005473739 nova_compute[259550]: 2025-10-07 14:43:02.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:02 np0005473739 kernel: tap7aea4318-40: left promiscuous mode
Oct  7 10:43:02 np0005473739 nova_compute[259550]: 2025-10-07 14:43:02.020 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.022 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3da36a0e-9bd3-4587-8d0e-27881f41cdf7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:02 np0005473739 nova_compute[259550]: 2025-10-07 14:43:02.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.056 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cae0ce83-f3d1-4e4f-b61a-f387b325f660]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.057 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f92f4a-26bd-4f13-bca5-eedcec55ba12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.073 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c538e9a1-138e-458a-9111-6842044c7459]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 876031, 'reachable_time': 29509, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 398540, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:02 np0005473739 systemd[1]: run-netns-ovnmeta\x2d7aea4318\x2d48a4\x2d451b\x2db36b\x2de364946c1859.mount: Deactivated successfully.
Oct  7 10:43:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.077 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:43:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:02.077 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[45d33ae1-263c-4d17-9557-9a53481a7221]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:02 np0005473739 nova_compute[259550]: 2025-10-07 14:43:02.465 2 INFO nova.virt.libvirt.driver [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Deleting instance files /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff_del#033[00m
Oct  7 10:43:02 np0005473739 nova_compute[259550]: 2025-10-07 14:43:02.466 2 INFO nova.virt.libvirt.driver [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Deletion of /var/lib/nova/instances/4954e98d-461f-46e8-9f41-c81930c02cff_del complete#033[00m
Oct  7 10:43:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2418: 305 pgs: 305 active+clean; 167 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 163 op/s
Oct  7 10:43:02 np0005473739 nova_compute[259550]: 2025-10-07 14:43:02.513 2 INFO nova.compute.manager [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Took 0.93 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:43:02 np0005473739 nova_compute[259550]: 2025-10-07 14:43:02.514 2 DEBUG oslo.service.loopingcall [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:43:02 np0005473739 nova_compute[259550]: 2025-10-07 14:43:02.514 2 DEBUG nova.compute.manager [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:43:02 np0005473739 nova_compute[259550]: 2025-10-07 14:43:02.515 2 DEBUG nova.network.neutron [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.549 2 DEBUG nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.549 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.549 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.549 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.550 2 DEBUG nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] No waiting events found dispatching network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.550 2 DEBUG nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.550 2 DEBUG nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.550 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.551 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.551 2 DEBUG oslo_concurrency.lockutils [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.551 2 DEBUG nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] No waiting events found dispatching network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.551 2 WARNING nova.compute.manager [req-3a2f6946-59de-4f5b-b904-fb4f32550783 req-59582d18-5d2a-4610-ac55-eddc683bc9ea 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Received unexpected event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.662 2 DEBUG nova.network.neutron [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updated VIF entry in instance network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.663 2 DEBUG nova.network.neutron [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updating instance_info_cache with network_info: [{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.698 2 DEBUG oslo_concurrency.lockutils [req-806a92f8-3d3d-4302-989d-2ca9afa4401f req-1d0dfd43-0e02-4486-9895-07921637992c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-4954e98d-461f-46e8-9f41-c81930c02cff" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:43:03 np0005473739 nova_compute[259550]: 2025-10-07 14:43:03.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:43:04 np0005473739 nova_compute[259550]: 2025-10-07 14:43:04.015 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  7 10:43:04 np0005473739 nova_compute[259550]: 2025-10-07 14:43:04.209 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:43:04 np0005473739 nova_compute[259550]: 2025-10-07 14:43:04.209 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:43:04 np0005473739 nova_compute[259550]: 2025-10-07 14:43:04.210 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:43:04 np0005473739 nova_compute[259550]: 2025-10-07 14:43:04.210 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 77918bef-8f72-4152-ac55-f4d4c98477ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:43:04 np0005473739 nova_compute[259550]: 2025-10-07 14:43:04.479 2 DEBUG nova.network.neutron [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:04 np0005473739 nova_compute[259550]: 2025-10-07 14:43:04.495 2 INFO nova.compute.manager [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Took 1.98 seconds to deallocate network for instance.#033[00m
Oct  7 10:43:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2419: 305 pgs: 305 active+clean; 143 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.3 MiB/s wr, 176 op/s
Oct  7 10:43:04 np0005473739 nova_compute[259550]: 2025-10-07 14:43:04.539 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:04 np0005473739 nova_compute[259550]: 2025-10-07 14:43:04.539 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:04 np0005473739 nova_compute[259550]: 2025-10-07 14:43:04.607 2 DEBUG oslo_concurrency.processutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:43:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/857675004' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:43:05 np0005473739 nova_compute[259550]: 2025-10-07 14:43:05.049 2 DEBUG oslo_concurrency.processutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:05 np0005473739 nova_compute[259550]: 2025-10-07 14:43:05.056 2 DEBUG nova.compute.provider_tree [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:43:05 np0005473739 nova_compute[259550]: 2025-10-07 14:43:05.074 2 DEBUG nova.scheduler.client.report [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:43:05 np0005473739 nova_compute[259550]: 2025-10-07 14:43:05.097 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.558s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:05 np0005473739 nova_compute[259550]: 2025-10-07 14:43:05.126 2 INFO nova.scheduler.client.report [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 4954e98d-461f-46e8-9f41-c81930c02cff#033[00m
Oct  7 10:43:05 np0005473739 nova_compute[259550]: 2025-10-07 14:43:05.188 2 DEBUG oslo_concurrency.lockutils [None req-704a837f-aa1d-4475-b501-28ad56bbf218 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "4954e98d-461f-46e8-9f41-c81930c02cff" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.613s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:06 np0005473739 nova_compute[259550]: 2025-10-07 14:43:06.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:06 np0005473739 podman[398564]: 2025-10-07 14:43:06.069256471 +0000 UTC m=+0.052953509 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 10:43:06 np0005473739 podman[398565]: 2025-10-07 14:43:06.08014381 +0000 UTC m=+0.061390162 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 10:43:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2420: 305 pgs: 305 active+clean; 121 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 163 op/s
Oct  7 10:43:06 np0005473739 nova_compute[259550]: 2025-10-07 14:43:06.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.071 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.098 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.098 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.099 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.126 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.127 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.127 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.127 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.128 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:43:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/639504587' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.633 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.505s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.756 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.756 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.963 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.965 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3522MB free_disk=59.94275665283203GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.965 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:07 np0005473739 nova_compute[259550]: 2025-10-07 14:43:07.966 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:08 np0005473739 nova_compute[259550]: 2025-10-07 14:43:08.012 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:08 np0005473739 nova_compute[259550]: 2025-10-07 14:43:08.058 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 77918bef-8f72-4152-ac55-f4d4c98477ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:43:08 np0005473739 nova_compute[259550]: 2025-10-07 14:43:08.059 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:43:08 np0005473739 nova_compute[259550]: 2025-10-07 14:43:08.059 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:43:08 np0005473739 nova_compute[259550]: 2025-10-07 14:43:08.102 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2421: 305 pgs: 305 active+clean; 121 MiB data, 935 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 145 op/s
Oct  7 10:43:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:43:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2491770374' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:43:08 np0005473739 nova_compute[259550]: 2025-10-07 14:43:08.574 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:08 np0005473739 nova_compute[259550]: 2025-10-07 14:43:08.579 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:43:08 np0005473739 nova_compute[259550]: 2025-10-07 14:43:08.597 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:43:08 np0005473739 nova_compute[259550]: 2025-10-07 14:43:08.619 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:43:08 np0005473739 nova_compute[259550]: 2025-10-07 14:43:08.620 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2422: 305 pgs: 305 active+clean; 121 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 145 op/s
Oct  7 10:43:11 np0005473739 nova_compute[259550]: 2025-10-07 14:43:11.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:12 np0005473739 nova_compute[259550]: 2025-10-07 14:43:12.506 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:12 np0005473739 nova_compute[259550]: 2025-10-07 14:43:12.506 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2423: 305 pgs: 305 active+clean; 121 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 844 KiB/s rd, 47 KiB/s wr, 71 op/s
Oct  7 10:43:12 np0005473739 nova_compute[259550]: 2025-10-07 14:43:12.560 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:43:12 np0005473739 nova_compute[259550]: 2025-10-07 14:43:12.667 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:12 np0005473739 nova_compute[259550]: 2025-10-07 14:43:12.667 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:12 np0005473739 nova_compute[259550]: 2025-10-07 14:43:12.676 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:43:12 np0005473739 nova_compute[259550]: 2025-10-07 14:43:12.677 2 INFO nova.compute.claims [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.057 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.504 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:43:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:43:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/840452907' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.599 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.604 2 DEBUG nova.compute.provider_tree [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.648 2 DEBUG nova.scheduler.client.report [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.687 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.020s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.688 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.750 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.751 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.781 2 INFO nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.842 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.963 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.964 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.965 2 INFO nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Creating image(s)
Oct  7 10:43:13 np0005473739 nova_compute[259550]: 2025-10-07 14:43:13.985 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.012 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.040 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.045 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.099 2 DEBUG nova.policy [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.145 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.147 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.148 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.148 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.175 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.180 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1f692a08-811a-41fb-a8a2-aa936481a256_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:43:14 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct  7 10:43:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2424: 305 pgs: 305 active+clean; 121 MiB data, 928 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 27 op/s
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.669 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1f692a08-811a-41fb-a8a2-aa936481a256_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.727 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.773 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.773 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.822 2 DEBUG nova.objects.instance [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 1f692a08-811a-41fb-a8a2-aa936481a256 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.851 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.861 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.861 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Ensure instance console log exists: /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.862 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.862 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.862 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.961 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.962 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.968 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:43:14 np0005473739 nova_compute[259550]: 2025-10-07 14:43:14.969 2 INFO nova.compute.claims [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:43:15 np0005473739 nova_compute[259550]: 2025-10-07 14:43:15.140 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:43:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:43:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/593319723' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:43:15 np0005473739 nova_compute[259550]: 2025-10-07 14:43:15.625 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:43:15 np0005473739 nova_compute[259550]: 2025-10-07 14:43:15.632 2 DEBUG nova.compute.provider_tree [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:43:15 np0005473739 nova_compute[259550]: 2025-10-07 14:43:15.660 2 DEBUG nova.scheduler.client.report [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:43:15 np0005473739 nova_compute[259550]: 2025-10-07 14:43:15.731 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.769s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:15 np0005473739 nova_compute[259550]: 2025-10-07 14:43:15.732 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:43:15 np0005473739 nova_compute[259550]: 2025-10-07 14:43:15.796 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:43:15 np0005473739 nova_compute[259550]: 2025-10-07 14:43:15.797 2 DEBUG nova.network.neutron [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:43:15 np0005473739 nova_compute[259550]: 2025-10-07 14:43:15.848 2 INFO nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:43:15 np0005473739 nova_compute[259550]: 2025-10-07 14:43:15.898 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.132 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.134 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.135 2 INFO nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Creating image(s)
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.160 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.190 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.215 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.219 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.294 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.295 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.296 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.296 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.320 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.323 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:43:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2425: 305 pgs: 305 active+clean; 131 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 573 KiB/s wr, 24 op/s
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.604 2 DEBUG nova.policy [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.687 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.781 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.816 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848181.8144217, 4954e98d-461f-46e8-9f41-c81930c02cff => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.817 2 INFO nova.compute.manager [-] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] VM Stopped (Lifecycle Event)
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.880 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.884 2 DEBUG nova.compute.manager [None req-879938de-38e4-4ab2-8ae4-9d8ae4c5f2ab - - - - - -] [instance: 4954e98d-461f-46e8-9f41-c81930c02cff] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:43:16 np0005473739 nova_compute[259550]: 2025-10-07 14:43:16.934 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Successfully created port: b3a7ccba-7b5f-4e87-af92-723dd36cc703 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:43:17 np0005473739 nova_compute[259550]: 2025-10-07 14:43:17.078 2 DEBUG nova.objects.instance [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 998e2894-41dd-4eb6-9b5c-08a2e5da0acf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:43:17 np0005473739 nova_compute[259550]: 2025-10-07 14:43:17.150 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:43:17 np0005473739 nova_compute[259550]: 2025-10-07 14:43:17.151 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Ensure instance console log exists: /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:43:17 np0005473739 nova_compute[259550]: 2025-10-07 14:43:17.152 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:17 np0005473739 nova_compute[259550]: 2025-10-07 14:43:17.152 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:17 np0005473739 nova_compute[259550]: 2025-10-07 14:43:17.152 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:18 np0005473739 nova_compute[259550]: 2025-10-07 14:43:18.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2426: 305 pgs: 305 active+clean; 131 MiB data, 934 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 572 KiB/s wr, 12 op/s
Oct  7 10:43:19 np0005473739 nova_compute[259550]: 2025-10-07 14:43:19.119 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Successfully created port: 5fb4e372-50c4-49a3-a717-ddc2c99673c7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:43:20 np0005473739 nova_compute[259550]: 2025-10-07 14:43:20.169 2 DEBUG nova.network.neutron [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Successfully updated port: 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:43:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:20.220 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=43, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=42) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:43:20 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:20.221 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:43:20 np0005473739 nova_compute[259550]: 2025-10-07 14:43:20.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:20 np0005473739 nova_compute[259550]: 2025-10-07 14:43:20.232 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:43:20 np0005473739 nova_compute[259550]: 2025-10-07 14:43:20.232 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:43:20 np0005473739 nova_compute[259550]: 2025-10-07 14:43:20.232 2 DEBUG nova.network.neutron [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:43:20 np0005473739 nova_compute[259550]: 2025-10-07 14:43:20.355 2 DEBUG nova.compute.manager [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:20 np0005473739 nova_compute[259550]: 2025-10-07 14:43:20.359 2 DEBUG nova.compute.manager [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Refreshing instance network info cache due to event network-changed-8e464ebe-1be0-4a3a-b8df-c088ec663aa2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:43:20 np0005473739 nova_compute[259550]: 2025-10-07 14:43:20.359 2 DEBUG oslo_concurrency.lockutils [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:43:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2427: 305 pgs: 305 active+clean; 192 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.6 MiB/s wr, 42 op/s
Oct  7 10:43:21 np0005473739 podman[399022]: 2025-10-07 14:43:21.060301438 +0000 UTC m=+0.047772112 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:43:21 np0005473739 podman[399023]: 2025-10-07 14:43:21.103783573 +0000 UTC m=+0.084370944 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  7 10:43:21 np0005473739 nova_compute[259550]: 2025-10-07 14:43:21.504 2 DEBUG nova.network.neutron [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:43:21 np0005473739 nova_compute[259550]: 2025-10-07 14:43:21.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2428: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct  7 10:43:22 np0005473739 nova_compute[259550]: 2025-10-07 14:43:22.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:43:22
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.data', 'vms', 'images', 'cephfs.cephfs.meta', 'volumes', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'backups', 'default.rgw.control']
Oct  7 10:43:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:43:22 np0005473739 nova_compute[259550]: 2025-10-07 14:43:22.756 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Successfully updated port: b3a7ccba-7b5f-4e87-af92-723dd36cc703 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:43:22 np0005473739 nova_compute[259550]: 2025-10-07 14:43:22.845 2 DEBUG nova.compute.manager [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:22 np0005473739 nova_compute[259550]: 2025-10-07 14:43:22.846 2 DEBUG nova.compute.manager [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing instance network info cache due to event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:43:22 np0005473739 nova_compute[259550]: 2025-10-07 14:43:22.846 2 DEBUG oslo_concurrency.lockutils [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:43:22 np0005473739 nova_compute[259550]: 2025-10-07 14:43:22.846 2 DEBUG oslo_concurrency.lockutils [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:43:22 np0005473739 nova_compute[259550]: 2025-10-07 14:43:22.846 2 DEBUG nova.network.neutron [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing network info cache for port b3a7ccba-7b5f-4e87-af92-723dd36cc703 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.018 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:43:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:43:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:43:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:43:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:43:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:43:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:43:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:43:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:43:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:43:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.122 2 DEBUG nova.network.neutron [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Updating instance_info_cache with network_info: [{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.216 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.216 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Instance network_info: |[{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.217 2 DEBUG oslo_concurrency.lockutils [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.217 2 DEBUG nova.network.neutron [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Refreshing network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.221 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Start _get_guest_xml network_info=[{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.222 2 DEBUG nova.network.neutron [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.229 2 WARNING nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.235 2 DEBUG nova.virt.libvirt.host [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.236 2 DEBUG nova.virt.libvirt.host [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.240 2 DEBUG nova.virt.libvirt.host [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.240 2 DEBUG nova.virt.libvirt.host [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.241 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.241 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.241 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.241 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.242 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.242 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.242 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.242 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.242 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.243 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.243 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.243 2 DEBUG nova.virt.hardware [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.246 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:43:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3653239994' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.685 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.705 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.710 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.827 2 DEBUG nova.network.neutron [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:23 np0005473739 nova_compute[259550]: 2025-10-07 14:43:23.871 2 DEBUG oslo_concurrency.lockutils [req-04542ca0-a889-476d-bd31-99b7bc2c9c2a req-69a7b8a9-51c1-48a9-aaf2-fe617328a907 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:43:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:43:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4249436716' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.132 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.422s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.134 2 DEBUG nova.virt.libvirt.vif [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1037921195',display_name='tempest-TestNetworkBasicOps-server-1037921195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1037921195',id=130,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtzetRjoFkj/DFDF9WAwZ04LCKLWqJ+1zvvsm0AAVovAYiR1b+Pd5gjAzqKDLKwmDA6OpvtCVvIhI+zBY1NadamdON/n6P9hhZKAqnvip6hxYov7U6qVbPGlxpeUTPRYA==',key_name='tempest-TestNetworkBasicOps-2019301416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-sa2bmdzi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:15Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=998e2894-41dd-4eb6-9b5c-08a2e5da0acf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.134 2 DEBUG nova.network.os_vif_util [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.135 2 DEBUG nova.network.os_vif_util [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.137 2 DEBUG nova.objects.instance [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 998e2894-41dd-4eb6-9b5c-08a2e5da0acf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.211 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  <uuid>998e2894-41dd-4eb6-9b5c-08a2e5da0acf</uuid>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  <name>instance-00000082</name>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-1037921195</nova:name>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:43:23</nova:creationTime>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <nova:port uuid="8e464ebe-1be0-4a3a-b8df-c088ec663aa2">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <entry name="serial">998e2894-41dd-4eb6-9b5c-08a2e5da0acf</entry>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <entry name="uuid">998e2894-41dd-4eb6-9b5c-08a2e5da0acf</entry>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:b7:10:26"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <target dev="tap8e464ebe-1b"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/console.log" append="off"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:43:24 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:43:24 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:43:24 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:43:24 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.212 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Preparing to wait for external event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.212 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.213 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.213 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.214 2 DEBUG nova.virt.libvirt.vif [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1037921195',display_name='tempest-TestNetworkBasicOps-server-1037921195',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1037921195',id=130,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtzetRjoFkj/DFDF9WAwZ04LCKLWqJ+1zvvsm0AAVovAYiR1b+Pd5gjAzqKDLKwmDA6OpvtCVvIhI+zBY1NadamdON/n6P9hhZKAqnvip6hxYov7U6qVbPGlxpeUTPRYA==',key_name='tempest-TestNetworkBasicOps-2019301416',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-sa2bmdzi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:15Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=998e2894-41dd-4eb6-9b5c-08a2e5da0acf,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": 
{}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.214 2 DEBUG nova.network.os_vif_util [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.215 2 DEBUG nova.network.os_vif_util [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.215 2 DEBUG os_vif [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.216 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.217 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8e464ebe-1b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8e464ebe-1b, col_values=(('external_ids', {'iface-id': '8e464ebe-1be0-4a3a-b8df-c088ec663aa2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b7:10:26', 'vm-uuid': '998e2894-41dd-4eb6-9b5c-08a2e5da0acf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:24 np0005473739 NetworkManager[44949]: <info>  [1759848204.2233] manager: (tap8e464ebe-1b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/569)
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.224 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.232 2 INFO os_vif [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b')#033[00m
Oct  7 10:43:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2429: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 3.5 MiB/s wr, 54 op/s
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.564 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.565 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.565 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:b7:10:26, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.566 2 INFO nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Using config drive#033[00m
Oct  7 10:43:24 np0005473739 nova_compute[259550]: 2025-10-07 14:43:24.590 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.062 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Successfully updated port: 5fb4e372-50c4-49a3-a717-ddc2c99673c7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.193 2 INFO nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Creating config drive at /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.197 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo3v9vf1l execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.297 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.298 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.298 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.349 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpo3v9vf1l" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.374 2 DEBUG nova.storage.rbd_utils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.379 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.459 2 DEBUG nova.compute.manager [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-changed-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.460 2 DEBUG nova.compute.manager [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing instance network info cache due to event network-changed-5fb4e372-50c4-49a3-a717-ddc2c99673c7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.461 2 DEBUG oslo_concurrency.lockutils [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.464 2 DEBUG nova.network.neutron [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Updated VIF entry in instance network info cache for port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.465 2 DEBUG nova.network.neutron [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Updating instance_info_cache with network_info: [{"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.522 2 DEBUG oslo_concurrency.lockutils [req-35464970-6c5b-48f6-9a84-74a2d130e115 req-b469562a-651e-4f0d-9645-bc498167b193 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-998e2894-41dd-4eb6-9b5c-08a2e5da0acf" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.618 2 DEBUG oslo_concurrency.processutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config 998e2894-41dd-4eb6-9b5c-08a2e5da0acf_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.240s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.620 2 INFO nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Deleting local config drive /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf/disk.config because it was imported into RBD.#033[00m
Oct  7 10:43:25 np0005473739 kernel: tap8e464ebe-1b: entered promiscuous mode
Oct  7 10:43:25 np0005473739 NetworkManager[44949]: <info>  [1759848205.6921] manager: (tap8e464ebe-1b): new Tun device (/org/freedesktop/NetworkManager/Devices/570)
Oct  7 10:43:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:25Z|01420|binding|INFO|Claiming lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for this chassis.
Oct  7 10:43:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:25Z|01421|binding|INFO|8e464ebe-1be0-4a3a-b8df-c088ec663aa2: Claiming fa:16:3e:b7:10:26 10.100.0.12
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:25Z|01422|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 ovn-installed in OVS
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:25Z|01423|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 up in Southbound
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.716 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:10:26 10.100.0.12'], port_security=['fa:16:3e:b7:10:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '998e2894-41dd-4eb6-9b5c-08a2e5da0acf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aea4318-48a4-451b-b36b-e364946c1859', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '7', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.231'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d04921d-dca2-4310-91f6-b9ea410f8b20, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8e464ebe-1be0-4a3a-b8df-c088ec663aa2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.717 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 in datapath 7aea4318-48a4-451b-b36b-e364946c1859 bound to our chassis#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.719 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7aea4318-48a4-451b-b36b-e364946c1859#033[00m
Oct  7 10:43:25 np0005473739 systemd-udevd[399199]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.731 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0aff0d62-cc95-4cb5-84d0-3752fcd8ff61]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.732 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7aea4318-41 in ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.734 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7aea4318-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.735 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ed1c3f0b-cd9e-4b74-8c71-fb592c868a0c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.736 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ca8ae5f9-a51b-4b64-8c40-47ac24b99876]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 NetworkManager[44949]: <info>  [1759848205.7406] device (tap8e464ebe-1b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:43:25 np0005473739 NetworkManager[44949]: <info>  [1759848205.7416] device (tap8e464ebe-1b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:43:25 np0005473739 systemd-machined[214580]: New machine qemu-162-instance-00000082.
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.749 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[73919b5d-0e68-4f9c-a100-05a094e2a89a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 systemd[1]: Started Virtual Machine qemu-162-instance-00000082.
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.766 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9328a19-ca09-46e4-a8bd-7af469c0d18b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.799 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b87a705c-9860-4e14-9566-7ba8b393f67a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.804 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1aa84e8-5569-4fa7-96f2-1c50288c8189]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 NetworkManager[44949]: <info>  [1759848205.8048] manager: (tap7aea4318-40): new Veth device (/org/freedesktop/NetworkManager/Devices/571)
Oct  7 10:43:25 np0005473739 systemd-udevd[399204]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.837 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5ba26be4-57c0-4e14-85fb-ea830c980e2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.840 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f44338c4-a9b3-41fb-bf6b-afc46361a96c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 NetworkManager[44949]: <info>  [1759848205.8655] device (tap7aea4318-40): carrier: link connected
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.872 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e76c14-534f-4dbf-8bc2-b19a0b76952c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 nova_compute[259550]: 2025-10-07 14:43:25.872 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.889 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[63a42d49-8a6f-4452-a476-1080f948a797]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aea4318-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 408], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878943, 'reachable_time': 40799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399233, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.907 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e08437e8-af4e-4970-825a-653053b33c27]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:73fd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 878943, 'tstamp': 878943}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399234, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.924 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c7aec9a-d448-485f-852d-825a0ba9149c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7aea4318-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:73:fd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 2, 'rx_bytes': 196, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 408], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878943, 'reachable_time': 40799, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 399235, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:25.960 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f4d01440-a950-40c0-aac8-da8d930b7c9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.023 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a0d6912-a0e3-43eb-ba90-56c73eed9ee8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.024 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aea4318-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.025 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7aea4318-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:26 np0005473739 nova_compute[259550]: 2025-10-07 14:43:26.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:26 np0005473739 kernel: tap7aea4318-40: entered promiscuous mode
Oct  7 10:43:26 np0005473739 NetworkManager[44949]: <info>  [1759848206.0281] manager: (tap7aea4318-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/572)
Oct  7 10:43:26 np0005473739 nova_compute[259550]: 2025-10-07 14:43:26.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.036 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7aea4318-40, col_values=(('external_ids', {'iface-id': '51e9d814-5370-44c9-809c-78ed73cc4b32'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:26 np0005473739 nova_compute[259550]: 2025-10-07 14:43:26.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:26Z|01424|binding|INFO|Releasing lport 51e9d814-5370-44c9-809c-78ed73cc4b32 from this chassis (sb_readonly=0)
Oct  7 10:43:26 np0005473739 nova_compute[259550]: 2025-10-07 14:43:26.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:26 np0005473739 nova_compute[259550]: 2025-10-07 14:43:26.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.052 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.052 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b8a27d2-7390-499e-af0e-436bfea7a677]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.054 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-7aea4318-48a4-451b-b36b-e364946c1859
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/7aea4318-48a4-451b-b36b-e364946c1859.pid.haproxy
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 7aea4318-48a4-451b-b36b-e364946c1859
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:43:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:26.054 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'env', 'PROCESS_TAG=haproxy-7aea4318-48a4-451b-b36b-e364946c1859', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7aea4318-48a4-451b-b36b-e364946c1859.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:43:26 np0005473739 podman[399267]: 2025-10-07 14:43:26.463695312 +0000 UTC m=+0.089330046 container create 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:43:26 np0005473739 podman[399267]: 2025-10-07 14:43:26.397978075 +0000 UTC m=+0.023612829 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:43:26 np0005473739 systemd[1]: Started libpod-conmon-8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb.scope.
Oct  7 10:43:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2430: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 3.5 MiB/s wr, 58 op/s
Oct  7 10:43:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:43:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc5e3c38f58495ce9e1ded90bc2192e137de291ef67b1de2c7c83ee6a5d2171d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:26 np0005473739 podman[399267]: 2025-10-07 14:43:26.562469967 +0000 UTC m=+0.188104711 container init 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:43:26 np0005473739 podman[399267]: 2025-10-07 14:43:26.572495193 +0000 UTC m=+0.198129927 container start 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:43:26 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [NOTICE]   (399328) : New worker (399330) forked
Oct  7 10:43:26 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [NOTICE]   (399328) : Loading success.
Oct  7 10:43:26 np0005473739 nova_compute[259550]: 2025-10-07 14:43:26.988 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848206.9881957, 998e2894-41dd-4eb6-9b5c-08a2e5da0acf => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:43:26 np0005473739 nova_compute[259550]: 2025-10-07 14:43:26.989 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] VM Started (Lifecycle Event)#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.041 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.046 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848206.9884768, 998e2894-41dd-4eb6-9b5c-08a2e5da0acf => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.046 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.118 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.122 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.164 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:43:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:27.223 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '43'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.607 2 DEBUG nova.compute.manager [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.608 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.608 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.608 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.608 2 DEBUG nova.compute.manager [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Processing event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.609 2 DEBUG nova.compute.manager [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.609 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.609 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.609 2 DEBUG oslo_concurrency.lockutils [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.609 2 DEBUG nova.compute.manager [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] No waiting events found dispatching network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.610 2 WARNING nova.compute.manager [req-9ddc1b7d-7809-4bb5-b13f-fd0e307374be req-843a9556-b417-4382-ae8b-45f3c8cd3280 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received unexpected event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.610 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.613 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848207.6135912, 998e2894-41dd-4eb6-9b5c-08a2e5da0acf => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.614 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.615 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.618 2 INFO nova.virt.libvirt.driver [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Instance spawned successfully.#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.618 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.695 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.700 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.700 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.701 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.701 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.702 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.702 2 DEBUG nova.virt.libvirt.driver [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:27 np0005473739 nova_compute[259550]: 2025-10-07 14:43:27.706 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.025 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.112 2 INFO nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Took 11.98 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.112 2 DEBUG nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:43:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.192 2 INFO nova.compute.manager [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Took 13.25 seconds to build instance.#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.212 2 DEBUG nova.network.neutron [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.223 2 DEBUG oslo_concurrency.lockutils [None req-fce985f5-1b78-426e-91b0-9238f6cbf53f 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.249 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.250 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance network_info: |[{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": 
null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.250 2 DEBUG oslo_concurrency.lockutils [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.251 2 DEBUG nova.network.neutron [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing network info cache for port 5fb4e372-50c4-49a3-a717-ddc2c99673c7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.254 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Start _get_guest_xml network_info=[{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, 
"physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.259 2 WARNING nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.265 2 DEBUG nova.virt.libvirt.host [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.265 2 DEBUG nova.virt.libvirt.host [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.271 2 DEBUG nova.virt.libvirt.host [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.272 2 DEBUG nova.virt.libvirt.host [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.272 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.272 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.273 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.273 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.273 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.274 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.274 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.274 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.274 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.275 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.275 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.275 2 DEBUG nova.virt.hardware [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.278 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2431: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 3.0 MiB/s wr, 46 op/s
Oct  7 10:43:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:43:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3798614902' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.802 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.826 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:28 np0005473739 nova_compute[259550]: 2025-10-07 14:43:28.829 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:43:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1707179847' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.296 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.298 2 DEBUG nova.virt.libvirt.vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:13Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.299 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.300 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.301 2 DEBUG nova.virt.libvirt.vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:13Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.301 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.302 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.303 2 DEBUG nova.objects.instance [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 1f692a08-811a-41fb-a8a2-aa936481a256 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.330 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  <uuid>1f692a08-811a-41fb-a8a2-aa936481a256</uuid>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  <name>instance-00000081</name>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-589430566</nova:name>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:43:28</nova:creationTime>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <nova:port uuid="b3a7ccba-7b5f-4e87-af92-723dd36cc703">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <nova:port uuid="5fb4e372-50c4-49a3-a717-ddc2c99673c7">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe5c:bb5e" ipVersion="6"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <entry name="serial">1f692a08-811a-41fb-a8a2-aa936481a256</entry>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <entry name="uuid">1f692a08-811a-41fb-a8a2-aa936481a256</entry>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/1f692a08-811a-41fb-a8a2-aa936481a256_disk">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/1f692a08-811a-41fb-a8a2-aa936481a256_disk.config">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:e6:0b:88"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <target dev="tapb3a7ccba-7b"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:5c:bb:5e"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <target dev="tap5fb4e372-50"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/console.log" append="off"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:43:29 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:43:29 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:43:29 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:43:29 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.331 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Preparing to wait for external event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.332 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.332 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.332 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.332 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Preparing to wait for external event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.333 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.333 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.333 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.334 2 DEBUG nova.virt.libvirt.vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:13Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.334 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.335 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.336 2 DEBUG os_vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.337 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.340 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3a7ccba-7b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.341 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3a7ccba-7b, col_values=(('external_ids', {'iface-id': 'b3a7ccba-7b5f-4e87-af92-723dd36cc703', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:0b:88', 'vm-uuid': '1f692a08-811a-41fb-a8a2-aa936481a256'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.342 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:29 np0005473739 NetworkManager[44949]: <info>  [1759848209.3433] manager: (tapb3a7ccba-7b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/573)
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.345 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.350 2 INFO os_vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b')#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.351 2 DEBUG nova.virt.libvirt.vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:13Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.351 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.352 2 DEBUG nova.network.os_vif_util [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.352 2 DEBUG os_vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.353 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.354 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.356 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5fb4e372-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.356 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5fb4e372-50, col_values=(('external_ids', {'iface-id': '5fb4e372-50c4-49a3-a717-ddc2c99673c7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:5c:bb:5e', 'vm-uuid': '1f692a08-811a-41fb-a8a2-aa936481a256'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:29 np0005473739 NetworkManager[44949]: <info>  [1759848209.3587] manager: (tap5fb4e372-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/574)
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.366 2 INFO os_vif [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50')#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.452 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.452 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.453 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:e6:0b:88, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.453 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:5c:bb:5e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.453 2 INFO nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Using config drive#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.483 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.513 2 DEBUG nova.network.neutron [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updated VIF entry in instance network info cache for port 5fb4e372-50c4-49a3-a717-ddc2c99673c7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.515 2 DEBUG nova.network.neutron [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:29 np0005473739 nova_compute[259550]: 2025-10-07 14:43:29.690 2 DEBUG oslo_concurrency.lockutils [req-284e9a51-4a1f-49a0-8c10-dccee89fea9f req-43acaa58-2967-4e8e-b04e-d0e9b49458b0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.046 2 INFO nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Creating config drive at /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.052 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdk21nyc execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.221 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxdk21nyc" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.258 2 DEBUG nova.storage.rbd_utils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 1f692a08-811a-41fb-a8a2-aa936481a256_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.263 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config 1f692a08-811a-41fb-a8a2-aa936481a256_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.465 2 DEBUG oslo_concurrency.processutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config 1f692a08-811a-41fb-a8a2-aa936481a256_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.201s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.467 2 INFO nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Deleting local config drive /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256/disk.config because it was imported into RBD.#033[00m
Oct  7 10:43:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2432: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 923 KiB/s rd, 3.0 MiB/s wr, 83 op/s
Oct  7 10:43:30 np0005473739 kernel: tapb3a7ccba-7b: entered promiscuous mode
Oct  7 10:43:30 np0005473739 NetworkManager[44949]: <info>  [1759848210.5246] manager: (tapb3a7ccba-7b): new Tun device (/org/freedesktop/NetworkManager/Devices/575)
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.529 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:30Z|01425|binding|INFO|Claiming lport b3a7ccba-7b5f-4e87-af92-723dd36cc703 for this chassis.
Oct  7 10:43:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:30Z|01426|binding|INFO|b3a7ccba-7b5f-4e87-af92-723dd36cc703: Claiming fa:16:3e:e6:0b:88 10.100.0.7
Oct  7 10:43:30 np0005473739 NetworkManager[44949]: <info>  [1759848210.5442] manager: (tap5fb4e372-50): new Tun device (/org/freedesktop/NetworkManager/Devices/576)
Oct  7 10:43:30 np0005473739 kernel: tap5fb4e372-50: entered promiscuous mode
Oct  7 10:43:30 np0005473739 systemd-udevd[399478]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:43:30 np0005473739 systemd-udevd[399479]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:30Z|01427|binding|INFO|Setting lport b3a7ccba-7b5f-4e87-af92-723dd36cc703 ovn-installed in OVS
Oct  7 10:43:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:30Z|01428|if_status|INFO|Dropped 8 log messages in last 165 seconds (most recently, 165 seconds ago) due to excessive rate
Oct  7 10:43:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:30Z|01429|if_status|INFO|Not updating pb chassis for 5fb4e372-50c4-49a3-a717-ddc2c99673c7 now as sb is readonly
Oct  7 10:43:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:30Z|01430|binding|INFO|Claiming lport 5fb4e372-50c4-49a3-a717-ddc2c99673c7 for this chassis.
Oct  7 10:43:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:30Z|01431|binding|INFO|5fb4e372-50c4-49a3-a717-ddc2c99673c7: Claiming fa:16:3e:5c:bb:5e 2001:db8::f816:3eff:fe5c:bb5e
Oct  7 10:43:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:30Z|01432|binding|INFO|Setting lport b3a7ccba-7b5f-4e87-af92-723dd36cc703 up in Southbound
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.576 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0b:88 10.100.0.7'], port_security=['fa:16:3e:e6:0b:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1f692a08-811a-41fb-a8a2-aa936481a256', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262bb8b3-c881-4ed3-8240-c435878fb605, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b3a7ccba-7b5f-4e87-af92-723dd36cc703) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:43:30 np0005473739 NetworkManager[44949]: <info>  [1759848210.5773] device (tap5fb4e372-50): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.577 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b3a7ccba-7b5f-4e87-af92-723dd36cc703 in datapath bb059ee7-3091-491e-8da2-c9bd1da0f922 bound to our chassis#033[00m
Oct  7 10:43:30 np0005473739 NetworkManager[44949]: <info>  [1759848210.5781] device (tapb3a7ccba-7b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:43:30 np0005473739 NetworkManager[44949]: <info>  [1759848210.5787] device (tap5fb4e372-50): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:43:30 np0005473739 NetworkManager[44949]: <info>  [1759848210.5790] device (tapb3a7ccba-7b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.579 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb059ee7-3091-491e-8da2-c9bd1da0f922#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:30Z|01433|binding|INFO|Setting lport 5fb4e372-50c4-49a3-a717-ddc2c99673c7 ovn-installed in OVS
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:30 np0005473739 systemd-machined[214580]: New machine qemu-163-instance-00000081.
Oct  7 10:43:30 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:30Z|01434|binding|INFO|Setting lport 5fb4e372-50c4-49a3-a717-ddc2c99673c7 up in Southbound
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.600 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6b95b19e-0f31-42ed-83f6-c58da9f705d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.603 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:bb:5e 2001:db8::f816:3eff:fe5c:bb5e'], port_security=['fa:16:3e:5c:bb:5e 2001:db8::f816:3eff:fe5c:bb5e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5c:bb5e/64', 'neutron:device_id': '1f692a08-811a-41fb-a8a2-aa936481a256', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36c3782d-89d5-4d69-ae40-86969f172913, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb4e372-50c4-49a3-a717-ddc2c99673c7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:43:30 np0005473739 systemd[1]: Started Virtual Machine qemu-163-instance-00000081.
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.636 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[fbd6c16b-195c-44e8-9cc7-a8b42fd945bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.639 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[59723376-4700-4f74-ae9d-e12b80779074]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.666 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4805cf07-e0ce-42c4-9cb6-7770aee1dc47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.694 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9cb512b1-95a8-492b-bc9d-3e16c4fa9037]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb059ee7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:3b:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874469, 'reachable_time': 30371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399496, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.714 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[86891c32-5720-4a0d-bfe3-99b8ef15184d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb059ee7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874481, 'tstamp': 874481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399498, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb059ee7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874484, 'tstamp': 874484}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399498, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.717 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb059ee7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.720 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb059ee7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.721 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.721 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb059ee7-30, col_values=(('external_ids', {'iface-id': '770f7899-3ca8-4bf3-9f06-6b8c25522fc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.722 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.723 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb4e372-50c4-49a3-a717-ddc2c99673c7 in datapath e6e769bc-2b33-4210-8062-fbc8d16f9127 unbound from our chassis#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.724 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e6e769bc-2b33-4210-8062-fbc8d16f9127#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.744 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[56c878ff-2136-440f-aae4-1dda040b8918]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.793 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bf4f208c-3528-4d05-b60d-b50c664507b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.797 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fb622c-7cc2-4efc-81ea-e6f56c9e1444]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.829 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c214d935-c7b1-439b-9c7c-abddb9b7025f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.847 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[42d509ee-5500-4076-ad4e-88449ebf75fc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6e769bc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ce:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 19, 'tx_packets': 4, 'rx_bytes': 1802, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 19, 'tx_packets': 4, 'rx_bytes': 1802, 'tx_bytes': 312, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874559, 'reachable_time': 36097, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 19, 'inoctets': 1536, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 19, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1536, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 19, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399504, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.865 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7fca998e-0bef-4c57-92b6-9d9ba875ba98]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape6e769bc-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874571, 'tstamp': 874571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 399505, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.867 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6e769bc-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:30 np0005473739 nova_compute[259550]: 2025-10-07 14:43:30.870 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.870 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6e769bc-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.871 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.871 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape6e769bc-20, col_values=(('external_ids', {'iface-id': '21cca283-5f07-4e28-8ee2-a58e7356e156'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:30.871 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.328 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.328 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.328 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.329 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.329 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.330 2 INFO nova.compute.manager [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Terminating instance#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.331 2 DEBUG nova.compute.manager [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:43:31 np0005473739 kernel: tap8e464ebe-1b (unregistering): left promiscuous mode
Oct  7 10:43:31 np0005473739 NetworkManager[44949]: <info>  [1759848211.3722] device (tap8e464ebe-1b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:31Z|01435|binding|INFO|Releasing lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 from this chassis (sb_readonly=0)
Oct  7 10:43:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:31Z|01436|binding|INFO|Setting lport 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 down in Southbound
Oct  7 10:43:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:31Z|01437|binding|INFO|Removing iface tap8e464ebe-1b ovn-installed in OVS
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.388 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.395 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b7:10:26 10.100.0.12'], port_security=['fa:16:3e:b7:10:26 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '998e2894-41dd-4eb6-9b5c-08a2e5da0acf', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7aea4318-48a4-451b-b36b-e364946c1859', 'neutron:port_capabilities': '', 'neutron:port_name': 'tempest-TestNetworkBasicOps-843309094', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '9', 'neutron:security_group_ids': '46186043-39e3-4b2d-9425-b7b9cc0cd458', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.231', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=9d04921d-dca2-4310-91f6-b9ea410f8b20, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=8e464ebe-1be0-4a3a-b8df-c088ec663aa2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.396 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 8e464ebe-1be0-4a3a-b8df-c088ec663aa2 in datapath 7aea4318-48a4-451b-b36b-e364946c1859 unbound from our chassis#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.398 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7aea4318-48a4-451b-b36b-e364946c1859, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.398 2 DEBUG nova.compute.manager [req-030d67b5-f84c-44ab-ba46-4b383d875a7f req-8d90cfaf-b444-4b64-a545-36c73ea9fd68 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.399 2 DEBUG oslo_concurrency.lockutils [req-030d67b5-f84c-44ab-ba46-4b383d875a7f req-8d90cfaf-b444-4b64-a545-36c73ea9fd68 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.399 2 DEBUG oslo_concurrency.lockutils [req-030d67b5-f84c-44ab-ba46-4b383d875a7f req-8d90cfaf-b444-4b64-a545-36c73ea9fd68 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.400 2 DEBUG oslo_concurrency.lockutils [req-030d67b5-f84c-44ab-ba46-4b383d875a7f req-8d90cfaf-b444-4b64-a545-36c73ea9fd68 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.400 2 DEBUG nova.compute.manager [req-030d67b5-f84c-44ab-ba46-4b383d875a7f req-8d90cfaf-b444-4b64-a545-36c73ea9fd68 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Processing event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.400 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.399 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fa9fe960-e07a-436d-a7dc-1254273e8a16]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.403 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 namespace which is not needed anymore#033[00m
Oct  7 10:43:31 np0005473739 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Deactivated successfully.
Oct  7 10:43:31 np0005473739 systemd[1]: machine-qemu\x2d162\x2dinstance\x2d00000082.scope: Consumed 4.899s CPU time.
Oct  7 10:43:31 np0005473739 systemd-machined[214580]: Machine qemu-162-instance-00000082 terminated.
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.536 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848211.5357327, 1f692a08-811a-41fb-a8a2-aa936481a256 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.536 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] VM Started (Lifecycle Event)#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.553 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:43:31 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [NOTICE]   (399328) : haproxy version is 2.8.14-c23fe91
Oct  7 10:43:31 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [NOTICE]   (399328) : path to executable is /usr/sbin/haproxy
Oct  7 10:43:31 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [WARNING]  (399328) : Exiting Master process...
Oct  7 10:43:31 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [WARNING]  (399328) : Exiting Master process...
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:31 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [ALERT]    (399328) : Current worker (399330) exited with code 143 (Terminated)
Oct  7 10:43:31 np0005473739 neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859[399320]: [WARNING]  (399328) : All workers exited. Exiting... (0)
Oct  7 10:43:31 np0005473739 systemd[1]: libpod-8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb.scope: Deactivated successfully.
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.559 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.561 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848211.5359128, 1f692a08-811a-41fb-a8a2-aa936481a256 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.561 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:43:31 np0005473739 podman[399569]: 2025-10-07 14:43:31.56629604 +0000 UTC m=+0.055912677 container died 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.570 2 INFO nova.virt.libvirt.driver [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Instance destroyed successfully.#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.570 2 DEBUG nova.objects.instance [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 998e2894-41dd-4eb6-9b5c-08a2e5da0acf obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.585 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.587 2 DEBUG nova.virt.libvirt.vif [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:43:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1037921195',display_name='tempest-TestNetworkBasicOps-server-1037921195',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1037921195',id=130,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLtzetRjoFkj/DFDF9WAwZ04LCKLWqJ+1zvvsm0AAVovAYiR1b+Pd5gjAzqKDLKwmDA6OpvtCVvIhI+zBY1NadamdON/n6P9hhZKAqnvip6hxYov7U6qVbPGlxpeUTPRYA==',key_name='tempest-TestNetworkBasicOps-2019301416',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:28Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-sa2bmdzi',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:43:28Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=998e2894-41dd-4eb6-9b5c-08a2e5da0acf,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.588 2 DEBUG nova.network.os_vif_util [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "address": "fa:16:3e:b7:10:26", "network": {"id": "7aea4318-48a4-451b-b36b-e364946c1859", "bridge": "br-int", "label": "tempest-network-smoke--1193440101", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.231", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8e464ebe-1b", "ovs_interfaceid": "8e464ebe-1be0-4a3a-b8df-c088ec663aa2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.588 2 DEBUG nova.network.os_vif_util [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.589 2 DEBUG os_vif [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.595 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8e464ebe-1b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb-userdata-shm.mount: Deactivated successfully.
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-dc5e3c38f58495ce9e1ded90bc2192e137de291ef67b1de2c7c83ee6a5d2171d-merged.mount: Deactivated successfully.
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.608 2 INFO os_vif [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b7:10:26,bridge_name='br-int',has_traffic_filtering=True,id=8e464ebe-1be0-4a3a-b8df-c088ec663aa2,network=Network(7aea4318-48a4-451b-b36b-e364946c1859),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap8e464ebe-1b')#033[00m
Oct  7 10:43:31 np0005473739 podman[399569]: 2025-10-07 14:43:31.622812372 +0000 UTC m=+0.112429029 container cleanup 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:43:31 np0005473739 systemd[1]: libpod-conmon-8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb.scope: Deactivated successfully.
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.632 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.660 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:43:31 np0005473739 podman[399621]: 2025-10-07 14:43:31.689203058 +0000 UTC m=+0.043943500 container remove 8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.695 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[10a5a678-f295-4a07-8579-7a14bfb5c51e]: (4, ('Tue Oct  7 02:43:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 (8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb)\n8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb\nTue Oct  7 02:43:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 (8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb)\n8852128e1d2f57e3c12ad3a1525687fb7ea55264698488b6575d93ce75bd97fb\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.697 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5b41ad7d-5641-423d-b6e9-002f0a94041b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.698 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7aea4318-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.699 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:31 np0005473739 kernel: tap7aea4318-40: left promiscuous mode
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.704 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1fa88831-1303-4ae3-bfc9-ff1204569016]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:31 np0005473739 nova_compute[259550]: 2025-10-07 14:43:31.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.733 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ca6333b3-d497-40b5-a7ad-6323ddd06a89]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e0805193-3a4f-4b34-bfc5-e1fd228a0776]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.750 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5e33cff6-4118-4850-9207-2b058e0461c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 878935, 'reachable_time': 19767, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 399637, 'error': None, 'target': 'ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.752 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7aea4318-48a4-451b-b36b-e364946c1859 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:43:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:31.752 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[e204ee96-8e04-46e5-bb0b-85c06fee4fea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:31 np0005473739 systemd[1]: run-netns-ovnmeta\x2d7aea4318\x2d48a4\x2d451b\x2db36b\x2de364946c1859.mount: Deactivated successfully.
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.039 2 INFO nova.virt.libvirt.driver [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Deleting instance files /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf_del#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.040 2 INFO nova.virt.libvirt.driver [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Deletion of /var/lib/nova/instances/998e2894-41dd-4eb6-9b5c-08a2e5da0acf_del complete#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.096 2 INFO nova.compute.manager [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Took 0.76 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.096 2 DEBUG oslo.service.loopingcall [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.097 2 DEBUG nova.compute.manager [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.097 2 DEBUG nova.network.neutron [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.282 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.282 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.299 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.374 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.374 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.384 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.384 2 INFO nova.compute.claims [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2433: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 953 KiB/s wr, 113 op/s
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.520 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0014538743529009571 of space, bias 1.0, pg target 0.43616230587028715 quantized to 32 (current 32)
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:43:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:43:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2741088785' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:43:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:43:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:43:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2741088785' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:43:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:43:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/523965864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.955 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.962 2 DEBUG nova.compute.provider_tree [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:43:32 np0005473739 nova_compute[259550]: 2025-10-07 14:43:32.979 2 DEBUG nova.scheduler.client.report [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.003 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.004 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.024 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.052 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.052 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.078 2 INFO nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.101 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:43:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.202 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.204 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.205 2 INFO nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating image(s)#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.228 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.256 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.284 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.288 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.326 2 DEBUG nova.policy [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '52bb2c10051444f181ee0572525fbe9d', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'ac40ef14492f40768b3852a40da26621', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.343 2 DEBUG nova.network.neutron [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.367 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.368 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.368 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.369 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.393 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.398 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.435 2 INFO nova.compute.manager [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Took 1.34 seconds to deallocate network for instance.#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.489 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.489 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.490 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.490 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.490 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No event matching network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 in dict_keys([('network-vif-plugged', '5fb4e372-50c4-49a3-a717-ddc2c99673c7')]) pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:325#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.491 2 WARNING nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received unexpected event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.491 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.491 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.491 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.492 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.492 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Processing event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.492 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.492 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.493 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.493 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.493 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No waiting events found dispatching network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.493 2 WARNING nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received unexpected event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 for instance with vm_state building and task_state spawning.
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.494 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.494 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.494 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.494 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.495 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] No waiting events found dispatching network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.495 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-vif-unplugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.495 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.495 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.496 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.496 2 DEBUG oslo_concurrency.lockutils [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.496 2 DEBUG nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] No waiting events found dispatching network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.496 2 WARNING nova.compute.manager [req-083f2dae-8981-48d7-a698-5b0adb6eb82d req-e36338f8-7ab6-49cb-a111-f6a18d52b09b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Received unexpected event network-vif-plugged-8e464ebe-1be0-4a3a-b8df-c088ec663aa2 for instance with vm_state active and task_state deleting.
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.497 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance event wait completed in 1 seconds for network-vif-plugged,network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.503 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848213.502678, 1f692a08-811a-41fb-a8a2-aa936481a256 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.504 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] VM Resumed (Lifecycle Event)
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.509 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.520 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.521 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.523 2 INFO nova.virt.libvirt.driver [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance spawned successfully.
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.524 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.536 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.539 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.571 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.572 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.572 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.573 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.573 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.574 2 DEBUG nova.virt.libvirt.driver [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.582 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.614 2 DEBUG oslo_concurrency.processutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.681 2 INFO nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Took 19.72 seconds to spawn the instance on the hypervisor.
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.682 2 DEBUG nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.737 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.339s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.817 2 INFO nova.compute.manager [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Took 21.17 seconds to build instance.
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.826 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] resizing rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.868 2 DEBUG oslo_concurrency.lockutils [None req-b970392d-1fc8-4792-87a9-2d1fc505e054 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 21.362s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.942 2 DEBUG nova.objects.instance [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'migration_context' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.954 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.954 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Ensure instance console log exists: /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.955 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.955 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:43:33 np0005473739 nova_compute[259550]: 2025-10-07 14:43:33.956 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:43:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1135145531' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:43:34 np0005473739 nova_compute[259550]: 2025-10-07 14:43:34.091 2 DEBUG oslo_concurrency.processutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:43:34 np0005473739 nova_compute[259550]: 2025-10-07 14:43:34.096 2 DEBUG nova.compute.provider_tree [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:43:34 np0005473739 nova_compute[259550]: 2025-10-07 14:43:34.110 2 DEBUG nova.scheduler.client.report [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:43:34 np0005473739 nova_compute[259550]: 2025-10-07 14:43:34.131 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.610s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:34 np0005473739 nova_compute[259550]: 2025-10-07 14:43:34.154 2 INFO nova.scheduler.client.report [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 998e2894-41dd-4eb6-9b5c-08a2e5da0acf
Oct  7 10:43:34 np0005473739 nova_compute[259550]: 2025-10-07 14:43:34.229 2 DEBUG oslo_concurrency.lockutils [None req-055d41eb-a0a5-4b7c-9144-c99db6c72ad4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "998e2894-41dd-4eb6-9b5c-08a2e5da0acf" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.901s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:43:34 np0005473739 nova_compute[259550]: 2025-10-07 14:43:34.337 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Successfully created port: 3ac63514-77e7-4d94-a67c-94806ca3b58b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:43:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2434: 305 pgs: 305 active+clean; 211 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.1 MiB/s wr, 164 op/s
Oct  7 10:43:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2435: 305 pgs: 305 active+clean; 213 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 214 op/s
Oct  7 10:43:36 np0005473739 nova_compute[259550]: 2025-10-07 14:43:36.550 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Successfully updated port: 3ac63514-77e7-4d94-a67c-94806ca3b58b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:43:36 np0005473739 nova_compute[259550]: 2025-10-07 14:43:36.566 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:43:36 np0005473739 nova_compute[259550]: 2025-10-07 14:43:36.566 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:43:36 np0005473739 nova_compute[259550]: 2025-10-07 14:43:36.566 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:43:36 np0005473739 nova_compute[259550]: 2025-10-07 14:43:36.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:43:36 np0005473739 nova_compute[259550]: 2025-10-07 14:43:36.683 2 DEBUG nova.compute.manager [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:43:36 np0005473739 nova_compute[259550]: 2025-10-07 14:43:36.683 2 DEBUG nova.compute.manager [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing instance network info cache due to event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:43:36 np0005473739 nova_compute[259550]: 2025-10-07 14:43:36.684 2 DEBUG oslo_concurrency.lockutils [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:43:36 np0005473739 nova_compute[259550]: 2025-10-07 14:43:36.712 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:43:37 np0005473739 podman[399850]: 2025-10-07 14:43:37.07297672 +0000 UTC m=+0.054544971 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 10:43:37 np0005473739 podman[399851]: 2025-10-07 14:43:37.078729253 +0000 UTC m=+0.060413087 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:43:38 np0005473739 nova_compute[259550]: 2025-10-07 14:43:38.026 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2436: 305 pgs: 305 active+clean; 213 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 2.6 MiB/s rd, 1.8 MiB/s wr, 210 op/s
Oct  7 10:43:39 np0005473739 nova_compute[259550]: 2025-10-07 14:43:39.897 2 DEBUG nova.network.neutron [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.124 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.125 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance network_info: |[{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.125 2 DEBUG oslo_concurrency.lockutils [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.125 2 DEBUG nova.network.neutron [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.128 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start _get_guest_xml network_info=[{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.132 2 WARNING nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.137 2 DEBUG nova.virt.libvirt.host [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.137 2 DEBUG nova.virt.libvirt.host [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.140 2 DEBUG nova.virt.libvirt.host [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.141 2 DEBUG nova.virt.libvirt.host [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.143 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.143 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.144 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.145 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.145 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.146 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.146 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.146 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.147 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.147 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.147 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.148 2 DEBUG nova.virt.hardware [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.152 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2437: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.8 MiB/s wr, 256 op/s
Oct  7 10:43:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:43:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3647233733' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.594 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.615 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:40 np0005473739 nova_compute[259550]: 2025-10-07 14:43:40.620 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:43:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3271776861' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.055 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.435s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.057 2 DEBUG nova.virt.libvirt.vif [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIP0AMqBzzYmRrc/gnJzbbMyBAFmuqR5iC2+H5lS9P1NxQlnpTWhcfEteNAmj5N76nsDrvP+kS3KGhT6YiYXIeHex+K2fKyOb9r6ICTnlnIC+U793tuGi+owzMBnIl2+nw==',key_name='tempest-TestShelveInstance-2096115086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:33Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.057 2 DEBUG nova.network.os_vif_util [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.058 2 DEBUG nova.network.os_vif_util [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.059 2 DEBUG nova.objects.instance [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.214 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  <uuid>30241223-64c5-4a88-8ba2-ee340fe6cbd3</uuid>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  <name>instance-00000083</name>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestShelveInstance-server-308957706</nova:name>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:43:40</nova:creationTime>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <nova:user uuid="52bb2c10051444f181ee0572525fbe9d">tempest-TestShelveInstance-703128978-project-member</nova:user>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <nova:project uuid="ac40ef14492f40768b3852a40da26621">tempest-TestShelveInstance-703128978</nova:project>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <nova:port uuid="3ac63514-77e7-4d94-a67c-94806ca3b58b">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <entry name="serial">30241223-64c5-4a88-8ba2-ee340fe6cbd3</entry>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <entry name="uuid">30241223-64c5-4a88-8ba2-ee340fe6cbd3</entry>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:32:1b:50"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <target dev="tap3ac63514-77"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/console.log" append="off"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:43:41 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:43:41 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:43:41 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:43:41 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.216 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Preparing to wait for external event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.216 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.216 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.216 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.217 2 DEBUG nova.virt.libvirt.vif [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIP0AMqBzzYmRrc/gnJzbbMyBAFmuqR5iC2+H5lS9P1NxQlnpTWhcfEteNAmj5N76nsDrvP+kS3KGhT6YiYXIeHex+K2fKyOb9r6ICTnlnIC+U793tuGi+owzMBnIl2+nw==',key_name='tempest-TestShelveInstance-2096115086',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:43:33Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.217 2 DEBUG nova.network.os_vif_util [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.218 2 DEBUG nova.network.os_vif_util [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.218 2 DEBUG os_vif [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.219 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.220 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.223 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ac63514-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.224 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ac63514-77, col_values=(('external_ids', {'iface-id': '3ac63514-77e7-4d94-a67c-94806ca3b58b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:1b:50', 'vm-uuid': '30241223-64c5-4a88-8ba2-ee340fe6cbd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:43:41 np0005473739 NetworkManager[44949]: <info>  [1759848221.2281] manager: (tap3ac63514-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/577)
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.233 2 INFO os_vif [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77')#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.483 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.483 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.484 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No VIF found with MAC fa:16:3e:32:1b:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.484 2 INFO nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Using config drive#033[00m
Oct  7 10:43:41 np0005473739 nova_compute[259550]: 2025-10-07 14:43:41.505 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.496 2 INFO nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating config drive at /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.501 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph08stra8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2438: 305 pgs: 305 active+clean; 213 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 1.8 MiB/s wr, 219 op/s
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.643 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph08stra8" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.674 2 DEBUG nova.storage.rbd_utils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.678 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.714 2 DEBUG nova.network.neutron [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updated VIF entry in instance network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.715 2 DEBUG nova.network.neutron [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.732 2 DEBUG oslo_concurrency.lockutils [req-9b124d71-a07c-449e-b8fe-0ea3dc4fa8f0 req-ab269b09-d91d-47c0-bf6b-4cf9bb42bd2c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.858 2 DEBUG oslo_concurrency.processutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.180s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.859 2 INFO nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deleting local config drive /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config because it was imported into RBD.#033[00m
Oct  7 10:43:42 np0005473739 kernel: tap3ac63514-77: entered promiscuous mode
Oct  7 10:43:42 np0005473739 NetworkManager[44949]: <info>  [1759848222.9116] manager: (tap3ac63514-77): new Tun device (/org/freedesktop/NetworkManager/Devices/578)
Oct  7 10:43:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:42Z|01438|binding|INFO|Claiming lport 3ac63514-77e7-4d94-a67c-94806ca3b58b for this chassis.
Oct  7 10:43:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:42Z|01439|binding|INFO|3ac63514-77e7-4d94-a67c-94806ca3b58b: Claiming fa:16:3e:32:1b:50 10.100.0.11
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.914 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.920 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:1b:50 10.100.0.11'], port_security=['fa:16:3e:32:1b:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '30241223-64c5-4a88-8ba2-ee340fe6cbd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ac40ef14492f40768b3852a40da26621', 'neutron:revision_number': '2', 'neutron:security_group_ids': '56b8c028-3f77-4dba-a2e9-4a1cb7c88d4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c5eb1a-8a6c-4afe-ae4e-424959d231e5, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3ac63514-77e7-4d94-a67c-94806ca3b58b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:43:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.921 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3ac63514-77e7-4d94-a67c-94806ca3b58b in datapath 7c054d6f-68ec-4f0b-9362-221001cc6b67 bound to our chassis#033[00m
Oct  7 10:43:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.922 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c054d6f-68ec-4f0b-9362-221001cc6b67#033[00m
Oct  7 10:43:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1e68a6ef-7a67-441b-b861-5b0d7864b3ad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.938 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c054d6f-61 in ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:43:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.941 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c054d6f-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:43:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.941 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c839e750-94bc-4663-9122-0918ea0a6949]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.942 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba6eec26-3071-46b5-9231-ecc3b5f4adf3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:42Z|01440|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b ovn-installed in OVS
Oct  7 10:43:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:42Z|01441|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b up in Southbound
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.953 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[a4be15e7-51da-4180-a863-743ad1d8a39d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:42 np0005473739 systemd-udevd[400028]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:43:42 np0005473739 systemd-machined[214580]: New machine qemu-164-instance-00000083.
Oct  7 10:43:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:42.967 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[91ac9f56-8053-435b-9bd9-6e7433371011]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:42 np0005473739 NetworkManager[44949]: <info>  [1759848222.9732] device (tap3ac63514-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:43:42 np0005473739 NetworkManager[44949]: <info>  [1759848222.9746] device (tap3ac63514-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:43:42 np0005473739 systemd[1]: Started Virtual Machine qemu-164-instance-00000083.
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.984 2 DEBUG nova.compute.manager [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.984 2 DEBUG nova.compute.manager [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing instance network info cache due to event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.984 2 DEBUG oslo_concurrency.lockutils [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.985 2 DEBUG oslo_concurrency.lockutils [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:43:42 np0005473739 nova_compute[259550]: 2025-10-07 14:43:42.985 2 DEBUG nova.network.neutron [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing network info cache for port b3a7ccba-7b5f-4e87-af92-723dd36cc703 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.006 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0742acfe-1e55-488a-b27f-60b63d8cb65b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.011 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[962ec913-5cb5-44d2-8067-0b6f83f1b59e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:43 np0005473739 NetworkManager[44949]: <info>  [1759848223.0129] manager: (tap7c054d6f-60): new Veth device (/org/freedesktop/NetworkManager/Devices/579)
Oct  7 10:43:43 np0005473739 nova_compute[259550]: 2025-10-07 14:43:43.027 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.046 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b826749c-122b-4c5d-b235-9178e44e80d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.049 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6380e460-f3be-4e10-b4d9-4995d3d50adf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:43 np0005473739 NetworkManager[44949]: <info>  [1759848223.0718] device (tap7c054d6f-60): carrier: link connected
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.077 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[15359c49-2705-4f5c-9e64-a0e4199428d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.095 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a5aca605-a496-4857-85b7-5ab50bf9cae5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c054d6f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:e1:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880663, 'reachable_time': 28101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 400059, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.110 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4f1243b3-c883-4e04-ad7f-b25bf5e63790]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:e109'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 880663, 'tstamp': 880663}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 400060, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.126 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1edbfd2f-5c2d-4722-b4b0-e68d84e87d54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c054d6f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:e1:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 196, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 413], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880663, 'reachable_time': 28101, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 168, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 168, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 400061, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.163 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b147962f-7a4c-49ce-8f0b-99455d7c8b5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.228 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2c506245-5dbf-4e72-9c01-3ac2a45aae4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.230 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c054d6f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.230 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.231 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c054d6f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:43 np0005473739 NetworkManager[44949]: <info>  [1759848223.2336] manager: (tap7c054d6f-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/580)
Oct  7 10:43:43 np0005473739 kernel: tap7c054d6f-60: entered promiscuous mode
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.235 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c054d6f-60, col_values=(('external_ids', {'iface-id': 'bb603baf-bbde-4821-a893-8713cfab0527'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:43:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:43Z|01442|binding|INFO|Releasing lport bb603baf-bbde-4821-a893-8713cfab0527 from this chassis (sb_readonly=0)
Oct  7 10:43:43 np0005473739 nova_compute[259550]: 2025-10-07 14:43:43.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:43 np0005473739 nova_compute[259550]: 2025-10-07 14:43:43.252 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.253 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.255 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[32368f3b-514f-4ccf-bc75-ea46a04ef455]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.256 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-7c054d6f-68ec-4f0b-9362-221001cc6b67
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 7c054d6f-68ec-4f0b-9362-221001cc6b67
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:43:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:43:43.258 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'env', 'PROCESS_TAG=haproxy-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c054d6f-68ec-4f0b-9362-221001cc6b67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:43:43 np0005473739 podman[400133]: 2025-10-07 14:43:43.645049218 +0000 UTC m=+0.066413687 container create d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct  7 10:43:43 np0005473739 systemd[1]: Started libpod-conmon-d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5.scope.
Oct  7 10:43:43 np0005473739 podman[400133]: 2025-10-07 14:43:43.609362619 +0000 UTC m=+0.030727118 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:43:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:43:43 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3d6983957f31550d23f88388bbfaa03a6c047e05d35d4dc2cf9e78281c838558/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:43 np0005473739 podman[400133]: 2025-10-07 14:43:43.745445577 +0000 UTC m=+0.166810066 container init d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:43:43 np0005473739 podman[400133]: 2025-10-07 14:43:43.751909798 +0000 UTC m=+0.173274267 container start d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:43:43 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [NOTICE]   (400152) : New worker (400160) forked
Oct  7 10:43:43 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [NOTICE]   (400152) : Loading success.
Oct  7 10:43:43 np0005473739 nova_compute[259550]: 2025-10-07 14:43:43.834 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848223.832886, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:43:43 np0005473739 nova_compute[259550]: 2025-10-07 14:43:43.835 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Started (Lifecycle Event)#033[00m
Oct  7 10:43:43 np0005473739 nova_compute[259550]: 2025-10-07 14:43:43.868 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:43:43 np0005473739 nova_compute[259550]: 2025-10-07 14:43:43.877 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848223.8333333, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:43:43 np0005473739 nova_compute[259550]: 2025-10-07 14:43:43.878 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:43:43 np0005473739 nova_compute[259550]: 2025-10-07 14:43:43.898 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:43:43 np0005473739 nova_compute[259550]: 2025-10-07 14:43:43.904 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:43:43 np0005473739 nova_compute[259550]: 2025-10-07 14:43:43.926 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:43:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:44Z|01443|binding|INFO|Releasing lport 21cca283-5f07-4e28-8ee2-a58e7356e156 from this chassis (sb_readonly=0)
Oct  7 10:43:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:44Z|01444|binding|INFO|Releasing lport bb603baf-bbde-4821-a893-8713cfab0527 from this chassis (sb_readonly=0)
Oct  7 10:43:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:44Z|01445|binding|INFO|Releasing lport 770f7899-3ca8-4bf3-9f06-6b8c25522fc3 from this chassis (sb_readonly=0)
Oct  7 10:43:44 np0005473739 nova_compute[259550]: 2025-10-07 14:43:44.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2439: 305 pgs: 305 active+clean; 214 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.8 MiB/s wr, 164 op/s
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:43:44 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0c003f2b-afef-4910-973c-4a6210c7053e does not exist
Oct  7 10:43:44 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9b3d8ee3-158c-48c2-809b-ecf993fba6ee does not exist
Oct  7 10:43:44 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ecce9d25-5ee6-4081-8ffa-928d7117b828 does not exist
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:43:44 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.015 2 DEBUG nova.network.neutron [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updated VIF entry in instance network info cache for port b3a7ccba-7b5f-4e87-af92-723dd36cc703. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.015 2 DEBUG nova.network.neutron [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.047 2 DEBUG oslo_concurrency.lockutils [req-a1011205-2c79-4721-bd4d-5ae7f23d7522 req-a62804d1-be80-489e-b1c1-0d656e8dd600 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.076 2 DEBUG nova.compute.manager [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.077 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.077 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.077 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.078 2 DEBUG nova.compute.manager [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Processing event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.078 2 DEBUG nova.compute.manager [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.078 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.078 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.079 2 DEBUG oslo_concurrency.lockutils [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.079 2 DEBUG nova.compute.manager [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] No waiting events found dispatching network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.079 2 WARNING nova.compute.manager [req-55ce368e-8b7e-45a1-999b-cf4eabc711e9 req-4c55df33-8a2e-42d0-aed7-06ccfa7dbd15 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received unexpected event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.080 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.085 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848225.0850458, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.085 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.089 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.097 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance spawned successfully.#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.098 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.105 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.109 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.120 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.121 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.122 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.122 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.122 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.123 2 DEBUG nova.virt.libvirt.driver [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.131 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.189 2 INFO nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Took 11.99 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.189 2 DEBUG nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.259 2 INFO nova.compute.manager [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Took 12.91 seconds to build instance.#033[00m
Oct  7 10:43:45 np0005473739 nova_compute[259550]: 2025-10-07 14:43:45.277 2 DEBUG oslo_concurrency.lockutils [None req-dc02a26b-a92c-40cc-9f5a-4535c9e7d779 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.995s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:43:45 np0005473739 podman[400432]: 2025-10-07 14:43:45.327989671 +0000 UTC m=+0.047928404 container create f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:43:45 np0005473739 systemd[1]: Started libpod-conmon-f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b.scope.
Oct  7 10:43:45 np0005473739 podman[400432]: 2025-10-07 14:43:45.308265537 +0000 UTC m=+0.028204290 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:43:45 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:43:45 np0005473739 podman[400432]: 2025-10-07 14:43:45.44529547 +0000 UTC m=+0.165234223 container init f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 10:43:45 np0005473739 podman[400432]: 2025-10-07 14:43:45.453687613 +0000 UTC m=+0.173626336 container start f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 10:43:45 np0005473739 podman[400432]: 2025-10-07 14:43:45.457965606 +0000 UTC m=+0.177904339 container attach f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 10:43:45 np0005473739 romantic_euclid[400447]: 167 167
Oct  7 10:43:45 np0005473739 systemd[1]: libpod-f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b.scope: Deactivated successfully.
Oct  7 10:43:45 np0005473739 podman[400432]: 2025-10-07 14:43:45.462327682 +0000 UTC m=+0.182266415 container died f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:43:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ecf1083d65766b1319589dcab4cb62ad6a105d78fbe5fa4ec90070a361ccd109-merged.mount: Deactivated successfully.
Oct  7 10:43:45 np0005473739 podman[400432]: 2025-10-07 14:43:45.514745375 +0000 UTC m=+0.234684108 container remove f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_euclid, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef)
Oct  7 10:43:45 np0005473739 systemd[1]: libpod-conmon-f2601deb65130247d8a9abfd0709737a5c5fd078f3c7add590ee50a3f254024b.scope: Deactivated successfully.
Oct  7 10:43:45 np0005473739 podman[400471]: 2025-10-07 14:43:45.747735509 +0000 UTC m=+0.070240129 container create 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 10:43:45 np0005473739 systemd[1]: Started libpod-conmon-84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead.scope.
Oct  7 10:43:45 np0005473739 podman[400471]: 2025-10-07 14:43:45.715054759 +0000 UTC m=+0.037559399 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:43:45 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:43:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:45 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:45 np0005473739 podman[400471]: 2025-10-07 14:43:45.86370151 +0000 UTC m=+0.186206140 container init 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 10:43:45 np0005473739 podman[400471]: 2025-10-07 14:43:45.872875574 +0000 UTC m=+0.195380184 container start 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:43:45 np0005473739 podman[400471]: 2025-10-07 14:43:45.876962513 +0000 UTC m=+0.199467153 container attach 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 10:43:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:46Z|00162|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:0b:88 10.100.0.7
Oct  7 10:43:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:46Z|00163|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:0b:88 10.100.0.7
Oct  7 10:43:46 np0005473739 nova_compute[259550]: 2025-10-07 14:43:46.227 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2440: 305 pgs: 305 active+clean; 228 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.8 MiB/s wr, 119 op/s
Oct  7 10:43:46 np0005473739 nova_compute[259550]: 2025-10-07 14:43:46.661 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848211.5633817, 998e2894-41dd-4eb6-9b5c-08a2e5da0acf => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:43:46 np0005473739 nova_compute[259550]: 2025-10-07 14:43:46.662 2 INFO nova.compute.manager [-] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:43:46 np0005473739 nova_compute[259550]: 2025-10-07 14:43:46.684 2 DEBUG nova.compute.manager [None req-27780ea9-04bd-4c01-9797-bec878a8ee5e - - - - - -] [instance: 998e2894-41dd-4eb6-9b5c-08a2e5da0acf] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:43:47 np0005473739 funny_lewin[400487]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:43:47 np0005473739 funny_lewin[400487]: --> relative data size: 1.0
Oct  7 10:43:47 np0005473739 funny_lewin[400487]: --> All data devices are unavailable
Oct  7 10:43:47 np0005473739 systemd[1]: libpod-84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead.scope: Deactivated successfully.
Oct  7 10:43:47 np0005473739 systemd[1]: libpod-84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead.scope: Consumed 1.070s CPU time.
Oct  7 10:43:47 np0005473739 podman[400471]: 2025-10-07 14:43:47.043955032 +0000 UTC m=+1.366459642 container died 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:43:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-db383d80af92d9b7a156342bcf4a4d40533c795885f289b7689ec4affa172617-merged.mount: Deactivated successfully.
Oct  7 10:43:47 np0005473739 podman[400471]: 2025-10-07 14:43:47.098521682 +0000 UTC m=+1.421026292 container remove 84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_lewin, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 10:43:47 np0005473739 systemd[1]: libpod-conmon-84c407d2b5e20902c4f0120c9d63678849b73ab343691e2d22d93ab20ca55ead.scope: Deactivated successfully.
Oct  7 10:43:47 np0005473739 podman[400671]: 2025-10-07 14:43:47.70451498 +0000 UTC m=+0.039817769 container create aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 10:43:47 np0005473739 systemd[1]: Started libpod-conmon-aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06.scope.
Oct  7 10:43:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:43:47 np0005473739 podman[400671]: 2025-10-07 14:43:47.686844941 +0000 UTC m=+0.022147750 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:43:47 np0005473739 podman[400671]: 2025-10-07 14:43:47.897094699 +0000 UTC m=+0.232397498 container init aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 10:43:47 np0005473739 podman[400671]: 2025-10-07 14:43:47.904592858 +0000 UTC m=+0.239895647 container start aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:43:47 np0005473739 podman[400671]: 2025-10-07 14:43:47.90841623 +0000 UTC m=+0.243719029 container attach aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:43:47 np0005473739 stoic_booth[400687]: 167 167
Oct  7 10:43:47 np0005473739 systemd[1]: libpod-aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06.scope: Deactivated successfully.
Oct  7 10:43:47 np0005473739 conmon[400687]: conmon aafd776df87ef4e29ae9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06.scope/container/memory.events
Oct  7 10:43:47 np0005473739 podman[400671]: 2025-10-07 14:43:47.91143631 +0000 UTC m=+0.246739099 container died aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:43:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-df57f6cb54f4c95ec9f82aef2177fdd617b059a8d229b66890a9357625cf6232-merged.mount: Deactivated successfully.
Oct  7 10:43:47 np0005473739 podman[400671]: 2025-10-07 14:43:47.954057183 +0000 UTC m=+0.289359972 container remove aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 10:43:47 np0005473739 systemd[1]: libpod-conmon-aafd776df87ef4e29ae93488caa06fa192a67111ce54f1932390c1f7f089ae06.scope: Deactivated successfully.
Oct  7 10:43:48 np0005473739 nova_compute[259550]: 2025-10-07 14:43:48.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:48 np0005473739 podman[400711]: 2025-10-07 14:43:48.189110801 +0000 UTC m=+0.042697326 container create 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:43:48 np0005473739 systemd[1]: Started libpod-conmon-454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462.scope.
Oct  7 10:43:48 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:43:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33441e79bbc0244ca68d80985e415202b3adfba1a9a49836632e1079bb984713/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33441e79bbc0244ca68d80985e415202b3adfba1a9a49836632e1079bb984713/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33441e79bbc0244ca68d80985e415202b3adfba1a9a49836632e1079bb984713/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/33441e79bbc0244ca68d80985e415202b3adfba1a9a49836632e1079bb984713/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:48 np0005473739 podman[400711]: 2025-10-07 14:43:48.168656317 +0000 UTC m=+0.022242862 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:43:48 np0005473739 podman[400711]: 2025-10-07 14:43:48.270584977 +0000 UTC m=+0.124171522 container init 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:43:48 np0005473739 podman[400711]: 2025-10-07 14:43:48.276604357 +0000 UTC m=+0.130190872 container start 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:43:48 np0005473739 podman[400711]: 2025-10-07 14:43:48.280607433 +0000 UTC m=+0.134193978 container attach 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 10:43:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2441: 305 pgs: 305 active+clean; 228 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.1 MiB/s wr, 69 op/s
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]: {
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:    "0": [
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:        {
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "devices": [
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "/dev/loop3"
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            ],
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_name": "ceph_lv0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_size": "21470642176",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "name": "ceph_lv0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "tags": {
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.cluster_name": "ceph",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.crush_device_class": "",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.encrypted": "0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.osd_id": "0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.type": "block",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.vdo": "0"
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            },
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "type": "block",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "vg_name": "ceph_vg0"
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:        }
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:    ],
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:    "1": [
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:        {
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "devices": [
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "/dev/loop4"
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            ],
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_name": "ceph_lv1",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_size": "21470642176",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "name": "ceph_lv1",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "tags": {
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.cluster_name": "ceph",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.crush_device_class": "",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.encrypted": "0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.osd_id": "1",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.type": "block",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.vdo": "0"
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            },
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "type": "block",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "vg_name": "ceph_vg1"
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:        }
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:    ],
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:    "2": [
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:        {
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "devices": [
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "/dev/loop5"
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            ],
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_name": "ceph_lv2",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_size": "21470642176",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "name": "ceph_lv2",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "tags": {
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.cluster_name": "ceph",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.crush_device_class": "",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.encrypted": "0",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.osd_id": "2",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.type": "block",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:                "ceph.vdo": "0"
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            },
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "type": "block",
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:            "vg_name": "ceph_vg2"
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:        }
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]:    ]
Oct  7 10:43:49 np0005473739 keen_dewdney[400728]: }
Oct  7 10:43:49 np0005473739 systemd[1]: libpod-454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462.scope: Deactivated successfully.
Oct  7 10:43:49 np0005473739 nova_compute[259550]: 2025-10-07 14:43:49.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:49 np0005473739 podman[400737]: 2025-10-07 14:43:49.134009076 +0000 UTC m=+0.036407518 container died 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:43:49 np0005473739 systemd[1]: var-lib-containers-storage-overlay-33441e79bbc0244ca68d80985e415202b3adfba1a9a49836632e1079bb984713-merged.mount: Deactivated successfully.
Oct  7 10:43:49 np0005473739 podman[400737]: 2025-10-07 14:43:49.251556841 +0000 UTC m=+0.153955263 container remove 454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dewdney, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:43:49 np0005473739 systemd[1]: libpod-conmon-454c93a5e5aee370f18c2905c78ee1512d2ffb7c757f6014c2fdc7a837288462.scope: Deactivated successfully.
Oct  7 10:43:49 np0005473739 podman[400892]: 2025-10-07 14:43:49.904393313 +0000 UTC m=+0.063241872 container create 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:43:49 np0005473739 systemd[1]: Started libpod-conmon-1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d.scope.
Oct  7 10:43:49 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:43:49 np0005473739 podman[400892]: 2025-10-07 14:43:49.883720175 +0000 UTC m=+0.042568754 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:43:49 np0005473739 podman[400892]: 2025-10-07 14:43:49.992331871 +0000 UTC m=+0.151180450 container init 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 10:43:50 np0005473739 podman[400892]: 2025-10-07 14:43:50.001383052 +0000 UTC m=+0.160231611 container start 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:43:50 np0005473739 confident_maxwell[400909]: 167 167
Oct  7 10:43:50 np0005473739 podman[400892]: 2025-10-07 14:43:50.00582362 +0000 UTC m=+0.164672209 container attach 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:43:50 np0005473739 systemd[1]: libpod-1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d.scope: Deactivated successfully.
Oct  7 10:43:50 np0005473739 podman[400892]: 2025-10-07 14:43:50.007771411 +0000 UTC m=+0.166619980 container died 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:43:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3ee5d97b3679c563faff53f35ac8ee773e4ee62e5f7e47125ee8f30f25d5e979-merged.mount: Deactivated successfully.
Oct  7 10:43:50 np0005473739 podman[400892]: 2025-10-07 14:43:50.047053085 +0000 UTC m=+0.205901644 container remove 1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_maxwell, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:43:50 np0005473739 systemd[1]: libpod-conmon-1efd59ff41e2ed43ae3f08e6ba24f558d80f2bf05d2d0cd1adefdcf5031e160d.scope: Deactivated successfully.
Oct  7 10:43:50 np0005473739 podman[400933]: 2025-10-07 14:43:50.246432956 +0000 UTC m=+0.046196249 container create ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:43:50 np0005473739 systemd[1]: Started libpod-conmon-ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13.scope.
Oct  7 10:43:50 np0005473739 podman[400933]: 2025-10-07 14:43:50.226871866 +0000 UTC m=+0.026635169 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:43:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:43:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52947d3a0114f83ff37dabfff821bc1540cc191168f77849050f5447d33e49b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52947d3a0114f83ff37dabfff821bc1540cc191168f77849050f5447d33e49b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52947d3a0114f83ff37dabfff821bc1540cc191168f77849050f5447d33e49b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c52947d3a0114f83ff37dabfff821bc1540cc191168f77849050f5447d33e49b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:43:50 np0005473739 podman[400933]: 2025-10-07 14:43:50.354634561 +0000 UTC m=+0.154397874 container init ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:43:50 np0005473739 podman[400933]: 2025-10-07 14:43:50.361473993 +0000 UTC m=+0.161237296 container start ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:43:50 np0005473739 podman[400933]: 2025-10-07 14:43:50.366967759 +0000 UTC m=+0.166731072 container attach ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:43:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2442: 305 pgs: 305 active+clean; 241 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 2.1 MiB/s wr, 159 op/s
Oct  7 10:43:51 np0005473739 nova_compute[259550]: 2025-10-07 14:43:51.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:51 np0005473739 jovial_williams[400950]: {
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "osd_id": 2,
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "type": "bluestore"
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:    },
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "osd_id": 1,
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "type": "bluestore"
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:    },
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "osd_id": 0,
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:        "type": "bluestore"
Oct  7 10:43:51 np0005473739 jovial_williams[400950]:    }
Oct  7 10:43:51 np0005473739 jovial_williams[400950]: }
Oct  7 10:43:51 np0005473739 podman[400933]: 2025-10-07 14:43:51.390865465 +0000 UTC m=+1.190628758 container died ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:43:51 np0005473739 systemd[1]: libpod-ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13.scope: Deactivated successfully.
Oct  7 10:43:51 np0005473739 systemd[1]: libpod-ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13.scope: Consumed 1.036s CPU time.
Oct  7 10:43:51 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c52947d3a0114f83ff37dabfff821bc1540cc191168f77849050f5447d33e49b-merged.mount: Deactivated successfully.
Oct  7 10:43:51 np0005473739 podman[400933]: 2025-10-07 14:43:51.457365233 +0000 UTC m=+1.257128526 container remove ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_williams, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:43:51 np0005473739 systemd[1]: libpod-conmon-ab5f187e5f39f07cc06041d3a4a8a59b0701ce35fe60a035d2c24794560f1e13.scope: Deactivated successfully.
Oct  7 10:43:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:43:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:43:51 np0005473739 podman[400984]: 2025-10-07 14:43:51.52577356 +0000 UTC m=+0.096959968 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Oct  7 10:43:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:43:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:43:51 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 91abf953-6e1c-4dd5-a0cc-f90a1c672c20 does not exist
Oct  7 10:43:51 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7eae22cf-15f7-4aa3-aad6-7c7d3fce76da does not exist
Oct  7 10:43:51 np0005473739 podman[400992]: 2025-10-07 14:43:51.570423817 +0000 UTC m=+0.143577547 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Oct  7 10:43:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:43:51 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:43:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2443: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 136 op/s
Oct  7 10:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:43:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:43:52 np0005473739 nova_compute[259550]: 2025-10-07 14:43:52.837 2 DEBUG nova.compute.manager [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:43:52 np0005473739 nova_compute[259550]: 2025-10-07 14:43:52.838 2 DEBUG nova.compute.manager [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing instance network info cache due to event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:43:52 np0005473739 nova_compute[259550]: 2025-10-07 14:43:52.839 2 DEBUG oslo_concurrency.lockutils [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:43:52 np0005473739 nova_compute[259550]: 2025-10-07 14:43:52.839 2 DEBUG oslo_concurrency.lockutils [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:43:52 np0005473739 nova_compute[259550]: 2025-10-07 14:43:52.839 2 DEBUG nova.network.neutron [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:43:53 np0005473739 nova_compute[259550]: 2025-10-07 14:43:53.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:53 np0005473739 nova_compute[259550]: 2025-10-07 14:43:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:43:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2444: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 136 op/s
Oct  7 10:43:54 np0005473739 nova_compute[259550]: 2025-10-07 14:43:54.614 2 DEBUG nova.network.neutron [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updated VIF entry in instance network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:43:54 np0005473739 nova_compute[259550]: 2025-10-07 14:43:54.615 2 DEBUG nova.network.neutron [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:43:54 np0005473739 nova_compute[259550]: 2025-10-07 14:43:54.695 2 DEBUG oslo_concurrency.lockutils [req-6824cd4e-bbba-4380-bbda-f675831a4079 req-013a092c-4ce7-4346-997f-42345ed013ed 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:43:55 np0005473739 nova_compute[259550]: 2025-10-07 14:43:55.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:55 np0005473739 nova_compute[259550]: 2025-10-07 14:43:55.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:43:55 np0005473739 nova_compute[259550]: 2025-10-07 14:43:55.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:43:56 np0005473739 nova_compute[259550]: 2025-10-07 14:43:56.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2445: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 132 op/s
Oct  7 10:43:56 np0005473739 nova_compute[259550]: 2025-10-07 14:43:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:43:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:57Z|00164|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:32:1b:50 10.100.0.11
Oct  7 10:43:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:43:57Z|00165|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:1b:50 10.100.0.11
Oct  7 10:43:57 np0005473739 nova_compute[259550]: 2025-10-07 14:43:57.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:43:58 np0005473739 nova_compute[259550]: 2025-10-07 14:43:58.036 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:43:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:43:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2446: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.1 MiB/s wr, 114 op/s
Oct  7 10:43:58 np0005473739 nova_compute[259550]: 2025-10-07 14:43:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:43:58 np0005473739 nova_compute[259550]: 2025-10-07 14:43:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:44:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:00.075 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:00.076 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:00.077 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2447: 305 pgs: 305 active+clean; 273 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 2.5 MiB/s wr, 146 op/s
Oct  7 10:44:01 np0005473739 nova_compute[259550]: 2025-10-07 14:44:01.237 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.325 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.325 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.325 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.326 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.326 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.327 2 INFO nova.compute.manager [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Terminating instance#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.328 2 DEBUG nova.compute.manager [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.364 2 DEBUG nova.compute.manager [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.365 2 DEBUG nova.compute.manager [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing instance network info cache due to event network-changed-b3a7ccba-7b5f-4e87-af92-723dd36cc703. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.365 2 DEBUG oslo_concurrency.lockutils [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.365 2 DEBUG oslo_concurrency.lockutils [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.366 2 DEBUG nova.network.neutron [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Refreshing network info cache for port b3a7ccba-7b5f-4e87-af92-723dd36cc703 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:44:02 np0005473739 kernel: tapb3a7ccba-7b (unregistering): left promiscuous mode
Oct  7 10:44:02 np0005473739 NetworkManager[44949]: <info>  [1759848242.3831] device (tapb3a7ccba-7b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:44:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:02Z|01446|binding|INFO|Releasing lport b3a7ccba-7b5f-4e87-af92-723dd36cc703 from this chassis (sb_readonly=0)
Oct  7 10:44:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:02Z|01447|binding|INFO|Setting lport b3a7ccba-7b5f-4e87-af92-723dd36cc703 down in Southbound
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:02Z|01448|binding|INFO|Removing iface tapb3a7ccba-7b ovn-installed in OVS
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.395 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 kernel: tap5fb4e372-50 (unregistering): left promiscuous mode
Oct  7 10:44:02 np0005473739 NetworkManager[44949]: <info>  [1759848242.4266] device (tap5fb4e372-50): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.431 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:0b:88 10.100.0.7'], port_security=['fa:16:3e:e6:0b:88 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '1f692a08-811a-41fb-a8a2-aa936481a256', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262bb8b3-c881-4ed3-8240-c435878fb605, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b3a7ccba-7b5f-4e87-af92-723dd36cc703) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.432 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b3a7ccba-7b5f-4e87-af92-723dd36cc703 in datapath bb059ee7-3091-491e-8da2-c9bd1da0f922 unbound from our chassis#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.434 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network bb059ee7-3091-491e-8da2-c9bd1da0f922#033[00m
Oct  7 10:44:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:02Z|01449|binding|INFO|Releasing lport 5fb4e372-50c4-49a3-a717-ddc2c99673c7 from this chassis (sb_readonly=0)
Oct  7 10:44:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:02Z|01450|binding|INFO|Setting lport 5fb4e372-50c4-49a3-a717-ddc2c99673c7 down in Southbound
Oct  7 10:44:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:02Z|01451|binding|INFO|Removing iface tap5fb4e372-50 ovn-installed in OVS
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.464 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f94fcbea-6379-4cbf-82e6-757b4fc5a223]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000081.scope: Deactivated successfully.
Oct  7 10:44:02 np0005473739 systemd[1]: machine-qemu\x2d163\x2dinstance\x2d00000081.scope: Consumed 14.464s CPU time.
Oct  7 10:44:02 np0005473739 systemd-machined[214580]: Machine qemu-163-instance-00000081 terminated.
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.494 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:5c:bb:5e 2001:db8::f816:3eff:fe5c:bb5e'], port_security=['fa:16:3e:5c:bb:5e 2001:db8::f816:3eff:fe5c:bb5e'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:fe5c:bb5e/64', 'neutron:device_id': '1f692a08-811a-41fb-a8a2-aa936481a256', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36c3782d-89d5-4d69-ae40-86969f172913, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5fb4e372-50c4-49a3-a717-ddc2c99673c7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.494 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[85ec7141-7f8d-4349-a242-9520a23484ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.499 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c820d1e0-3d47-4e21-a210-042c1b11d639]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.523 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2f535674-d0e9-4dd3-ae0a-34731e06c90e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2448: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 2.2 MiB/s wr, 89 op/s
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.542 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1ab606dd-f8fb-41f4-879e-47e85cc4a37f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapbb059ee7-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:59:3b:d9'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 402], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874469, 'reachable_time': 36531, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401106, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 NetworkManager[44949]: <info>  [1759848242.5578] manager: (tap5fb4e372-50): new Tun device (/org/freedesktop/NetworkManager/Devices/581)
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.562 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a94dfaa-8d15-471a-9b91-2eceaa2fae5a]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapbb059ee7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874481, 'tstamp': 874481}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401111, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapbb059ee7-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874484, 'tstamp': 874484}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401111, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.565 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb059ee7-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.577 2 INFO nova.virt.libvirt.driver [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Instance destroyed successfully.#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.578 2 DEBUG nova.objects.instance [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 1f692a08-811a-41fb-a8a2-aa936481a256 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.591 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbb059ee7-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.591 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.591 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapbb059ee7-30, col_values=(('external_ids', {'iface-id': '770f7899-3ca8-4bf3-9f06-6b8c25522fc3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.592 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.593 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5fb4e372-50c4-49a3-a717-ddc2c99673c7 in datapath e6e769bc-2b33-4210-8062-fbc8d16f9127 unbound from our chassis#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.595 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network e6e769bc-2b33-4210-8062-fbc8d16f9127#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.604 2 DEBUG nova.virt.libvirt.vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:43:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.605 2 DEBUG nova.network.os_vif_util [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.606 2 DEBUG nova.network.os_vif_util [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.606 2 DEBUG os_vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.609 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3a7ccba-7b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.615 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cfb2399b-7059-430f-97f7-17b379601fa0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.618 2 INFO os_vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:0b:88,bridge_name='br-int',has_traffic_filtering=True,id=b3a7ccba-7b5f-4e87-af92-723dd36cc703,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3a7ccba-7b')#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.619 2 DEBUG nova.virt.libvirt.vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:43:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-589430566',display_name='tempest-TestGettingAddress-server-589430566',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-589430566',id=129,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:33Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-9t1w4huy',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:43:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=1f692a08-811a-41fb-a8a2-aa936481a256,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.619 2 DEBUG nova.network.os_vif_util [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.620 2 DEBUG nova.network.os_vif_util [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.620 2 DEBUG os_vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.621 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5fb4e372-50, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.622 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.625 2 INFO os_vif [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:5c:bb:5e,bridge_name='br-int',has_traffic_filtering=True,id=5fb4e372-50c4-49a3-a717-ddc2c99673c7,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5fb4e372-50')#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.646 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b52962a6-3248-45f5-a9f6-186df33c6c11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.649 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3a9c7e09-1b92-4917-b7c3-d889da4f1a58]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.685 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[72a90c09-2462-4aa0-b429-ba85e9062769]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.705 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ffad0b-08a8-474b-9517-4cce7e83b1b2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tape6e769bc-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:70:ce:9b'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2772, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 30, 'tx_packets': 5, 'rx_bytes': 2772, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 403], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874559, 'reachable_time': 39280, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 30, 'inoctets': 2352, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 30, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2352, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 30, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401156, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.724 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[81238723-49a7-4e74-bc15-1fe11411a067]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tape6e769bc-21'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 874571, 'tstamp': 874571}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 401157, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.725 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6e769bc-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 nova_compute[259550]: 2025-10-07 14:44:02.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.729 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tape6e769bc-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.730 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.730 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tape6e769bc-20, col_values=(('external_ids', {'iface-id': '21cca283-5f07-4e28-8ee2-a58e7356e156'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:02.730 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.039 2 INFO nova.virt.libvirt.driver [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Deleting instance files /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256_del#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.040 2 INFO nova.virt.libvirt.driver [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Deletion of /var/lib/nova/instances/1f692a08-811a-41fb-a8a2-aa936481a256_del complete#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.042 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.177 2 INFO nova.compute.manager [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Took 0.85 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.178 2 DEBUG oslo.service.loopingcall [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.178 2 DEBUG nova.compute.manager [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.178 2 DEBUG nova.network.neutron [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.532 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.532 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.644 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.849 2 DEBUG nova.network.neutron [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updated VIF entry in instance network info cache for port b3a7ccba-7b5f-4e87-af92-723dd36cc703. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.849 2 DEBUG nova.network.neutron [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [{"id": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "address": "fa:16:3e:e6:0b:88", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3a7ccba-7b", "ovs_interfaceid": "b3a7ccba-7b5f-4e87-af92-723dd36cc703", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": 
"574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.921 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.922 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.932 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:44:03 np0005473739 nova_compute[259550]: 2025-10-07 14:44:03.933 2 INFO nova.compute.claims [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.046 2 DEBUG oslo_concurrency.lockutils [req-7eab557c-4dfb-4cd7-a605-8f56b6e6a2d9 req-345cd34b-37c5-4eff-954e-335fb55a7c11 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1f692a08-811a-41fb-a8a2-aa936481a256" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.207 2 DEBUG nova.scheduler.client.report [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.226 2 DEBUG nova.scheduler.client.report [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.227 2 DEBUG nova.compute.provider_tree [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.251 2 DEBUG nova.scheduler.client.report [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.277 2 DEBUG nova.scheduler.client.report [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.368 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2449: 305 pgs: 305 active+clean; 230 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.2 MiB/s wr, 82 op/s
Oct  7 10:44:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:44:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1471044954' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.818 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.828 2 DEBUG nova.compute.provider_tree [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.916 2 DEBUG nova.scheduler.client.report [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.996 2 DEBUG nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-unplugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.996 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.997 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.997 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.998 2 DEBUG nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No waiting events found dispatching network-vif-unplugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.998 2 DEBUG nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-unplugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.998 2 DEBUG nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.999 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.999 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:04 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.999 2 DEBUG oslo_concurrency.lockutils [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:04.999 2 DEBUG nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No waiting events found dispatching network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.000 2 WARNING nova.compute.manager [req-2986abf3-bedf-4663-b3b7-992cc8a9cfad req-87377927-9b3b-49a2-b92a-8c52e9bae936 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received unexpected event network-vif-plugged-b3a7ccba-7b5f-4e87-af92-723dd36cc703 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.005 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.005 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.098 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.099 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.147 2 INFO nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.186 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.343 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.345 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.345 2 INFO nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Creating image(s)#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.369 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.398 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.430 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.434 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.506 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.507 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.508 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.508 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.532 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.536 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.901 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.930 2 DEBUG nova.policy [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.964 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.997 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.997 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:44:05 np0005473739 nova_compute[259550]: 2025-10-07 14:44:05.997 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.064 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Skipping network cache update for instance because it is being deleted. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9875#033[00m
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.064 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.079 2 DEBUG nova.objects.instance [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 489f6a90-6b26-4b8b-aa3e-095d1d8df333 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.114 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.115 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Ensure instance console log exists: /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.116 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.116 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.116 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2450: 305 pgs: 305 active+clean; 207 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 368 KiB/s rd, 2.2 MiB/s wr, 106 op/s
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.616 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.617 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.617 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:44:06 np0005473739 nova_compute[259550]: 2025-10-07 14:44:06.617 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 77918bef-8f72-4152-ac55-f4d4c98477ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.216 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-unplugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.217 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.217 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.218 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.218 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No waiting events found dispatching network-vif-unplugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.218 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-unplugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.218 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-deleted-b3a7ccba-7b5f-4e87-af92-723dd36cc703 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.218 2 INFO nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Neutron deleted interface b3a7ccba-7b5f-4e87-af92-723dd36cc703; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.218 2 DEBUG nova.network.neutron [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [{"id": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "address": "fa:16:3e:5c:bb:5e", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe5c:bb5e", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5fb4e372-50", "ovs_interfaceid": "5fb4e372-50c4-49a3-a717-ddc2c99673c7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.381 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Detach interface failed, port_id=b3a7ccba-7b5f-4e87-af92-723dd36cc703, reason: Instance 1f692a08-811a-41fb-a8a2-aa936481a256 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.381 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.381 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.382 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.382 2 DEBUG oslo_concurrency.lockutils [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.382 2 DEBUG nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] No waiting events found dispatching network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.382 2 WARNING nova.compute.manager [req-3f05a6a0-1348-45da-aad1-0b70734a2053 req-0807bc21-5507-487d-9861-21770513c122 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received unexpected event network-vif-plugged-5fb4e372-50c4-49a3-a717-ddc2c99673c7 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.427 2 DEBUG nova.network.neutron [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.448 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Successfully created port: 677a2058-6a17-4388-9011-69553854c197 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.561 2 INFO nova.compute.manager [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Took 4.38 seconds to deallocate network for instance.#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.755 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.756 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:07 np0005473739 nova_compute[259550]: 2025-10-07 14:44:07.914 2 DEBUG oslo_concurrency.processutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:08 np0005473739 nova_compute[259550]: 2025-10-07 14:44:08.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:08 np0005473739 podman[401348]: 2025-10-07 14:44:08.100072181 +0000 UTC m=+0.086488650 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 10:44:08 np0005473739 podman[401349]: 2025-10-07 14:44:08.116892858 +0000 UTC m=+0.103355298 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3)
Oct  7 10:44:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:44:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:44:08 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/675125889' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:44:08 np0005473739 nova_compute[259550]: 2025-10-07 14:44:08.410 2 DEBUG oslo_concurrency.processutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:08 np0005473739 nova_compute[259550]: 2025-10-07 14:44:08.417 2 DEBUG nova.compute.provider_tree [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:44:08 np0005473739 nova_compute[259550]: 2025-10-07 14:44:08.459 2 DEBUG nova.scheduler.client.report [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:44:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2451: 305 pgs: 305 active+clean; 207 MiB data, 994 MiB used, 59 GiB / 60 GiB avail; 365 KiB/s rd, 2.2 MiB/s wr, 105 op/s
Oct  7 10:44:08 np0005473739 nova_compute[259550]: 2025-10-07 14:44:08.614 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.859s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:08 np0005473739 nova_compute[259550]: 2025-10-07 14:44:08.797 2 INFO nova.scheduler.client.report [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 1f692a08-811a-41fb-a8a2-aa936481a256#033[00m
Oct  7 10:44:08 np0005473739 nova_compute[259550]: 2025-10-07 14:44:08.965 2 DEBUG oslo_concurrency.lockutils [None req-4ff448fd-1764-496f-ba8c-4277dcb81ce8 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "1f692a08-811a-41fb-a8a2-aa936481a256" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:08 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.326 2 DEBUG nova.compute.manager [req-531cdbe6-eccd-4c27-9041-194ef7857684 req-a4d4f773-4004-4e61-af6a-b09f30c2ab2a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Received event network-vif-deleted-5fb4e372-50c4-49a3-a717-ddc2c99673c7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.722 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Successfully updated port: 677a2058-6a17-4388-9011-69553854c197 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.742 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.743 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" acquired by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.743 2 INFO nova.compute.manager [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Shelving#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.772 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.773 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.773 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.802 2 DEBUG nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Shutting down instance from state 1 _clean_shutdown /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4071#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.981 2 DEBUG nova.compute.manager [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-changed-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.982 2 DEBUG nova.compute.manager [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing instance network info cache due to event network-changed-677a2058-6a17-4388-9011-69553854c197. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.982 2 DEBUG oslo_concurrency.lockutils [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:44:09 np0005473739 nova_compute[259550]: 2025-10-07 14:44:09.984 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:44:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2452: 305 pgs: 305 active+clean; 230 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 369 KiB/s rd, 3.3 MiB/s wr, 110 op/s
Oct  7 10:44:11 np0005473739 nova_compute[259550]: 2025-10-07 14:44:11.924 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 
1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:12 np0005473739 kernel: tap3ac63514-77 (unregistering): left promiscuous mode
Oct  7 10:44:12 np0005473739 NetworkManager[44949]: <info>  [1759848252.0449] device (tap3ac63514-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:44:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:12Z|01452|binding|INFO|Releasing lport 3ac63514-77e7-4d94-a67c-94806ca3b58b from this chassis (sb_readonly=0)
Oct  7 10:44:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:12Z|01453|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b down in Southbound
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:12Z|01454|binding|INFO|Removing iface tap3ac63514-77 ovn-installed in OVS
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:12 np0005473739 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct  7 10:44:12 np0005473739 systemd[1]: machine-qemu\x2d164\x2dinstance\x2d00000083.scope: Consumed 13.633s CPU time.
Oct  7 10:44:12 np0005473739 systemd-machined[214580]: Machine qemu-164-instance-00000083 terminated.
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.183 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.183 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.183 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.186 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:1b:50 10.100.0.11'], port_security=['fa:16:3e:32:1b:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '30241223-64c5-4a88-8ba2-ee340fe6cbd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ac40ef14492f40768b3852a40da26621', 'neutron:revision_number': '4', 'neutron:security_group_ids': '56b8c028-3f77-4dba-a2e9-4a1cb7c88d4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c5eb1a-8a6c-4afe-ae4e-424959d231e5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3ac63514-77e7-4d94-a67c-94806ca3b58b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.187 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3ac63514-77e7-4d94-a67c-94806ca3b58b in datapath 7c054d6f-68ec-4f0b-9362-221001cc6b67 unbound from our chassis#033[00m
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.189 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c054d6f-68ec-4f0b-9362-221001cc6b67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.190 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92c2388b-57e8-4c11-9387-b6f2309a8a27]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.190 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 namespace which is not needed anymore#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.250 2 DEBUG nova.network.neutron [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updating instance_info_cache with network_info: [{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:12 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [NOTICE]   (400152) : haproxy version is 2.8.14-c23fe91
Oct  7 10:44:12 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [NOTICE]   (400152) : path to executable is /usr/sbin/haproxy
Oct  7 10:44:12 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [WARNING]  (400152) : Exiting Master process...
Oct  7 10:44:12 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [ALERT]    (400152) : Current worker (400160) exited with code 143 (Terminated)
Oct  7 10:44:12 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[400148]: [WARNING]  (400152) : All workers exited. Exiting... (0)
Oct  7 10:44:12 np0005473739 systemd[1]: libpod-d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5.scope: Deactivated successfully.
Oct  7 10:44:12 np0005473739 podman[401432]: 2025-10-07 14:44:12.335855619 +0000 UTC m=+0.050293758 container died d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:44:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5-userdata-shm.mount: Deactivated successfully.
Oct  7 10:44:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3d6983957f31550d23f88388bbfaa03a6c047e05d35d4dc2cf9e78281c838558-merged.mount: Deactivated successfully.
Oct  7 10:44:12 np0005473739 podman[401432]: 2025-10-07 14:44:12.379564901 +0000 UTC m=+0.094003040 container cleanup d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:44:12 np0005473739 systemd[1]: libpod-conmon-d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5.scope: Deactivated successfully.
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.416 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.416 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.417 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.417 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.417 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:12 np0005473739 podman[401468]: 2025-10-07 14:44:12.448260197 +0000 UTC m=+0.042720396 container remove d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.454 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[646ed011-f0a3-4aae-b006-c82918b60cd9]: (4, ('Tue Oct  7 02:44:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 (d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5)\nd8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5\nTue Oct  7 02:44:12 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 (d8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5)\nd8fd899a1c297dad7d0b170fc81218411df024bcceba65ea847c80a0bab69af5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.456 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf82003b-7d57-406c-b89a-ac2c20597524]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.457 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c054d6f-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:12 np0005473739 kernel: tap7c054d6f-60: left promiscuous mode
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.480 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2fbb7f3c-a9a9-4eb7-845d-3cf077be028b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.524 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69fce859-9062-42d9-9f09-a9cb91222d9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.525 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b2239ab1-75f8-4431-8d4b-d7b38183d8e4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2453: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 173 KiB/s rd, 2.5 MiB/s wr, 93 op/s
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.544 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee33aa2-c237-4062-a6c4-44ed77fe5f2e]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 880656, 'reachable_time': 20673, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401487, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:12 np0005473739 systemd[1]: run-netns-ovnmeta\x2d7c054d6f\x2d68ec\x2d4f0b\x2d9362\x2d221001cc6b67.mount: Deactivated successfully.
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.547 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:44:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:12.547 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e6772d-a823-45f5-bc7e-ba5ba3a78b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.569 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.570 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Instance network_info: |[{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.571 2 DEBUG oslo_concurrency.lockutils [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.571 2 DEBUG nova.network.neutron [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing network info cache for port 677a2058-6a17-4388-9011-69553854c197 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.575 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Start _get_guest_xml network_info=[{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.580 2 WARNING nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.586 2 DEBUG nova.virt.libvirt.host [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.586 2 DEBUG nova.virt.libvirt.host [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.593 2 DEBUG nova.virt.libvirt.host [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.595 2 DEBUG nova.virt.libvirt.host [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.596 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.596 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.596 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.597 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.597 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.597 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.597 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.598 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.598 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.598 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.599 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.599 2 DEBUG nova.virt.hardware [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.603 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.780 2 DEBUG nova.compute.manager [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-unplugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.780 2 DEBUG oslo_concurrency.lockutils [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.781 2 DEBUG oslo_concurrency.lockutils [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.781 2 DEBUG oslo_concurrency.lockutils [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.782 2 DEBUG nova.compute.manager [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] No waiting events found dispatching network-vif-unplugged-3ac63514-77e7-4d94-a67c-94806ca3b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.782 2 WARNING nova.compute.manager [req-b63efb60-74ce-4595-9e3e-a6e3df76a15d req-1feac593-ac3c-4b08-a07a-aa3798c68b44 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received unexpected event network-vif-unplugged-3ac63514-77e7-4d94-a67c-94806ca3b58b for instance with vm_state active and task_state shelving.#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.821 2 INFO nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance shutdown successfully after 3 seconds.#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.827 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance destroyed successfully.#033[00m
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.827 2 DEBUG nova.objects.instance [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'numa_topology' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:44:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3841762384' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:44:12 np0005473739 nova_compute[259550]: 2025-10-07 14:44:12.883 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:44:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1451467580' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.050 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.075 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.080 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.231 2 INFO nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Beginning cold snapshot process#033[00m
Oct  7 10:44:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:44:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1497203963' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.547 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.548 2 DEBUG nova.virt.libvirt.vif [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:44:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-992946339',display_name='tempest-TestNetworkBasicOps-server-992946339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-992946339',id=132,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNb4KI5K3ESp/qtDfTCTovy9nAGo/33FtwIKSe6Yo7uBCrstvg9OTDvMktEqMWbObIphCTkLTVovrjRh9e99psr4PmzBYLqjNAws3HaKHjxoqr6GDuKbnjMBh642Y3hcvA==',key_name='tempest-TestNetworkBasicOps-2088718198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-jb8fwwnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:05Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=489f6a90-6b26-4b8b-aa3e-095d1d8df333,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], 
"version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.548 2 DEBUG nova.network.os_vif_util [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.549 2 DEBUG nova.network.os_vif_util [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.551 2 DEBUG nova.objects.instance [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 489f6a90-6b26-4b8b-aa3e-095d1d8df333 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.568 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.568 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.569 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.569 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.569 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.571 2 INFO nova.compute.manager [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Terminating instance#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.572 2 DEBUG nova.compute.manager [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.601 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.601 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:44:13 np0005473739 kernel: tapc75fde3c-84 (unregistering): left promiscuous mode
Oct  7 10:44:13 np0005473739 NetworkManager[44949]: <info>  [1759848253.6434] device (tapc75fde3c-84): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:44:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:13Z|01455|binding|INFO|Releasing lport c75fde3c-8461-4ed7-9c14-7f14f5794599 from this chassis (sb_readonly=0)
Oct  7 10:44:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:13Z|01456|binding|INFO|Setting lport c75fde3c-8461-4ed7-9c14-7f14f5794599 down in Southbound
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.660 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:13Z|01457|binding|INFO|Removing iface tapc75fde3c-84 ovn-installed in OVS
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 kernel: tapdfe40ca6-70 (unregistering): left promiscuous mode
Oct  7 10:44:13 np0005473739 NetworkManager[44949]: <info>  [1759848253.6860] device (tapdfe40ca6-70): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:13Z|01458|binding|INFO|Releasing lport dfe40ca6-700f-4101-8729-3d1ee103c5ea from this chassis (sb_readonly=1)
Oct  7 10:44:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:13Z|01459|binding|INFO|Removing iface tapdfe40ca6-70 ovn-installed in OVS
Oct  7 10:44:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:13Z|01460|if_status|INFO|Dropped 4 log messages in last 166 seconds (most recently, 166 seconds ago) due to excessive rate
Oct  7 10:44:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:13Z|01461|if_status|INFO|Not setting lport dfe40ca6-700f-4101-8729-3d1ee103c5ea down as sb is readonly
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:13Z|01462|binding|INFO|Setting lport dfe40ca6-700f-4101-8729-3d1ee103c5ea down in Southbound
Oct  7 10:44:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.705 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:41:99:4d 10.100.0.4'], port_security=['fa:16:3e:41:99:4d 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '77918bef-8f72-4152-ac55-f4d4c98477ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262bb8b3-c881-4ed3-8240-c435878fb605, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=c75fde3c-8461-4ed7-9c14-7f14f5794599) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:44:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.706 161536 INFO neutron.agent.ovn.metadata.agent [-] Port c75fde3c-8461-4ed7-9c14-7f14f5794599 in datapath bb059ee7-3091-491e-8da2-c9bd1da0f922 unbound from our chassis#033[00m
Oct  7 10:44:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.707 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network bb059ee7-3091-491e-8da2-c9bd1da0f922, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:44:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.708 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[71d25368-0e90-42c3-baa3-328aa8418771]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.708 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922 namespace which is not needed anymore#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.723 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  <uuid>489f6a90-6b26-4b8b-aa3e-095d1d8df333</uuid>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  <name>instance-00000084</name>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-992946339</nova:name>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:44:12</nova:creationTime>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <nova:port uuid="677a2058-6a17-4388-9011-69553854c197">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <entry name="serial">489f6a90-6b26-4b8b-aa3e-095d1d8df333</entry>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <entry name="uuid">489f6a90-6b26-4b8b-aa3e-095d1d8df333</entry>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:72:0b:2c"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <target dev="tap677a2058-6a"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/console.log" append="off"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:44:13 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:44:13 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:44:13 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:44:13 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.723 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Preparing to wait for external event network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.723 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.724 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.724 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.725 2 DEBUG nova.virt.libvirt.vif [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:44:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-992946339',display_name='tempest-TestNetworkBasicOps-server-992946339',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-992946339',id=132,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNb4KI5K3ESp/qtDfTCTovy9nAGo/33FtwIKSe6Yo7uBCrstvg9OTDvMktEqMWbObIphCTkLTVovrjRh9e99psr4PmzBYLqjNAws3HaKHjxoqr6GDuKbnjMBh642Y3hcvA==',key_name='tempest-TestNetworkBasicOps-2088718198',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-jb8fwwnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:05Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=489f6a90-6b26-4b8b-aa3e-095d1d8df333,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], 
"routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.725 2 DEBUG nova.network.os_vif_util [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.726 2 DEBUG nova.network.os_vif_util [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.726 2 DEBUG os_vif [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.727 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap677a2058-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap677a2058-6a, col_values=(('external_ids', {'iface-id': '677a2058-6a17-4388-9011-69553854c197', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:72:0b:2c', 'vm-uuid': '489f6a90-6b26-4b8b-aa3e-095d1d8df333'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 NetworkManager[44949]: <info>  [1759848253.7334] manager: (tap677a2058-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/582)
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.734 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.743 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.744 2 INFO os_vif [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a')#033[00m
Oct  7 10:44:13 np0005473739 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Deactivated successfully.
Oct  7 10:44:13 np0005473739 systemd[1]: machine-qemu\x2d160\x2dinstance\x2d0000007f.scope: Consumed 16.471s CPU time.
Oct  7 10:44:13 np0005473739 systemd-machined[214580]: Machine qemu-160-instance-0000007f terminated.
Oct  7 10:44:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.784 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b4:c5:56 2001:db8::f816:3eff:feb4:c556'], port_security=['fa:16:3e:b4:c5:56 2001:db8::f816:3eff:feb4:c556'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '2001:db8::f816:3eff:feb4:c556/64', 'neutron:device_id': '77918bef-8f72-4152-ac55-f4d4c98477ec', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'a500b116-d64f-4be8-9413-85a351e36563', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=36c3782d-89d5-4d69-ae40-86969f172913, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=dfe40ca6-700f-4101-8729-3d1ee103c5ea) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:44:13 np0005473739 NetworkManager[44949]: <info>  [1759848253.7901] manager: (tapc75fde3c-84): new Tun device (/org/freedesktop/NetworkManager/Devices/583)
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.790 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.791 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000007f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.822 2 INFO nova.virt.libvirt.driver [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Instance destroyed successfully.#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.822 2 DEBUG nova.objects.instance [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 77918bef-8f72-4152-ac55-f4d4c98477ec obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:13 np0005473739 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [NOTICE]   (397660) : haproxy version is 2.8.14-c23fe91
Oct  7 10:44:13 np0005473739 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [NOTICE]   (397660) : path to executable is /usr/sbin/haproxy
Oct  7 10:44:13 np0005473739 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [WARNING]  (397660) : Exiting Master process...
Oct  7 10:44:13 np0005473739 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [ALERT]    (397660) : Current worker (397662) exited with code 143 (Terminated)
Oct  7 10:44:13 np0005473739 neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922[397654]: [WARNING]  (397660) : All workers exited. Exiting... (0)
Oct  7 10:44:13 np0005473739 systemd[1]: libpod-be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d.scope: Deactivated successfully.
Oct  7 10:44:13 np0005473739 podman[401607]: 2025-10-07 14:44:13.860040593 +0000 UTC m=+0.060787687 container died be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:44:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d-userdata-shm.mount: Deactivated successfully.
Oct  7 10:44:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-aade3835522dc3103f13522e66cfc9c234f96ccd29b80c7bd03443b1f8c3128d-merged.mount: Deactivated successfully.
Oct  7 10:44:13 np0005473739 podman[401607]: 2025-10-07 14:44:13.905108921 +0000 UTC m=+0.105855995 container cleanup be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:44:13 np0005473739 systemd[1]: libpod-conmon-be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d.scope: Deactivated successfully.
Oct  7 10:44:13 np0005473739 podman[401659]: 2025-10-07 14:44:13.96678624 +0000 UTC m=+0.038328019 container remove be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  7 10:44:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.973 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c63df306-df07-4655-b347-7cc726e7e32b]: (4, ('Tue Oct  7 02:44:13 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922 (be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d)\nbe3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d\nTue Oct  7 02:44:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922 (be3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d)\nbe3371cc49634eaa170b0266059927104c59d26531a8115efc898e6f547bbf1d\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.975 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[527641f8-4909-431f-a323-bb4503d22a2d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:13.976 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbb059ee7-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.979 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.980 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3467MB free_disk=59.877315521240234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.980 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.980 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:13 np0005473739 kernel: tapbb059ee7-30: left promiscuous mode
Oct  7 10:44:13 np0005473739 nova_compute[259550]: 2025-10-07 14:44:13.993 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.003 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.005 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8118be85-df07-444a-80a0-6fe1eba319b7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.038 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2cedabd5-06c8-4175-a83b-e2f050959bef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.039 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df55eda0-85e7-4581-8081-9925c5ddd591]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.055 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[58750837-4637-4b7f-a22e-8687727a56e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874462, 'reachable_time': 26257, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401686, 'error': None, 'target': 'ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 systemd[1]: run-netns-ovnmeta\x2dbb059ee7\x2d3091\x2d491e\x2d8da2\x2dc9bd1da0f922.mount: Deactivated successfully.
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.057 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-bb059ee7-3091-491e-8da2-c9bd1da0f922 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.057 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[d07df711-96bc-43b9-bcf2-923c2a2b18f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.058 161536 INFO neutron.agent.ovn.metadata.agent [-] Port dfe40ca6-700f-4101-8729-3d1ee103c5ea in datapath e6e769bc-2b33-4210-8062-fbc8d16f9127 unbound from our chassis#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.059 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network e6e769bc-2b33-4210-8062-fbc8d16f9127, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.060 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dacf9111-baa9-4d7e-8e6d-b93f42879e12]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.060 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127 namespace which is not needed anymore#033[00m
Oct  7 10:44:14 np0005473739 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [NOTICE]   (397831) : haproxy version is 2.8.14-c23fe91
Oct  7 10:44:14 np0005473739 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [NOTICE]   (397831) : path to executable is /usr/sbin/haproxy
Oct  7 10:44:14 np0005473739 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [WARNING]  (397831) : Exiting Master process...
Oct  7 10:44:14 np0005473739 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [ALERT]    (397831) : Current worker (397833) exited with code 143 (Terminated)
Oct  7 10:44:14 np0005473739 neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127[397827]: [WARNING]  (397831) : All workers exited. Exiting... (0)
Oct  7 10:44:14 np0005473739 systemd[1]: libpod-6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109.scope: Deactivated successfully.
Oct  7 10:44:14 np0005473739 conmon[397827]: conmon 6a035aa3c4170c2ee13b <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109.scope/container/memory.events
Oct  7 10:44:14 np0005473739 podman[401704]: 2025-10-07 14:44:14.195188521 +0000 UTC m=+0.049721882 container died 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:44:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109-userdata-shm.mount: Deactivated successfully.
Oct  7 10:44:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-975fa376099815760edd3ac270db67c4e9de90d314e2f36412b98c8c8fb0e428-merged.mount: Deactivated successfully.
Oct  7 10:44:14 np0005473739 podman[401704]: 2025-10-07 14:44:14.229615817 +0000 UTC m=+0.084149178 container cleanup 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 10:44:14 np0005473739 systemd[1]: libpod-conmon-6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109.scope: Deactivated successfully.
Oct  7 10:44:14 np0005473739 podman[401734]: 2025-10-07 14:44:14.285994485 +0000 UTC m=+0.037420346 container remove 6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.291 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f95fb358-059f-48fe-b246-41c28251e711]: (4, ('Tue Oct  7 02:44:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127 (6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109)\n6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109\nTue Oct  7 02:44:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127 (6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109)\n6a035aa3c4170c2ee13b10ea42dcc31dff6d3e15bc141fa4c5d8958ddc0a8109\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.292 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[296cdcb6-0554-4455-a361-acaded6513ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.293 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tape6e769bc-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.295 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:14 np0005473739 kernel: tape6e769bc-20: left promiscuous mode
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.324 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[85c118ea-dcb2-4bed-a7a3-0722d413c066]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.358 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[053a9e94-a80a-4c98-a7dc-d7596ec35c3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.359 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ee73da-b3ab-42c6-a44c-aa4c5e53ed00]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.378 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f41a595-c07d-44a8-97ca-361113a9d1cc]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 874552, 'reachable_time': 15137, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 401762, 'error': None, 'target': 'ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.380 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-e6e769bc-2b33-4210-8062-fbc8d16f9127 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:44:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:14.380 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b26fa508-a95c-4dbb-9138-3b9d7c1b2b0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2454: 305 pgs: 305 active+clean; 246 MiB data, 1013 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 1.8 MiB/s wr, 61 op/s
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.646 2 DEBUG nova.virt.libvirt.vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:42:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:42:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.646 2 DEBUG nova.network.os_vif_util [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.647 2 DEBUG nova.network.os_vif_util [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.647 2 DEBUG os_vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.649 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc75fde3c-84, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.702 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.710 2 INFO os_vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:41:99:4d,bridge_name='br-int',has_traffic_filtering=True,id=c75fde3c-8461-4ed7-9c14-7f14f5794599,network=Network(bb059ee7-3091-491e-8da2-c9bd1da0f922),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapc75fde3c-84')#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.710 2 DEBUG nova.virt.libvirt.vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:42:19Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-649311904',display_name='tempest-TestGettingAddress-server-649311904',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-649311904',id=127,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJS2AmQrUlmu5YVTf1yrQvI4CZzSnk54sIr+9stKkSL+woxduk/9H3kxdtIwX7d/xD1ib0NHMo1X5YmZmom5A1TTTF41NilHAzjQ833X7MiDKqUOlI3YhPdenpYyVhrm/A==',key_name='tempest-TestGettingAddress-1285818028',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:42:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-pje5d30x',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:42:45Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=77918bef-8f72-4152-ac55-f4d4c98477ec,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.711 2 DEBUG nova.network.os_vif_util [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.711 2 DEBUG nova.network.os_vif_util [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.711 2 DEBUG os_vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapdfe40ca6-70, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.721 2 INFO os_vif [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b4:c5:56,bridge_name='br-int',has_traffic_filtering=True,id=dfe40ca6-700f-4101-8729-3d1ee103c5ea,network=Network(e6e769bc-2b33-4210-8062-fbc8d16f9127),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapdfe40ca6-70')#033[00m
Oct  7 10:44:14 np0005473739 systemd[1]: run-netns-ovnmeta\x2de6e769bc\x2d2b33\x2d4210\x2d8062\x2dfbc8d16f9127.mount: Deactivated successfully.
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.953 2 DEBUG nova.virt.libvirt.imagebackend [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.966 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.967 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.967 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:72:0b:2c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.968 2 INFO nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Using config drive#033[00m
Oct  7 10:44:14 np0005473739 nova_compute[259550]: 2025-10-07 14:44:14.989 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.032 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 77918bef-8f72-4152-ac55-f4d4c98477ec actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 30241223-64c5-4a88-8ba2-ee340fe6cbd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 489f6a90-6b26-4b8b-aa3e-095d1d8df333 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=59GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.108 2 INFO nova.virt.libvirt.driver [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Deleting instance files /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec_del#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.109 2 INFO nova.virt.libvirt.driver [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Deletion of /var/lib/nova/instances/77918bef-8f72-4152-ac55-f4d4c98477ec_del complete#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.114 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.280 2 DEBUG nova.storage.rbd_utils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] creating snapshot(a6640c287f554325b70ebfdf06813bd3) on rbd image(30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.436 2 DEBUG nova.compute.manager [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.437 2 DEBUG nova.compute.manager [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing instance network info cache due to event network-changed-c75fde3c-8461-4ed7-9c14-7f14f5794599. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.437 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.437 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.438 2 DEBUG nova.network.neutron [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Refreshing network info cache for port c75fde3c-8461-4ed7-9c14-7f14f5794599 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:44:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:44:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3890338425' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.564 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.571 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.699 2 INFO nova.compute.manager [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Took 2.13 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.700 2 DEBUG oslo.service.loopingcall [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.700 2 DEBUG nova.compute.manager [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.700 2 DEBUG nova.network.neutron [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.769 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.988 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:44:15 np0005473739 nova_compute[259550]: 2025-10-07 14:44:15.988 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.028 2 DEBUG nova.network.neutron [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updated VIF entry in instance network info cache for port 677a2058-6a17-4388-9011-69553854c197. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.029 2 DEBUG nova.network.neutron [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updating instance_info_cache with network_info: [{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e278 do_prune osdmap full prune enabled
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.090 2 DEBUG oslo_concurrency.lockutils [req-38c9fefa-eb20-4853-af54-c979d08df39b req-e042fe2a-1d22-45b1-9f28-b27137f20afe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:44:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e279 e279: 3 total, 3 up, 3 in
Oct  7 10:44:16 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e279: 3 total, 3 up, 3 in
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.118 2 DEBUG nova.compute.manager [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-unplugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.118 2 DEBUG oslo_concurrency.lockutils [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.118 2 DEBUG oslo_concurrency.lockutils [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.119 2 DEBUG oslo_concurrency.lockutils [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.119 2 DEBUG nova.compute.manager [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-unplugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.119 2 DEBUG nova.compute.manager [req-6a034fb3-90e0-48ed-b962-4f376111076d req-a952655c-092c-4fac-9f52-ec20d6bd1b35 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-unplugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.172 2 DEBUG nova.storage.rbd_utils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] cloning vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk@a6640c287f554325b70ebfdf06813bd3 to images/b28f4653-03be-4806-84c3-f036c6d93d6e clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.217 2 INFO nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Creating config drive at /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.221 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppl9bavil execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.287 2 DEBUG nova.storage.rbd_utils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] flattening images/b28f4653-03be-4806-84c3-f036c6d93d6e flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.371 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmppl9bavil" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.402 2 DEBUG nova.storage.rbd_utils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.405 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2456: 305 pgs: 305 active+clean; 215 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 2.1 MiB/s wr, 49 op/s
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.742 2 DEBUG oslo_concurrency.processutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config 489f6a90-6b26-4b8b-aa3e-095d1d8df333_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.337s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.743 2 INFO nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Deleting local config drive /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333/disk.config because it was imported into RBD.#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.757 2 DEBUG nova.storage.rbd_utils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] removing snapshot(a6640c287f554325b70ebfdf06813bd3) on rbd image(30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.787 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:44:16 np0005473739 kernel: tap677a2058-6a: entered promiscuous mode
Oct  7 10:44:16 np0005473739 NetworkManager[44949]: <info>  [1759848256.8045] manager: (tap677a2058-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/584)
Oct  7 10:44:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:16Z|01463|binding|INFO|Claiming lport 677a2058-6a17-4388-9011-69553854c197 for this chassis.
Oct  7 10:44:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:16Z|01464|binding|INFO|677a2058-6a17-4388-9011-69553854c197: Claiming fa:16:3e:72:0b:2c 10.100.0.13
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:16 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:16Z|01465|binding|INFO|Setting lport 677a2058-6a17-4388-9011-69553854c197 ovn-installed in OVS
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.827 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:16 np0005473739 nova_compute[259550]: 2025-10-07 14:44:16.831 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:16 np0005473739 systemd-machined[214580]: New machine qemu-165-instance-00000084.
Oct  7 10:44:16 np0005473739 systemd-udevd[402008]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:44:16 np0005473739 NetworkManager[44949]: <info>  [1759848256.8497] device (tap677a2058-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:44:16 np0005473739 systemd[1]: Started Virtual Machine qemu-165-instance-00000084.
Oct  7 10:44:16 np0005473739 NetworkManager[44949]: <info>  [1759848256.8524] device (tap677a2058-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.023 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:0b:2c 10.100.0.13'], port_security=['fa:16:3e:72:0b:2c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '489f6a90-6b26-4b8b-aa3e-095d1d8df333', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'cc3234eb-21e6-48c7-8919-4558ed1fcfca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2a2dd7-3564-4953-a9fd-9479810a55c3, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=677a2058-6a17-4388-9011-69553854c197) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:44:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:17Z|01466|binding|INFO|Setting lport 677a2058-6a17-4388-9011-69553854c197 up in Southbound
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.024 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 677a2058-6a17-4388-9011-69553854c197 in datapath 5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 bound to our chassis#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.026 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.038 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d615ec37-072f-4fe3-966b-102a0245a6b6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.039 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5daef5b9-01 in ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.041 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5daef5b9-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.041 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c5510e64-d753-45f5-80e9-18f679af9cbe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.041 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[784b7d7f-10f1-492a-a633-c3e74a18248f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.043 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.053 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[bfbd4810-00a3-484b-8529-78a2dd8dcc92]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.068 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f7fc9226-c063-49aa-8707-30768097b84d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.095 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[797681ea-878b-4d42-b86e-d78ceafc42ed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e279 do_prune osdmap full prune enabled
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.104 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fe2c2035-8e8b-445a-b1d8-bdb3fa3cf5ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 NetworkManager[44949]: <info>  [1759848257.1051] manager: (tap5daef5b9-00): new Veth device (/org/freedesktop/NetworkManager/Devices/585)
Oct  7 10:44:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e280 e280: 3 total, 3 up, 3 in
Oct  7 10:44:17 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e280: 3 total, 3 up, 3 in
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.145 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7034a049-9bba-45af-a31a-45800e23ceb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.148 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[260a8273-47e1-4d1d-a8f5-902bdfb27d5d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.162 2 DEBUG nova.storage.rbd_utils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] creating snapshot(snap) on rbd image(b28f4653-03be-4806-84c3-f036c6d93d6e) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 10:44:17 np0005473739 NetworkManager[44949]: <info>  [1759848257.1731] device (tap5daef5b9-00): carrier: link connected
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.182 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a3a75384-5350-4e8d-8836-a4cadca5d6d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.201 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30be3b7f-aba0-4f19-9683-93cb80b57123]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5daef5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:3b:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 420], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884073, 'reachable_time': 24792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402062, 'error': None, 'target': 'ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.216 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a768caf9-1dad-49f6-8585-84326eb765c9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef3:3be4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 884073, 'tstamp': 884073}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402065, 'error': None, 'target': 'ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.233 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[830e83ec-2a7f-4ae3-a97e-5607761a6b80]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5daef5b9-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f3:3b:e4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 420], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884073, 'reachable_time': 24792, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402066, 'error': None, 'target': 'ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.268 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f2fb9176-aac9-4c6a-8ddf-51b04984d2d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.330 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0ec2042-4bf6-47c5-bbf7-18b8d4fbc570]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.334 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5daef5b9-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.334 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.335 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5daef5b9-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:17 np0005473739 NetworkManager[44949]: <info>  [1759848257.3383] manager: (tap5daef5b9-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/586)
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.338 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:17 np0005473739 kernel: tap5daef5b9-00: entered promiscuous mode
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.343 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5daef5b9-00, col_values=(('external_ids', {'iface-id': '62e3fee2-0588-4b81-9e92-ba04a35b5aca'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:17Z|01467|binding|INFO|Releasing lport 62e3fee2-0588-4b81-9e92-ba04a35b5aca from this chassis (sb_readonly=0)
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.361 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.366 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.367 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61cf6c52-6371-49f6-a9c2-c7d55e2fbbe4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.367 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0.pid.haproxy
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:44:17 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:17.368 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'env', 'PROCESS_TAG=haproxy-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.576 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848242.5747437, 1f692a08-811a-41fb-a8a2-aa936481a256 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.576 2 INFO nova.compute.manager [-] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:44:17 np0005473739 podman[402145]: 2025-10-07 14:44:17.695395758 +0000 UTC m=+0.027865652 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:44:17 np0005473739 podman[402145]: 2025-10-07 14:44:17.798074467 +0000 UTC m=+0.130544331 container create feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.818 2 DEBUG nova.compute.manager [None req-ef77697e-7eb7-4875-8a52-1fdcdeb9ba98 - - - - - -] [instance: 1f692a08-811a-41fb-a8a2-aa936481a256] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:44:17 np0005473739 systemd[1]: Started libpod-conmon-feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6.scope.
Oct  7 10:44:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:44:17 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23924fe35529e2e0cddf4b199e3d7462d2dc12d09d3b0995ef3a978cbf4b85b8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:17 np0005473739 podman[402145]: 2025-10-07 14:44:17.958406519 +0000 UTC m=+0.290876403 container init feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true)
Oct  7 10:44:17 np0005473739 podman[402145]: 2025-10-07 14:44:17.963990837 +0000 UTC m=+0.296460701 container start feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.974 2 DEBUG nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-unplugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.975 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.975 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.976 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.978 2 DEBUG nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-unplugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.978 2 DEBUG nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-unplugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.978 2 DEBUG nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.979 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.979 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.979 2 DEBUG oslo_concurrency.lockutils [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.980 2 DEBUG nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:17 np0005473739 nova_compute[259550]: 2025-10-07 14:44:17.980 2 WARNING nova.compute.manager [req-e027fb3b-e105-4170-878c-2b10d15dc856 req-cd197e4a-64d3-4e2f-9fcf-57d4dca8d70f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received unexpected event network-vif-plugged-c75fde3c-8461-4ed7-9c14-7f14f5794599 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:44:17 np0005473739 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [NOTICE]   (402164) : New worker (402166) forked
Oct  7 10:44:17 np0005473739 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [NOTICE]   (402164) : Loading success.
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.029 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848258.0292382, 489f6a90-6b26-4b8b-aa3e-095d1d8df333 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.030 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] VM Started (Lifecycle Event)#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.046 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.051 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.055 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848258.029427, 489f6a90-6b26-4b8b-aa3e-095d1d8df333 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.055 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.081 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.085 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.115 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:44:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e280 do_prune osdmap full prune enabled
Oct  7 10:44:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e281 e281: 3 total, 3 up, 3 in
Oct  7 10:44:18 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e281: 3 total, 3 up, 3 in
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.300 2 DEBUG nova.compute.manager [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.301 2 DEBUG oslo_concurrency.lockutils [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.301 2 DEBUG oslo_concurrency.lockutils [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.301 2 DEBUG oslo_concurrency.lockutils [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.301 2 DEBUG nova.compute.manager [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] No waiting events found dispatching network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.302 2 WARNING nova.compute.manager [req-9d21b70c-6abc-4316-944a-ec0b01bbfb7b req-96be565d-8aaa-4b9e-b505-fa8f33e470da 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received unexpected event network-vif-plugged-dfe40ca6-700f-4101-8729-3d1ee103c5ea for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:44:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2459: 305 pgs: 305 active+clean; 215 MiB data, 997 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 10 KiB/s wr, 43 op/s
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.944 2 DEBUG nova.network.neutron [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updated VIF entry in instance network info cache for port c75fde3c-8461-4ed7-9c14-7f14f5794599. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.944 2 DEBUG nova.network.neutron [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [{"id": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "address": "fa:16:3e:41:99:4d", "network": {"id": "bb059ee7-3091-491e-8da2-c9bd1da0f922", "bridge": "br-int", "label": "tempest-network-smoke--19962388", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapc75fde3c-84", "ovs_interfaceid": "c75fde3c-8461-4ed7-9c14-7f14f5794599", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}, {"id": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "address": "fa:16:3e:b4:c5:56", "network": {"id": "e6e769bc-2b33-4210-8062-fbc8d16f9127", "bridge": "br-int", "label": "tempest-network-smoke--1742670396", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:feb4:c556", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapdfe40ca6-70", "ovs_interfaceid": "dfe40ca6-700f-4101-8729-3d1ee103c5ea", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.978 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-77918bef-8f72-4152-ac55-f4d4c98477ec" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.978 2 DEBUG nova.compute.manager [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.978 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.979 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.979 2 DEBUG oslo_concurrency.lockutils [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.979 2 DEBUG nova.compute.manager [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] No waiting events found dispatching network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:18 np0005473739 nova_compute[259550]: 2025-10-07 14:44:18.979 2 WARNING nova.compute.manager [req-cb444d0d-7cbf-4718-a78f-255f405a2aaf req-f2a6cbe1-5ddd-45ac-9bfb-975def28fa09 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received unexpected event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b for instance with vm_state active and task_state shelving_image_uploading.#033[00m
Oct  7 10:44:19 np0005473739 nova_compute[259550]: 2025-10-07 14:44:19.604 2 DEBUG nova.network.neutron [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:19 np0005473739 nova_compute[259550]: 2025-10-07 14:44:19.626 2 INFO nova.compute.manager [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Took 3.93 seconds to deallocate network for instance.#033[00m
Oct  7 10:44:19 np0005473739 nova_compute[259550]: 2025-10-07 14:44:19.695 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:19 np0005473739 nova_compute[259550]: 2025-10-07 14:44:19.696 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:19 np0005473739 nova_compute[259550]: 2025-10-07 14:44:19.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:19 np0005473739 nova_compute[259550]: 2025-10-07 14:44:19.780 2 DEBUG oslo_concurrency.processutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:19 np0005473739 nova_compute[259550]: 2025-10-07 14:44:19.994 2 INFO nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Snapshot image upload complete#033[00m
Oct  7 10:44:19 np0005473739 nova_compute[259550]: 2025-10-07 14:44:19.994 2 DEBUG nova.compute.manager [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.058 2 INFO nova.compute.manager [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Shelve offloading#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.063 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-deleted-dfe40ca6-700f-4101-8729-3d1ee103c5ea external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.064 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.064 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.064 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.065 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.065 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Processing event network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.065 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.066 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.066 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.066 2 DEBUG oslo_concurrency.lockutils [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.067 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] No waiting events found dispatching network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.067 2 WARNING nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received unexpected event network-vif-plugged-677a2058-6a17-4388-9011-69553854c197 for instance with vm_state building and task_state spawning.#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.067 2 DEBUG nova.compute.manager [req-27048c1c-1d18-4c43-893c-303862170867 req-683b3800-2580-47a3-aebc-b0b3161f01a2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Received event network-vif-deleted-c75fde3c-8461-4ed7-9c14-7f14f5794599 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.068 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.073 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.077 2 INFO nova.virt.libvirt.driver [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Instance spawned successfully.#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.077 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.078 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance destroyed successfully.#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.079 2 DEBUG nova.compute.manager [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.080 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848260.079563, 489f6a90-6b26-4b8b-aa3e-095d1d8df333 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.080 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.102 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.103 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.103 2 DEBUG nova.network.neutron [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.109 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.114 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.115 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.115 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.116 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.116 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.116 2 DEBUG nova.virt.libvirt.driver [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.124 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.153 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.188 2 INFO nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Took 14.85 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.189 2 DEBUG nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:44:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:44:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4257071256' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.226 2 DEBUG oslo_concurrency.processutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.232 2 DEBUG nova.compute.provider_tree [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.253 2 INFO nova.compute.manager [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Took 16.37 seconds to build instance.#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.259 2 DEBUG nova.scheduler.client.report [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.284 2 DEBUG oslo_concurrency.lockutils [None req-f13a6827-529c-4e49-b70d-b8f7a778641b 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 16.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.290 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.324 2 INFO nova.scheduler.client.report [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 77918bef-8f72-4152-ac55-f4d4c98477ec#033[00m
Oct  7 10:44:20 np0005473739 nova_compute[259550]: 2025-10-07 14:44:20.406 2 DEBUG oslo_concurrency.lockutils [None req-dc4458a5-ceab-47e0-b99b-e47c1dd20848 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "77918bef-8f72-4152-ac55-f4d4c98477ec" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 6.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2460: 305 pgs: 305 active+clean; 220 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 5.9 MiB/s wr, 199 op/s
Oct  7 10:44:21 np0005473739 nova_compute[259550]: 2025-10-07 14:44:21.586 2 DEBUG nova.network.neutron [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:21 np0005473739 nova_compute[259550]: 2025-10-07 14:44:21.606 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:44:22 np0005473739 podman[402197]: 2025-10-07 14:44:22.084718318 +0000 UTC m=+0.061891956 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  7 10:44:22 np0005473739 podman[402198]: 2025-10-07 14:44:22.13299157 +0000 UTC m=+0.109199683 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2461: 305 pgs: 305 active+clean; 246 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 7.6 MiB/s rd, 7.3 MiB/s wr, 182 op/s
Oct  7 10:44:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:22.691 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=44, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=43) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:44:22 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:22.693 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:44:22
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'backups', 'default.rgw.control', 'default.rgw.log', '.rgw.root', 'default.rgw.meta', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'vms', 'images', 'volumes']
Oct  7 10:44:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.788 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance destroyed successfully.#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.788 2 DEBUG nova.objects.instance [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'resources' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.801 2 DEBUG nova.virt.libvirt.vif [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIP0AMqBzzYmRrc/gnJzbbMyBAFmuqR5iC2+H5lS9P1NxQlnpTWhcfEteNAmj5N76nsDrvP+kS3KGhT6YiYXIeHex+K2fKyOb9r6ICTnlnIC+U793tuGi+owzMBnIl2+nw==',key_name='tempest-TestShelveInstance-2096115086',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=4,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member',shelved_at='2025-10-07T14:44:19.994816',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='b28f4653-03be-4806-84c3-f036c6d93d6e'},tags=<?>,task_state='shelving_offloading',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:44:14Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='shelved') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.802 2 DEBUG nova.network.os_vif_util [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.804 2 DEBUG nova.network.os_vif_util [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.805 2 DEBUG os_vif [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.806 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ac63514-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.810 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.812 2 INFO os_vif [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77')#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.867 2 DEBUG nova.compute.manager [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.868 2 DEBUG nova.compute.manager [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing instance network info cache due to event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.868 2 DEBUG oslo_concurrency.lockutils [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.868 2 DEBUG oslo_concurrency.lockutils [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:44:22 np0005473739 nova_compute[259550]: 2025-10-07 14:44:22.868 2 DEBUG nova.network.neutron [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:44:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:44:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:44:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:44:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:44:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:44:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:44:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:44:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:44:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:44:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.213 2 INFO nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deleting instance files /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3_del
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.214 2 INFO nova.virt.libvirt.driver [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deletion of /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3_del complete
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.310 2 INFO nova.scheduler.client.report [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Deleted allocations for instance 30241223-64c5-4a88-8ba2-ee340fe6cbd3
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.361 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.362 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.412 2 DEBUG oslo_concurrency.processutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:44:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:44:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2658636029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.888 2 DEBUG oslo_concurrency.processutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.893 2 DEBUG nova.compute.provider_tree [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.913 2 DEBUG nova.scheduler.client.report [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.936 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.574s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.972 2 DEBUG nova.network.neutron [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updated VIF entry in instance network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.972 2 DEBUG nova.network.neutron [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": null, "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "unbound", "details": {}, "devname": "tap3ac63514-77", "ovs_interfaceid": null, "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.986 2 DEBUG oslo_concurrency.lockutils [None req-787bbbce-4e3b-4b87-b906-82d1550d336b 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" "released" by "nova.compute.manager.ComputeManager.shelve_instance.<locals>.do_shelve_instance" :: held 14.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:44:23 np0005473739 nova_compute[259550]: 2025-10-07 14:44:23.996 2 DEBUG oslo_concurrency.lockutils [req-5e6df965-71fe-4121-8f73-17f6bd77a1cd req-4ddef9cc-4022-46c7-9ec6-554e8e386bf2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:44:24 np0005473739 nova_compute[259550]: 2025-10-07 14:44:24.304 2 DEBUG nova.compute.manager [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-changed-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:44:24 np0005473739 nova_compute[259550]: 2025-10-07 14:44:24.304 2 DEBUG nova.compute.manager [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing instance network info cache due to event network-changed-677a2058-6a17-4388-9011-69553854c197. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:44:24 np0005473739 nova_compute[259550]: 2025-10-07 14:44:24.305 2 DEBUG oslo_concurrency.lockutils [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:44:24 np0005473739 nova_compute[259550]: 2025-10-07 14:44:24.305 2 DEBUG oslo_concurrency.lockutils [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:44:24 np0005473739 nova_compute[259550]: 2025-10-07 14:44:24.305 2 DEBUG nova.network.neutron [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing network info cache for port 677a2058-6a17-4388-9011-69553854c197 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:44:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2462: 305 pgs: 305 active+clean; 196 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 7.9 MiB/s rd, 5.9 MiB/s wr, 241 op/s
Oct  7 10:44:25 np0005473739 nova_compute[259550]: 2025-10-07 14:44:25.492 2 DEBUG nova.network.neutron [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updated VIF entry in instance network info cache for port 677a2058-6a17-4388-9011-69553854c197. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:44:25 np0005473739 nova_compute[259550]: 2025-10-07 14:44:25.493 2 DEBUG nova.network.neutron [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updating instance_info_cache with network_info: [{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:44:25 np0005473739 nova_compute[259550]: 2025-10-07 14:44:25.514 2 DEBUG oslo_concurrency.lockutils [req-544e9844-68ff-4024-98a3-54d0a20a9dcc req-eebcacea-bf9c-4d52-a516-c4e5bcf16488 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:44:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:25Z|01468|binding|INFO|Releasing lport 62e3fee2-0588-4b81-9e92-ba04a35b5aca from this chassis (sb_readonly=0)
Oct  7 10:44:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:25.694 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '44'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:44:25 np0005473739 nova_compute[259550]: 2025-10-07 14:44:25.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:44:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2463: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.5 MiB/s rd, 5.0 MiB/s wr, 246 op/s
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.299 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848252.2985566, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.300 2 INFO nova.compute.manager [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Stopped (Lifecycle Event)
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.327 2 DEBUG nova.compute.manager [None req-b8a2af00-c87b-4d6c-860a-150608e9dace - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.378 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.378 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" acquired by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.379 2 INFO nova.compute.manager [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Unshelving
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.464 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.465 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.472 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'pci_requests' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.493 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'numa_topology' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.507 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.508 2 INFO nova.compute.claims [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.609 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:44:27 np0005473739 nova_compute[259550]: 2025-10-07 14:44:27.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:44:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:44:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2351130135' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:44:28 np0005473739 nova_compute[259550]: 2025-10-07 14:44:28.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:44:28 np0005473739 nova_compute[259550]: 2025-10-07 14:44:28.115 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:44:28 np0005473739 nova_compute[259550]: 2025-10-07 14:44:28.120 2 DEBUG nova.compute.provider_tree [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:44:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e281 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:44:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e281 do_prune osdmap full prune enabled
Oct  7 10:44:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e282 e282: 3 total, 3 up, 3 in
Oct  7 10:44:28 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e282: 3 total, 3 up, 3 in
Oct  7 10:44:28 np0005473739 nova_compute[259550]: 2025-10-07 14:44:28.138 2 DEBUG nova.scheduler.client.report [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:44:28 np0005473739 nova_compute[259550]: 2025-10-07 14:44:28.159 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.694s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:44:28 np0005473739 nova_compute[259550]: 2025-10-07 14:44:28.348 2 INFO nova.network.neutron [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating port 3ac63514-77e7-4d94-a67c-94806ca3b58b with attributes {'binding:host_id': 'compute-0.ctlplane.example.com', 'device_owner': 'compute:nova'}
Oct  7 10:44:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2465: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 7.0 MiB/s rd, 4.7 MiB/s wr, 232 op/s
Oct  7 10:44:28 np0005473739 nova_compute[259550]: 2025-10-07 14:44:28.818 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848253.8180351, 77918bef-8f72-4152-ac55-f4d4c98477ec => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:44:28 np0005473739 nova_compute[259550]: 2025-10-07 14:44:28.819 2 INFO nova.compute.manager [-] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] VM Stopped (Lifecycle Event)
Oct  7 10:44:28 np0005473739 nova_compute[259550]: 2025-10-07 14:44:28.838 2 DEBUG nova.compute.manager [None req-1b7f581f-5081-40a0-803a-0ac631cfaf51 - - - - - -] [instance: 77918bef-8f72-4152-ac55-f4d4c98477ec] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:44:29 np0005473739 nova_compute[259550]: 2025-10-07 14:44:29.082 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:44:29 np0005473739 nova_compute[259550]: 2025-10-07 14:44:29.083 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:44:29 np0005473739 nova_compute[259550]: 2025-10-07 14:44:29.083 2 DEBUG nova.network.neutron [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:44:29 np0005473739 nova_compute[259550]: 2025-10-07 14:44:29.161 2 DEBUG nova.compute.manager [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:44:29 np0005473739 nova_compute[259550]: 2025-10-07 14:44:29.162 2 DEBUG nova.compute.manager [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing instance network info cache due to event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:44:29 np0005473739 nova_compute[259550]: 2025-10-07 14:44:29.162 2 DEBUG oslo_concurrency.lockutils [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #117. Immutable memtables: 0.
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.363698) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 69] Flushing memtable with next log file: 117
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269363789, "job": 69, "event": "flush_started", "num_memtables": 1, "num_entries": 2081, "num_deletes": 252, "total_data_size": 3310780, "memory_usage": 3359984, "flush_reason": "Manual Compaction"}
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 69] Level-0 flush table #118: started
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269378541, "cf_name": "default", "job": 69, "event": "table_file_creation", "file_number": 118, "file_size": 3254448, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 49753, "largest_seqno": 51833, "table_properties": {"data_size": 3245073, "index_size": 5869, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2437, "raw_key_size": 19509, "raw_average_key_size": 20, "raw_value_size": 3226214, "raw_average_value_size": 3360, "num_data_blocks": 260, "num_entries": 960, "num_filter_entries": 960, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848054, "oldest_key_time": 1759848054, "file_creation_time": 1759848269, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 118, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 69] Flush lasted 14875 microseconds, and 7001 cpu microseconds.
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.378584) [db/flush_job.cc:967] [default] [JOB 69] Level-0 flush table #118: 3254448 bytes OK
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.378604) [db/memtable_list.cc:519] [default] Level-0 commit table #118 started
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.380407) [db/memtable_list.cc:722] [default] Level-0 commit table #118: memtable #1 done
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.380424) EVENT_LOG_v1 {"time_micros": 1759848269380419, "job": 69, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.380443) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 69] Try to delete WAL files size 3302033, prev total WAL file size 3302033, number of live WAL files 2.
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000114.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.381379) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730034373639' seq:72057594037927935, type:22 .. '7061786F730035303231' seq:0, type:0; will stop at (end)
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 70] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 69 Base level 0, inputs: [118(3178KB)], [116(8285KB)]
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269381453, "job": 70, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [118], "files_L6": [116], "score": -1, "input_data_size": 11738451, "oldest_snapshot_seqno": -1}
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 70] Generated table #119: 7374 keys, 10043767 bytes, temperature: kUnknown
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269449671, "cf_name": "default", "job": 70, "event": "table_file_creation", "file_number": 119, "file_size": 10043767, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9994741, "index_size": 29458, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18501, "raw_key_size": 190866, "raw_average_key_size": 25, "raw_value_size": 9863285, "raw_average_value_size": 1337, "num_data_blocks": 1153, "num_entries": 7374, "num_filter_entries": 7374, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848269, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 119, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.449926) [db/compaction/compaction_job.cc:1663] [default] [JOB 70] Compacted 1@0 + 1@6 files to L6 => 10043767 bytes
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.451618) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 171.9 rd, 147.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.1, 8.1 +0.0 blob) out(9.6 +0.0 blob), read-write-amplify(6.7) write-amplify(3.1) OK, records in: 7894, records dropped: 520 output_compression: NoCompression
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.451655) EVENT_LOG_v1 {"time_micros": 1759848269451638, "job": 70, "event": "compaction_finished", "compaction_time_micros": 68305, "compaction_time_cpu_micros": 25563, "output_level": 6, "num_output_files": 1, "total_output_size": 10043767, "num_input_records": 7894, "num_output_records": 7374, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000118.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269452629, "job": 70, "event": "table_file_deletion", "file_number": 118}
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000116.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848269454646, "job": 70, "event": "table_file_deletion", "file_number": 116}
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.381247) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.454714) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.454722) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.454724) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.454726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:29 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:29.454728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2466: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 4.0 MiB/s rd, 1.1 MiB/s wr, 138 op/s
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.146 2 DEBUG nova.network.neutron [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.167 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.168 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.169 2 INFO nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating image(s)#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.189 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.193 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.195 2 DEBUG oslo_concurrency.lockutils [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.195 2 DEBUG nova.network.neutron [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.236 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.255 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.259 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "93592b775c19eb4c64da7128d6922cbb2e2cd89e" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.261 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "93592b775c19eb4c64da7128d6922cbb2e2cd89e" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.506 2 DEBUG nova.virt.libvirt.imagebackend [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/b28f4653-03be-4806-84c3-f036c6d93d6e/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/b28f4653-03be-4806-84c3-f036c6d93d6e/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.566 2 DEBUG nova.virt.libvirt.imagebackend [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/b28f4653-03be-4806-84c3-f036c6d93d6e/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.567 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] cloning images/b28f4653-03be-4806-84c3-f036c6d93d6e@snap to None/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.702 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "93592b775c19eb4c64da7128d6922cbb2e2cd89e" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.441s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.863 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'migration_context' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:31 np0005473739 nova_compute[259550]: 2025-10-07 14:44:31.921 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] flattening vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.336 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Image rbd:vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk:id=openstack:conf=/etc/ceph/ceph.conf flattened successfully while unshelving instance. _try_fetch_image_cache /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11007#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.338 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.338 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Ensure instance console log exists: /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.339 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.339 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.340 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.342 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start _get_guest_xml network_info=[{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:44:09Z,direct_url=<?>,disk_format='raw',id=b28f4653-03be-4806-84c3-f036c6d93d6e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-308957706-shelved',owner='ac40ef14492f40768b3852a40da26621',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:44:19Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.346 2 WARNING nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.349 2 DEBUG nova.virt.libvirt.host [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.350 2 DEBUG nova.virt.libvirt.host [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.353 2 DEBUG nova.virt.libvirt.host [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.354 2 DEBUG nova.virt.libvirt.host [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.354 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.354 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T14:44:09Z,direct_url=<?>,disk_format='raw',id=b28f4653-03be-4806-84c3-f036c6d93d6e,min_disk=1,min_ram=0,name='tempest-TestShelveInstance-server-308957706-shelved',owner='ac40ef14492f40768b3852a40da26621',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T14:44:19Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.355 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.355 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.356 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.356 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.356 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.357 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.357 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.357 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.358 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.358 2 DEBUG nova.virt.hardware [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.358 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.375 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2467: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.4 KiB/s wr, 115 op/s
Oct  7 10:44:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:44:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1905787020' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:44:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:44:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1905787020' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014249405808426125 of space, bias 1.0, pg target 0.42748217425278373 quantized to 32 (current 32)
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:44:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:44:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3431466215' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.898 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.923 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:32 np0005473739 nova_compute[259550]: 2025-10-07 14:44:32.927 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e282 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.156 2 DEBUG nova.network.neutron [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updated VIF entry in instance network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.157 2 DEBUG nova.network.neutron [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.175 2 DEBUG oslo_concurrency.lockutils [req-3e3897e1-aee8-4359-b4ac-8aa77ba227cd req-d1bc7b15-ea24-4538-b331-9fae3921f118 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:44:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:44:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/559260670' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:44:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:33Z|00166|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:72:0b:2c 10.100.0.13
Oct  7 10:44:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:33Z|00167|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:0b:2c 10.100.0.13
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.407 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.409 2 DEBUG nova.virt.libvirt.vif [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='b28f4653-03be-4806-84c3-f036c6d93d6e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-2096115086',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_v
ideo_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member',shelved_at='2025-10-07T14:44:19.994816',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='b28f4653-03be-4806-84c3-f036c6d93d6e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:27Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.409 2 DEBUG nova.network.os_vif_util [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.410 2 DEBUG nova.network.os_vif_util [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.412 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'pci_devices' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.426 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  <uuid>30241223-64c5-4a88-8ba2-ee340fe6cbd3</uuid>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  <name>instance-00000083</name>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestShelveInstance-server-308957706</nova:name>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:44:32</nova:creationTime>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <nova:user uuid="52bb2c10051444f181ee0572525fbe9d">tempest-TestShelveInstance-703128978-project-member</nova:user>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <nova:project uuid="ac40ef14492f40768b3852a40da26621">tempest-TestShelveInstance-703128978</nova:project>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="b28f4653-03be-4806-84c3-f036c6d93d6e"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <nova:port uuid="3ac63514-77e7-4d94-a67c-94806ca3b58b">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <entry name="serial">30241223-64c5-4a88-8ba2-ee340fe6cbd3</entry>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <entry name="uuid">30241223-64c5-4a88-8ba2-ee340fe6cbd3</entry>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:32:1b:50"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <target dev="tap3ac63514-77"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/console.log" append="off"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <input type="keyboard" bus="usb"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:44:33 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:44:33 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:44:33 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:44:33 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.428 2 DEBUG nova.compute.manager [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Preparing to wait for external event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.429 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.429 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.429 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.430 2 DEBUG nova.virt.libvirt.vif [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='b28f4653-03be-4806-84c3-f036c6d93d6e',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data=None,key_name='tempest-TestShelveInstance-2096115086',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:43:45Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=4,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',
image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member',shelved_at='2025-10-07T14:44:19.994816',shelved_host='compute-0.ctlplane.example.com',shelved_image_id='b28f4653-03be-4806-84c3-f036c6d93d6e'},tags=<?>,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:27Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='shelved_offloaded') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.430 2 DEBUG nova.network.os_vif_util [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.431 2 DEBUG nova.network.os_vif_util [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.431 2 DEBUG os_vif [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.432 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.433 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.435 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap3ac63514-77, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.436 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap3ac63514-77, col_values=(('external_ids', {'iface-id': '3ac63514-77e7-4d94-a67c-94806ca3b58b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:32:1b:50', 'vm-uuid': '30241223-64c5-4a88-8ba2-ee340fe6cbd3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:33 np0005473739 NetworkManager[44949]: <info>  [1759848273.4392] manager: (tap3ac63514-77): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/587)
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.444 2 INFO os_vif [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77')#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.505 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.505 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.505 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] No VIF found with MAC fa:16:3e:32:1b:50, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.506 2 INFO nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Using config drive#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.530 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.549 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'ec2_ids' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:33 np0005473739 nova_compute[259550]: 2025-10-07 14:44:33.663 2 DEBUG nova.objects.instance [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'keypairs' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.090 2 INFO nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Creating config drive at /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.095 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph60qwz87 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.237 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmph60qwz87" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.261 2 DEBUG nova.storage.rbd_utils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] rbd image 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.265 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.438 2 DEBUG oslo_concurrency.processutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config 30241223-64c5-4a88-8ba2-ee340fe6cbd3_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.440 2 INFO nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deleting local config drive /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3/disk.config because it was imported into RBD.#033[00m
Oct  7 10:44:34 np0005473739 kernel: tap3ac63514-77: entered promiscuous mode
Oct  7 10:44:34 np0005473739 NetworkManager[44949]: <info>  [1759848274.5025] manager: (tap3ac63514-77): new Tun device (/org/freedesktop/NetworkManager/Devices/588)
Oct  7 10:44:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:34Z|01469|binding|INFO|Claiming lport 3ac63514-77e7-4d94-a67c-94806ca3b58b for this chassis.
Oct  7 10:44:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:34Z|01470|binding|INFO|3ac63514-77e7-4d94-a67c-94806ca3b58b: Claiming fa:16:3e:32:1b:50 10.100.0.11
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.514 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:1b:50 10.100.0.11'], port_security=['fa:16:3e:32:1b:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '30241223-64c5-4a88-8ba2-ee340fe6cbd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ac40ef14492f40768b3852a40da26621', 'neutron:revision_number': '7', 'neutron:security_group_ids': '56b8c028-3f77-4dba-a2e9-4a1cb7c88d4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c5eb1a-8a6c-4afe-ae4e-424959d231e5, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3ac63514-77e7-4d94-a67c-94806ca3b58b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.515 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3ac63514-77e7-4d94-a67c-94806ca3b58b in datapath 7c054d6f-68ec-4f0b-9362-221001cc6b67 bound to our chassis#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.517 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7c054d6f-68ec-4f0b-9362-221001cc6b67#033[00m
Oct  7 10:44:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:34Z|01471|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b ovn-installed in OVS
Oct  7 10:44:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:34Z|01472|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b up in Southbound
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.528 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0eaa18e9-696b-4c1e-8abc-71733c6c5126]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.529 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7c054d6f-61 in ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.531 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7c054d6f-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.531 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[45fe601d-98e4-407f-9c27-6f27a745337c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.532 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5dd542cb-6e5a-451c-9ead-6960d8f546ff]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 systemd-udevd[402657]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:44:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2468: 305 pgs: 305 active+clean; 249 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 4.6 MiB/s rd, 5.4 MiB/s wr, 157 op/s
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.544 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[8da4d0b6-b777-443b-96c3-3e8f3fa4566f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 NetworkManager[44949]: <info>  [1759848274.5516] device (tap3ac63514-77): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:44:34 np0005473739 NetworkManager[44949]: <info>  [1759848274.5526] device (tap3ac63514-77): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:44:34 np0005473739 systemd-machined[214580]: New machine qemu-166-instance-00000083.
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.557 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[242eadb5-51d7-4ce2-9268-961932eb0a81]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 systemd[1]: Started Virtual Machine qemu-166-instance-00000083.
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.590 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b770f80d-8e72-4def-9781-232fca6e21a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 NetworkManager[44949]: <info>  [1759848274.5989] manager: (tap7c054d6f-60): new Veth device (/org/freedesktop/NetworkManager/Devices/589)
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[622bc70f-d694-43ec-bb8a-e5d247b233e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.633 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e8c9efc6-bd51-44b7-9d9b-5f3ad92fbfe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.636 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[76e77f01-2477-4477-b64f-5f0604dab6e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 NetworkManager[44949]: <info>  [1759848274.6595] device (tap7c054d6f-60): carrier: link connected
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.664 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[02b88fb8-9ac6-448c-96a7-75429c46eb9d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.682 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[10e837b7-90ab-42ed-bdc0-7a2bc38c135b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c054d6f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:e1:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885822, 'reachable_time': 26783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402689, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.698 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[487776a1-426e-446d-8b9a-def27c398ca9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef7:e109'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 885822, 'tstamp': 885822}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 402690, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.718 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bfa3bee8-39ec-4193-9fec-aed31ae2c80a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7c054d6f-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f7:e1:09'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 422], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885822, 'reachable_time': 26783, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 402691, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.751 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[219c5a66-73e2-4398-adcb-a48b84dadd27]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.815 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dac79552-5b36-4503-aa18-8b5c29cd65b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.817 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c054d6f-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.817 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.817 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7c054d6f-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:34 np0005473739 NetworkManager[44949]: <info>  [1759848274.8797] manager: (tap7c054d6f-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/590)
Oct  7 10:44:34 np0005473739 kernel: tap7c054d6f-60: entered promiscuous mode
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.882 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.883 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7c054d6f-60, col_values=(('external_ids', {'iface-id': 'bb603baf-bbde-4821-a893-8713cfab0527'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:34 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:34Z|01473|binding|INFO|Releasing lport bb603baf-bbde-4821-a893-8713cfab0527 from this chassis (sb_readonly=0)
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.888 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.888 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab4d01d3-bdcf-4ef7-a294-4046b9058277]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.889 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-7c054d6f-68ec-4f0b-9362-221001cc6b67
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/7c054d6f-68ec-4f0b-9362-221001cc6b67.pid.haproxy
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 7c054d6f-68ec-4f0b-9362-221001cc6b67
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:44:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:34.890 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'env', 'PROCESS_TAG=haproxy-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7c054d6f-68ec-4f0b-9362-221001cc6b67.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:34 np0005473739 nova_compute[259550]: 2025-10-07 14:44:34.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.144 2 DEBUG nova.compute.manager [req-a3f041aa-2ee5-493a-b6d2-8ff584fa7185 req-414f9eda-4031-4cad-9ac1-04a24f8f6548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.145 2 DEBUG oslo_concurrency.lockutils [req-a3f041aa-2ee5-493a-b6d2-8ff584fa7185 req-414f9eda-4031-4cad-9ac1-04a24f8f6548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.145 2 DEBUG oslo_concurrency.lockutils [req-a3f041aa-2ee5-493a-b6d2-8ff584fa7185 req-414f9eda-4031-4cad-9ac1-04a24f8f6548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.145 2 DEBUG oslo_concurrency.lockutils [req-a3f041aa-2ee5-493a-b6d2-8ff584fa7185 req-414f9eda-4031-4cad-9ac1-04a24f8f6548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.146 2 DEBUG nova.compute.manager [req-a3f041aa-2ee5-493a-b6d2-8ff584fa7185 req-414f9eda-4031-4cad-9ac1-04a24f8f6548 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Processing event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:44:35 np0005473739 podman[402765]: 2025-10-07 14:44:35.27500347 +0000 UTC m=+0.057658184 container create 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:44:35 np0005473739 systemd[1]: Started libpod-conmon-6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5.scope.
Oct  7 10:44:35 np0005473739 podman[402765]: 2025-10-07 14:44:35.242046074 +0000 UTC m=+0.024700788 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:44:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:44:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d869432837bfb3671cd5715ce91e9446229f7e47546d9d9b9ecd83497bca9d04/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:35 np0005473739 podman[402765]: 2025-10-07 14:44:35.356013123 +0000 UTC m=+0.138667847 container init 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:44:35 np0005473739 podman[402765]: 2025-10-07 14:44:35.362472715 +0000 UTC m=+0.145127429 container start 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 10:44:35 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [NOTICE]   (402784) : New worker (402786) forked
Oct  7 10:44:35 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [NOTICE]   (402784) : Loading success.
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.451 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848275.450709, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.452 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Started (Lifecycle Event)#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.455 2 DEBUG nova.compute.manager [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.459 2 DEBUG nova.virt.libvirt.driver [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.463 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance spawned successfully.#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.496 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.499 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Synchronizing instance power state after lifecycle event "Started"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.532 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.533 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848275.452111, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.533 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.591 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.595 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848275.4588773, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.595 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.648 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.653 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: shelved_offloaded, current task_state: spawning, current DB power_state: 4, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:44:35 np0005473739 nova_compute[259550]: 2025-10-07 14:44:35.694 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:44:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2469: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 172 op/s
Oct  7 10:44:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e282 do_prune osdmap full prune enabled
Oct  7 10:44:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e283 e283: 3 total, 3 up, 3 in
Oct  7 10:44:36 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e283: 3 total, 3 up, 3 in
Oct  7 10:44:37 np0005473739 nova_compute[259550]: 2025-10-07 14:44:37.259 2 DEBUG nova.compute.manager [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:37 np0005473739 nova_compute[259550]: 2025-10-07 14:44:37.259 2 DEBUG oslo_concurrency.lockutils [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:37 np0005473739 nova_compute[259550]: 2025-10-07 14:44:37.259 2 DEBUG oslo_concurrency.lockutils [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:37 np0005473739 nova_compute[259550]: 2025-10-07 14:44:37.260 2 DEBUG oslo_concurrency.lockutils [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:37 np0005473739 nova_compute[259550]: 2025-10-07 14:44:37.260 2 DEBUG nova.compute.manager [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] No waiting events found dispatching network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:44:37 np0005473739 nova_compute[259550]: 2025-10-07 14:44:37.260 2 WARNING nova.compute.manager [req-3735e836-3f11-4167-9e5d-eb5000e00dff req-7d77194a-cf0f-4e16-8fc7-a092fc54d4e5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received unexpected event network-vif-plugged-3ac63514-77e7-4d94-a67c-94806ca3b58b for instance with vm_state shelved_offloaded and task_state spawning.#033[00m
Oct  7 10:44:37 np0005473739 nova_compute[259550]: 2025-10-07 14:44:37.274 2 DEBUG nova.compute.manager [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:44:37 np0005473739 nova_compute[259550]: 2025-10-07 14:44:37.518 2 DEBUG oslo_concurrency.lockutils [None req-5d17815d-66e0-4f58-a14c-ceaf4b75503e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" "released" by "nova.compute.manager.ComputeManager.unshelve_instance.<locals>.do_unshelve_instance" :: held 10.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:38 np0005473739 nova_compute[259550]: 2025-10-07 14:44:38.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:44:38 np0005473739 nova_compute[259550]: 2025-10-07 14:44:38.438 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2471: 305 pgs: 305 active+clean; 279 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.1 MiB/s rd, 7.2 MiB/s wr, 172 op/s
Oct  7 10:44:39 np0005473739 nova_compute[259550]: 2025-10-07 14:44:39.018 2 INFO nova.compute.manager [None req-32f44190-fbd1-404b-86af-3a2b1fbdb35d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Get console output#033[00m
Oct  7 10:44:39 np0005473739 nova_compute[259550]: 2025-10-07 14:44:39.025 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:44:39 np0005473739 podman[402795]: 2025-10-07 14:44:39.074853451 +0000 UTC m=+0.064754623 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.build-date=20251001, container_name=multipathd)
Oct  7 10:44:39 np0005473739 podman[402796]: 2025-10-07 14:44:39.080803399 +0000 UTC m=+0.066713184 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:44:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:39Z|00168|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:0b:2c 10.100.0.13
Oct  7 10:44:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2472: 305 pgs: 305 active+clean; 226 MiB data, 1003 MiB used, 59 GiB / 60 GiB avail; 7.2 MiB/s rd, 7.2 MiB/s wr, 274 op/s
Oct  7 10:44:41 np0005473739 nova_compute[259550]: 2025-10-07 14:44:41.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:42Z|00169|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:0b:2c 10.100.0.13
Oct  7 10:44:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2473: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 7.2 MiB/s wr, 284 op/s
Oct  7 10:44:43 np0005473739 nova_compute[259550]: 2025-10-07 14:44:43.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e283 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:44:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e283 do_prune osdmap full prune enabled
Oct  7 10:44:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 e284: 3 total, 3 up, 3 in
Oct  7 10:44:43 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e284: 3 total, 3 up, 3 in
Oct  7 10:44:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:43.320 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:0c:94 10.100.0.2 2001:db8::f816:3eff:fe31:c94'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe31:c94/64', 'neutron:device_id': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ff172ff9-04e5-4286-b498-dfa958b1473c) old=Port_Binding(mac=['fa:16:3e:31:0c:94 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:44:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:43.322 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ff172ff9-04e5-4286-b498-dfa958b1473c in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e updated#033[00m
Oct  7 10:44:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:43.323 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:44:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:43.324 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[66fb655f-860b-4594-864b-a96f80c15e8f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:43 np0005473739 nova_compute[259550]: 2025-10-07 14:44:43.441 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:44Z|00170|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:72:0b:2c 10.100.0.13
Oct  7 10:44:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2475: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.9 MiB/s rd, 19 KiB/s wr, 140 op/s
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.720 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.721 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.721 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.722 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.722 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.723 2 INFO nova.compute.manager [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Terminating instance#033[00m
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.724 2 DEBUG nova.compute.manager [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.727 2 DEBUG nova.compute.manager [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-changed-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.727 2 DEBUG nova.compute.manager [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing instance network info cache due to event network-changed-677a2058-6a17-4388-9011-69553854c197. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.728 2 DEBUG oslo_concurrency.lockutils [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.728 2 DEBUG oslo_concurrency.lockutils [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:44:45 np0005473739 nova_compute[259550]: 2025-10-07 14:44:45.728 2 DEBUG nova.network.neutron [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Refreshing network info cache for port 677a2058-6a17-4388-9011-69553854c197 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:44:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2476: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 113 op/s
Oct  7 10:44:46 np0005473739 kernel: tap677a2058-6a (unregistering): left promiscuous mode
Oct  7 10:44:46 np0005473739 NetworkManager[44949]: <info>  [1759848286.6213] device (tap677a2058-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:46Z|01474|binding|INFO|Releasing lport 677a2058-6a17-4388-9011-69553854c197 from this chassis (sb_readonly=0)
Oct  7 10:44:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:46Z|01475|binding|INFO|Setting lport 677a2058-6a17-4388-9011-69553854c197 down in Southbound
Oct  7 10:44:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:46Z|01476|binding|INFO|Removing iface tap677a2058-6a ovn-installed in OVS
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:46.640 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:72:0b:2c 10.100.0.13'], port_security=['fa:16:3e:72:0b:2c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '489f6a90-6b26-4b8b-aa3e-095d1d8df333', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'cc3234eb-21e6-48c7-8919-4558ed1fcfca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=bd2a2dd7-3564-4953-a9fd-9479810a55c3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=677a2058-6a17-4388-9011-69553854c197) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:44:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:46.641 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 677a2058-6a17-4388-9011-69553854c197 in datapath 5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 unbound from our chassis#033[00m
Oct  7 10:44:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:46.642 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:44:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:46.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[32e353a1-f8eb-486c-a37a-c466dd63a675]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:46.645 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 namespace which is not needed anymore#033[00m
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:46 np0005473739 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Deactivated successfully.
Oct  7 10:44:46 np0005473739 systemd[1]: machine-qemu\x2d165\x2dinstance\x2d00000084.scope: Consumed 14.010s CPU time.
Oct  7 10:44:46 np0005473739 systemd-machined[214580]: Machine qemu-165-instance-00000084 terminated.
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.768 2 INFO nova.virt.libvirt.driver [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Instance destroyed successfully.#033[00m
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.770 2 DEBUG nova.objects.instance [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 489f6a90-6b26-4b8b-aa3e-095d1d8df333 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.783 2 DEBUG nova.virt.libvirt.vif [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:44:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-992946339',display_name='tempest-TestNetworkBasicOps-server-992946339',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-992946339',id=132,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBNb4KI5K3ESp/qtDfTCTovy9nAGo/33FtwIKSe6Yo7uBCrstvg9OTDvMktEqMWbObIphCTkLTVovrjRh9e99psr4PmzBYLqjNAws3HaKHjxoqr6GDuKbnjMBh642Y3hcvA==',key_name='tempest-TestNetworkBasicOps-2088718198',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:44:20Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-jb8fwwnm',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:44:20Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=489f6a90-6b26-4b8b-aa3e-095d1d8df333,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.784 2 DEBUG nova.network.os_vif_util [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "1.2.3.4", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.785 2 DEBUG nova.network.os_vif_util [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.785 2 DEBUG os_vif [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.789 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap677a2058-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:44:46 np0005473739 nova_compute[259550]: 2025-10-07 14:44:46.796 2 INFO os_vif [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:72:0b:2c,bridge_name='br-int',has_traffic_filtering=True,id=677a2058-6a17-4388-9011-69553854c197,network=Network(5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap677a2058-6a')#033[00m
Oct  7 10:44:46 np0005473739 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [NOTICE]   (402164) : haproxy version is 2.8.14-c23fe91
Oct  7 10:44:46 np0005473739 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [NOTICE]   (402164) : path to executable is /usr/sbin/haproxy
Oct  7 10:44:46 np0005473739 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [WARNING]  (402164) : Exiting Master process...
Oct  7 10:44:46 np0005473739 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [ALERT]    (402164) : Current worker (402166) exited with code 143 (Terminated)
Oct  7 10:44:46 np0005473739 neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0[402160]: [WARNING]  (402164) : All workers exited. Exiting... (0)
Oct  7 10:44:46 np0005473739 systemd[1]: libpod-feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6.scope: Deactivated successfully.
Oct  7 10:44:46 np0005473739 podman[402855]: 2025-10-07 14:44:46.882121431 +0000 UTC m=+0.146716941 container died feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct  7 10:44:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6-userdata-shm.mount: Deactivated successfully.
Oct  7 10:44:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-23924fe35529e2e0cddf4b199e3d7462d2dc12d09d3b0995ef3a978cbf4b85b8-merged.mount: Deactivated successfully.
Oct  7 10:44:47 np0005473739 podman[402855]: 2025-10-07 14:44:47.533366991 +0000 UTC m=+0.797962491 container cleanup feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:44:47 np0005473739 systemd[1]: libpod-conmon-feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6.scope: Deactivated successfully.
Oct  7 10:44:47 np0005473739 podman[402912]: 2025-10-07 14:44:47.661144637 +0000 UTC m=+0.105806863 container remove feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:44:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.668 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[289059dd-4b67-45f8-bd7c-edaa5243ccc6]: (4, ('Tue Oct  7 02:44:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 (feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6)\nfeedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6\nTue Oct  7 02:44:47 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 (feedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6)\nfeedfdeda3cfb21668a6b8139488b3856933bd275d7697f77bcb9a67018bf8c6\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.670 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5045bc36-2448-4702-a497-88f23ca7eee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.672 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5daef5b9-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:44:47 np0005473739 nova_compute[259550]: 2025-10-07 14:44:47.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:47 np0005473739 kernel: tap5daef5b9-00: left promiscuous mode
Oct  7 10:44:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.677 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3ea25d5b-2427-4081-888a-2c1feaf8605e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:47 np0005473739 nova_compute[259550]: 2025-10-07 14:44:47.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.706 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cab14aad-da9e-43db-ae35-1b42763c5751]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.709 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4cd1788-d628-45b5-897d-67e01382fd15]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.729 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3ce141-2030-45b3-af3b-283fbdcc91e5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 884065, 'reachable_time': 18791, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 402930, 'error': None, 'target': 'ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:47 np0005473739 systemd[1]: run-netns-ovnmeta\x2d5daef5b9\x2d08c8\x2d4a4d\x2db8b3\x2d4bcbbb7ff9e0.mount: Deactivated successfully.
Oct  7 10:44:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.734 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:44:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:47.734 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[34efa42b-7056-43a5-ba09-1cafa5eada8b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:44:48 np0005473739 nova_compute[259550]: 2025-10-07 14:44:48.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:44:48 np0005473739 nova_compute[259550]: 2025-10-07 14:44:48.207 2 DEBUG nova.network.neutron [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updated VIF entry in instance network info cache for port 677a2058-6a17-4388-9011-69553854c197. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:44:48 np0005473739 nova_compute[259550]: 2025-10-07 14:44:48.208 2 DEBUG nova.network.neutron [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updating instance_info_cache with network_info: [{"id": "677a2058-6a17-4388-9011-69553854c197", "address": "fa:16:3e:72:0b:2c", "network": {"id": "5daef5b9-08c8-4a4d-b8b3-4bcbbb7ff9e0", "bridge": "br-int", "label": "tempest-network-smoke--1430895330", "subnets": [{"cidr": "10.100.0.0/28", "dns": [{"address": "9.8.7.6", "type": "dns", "version": 4, "meta": {}}], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap677a2058-6a", "ovs_interfaceid": "677a2058-6a17-4388-9011-69553854c197", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:44:48 np0005473739 nova_compute[259550]: 2025-10-07 14:44:48.240 2 DEBUG oslo_concurrency.lockutils [req-c7c12ce4-150d-4898-8409-49790055c207 req-7ec9b6af-620a-4fbb-8fe9-b5e5ec722d0e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-489f6a90-6b26-4b8b-aa3e-095d1d8df333" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:44:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2477: 305 pgs: 305 active+clean; 200 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 15 KiB/s wr, 112 op/s
Oct  7 10:44:49 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:49Z|00171|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:32:1b:50 10.100.0.11
Oct  7 10:44:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2478: 305 pgs: 305 active+clean; 151 MiB data, 964 MiB used, 59 GiB / 60 GiB avail; 718 KiB/s rd, 23 KiB/s wr, 51 op/s
Oct  7 10:44:50 np0005473739 nova_compute[259550]: 2025-10-07 14:44:50.918 2 INFO nova.virt.libvirt.driver [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Deleting instance files /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333_del
Oct  7 10:44:50 np0005473739 nova_compute[259550]: 2025-10-07 14:44:50.920 2 INFO nova.virt.libvirt.driver [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Deletion of /var/lib/nova/instances/489f6a90-6b26-4b8b-aa3e-095d1d8df333_del complete
Oct  7 10:44:50 np0005473739 nova_compute[259550]: 2025-10-07 14:44:50.985 2 INFO nova.compute.manager [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Took 5.26 seconds to destroy the instance on the hypervisor.
Oct  7 10:44:50 np0005473739 nova_compute[259550]: 2025-10-07 14:44:50.986 2 DEBUG oslo.service.loopingcall [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:44:50 np0005473739 nova_compute[259550]: 2025-10-07 14:44:50.986 2 DEBUG nova.compute.manager [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:44:50 np0005473739 nova_compute[259550]: 2025-10-07 14:44:50.987 2 DEBUG nova.network.neutron [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:44:51 np0005473739 nova_compute[259550]: 2025-10-07 14:44:51.791 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:44:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:52.112 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:31:0c:94 10.100.0.2 2001:db8:0:1:f816:3eff:fe31:c94 2001:db8::f816:3eff:fe31:c94'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fe31:c94/64 2001:db8::f816:3eff:fe31:c94/64', 'neutron:device_id': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=ff172ff9-04e5-4286-b498-dfa958b1473c) old=Port_Binding(mac=['fa:16:3e:31:0c:94 10.100.0.2 2001:db8::f816:3eff:fe31:c94'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fe31:c94/64', 'neutron:device_id': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 
'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:44:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:52.113 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port ff172ff9-04e5-4286-b498-dfa958b1473c in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e updated
Oct  7 10:44:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:52.114 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  7 10:44:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:44:52.115 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4102f13-22b8-4f9e-b038-82836459fa27]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:44:52 np0005473739 podman[403047]: 2025-10-07 14:44:52.202768515 +0000 UTC m=+0.063674084 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:44:52 np0005473739 podman[403055]: 2025-10-07 14:44:52.243793556 +0000 UTC m=+0.088049312 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  7 10:44:52 np0005473739 nova_compute[259550]: 2025-10-07 14:44:52.530 2 DEBUG nova.network.neutron [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:44:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2479: 305 pgs: 305 active+clean; 121 MiB data, 949 MiB used, 59 GiB / 60 GiB avail; 668 KiB/s rd, 24 KiB/s wr, 83 op/s
Oct  7 10:44:52 np0005473739 nova_compute[259550]: 2025-10-07 14:44:52.551 2 INFO nova.compute.manager [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Took 1.56 seconds to deallocate network for instance.
Oct  7 10:44:52 np0005473739 nova_compute[259550]: 2025-10-07 14:44:52.604 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:44:52 np0005473739 nova_compute[259550]: 2025-10-07 14:44:52.605 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:44:52 np0005473739 nova_compute[259550]: 2025-10-07 14:44:52.628 2 DEBUG nova.compute.manager [req-f1926a45-f8bf-491d-b002-f00ab61f7e0d req-3b478405-0f6d-4e76-b7ef-77e76d9c6470 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Received event network-vif-deleted-677a2058-6a17-4388-9011-69553854c197 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:44:52 np0005473739 nova_compute[259550]: 2025-10-07 14:44:52.686 2 DEBUG oslo_concurrency.processutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:44:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:44:52 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 33e4a086-9953-407e-b033-a2290e691009 does not exist
Oct  7 10:44:52 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev dd7092c9-5e7b-4a54-8f6d-fd3e6a75803a does not exist
Oct  7 10:44:52 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 71a79f65-7663-416f-aa74-259e0a0e1edc does not exist
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:44:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:44:53 np0005473739 nova_compute[259550]: 2025-10-07 14:44:53.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:44:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:44:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014197318' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:44:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:44:53 np0005473739 nova_compute[259550]: 2025-10-07 14:44:53.145 2 DEBUG oslo_concurrency.processutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.458s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:44:53 np0005473739 nova_compute[259550]: 2025-10-07 14:44:53.152 2 DEBUG nova.compute.provider_tree [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:44:53 np0005473739 nova_compute[259550]: 2025-10-07 14:44:53.168 2 DEBUG nova.scheduler.client.report [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:44:53 np0005473739 nova_compute[259550]: 2025-10-07 14:44:53.188 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.583s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:44:53 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:44:53 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:44:53 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:44:53 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:44:53 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:44:53 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:44:53 np0005473739 nova_compute[259550]: 2025-10-07 14:44:53.224 2 INFO nova.scheduler.client.report [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 489f6a90-6b26-4b8b-aa3e-095d1d8df333
Oct  7 10:44:53 np0005473739 nova_compute[259550]: 2025-10-07 14:44:53.291 2 DEBUG oslo_concurrency.lockutils [None req-5335d8dd-faf6-447e-aa56-235b04cf7c8d 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "489f6a90-6b26-4b8b-aa3e-095d1d8df333" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 7.570s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:44:53 np0005473739 podman[403393]: 2025-10-07 14:44:53.560127194 +0000 UTC m=+0.054777997 container create 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 10:44:53 np0005473739 systemd[1]: Started libpod-conmon-1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8.scope.
Oct  7 10:44:53 np0005473739 podman[403393]: 2025-10-07 14:44:53.528643768 +0000 UTC m=+0.023294601 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:44:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:44:53 np0005473739 podman[403393]: 2025-10-07 14:44:53.669039729 +0000 UTC m=+0.163690562 container init 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:44:53 np0005473739 podman[403393]: 2025-10-07 14:44:53.677380451 +0000 UTC m=+0.172031254 container start 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:44:53 np0005473739 mystifying_tu[403410]: 167 167
Oct  7 10:44:53 np0005473739 systemd[1]: libpod-1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8.scope: Deactivated successfully.
Oct  7 10:44:53 np0005473739 conmon[403410]: conmon 1c03e18050742a9c3871 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8.scope/container/memory.events
Oct  7 10:44:53 np0005473739 podman[403393]: 2025-10-07 14:44:53.690498539 +0000 UTC m=+0.185149362 container attach 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:44:53 np0005473739 podman[403393]: 2025-10-07 14:44:53.690994763 +0000 UTC m=+0.185645596 container died 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:44:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-84ddfbfefeedf8572146f4c416ac9c073bc1793965a93f3f136979121f808990-merged.mount: Deactivated successfully.
Oct  7 10:44:53 np0005473739 podman[403393]: 2025-10-07 14:44:53.73714687 +0000 UTC m=+0.231797673 container remove 1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_tu, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:44:53 np0005473739 systemd[1]: libpod-conmon-1c03e18050742a9c38714f7b914382405c80afc0dce750d93901bc543345bbd8.scope: Deactivated successfully.
Oct  7 10:44:53 np0005473739 podman[403433]: 2025-10-07 14:44:53.936402566 +0000 UTC m=+0.069716294 container create a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:44:53 np0005473739 podman[403433]: 2025-10-07 14:44:53.888296137 +0000 UTC m=+0.021609885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:44:53 np0005473739 systemd[1]: Started libpod-conmon-a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78.scope.
Oct  7 10:44:54 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:44:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:54 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:54 np0005473739 podman[403433]: 2025-10-07 14:44:54.041546501 +0000 UTC m=+0.174860239 container init a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:44:54 np0005473739 podman[403433]: 2025-10-07 14:44:54.050699713 +0000 UTC m=+0.184013441 container start a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:44:54 np0005473739 podman[403433]: 2025-10-07 14:44:54.054463454 +0000 UTC m=+0.187777182 container attach a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 10:44:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2480: 305 pgs: 305 active+clean; 121 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 590 KiB/s rd, 31 KiB/s wr, 77 op/s
Oct  7 10:44:54 np0005473739 nova_compute[259550]: 2025-10-07 14:44:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:44:55 np0005473739 sad_wu[403449]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:44:55 np0005473739 sad_wu[403449]: --> relative data size: 1.0
Oct  7 10:44:55 np0005473739 sad_wu[403449]: --> All data devices are unavailable
Oct  7 10:44:55 np0005473739 systemd[1]: libpod-a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78.scope: Deactivated successfully.
Oct  7 10:44:55 np0005473739 systemd[1]: libpod-a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78.scope: Consumed 1.070s CPU time.
Oct  7 10:44:55 np0005473739 podman[403433]: 2025-10-07 14:44:55.187328936 +0000 UTC m=+1.320642664 container died a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:44:55 np0005473739 systemd[1]: var-lib-containers-storage-overlay-74a421e3ecf1420828fb70aec85052fc02a549c67de9bfc4f4bb81c486a68c77-merged.mount: Deactivated successfully.
Oct  7 10:44:55 np0005473739 podman[403433]: 2025-10-07 14:44:55.249661693 +0000 UTC m=+1.382975421 container remove a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:44:55 np0005473739 systemd[1]: libpod-conmon-a24978e74d61831996cb27327fe091e363b7d25a23ffcf00993e154a24134f78.scope: Deactivated successfully.
Oct  7 10:44:55 np0005473739 podman[403631]: 2025-10-07 14:44:55.860756976 +0000 UTC m=+0.039081890 container create b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:44:55 np0005473739 systemd[1]: Started libpod-conmon-b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926.scope.
Oct  7 10:44:55 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:44:55 np0005473739 podman[403631]: 2025-10-07 14:44:55.843481677 +0000 UTC m=+0.021806611 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:44:55 np0005473739 podman[403631]: 2025-10-07 14:44:55.942446148 +0000 UTC m=+0.120771092 container init b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:44:55 np0005473739 podman[403631]: 2025-10-07 14:44:55.948268402 +0000 UTC m=+0.126593316 container start b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:44:55 np0005473739 podman[403631]: 2025-10-07 14:44:55.951315763 +0000 UTC m=+0.129640677 container attach b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:44:55 np0005473739 crazy_driscoll[403647]: 167 167
Oct  7 10:44:55 np0005473739 systemd[1]: libpod-b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926.scope: Deactivated successfully.
Oct  7 10:44:55 np0005473739 podman[403631]: 2025-10-07 14:44:55.954478327 +0000 UTC m=+0.132803241 container died b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:44:55 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f48645466a88738f2876d9f4b425e97f67c615a0c3da1869072129a11c730440-merged.mount: Deactivated successfully.
Oct  7 10:44:55 np0005473739 podman[403631]: 2025-10-07 14:44:55.991061359 +0000 UTC m=+0.169386273 container remove b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=crazy_driscoll, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:44:55 np0005473739 systemd[1]: libpod-conmon-b8fd56fb954c6e0e32001d8d1d27cecc473416ec1b66a448d3c7087661a08926.scope: Deactivated successfully.
Oct  7 10:44:56 np0005473739 podman[403671]: 2025-10-07 14:44:56.171534186 +0000 UTC m=+0.045761267 container create f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:44:56 np0005473739 systemd[1]: Started libpod-conmon-f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20.scope.
Oct  7 10:44:56 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:44:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321bdfd1240132c6fed542f0070e34f6b5c4472ecaa895da6fdc78ca1c1c40f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321bdfd1240132c6fed542f0070e34f6b5c4472ecaa895da6fdc78ca1c1c40f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321bdfd1240132c6fed542f0070e34f6b5c4472ecaa895da6fdc78ca1c1c40f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/321bdfd1240132c6fed542f0070e34f6b5c4472ecaa895da6fdc78ca1c1c40f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:56 np0005473739 podman[403671]: 2025-10-07 14:44:56.15474085 +0000 UTC m=+0.028967951 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:44:56 np0005473739 podman[403671]: 2025-10-07 14:44:56.26044992 +0000 UTC m=+0.134677011 container init f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:44:56 np0005473739 podman[403671]: 2025-10-07 14:44:56.267501197 +0000 UTC m=+0.141728278 container start f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:44:56 np0005473739 podman[403671]: 2025-10-07 14:44:56.272947112 +0000 UTC m=+0.147174213 container attach f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:44:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2481: 305 pgs: 305 active+clean; 122 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 31 KiB/s wr, 73 op/s
Oct  7 10:44:56 np0005473739 nova_compute[259550]: 2025-10-07 14:44:56.793 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.085 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.086 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.135 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:44:57 np0005473739 interesting_carver[403687]: {
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:    "0": [
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:        {
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "devices": [
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "/dev/loop3"
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            ],
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_name": "ceph_lv0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_size": "21470642176",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "name": "ceph_lv0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "tags": {
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.cluster_name": "ceph",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.crush_device_class": "",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.encrypted": "0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.osd_id": "0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.type": "block",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.vdo": "0"
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            },
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "type": "block",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "vg_name": "ceph_vg0"
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:        }
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:    ],
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:    "1": [
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:        {
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "devices": [
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "/dev/loop4"
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            ],
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_name": "ceph_lv1",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_size": "21470642176",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "name": "ceph_lv1",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "tags": {
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.cluster_name": "ceph",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.crush_device_class": "",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.encrypted": "0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.osd_id": "1",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.type": "block",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.vdo": "0"
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            },
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "type": "block",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "vg_name": "ceph_vg1"
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:        }
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:    ],
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:    "2": [
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:        {
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "devices": [
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "/dev/loop5"
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            ],
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_name": "ceph_lv2",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_size": "21470642176",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "name": "ceph_lv2",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "tags": {
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.cluster_name": "ceph",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.crush_device_class": "",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.encrypted": "0",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.osd_id": "2",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.type": "block",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:                "ceph.vdo": "0"
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            },
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "type": "block",
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:            "vg_name": "ceph_vg2"
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:        }
Oct  7 10:44:57 np0005473739 interesting_carver[403687]:    ]
Oct  7 10:44:57 np0005473739 interesting_carver[403687]: }
Oct  7 10:44:57 np0005473739 systemd[1]: libpod-f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20.scope: Deactivated successfully.
Oct  7 10:44:57 np0005473739 podman[403671]: 2025-10-07 14:44:57.170973412 +0000 UTC m=+1.045200523 container died f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:44:57 np0005473739 systemd[1]: var-lib-containers-storage-overlay-321bdfd1240132c6fed542f0070e34f6b5c4472ecaa895da6fdc78ca1c1c40f4-merged.mount: Deactivated successfully.
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.213 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.215 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.224 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.224 2 INFO nova.compute.claims [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:44:57 np0005473739 podman[403671]: 2025-10-07 14:44:57.229720653 +0000 UTC m=+1.103947744 container remove f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_carver, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 10:44:57 np0005473739 systemd[1]: libpod-conmon-f4bacfe1bd2268ac51d79b4194f451ad70affa905dbe8ea61fa444cd33914e20.scope: Deactivated successfully.
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.353 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:44:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:44:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1526407548' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:44:57 np0005473739 podman[403869]: 2025-10-07 14:44:57.873564447 +0000 UTC m=+0.043769064 container create b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.883 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.530s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.891 2 DEBUG nova.compute.provider_tree [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:44:57 np0005473739 systemd[1]: Started libpod-conmon-b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae.scope.
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.911 2 DEBUG nova.scheduler.client.report [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.932 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.717s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.933 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:44:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:44:57 np0005473739 podman[403869]: 2025-10-07 14:44:57.855372774 +0000 UTC m=+0.025577401 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:44:57 np0005473739 podman[403869]: 2025-10-07 14:44:57.960301293 +0000 UTC m=+0.130505900 container init b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 10:44:57 np0005473739 podman[403869]: 2025-10-07 14:44:57.967162595 +0000 UTC m=+0.137367202 container start b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 10:44:57 np0005473739 podman[403869]: 2025-10-07 14:44:57.971205193 +0000 UTC m=+0.141409810 container attach b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct  7 10:44:57 np0005473739 wonderful_wescoff[403887]: 167 167
Oct  7 10:44:57 np0005473739 systemd[1]: libpod-b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae.scope: Deactivated successfully.
Oct  7 10:44:57 np0005473739 podman[403869]: 2025-10-07 14:44:57.974417168 +0000 UTC m=+0.144621795 container died b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.988 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:44:57 np0005473739 nova_compute[259550]: 2025-10-07 14:44:57.988 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:44:57 np0005473739 systemd[1]: var-lib-containers-storage-overlay-51aa93e43371a2fb144fa7469f6831d0cc0bb132a1ed5267db33d423510cb91a-merged.mount: Deactivated successfully.
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.014 2 INFO nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:44:58 np0005473739 podman[403869]: 2025-10-07 14:44:58.016674081 +0000 UTC m=+0.186878678 container remove b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_wescoff, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.032 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:44:58 np0005473739 systemd[1]: libpod-conmon-b2758d14066cb7f6b26f91a605d4515306be85431d8aeccacbb17b6ecceb61ae.scope: Deactivated successfully.
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.141 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.142 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.142 2 INFO nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Creating image(s)#033[00m
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #120. Immutable memtables: 0.
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.145060) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 71] Flushing memtable with next log file: 120
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298145105, "job": 71, "event": "flush_started", "num_memtables": 1, "num_entries": 527, "num_deletes": 258, "total_data_size": 499393, "memory_usage": 509496, "flush_reason": "Manual Compaction"}
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 71] Level-0 flush table #121: started
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298150287, "cf_name": "default", "job": 71, "event": "table_file_creation", "file_number": 121, "file_size": 495149, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 51834, "largest_seqno": 52360, "table_properties": {"data_size": 492134, "index_size": 987, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 7177, "raw_average_key_size": 18, "raw_value_size": 485915, "raw_average_value_size": 1285, "num_data_blocks": 43, "num_entries": 378, "num_filter_entries": 378, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848270, "oldest_key_time": 1759848270, "file_creation_time": 1759848298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 121, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 71] Flush lasted 5276 microseconds, and 2739 cpu microseconds.
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.150334) [db/flush_job.cc:967] [default] [JOB 71] Level-0 flush table #121: 495149 bytes OK
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.150356) [db/memtable_list.cc:519] [default] Level-0 commit table #121 started
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.151606) [db/memtable_list.cc:722] [default] Level-0 commit table #121: memtable #1 done
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.151624) EVENT_LOG_v1 {"time_micros": 1759848298151618, "job": 71, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.151644) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 71] Try to delete WAL files size 496310, prev total WAL file size 496310, number of live WAL files 2.
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000117.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.152601) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032303131' seq:72057594037927935, type:22 .. '6C6F676D0032323634' seq:0, type:0; will stop at (end)
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 72] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 71 Base level 0, inputs: [121(483KB)], [119(9808KB)]
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298152651, "job": 72, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [121], "files_L6": [119], "score": -1, "input_data_size": 10538916, "oldest_snapshot_seqno": -1}
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.173 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.202 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 72] Generated table #122: 7220 keys, 10424762 bytes, temperature: kUnknown
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298214365, "cf_name": "default", "job": 72, "event": "table_file_creation", "file_number": 122, "file_size": 10424762, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10375677, "index_size": 29901, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18117, "raw_key_size": 188663, "raw_average_key_size": 26, "raw_value_size": 10245805, "raw_average_value_size": 1419, "num_data_blocks": 1171, "num_entries": 7220, "num_filter_entries": 7220, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848298, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 122, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.214622) [db/compaction/compaction_job.cc:1663] [default] [JOB 72] Compacted 1@0 + 1@6 files to L6 => 10424762 bytes
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.216076) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.6 rd, 168.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.6 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(42.3) write-amplify(21.1) OK, records in: 7752, records dropped: 532 output_compression: NoCompression
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.216097) EVENT_LOG_v1 {"time_micros": 1759848298216087, "job": 72, "event": "compaction_finished", "compaction_time_micros": 61792, "compaction_time_cpu_micros": 26104, "output_level": 6, "num_output_files": 1, "total_output_size": 10424762, "num_input_records": 7752, "num_output_records": 7220, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000121.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298216291, "job": 72, "event": "table_file_deletion", "file_number": 121}
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000119.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848298217816, "job": 72, "event": "table_file_deletion", "file_number": 119}
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.152176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.217949) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.217958) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.217960) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.217961) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:58 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:44:58.217963) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:44:58 np0005473739 podman[403917]: 2025-10-07 14:44:58.222224385 +0000 UTC m=+0.054237183 container create 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.233 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.239 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:58 np0005473739 systemd[1]: Started libpod-conmon-67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00.scope.
Oct  7 10:44:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:44:58Z|01477|binding|INFO|Releasing lport bb603baf-bbde-4821-a893-8713cfab0527 from this chassis (sb_readonly=0)
Oct  7 10:44:58 np0005473739 podman[403917]: 2025-10-07 14:44:58.199265944 +0000 UTC m=+0.031278792 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:44:58 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:44:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e991d5602cd84285f17c7710c5c0c265e561df83e2a11ec2e44b423af5d40067/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e991d5602cd84285f17c7710c5c0c265e561df83e2a11ec2e44b423af5d40067/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e991d5602cd84285f17c7710c5c0c265e561df83e2a11ec2e44b423af5d40067/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e991d5602cd84285f17c7710c5c0c265e561df83e2a11ec2e44b423af5d40067/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:44:58 np0005473739 podman[403917]: 2025-10-07 14:44:58.329999179 +0000 UTC m=+0.162012007 container init 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.336 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.338 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.339 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.340 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:58 np0005473739 podman[403917]: 2025-10-07 14:44:58.342657566 +0000 UTC m=+0.174670364 container start 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 10:44:58 np0005473739 podman[403917]: 2025-10-07 14:44:58.377322947 +0000 UTC m=+0.209335755 container attach 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.387 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.391 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6721439b-34d7-4282-bbcd-37424c3f2691_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.445 2 DEBUG nova.policy [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:44:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2482: 305 pgs: 305 active+clean; 122 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 31 KiB/s wr, 72 op/s
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.725 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6721439b-34d7-4282-bbcd-37424c3f2691_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.790 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.884 2 DEBUG nova.objects.instance [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 6721439b-34d7-4282-bbcd-37424c3f2691 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.916 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.917 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Ensure instance console log exists: /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.917 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.918 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.918 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:44:58 np0005473739 nova_compute[259550]: 2025-10-07 14:44:58.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]: {
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "osd_id": 2,
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "type": "bluestore"
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:    },
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "osd_id": 1,
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "type": "bluestore"
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:    },
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "osd_id": 0,
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:        "type": "bluestore"
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]:    }
Oct  7 10:44:59 np0005473739 pensive_hellman[403981]: }
Oct  7 10:44:59 np0005473739 systemd[1]: libpod-67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00.scope: Deactivated successfully.
Oct  7 10:44:59 np0005473739 systemd[1]: libpod-67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00.scope: Consumed 1.129s CPU time.
Oct  7 10:44:59 np0005473739 podman[404125]: 2025-10-07 14:44:59.526959175 +0000 UTC m=+0.023640839 container died 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 10:44:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e991d5602cd84285f17c7710c5c0c265e561df83e2a11ec2e44b423af5d40067-merged.mount: Deactivated successfully.
Oct  7 10:44:59 np0005473739 podman[404125]: 2025-10-07 14:44:59.589729694 +0000 UTC m=+0.086411348 container remove 67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_hellman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:44:59 np0005473739 systemd[1]: libpod-conmon-67d767b2c57a1b496ae23819a01ee168717c3ac2642198f18e185b9aeaa7fb00.scope: Deactivated successfully.
Oct  7 10:44:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:44:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:44:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:44:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:44:59 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 668fb5dd-f230-499a-8367-d125a9eed801 does not exist
Oct  7 10:44:59 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev be1b15d7-42ef-45f4-9da3-916d7b42d20a does not exist
Oct  7 10:44:59 np0005473739 nova_compute[259550]: 2025-10-07 14:44:59.649 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Successfully created port: 54bf17d9-ad25-4326-b981-fb4fe6afaf7c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:44:59 np0005473739 nova_compute[259550]: 2025-10-07 14:44:59.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:45:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:00.076 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:00.077 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:00.078 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:00 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:45:00 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:45:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2483: 305 pgs: 305 active+clean; 151 MiB data, 946 MiB used, 59 GiB / 60 GiB avail; 565 KiB/s rd, 1.3 MiB/s wr, 86 op/s
Oct  7 10:45:00 np0005473739 nova_compute[259550]: 2025-10-07 14:45:00.965 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Successfully updated port: 54bf17d9-ad25-4326-b981-fb4fe6afaf7c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:45:00 np0005473739 nova_compute[259550]: 2025-10-07 14:45:00.982 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:00 np0005473739 nova_compute[259550]: 2025-10-07 14:45:00.982 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:00 np0005473739 nova_compute[259550]: 2025-10-07 14:45:00.983 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:45:01 np0005473739 nova_compute[259550]: 2025-10-07 14:45:01.118 2 DEBUG nova.compute.manager [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:01 np0005473739 nova_compute[259550]: 2025-10-07 14:45:01.118 2 DEBUG nova.compute.manager [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing instance network info cache due to event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:45:01 np0005473739 nova_compute[259550]: 2025-10-07 14:45:01.118 2 DEBUG oslo_concurrency.lockutils [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:01 np0005473739 nova_compute[259550]: 2025-10-07 14:45:01.224 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:45:01 np0005473739 nova_compute[259550]: 2025-10-07 14:45:01.766 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848286.7657456, 489f6a90-6b26-4b8b-aa3e-095d1d8df333 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:01 np0005473739 nova_compute[259550]: 2025-10-07 14:45:01.766 2 INFO nova.compute.manager [-] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:45:01 np0005473739 nova_compute[259550]: 2025-10-07 14:45:01.793 2 DEBUG nova.compute.manager [None req-b3932491-802b-48df-ae99-bc6c3c17fb7d - - - - - -] [instance: 489f6a90-6b26-4b8b-aa3e-095d1d8df333] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:01 np0005473739 nova_compute[259550]: 2025-10-07 14:45:01.798 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2484: 305 pgs: 305 active+clean; 169 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 202 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct  7 10:45:02 np0005473739 nova_compute[259550]: 2025-10-07 14:45:02.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:03 np0005473739 nova_compute[259550]: 2025-10-07 14:45:03.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.410 2 DEBUG nova.network.neutron [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.432 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.432 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Instance network_info: |[{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.432 2 DEBUG oslo_concurrency.lockutils [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.433 2 DEBUG nova.network.neutron [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.436 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Start _get_guest_xml network_info=[{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.442 2 WARNING nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.451 2 DEBUG nova.virt.libvirt.host [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.452 2 DEBUG nova.virt.libvirt.host [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.457 2 DEBUG nova.virt.libvirt.host [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.457 2 DEBUG nova.virt.libvirt.host [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.458 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.458 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.458 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.459 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.459 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.459 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.459 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.459 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.460 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.460 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.460 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.460 2 DEBUG nova.virt.hardware [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.463 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2485: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  7 10:45:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:45:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3375971057' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.959 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.496s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.987 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:04 np0005473739 nova_compute[259550]: 2025-10-07 14:45:04.992 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:45:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2283799090' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.439 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.448s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.443 2 DEBUG nova.virt.libvirt.vif [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-965086068',display_name='tempest-TestGettingAddress-server-965086068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-965086068',id=133,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-0owwxica',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:58Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6721439b-34d7-4282-bbcd-37424c3f2691,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.444 2 DEBUG nova.network.os_vif_util [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.446 2 DEBUG nova.network.os_vif_util [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.448 2 DEBUG nova.objects.instance [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 6721439b-34d7-4282-bbcd-37424c3f2691 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.466 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  <uuid>6721439b-34d7-4282-bbcd-37424c3f2691</uuid>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  <name>instance-00000085</name>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-965086068</nova:name>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:45:04</nova:creationTime>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <nova:port uuid="54bf17d9-ad25-4326-b981-fb4fe6afaf7c">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe28:be4b" ipVersion="6"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe28:be4b" ipVersion="6"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <entry name="serial">6721439b-34d7-4282-bbcd-37424c3f2691</entry>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <entry name="uuid">6721439b-34d7-4282-bbcd-37424c3f2691</entry>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/6721439b-34d7-4282-bbcd-37424c3f2691_disk">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/6721439b-34d7-4282-bbcd-37424c3f2691_disk.config">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:28:be:4b"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <target dev="tap54bf17d9-ad"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/console.log" append="off"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:45:05 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:45:05 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:45:05 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:45:05 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.468 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Preparing to wait for external event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.468 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.468 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.468 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.469 2 DEBUG nova.virt.libvirt.vif [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-965086068',display_name='tempest-TestGettingAddress-server-965086068',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-965086068',id=133,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-0owwxica',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:44:58Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6721439b-34d7-4282-bbcd-37424c3f2691,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.470 2 DEBUG nova.network.os_vif_util [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.471 2 DEBUG nova.network.os_vif_util [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.471 2 DEBUG os_vif [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.472 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.477 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap54bf17d9-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap54bf17d9-ad, col_values=(('external_ids', {'iface-id': '54bf17d9-ad25-4326-b981-fb4fe6afaf7c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:28:be:4b', 'vm-uuid': '6721439b-34d7-4282-bbcd-37424c3f2691'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:05 np0005473739 NetworkManager[44949]: <info>  [1759848305.4809] manager: (tap54bf17d9-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/591)
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.487 2 INFO os_vif [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad')#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.551 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.551 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.552 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:28:be:4b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.552 2 INFO nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Using config drive#033[00m
Oct  7 10:45:05 np0005473739 nova_compute[259550]: 2025-10-07 14:45:05.576 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:06 np0005473739 nova_compute[259550]: 2025-10-07 14:45:06.048 2 INFO nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Creating config drive at /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config#033[00m
Oct  7 10:45:06 np0005473739 nova_compute[259550]: 2025-10-07 14:45:06.054 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphymbhq_g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:06 np0005473739 nova_compute[259550]: 2025-10-07 14:45:06.214 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphymbhq_g" returned: 0 in 0.160s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:06 np0005473739 nova_compute[259550]: 2025-10-07 14:45:06.237 2 DEBUG nova.storage.rbd_utils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6721439b-34d7-4282-bbcd-37424c3f2691_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:06 np0005473739 nova_compute[259550]: 2025-10-07 14:45:06.241 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config 6721439b-34d7-4282-bbcd-37424c3f2691_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2486: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  7 10:45:06 np0005473739 nova_compute[259550]: 2025-10-07 14:45:06.606 2 DEBUG oslo_concurrency.processutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config 6721439b-34d7-4282-bbcd-37424c3f2691_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.365s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:06 np0005473739 nova_compute[259550]: 2025-10-07 14:45:06.607 2 INFO nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Deleting local config drive /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691/disk.config because it was imported into RBD.#033[00m
Oct  7 10:45:06 np0005473739 kernel: tap54bf17d9-ad: entered promiscuous mode
Oct  7 10:45:06 np0005473739 NetworkManager[44949]: <info>  [1759848306.6599] manager: (tap54bf17d9-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/592)
Oct  7 10:45:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:06Z|01478|binding|INFO|Claiming lport 54bf17d9-ad25-4326-b981-fb4fe6afaf7c for this chassis.
Oct  7 10:45:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:06Z|01479|binding|INFO|54bf17d9-ad25-4326-b981-fb4fe6afaf7c: Claiming fa:16:3e:28:be:4b 10.100.0.9 2001:db8:0:1:f816:3eff:fe28:be4b 2001:db8::f816:3eff:fe28:be4b
Oct  7 10:45:06 np0005473739 nova_compute[259550]: 2025-10-07 14:45:06.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.672 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:be:4b 10.100.0.9 2001:db8:0:1:f816:3eff:fe28:be4b 2001:db8::f816:3eff:fe28:be4b'], port_security=['fa:16:3e:28:be:4b 10.100.0.9 2001:db8:0:1:f816:3eff:fe28:be4b 2001:db8::f816:3eff:fe28:be4b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe28:be4b/64 2001:db8::f816:3eff:fe28:be4b/64', 'neutron:device_id': '6721439b-34d7-4282-bbcd-37424c3f2691', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb742e76-491f-4442-ba5f-a90a2210bfd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=54bf17d9-ad25-4326-b981-fb4fe6afaf7c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.673 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e bound to our chassis#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.674 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e#033[00m
Oct  7 10:45:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:06Z|01480|binding|INFO|Setting lport 54bf17d9-ad25-4326-b981-fb4fe6afaf7c ovn-installed in OVS
Oct  7 10:45:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:06Z|01481|binding|INFO|Setting lport 54bf17d9-ad25-4326-b981-fb4fe6afaf7c up in Southbound
Oct  7 10:45:06 np0005473739 nova_compute[259550]: 2025-10-07 14:45:06.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.685 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[366499c8-c867-4ad3-ae6b-b56a985dc797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.686 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb5308f20-41 in ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.688 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb5308f20-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1b19d4aa-02c9-4bed-8c62-9bcb2068d84a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.690 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[405c57e1-0309-4d9a-a2b2-bc4b82a66734]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 systemd-udevd[404327]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.702 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[0bd67542-2b49-4b0b-b1f0-c36df37ea5ae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 systemd-machined[214580]: New machine qemu-167-instance-00000085.
Oct  7 10:45:06 np0005473739 NetworkManager[44949]: <info>  [1759848306.7096] device (tap54bf17d9-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:45:06 np0005473739 NetworkManager[44949]: <info>  [1759848306.7108] device (tap54bf17d9-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:45:06 np0005473739 systemd[1]: Started Virtual Machine qemu-167-instance-00000085.
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.731 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c4a83f2a-b7c9-4dbf-adb3-5ebe6a591aff]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.764 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[22f6d106-d949-4436-b2c4-2d99d59b2eec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.768 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[01a484bf-3f0c-4d01-a4fe-77302e1565a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 NetworkManager[44949]: <info>  [1759848306.7704] manager: (tapb5308f20-40): new Veth device (/org/freedesktop/NetworkManager/Devices/593)
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.798 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5b5f098b-68c2-45a7-8323-1cdd749ca9c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.801 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cdf3760c-bf5b-4cae-9444-29de2b3cb013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 NetworkManager[44949]: <info>  [1759848306.8288] device (tapb5308f20-40): carrier: link connected
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.834 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[124af055-3f2b-4b58-897a-0a9f3f08f1d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.850 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1032c42e-4b12-4552-b7e9-d7e031a92492]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5308f20-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889039, 'reachable_time': 17557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404359, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.865 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0139ffb9-16bc-4204-a056-138d52bdd31a]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe31:c94'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889039, 'tstamp': 889039}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 404360, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.881 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de6696d6-7ebc-4be5-b37c-e39ac7ae3e1b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5308f20-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889039, 'reachable_time': 17557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 404361, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.914 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e59e5dce-3cc1-486a-abdf-e23e370036d4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.974 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[773e6cd7-0458-43df-b91a-80eced2adb49]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.976 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5308f20-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.976 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.976 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5308f20-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:06 np0005473739 nova_compute[259550]: 2025-10-07 14:45:06.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:06 np0005473739 NetworkManager[44949]: <info>  [1759848306.9792] manager: (tapb5308f20-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/594)
Oct  7 10:45:06 np0005473739 kernel: tapb5308f20-40: entered promiscuous mode
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.982 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5308f20-40, col_values=(('external_ids', {'iface-id': 'ff172ff9-04e5-4286-b498-dfa958b1473c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:06Z|01482|binding|INFO|Releasing lport ff172ff9-04e5-4286-b498-dfa958b1473c from this chassis (sb_readonly=0)
Oct  7 10:45:06 np0005473739 nova_compute[259550]: 2025-10-07 14:45:06.983 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.985 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.986 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b58b9e09-e21f-4635-b5d2-4c24b9f37662]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.987 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e.pid.haproxy
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:45:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:06.988 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'env', 'PROCESS_TAG=haproxy-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.000 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:07 np0005473739 podman[404435]: 2025-10-07 14:45:07.421435063 +0000 UTC m=+0.100411941 container create 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:45:07 np0005473739 podman[404435]: 2025-10-07 14:45:07.345342551 +0000 UTC m=+0.024319449 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:45:07 np0005473739 systemd[1]: Started libpod-conmon-5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee.scope.
Oct  7 10:45:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:45:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c7939ed48599346f2fcd08610b2ba91dee0ea97c56d4a7dfd84d89d159ce37fa/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:45:07 np0005473739 podman[404435]: 2025-10-07 14:45:07.569389876 +0000 UTC m=+0.248366844 container init 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 10:45:07 np0005473739 podman[404435]: 2025-10-07 14:45:07.574753569 +0000 UTC m=+0.253730487 container start 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 10:45:07 np0005473739 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [NOTICE]   (404455) : New worker (404457) forked
Oct  7 10:45:07 np0005473739 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [NOTICE]   (404455) : Loading success.
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.645 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848307.6439042, 6721439b-34d7-4282-bbcd-37424c3f2691 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.646 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] VM Started (Lifecycle Event)#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.670 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.675 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848307.6442134, 6721439b-34d7-4282-bbcd-37424c3f2691 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.675 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.693 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.696 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.717 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.935 2 DEBUG nova.compute.manager [req-365a78cc-551e-403e-afb4-e40b2bb8f46c req-6581a1aa-8591-4add-92a5-16cccc414221 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.936 2 DEBUG oslo_concurrency.lockutils [req-365a78cc-551e-403e-afb4-e40b2bb8f46c req-6581a1aa-8591-4add-92a5-16cccc414221 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.936 2 DEBUG oslo_concurrency.lockutils [req-365a78cc-551e-403e-afb4-e40b2bb8f46c req-6581a1aa-8591-4add-92a5-16cccc414221 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.936 2 DEBUG oslo_concurrency.lockutils [req-365a78cc-551e-403e-afb4-e40b2bb8f46c req-6581a1aa-8591-4add-92a5-16cccc414221 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.937 2 DEBUG nova.compute.manager [req-365a78cc-551e-403e-afb4-e40b2bb8f46c req-6581a1aa-8591-4add-92a5-16cccc414221 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Processing event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.937 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.950 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.950 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848307.945736, 6721439b-34d7-4282-bbcd-37424c3f2691 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.951 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.956 2 INFO nova.virt.libvirt.driver [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Instance spawned successfully.#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.956 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:45:07 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:07.999 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.008 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.013 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.013 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.014 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.014 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.015 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.015 2 DEBUG nova.virt.libvirt.driver [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.060 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.090 2 INFO nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Took 9.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.090 2 DEBUG nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.160 2 INFO nova.compute.manager [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Took 10.96 seconds to build instance.#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.178 2 DEBUG oslo_concurrency.lockutils [None req-88b5df22-9dda-400e-84e4-34401fb2ecb7 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.092s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.323 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.324 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.324 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:45:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2487: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.677 2 DEBUG nova.network.neutron [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updated VIF entry in instance network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.678 2 DEBUG nova.network.neutron [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:08 np0005473739 nova_compute[259550]: 2025-10-07 14:45:08.714 2 DEBUG oslo_concurrency.lockutils [req-5f41e937-9b33-40ee-8a07-c3a119236b68 req-2be50423-2877-452c-8e37-dc05f2b95e1c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.045 2 DEBUG nova.compute.manager [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.045 2 DEBUG oslo_concurrency.lockutils [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.046 2 DEBUG oslo_concurrency.lockutils [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.046 2 DEBUG oslo_concurrency.lockutils [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.046 2 DEBUG nova.compute.manager [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] No waiting events found dispatching network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.046 2 WARNING nova.compute.manager [req-f92ce701-af4a-43a4-a061-d504080da8eb req-feb6ac02-60ab-4800-8f8f-96fa90b65b9b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received unexpected event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c for instance with vm_state active and task_state None.#033[00m
Oct  7 10:45:10 np0005473739 podman[404466]: 2025-10-07 14:45:10.07139718 +0000 UTC m=+0.060065418 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:45:10 np0005473739 podman[404467]: 2025-10-07 14:45:10.101790957 +0000 UTC m=+0.090459415 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.161 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [{"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.182 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.183 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.183 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.206 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.207 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.207 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.207 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.208 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2488: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 615 KiB/s rd, 1.8 MiB/s wr, 49 op/s
Oct  7 10:45:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:45:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1954414701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.684 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.770 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.771 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.777 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.777 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000083 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.969 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.970 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3282MB free_disk=59.921688079833984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.971 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:10 np0005473739 nova_compute[259550]: 2025-10-07 14:45:10.971 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:11 np0005473739 nova_compute[259550]: 2025-10-07 14:45:11.053 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 30241223-64c5-4a88-8ba2-ee340fe6cbd3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:45:11 np0005473739 nova_compute[259550]: 2025-10-07 14:45:11.054 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 6721439b-34d7-4282-bbcd-37424c3f2691 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:45:11 np0005473739 nova_compute[259550]: 2025-10-07 14:45:11.054 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:45:11 np0005473739 nova_compute[259550]: 2025-10-07 14:45:11.055 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:45:11 np0005473739 nova_compute[259550]: 2025-10-07 14:45:11.106 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:45:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/280385227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:45:11 np0005473739 nova_compute[259550]: 2025-10-07 14:45:11.574 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:11 np0005473739 nova_compute[259550]: 2025-10-07 14:45:11.578 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:45:11 np0005473739 nova_compute[259550]: 2025-10-07 14:45:11.593 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:45:11 np0005473739 nova_compute[259550]: 2025-10-07 14:45:11.625 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:45:11 np0005473739 nova_compute[259550]: 2025-10-07 14:45:11.625 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.654s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.185 2 DEBUG nova.compute.manager [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.185 2 DEBUG nova.compute.manager [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing instance network info cache due to event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.186 2 DEBUG oslo_concurrency.lockutils [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.186 2 DEBUG oslo_concurrency.lockutils [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.186 2 DEBUG nova.network.neutron [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.264 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.264 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.264 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.265 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.265 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.266 2 INFO nova.compute.manager [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Terminating instance#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.267 2 DEBUG nova.compute.manager [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.511 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=45, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=44) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:45:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.514 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:45:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2489: 305 pgs: 305 active+clean; 169 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 523 KiB/s wr, 67 op/s
Oct  7 10:45:12 np0005473739 kernel: tap3ac63514-77 (unregistering): left promiscuous mode
Oct  7 10:45:12 np0005473739 NetworkManager[44949]: <info>  [1759848312.7908] device (tap3ac63514-77): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:12Z|01483|binding|INFO|Releasing lport 3ac63514-77e7-4d94-a67c-94806ca3b58b from this chassis (sb_readonly=0)
Oct  7 10:45:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:12Z|01484|binding|INFO|Setting lport 3ac63514-77e7-4d94-a67c-94806ca3b58b down in Southbound
Oct  7 10:45:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:12Z|01485|binding|INFO|Removing iface tap3ac63514-77 ovn-installed in OVS
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.803 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.811 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:32:1b:50 10.100.0.11'], port_security=['fa:16:3e:32:1b:50 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '30241223-64c5-4a88-8ba2-ee340fe6cbd3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'ac40ef14492f40768b3852a40da26621', 'neutron:revision_number': '9', 'neutron:security_group_ids': '56b8c028-3f77-4dba-a2e9-4a1cb7c88d4a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=46c5eb1a-8a6c-4afe-ae4e-424959d231e5, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=3ac63514-77e7-4d94-a67c-94806ca3b58b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:45:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.812 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 3ac63514-77e7-4d94-a67c-94806ca3b58b in datapath 7c054d6f-68ec-4f0b-9362-221001cc6b67 unbound from our chassis#033[00m
Oct  7 10:45:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.814 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7c054d6f-68ec-4f0b-9362-221001cc6b67, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:45:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.816 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[af04e36e-d471-4ddf-9ac9-bde30ce35f99]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:12.816 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 namespace which is not needed anymore#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:12 np0005473739 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000083.scope: Deactivated successfully.
Oct  7 10:45:12 np0005473739 systemd[1]: machine-qemu\x2d166\x2dinstance\x2d00000083.scope: Consumed 14.692s CPU time.
Oct  7 10:45:12 np0005473739 systemd-machined[214580]: Machine qemu-166-instance-00000083 terminated.
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.899 2 INFO nova.virt.libvirt.driver [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Instance destroyed successfully.#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.900 2 DEBUG nova.objects.instance [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lazy-loading 'resources' on Instance uuid 30241223-64c5-4a88-8ba2-ee340fe6cbd3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.923 2 DEBUG nova.virt.libvirt.vif [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=True,config_drive='True',created_at=2025-10-07T14:43:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestShelveInstance-server-308957706',display_name='tempest-TestShelveInstance-server-308957706',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testshelveinstance-server-308957706',id=131,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIP0AMqBzzYmRrc/gnJzbbMyBAFmuqR5iC2+H5lS9P1NxQlnpTWhcfEteNAmj5N76nsDrvP+kS3KGhT6YiYXIeHex+K2fKyOb9r6ICTnlnIC+U793tuGi+owzMBnIl2+nw==',key_name='tempest-TestShelveInstance-2096115086',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:44:37Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='ac40ef14492f40768b3852a40da26621',ramdisk_id='',reservation_id='r-zct1uv2p',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',clean_attempts='1',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestShelveInstance-703128978',owner_user_name='tempest-TestShelveInstance-703128978-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:44:37Z,user_data=None,user_id='52bb2c10051444f181ee0572525fbe9d',uuid=30241223-64c5-4a88-8ba2-ee340fe6cbd3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.925 2 DEBUG nova.network.os_vif_util [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converting VIF {"id": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "address": "fa:16:3e:32:1b:50", "network": {"id": "7c054d6f-68ec-4f0b-9362-221001cc6b67", "bridge": "br-int", "label": "tempest-TestShelveInstance-1500718968-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "ac40ef14492f40768b3852a40da26621", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap3ac63514-77", "ovs_interfaceid": "3ac63514-77e7-4d94-a67c-94806ca3b58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.926 2 DEBUG nova.network.os_vif_util [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.926 2 DEBUG os_vif [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.930 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap3ac63514-77, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:45:12 np0005473739 nova_compute[259550]: 2025-10-07 14:45:12.938 2 INFO os_vif [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:32:1b:50,bridge_name='br-int',has_traffic_filtering=True,id=3ac63514-77e7-4d94-a67c-94806ca3b58b,network=Network(7c054d6f-68ec-4f0b-9362-221001cc6b67),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap3ac63514-77')#033[00m
Oct  7 10:45:12 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [NOTICE]   (402784) : haproxy version is 2.8.14-c23fe91
Oct  7 10:45:12 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [NOTICE]   (402784) : path to executable is /usr/sbin/haproxy
Oct  7 10:45:12 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [WARNING]  (402784) : Exiting Master process...
Oct  7 10:45:12 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [ALERT]    (402784) : Current worker (402786) exited with code 143 (Terminated)
Oct  7 10:45:12 np0005473739 neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67[402780]: [WARNING]  (402784) : All workers exited. Exiting... (0)
Oct  7 10:45:13 np0005473739 systemd[1]: libpod-6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5.scope: Deactivated successfully.
Oct  7 10:45:13 np0005473739 podman[404582]: 2025-10-07 14:45:13.007010859 +0000 UTC m=+0.082927195 container died 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:45:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5-userdata-shm.mount: Deactivated successfully.
Oct  7 10:45:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d869432837bfb3671cd5715ce91e9446229f7e47546d9d9b9ecd83497bca9d04-merged.mount: Deactivated successfully.
Oct  7 10:45:13 np0005473739 podman[404582]: 2025-10-07 14:45:13.05031712 +0000 UTC m=+0.126233446 container cleanup 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:45:13 np0005473739 systemd[1]: libpod-conmon-6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5.scope: Deactivated successfully.
Oct  7 10:45:13 np0005473739 nova_compute[259550]: 2025-10-07 14:45:13.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:13 np0005473739 podman[404630]: 2025-10-07 14:45:13.120381642 +0000 UTC m=+0.045543611 container remove 6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:45:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.127 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d287f618-b7c1-4912-8d1a-b4823dc9d91d]: (4, ('Tue Oct  7 02:45:12 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 (6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5)\n6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5\nTue Oct  7 02:45:13 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 (6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5)\n6bab4fb163156ca0b79f507c023b9f33230ba09018270b8499d2a6ae039e01c5\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.129 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d7ecce13-7855-4aeb-9b32-ae74708c63cc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.130 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7c054d6f-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:13 np0005473739 kernel: tap7c054d6f-60: left promiscuous mode
Oct  7 10:45:13 np0005473739 nova_compute[259550]: 2025-10-07 14:45:13.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:13 np0005473739 nova_compute[259550]: 2025-10-07 14:45:13.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.151 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[490ea24b-be20-4995-8f06-8ff18ae6f405]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.178 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a1a0db0-9950-467d-9467-94e179633768]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.179 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[960e5abe-413a-4120-a0da-6234ba0c1f4d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.195 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aa60b555-2e6d-413a-8ded-b6510337a229]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 885814, 'reachable_time': 28954, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 404644, 'error': None, 'target': 'ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.198 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7c054d6f-68ec-4f0b-9362-221001cc6b67 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:45:13 np0005473739 systemd[1]: run-netns-ovnmeta\x2d7c054d6f\x2d68ec\x2d4f0b\x2d9362\x2d221001cc6b67.mount: Deactivated successfully.
Oct  7 10:45:13 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:13.198 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[4d38e81d-8717-48b2-ad27-961ff0f1c962]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:13 np0005473739 nova_compute[259550]: 2025-10-07 14:45:13.442 2 INFO nova.virt.libvirt.driver [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deleting instance files /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3_del#033[00m
Oct  7 10:45:13 np0005473739 nova_compute[259550]: 2025-10-07 14:45:13.444 2 INFO nova.virt.libvirt.driver [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deletion of /var/lib/nova/instances/30241223-64c5-4a88-8ba2-ee340fe6cbd3_del complete#033[00m
Oct  7 10:45:13 np0005473739 nova_compute[259550]: 2025-10-07 14:45:13.500 2 INFO nova.compute.manager [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Took 1.23 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:45:13 np0005473739 nova_compute[259550]: 2025-10-07 14:45:13.501 2 DEBUG oslo.service.loopingcall [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:45:13 np0005473739 nova_compute[259550]: 2025-10-07 14:45:13.501 2 DEBUG nova.compute.manager [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:45:13 np0005473739 nova_compute[259550]: 2025-10-07 14:45:13.502 2 DEBUG nova.network.neutron [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.307 2 DEBUG nova.compute.manager [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.307 2 DEBUG nova.compute.manager [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing instance network info cache due to event network-changed-3ac63514-77e7-4d94-a67c-94806ca3b58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.308 2 DEBUG oslo_concurrency.lockutils [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.308 2 DEBUG oslo_concurrency.lockutils [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.308 2 DEBUG nova.network.neutron [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Refreshing network info cache for port 3ac63514-77e7-4d94-a67c-94806ca3b58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.396 2 DEBUG nova.network.neutron [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.443 2 INFO nova.compute.manager [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Took 0.94 seconds to deallocate network for instance.#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.466 2 DEBUG nova.network.neutron [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updated VIF entry in instance network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.466 2 DEBUG nova.network.neutron [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:14.516 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '45'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.530 2 INFO nova.network.neutron [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Port 3ac63514-77e7-4d94-a67c-94806ca3b58b from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.530 2 DEBUG nova.network.neutron [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2490: 305 pgs: 305 active+clean; 113 MiB data, 943 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 24 KiB/s wr, 94 op/s
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.614 2 DEBUG oslo_concurrency.lockutils [req-333bc3e3-bd9f-4d72-b454-66fe546b0145 req-1a28779c-2f53-43f8-9055-c2d219467742 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-30241223-64c5-4a88-8ba2-ee340fe6cbd3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.630 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.631 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.632 2 DEBUG oslo_concurrency.lockutils [req-024aebf4-3160-490a-b33c-e28332418504 req-956c92ba-dbbd-4fe4-83aa-2329d0d30a0f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:14 np0005473739 nova_compute[259550]: 2025-10-07 14:45:14.752 2 DEBUG oslo_concurrency.processutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:45:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/517420342' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:45:15 np0005473739 nova_compute[259550]: 2025-10-07 14:45:15.228 2 DEBUG oslo_concurrency.processutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:15 np0005473739 nova_compute[259550]: 2025-10-07 14:45:15.236 2 DEBUG nova.compute.provider_tree [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:45:15 np0005473739 nova_compute[259550]: 2025-10-07 14:45:15.319 2 DEBUG nova.scheduler.client.report [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:45:15 np0005473739 nova_compute[259550]: 2025-10-07 14:45:15.583 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.952s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:15 np0005473739 nova_compute[259550]: 2025-10-07 14:45:15.674 2 INFO nova.scheduler.client.report [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Deleted allocations for instance 30241223-64c5-4a88-8ba2-ee340fe6cbd3#033[00m
Oct  7 10:45:15 np0005473739 nova_compute[259550]: 2025-10-07 14:45:15.936 2 DEBUG oslo_concurrency.lockutils [None req-cddf79d1-047f-43bf-ad14-9213fdd3e50e 52bb2c10051444f181ee0572525fbe9d ac40ef14492f40768b3852a40da26621 - - default default] Lock "30241223-64c5-4a88-8ba2-ee340fe6cbd3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:16 np0005473739 nova_compute[259550]: 2025-10-07 14:45:16.424 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:45:16 np0005473739 nova_compute[259550]: 2025-10-07 14:45:16.430 2 DEBUG nova.compute.manager [req-17ea9564-6d64-4522-ab2a-8bdf1557703c req-ec972ed5-28b3-4b5a-9371-e094740b0b62 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Received event network-vif-deleted-3ac63514-77e7-4d94-a67c-94806ca3b58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2491: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 107 op/s
Oct  7 10:45:17 np0005473739 nova_compute[259550]: 2025-10-07 14:45:17.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.177 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.178 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.197 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.262 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.263 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.271 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.272 2 INFO nova.compute.claims [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.395 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:45:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2492: 305 pgs: 305 active+clean; 88 MiB data, 931 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 107 op/s
Oct  7 10:45:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:45:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1839292411' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.860 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.866 2 DEBUG nova.compute.provider_tree [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.898 2 DEBUG nova.scheduler.client.report [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.958 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.695s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:45:18 np0005473739 nova_compute[259550]: 2025-10-07 14:45:18.958 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.022 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.023 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.042 2 INFO nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.062 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.154 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.155 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.156 2 INFO nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Creating image(s)
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.176 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.202 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.222 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.226 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.300 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.301 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.302 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.302 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.325 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.331 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.416 2 DEBUG nova.policy [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.765 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.434s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.837 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.951 2 DEBUG nova.objects.instance [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.971 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.972 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Ensure instance console log exists: /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.972 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.973 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:45:19 np0005473739 nova_compute[259550]: 2025-10-07 14:45:19.973 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:45:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2493: 305 pgs: 305 active+clean; 88 MiB data, 920 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 15 KiB/s wr, 108 op/s
Oct  7 10:45:20 np0005473739 nova_compute[259550]: 2025-10-07 14:45:20.880 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Successfully created port: 706c4bba-81cd-4c03-ac73-d8225f4ea15f _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:45:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:20Z|00172|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:28:be:4b 10.100.0.9
Oct  7 10:45:20 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:20Z|00173|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:28:be:4b 10.100.0.9
Oct  7 10:45:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:21Z|01486|binding|INFO|Releasing lport ff172ff9-04e5-4286-b498-dfa958b1473c from this chassis (sb_readonly=0)
Oct  7 10:45:21 np0005473739 nova_compute[259550]: 2025-10-07 14:45:21.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:45:21 np0005473739 nova_compute[259550]: 2025-10-07 14:45:21.939 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Successfully updated port: 706c4bba-81cd-4c03-ac73-d8225f4ea15f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:45:21 np0005473739 nova_compute[259550]: 2025-10-07 14:45:21.953 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:45:21 np0005473739 nova_compute[259550]: 2025-10-07 14:45:21.954 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:45:21 np0005473739 nova_compute[259550]: 2025-10-07 14:45:21.954 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:45:22 np0005473739 nova_compute[259550]: 2025-10-07 14:45:22.034 2 DEBUG nova.compute.manager [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:45:22 np0005473739 nova_compute[259550]: 2025-10-07 14:45:22.034 2 DEBUG nova.compute.manager [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing instance network info cache due to event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:45:22 np0005473739 nova_compute[259550]: 2025-10-07 14:45:22.035 2 DEBUG oslo_concurrency.lockutils [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:45:22 np0005473739 nova_compute[259550]: 2025-10-07 14:45:22.164 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2494: 305 pgs: 305 active+clean; 116 MiB data, 942 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 129 op/s
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:45:22
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', 'cephfs.cephfs.meta', 'images', '.mgr', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', '.rgw.root', 'backups', 'default.rgw.log', 'default.rgw.meta']
Oct  7 10:45:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:45:22 np0005473739 nova_compute[259550]: 2025-10-07 14:45:22.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:45:23 np0005473739 podman[404856]: 2025-10-07 14:45:23.067907062 +0000 UTC m=+0.056643628 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Oct  7 10:45:23 np0005473739 podman[404857]: 2025-10-07 14:45:23.104176186 +0000 UTC m=+0.087501318 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Oct  7 10:45:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:45:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:45:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:45:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:45:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:45:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:45:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:45:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:45:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:45:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:45:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.285 2 DEBUG nova.network.neutron [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.313 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.313 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Instance network_info: |[{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.313 2 DEBUG oslo_concurrency.lockutils [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.314 2 DEBUG nova.network.neutron [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.316 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Start _get_guest_xml network_info=[{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.320 2 WARNING nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.324 2 DEBUG nova.virt.libvirt.host [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.325 2 DEBUG nova.virt.libvirt.host [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.332 2 DEBUG nova.virt.libvirt.host [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.332 2 DEBUG nova.virt.libvirt.host [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.333 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.333 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.334 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.334 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.334 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.334 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.334 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.335 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.335 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.335 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.335 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.336 2 DEBUG nova.virt.hardware [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.338 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:45:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490112040' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.822 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.484s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.848 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:23 np0005473739 nova_compute[259550]: 2025-10-07 14:45:23.852 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:45:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/825657784' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.346 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.493s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.347 2 DEBUG nova.virt.libvirt.vif [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-229487908',display_name='tempest-TestNetworkBasicOps-server-229487908',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-229487908',id=134,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGRTZ02aMW7THUpB/Fbhor+/2PP8itgwraBW7FQrHv4wlPChnOyVpBM/gFf/sXxSnz2gDDJ7JKqZawr/DUsuuU6d+XBKYnr5LnbNHtnmtR34eSX9Sg3yAhhpxjufRRETtA==',key_name='tempest-TestNetworkBasicOps-262800501',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-82k3uyu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:19Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.348 2 DEBUG nova.network.os_vif_util [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.349 2 DEBUG nova.network.os_vif_util [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.350 2 DEBUG nova.objects.instance [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.365 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  <uuid>5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed</uuid>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  <name>instance-00000086</name>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-229487908</nova:name>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:45:23</nova:creationTime>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <nova:port uuid="706c4bba-81cd-4c03-ac73-d8225f4ea15f">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <entry name="serial">5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed</entry>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <entry name="uuid">5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed</entry>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:57:3a:c8"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <target dev="tap706c4bba-81"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/console.log" append="off"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:45:24 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:45:24 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:45:24 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:45:24 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.367 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Preparing to wait for external event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.367 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.367 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.367 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.368 2 DEBUG nova.virt.libvirt.vif [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-229487908',display_name='tempest-TestNetworkBasicOps-server-229487908',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-229487908',id=134,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGRTZ02aMW7THUpB/Fbhor+/2PP8itgwraBW7FQrHv4wlPChnOyVpBM/gFf/sXxSnz2gDDJ7JKqZawr/DUsuuU6d+XBKYnr5LnbNHtnmtR34eSX9Sg3yAhhpxjufRRETtA==',key_name='tempest-TestNetworkBasicOps-262800501',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-82k3uyu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:19Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.368 2 DEBUG nova.network.os_vif_util [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.369 2 DEBUG nova.network.os_vif_util [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.369 2 DEBUG os_vif [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.370 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.371 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap706c4bba-81, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.373 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap706c4bba-81, col_values=(('external_ids', {'iface-id': '706c4bba-81cd-4c03-ac73-d8225f4ea15f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:57:3a:c8', 'vm-uuid': '5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:24 np0005473739 NetworkManager[44949]: <info>  [1759848324.3761] manager: (tap706c4bba-81): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/595)
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.381 2 INFO os_vif [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81')#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.438 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.439 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.439 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:57:3a:c8, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.440 2 INFO nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Using config drive#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.460 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2495: 305 pgs: 305 active+clean; 164 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 3.8 MiB/s wr, 140 op/s
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.992 2 DEBUG nova.network.neutron [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updated VIF entry in instance network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:45:24 np0005473739 nova_compute[259550]: 2025-10-07 14:45:24.993 2 DEBUG nova.network.neutron [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.020 2 DEBUG oslo_concurrency.lockutils [req-9c1d4134-bee6-4687-b4c5-1fc653ae13d3 req-7b72fa82-5b93-4bc0-a825-14bfea1247e8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.132 2 INFO nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Creating config drive at /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.137 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyb75_pmj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.280 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyb75_pmj" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.305 2 DEBUG nova.storage.rbd_utils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.309 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.537 2 DEBUG oslo_concurrency.processutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.538 2 INFO nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Deleting local config drive /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed/disk.config because it was imported into RBD.#033[00m
Oct  7 10:45:25 np0005473739 kernel: tap706c4bba-81: entered promiscuous mode
Oct  7 10:45:25 np0005473739 NetworkManager[44949]: <info>  [1759848325.5964] manager: (tap706c4bba-81): new Tun device (/org/freedesktop/NetworkManager/Devices/596)
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:25Z|01487|binding|INFO|Claiming lport 706c4bba-81cd-4c03-ac73-d8225f4ea15f for this chassis.
Oct  7 10:45:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:25Z|01488|binding|INFO|706c4bba-81cd-4c03-ac73-d8225f4ea15f: Claiming fa:16:3e:57:3a:c8 10.100.0.13
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.611 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:3a:c8 10.100.0.13'], port_security=['fa:16:3e:57:3a:c8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a790910-04e4-4ed9-9209-184147e62b8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '497680c8-b146-4d33-ad78-53e98937483b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17109e69-8b68-4e08-a1dd-9e4119d12812, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=706c4bba-81cd-4c03-ac73-d8225f4ea15f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.612 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 706c4bba-81cd-4c03-ac73-d8225f4ea15f in datapath 8a790910-04e4-4ed9-9209-184147e62b8b bound to our chassis#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.613 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a790910-04e4-4ed9-9209-184147e62b8b#033[00m
Oct  7 10:45:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:25Z|01489|binding|INFO|Setting lport 706c4bba-81cd-4c03-ac73-d8225f4ea15f ovn-installed in OVS
Oct  7 10:45:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:25Z|01490|binding|INFO|Setting lport 706c4bba-81cd-4c03-ac73-d8225f4ea15f up in Southbound
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.628 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0f7a17cc-525f-486e-9b16-d17f6da1f2e9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.629 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap8a790910-01 in ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.633 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap8a790910-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.634 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c456e1ff-47b1-46e0-b80d-0e08cecfe251]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 systemd-udevd[405037]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.635 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bff3ecad-3edd-4c7a-989a-172327bc9e2c]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.648 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6d0dd468-e97d-4b6f-9d02-d67338b4dcf9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 NetworkManager[44949]: <info>  [1759848325.6510] device (tap706c4bba-81): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:45:25 np0005473739 NetworkManager[44949]: <info>  [1759848325.6525] device (tap706c4bba-81): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:45:25 np0005473739 systemd-machined[214580]: New machine qemu-168-instance-00000086.
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.663 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[981977d7-c175-4822-b67e-9ec1a6f990a2]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 systemd[1]: Started Virtual Machine qemu-168-instance-00000086.
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.690 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2216b25d-510d-47e7-a820-d685ad6a7898]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 NetworkManager[44949]: <info>  [1759848325.6966] manager: (tap8a790910-00): new Veth device (/org/freedesktop/NetworkManager/Devices/597)
Oct  7 10:45:25 np0005473739 systemd-udevd[405041]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.695 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3c27e757-7762-42d1-bfa1-7927941cdbb1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.728 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[af8057e9-a11b-4020-a108-52cce910ab0a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.732 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8492a89f-95e2-4223-aa42-e08e8c371dbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 NetworkManager[44949]: <info>  [1759848325.7572] device (tap8a790910-00): carrier: link connected
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.763 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ab512807-7442-4d8d-bac7-b87a08c62ab1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.783 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c272c7c5-5a34-47cb-aaf5-2b7ec1684687]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a790910-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890932, 'reachable_time': 40705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405070, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.799 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0d1ab51e-a4c1-4a81-b7e9-35c1a6b0bf03]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fedb:8cb2'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890932, 'tstamp': 890932}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405071, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.819 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[635ba17c-3b6a-435f-a803-e66c58b1eb6f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a790910-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890932, 'reachable_time': 40705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 405072, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.854 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[797eb0e1-4266-498d-b82c-cbb122e3ee7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.926 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8a682635-c981-49c3-80b0-188e335fd475]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.927 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a790910-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.928 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.928 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a790910-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:25 np0005473739 NetworkManager[44949]: <info>  [1759848325.9305] manager: (tap8a790910-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/598)
Oct  7 10:45:25 np0005473739 kernel: tap8a790910-00: entered promiscuous mode
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.932 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.933 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a790910-00, col_values=(('external_ids', {'iface-id': '473a96c8-dafe-4956-8316-8a82bc1c870e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:25Z|01491|binding|INFO|Releasing lport 473a96c8-dafe-4956-8316-8a82bc1c870e from this chassis (sb_readonly=0)
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.935 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.936 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/8a790910-04e4-4ed9-9209-184147e62b8b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/8a790910-04e4-4ed9-9209-184147e62b8b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.937 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[780e492b-fe69-4ebc-a2d6-2e481441f029]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.938 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-8a790910-04e4-4ed9-9209-184147e62b8b
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/8a790910-04e4-4ed9-9209-184147e62b8b.pid.haproxy
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 8a790910-04e4-4ed9-9209-184147e62b8b
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:45:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:25.938 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'env', 'PROCESS_TAG=haproxy-8a790910-04e4-4ed9-9209-184147e62b8b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/8a790910-04e4-4ed9-9209-184147e62b8b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:45:25 np0005473739 nova_compute[259550]: 2025-10-07 14:45:25.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:26Z|01492|binding|INFO|Releasing lport 473a96c8-dafe-4956-8316-8a82bc1c870e from this chassis (sb_readonly=0)
Oct  7 10:45:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:26Z|01493|binding|INFO|Releasing lport ff172ff9-04e5-4286-b498-dfa958b1473c from this chassis (sb_readonly=0)
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.054 2 DEBUG nova.compute.manager [req-bb153ed5-ab9e-4a48-b936-270a358512f7 req-c20d61ab-9b3c-410d-a129-959184d52893 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.054 2 DEBUG oslo_concurrency.lockutils [req-bb153ed5-ab9e-4a48-b936-270a358512f7 req-c20d61ab-9b3c-410d-a129-959184d52893 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.054 2 DEBUG oslo_concurrency.lockutils [req-bb153ed5-ab9e-4a48-b936-270a358512f7 req-c20d61ab-9b3c-410d-a129-959184d52893 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.055 2 DEBUG oslo_concurrency.lockutils [req-bb153ed5-ab9e-4a48-b936-270a358512f7 req-c20d61ab-9b3c-410d-a129-959184d52893 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.055 2 DEBUG nova.compute.manager [req-bb153ed5-ab9e-4a48-b936-270a358512f7 req-c20d61ab-9b3c-410d-a129-959184d52893 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Processing event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.087 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:26 np0005473739 podman[405146]: 2025-10-07 14:45:26.318037271 +0000 UTC m=+0.028419826 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.545 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.546 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848326.5447032, 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.547 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] VM Started (Lifecycle Event)#033[00m
Oct  7 10:45:26 np0005473739 podman[405146]: 2025-10-07 14:45:26.547642484 +0000 UTC m=+0.258025019 container create c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.551 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.558 2 INFO nova.virt.libvirt.driver [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Instance spawned successfully.#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.559 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:45:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2496: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 513 KiB/s rd, 3.9 MiB/s wr, 111 op/s
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.567 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.573 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.578 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.579 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.579 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.579 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.580 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.580 2 DEBUG nova.virt.libvirt.driver [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.590 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.591 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848326.545007, 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.591 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.612 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.614 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848326.5492656, 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.615 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:45:26 np0005473739 systemd[1]: Started libpod-conmon-c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78.scope.
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.639 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.642 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.653 2 INFO nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Took 7.50 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.653 2 DEBUG nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:45:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/36405587fcc29b64184ee9f2074d129e3df4d671c99cc0a15f619b8e476e7194/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.662 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.705 2 INFO nova.compute.manager [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Took 8.46 seconds to build instance.#033[00m
Oct  7 10:45:26 np0005473739 nova_compute[259550]: 2025-10-07 14:45:26.728 2 DEBUG oslo_concurrency.lockutils [None req-07a62854-c379-4a60-9e93-29e954988aa4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.550s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:26 np0005473739 podman[405146]: 2025-10-07 14:45:26.734140441 +0000 UTC m=+0.444522996 container init c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:45:26 np0005473739 podman[405146]: 2025-10-07 14:45:26.740134261 +0000 UTC m=+0.450516796 container start c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:45:26 np0005473739 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [NOTICE]   (405165) : New worker (405167) forked
Oct  7 10:45:26 np0005473739 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [NOTICE]   (405165) : Loading success.
Oct  7 10:45:27 np0005473739 nova_compute[259550]: 2025-10-07 14:45:27.897 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848312.896735, 30241223-64c5-4a88-8ba2-ee340fe6cbd3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:27 np0005473739 nova_compute[259550]: 2025-10-07 14:45:27.899 2 INFO nova.compute.manager [-] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:45:27 np0005473739 nova_compute[259550]: 2025-10-07 14:45:27.917 2 DEBUG nova.compute.manager [None req-d540ec5c-04e6-4b64-8c0f-8119cac6d71a - - - - - -] [instance: 30241223-64c5-4a88-8ba2-ee340fe6cbd3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:28 np0005473739 nova_compute[259550]: 2025-10-07 14:45:28.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:28 np0005473739 nova_compute[259550]: 2025-10-07 14:45:28.138 2 DEBUG nova.compute.manager [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:28 np0005473739 nova_compute[259550]: 2025-10-07 14:45:28.139 2 DEBUG oslo_concurrency.lockutils [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:28 np0005473739 nova_compute[259550]: 2025-10-07 14:45:28.139 2 DEBUG oslo_concurrency.lockutils [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:28 np0005473739 nova_compute[259550]: 2025-10-07 14:45:28.140 2 DEBUG oslo_concurrency.lockutils [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:28 np0005473739 nova_compute[259550]: 2025-10-07 14:45:28.140 2 DEBUG nova.compute.manager [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:45:28 np0005473739 nova_compute[259550]: 2025-10-07 14:45:28.140 2 WARNING nova.compute.manager [req-083f0ea5-58c8-4bda-81c6-429841405d32 req-224b72f6-0abd-4fee-a6a7-f3eb9e9e3105 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state active and task_state None.#033[00m
Oct  7 10:45:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2497: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 407 KiB/s rd, 3.9 MiB/s wr, 97 op/s
Oct  7 10:45:29 np0005473739 nova_compute[259550]: 2025-10-07 14:45:29.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2498: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 3.9 MiB/s wr, 140 op/s
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.383 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.383 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.404 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.496 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.497 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.504 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.504 2 INFO nova.compute.claims [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2499: 305 pgs: 305 active+clean; 167 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 3.9 MiB/s wr, 166 op/s
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.659 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:45:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2992572256' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:45:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:45:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2992572256' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011073049952790258 of space, bias 1.0, pg target 0.3321914985837077 quantized to 32 (current 32)
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:45:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.975 2 DEBUG nova.compute.manager [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.976 2 DEBUG nova.compute.manager [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing instance network info cache due to event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.977 2 DEBUG oslo_concurrency.lockutils [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.977 2 DEBUG oslo_concurrency.lockutils [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:32 np0005473739 nova_compute[259550]: 2025-10-07 14:45:32.978 2 DEBUG nova.network.neutron [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:45:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:45:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4082682108' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.098 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.105 2 DEBUG nova.compute.provider_tree [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.126 2 DEBUG nova.scheduler.client.report [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.132 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.153 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.656s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.154 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.219 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.220 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.242 2 INFO nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.258 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.334 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.335 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.336 2 INFO nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Creating image(s)#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.358 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.381 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.408 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.411 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.443 2 DEBUG nova.policy [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.478 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.479 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.480 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.480 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.506 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.512 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b598361e-dd69-448e-ade6-931a3d8c84cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.888 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b598361e-dd69-448e-ade6-931a3d8c84cb_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.376s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:33 np0005473739 nova_compute[259550]: 2025-10-07 14:45:33.954 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.044 2 DEBUG nova.objects.instance [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid b598361e-dd69-448e-ade6-931a3d8c84cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.059 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.059 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Ensure instance console log exists: /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.060 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.060 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.060 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.214 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Successfully created port: fef175ac-72ee-4716-9970-ff3dccaea9f9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.366 2 DEBUG nova.network.neutron [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updated VIF entry in instance network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.367 2 DEBUG nova.network.neutron [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.391 2 DEBUG oslo_concurrency.lockutils [req-d93f139a-2053-42c3-b4f5-b4a714a364f1 req-9454c4f6-6551-48ee-88a2-b5b622374b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2500: 305 pgs: 305 active+clean; 190 MiB data, 967 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.2 MiB/s wr, 125 op/s
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.900 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Successfully updated port: fef175ac-72ee-4716-9970-ff3dccaea9f9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.916 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.916 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.916 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.999 2 DEBUG nova.compute.manager [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:34 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.999 2 DEBUG nova.compute.manager [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing instance network info cache due to event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:45:35 np0005473739 nova_compute[259550]: 2025-10-07 14:45:34.999 2 DEBUG oslo_concurrency.lockutils [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:35 np0005473739 nova_compute[259550]: 2025-10-07 14:45:35.910 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:45:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2501: 305 pgs: 305 active+clean; 213 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.9 MiB/s wr, 109 op/s
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.948 2 DEBUG nova.network.neutron [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updating instance_info_cache with network_info: [{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.972 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.972 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Instance network_info: |[{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, 
"meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.973 2 DEBUG oslo_concurrency.lockutils [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.973 2 DEBUG nova.network.neutron [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.976 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Start _get_guest_xml network_info=[{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.981 2 WARNING nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.987 2 DEBUG nova.virt.libvirt.host [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.987 2 DEBUG nova.virt.libvirt.host [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.993 2 DEBUG nova.virt.libvirt.host [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.993 2 DEBUG nova.virt.libvirt.host [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.994 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.994 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.994 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.994 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.995 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.995 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.995 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.995 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.996 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.996 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.996 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.996 2 DEBUG nova.virt.hardware [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:45:37 np0005473739 nova_compute[259550]: 2025-10-07 14:45:37.999 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.178 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.179 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.202 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.269 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.270 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.277 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.278 2 INFO nova.compute.claims [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.433 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:45:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3629840250' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.478 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.479s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.501 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.505 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2502: 305 pgs: 305 active+clean; 213 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct  7 10:45:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:45:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4152527710' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.903 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.909 2 DEBUG nova.compute.provider_tree [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.938 2 DEBUG nova.scheduler.client.report [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.959 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.961 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:45:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:45:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2014971068' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.983 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.986 2 DEBUG nova.virt.libvirt.vif [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1906442223',display_name='tempest-TestGettingAddress-server-1906442223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1906442223',id=135,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-xwt5qdp6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=b598361e-dd69-448e-ade6-931a3d8c84cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.986 2 DEBUG nova.network.os_vif_util [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.988 2 DEBUG nova.network.os_vif_util [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:45:38 np0005473739 nova_compute[259550]: 2025-10-07 14:45:38.989 2 DEBUG nova.objects.instance [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid b598361e-dd69-448e-ade6-931a3d8c84cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.012 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  <uuid>b598361e-dd69-448e-ade6-931a3d8c84cb</uuid>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  <name>instance-00000087</name>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-1906442223</nova:name>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:45:37</nova:creationTime>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <nova:port uuid="fef175ac-72ee-4716-9970-ff3dccaea9f9">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fef5:9cff" ipVersion="6"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fef5:9cff" ipVersion="6"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <entry name="serial">b598361e-dd69-448e-ade6-931a3d8c84cb</entry>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <entry name="uuid">b598361e-dd69-448e-ade6-931a3d8c84cb</entry>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b598361e-dd69-448e-ade6-931a3d8c84cb_disk">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:f5:9c:ff"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <target dev="tapfef175ac-72"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/console.log" append="off"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:45:39 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:45:39 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:45:39 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:45:39 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.014 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Preparing to wait for external event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.015 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.016 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.016 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.017 2 DEBUG nova.virt.libvirt.vif [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1906442223',display_name='tempest-TestGettingAddress-server-1906442223',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1906442223',id=135,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-xwt5qdp6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:33Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=b598361e-dd69-448e-ade6-931a3d8c84cb,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.017 2 DEBUG nova.network.os_vif_util [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.020 2 DEBUG nova.network.os_vif_util [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.020 2 DEBUG os_vif [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.022 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.022 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.023 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.026 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.027 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.030 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfef175ac-72, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.031 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfef175ac-72, col_values=(('external_ids', {'iface-id': 'fef175ac-72ee-4716-9970-ff3dccaea9f9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f5:9c:ff', 'vm-uuid': 'b598361e-dd69-448e-ade6-931a3d8c84cb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:39 np0005473739 NetworkManager[44949]: <info>  [1759848339.0340] manager: (tapfef175ac-72): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/599)
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.042 2 INFO os_vif [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72')#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.055 2 INFO nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.083 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.114 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.114 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.115 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:f5:9c:ff, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.115 2 INFO nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Using config drive#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.136 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.201 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.202 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.203 2 INFO nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Creating image(s)#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.220 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.242 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.271 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.276 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:39Z|00174|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:57:3a:c8 10.100.0.13
Oct  7 10:45:39 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:39Z|00175|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:57:3a:c8 10.100.0.13
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.368 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.369 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.370 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.371 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.396 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.401 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.745 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.344s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.803 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.895 2 DEBUG nova.objects.instance [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 36592d37-1eb3-431f-8cd3-aa0d320b2e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.910 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.911 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Ensure instance console log exists: /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.911 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.912 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.912 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:39 np0005473739 nova_compute[259550]: 2025-10-07 14:45:39.960 2 DEBUG nova.policy [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.102 2 INFO nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Creating config drive at /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config#033[00m
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.108 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkiomsf2g execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.277 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpkiomsf2g" returned: 0 in 0.169s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.308 2 DEBUG nova.storage.rbd_utils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.312 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.485 2 DEBUG oslo_concurrency.processutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config b598361e-dd69-448e-ade6-931a3d8c84cb_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.173s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.486 2 INFO nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Deleting local config drive /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb/disk.config because it was imported into RBD.#033[00m
Oct  7 10:45:40 np0005473739 kernel: tapfef175ac-72: entered promiscuous mode
Oct  7 10:45:40 np0005473739 NetworkManager[44949]: <info>  [1759848340.5492] manager: (tapfef175ac-72): new Tun device (/org/freedesktop/NetworkManager/Devices/600)
Oct  7 10:45:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:40Z|01494|binding|INFO|Claiming lport fef175ac-72ee-4716-9970-ff3dccaea9f9 for this chassis.
Oct  7 10:45:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:40Z|01495|binding|INFO|fef175ac-72ee-4716-9970-ff3dccaea9f9: Claiming fa:16:3e:f5:9c:ff 10.100.0.5 2001:db8:0:1:f816:3eff:fef5:9cff 2001:db8::f816:3eff:fef5:9cff
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.562 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:9c:ff 10.100.0.5 2001:db8:0:1:f816:3eff:fef5:9cff 2001:db8::f816:3eff:fef5:9cff'], port_security=['fa:16:3e:f5:9c:ff 10.100.0.5 2001:db8:0:1:f816:3eff:fef5:9cff 2001:db8::f816:3eff:fef5:9cff'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8:0:1:f816:3eff:fef5:9cff/64 2001:db8::f816:3eff:fef5:9cff/64', 'neutron:device_id': 'b598361e-dd69-448e-ade6-931a3d8c84cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb742e76-491f-4442-ba5f-a90a2210bfd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fef175ac-72ee-4716-9970-ff3dccaea9f9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.563 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fef175ac-72ee-4716-9970-ff3dccaea9f9 in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e bound to our chassis#033[00m
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.564 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e#033[00m
Oct  7 10:45:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2503: 305 pgs: 305 active+clean; 227 MiB data, 1005 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.9 MiB/s wr, 132 op/s
Oct  7 10:45:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:40Z|01496|binding|INFO|Setting lport fef175ac-72ee-4716-9970-ff3dccaea9f9 ovn-installed in OVS
Oct  7 10:45:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:40Z|01497|binding|INFO|Setting lport fef175ac-72ee-4716-9970-ff3dccaea9f9 up in Southbound
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.582 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.589 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[06a986a1-8791-4e72-bdce-3e6cbdaa3b25]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:40 np0005473739 systemd-machined[214580]: New machine qemu-169-instance-00000087.
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.643 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0c7e3553-b459-47af-8f49-210135e8c0d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.647 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[39166131-f29c-430a-bd4b-c2fc38d047c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:40 np0005473739 systemd[1]: Started Virtual Machine qemu-169-instance-00000087.
Oct  7 10:45:40 np0005473739 systemd-udevd[405733]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:45:40 np0005473739 podman[405687]: 2025-10-07 14:45:40.68199523 +0000 UTC m=+0.090266470 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible)
Oct  7 10:45:40 np0005473739 NetworkManager[44949]: <info>  [1759848340.6895] device (tapfef175ac-72): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:45:40 np0005473739 NetworkManager[44949]: <info>  [1759848340.6915] device (tapfef175ac-72): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.695 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[951a0130-533b-497d-b6a2-b04a907805f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:40 np0005473739 podman[405689]: 2025-10-07 14:45:40.710799776 +0000 UTC m=+0.119073496 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.713 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[403d515d-218f-4f9f-abb1-06ce77b7a858]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5308f20-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889039, 'reachable_time': 17557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 24, 'inoctets': 1880, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 24, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1880, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 24, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405743, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.736 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[28c4e6b2-88a9-4d5e-9713-c6956159c0a4]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5308f20-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889050, 'tstamp': 889050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405745, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5308f20-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889053, 'tstamp': 889053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405745, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.742 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5308f20-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:40 np0005473739 nova_compute[259550]: 2025-10-07 14:45:40.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.746 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5308f20-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.747 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.747 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5308f20-40, col_values=(('external_ids', {'iface-id': 'ff172ff9-04e5-4286-b498-dfa958b1473c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:40.747 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.031 2 DEBUG nova.network.neutron [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updated VIF entry in instance network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.033 2 DEBUG nova.network.neutron [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updating instance_info_cache with network_info: [{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.063 2 DEBUG oslo_concurrency.lockutils [req-de085423-146c-4053-bf01-6ffe6761abf7 req-d77c7638-45df-4129-a481-df13a8cf1c9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.254 2 DEBUG nova.compute.manager [req-fc1c5a4f-d230-41c2-9690-6efd6984b901 req-ead66b94-e7b0-4b70-ae9a-cf2d436568c3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.255 2 DEBUG oslo_concurrency.lockutils [req-fc1c5a4f-d230-41c2-9690-6efd6984b901 req-ead66b94-e7b0-4b70-ae9a-cf2d436568c3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.255 2 DEBUG oslo_concurrency.lockutils [req-fc1c5a4f-d230-41c2-9690-6efd6984b901 req-ead66b94-e7b0-4b70-ae9a-cf2d436568c3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.256 2 DEBUG oslo_concurrency.lockutils [req-fc1c5a4f-d230-41c2-9690-6efd6984b901 req-ead66b94-e7b0-4b70-ae9a-cf2d436568c3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.256 2 DEBUG nova.compute.manager [req-fc1c5a4f-d230-41c2-9690-6efd6984b901 req-ead66b94-e7b0-4b70-ae9a-cf2d436568c3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Processing event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.482 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848341.480767, b598361e-dd69-448e-ade6-931a3d8c84cb => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.482 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] VM Started (Lifecycle Event)#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.484 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.488 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.492 2 INFO nova.virt.libvirt.driver [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Instance spawned successfully.#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.492 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.510 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.515 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.519 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.520 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.520 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.520 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.521 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.521 2 DEBUG nova.virt.libvirt.driver [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.564 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.565 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848341.4810252, b598361e-dd69-448e-ade6-931a3d8c84cb => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.565 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.593 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.596 2 INFO nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Took 8.26 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.596 2 DEBUG nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.602 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848341.4879448, b598361e-dd69-448e-ade6-931a3d8c84cb => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.602 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.623 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Successfully created port: 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.631 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.635 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.668 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.682 2 INFO nova.compute.manager [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Took 9.22 seconds to build instance.#033[00m
Oct  7 10:45:41 np0005473739 nova_compute[259550]: 2025-10-07 14:45:41.700 2 DEBUG oslo_concurrency.lockutils [None req-3512b466-5a05-4f01-869f-9c2080dd40e3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.317s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2504: 305 pgs: 305 active+clean; 256 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1004 KiB/s rd, 4.2 MiB/s wr, 113 op/s
Oct  7 10:45:42 np0005473739 nova_compute[259550]: 2025-10-07 14:45:42.745 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Successfully updated port: 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:45:42 np0005473739 nova_compute[259550]: 2025-10-07 14:45:42.765 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:42 np0005473739 nova_compute[259550]: 2025-10-07 14:45:42.765 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:42 np0005473739 nova_compute[259550]: 2025-10-07 14:45:42.765 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:45:42 np0005473739 nova_compute[259550]: 2025-10-07 14:45:42.959 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:45:43 np0005473739 nova_compute[259550]: 2025-10-07 14:45:43.129 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:43 np0005473739 nova_compute[259550]: 2025-10-07 14:45:43.341 2 DEBUG nova.compute.manager [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:43 np0005473739 nova_compute[259550]: 2025-10-07 14:45:43.343 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:43 np0005473739 nova_compute[259550]: 2025-10-07 14:45:43.343 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:43 np0005473739 nova_compute[259550]: 2025-10-07 14:45:43.344 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:43 np0005473739 nova_compute[259550]: 2025-10-07 14:45:43.345 2 DEBUG nova.compute.manager [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] No waiting events found dispatching network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:45:43 np0005473739 nova_compute[259550]: 2025-10-07 14:45:43.345 2 WARNING nova.compute.manager [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received unexpected event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:45:43 np0005473739 nova_compute[259550]: 2025-10-07 14:45:43.346 2 DEBUG nova.compute.manager [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:43 np0005473739 nova_compute[259550]: 2025-10-07 14:45:43.347 2 DEBUG nova.compute.manager [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing instance network info cache due to event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:45:43 np0005473739 nova_compute[259550]: 2025-10-07 14:45:43.348 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.483 2 DEBUG nova.network.neutron [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updating instance_info_cache with network_info: [{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.519 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.519 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Instance network_info: |[{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.519 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.520 2 DEBUG nova.network.neutron [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.522 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Start _get_guest_xml network_info=[{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.527 2 WARNING nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.534 2 DEBUG nova.virt.libvirt.host [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.535 2 DEBUG nova.virt.libvirt.host [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.544 2 DEBUG nova.virt.libvirt.host [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.545 2 DEBUG nova.virt.libvirt.host [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.546 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.546 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.547 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.547 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.548 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.548 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.549 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.549 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.550 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.550 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.550 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.551 2 DEBUG nova.virt.hardware [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:45:44 np0005473739 nova_compute[259550]: 2025-10-07 14:45:44.556 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2505: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 5.7 MiB/s wr, 160 op/s
Oct  7 10:45:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:45:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2156965025' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.090 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.534s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.123 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.129 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:45:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1478757834' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.653 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.524s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.655 2 DEBUG nova.virt.libvirt.vif [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1103641822',display_name='tempest-TestNetworkBasicOps-server-1103641822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1103641822',id=136,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFPMP7i8ec4yKwVDxErtIGw1wPOouS90pUC/2M/KSrqUJIp7tpUDqB6OrTVom+DAk9JZ3iNgnjUuCMQr+/u1V//z0y/ybLMjjWdhld2MXrTrpN1FeSHNloBYJfIHxQTEMw==',key_name='tempest-TestNetworkBasicOps-329000454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-7sywmp2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:39Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=36592d37-1eb3-431f-8cd3-aa0d320b2e86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.655 2 DEBUG nova.network.os_vif_util [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.656 2 DEBUG nova.network.os_vif_util [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.658 2 DEBUG nova.objects.instance [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 36592d37-1eb3-431f-8cd3-aa0d320b2e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.695 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  <uuid>36592d37-1eb3-431f-8cd3-aa0d320b2e86</uuid>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  <name>instance-00000088</name>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-1103641822</nova:name>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:45:44</nova:creationTime>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <nova:port uuid="44d8ec34-7fbc-440a-bab4-b9b8d29ff249">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.3" ipVersion="4"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <entry name="serial">36592d37-1eb3-431f-8cd3-aa0d320b2e86</entry>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <entry name="uuid">36592d37-1eb3-431f-8cd3-aa0d320b2e86</entry>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:0c:97:e5"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <target dev="tap44d8ec34-7f"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/console.log" append="off"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:45:45 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:45:45 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:45:45 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:45:45 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.697 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Preparing to wait for external event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.697 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.698 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.698 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.699 2 DEBUG nova.virt.libvirt.vif [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:45:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1103641822',display_name='tempest-TestNetworkBasicOps-server-1103641822',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1103641822',id=136,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFPMP7i8ec4yKwVDxErtIGw1wPOouS90pUC/2M/KSrqUJIp7tpUDqB6OrTVom+DAk9JZ3iNgnjUuCMQr+/u1V//z0y/ybLMjjWdhld2MXrTrpN1FeSHNloBYJfIHxQTEMw==',key_name='tempest-TestNetworkBasicOps-329000454',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-7sywmp2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:45:39Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=36592d37-1eb3-431f-8cd3-aa0d320b2e86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.699 2 DEBUG nova.network.os_vif_util [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.700 2 DEBUG nova.network.os_vif_util [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.700 2 DEBUG os_vif [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.701 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.702 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.708 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap44d8ec34-7f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap44d8ec34-7f, col_values=(('external_ids', {'iface-id': '44d8ec34-7fbc-440a-bab4-b9b8d29ff249', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:97:e5', 'vm-uuid': '36592d37-1eb3-431f-8cd3-aa0d320b2e86'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:45 np0005473739 NetworkManager[44949]: <info>  [1759848345.7124] manager: (tap44d8ec34-7f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/601)
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.719 2 INFO os_vif [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f')#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.775 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.776 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.776 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:0c:97:e5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.776 2 INFO nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Using config drive#033[00m
Oct  7 10:45:45 np0005473739 nova_compute[259550]: 2025-10-07 14:45:45.800 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.201 2 INFO nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Creating config drive at /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.207 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0llzze4n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.369 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0llzze4n" returned: 0 in 0.162s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.405 2 DEBUG nova.storage.rbd_utils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.410 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.461 2 DEBUG nova.network.neutron [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updated VIF entry in instance network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.462 2 DEBUG nova.network.neutron [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updating instance_info_cache with network_info: [{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.485 2 DEBUG oslo_concurrency.lockutils [req-9fbdf9c8-1895-4ce4-b1c7-8f13aa198f1a req-b22aadb2-e8df-4085-a248-fdbb5234f935 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.506 2 DEBUG nova.compute.manager [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.507 2 DEBUG nova.compute.manager [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing instance network info cache due to event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.508 2 DEBUG oslo_concurrency.lockutils [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.508 2 DEBUG oslo_concurrency.lockutils [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.508 2 DEBUG nova.network.neutron [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:45:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2506: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 4.6 MiB/s wr, 189 op/s
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.584 2 DEBUG oslo_concurrency.processutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config 36592d37-1eb3-431f-8cd3-aa0d320b2e86_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.174s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.585 2 INFO nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Deleting local config drive /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86/disk.config because it was imported into RBD.#033[00m
Oct  7 10:45:46 np0005473739 kernel: tap44d8ec34-7f: entered promiscuous mode
Oct  7 10:45:46 np0005473739 NetworkManager[44949]: <info>  [1759848346.6341] manager: (tap44d8ec34-7f): new Tun device (/org/freedesktop/NetworkManager/Devices/602)
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:46Z|01498|binding|INFO|Claiming lport 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 for this chassis.
Oct  7 10:45:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:46Z|01499|binding|INFO|44d8ec34-7fbc-440a-bab4-b9b8d29ff249: Claiming fa:16:3e:0c:97:e5 10.100.0.3
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.646 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:97:e5 10.100.0.3'], port_security=['fa:16:3e:0c:97:e5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '36592d37-1eb3-431f-8cd3-aa0d320b2e86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a790910-04e4-4ed9-9209-184147e62b8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3c2506e0-287c-4f8f-b28e-99b2bb4e4542', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17109e69-8b68-4e08-a1dd-9e4119d12812, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=44d8ec34-7fbc-440a-bab4-b9b8d29ff249) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.648 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 in datapath 8a790910-04e4-4ed9-9209-184147e62b8b bound to our chassis#033[00m
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.651 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a790910-04e4-4ed9-9209-184147e62b8b#033[00m
Oct  7 10:45:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:46Z|01500|binding|INFO|Setting lport 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 ovn-installed in OVS
Oct  7 10:45:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:46Z|01501|binding|INFO|Setting lport 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 up in Southbound
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.669 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c34e5cc1-a8a2-4c72-89f1-820370e2c847]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:46 np0005473739 systemd-udevd[405925]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:45:46 np0005473739 NetworkManager[44949]: <info>  [1759848346.6978] device (tap44d8ec34-7f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:45:46 np0005473739 NetworkManager[44949]: <info>  [1759848346.6990] device (tap44d8ec34-7f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:45:46 np0005473739 systemd-machined[214580]: New machine qemu-170-instance-00000088.
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.703 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[55d84597-5474-4e62-8b95-ec0c5793dc33]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.706 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3e2e2fd9-d8c5-4e6f-be34-306848a50577]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:46 np0005473739 systemd[1]: Started Virtual Machine qemu-170-instance-00000088.
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.742 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b23e60c3-1fcd-4a33-af4e-eb903bc31501]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.763 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e058d444-8682-43c4-8d56-8310982c7dd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a790910-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 5, 'rx_bytes': 874, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890932, 'reachable_time': 40705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 405932, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.780 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5be632a5-9085-46d2-9e8c-7e1c45d57d95]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8a790910-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890945, 'tstamp': 890945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405937, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8a790910-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890948, 'tstamp': 890948}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 405937, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.781 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a790910-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:46 np0005473739 nova_compute[259550]: 2025-10-07 14:45:46.784 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.784 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a790910-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.785 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.785 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a790910-00, col_values=(('external_ids', {'iface-id': '473a96c8-dafe-4956-8316-8a82bc1c870e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:45:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:45:46.785 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.088 2 DEBUG nova.compute.manager [req-68b24e60-ff78-46fd-9bd2-0680dad5325e req-3bbb8378-1de1-4a0a-bb5c-b099b81a8a08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.088 2 DEBUG oslo_concurrency.lockutils [req-68b24e60-ff78-46fd-9bd2-0680dad5325e req-3bbb8378-1de1-4a0a-bb5c-b099b81a8a08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.089 2 DEBUG oslo_concurrency.lockutils [req-68b24e60-ff78-46fd-9bd2-0680dad5325e req-3bbb8378-1de1-4a0a-bb5c-b099b81a8a08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.089 2 DEBUG oslo_concurrency.lockutils [req-68b24e60-ff78-46fd-9bd2-0680dad5325e req-3bbb8378-1de1-4a0a-bb5c-b099b81a8a08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.089 2 DEBUG nova.compute.manager [req-68b24e60-ff78-46fd-9bd2-0680dad5325e req-3bbb8378-1de1-4a0a-bb5c-b099b81a8a08 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Processing event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.692 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848347.6916924, 36592d37-1eb3-431f-8cd3-aa0d320b2e86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.693 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] VM Started (Lifecycle Event)#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.696 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.706 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.710 2 INFO nova.virt.libvirt.driver [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Instance spawned successfully.#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.710 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.730 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.736 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.742 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.742 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.742 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.743 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.743 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.744 2 DEBUG nova.virt.libvirt.driver [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.773 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.774 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848347.6918135, 36592d37-1eb3-431f-8cd3-aa0d320b2e86 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.774 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.812 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.815 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848347.6990612, 36592d37-1eb3-431f-8cd3-aa0d320b2e86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.815 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.827 2 INFO nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Took 8.63 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.828 2 DEBUG nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.835 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.840 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.875 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.896 2 INFO nova.compute.manager [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Took 9.65 seconds to build instance.#033[00m
Oct  7 10:45:47 np0005473739 nova_compute[259550]: 2025-10-07 14:45:47.917 2 DEBUG oslo_concurrency.lockutils [None req-60b971cc-24f3-4da5-ac36-b46bed7dfc51 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.739s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:48 np0005473739 nova_compute[259550]: 2025-10-07 14:45:48.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:48 np0005473739 nova_compute[259550]: 2025-10-07 14:45:48.513 2 DEBUG nova.network.neutron [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updated VIF entry in instance network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:45:48 np0005473739 nova_compute[259550]: 2025-10-07 14:45:48.514 2 DEBUG nova.network.neutron [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updating instance_info_cache with network_info: [{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:48 np0005473739 nova_compute[259550]: 2025-10-07 14:45:48.538 2 DEBUG oslo_concurrency.lockutils [req-9899075b-6f3e-42e1-8c71-e9d4e2b44036 req-d887fa40-3bf3-4598-8fbd-cba1c7c9edeb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2507: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 3.9 MiB/s wr, 164 op/s
Oct  7 10:45:49 np0005473739 nova_compute[259550]: 2025-10-07 14:45:49.188 2 DEBUG nova.compute.manager [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:49 np0005473739 nova_compute[259550]: 2025-10-07 14:45:49.189 2 DEBUG oslo_concurrency.lockutils [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:45:49 np0005473739 nova_compute[259550]: 2025-10-07 14:45:49.189 2 DEBUG oslo_concurrency.lockutils [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:45:49 np0005473739 nova_compute[259550]: 2025-10-07 14:45:49.189 2 DEBUG oslo_concurrency.lockutils [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:45:49 np0005473739 nova_compute[259550]: 2025-10-07 14:45:49.190 2 DEBUG nova.compute.manager [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] No waiting events found dispatching network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:45:49 np0005473739 nova_compute[259550]: 2025-10-07 14:45:49.190 2 WARNING nova.compute.manager [req-4915b7f2-5515-4e64-b7b7-9521460f1cc9 req-e1d1720e-d95e-4583-b060-50d86948542a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received unexpected event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:45:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2508: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.9 MiB/s wr, 194 op/s
Oct  7 10:45:50 np0005473739 nova_compute[259550]: 2025-10-07 14:45:50.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2509: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 2.8 MiB/s wr, 172 op/s
Oct  7 10:45:52 np0005473739 nova_compute[259550]: 2025-10-07 14:45:52.596 2 DEBUG nova.compute.manager [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:45:52 np0005473739 nova_compute[259550]: 2025-10-07 14:45:52.596 2 DEBUG nova.compute.manager [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing instance network info cache due to event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:45:52 np0005473739 nova_compute[259550]: 2025-10-07 14:45:52.597 2 DEBUG oslo_concurrency.lockutils [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:45:52 np0005473739 nova_compute[259550]: 2025-10-07 14:45:52.597 2 DEBUG oslo_concurrency.lockutils [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:45:52 np0005473739 nova_compute[259550]: 2025-10-07 14:45:52.597 2 DEBUG nova.network.neutron [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:45:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:45:53 np0005473739 nova_compute[259550]: 2025-10-07 14:45:53.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:53 np0005473739 nova_compute[259550]: 2025-10-07 14:45:53.940 2 DEBUG nova.network.neutron [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updated VIF entry in instance network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:45:53 np0005473739 nova_compute[259550]: 2025-10-07 14:45:53.941 2 DEBUG nova.network.neutron [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updating instance_info_cache with network_info: [{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:45:53 np0005473739 nova_compute[259550]: 2025-10-07 14:45:53.961 2 DEBUG oslo_concurrency.lockutils [req-5da339b6-0ede-4aa3-95ff-33c13064a9fa req-7a8c2f99-1999-4788-a39c-56c639a3bf9a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:45:54 np0005473739 podman[405981]: 2025-10-07 14:45:54.09901322 +0000 UTC m=+0.082110144 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:45:54 np0005473739 podman[405982]: 2025-10-07 14:45:54.131544614 +0000 UTC m=+0.109846401 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_controller)
Oct  7 10:45:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2510: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.9 MiB/s rd, 1.6 MiB/s wr, 178 op/s
Oct  7 10:45:55 np0005473739 nova_compute[259550]: 2025-10-07 14:45:55.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:55 np0005473739 nova_compute[259550]: 2025-10-07 14:45:55.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:45:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2511: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 30 KiB/s wr, 105 op/s
Oct  7 10:45:57 np0005473739 nova_compute[259550]: 2025-10-07 14:45:57.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:45:58 np0005473739 nova_compute[259550]: 2025-10-07 14:45:58.136 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:45:58 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #50. Immutable memtables: 7.
Oct  7 10:45:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:45:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:58Z|00176|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f5:9c:ff 10.100.0.5
Oct  7 10:45:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:45:58Z|00177|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f5:9c:ff 10.100.0.5
Oct  7 10:45:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2512: 305 pgs: 305 active+clean; 293 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 29 KiB/s wr, 75 op/s
Oct  7 10:45:58 np0005473739 nova_compute[259550]: 2025-10-07 14:45:58.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:45:59 np0005473739 nova_compute[259550]: 2025-10-07 14:45:59.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:45:59 np0005473739 nova_compute[259550]: 2025-10-07 14:45:59.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:45:59 np0005473739 nova_compute[259550]: 2025-10-07 14:45:59.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:46:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:00.077 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:00.078 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:00.079 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:00 np0005473739 podman[406198]: 2025-10-07 14:46:00.567633918 +0000 UTC m=+0.071091441 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:46:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2513: 305 pgs: 305 active+clean; 317 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Oct  7 10:46:00 np0005473739 podman[406198]: 2025-10-07 14:46:00.710606048 +0000 UTC m=+0.214063551 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 10:46:00 np0005473739 nova_compute[259550]: 2025-10-07 14:46:00.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:46:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:46:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:46:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:46:01 np0005473739 nova_compute[259550]: 2025-10-07 14:46:01.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:46:02 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7f44f9c6-6201-4589-bd4d-49faf7eff2d0 does not exist
Oct  7 10:46:02 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a0ffc880-6633-4f52-8834-548f86b4a498 does not exist
Oct  7 10:46:02 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev eedd7a50-7723-4978-9da4-9468ae34395f does not exist
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:46:02 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:46:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2514: 305 pgs: 305 active+clean; 326 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct  7 10:46:02 np0005473739 podman[406624]: 2025-10-07 14:46:02.909118015 +0000 UTC m=+0.094908174 container create 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 10:46:02 np0005473739 podman[406624]: 2025-10-07 14:46:02.844536579 +0000 UTC m=+0.030326768 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:46:02 np0005473739 systemd[1]: Started libpod-conmon-755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e.scope.
Oct  7 10:46:02 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:46:03 np0005473739 podman[406624]: 2025-10-07 14:46:03.087203459 +0000 UTC m=+0.272993648 container init 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 10:46:03 np0005473739 podman[406624]: 2025-10-07 14:46:03.100374219 +0000 UTC m=+0.286164378 container start 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:46:03 np0005473739 systemd[1]: libpod-755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e.scope: Deactivated successfully.
Oct  7 10:46:03 np0005473739 great_dewdney[406640]: 167 167
Oct  7 10:46:03 np0005473739 conmon[406640]: conmon 755894ed99e581743bcb <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e.scope/container/memory.events
Oct  7 10:46:03 np0005473739 podman[406624]: 2025-10-07 14:46:03.117754901 +0000 UTC m=+0.303545080 container attach 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:46:03 np0005473739 podman[406624]: 2025-10-07 14:46:03.118882621 +0000 UTC m=+0.304672790 container died 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:46:03 np0005473739 nova_compute[259550]: 2025-10-07 14:46:03.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:03 np0005473739 systemd[1]: var-lib-containers-storage-overlay-186ed782a6c8c480bd70c76b73d9d0e7953d185ab183d83db0aaab9ef9e3bc7e-merged.mount: Deactivated successfully.
Oct  7 10:46:03 np0005473739 podman[406624]: 2025-10-07 14:46:03.242750093 +0000 UTC m=+0.428540262 container remove 755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_dewdney, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 10:46:03 np0005473739 systemd[1]: libpod-conmon-755894ed99e581743bcb09f98fc44097ab60d494542408c8a752df999be1de2e.scope: Deactivated successfully.
Oct  7 10:46:03 np0005473739 podman[406665]: 2025-10-07 14:46:03.444126186 +0000 UTC m=+0.044480504 container create ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:46:03 np0005473739 systemd[1]: Started libpod-conmon-ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27.scope.
Oct  7 10:46:03 np0005473739 podman[406665]: 2025-10-07 14:46:03.424312689 +0000 UTC m=+0.024667027 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:46:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:46:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:03 np0005473739 podman[406665]: 2025-10-07 14:46:03.5624085 +0000 UTC m=+0.162762838 container init ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:46:03 np0005473739 podman[406665]: 2025-10-07 14:46:03.573554927 +0000 UTC m=+0.173909245 container start ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 10:46:03 np0005473739 podman[406665]: 2025-10-07 14:46:03.579255078 +0000 UTC m=+0.179609426 container attach ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:46:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:04Z|00178|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0c:97:e5 10.100.0.3
Oct  7 10:46:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:04Z|00179|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0c:97:e5 10.100.0.3
Oct  7 10:46:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2515: 305 pgs: 305 active+clean; 343 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 3.6 MiB/s wr, 114 op/s
Oct  7 10:46:04 np0005473739 cranky_lovelace[406681]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:46:04 np0005473739 cranky_lovelace[406681]: --> relative data size: 1.0
Oct  7 10:46:04 np0005473739 cranky_lovelace[406681]: --> All data devices are unavailable
Oct  7 10:46:04 np0005473739 systemd[1]: libpod-ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27.scope: Deactivated successfully.
Oct  7 10:46:04 np0005473739 systemd[1]: libpod-ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27.scope: Consumed 1.002s CPU time.
Oct  7 10:46:04 np0005473739 podman[406710]: 2025-10-07 14:46:04.694284496 +0000 UTC m=+0.026053604 container died ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:46:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-902ecb59b89a63880f774a76a43d737757bf10cd5557043e43d8b1bc2a91277e-merged.mount: Deactivated successfully.
Oct  7 10:46:04 np0005473739 podman[406710]: 2025-10-07 14:46:04.751324722 +0000 UTC m=+0.083093810 container remove ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_lovelace, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 10:46:04 np0005473739 systemd[1]: libpod-conmon-ff09399390728e6fd9209524499db4c7c48f39822d924f785f86da3c8547cd27.scope: Deactivated successfully.
Oct  7 10:46:05 np0005473739 podman[406862]: 2025-10-07 14:46:05.398101464 +0000 UTC m=+0.045925732 container create 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 10:46:05 np0005473739 systemd[1]: Started libpod-conmon-6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12.scope.
Oct  7 10:46:05 np0005473739 podman[406862]: 2025-10-07 14:46:05.380767862 +0000 UTC m=+0.028592060 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:46:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:46:05 np0005473739 podman[406862]: 2025-10-07 14:46:05.496211201 +0000 UTC m=+0.144035429 container init 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:46:05 np0005473739 podman[406862]: 2025-10-07 14:46:05.502944691 +0000 UTC m=+0.150768859 container start 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  7 10:46:05 np0005473739 podman[406862]: 2025-10-07 14:46:05.506027292 +0000 UTC m=+0.153851520 container attach 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:46:05 np0005473739 great_booth[406878]: 167 167
Oct  7 10:46:05 np0005473739 systemd[1]: libpod-6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12.scope: Deactivated successfully.
Oct  7 10:46:05 np0005473739 conmon[406878]: conmon 6f4bbc641f7c670d4f20 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12.scope/container/memory.events
Oct  7 10:46:05 np0005473739 podman[406862]: 2025-10-07 14:46:05.509023552 +0000 UTC m=+0.156847730 container died 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 10:46:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d21e11ec8698bf9cf2c7e8cb0a4a28b3e08633dd689d14b35c1edc7fe9731f5d-merged.mount: Deactivated successfully.
Oct  7 10:46:05 np0005473739 podman[406862]: 2025-10-07 14:46:05.546914989 +0000 UTC m=+0.194739167 container remove 6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_booth, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:46:05 np0005473739 systemd[1]: libpod-conmon-6f4bbc641f7c670d4f2024ad469e519ae93b8b2ca7829e31ac93eabc68553c12.scope: Deactivated successfully.
Oct  7 10:46:05 np0005473739 nova_compute[259550]: 2025-10-07 14:46:05.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:05 np0005473739 podman[406903]: 2025-10-07 14:46:05.740819112 +0000 UTC m=+0.050534024 container create 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 10:46:05 np0005473739 systemd[1]: Started libpod-conmon-01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd.scope.
Oct  7 10:46:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:46:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee251f6fdc96c20145c48fe4311a45cc39cbb54b358ae2b401089c5567d3dd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee251f6fdc96c20145c48fe4311a45cc39cbb54b358ae2b401089c5567d3dd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee251f6fdc96c20145c48fe4311a45cc39cbb54b358ae2b401089c5567d3dd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/78ee251f6fdc96c20145c48fe4311a45cc39cbb54b358ae2b401089c5567d3dd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:05 np0005473739 podman[406903]: 2025-10-07 14:46:05.725486215 +0000 UTC m=+0.035201157 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:46:05 np0005473739 podman[406903]: 2025-10-07 14:46:05.82383175 +0000 UTC m=+0.133546682 container init 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:46:05 np0005473739 podman[406903]: 2025-10-07 14:46:05.830547338 +0000 UTC m=+0.140262260 container start 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:46:05 np0005473739 podman[406903]: 2025-10-07 14:46:05.83626457 +0000 UTC m=+0.145979482 container attach 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 10:46:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2516: 305 pgs: 305 active+clean; 350 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 581 KiB/s rd, 4.2 MiB/s wr, 112 op/s
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]: {
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:    "0": [
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:        {
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "devices": [
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "/dev/loop3"
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            ],
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_name": "ceph_lv0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_size": "21470642176",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "name": "ceph_lv0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "tags": {
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.cluster_name": "ceph",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.crush_device_class": "",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.encrypted": "0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.osd_id": "0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.type": "block",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.vdo": "0"
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            },
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "type": "block",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "vg_name": "ceph_vg0"
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:        }
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:    ],
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:    "1": [
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:        {
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "devices": [
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "/dev/loop4"
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            ],
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_name": "ceph_lv1",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_size": "21470642176",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "name": "ceph_lv1",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "tags": {
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.cluster_name": "ceph",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.crush_device_class": "",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.encrypted": "0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.osd_id": "1",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.type": "block",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.vdo": "0"
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            },
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "type": "block",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "vg_name": "ceph_vg1"
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:        }
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:    ],
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:    "2": [
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:        {
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "devices": [
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "/dev/loop5"
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            ],
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_name": "ceph_lv2",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_size": "21470642176",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "name": "ceph_lv2",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "tags": {
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.cluster_name": "ceph",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.crush_device_class": "",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.encrypted": "0",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.osd_id": "2",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.type": "block",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:                "ceph.vdo": "0"
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            },
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "type": "block",
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:            "vg_name": "ceph_vg2"
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:        }
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]:    ]
Oct  7 10:46:06 np0005473739 sharp_volhard[406919]: }
Oct  7 10:46:06 np0005473739 systemd[1]: libpod-01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd.scope: Deactivated successfully.
Oct  7 10:46:06 np0005473739 podman[406903]: 2025-10-07 14:46:06.660826707 +0000 UTC m=+0.970541619 container died 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 10:46:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-78ee251f6fdc96c20145c48fe4311a45cc39cbb54b358ae2b401089c5567d3dd-merged.mount: Deactivated successfully.
Oct  7 10:46:06 np0005473739 podman[406903]: 2025-10-07 14:46:06.877508116 +0000 UTC m=+1.187223028 container remove 01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_volhard, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 10:46:06 np0005473739 systemd[1]: libpod-conmon-01014c370c28924a54bbe9814b1e0d0e5de9c83945ea9b395d8664a6919efefd.scope: Deactivated successfully.
Oct  7 10:46:07 np0005473739 podman[407080]: 2025-10-07 14:46:07.52548913 +0000 UTC m=+0.039798679 container create f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:46:07 np0005473739 systemd[1]: Started libpod-conmon-f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a.scope.
Oct  7 10:46:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:46:07 np0005473739 podman[407080]: 2025-10-07 14:46:07.506906016 +0000 UTC m=+0.021215585 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:46:07 np0005473739 podman[407080]: 2025-10-07 14:46:07.607993693 +0000 UTC m=+0.122303272 container init f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:46:07 np0005473739 podman[407080]: 2025-10-07 14:46:07.615642746 +0000 UTC m=+0.129952285 container start f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 10:46:07 np0005473739 podman[407080]: 2025-10-07 14:46:07.620389573 +0000 UTC m=+0.134699152 container attach f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:46:07 np0005473739 ecstatic_engelbart[407096]: 167 167
Oct  7 10:46:07 np0005473739 systemd[1]: libpod-f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a.scope: Deactivated successfully.
Oct  7 10:46:07 np0005473739 podman[407080]: 2025-10-07 14:46:07.623987148 +0000 UTC m=+0.138296707 container died f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 10:46:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ef955d2f46904b757e6033e7da2bbe169199f2078693365f1fb99736884883b5-merged.mount: Deactivated successfully.
Oct  7 10:46:07 np0005473739 podman[407080]: 2025-10-07 14:46:07.66921019 +0000 UTC m=+0.183519779 container remove f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_engelbart, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 10:46:07 np0005473739 systemd[1]: libpod-conmon-f3e8bb995fb4816216b942dbb58e4eb2587c7aee6ee0446dcec4ca9b6a8fc03a.scope: Deactivated successfully.
Oct  7 10:46:07 np0005473739 podman[407120]: 2025-10-07 14:46:07.858968054 +0000 UTC m=+0.054445049 container create 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:46:07 np0005473739 systemd[1]: Started libpod-conmon-90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d.scope.
Oct  7 10:46:07 np0005473739 podman[407120]: 2025-10-07 14:46:07.835409907 +0000 UTC m=+0.030886822 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:46:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:46:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28816c9c30a9f8d04f6d84884d8e3ce2a59e289a51a3312602bca47fa894999/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28816c9c30a9f8d04f6d84884d8e3ce2a59e289a51a3312602bca47fa894999/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28816c9c30a9f8d04f6d84884d8e3ce2a59e289a51a3312602bca47fa894999/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f28816c9c30a9f8d04f6d84884d8e3ce2a59e289a51a3312602bca47fa894999/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:46:07 np0005473739 podman[407120]: 2025-10-07 14:46:07.956275681 +0000 UTC m=+0.151752626 container init 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 10:46:07 np0005473739 podman[407120]: 2025-10-07 14:46:07.965130146 +0000 UTC m=+0.160607051 container start 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:46:07 np0005473739 podman[407120]: 2025-10-07 14:46:07.973646622 +0000 UTC m=+0.169123567 container attach 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:46:07 np0005473739 nova_compute[259550]: 2025-10-07 14:46:07.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:46:07 np0005473739 nova_compute[259550]: 2025-10-07 14:46:07.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:46:07 np0005473739 nova_compute[259550]: 2025-10-07 14:46:07.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:46:08 np0005473739 nova_compute[259550]: 2025-10-07 14:46:08.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2517: 305 pgs: 305 active+clean; 350 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 579 KiB/s rd, 4.2 MiB/s wr, 112 op/s
Oct  7 10:46:08 np0005473739 nova_compute[259550]: 2025-10-07 14:46:08.919 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:46:08 np0005473739 nova_compute[259550]: 2025-10-07 14:46:08.919 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:46:08 np0005473739 nova_compute[259550]: 2025-10-07 14:46:08.919 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:46:08 np0005473739 nova_compute[259550]: 2025-10-07 14:46:08.920 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 6721439b-34d7-4282-bbcd-37424c3f2691 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]: {
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "osd_id": 2,
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "type": "bluestore"
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:    },
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "osd_id": 1,
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "type": "bluestore"
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:    },
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "osd_id": 0,
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:        "type": "bluestore"
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]:    }
Oct  7 10:46:09 np0005473739 sharp_franklin[407137]: }
Oct  7 10:46:09 np0005473739 systemd[1]: libpod-90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d.scope: Deactivated successfully.
Oct  7 10:46:09 np0005473739 systemd[1]: libpod-90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d.scope: Consumed 1.155s CPU time.
Oct  7 10:46:09 np0005473739 podman[407170]: 2025-10-07 14:46:09.187946798 +0000 UTC m=+0.031878987 container died 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507)
Oct  7 10:46:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f28816c9c30a9f8d04f6d84884d8e3ce2a59e289a51a3312602bca47fa894999-merged.mount: Deactivated successfully.
Oct  7 10:46:09 np0005473739 podman[407170]: 2025-10-07 14:46:09.263157138 +0000 UTC m=+0.107089297 container remove 90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_franklin, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:46:09 np0005473739 systemd[1]: libpod-conmon-90ebdcd38f01eecda1e7826d2ccec0bf8afd16ea609366e5331914bcef50b47d.scope: Deactivated successfully.
Oct  7 10:46:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:46:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:46:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:46:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:46:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0ff82a8e-793c-4e9b-a8c9-f7c5759b9bd8 does not exist
Oct  7 10:46:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f8bca1d4-5e82-496e-88b3-ca3321896d90 does not exist
Oct  7 10:46:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:46:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:46:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2518: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 679 KiB/s rd, 4.3 MiB/s wr, 126 op/s
Oct  7 10:46:10 np0005473739 nova_compute[259550]: 2025-10-07 14:46:10.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:11 np0005473739 podman[407237]: 2025-10-07 14:46:11.092187624 +0000 UTC m=+0.071503832 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:46:11 np0005473739 podman[407236]: 2025-10-07 14:46:11.094397203 +0000 UTC m=+0.074335637 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.295 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.322 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.322 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.323 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.353 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.354 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.354 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.354 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.354 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:46:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2519: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 584 KiB/s rd, 2.9 MiB/s wr, 111 op/s
Oct  7 10:46:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:46:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1391210274' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.844 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.956 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.957 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000085 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.960 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.961 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000087 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.966 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.966 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000086 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.971 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:46:12 np0005473739 nova_compute[259550]: 2025-10-07 14:46:12.972 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000088 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.143 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.161 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.162 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=2853MB free_disk=59.80624771118164GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.162 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.162 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 6721439b-34d7-4282-bbcd-37424c3f2691 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b598361e-dd69-448e-ade6-931a3d8c84cb actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.270 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 36592d37-1eb3-431f-8cd3-aa0d320b2e86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.271 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.271 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=59GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.378 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:46:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:46:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3249140781' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.843 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.853 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.874 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.902 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:46:13 np0005473739 nova_compute[259550]: 2025-10-07 14:46:13.903 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2520: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 388 KiB/s rd, 2.2 MiB/s wr, 68 op/s
Oct  7 10:46:15 np0005473739 nova_compute[259550]: 2025-10-07 14:46:15.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:16 np0005473739 nova_compute[259550]: 2025-10-07 14:46:16.562 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:46:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2521: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 704 KiB/s wr, 44 op/s
Oct  7 10:46:16 np0005473739 nova_compute[259550]: 2025-10-07 14:46:16.594 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:46:18 np0005473739 nova_compute[259550]: 2025-10-07 14:46:18.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2522: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 75 KiB/s wr, 16 op/s
Oct  7 10:46:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2523: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 100 KiB/s rd, 75 KiB/s wr, 16 op/s
Oct  7 10:46:20 np0005473739 nova_compute[259550]: 2025-10-07 14:46:20.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.663 2 DEBUG nova.compute.manager [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.664 2 DEBUG nova.compute.manager [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing instance network info cache due to event network-changed-fef175ac-72ee-4716-9970-ff3dccaea9f9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.664 2 DEBUG oslo_concurrency.lockutils [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.664 2 DEBUG oslo_concurrency.lockutils [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.665 2 DEBUG nova.network.neutron [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Refreshing network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.747 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.748 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.748 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.748 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.749 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.750 2 INFO nova.compute.manager [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Terminating instance#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.750 2 DEBUG nova.compute.manager [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:46:21 np0005473739 kernel: tapfef175ac-72 (unregistering): left promiscuous mode
Oct  7 10:46:21 np0005473739 NetworkManager[44949]: <info>  [1759848381.8010] device (tapfef175ac-72): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:46:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:21Z|01502|binding|INFO|Releasing lport fef175ac-72ee-4716-9970-ff3dccaea9f9 from this chassis (sb_readonly=0)
Oct  7 10:46:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:21Z|01503|binding|INFO|Setting lport fef175ac-72ee-4716-9970-ff3dccaea9f9 down in Southbound
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:21 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:21Z|01504|binding|INFO|Removing iface tapfef175ac-72 ovn-installed in OVS
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.819 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f5:9c:ff 10.100.0.5 2001:db8:0:1:f816:3eff:fef5:9cff 2001:db8::f816:3eff:fef5:9cff'], port_security=['fa:16:3e:f5:9c:ff 10.100.0.5 2001:db8:0:1:f816:3eff:fef5:9cff 2001:db8::f816:3eff:fef5:9cff'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28 2001:db8:0:1:f816:3eff:fef5:9cff/64 2001:db8::f816:3eff:fef5:9cff/64', 'neutron:device_id': 'b598361e-dd69-448e-ade6-931a3d8c84cb', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb742e76-491f-4442-ba5f-a90a2210bfd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fef175ac-72ee-4716-9970-ff3dccaea9f9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.824 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fef175ac-72ee-4716-9970-ff3dccaea9f9 in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e unbound from our chassis#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.826 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.845 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b5a98c9a-e29a-4950-b862-99717ea11eeb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.877 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5b9c8ca6-8a24-45dd-8a8a-f18540b16eae]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:21 np0005473739 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Deactivated successfully.
Oct  7 10:46:21 np0005473739 systemd[1]: machine-qemu\x2d169\x2dinstance\x2d00000087.scope: Consumed 18.232s CPU time.
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.882 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e389a431-1383-47b9-be2f-52a77984cbec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:21 np0005473739 systemd-machined[214580]: Machine qemu-169-instance-00000087 terminated.
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.909 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[56f2d332-d5bd-4f23-8ed2-8f8bb7e05016]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.927 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cae84e81-d335-4377-b01d-8944b1be0bce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb5308f20-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:31:0c:94'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 425], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889039, 'reachable_time': 17557, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 40, 'inoctets': 3040, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 40, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 3040, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 40, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407333, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.946 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[41ace20e-f684-4c9c-8554-51327cfc4ab9]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb5308f20-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889050, 'tstamp': 889050}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407334, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb5308f20-41'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 889053, 'tstamp': 889053}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407334, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.949 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5308f20-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.951 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.956 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb5308f20-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.956 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.957 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb5308f20-40, col_values=(('external_ids', {'iface-id': 'ff172ff9-04e5-4286-b498-dfa958b1473c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:46:21 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:21.957 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.988 2 INFO nova.virt.libvirt.driver [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Instance destroyed successfully.#033[00m
Oct  7 10:46:21 np0005473739 nova_compute[259550]: 2025-10-07 14:46:21.989 2 DEBUG nova.objects.instance [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid b598361e-dd69-448e-ade6-931a3d8c84cb obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.001 2 DEBUG nova.virt.libvirt.vif [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:45:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1906442223',display_name='tempest-TestGettingAddress-server-1906442223',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1906442223',id=135,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:45:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-xwt5qdp6',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:45:41Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=b598361e-dd69-448e-ade6-931a3d8c84cb,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": 
"fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.002 2 DEBUG nova.network.os_vif_util [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.192", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.003 2 DEBUG nova.network.os_vif_util [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.003 2 DEBUG os_vif [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.006 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.006 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfef175ac-72, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.010 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.013 2 INFO os_vif [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f5:9c:ff,bridge_name='br-int',has_traffic_filtering=True,id=fef175ac-72ee-4716-9970-ff3dccaea9f9,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfef175ac-72')#033[00m
Oct  7 10:46:22 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.414 2 INFO nova.virt.libvirt.driver [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Deleting instance files /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb_del#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.415 2 INFO nova.virt.libvirt.driver [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Deletion of /var/lib/nova/instances/b598361e-dd69-448e-ade6-931a3d8c84cb_del complete#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.466 2 INFO nova.compute.manager [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Took 0.72 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.467 2 DEBUG oslo.service.loopingcall [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.467 2 DEBUG nova.compute.manager [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:46:22 np0005473739 nova_compute[259550]: 2025-10-07 14:46:22.467 2 DEBUG nova.network.neutron [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2524: 305 pgs: 305 active+clean; 359 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 24 KiB/s wr, 3 op/s
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:46:22
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'volumes', 'images', '.mgr', '.rgw.root', 'default.rgw.meta', 'vms', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'default.rgw.control']
Oct  7 10:46:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:46:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:46:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:46:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:46:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:46:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:46:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:46:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:46:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:46:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:46:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:46:23 np0005473739 nova_compute[259550]: 2025-10-07 14:46:23.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:23 np0005473739 ceph-mgr[74587]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3626055412
Oct  7 10:46:23 np0005473739 nova_compute[259550]: 2025-10-07 14:46:23.870 2 DEBUG nova.compute.manager [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-unplugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:23 np0005473739 nova_compute[259550]: 2025-10-07 14:46:23.870 2 DEBUG oslo_concurrency.lockutils [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:23 np0005473739 nova_compute[259550]: 2025-10-07 14:46:23.871 2 DEBUG oslo_concurrency.lockutils [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:23 np0005473739 nova_compute[259550]: 2025-10-07 14:46:23.871 2 DEBUG oslo_concurrency.lockutils [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:23 np0005473739 nova_compute[259550]: 2025-10-07 14:46:23.871 2 DEBUG nova.compute.manager [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] No waiting events found dispatching network-vif-unplugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:23 np0005473739 nova_compute[259550]: 2025-10-07 14:46:23.872 2 DEBUG nova.compute.manager [req-2d957e6b-235e-4b31-a5fd-8e2e16e417d7 req-51583556-21de-4b56-99bb-2f55eb670537 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-unplugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:46:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:24.113 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=46, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=45) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:46:24 np0005473739 nova_compute[259550]: 2025-10-07 14:46:24.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:24.115 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:46:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2525: 305 pgs: 305 active+clean; 311 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 16 KiB/s wr, 16 op/s
Oct  7 10:46:24 np0005473739 nova_compute[259550]: 2025-10-07 14:46:24.607 2 DEBUG nova.network.neutron [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:46:24 np0005473739 nova_compute[259550]: 2025-10-07 14:46:24.637 2 INFO nova.compute.manager [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Took 2.17 seconds to deallocate network for instance.#033[00m
Oct  7 10:46:24 np0005473739 nova_compute[259550]: 2025-10-07 14:46:24.699 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:24 np0005473739 nova_compute[259550]: 2025-10-07 14:46:24.699 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:24 np0005473739 nova_compute[259550]: 2025-10-07 14:46:24.828 2 DEBUG oslo_concurrency.processutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:46:24 np0005473739 nova_compute[259550]: 2025-10-07 14:46:24.882 2 INFO nova.compute.manager [None req-1cb980d6-c83b-42bf-aa5b-ec5eb306eed2 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Get console output#033[00m
Oct  7 10:46:24 np0005473739 nova_compute[259550]: 2025-10-07 14:46:24.888 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:46:25 np0005473739 podman[407368]: 2025-10-07 14:46:25.104815854 +0000 UTC m=+0.092582832 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 10:46:25 np0005473739 podman[407378]: 2025-10-07 14:46:25.112699564 +0000 UTC m=+0.099153527 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 10:46:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:46:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3434801302' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.356 2 DEBUG oslo_concurrency.processutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.528s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.363 2 DEBUG nova.compute.provider_tree [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.384 2 DEBUG nova.scheduler.client.report [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.418 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.718s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.455 2 INFO nova.scheduler.client.report [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance b598361e-dd69-448e-ade6-931a3d8c84cb#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.519 2 DEBUG oslo_concurrency.lockutils [None req-88b5056a-4386-42a4-a11a-d037e3d90bdb d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.772s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.535 2 DEBUG nova.network.neutron [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updated VIF entry in instance network info cache for port fef175ac-72ee-4716-9970-ff3dccaea9f9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.536 2 DEBUG nova.network.neutron [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Updating instance_info_cache with network_info: [{"id": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "address": "fa:16:3e:f5:9c:ff", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fef5:9cff", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfef175ac-72", "ovs_interfaceid": "fef175ac-72ee-4716-9970-ff3dccaea9f9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.563 2 DEBUG oslo_concurrency.lockutils [req-bc881283-9820-4a93-90fa-84c14cbfd03c req-48c5dfb0-dee0-40f5-a50a-a8c713f57b4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b598361e-dd69-448e-ade6-931a3d8c84cb" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.931 2 DEBUG nova.compute.manager [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.932 2 DEBUG oslo_concurrency.lockutils [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.932 2 DEBUG oslo_concurrency.lockutils [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.932 2 DEBUG oslo_concurrency.lockutils [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.932 2 DEBUG nova.compute.manager [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.932 2 WARNING nova.compute.manager [req-bfd89792-1506-4133-a804-5279a04f9632 req-d0443ca2-7929-4aa6-8822-105f589f92dc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state active and task_state None.#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.971 2 DEBUG nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.971 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.972 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.972 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b598361e-dd69-448e-ade6-931a3d8c84cb-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.972 2 DEBUG nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] No waiting events found dispatching network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.973 2 WARNING nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received unexpected event network-vif-plugged-fef175ac-72ee-4716-9970-ff3dccaea9f9 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.973 2 DEBUG nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Received event network-vif-deleted-fef175ac-72ee-4716-9970-ff3dccaea9f9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.973 2 DEBUG nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.974 2 DEBUG nova.compute.manager [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing instance network info cache due to event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.974 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.974 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:46:25 np0005473739 nova_compute[259550]: 2025-10-07 14:46:25.974 2 DEBUG nova.network.neutron [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:46:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2526: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 6.1 KiB/s wr, 29 op/s
Oct  7 10:46:26 np0005473739 nova_compute[259550]: 2025-10-07 14:46:26.965 2 INFO nova.compute.manager [None req-2043385e-e005-4e1e-b6ef-1ddf16bcb290 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Get console output#033[00m
Oct  7 10:46:26 np0005473739 nova_compute[259550]: 2025-10-07 14:46:26.970 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:46:27 np0005473739 nova_compute[259550]: 2025-10-07 14:46:27.008 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:27 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:27.116 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '46'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.094 2 DEBUG nova.compute.manager [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.095 2 DEBUG oslo_concurrency.lockutils [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.095 2 DEBUG oslo_concurrency.lockutils [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.096 2 DEBUG oslo_concurrency.lockutils [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.096 2 DEBUG nova.compute.manager [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.096 2 WARNING nova.compute.manager [req-c3fc73c9-6230-43f9-bcc1-f6a5087c4bd1 req-056eb8c9-f9ba-4888-b8b3-e49da5ea92a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state active and task_state None.#033[00m
Oct  7 10:46:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.192 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.330 2 DEBUG nova.compute.manager [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.331 2 DEBUG nova.compute.manager [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing instance network info cache due to event network-changed-54bf17d9-ad25-4326-b981-fb4fe6afaf7c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.331 2 DEBUG oslo_concurrency.lockutils [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.332 2 DEBUG oslo_concurrency.lockutils [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.332 2 DEBUG nova.network.neutron [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Refreshing network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.461 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.462 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.463 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.464 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.465 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.467 2 INFO nova.compute.manager [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Terminating instance
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.469 2 DEBUG nova.compute.manager [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:46:28 np0005473739 kernel: tap54bf17d9-ad (unregistering): left promiscuous mode
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.522 2 DEBUG nova.network.neutron [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updated VIF entry in instance network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.523 2 DEBUG nova.network.neutron [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:46:28 np0005473739 NetworkManager[44949]: <info>  [1759848388.5261] device (tap54bf17d9-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:46:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:28Z|01505|binding|INFO|Releasing lport 54bf17d9-ad25-4326-b981-fb4fe6afaf7c from this chassis (sb_readonly=0)
Oct  7 10:46:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:28Z|01506|binding|INFO|Setting lport 54bf17d9-ad25-4326-b981-fb4fe6afaf7c down in Southbound
Oct  7 10:46:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:28Z|01507|binding|INFO|Removing iface tap54bf17d9-ad ovn-installed in OVS
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.541 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:28:be:4b 10.100.0.9 2001:db8:0:1:f816:3eff:fe28:be4b 2001:db8::f816:3eff:fe28:be4b'], port_security=['fa:16:3e:28:be:4b 10.100.0.9 2001:db8:0:1:f816:3eff:fe28:be4b 2001:db8::f816:3eff:fe28:be4b'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28 2001:db8:0:1:f816:3eff:fe28:be4b/64 2001:db8::f816:3eff:fe28:be4b/64', 'neutron:device_id': '6721439b-34d7-4282-bbcd-37424c3f2691', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb742e76-491f-4442-ba5f-a90a2210bfd3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0dad457a-42a0-40e6-bb17-b6ab5f921cac, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=54bf17d9-ad25-4326-b981-fb4fe6afaf7c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.542 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c in datapath b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e unbound from our chassis
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.544 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.545 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7c75f240-18d3-40f4-b562-c1e8ec0f175a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.545 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e namespace which is not needed anymore
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.554 2 DEBUG oslo_concurrency.lockutils [req-77fbdb24-3438-42f6-9b70-e7de51b8bd83 req-04f2f9cc-d43c-4ca5-bdf0-352474c195f8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:46:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2527: 305 pgs: 305 active+clean; 279 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 5.1 KiB/s wr, 29 op/s
Oct  7 10:46:28 np0005473739 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Deactivated successfully.
Oct  7 10:46:28 np0005473739 systemd[1]: machine-qemu\x2d167\x2dinstance\x2d00000085.scope: Consumed 16.699s CPU time.
Oct  7 10:46:28 np0005473739 systemd-machined[214580]: Machine qemu-167-instance-00000085 terminated.
Oct  7 10:46:28 np0005473739 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [NOTICE]   (404455) : haproxy version is 2.8.14-c23fe91
Oct  7 10:46:28 np0005473739 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [NOTICE]   (404455) : path to executable is /usr/sbin/haproxy
Oct  7 10:46:28 np0005473739 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [WARNING]  (404455) : Exiting Master process...
Oct  7 10:46:28 np0005473739 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [WARNING]  (404455) : Exiting Master process...
Oct  7 10:46:28 np0005473739 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [ALERT]    (404455) : Current worker (404457) exited with code 143 (Terminated)
Oct  7 10:46:28 np0005473739 neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e[404451]: [WARNING]  (404455) : All workers exited. Exiting... (0)
Oct  7 10:46:28 np0005473739 systemd[1]: libpod-5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee.scope: Deactivated successfully.
Oct  7 10:46:28 np0005473739 podman[407458]: 2025-10-07 14:46:28.680459706 +0000 UTC m=+0.043055966 container died 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.693 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.703 2 INFO nova.virt.libvirt.driver [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Instance destroyed successfully.
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.704 2 DEBUG nova.objects.instance [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 6721439b-34d7-4282-bbcd-37424c3f2691 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:46:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee-userdata-shm.mount: Deactivated successfully.
Oct  7 10:46:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c7939ed48599346f2fcd08610b2ba91dee0ea97c56d4a7dfd84d89d159ce37fa-merged.mount: Deactivated successfully.
Oct  7 10:46:28 np0005473739 podman[407458]: 2025-10-07 14:46:28.723268274 +0000 UTC m=+0.085864534 container cleanup 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.724 2 DEBUG nova.virt.libvirt.vif [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:44:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-965086068',display_name='tempest-TestGettingAddress-server-965086068',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-965086068',id=133,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJduBxT7TahFqoelvVZ/7dLsZLsZopzCWy0c1s/fKLvT5f1/UUPBtVog3rnrfhVqOaBhsvpFnl4NRHZsXU2RV8U7aQqRvvSPu/+lGVEKuUSnHnWjBec95G9VYq2BFT7AFA==',key_name='tempest-TestGettingAddress-1301392792',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:45:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-0owwxica',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:45:08Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6721439b-34d7-4282-bbcd-37424c3f2691,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.724 2 DEBUG nova.network.os_vif_util [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.726 2 DEBUG nova.network.os_vif_util [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.726 2 DEBUG os_vif [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.728 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap54bf17d9-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.736 2 INFO os_vif [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:28:be:4b,bridge_name='br-int',has_traffic_filtering=True,id=54bf17d9-ad25-4326-b981-fb4fe6afaf7c,network=Network(b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap54bf17d9-ad')
Oct  7 10:46:28 np0005473739 systemd[1]: libpod-conmon-5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee.scope: Deactivated successfully.
Oct  7 10:46:28 np0005473739 podman[407496]: 2025-10-07 14:46:28.813977396 +0000 UTC m=+0.056678338 container remove 5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.820 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c3075aad-aca7-4c41-837c-5f500a944a57]: (4, ('Tue Oct  7 02:46:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e (5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee)\n5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee\nTue Oct  7 02:46:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e (5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee)\n5d80a86c4a08fd5a18f13a45dd6fdb0d28daf471f99c5243f05fbe2e2bfcf1ee\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.822 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[96044539-de61-4b7e-968c-f42d3f0b97f3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.823 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb5308f20-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:46:28 np0005473739 kernel: tapb5308f20-40: left promiscuous mode
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.829 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:46:28 np0005473739 nova_compute[259550]: 2025-10-07 14:46:28.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.844 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1e1ded75-a976-4518-b02e-e409d2db23ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.889 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70c5dfee-cd88-47a7-b026-767c5f52d167]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.890 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e3baea-ad4e-4161-a4a0-26ece632dd8b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.910 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb8858c-812a-4cfd-abbc-07fcd03424f7]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 889032, 'reachable_time': 34764, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407529, 'error': None, 'target': 'ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:46:28 np0005473739 systemd[1]: run-netns-ovnmeta\x2db5308f20\x2d4ad6\x2d4f12\x2da55c\x2d6d0b55a0cb4e.mount: Deactivated successfully.
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.915 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Oct  7 10:46:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:28.915 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[64ae2793-f2a8-4469-8237-9ed9b3c92e42]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:46:29 np0005473739 nova_compute[259550]: 2025-10-07 14:46:29.193 2 INFO nova.virt.libvirt.driver [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Deleting instance files /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691_del
Oct  7 10:46:29 np0005473739 nova_compute[259550]: 2025-10-07 14:46:29.194 2 INFO nova.virt.libvirt.driver [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Deletion of /var/lib/nova/instances/6721439b-34d7-4282-bbcd-37424c3f2691_del complete
Oct  7 10:46:29 np0005473739 nova_compute[259550]: 2025-10-07 14:46:29.260 2 INFO nova.compute.manager [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Took 0.79 seconds to destroy the instance on the hypervisor.
Oct  7 10:46:29 np0005473739 nova_compute[259550]: 2025-10-07 14:46:29.261 2 DEBUG oslo.service.loopingcall [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:46:29 np0005473739 nova_compute[259550]: 2025-10-07 14:46:29.261 2 DEBUG nova.compute.manager [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:46:29 np0005473739 nova_compute[259550]: 2025-10-07 14:46:29.261 2 DEBUG nova.network.neutron [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:46:29 np0005473739 nova_compute[259550]: 2025-10-07 14:46:29.605 2 INFO nova.compute.manager [None req-c3cf273d-9b0a-400b-805e-c260ef5c81bc 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Get console output#033[00m
Oct  7 10:46:29 np0005473739 nova_compute[259550]: 2025-10-07 14:46:29.611 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.427 2 DEBUG nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.428 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.428 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.428 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.428 2 DEBUG nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.428 2 WARNING nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state active and task_state None.#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.429 2 DEBUG nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.429 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.429 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.429 2 DEBUG oslo_concurrency.lockutils [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.429 2 DEBUG nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.430 2 WARNING nova.compute.manager [req-22c0fea0-0787-4b35-a8e2-4a60fdba7af5 req-e36000eb-18f6-445b-81a4-deb272364c2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state active and task_state None.#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.542 2 DEBUG nova.compute.manager [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-unplugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.543 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.543 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.543 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.543 2 DEBUG nova.compute.manager [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] No waiting events found dispatching network-vif-unplugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.544 2 DEBUG nova.compute.manager [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-unplugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.544 2 DEBUG nova.compute.manager [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.544 2 DEBUG nova.compute.manager [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing instance network info cache due to event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.544 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.544 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:46:30 np0005473739 nova_compute[259550]: 2025-10-07 14:46:30.545 2 DEBUG nova.network.neutron [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:46:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2528: 305 pgs: 305 active+clean; 232 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 13 KiB/s wr, 36 op/s
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.256 2 DEBUG nova.network.neutron [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.300 2 INFO nova.compute.manager [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Took 3.04 seconds to deallocate network for instance.#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.382 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.383 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.484 2 DEBUG oslo_concurrency.processutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.570 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.570 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.570 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.571 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.571 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.572 2 INFO nova.compute.manager [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Terminating instance#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.573 2 DEBUG nova.compute.manager [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2529: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 13 KiB/s wr, 57 op/s
Oct  7 10:46:32 np0005473739 kernel: tap44d8ec34-7f (unregistering): left promiscuous mode
Oct  7 10:46:32 np0005473739 NetworkManager[44949]: <info>  [1759848392.6345] device (tap44d8ec34-7f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:46:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:32Z|01508|binding|INFO|Releasing lport 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 from this chassis (sb_readonly=0)
Oct  7 10:46:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:32Z|01509|binding|INFO|Setting lport 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 down in Southbound
Oct  7 10:46:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:32Z|01510|binding|INFO|Removing iface tap44d8ec34-7f ovn-installed in OVS
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.681 2 DEBUG nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.682 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.689 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:97:e5 10.100.0.3'], port_security=['fa:16:3e:0c:97:e5 10.100.0.3'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.3/28', 'neutron:device_id': '36592d37-1eb3-431f-8cd3-aa0d320b2e86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a790910-04e4-4ed9-9209-184147e62b8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3c2506e0-287c-4f8f-b28e-99b2bb4e4542', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17109e69-8b68-4e08-a1dd-9e4119d12812, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=44d8ec34-7fbc-440a-bab4-b9b8d29ff249) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.690 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 in datapath 8a790910-04e4-4ed9-9209-184147e62b8b unbound from our chassis#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.691 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 8a790910-04e4-4ed9-9209-184147e62b8b#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.682 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.697 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.014s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.697 2 DEBUG nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] No waiting events found dispatching network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.698 2 WARNING nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received unexpected event network-vif-plugged-54bf17d9-ad25-4326-b981-fb4fe6afaf7c for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.698 2 DEBUG nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Received event network-vif-deleted-54bf17d9-ad25-4326-b981-fb4fe6afaf7c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.698 2 DEBUG nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.698 2 DEBUG nova.compute.manager [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing instance network info cache due to event network-changed-44d8ec34-7fbc-440a-bab4-b9b8d29ff249. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.698 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.699 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.699 2 DEBUG nova.network.neutron [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Refreshing network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.717 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0084a6e7-d41b-47e3-8b25-0969043ed4c4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015212169124829069 of space, bias 1.0, pg target 0.45636507374487206 quantized to 32 (current 32)
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:46:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:46:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:46:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/445394989' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:46:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:46:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/445394989' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.757 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b446ef1c-1829-4559-98f2-8c0d535d2194]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.760 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[e10819de-a06a-4169-be00-fe8f27622b39]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:32 np0005473739 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Deactivated successfully.
Oct  7 10:46:32 np0005473739 systemd[1]: machine-qemu\x2d170\x2dinstance\x2d00000088.scope: Consumed 19.169s CPU time.
Oct  7 10:46:32 np0005473739 systemd-machined[214580]: Machine qemu-170-instance-00000088 terminated.
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.824 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0e9b66af-0a5e-495c-b737-80cc84ab6ae0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.840 2 INFO nova.virt.libvirt.driver [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Instance destroyed successfully.#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.841 2 DEBUG nova.objects.instance [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 36592d37-1eb3-431f-8cd3-aa0d320b2e86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.847 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9f8b81fb-436d-4e25-aeb9-47c1fdfb2b7d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap8a790910-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:db:8c:b2'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 428], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890932, 'reachable_time': 40705, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407570, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.862 2 DEBUG nova.virt.libvirt.vif [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:45:36Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1103641822',display_name='tempest-TestNetworkBasicOps-server-1103641822',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1103641822',id=136,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBFPMP7i8ec4yKwVDxErtIGw1wPOouS90pUC/2M/KSrqUJIp7tpUDqB6OrTVom+DAk9JZ3iNgnjUuCMQr+/u1V//z0y/ybLMjjWdhld2MXrTrpN1FeSHNloBYJfIHxQTEMw==',key_name='tempest-TestNetworkBasicOps-329000454',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:45:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-7sywmp2b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:45:47Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=36592d37-1eb3-431f-8cd3-aa0d320b2e86,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 
4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.862 2 DEBUG nova.network.os_vif_util [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.188", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.863 2 DEBUG nova.network.os_vif_util [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.863 2 DEBUG os_vif [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.865 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap44d8ec34-7f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.871 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.873 2 INFO os_vif [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0c:97:e5,bridge_name='br-int',has_traffic_filtering=True,id=44d8ec34-7fbc-440a-bab4-b9b8d29ff249,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap44d8ec34-7f')#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.874 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d1238902-2537-4b93-a74c-48c1879dee8e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap8a790910-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890945, 'tstamp': 890945}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407571, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap8a790910-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 890948, 'tstamp': 890948}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 407571, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.875 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a790910-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.878 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a790910-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.878 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.878 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap8a790910-00, col_values=(('external_ids', {'iface-id': '473a96c8-dafe-4956-8316-8a82bc1c870e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:46:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:32.879 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:46:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/199723312' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.992 2 DEBUG oslo_concurrency.processutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.508s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:46:32 np0005473739 nova_compute[259550]: 2025-10-07 14:46:32.997 2 DEBUG nova.compute.provider_tree [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.018 2 DEBUG nova.scheduler.client.report [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.040 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.658s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.058 2 DEBUG nova.network.neutron [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updated VIF entry in instance network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.059 2 DEBUG nova.network.neutron [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.069 2 DEBUG nova.network.neutron [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updated VIF entry in instance network info cache for port 54bf17d9-ad25-4326-b981-fb4fe6afaf7c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.069 2 DEBUG nova.network.neutron [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Updating instance_info_cache with network_info: [{"id": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "address": "fa:16:3e:28:be:4b", "network": {"id": "b5308f20-4ad6-4f12-a55c-6d0b55a0cb4e", "bridge": "br-int", "label": "tempest-network-smoke--434014193", "subnets": [{"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe28:be4b", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap54bf17d9-ad", "ovs_interfaceid": "54bf17d9-ad25-4326-b981-fb4fe6afaf7c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.084 2 INFO nova.scheduler.client.report [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 6721439b-34d7-4282-bbcd-37424c3f2691#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.088 2 DEBUG oslo_concurrency.lockutils [req-4b137037-baad-48a2-8cd1-d269717bf83a req-51c4dce0-c07c-468b-b597-292157aed0df 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.097 2 DEBUG oslo_concurrency.lockutils [req-866388e9-3c64-4c4a-a4e5-52adc55927ff req-012b53ee-6c60-42c7-876e-c3cdb7f465b9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6721439b-34d7-4282-bbcd-37424c3f2691" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.114 2 DEBUG nova.compute.manager [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-unplugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.114 2 DEBUG oslo_concurrency.lockutils [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.114 2 DEBUG oslo_concurrency.lockutils [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.114 2 DEBUG oslo_concurrency.lockutils [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.115 2 DEBUG nova.compute.manager [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] No waiting events found dispatching network-vif-unplugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.115 2 DEBUG nova.compute.manager [req-7657e352-3554-4fac-841c-32e4c2a3abcb req-93c1053d-8003-4310-9d97-c9174f34ff7b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-unplugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.150 2 DEBUG oslo_concurrency.lockutils [None req-1d402ecd-7f3f-4296-97e2-5f22970216b5 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6721439b-34d7-4282-bbcd-37424c3f2691" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.688s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.194 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.743 2 INFO nova.virt.libvirt.driver [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Deleting instance files /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86_del#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.744 2 INFO nova.virt.libvirt.driver [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Deletion of /var/lib/nova/instances/36592d37-1eb3-431f-8cd3-aa0d320b2e86_del complete#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.816 2 INFO nova.compute.manager [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Took 1.24 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.817 2 DEBUG oslo.service.loopingcall [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.817 2 DEBUG nova.compute.manager [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:46:33 np0005473739 nova_compute[259550]: 2025-10-07 14:46:33.817 2 DEBUG nova.network.neutron [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:46:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2530: 305 pgs: 305 active+clean; 143 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 19 KiB/s wr, 76 op/s
Oct  7 10:46:34 np0005473739 nova_compute[259550]: 2025-10-07 14:46:34.611 2 DEBUG nova.network.neutron [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updated VIF entry in instance network info cache for port 44d8ec34-7fbc-440a-bab4-b9b8d29ff249. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:46:34 np0005473739 nova_compute[259550]: 2025-10-07 14:46:34.612 2 DEBUG nova.network.neutron [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updating instance_info_cache with network_info: [{"id": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "address": "fa:16:3e:0c:97:e5", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.3", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap44d8ec34-7f", "ovs_interfaceid": "44d8ec34-7fbc-440a-bab4-b9b8d29ff249", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:46:34 np0005473739 nova_compute[259550]: 2025-10-07 14:46:34.633 2 DEBUG oslo_concurrency.lockutils [req-0c7c568f-1044-4d02-8953-2baf9d614a8a req-3ab43920-519f-413c-a6f5-21f594d24146 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-36592d37-1eb3-431f-8cd3-aa0d320b2e86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.058 2 DEBUG nova.network.neutron [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.084 2 INFO nova.compute.manager [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Took 1.27 seconds to deallocate network for instance.#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.134 2 DEBUG nova.compute.manager [req-48ef7a39-060d-4b77-a316-2952606bc25b req-39d2e024-0a8b-4cd3-9475-eb9393f99c2d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-deleted-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.154 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.155 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.218 2 DEBUG nova.compute.manager [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.219 2 DEBUG oslo_concurrency.lockutils [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.219 2 DEBUG oslo_concurrency.lockutils [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.219 2 DEBUG oslo_concurrency.lockutils [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.219 2 DEBUG nova.compute.manager [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] No waiting events found dispatching network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.220 2 WARNING nova.compute.manager [req-654ea898-ad4b-48f7-bd6f-7bbc544832b7 req-31bc120e-81d2-4e40-95b7-72ae14333586 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Received unexpected event network-vif-plugged-44d8ec34-7fbc-440a-bab4-b9b8d29ff249 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.222 2 DEBUG oslo_concurrency.processutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:46:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:46:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1126728353' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.654 2 DEBUG oslo_concurrency.processutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.432s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.662 2 DEBUG nova.compute.provider_tree [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.681 2 DEBUG nova.scheduler.client.report [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.710 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.555s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.743 2 INFO nova.scheduler.client.report [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 36592d37-1eb3-431f-8cd3-aa0d320b2e86#033[00m
Oct  7 10:46:35 np0005473739 nova_compute[259550]: 2025-10-07 14:46:35.816 2 DEBUG oslo_concurrency.lockutils [None req-c67f66b9-9ed5-427a-944e-cfaaa94fccb3 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "36592d37-1eb3-431f-8cd3-aa0d320b2e86" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2531: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 53 KiB/s rd, 17 KiB/s wr, 71 op/s
Oct  7 10:46:36 np0005473739 nova_compute[259550]: 2025-10-07 14:46:36.987 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848381.986516, b598361e-dd69-448e-ade6-931a3d8c84cb => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:46:36 np0005473739 nova_compute[259550]: 2025-10-07 14:46:36.987 2 INFO nova.compute.manager [-] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.021 2 DEBUG nova.compute.manager [None req-969980e2-1d86-4e9d-803f-7f2e00cd9506 - - - - - -] [instance: b598361e-dd69-448e-ade6-931a3d8c84cb] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.102 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.103 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.103 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.104 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.105 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.106 2 INFO nova.compute.manager [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Terminating instance#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.107 2 DEBUG nova.compute.manager [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:46:37 np0005473739 kernel: tap706c4bba-81 (unregistering): left promiscuous mode
Oct  7 10:46:37 np0005473739 NetworkManager[44949]: <info>  [1759848397.2078] device (tap706c4bba-81): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:46:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:37Z|01511|binding|INFO|Releasing lport 706c4bba-81cd-4c03-ac73-d8225f4ea15f from this chassis (sb_readonly=0)
Oct  7 10:46:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:37Z|01512|binding|INFO|Setting lport 706c4bba-81cd-4c03-ac73-d8225f4ea15f down in Southbound
Oct  7 10:46:37 np0005473739 ovn_controller[151684]: 2025-10-07T14:46:37Z|01513|binding|INFO|Removing iface tap706c4bba-81 ovn-installed in OVS
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.235 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:57:3a:c8 10.100.0.13'], port_security=['fa:16:3e:57:3a:c8 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-8a790910-04e4-4ed9-9209-184147e62b8b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '8', 'neutron:security_group_ids': '497680c8-b146-4d33-ad78-53e98937483b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=17109e69-8b68-4e08-a1dd-9e4119d12812, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=706c4bba-81cd-4c03-ac73-d8225f4ea15f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.236 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 706c4bba-81cd-4c03-ac73-d8225f4ea15f in datapath 8a790910-04e4-4ed9-9209-184147e62b8b unbound from our chassis#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.237 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 8a790910-04e4-4ed9-9209-184147e62b8b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.239 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a8f4d2d7-7c9d-47fa-a601-1b5ae12926f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.243 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b namespace which is not needed anymore#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:37 np0005473739 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Deactivated successfully.
Oct  7 10:46:37 np0005473739 systemd[1]: machine-qemu\x2d168\x2dinstance\x2d00000086.scope: Consumed 15.179s CPU time.
Oct  7 10:46:37 np0005473739 systemd-machined[214580]: Machine qemu-168-instance-00000086 terminated.
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.331 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.338 2 DEBUG nova.compute.manager [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.339 2 DEBUG nova.compute.manager [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing instance network info cache due to event network-changed-706c4bba-81cd-4c03-ac73-d8225f4ea15f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.339 2 DEBUG oslo_concurrency.lockutils [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.339 2 DEBUG oslo_concurrency.lockutils [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.340 2 DEBUG nova.network.neutron [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Refreshing network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.344 2 INFO nova.virt.libvirt.driver [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Instance destroyed successfully.#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.344 2 DEBUG nova.objects.instance [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.363 2 DEBUG nova.virt.libvirt.vif [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:45:17Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-229487908',display_name='tempest-TestNetworkBasicOps-server-229487908',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-229487908',id=134,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGRTZ02aMW7THUpB/Fbhor+/2PP8itgwraBW7FQrHv4wlPChnOyVpBM/gFf/sXxSnz2gDDJ7JKqZawr/DUsuuU6d+XBKYnr5LnbNHtnmtR34eSX9Sg3yAhhpxjufRRETtA==',key_name='tempest-TestNetworkBasicOps-262800501',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:45:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-82k3uyu4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:45:26Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.363 2 DEBUG nova.network.os_vif_util [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.204", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.364 2 DEBUG nova.network.os_vif_util [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.365 2 DEBUG os_vif [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.367 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap706c4bba-81, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.371 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.372 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.374 2 INFO os_vif [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:57:3a:c8,bridge_name='br-int',has_traffic_filtering=True,id=706c4bba-81cd-4c03-ac73-d8225f4ea15f,network=Network(8a790910-04e4-4ed9-9209-184147e62b8b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap706c4bba-81')#033[00m
Oct  7 10:46:37 np0005473739 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [NOTICE]   (405165) : haproxy version is 2.8.14-c23fe91
Oct  7 10:46:37 np0005473739 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [NOTICE]   (405165) : path to executable is /usr/sbin/haproxy
Oct  7 10:46:37 np0005473739 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [WARNING]  (405165) : Exiting Master process...
Oct  7 10:46:37 np0005473739 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [WARNING]  (405165) : Exiting Master process...
Oct  7 10:46:37 np0005473739 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [ALERT]    (405165) : Current worker (405167) exited with code 143 (Terminated)
Oct  7 10:46:37 np0005473739 neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b[405161]: [WARNING]  (405165) : All workers exited. Exiting... (0)
Oct  7 10:46:37 np0005473739 systemd[1]: libpod-c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78.scope: Deactivated successfully.
Oct  7 10:46:37 np0005473739 podman[407648]: 2025-10-07 14:46:37.412886327 +0000 UTC m=+0.055699271 container died c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:46:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78-userdata-shm.mount: Deactivated successfully.
Oct  7 10:46:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-36405587fcc29b64184ee9f2074d129e3df4d671c99cc0a15f619b8e476e7194-merged.mount: Deactivated successfully.
Oct  7 10:46:37 np0005473739 podman[407648]: 2025-10-07 14:46:37.485616941 +0000 UTC m=+0.128429875 container cleanup c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:46:37 np0005473739 systemd[1]: libpod-conmon-c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78.scope: Deactivated successfully.
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.541 2 DEBUG nova.compute.manager [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.542 2 DEBUG oslo_concurrency.lockutils [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.543 2 DEBUG oslo_concurrency.lockutils [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.543 2 DEBUG oslo_concurrency.lockutils [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.543 2 DEBUG nova.compute.manager [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.544 2 DEBUG nova.compute.manager [req-939c2bf5-b597-44a4-8f11-62f3b49d2919 req-87dfd414-8b42-4557-a6aa-e6d68a1787b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-unplugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:46:37 np0005473739 podman[407698]: 2025-10-07 14:46:37.578094029 +0000 UTC m=+0.068588755 container remove c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.587 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0a6bfe9d-b246-4eab-be49-a6e43278a1d2]: (4, ('Tue Oct  7 02:46:37 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b (c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78)\nc3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78\nTue Oct  7 02:46:37 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b (c3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78)\nc3b9ea51585b2f4e1680a9b8c105eaa0623f4d3428df1f6219a442ccde396f78\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.589 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d4a5cd98-4459-4a5c-9fe0-ea729b15668a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.590 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a790910-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:46:37 np0005473739 kernel: tap8a790910-00: left promiscuous mode
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.592 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.594 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.597 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[552e52bf-cbf9-4078-bd5b-aafdb0402a79]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.633 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[954c5a5d-ce8b-4193-b683-a0931ab2826c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.635 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5cec5436-8f1f-4f13-bf62-cc151c7873f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.656 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cbd51bfe-de0c-4f2b-94ea-42b203301f80]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 890925, 'reachable_time': 30535, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 407713, 'error': None, 'target': 'ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:37 np0005473739 systemd[1]: run-netns-ovnmeta\x2d8a790910\x2d04e4\x2d4ed9\x2d9209\x2d184147e62b8b.mount: Deactivated successfully.
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.660 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-8a790910-04e4-4ed9-9209-184147e62b8b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:46:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:46:37.661 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[40e2dccb-0af3-49bd-af0e-e831c93b9ea4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.950 2 INFO nova.virt.libvirt.driver [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Deleting instance files /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_del#033[00m
Oct  7 10:46:37 np0005473739 nova_compute[259550]: 2025-10-07 14:46:37.951 2 INFO nova.virt.libvirt.driver [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Deletion of /var/lib/nova/instances/5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed_del complete#033[00m
Oct  7 10:46:38 np0005473739 nova_compute[259550]: 2025-10-07 14:46:38.008 2 INFO nova.compute.manager [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Took 0.90 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:46:38 np0005473739 nova_compute[259550]: 2025-10-07 14:46:38.010 2 DEBUG oslo.service.loopingcall [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:46:38 np0005473739 nova_compute[259550]: 2025-10-07 14:46:38.010 2 DEBUG nova.compute.manager [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:46:38 np0005473739 nova_compute[259550]: 2025-10-07 14:46:38.010 2 DEBUG nova.network.neutron [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:46:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:38 np0005473739 nova_compute[259550]: 2025-10-07 14:46:38.196 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2532: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 14 KiB/s wr, 57 op/s
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.345 2 DEBUG nova.network.neutron [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.369 2 INFO nova.compute.manager [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Took 1.36 seconds to deallocate network for instance.#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.429 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.430 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.477 2 DEBUG oslo_concurrency.processutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.678 2 DEBUG nova.compute.manager [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.680 2 DEBUG oslo_concurrency.lockutils [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.680 2 DEBUG oslo_concurrency.lockutils [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.680 2 DEBUG oslo_concurrency.lockutils [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.681 2 DEBUG nova.compute.manager [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] No waiting events found dispatching network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.681 2 WARNING nova.compute.manager [req-2154e09b-9362-4e59-868a-87ee4d7b22e6 req-9f61893b-5ba1-48ce-bbf1-b96631eadcc9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received unexpected event network-vif-plugged-706c4bba-81cd-4c03-ac73-d8225f4ea15f for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.792 2 DEBUG nova.network.neutron [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updated VIF entry in instance network info cache for port 706c4bba-81cd-4c03-ac73-d8225f4ea15f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.793 2 DEBUG nova.network.neutron [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [{"id": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "address": "fa:16:3e:57:3a:c8", "network": {"id": "8a790910-04e4-4ed9-9209-184147e62b8b", "bridge": "br-int", "label": "tempest-network-smoke--669468088", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap706c4bba-81", "ovs_interfaceid": "706c4bba-81cd-4c03-ac73-d8225f4ea15f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.833 2 DEBUG oslo_concurrency.lockutils [req-cc9917a8-97ee-4fe0-a625-2ae4cf166f17 req-a2b11688-fd50-4a42-8cb3-d114f22f97e9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:46:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:46:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/422391706' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.913 2 DEBUG oslo_concurrency.processutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.917 2 DEBUG nova.compute.manager [req-9b411478-6eff-4ef4-911e-7818dfb93ca9 req-c097b37e-571f-4a46-9e45-37bc71f22ccb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Received event network-vif-deleted-706c4bba-81cd-4c03-ac73-d8225f4ea15f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.917 2 INFO nova.compute.manager [req-9b411478-6eff-4ef4-911e-7818dfb93ca9 req-c097b37e-571f-4a46-9e45-37bc71f22ccb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Neutron deleted interface 706c4bba-81cd-4c03-ac73-d8225f4ea15f; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.918 2 DEBUG nova.network.neutron [req-9b411478-6eff-4ef4-911e-7818dfb93ca9 req-c097b37e-571f-4a46-9e45-37bc71f22ccb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.924 2 DEBUG nova.compute.provider_tree [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.936 2 DEBUG nova.scheduler.client.report [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.940 2 DEBUG nova.compute.manager [req-9b411478-6eff-4ef4-911e-7818dfb93ca9 req-c097b37e-571f-4a46-9e45-37bc71f22ccb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Detach interface failed, port_id=706c4bba-81cd-4c03-ac73-d8225f4ea15f, reason: Instance 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.957 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.527s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:39 np0005473739 nova_compute[259550]: 2025-10-07 14:46:39.996 2 INFO nova.scheduler.client.report [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed#033[00m
Oct  7 10:46:40 np0005473739 nova_compute[259550]: 2025-10-07 14:46:40.070 2 DEBUG oslo_concurrency.lockutils [None req-6e9c86b3-e195-43a3-9cf9-520c8b7d86ba 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.967s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:46:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2533: 305 pgs: 305 active+clean; 73 MiB data, 980 MiB used, 59 GiB / 60 GiB avail; 46 KiB/s rd, 14 KiB/s wr, 61 op/s
Oct  7 10:46:41 np0005473739 nova_compute[259550]: 2025-10-07 14:46:41.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:41 np0005473739 nova_compute[259550]: 2025-10-07 14:46:41.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:42 np0005473739 podman[407739]: 2025-10-07 14:46:42.104298567 +0000 UTC m=+0.082557955 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:46:42 np0005473739 podman[407738]: 2025-10-07 14:46:42.105001175 +0000 UTC m=+0.083757247 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:46:42 np0005473739 nova_compute[259550]: 2025-10-07 14:46:42.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2534: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 54 KiB/s rd, 7.2 KiB/s wr, 78 op/s
Oct  7 10:46:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:43 np0005473739 nova_compute[259550]: 2025-10-07 14:46:43.197 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:43 np0005473739 nova_compute[259550]: 2025-10-07 14:46:43.702 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848388.700753, 6721439b-34d7-4282-bbcd-37424c3f2691 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:46:43 np0005473739 nova_compute[259550]: 2025-10-07 14:46:43.702 2 INFO nova.compute.manager [-] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:46:43 np0005473739 nova_compute[259550]: 2025-10-07 14:46:43.723 2 DEBUG nova.compute.manager [None req-43dfb925-9f01-48b0-b2ee-a9c2f4043f18 - - - - - -] [instance: 6721439b-34d7-4282-bbcd-37424c3f2691] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:46:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2535: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 6.9 KiB/s wr, 57 op/s
Oct  7 10:46:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2536: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct  7 10:46:47 np0005473739 nova_compute[259550]: 2025-10-07 14:46:47.373 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:47 np0005473739 nova_compute[259550]: 2025-10-07 14:46:47.822 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848392.8167634, 36592d37-1eb3-431f-8cd3-aa0d320b2e86 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:46:47 np0005473739 nova_compute[259550]: 2025-10-07 14:46:47.823 2 INFO nova.compute.manager [-] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:46:47 np0005473739 nova_compute[259550]: 2025-10-07 14:46:47.839 2 DEBUG nova.compute.manager [None req-526f9d01-0556-496f-8db8-14ebad8ce9c8 - - - - - -] [instance: 36592d37-1eb3-431f-8cd3-aa0d320b2e86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:46:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:48 np0005473739 nova_compute[259550]: 2025-10-07 14:46:48.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2537: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:46:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2538: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:46:52 np0005473739 nova_compute[259550]: 2025-10-07 14:46:52.342 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848397.339797, 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:46:52 np0005473739 nova_compute[259550]: 2025-10-07 14:46:52.342 2 INFO nova.compute.manager [-] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:46:52 np0005473739 nova_compute[259550]: 2025-10-07 14:46:52.375 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:52 np0005473739 nova_compute[259550]: 2025-10-07 14:46:52.389 2 DEBUG nova.compute.manager [None req-467b8fd1-ec8b-4832-b886-2162dd231f4b - - - - - -] [instance: 5655bad4-63e9-44c6-8ff3-ef9b7f5c28ed] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:46:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2539: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 682 B/s wr, 23 op/s
Oct  7 10:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:46:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:46:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:53 np0005473739 nova_compute[259550]: 2025-10-07 14:46:53.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:53 np0005473739 nova_compute[259550]: 2025-10-07 14:46:53.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:46:53 np0005473739 nova_compute[259550]: 2025-10-07 14:46:53.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 10:46:54 np0005473739 nova_compute[259550]: 2025-10-07 14:46:54.169 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 10:46:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2540: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:46:56 np0005473739 podman[407778]: 2025-10-07 14:46:56.121849257 +0000 UTC m=+0.110775325 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller)
Oct  7 10:46:56 np0005473739 podman[407777]: 2025-10-07 14:46:56.1302297 +0000 UTC m=+0.112966474 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:46:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2541: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:46:57 np0005473739 nova_compute[259550]: 2025-10-07 14:46:57.170 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:46:57 np0005473739 nova_compute[259550]: 2025-10-07 14:46:57.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:57 np0005473739 nova_compute[259550]: 2025-10-07 14:46:57.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:46:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:46:58 np0005473739 nova_compute[259550]: 2025-10-07 14:46:58.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:46:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2542: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:46:59 np0005473739 nova_compute[259550]: 2025-10-07 14:46:59.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:46:59 np0005473739 nova_compute[259550]: 2025-10-07 14:46:59.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:47:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:00.079 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:00.079 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:00.079 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2543: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:47:00 np0005473739 nova_compute[259550]: 2025-10-07 14:47:00.987 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:47:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:01.960 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:9c:cb 10.100.0.2 2001:db8::f816:3eff:fec6:9ccb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec6:9ccb/64', 'neutron:device_id': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a7ef9e15-2145-4a59-b756-368bcbe72d69) old=Port_Binding(mac=['fa:16:3e:c6:9c:cb 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:47:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:01.961 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a7ef9e15-2145-4a59-b756-368bcbe72d69 in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 updated#033[00m
Oct  7 10:47:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:01.962 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 970990f9-7a8a-40de-9a55-f4c40d657453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:47:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:01.963 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[63ca3704-482b-4de8-8b67-07bd32379789]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:01 np0005473739 nova_compute[259550]: 2025-10-07 14:47:01.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:47:01 np0005473739 nova_compute[259550]: 2025-10-07 14:47:01.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:47:02 np0005473739 nova_compute[259550]: 2025-10-07 14:47:02.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2544: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:47:02 np0005473739 nova_compute[259550]: 2025-10-07 14:47:02.744 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:02 np0005473739 nova_compute[259550]: 2025-10-07 14:47:02.745 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:02 np0005473739 nova_compute[259550]: 2025-10-07 14:47:02.764 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:47:02 np0005473739 nova_compute[259550]: 2025-10-07 14:47:02.908 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:02 np0005473739 nova_compute[259550]: 2025-10-07 14:47:02.908 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:02 np0005473739 nova_compute[259550]: 2025-10-07 14:47:02.918 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:47:02 np0005473739 nova_compute[259550]: 2025-10-07 14:47:02.919 2 INFO nova.compute.claims [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.058 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:47:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1882769882' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.529 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.537 2 DEBUG nova.compute.provider_tree [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.554 2 DEBUG nova.scheduler.client.report [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.577 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.578 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.624 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.625 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.643 2 INFO nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.666 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.739 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.741 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.741 2 INFO nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Creating image(s)
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.763 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.786 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.809 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.813 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.869 2 DEBUG nova.policy [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4c50d2bc13fb451fa34788d0157e1827', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2b72d80a22994265ac649277e01837af', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.914 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.915 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.916 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.916 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.939 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:47:03 np0005473739 nova_compute[259550]: 2025-10-07 14:47:03.944 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1262caef-f43e-429a-b613-b4d54273e604_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:47:04 np0005473739 nova_compute[259550]: 2025-10-07 14:47:04.269 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1262caef-f43e-429a-b613-b4d54273e604_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:47:04 np0005473739 nova_compute[259550]: 2025-10-07 14:47:04.338 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] resizing rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:47:04 np0005473739 nova_compute[259550]: 2025-10-07 14:47:04.431 2 DEBUG nova.objects.instance [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'migration_context' on Instance uuid 1262caef-f43e-429a-b613-b4d54273e604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:47:04 np0005473739 nova_compute[259550]: 2025-10-07 14:47:04.450 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:47:04 np0005473739 nova_compute[259550]: 2025-10-07 14:47:04.450 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Ensure instance console log exists: /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:47:04 np0005473739 nova_compute[259550]: 2025-10-07 14:47:04.451 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:47:04 np0005473739 nova_compute[259550]: 2025-10-07 14:47:04.451 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:47:04 np0005473739 nova_compute[259550]: 2025-10-07 14:47:04.451 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:47:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2545: 305 pgs: 305 active+clean; 41 MiB data, 960 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:47:04 np0005473739 nova_compute[259550]: 2025-10-07 14:47:04.633 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Successfully created port: b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:47:04 np0005473739 nova_compute[259550]: 2025-10-07 14:47:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:47:04 np0005473739 nova_compute[259550]: 2025-10-07 14:47:04.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Oct  7 10:47:06 np0005473739 nova_compute[259550]: 2025-10-07 14:47:06.358 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Successfully updated port: b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Oct  7 10:47:06 np0005473739 nova_compute[259550]: 2025-10-07 14:47:06.377 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:47:06 np0005473739 nova_compute[259550]: 2025-10-07 14:47:06.377 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquired lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:47:06 np0005473739 nova_compute[259550]: 2025-10-07 14:47:06.377 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 10:47:06 np0005473739 nova_compute[259550]: 2025-10-07 14:47:06.522 2 DEBUG nova.compute.manager [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:47:06 np0005473739 nova_compute[259550]: 2025-10-07 14:47:06.523 2 DEBUG nova.compute.manager [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing instance network info cache due to event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:47:06 np0005473739 nova_compute[259550]: 2025-10-07 14:47:06.523 2 DEBUG oslo_concurrency.lockutils [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:47:06 np0005473739 nova_compute[259550]: 2025-10-07 14:47:06.572 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 10:47:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2546: 305 pgs: 305 active+clean; 64 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 536 KiB/s wr, 23 op/s
Oct  7 10:47:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:06.919 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c6:9c:cb 10.100.0.2 2001:db8:0:1:f816:3eff:fec6:9ccb 2001:db8::f816:3eff:fec6:9ccb'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8:0:1:f816:3eff:fec6:9ccb/64 2001:db8::f816:3eff:fec6:9ccb/64', 'neutron:device_id': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=a7ef9e15-2145-4a59-b756-368bcbe72d69) old=Port_Binding(mac=['fa:16:3e:c6:9c:cb 10.100.0.2 2001:db8::f816:3eff:fec6:9ccb'], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec6:9ccb/64', 'neutron:device_id': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 10:47:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:06.920 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port a7ef9e15-2145-4a59-b756-368bcbe72d69 in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 updated
Oct  7 10:47:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:06.921 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 970990f9-7a8a-40de-9a55-f4c40d657453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Oct  7 10:47:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:06.922 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[65eb4691-2945-46be-9631-38ccc6729b67]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Oct  7 10:47:06 np0005473739 nova_compute[259550]: 2025-10-07 14:47:06.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:47:07 np0005473739 nova_compute[259550]: 2025-10-07 14:47:07.383 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #123. Immutable memtables: 0.
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.203991) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 73] Flushing memtable with next log file: 123
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428204039, "job": 73, "event": "flush_started", "num_memtables": 1, "num_entries": 1309, "num_deletes": 250, "total_data_size": 2007379, "memory_usage": 2042792, "flush_reason": "Manual Compaction"}
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 73] Level-0 flush table #124: started
Oct  7 10:47:08 np0005473739 nova_compute[259550]: 2025-10-07 14:47:08.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428211431, "cf_name": "default", "job": 73, "event": "table_file_creation", "file_number": 124, "file_size": 1190783, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 52361, "largest_seqno": 53669, "table_properties": {"data_size": 1186086, "index_size": 2093, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 12477, "raw_average_key_size": 20, "raw_value_size": 1175857, "raw_average_value_size": 1956, "num_data_blocks": 95, "num_entries": 601, "num_filter_entries": 601, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848298, "oldest_key_time": 1759848298, "file_creation_time": 1759848428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 124, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 73] Flush lasted 7489 microseconds, and 4233 cpu microseconds.
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.211479) [db/flush_job.cc:967] [default] [JOB 73] Level-0 flush table #124: 1190783 bytes OK
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.211501) [db/memtable_list.cc:519] [default] Level-0 commit table #124 started
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.212904) [db/memtable_list.cc:722] [default] Level-0 commit table #124: memtable #1 done
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.212916) EVENT_LOG_v1 {"time_micros": 1759848428212912, "job": 73, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.212955) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 73] Try to delete WAL files size 2001494, prev total WAL file size 2001494, number of live WAL files 2.
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000120.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.213738) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032303033' seq:72057594037927935, type:22 .. '6D6772737461740032323534' seq:0, type:0; will stop at (end)
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 74] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 73 Base level 0, inputs: [124(1162KB)], [122(10180KB)]
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428213843, "job": 74, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [124], "files_L6": [122], "score": -1, "input_data_size": 11615545, "oldest_snapshot_seqno": -1}
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 74] Generated table #125: 7368 keys, 9090461 bytes, temperature: kUnknown
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428272915, "cf_name": "default", "job": 74, "event": "table_file_creation", "file_number": 125, "file_size": 9090461, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9043515, "index_size": 27442, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18437, "raw_key_size": 191927, "raw_average_key_size": 26, "raw_value_size": 8914079, "raw_average_value_size": 1209, "num_data_blocks": 1072, "num_entries": 7368, "num_filter_entries": 7368, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848428, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 125, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.273233) [db/compaction/compaction_job.cc:1663] [default] [JOB 74] Compacted 1@0 + 1@6 files to L6 => 9090461 bytes
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.274492) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 196.3 rd, 153.6 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.1, 9.9 +0.0 blob) out(8.7 +0.0 blob), read-write-amplify(17.4) write-amplify(7.6) OK, records in: 7821, records dropped: 453 output_compression: NoCompression
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.274507) EVENT_LOG_v1 {"time_micros": 1759848428274499, "job": 74, "event": "compaction_finished", "compaction_time_micros": 59187, "compaction_time_cpu_micros": 26196, "output_level": 6, "num_output_files": 1, "total_output_size": 9090461, "num_input_records": 7821, "num_output_records": 7368, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000124.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428275114, "job": 74, "event": "table_file_deletion", "file_number": 124}
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000122.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848428277080, "job": 74, "event": "table_file_deletion", "file_number": 122}
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.213604) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.277165) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.277171) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.277173) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.277175) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:47:08 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:47:08.277176) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:47:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2547: 305 pgs: 305 active+clean; 64 MiB data, 966 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 536 KiB/s wr, 23 op/s
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.012 2 DEBUG nova.network.neutron [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.030 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Releasing lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.031 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Instance network_info: |[{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.031 2 DEBUG oslo_concurrency.lockutils [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.032 2 DEBUG nova.network.neutron [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.034 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Start _get_guest_xml network_info=[{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.040 2 WARNING nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.045 2 DEBUG nova.virt.libvirt.host [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.045 2 DEBUG nova.virt.libvirt.host [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.053 2 DEBUG nova.virt.libvirt.host [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.054 2 DEBUG nova.virt.libvirt.host [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.054 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.054 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.055 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.055 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.055 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.056 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.056 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.056 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.056 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.057 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.057 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.057 2 DEBUG nova.virt.hardware [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.060 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:47:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1858221718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.509 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.532 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.538 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:47:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2944015782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.996 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:47:09 np0005473739 nova_compute[259550]: 2025-10-07 14:47:09.997 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.014 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.016 2 DEBUG nova.virt.libvirt.vif [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-866328963',display_name='tempest-TestNetworkBasicOps-server-866328963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-866328963',id=137,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPS/KeXOUgsVFvz8/06bZGZYUJsL0P8KN5zfOcHd36qPCONG0eiDz9BDiYtOjbD9G91TMKvoW2fltNYdXkyA98S8eOoqdEV3DHQkPpvlOS52YF4JxVXz9leZAC3qrtG9CA==',key_name='tempest-TestNetworkBasicOps-1170467374',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-uefbav6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:03Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=1262caef-f43e-429a-b613-b4d54273e604,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": 
false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.016 2 DEBUG nova.network.os_vif_util [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.017 2 DEBUG nova.network.os_vif_util [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.018 2 DEBUG nova.objects.instance [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'pci_devices' on Instance uuid 1262caef-f43e-429a-b613-b4d54273e604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.037 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  <uuid>1262caef-f43e-429a-b613-b4d54273e604</uuid>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  <name>instance-00000089</name>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestNetworkBasicOps-server-866328963</nova:name>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:47:09</nova:creationTime>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <nova:user uuid="4c50d2bc13fb451fa34788d0157e1827">tempest-TestNetworkBasicOps-306784636-project-member</nova:user>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <nova:project uuid="2b72d80a22994265ac649277e01837af">tempest-TestNetworkBasicOps-306784636</nova:project>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <nova:port uuid="b06adbc1-01d9-45e3-b4b6-71bc4f85a659">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <entry name="serial">1262caef-f43e-429a-b613-b4d54273e604</entry>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <entry name="uuid">1262caef-f43e-429a-b613-b4d54273e604</entry>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/1262caef-f43e-429a-b613-b4d54273e604_disk">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/1262caef-f43e-429a-b613-b4d54273e604_disk.config">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:97:e6:f4"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <target dev="tapb06adbc1-01"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/console.log" append="off"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:47:10 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:47:10 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:47:10 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:47:10 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.039 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Preparing to wait for external event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.040 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.040 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.040 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.041 2 DEBUG nova.virt.libvirt.vif [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-866328963',display_name='tempest-TestNetworkBasicOps-server-866328963',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-866328963',id=137,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPS/KeXOUgsVFvz8/06bZGZYUJsL0P8KN5zfOcHd36qPCONG0eiDz9BDiYtOjbD9G91TMKvoW2fltNYdXkyA98S8eOoqdEV3DHQkPpvlOS52YF4JxVXz9leZAC3qrtG9CA==',key_name='tempest-TestNetworkBasicOps-1170467374',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-uefbav6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:03Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=1262caef-f43e-429a-b613-b4d54273e604,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": 
{"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.041 2 DEBUG nova.network.os_vif_util [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.042 2 DEBUG nova.network.os_vif_util [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.042 2 DEBUG os_vif [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.043 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.044 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.048 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb06adbc1-01, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.048 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb06adbc1-01, col_values=(('external_ids', {'iface-id': 'b06adbc1-01d9-45e3-b4b6-71bc4f85a659', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:97:e6:f4', 'vm-uuid': '1262caef-f43e-429a-b613-b4d54273e604'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:10 np0005473739 NetworkManager[44949]: <info>  [1759848430.0516] manager: (tapb06adbc1-01): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/603)
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.051 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.055 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.057 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.060 2 INFO os_vif [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01')#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.067 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.125 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.126 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.126 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] No VIF found with MAC fa:16:3e:97:e6:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.127 2 INFO nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Using config drive#033[00m
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.149 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:47:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 2ee71841-74a7-4333-b1b6-d7b9897e040d does not exist
Oct  7 10:47:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 47fed264-4406-4f21-bef0-716647bec255 does not exist
Oct  7 10:47:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev de2399f8-d34e-4e54-8970-c0bb46f87b65 does not exist
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:47:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:47:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2548: 305 pgs: 305 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:47:10 np0005473739 podman[408364]: 2025-10-07 14:47:10.757321194 +0000 UTC m=+0.049640650 container create b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:47:10 np0005473739 systemd[1]: Started libpod-conmon-b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737.scope.
Oct  7 10:47:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:47:10 np0005473739 podman[408364]: 2025-10-07 14:47:10.734725933 +0000 UTC m=+0.027045379 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:47:10 np0005473739 podman[408364]: 2025-10-07 14:47:10.847537241 +0000 UTC m=+0.139856687 container init b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:47:10 np0005473739 podman[408364]: 2025-10-07 14:47:10.855233916 +0000 UTC m=+0.147553342 container start b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507)
Oct  7 10:47:10 np0005473739 podman[408364]: 2025-10-07 14:47:10.858327918 +0000 UTC m=+0.150647344 container attach b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:47:10 np0005473739 gracious_shannon[408380]: 167 167
Oct  7 10:47:10 np0005473739 systemd[1]: libpod-b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737.scope: Deactivated successfully.
Oct  7 10:47:10 np0005473739 conmon[408380]: conmon b941cab654576334b9cc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737.scope/container/memory.events
Oct  7 10:47:10 np0005473739 podman[408364]: 2025-10-07 14:47:10.865659523 +0000 UTC m=+0.157978999 container died b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 10:47:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f2da8b257e23abb35586d91ae6d4818d3e27f73cac246b84277254c481bc80f9-merged.mount: Deactivated successfully.
Oct  7 10:47:10 np0005473739 podman[408364]: 2025-10-07 14:47:10.907114395 +0000 UTC m=+0.199433831 container remove b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_shannon, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:47:10 np0005473739 systemd[1]: libpod-conmon-b941cab654576334b9ccb86e4b89e6bc130a645504e6635aa1ac830168e86737.scope: Deactivated successfully.
Oct  7 10:47:10 np0005473739 nova_compute[259550]: 2025-10-07 14:47:10.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.017 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.018 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.018 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:11 np0005473739 podman[408405]: 2025-10-07 14:47:11.084521881 +0000 UTC m=+0.040199300 container create 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:47:11 np0005473739 systemd[1]: Started libpod-conmon-4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b.scope.
Oct  7 10:47:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:47:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:11 np0005473739 podman[408405]: 2025-10-07 14:47:11.065516945 +0000 UTC m=+0.021194384 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:47:11 np0005473739 podman[408405]: 2025-10-07 14:47:11.181880378 +0000 UTC m=+0.137557827 container init 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:47:11 np0005473739 podman[408405]: 2025-10-07 14:47:11.190032295 +0000 UTC m=+0.145709714 container start 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 10:47:11 np0005473739 podman[408405]: 2025-10-07 14:47:11.195184242 +0000 UTC m=+0.150861671 container attach 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:47:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:47:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2433315290' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.483 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.543 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.543 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000089 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.687 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.688 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3626MB free_disk=59.967525482177734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.688 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.689 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.755 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 1262caef-f43e-429a-b613-b4d54273e604 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.756 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.756 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:47:11 np0005473739 nova_compute[259550]: 2025-10-07 14:47:11.796 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.011 2 INFO nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Creating config drive at /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.016 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvplaiwp1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.065 2 DEBUG nova.network.neutron [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updated VIF entry in instance network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.066 2 DEBUG nova.network.neutron [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.094 2 DEBUG oslo_concurrency.lockutils [req-698c9298-83ce-4050-9aee-63362857e328 req-c1ba5745-02a4-48d4-a14b-f9b817352fc0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.174 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvplaiwp1" returned: 0 in 0.158s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.200 2 DEBUG nova.storage.rbd_utils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] rbd image 1262caef-f43e-429a-b613-b4d54273e604_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.208 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config 1262caef-f43e-429a-b613-b4d54273e604_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:47:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1514102017' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:47:12 np0005473739 epic_sinoussi[408422]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:47:12 np0005473739 epic_sinoussi[408422]: --> relative data size: 1.0
Oct  7 10:47:12 np0005473739 epic_sinoussi[408422]: --> All data devices are unavailable
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.270 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.280 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.297 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:47:12 np0005473739 systemd[1]: libpod-4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b.scope: Deactivated successfully.
Oct  7 10:47:12 np0005473739 podman[408405]: 2025-10-07 14:47:12.306327136 +0000 UTC m=+1.262004545 container died 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 10:47:12 np0005473739 systemd[1]: libpod-4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b.scope: Consumed 1.046s CPU time.
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.329 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.331 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.643s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f74861c48f30fcdc0dc0d2f60428e01267b2bf97ac6b00673905b3760e86a65e-merged.mount: Deactivated successfully.
Oct  7 10:47:12 np0005473739 podman[408405]: 2025-10-07 14:47:12.377269142 +0000 UTC m=+1.332946551 container remove 4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sinoussi, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:47:12 np0005473739 systemd[1]: libpod-conmon-4b177bdb843366e1ee63bce9eefac7a4794f789145d002bd41ac327ed17a598b.scope: Deactivated successfully.
Oct  7 10:47:12 np0005473739 podman[408539]: 2025-10-07 14:47:12.434536755 +0000 UTC m=+0.078946089 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, managed_by=edpm_ansible)
Oct  7 10:47:12 np0005473739 podman[408532]: 2025-10-07 14:47:12.434537225 +0000 UTC m=+0.088560705 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.457 2 DEBUG oslo_concurrency.processutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config 1262caef-f43e-429a-b613-b4d54273e604_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.249s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.458 2 INFO nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Deleting local config drive /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604/disk.config because it was imported into RBD.#033[00m
Oct  7 10:47:12 np0005473739 kernel: tapb06adbc1-01: entered promiscuous mode
Oct  7 10:47:12 np0005473739 NetworkManager[44949]: <info>  [1759848432.5089] manager: (tapb06adbc1-01): new Tun device (/org/freedesktop/NetworkManager/Devices/604)
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.509 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:12Z|01514|binding|INFO|Claiming lport b06adbc1-01d9-45e3-b4b6-71bc4f85a659 for this chassis.
Oct  7 10:47:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:12Z|01515|binding|INFO|b06adbc1-01d9-45e3-b4b6-71bc4f85a659: Claiming fa:16:3e:97:e6:f4 10.100.0.12
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.523 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:e6:f4 10.100.0.12'], port_security=['fa:16:3e:97:e6:f4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1262caef-f43e-429a-b613-b4d54273e604', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3843b3bc-7e2a-472a-b856-d1d145332927', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14a0a809-1526-4a53-a5bb-23565156e65a, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b06adbc1-01d9-45e3-b4b6-71bc4f85a659) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.524 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b06adbc1-01d9-45e3-b4b6-71bc4f85a659 in datapath 08f8ca28-b7fe-4840-94a7-78acb08138e1 bound to our chassis#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.526 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 08f8ca28-b7fe-4840-94a7-78acb08138e1#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.539 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[53ca28e3-5124-42c3-930d-4eb2cd2501b8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.540 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap08f8ca28-b1 in ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.542 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap08f8ca28-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.542 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c7a6d80-86a7-45d2-8622-f29d0e390427]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.543 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a3dacf8-d1e2-4640-a04b-eefc64bbeece]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 systemd-udevd[408652]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.564 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[b2a48dd8-eace-43fc-a802-00b2ae0fa873]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 systemd-machined[214580]: New machine qemu-171-instance-00000089.
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:12 np0005473739 NetworkManager[44949]: <info>  [1759848432.5788] device (tapb06adbc1-01): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:47:12 np0005473739 systemd[1]: Started Virtual Machine qemu-171-instance-00000089.
Oct  7 10:47:12 np0005473739 NetworkManager[44949]: <info>  [1759848432.5795] device (tapb06adbc1-01): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:12Z|01516|binding|INFO|Setting lport b06adbc1-01d9-45e3-b4b6-71bc4f85a659 ovn-installed in OVS
Oct  7 10:47:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:12Z|01517|binding|INFO|Setting lport b06adbc1-01d9-45e3-b4b6-71bc4f85a659 up in Southbound
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[80a3ad5e-39ec-4957-8ddf-a3b269b77bd4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2549: 305 pgs: 305 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.619 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8451f590-172f-4664-a681-1fb5526b1696]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 NetworkManager[44949]: <info>  [1759848432.6257] manager: (tap08f8ca28-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/605)
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.624 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[161fb092-2181-453c-98bd-7ae412d45ccd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 systemd-udevd[408656]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.661 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[34902103-58c3-4194-8f6b-a867d78f0998]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.664 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f3a956ae-cc9e-4958-bc6a-bc48e975530b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 NetworkManager[44949]: <info>  [1759848432.6964] device (tap08f8ca28-b0): carrier: link connected
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.701 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[495c13e2-6dff-41f2-ad77-6af8ff478057]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.720 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7ea56be-7318-4d60-bc5d-799c19f6c75b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f8ca28-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:64:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 901626, 'reachable_time': 40685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 408731, 'error': None, 'target': 'ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.737 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fccdacb6-8c23-4aad-8e72-ce8a0a85500c]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe54:6481'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 901626, 'tstamp': 901626}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 408735, 'error': None, 'target': 'ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.755 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a8efafb1-e7cb-4513-b024-cbe49124567a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap08f8ca28-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:54:64:81'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 436], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 901626, 'reachable_time': 40685, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 408737, 'error': None, 'target': 'ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.786 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[91c83e89-63ef-453b-be6a-c16fe50f87ad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.793 2 DEBUG nova.compute.manager [req-1e4081ee-65a1-46fd-b6a9-33fa3fe08d64 req-92f78a4b-89ed-487e-847d-7f481951cb40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.794 2 DEBUG oslo_concurrency.lockutils [req-1e4081ee-65a1-46fd-b6a9-33fa3fe08d64 req-92f78a4b-89ed-487e-847d-7f481951cb40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.794 2 DEBUG oslo_concurrency.lockutils [req-1e4081ee-65a1-46fd-b6a9-33fa3fe08d64 req-92f78a4b-89ed-487e-847d-7f481951cb40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.794 2 DEBUG oslo_concurrency.lockutils [req-1e4081ee-65a1-46fd-b6a9-33fa3fe08d64 req-92f78a4b-89ed-487e-847d-7f481951cb40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.795 2 DEBUG nova.compute.manager [req-1e4081ee-65a1-46fd-b6a9-33fa3fe08d64 req-92f78a4b-89ed-487e-847d-7f481951cb40 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Processing event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.853 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a4a642-5229-471b-8c35-a2cea628dcea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.855 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f8ca28-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.855 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.856 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap08f8ca28-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.857 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:12 np0005473739 NetworkManager[44949]: <info>  [1759848432.8581] manager: (tap08f8ca28-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/606)
Oct  7 10:47:12 np0005473739 kernel: tap08f8ca28-b0: entered promiscuous mode
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.861 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap08f8ca28-b0, col_values=(('external_ids', {'iface-id': '4030b5ae-6c2e-4b07-9359-70a14ce783de'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:12Z|01518|binding|INFO|Releasing lport 4030b5ae-6c2e-4b07-9359-70a14ce783de from this chassis (sb_readonly=0)
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:12 np0005473739 nova_compute[259550]: 2025-10-07 14:47:12.875 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.877 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/08f8ca28-b7fe-4840-94a7-78acb08138e1.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/08f8ca28-b7fe-4840-94a7-78acb08138e1.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.877 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fddab67-e2a6-4e31-81e3-e87bb31a8c0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.878 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-08f8ca28-b7fe-4840-94a7-78acb08138e1
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/08f8ca28-b7fe-4840-94a7-78acb08138e1.pid.haproxy
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 08f8ca28-b7fe-4840-94a7-78acb08138e1
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:47:12 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:12.880 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'env', 'PROCESS_TAG=haproxy-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/08f8ca28-b7fe-4840-94a7-78acb08138e1.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:47:13 np0005473739 podman[408824]: 2025-10-07 14:47:13.063169484 +0000 UTC m=+0.042692056 container create 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True)
Oct  7 10:47:13 np0005473739 systemd[1]: Started libpod-conmon-9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3.scope.
Oct  7 10:47:13 np0005473739 podman[408824]: 2025-10-07 14:47:13.043584003 +0000 UTC m=+0.023106395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:47:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:47:13 np0005473739 podman[408824]: 2025-10-07 14:47:13.159015801 +0000 UTC m=+0.138538183 container init 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:47:13 np0005473739 podman[408824]: 2025-10-07 14:47:13.165972396 +0000 UTC m=+0.145494758 container start 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:47:13 np0005473739 podman[408824]: 2025-10-07 14:47:13.169219983 +0000 UTC m=+0.148742445 container attach 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:47:13 np0005473739 hungry_mendeleev[408839]: 167 167
Oct  7 10:47:13 np0005473739 systemd[1]: libpod-9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3.scope: Deactivated successfully.
Oct  7 10:47:13 np0005473739 podman[408824]: 2025-10-07 14:47:13.173193138 +0000 UTC m=+0.152715500 container died 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:47:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-64084bf4a950db4c93fe8df2d4014a5bb270528b01481ff4888e9aa43078558e-merged.mount: Deactivated successfully.
Oct  7 10:47:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:13 np0005473739 podman[408824]: 2025-10-07 14:47:13.210774477 +0000 UTC m=+0.190296839 container remove 9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hungry_mendeleev, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:13 np0005473739 systemd[1]: libpod-conmon-9b3e5b9c19f982bd95c4347a54105a460c5c5a87d48af868f56c45b066bee2c3.scope: Deactivated successfully.
Oct  7 10:47:13 np0005473739 podman[408878]: 2025-10-07 14:47:13.341182623 +0000 UTC m=+0.078759424 container create c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:47:13 np0005473739 systemd[1]: Started libpod-conmon-c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078.scope.
Oct  7 10:47:13 np0005473739 podman[408878]: 2025-10-07 14:47:13.315121391 +0000 UTC m=+0.052698212 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:47:13 np0005473739 podman[408895]: 2025-10-07 14:47:13.396005141 +0000 UTC m=+0.053130073 container create 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 10:47:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:47:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd60f4915bfff9b9f5172271d3ec2486162b140a81a778a520bd35334ab4d772/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:13 np0005473739 podman[408878]: 2025-10-07 14:47:13.420799119 +0000 UTC m=+0.158375940 container init c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:47:13 np0005473739 podman[408878]: 2025-10-07 14:47:13.427458037 +0000 UTC m=+0.165034838 container start c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  7 10:47:13 np0005473739 systemd[1]: Started libpod-conmon-8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164.scope.
Oct  7 10:47:13 np0005473739 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [NOTICE]   (408917) : New worker (408923) forked
Oct  7 10:47:13 np0005473739 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [NOTICE]   (408917) : Loading success.
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.456 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.457 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848433.4563055, 1262caef-f43e-429a-b613-b4d54273e604 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.458 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] VM Started (Lifecycle Event)#033[00m
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.464 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:47:13 np0005473739 podman[408895]: 2025-10-07 14:47:13.369435525 +0000 UTC m=+0.026560477 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.468 2 INFO nova.virt.libvirt.driver [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Instance spawned successfully.#033[00m
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.469 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:47:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.479 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:47:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a2dec0c41513bcc2d58bf20dccd3e26b97514a8c52e603fb2dff100fd0542c/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a2dec0c41513bcc2d58bf20dccd3e26b97514a8c52e603fb2dff100fd0542c/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a2dec0c41513bcc2d58bf20dccd3e26b97514a8c52e603fb2dff100fd0542c/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:13 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/22a2dec0c41513bcc2d58bf20dccd3e26b97514a8c52e603fb2dff100fd0542c/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.491 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.496 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.497 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.497 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.497 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.498 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:47:13 np0005473739 podman[408895]: 2025-10-07 14:47:13.498829034 +0000 UTC m=+0.155953986 container init 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.498 2 DEBUG nova.virt.libvirt.driver [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 10:47:13 np0005473739 podman[408895]: 2025-10-07 14:47:13.507076013 +0000 UTC m=+0.164200945 container start 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:47:13 np0005473739 podman[408895]: 2025-10-07 14:47:13.51111618 +0000 UTC m=+0.168241112 container attach 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.535 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.536 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848433.456523, 1262caef-f43e-429a-b613-b4d54273e604 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.536 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] VM Paused (Lifecycle Event)
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.555 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.558 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848433.460603, 1262caef-f43e-429a-b613-b4d54273e604 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.558 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] VM Resumed (Lifecycle Event)
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.589 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.594 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.604 2 INFO nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Took 9.86 seconds to spawn the instance on the hypervisor.
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.604 2 DEBUG nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.638 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.682 2 INFO nova.compute.manager [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Took 10.87 seconds to build instance.
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.703 2 DEBUG oslo_concurrency.lockutils [None req-41564795-c9b1-4c50-8414-a997ff69388a 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.957s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.912 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.913 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:47:13 np0005473739 nova_compute[259550]: 2025-10-07 14:47:13.930 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.007 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.007 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.013 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.014 2 INFO nova.compute.claims [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.144 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]: {
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:    "0": [
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:        {
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "devices": [
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "/dev/loop3"
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            ],
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_name": "ceph_lv0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_size": "21470642176",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "name": "ceph_lv0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "tags": {
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.cluster_name": "ceph",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.crush_device_class": "",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.encrypted": "0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.osd_id": "0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.type": "block",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.vdo": "0"
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            },
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "type": "block",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "vg_name": "ceph_vg0"
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:        }
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:    ],
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:    "1": [
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:        {
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "devices": [
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "/dev/loop4"
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            ],
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_name": "ceph_lv1",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_size": "21470642176",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "name": "ceph_lv1",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "tags": {
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.cluster_name": "ceph",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.crush_device_class": "",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.encrypted": "0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.osd_id": "1",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.type": "block",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.vdo": "0"
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            },
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "type": "block",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "vg_name": "ceph_vg1"
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:        }
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:    ],
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:    "2": [
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:        {
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "devices": [
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "/dev/loop5"
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            ],
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_name": "ceph_lv2",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_size": "21470642176",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "name": "ceph_lv2",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "tags": {
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.cluster_name": "ceph",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.crush_device_class": "",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.encrypted": "0",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.osd_id": "2",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.type": "block",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:                "ceph.vdo": "0"
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            },
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "type": "block",
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:            "vg_name": "ceph_vg2"
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:        }
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]:    ]
Oct  7 10:47:14 np0005473739 dreamy_turing[408921]: }
Oct  7 10:47:14 np0005473739 systemd[1]: libpod-8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164.scope: Deactivated successfully.
Oct  7 10:47:14 np0005473739 conmon[408921]: conmon 8b942641f35873fee21e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164.scope/container/memory.events
Oct  7 10:47:14 np0005473739 podman[408895]: 2025-10-07 14:47:14.372599339 +0000 UTC m=+1.029724271 container died 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:47:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-22a2dec0c41513bcc2d58bf20dccd3e26b97514a8c52e603fb2dff100fd0542c-merged.mount: Deactivated successfully.
Oct  7 10:47:14 np0005473739 podman[408895]: 2025-10-07 14:47:14.454392103 +0000 UTC m=+1.111517035 container remove 8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dreamy_turing, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:47:14 np0005473739 systemd[1]: libpod-conmon-8b942641f35873fee21ec96e31cca3cf9a0a3f4757f502961078e037c2b16164.scope: Deactivated successfully.
Oct  7 10:47:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2550: 305 pgs: 305 active+clean; 88 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 97 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct  7 10:47:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:47:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1803854212' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.688 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.544s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.707 2 DEBUG nova.compute.provider_tree [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.728 2 DEBUG nova.scheduler.client.report [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.761 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.754s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.764 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.822 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.823 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.843 2 INFO nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.866 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.981 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.982 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:47:14 np0005473739 nova_compute[259550]: 2025-10-07 14:47:14.983 2 INFO nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Creating image(s)
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.006 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.029 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.058 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.062 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:47:15 np0005473739 podman[409150]: 2025-10-07 14:47:15.103950008 +0000 UTC m=+0.052668530 container create b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.115 2 DEBUG nova.policy [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.122 2 DEBUG nova.compute.manager [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.123 2 DEBUG oslo_concurrency.lockutils [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.123 2 DEBUG oslo_concurrency.lockutils [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.124 2 DEBUG oslo_concurrency.lockutils [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.124 2 DEBUG nova.compute.manager [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] No waiting events found dispatching network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.124 2 WARNING nova.compute.manager [req-762ff450-13c4-46dd-84ea-e038497feb04 req-dc4e8f5f-cf96-4016-8872-392575a5c89b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received unexpected event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:47:15 np0005473739 systemd[1]: Started libpod-conmon-b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e.scope.
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.162 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.165 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.166 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.166 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:15 np0005473739 podman[409150]: 2025-10-07 14:47:15.081218664 +0000 UTC m=+0.029937206 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:47:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:47:15 np0005473739 podman[409150]: 2025-10-07 14:47:15.192010079 +0000 UTC m=+0.140728611 container init b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.197 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:15 np0005473739 podman[409150]: 2025-10-07 14:47:15.202446387 +0000 UTC m=+0.151164899 container start b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:47:15 np0005473739 podman[409150]: 2025-10-07 14:47:15.207417738 +0000 UTC m=+0.156136250 container attach b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 10:47:15 np0005473739 practical_bhaskara[409185]: 167 167
Oct  7 10:47:15 np0005473739 systemd[1]: libpod-b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e.scope: Deactivated successfully.
Oct  7 10:47:15 np0005473739 podman[409150]: 2025-10-07 14:47:15.20859706 +0000 UTC m=+0.157315572 container died b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.207 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:15 np0005473739 systemd[1]: var-lib-containers-storage-overlay-33397d69dd89790f1aa496d5f348e5389cd27f79ae3720685eb098ad6e9fc0bf-merged.mount: Deactivated successfully.
Oct  7 10:47:15 np0005473739 podman[409150]: 2025-10-07 14:47:15.264003863 +0000 UTC m=+0.212722375 container remove b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_bhaskara, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:47:15 np0005473739 systemd[1]: libpod-conmon-b84e6fb0d051a52ef926682810582478f2c0ba87e9c44db770a6974a820ea82e.scope: Deactivated successfully.
Oct  7 10:47:15 np0005473739 podman[409248]: 2025-10-07 14:47:15.481965276 +0000 UTC m=+0.067629328 container create 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:47:15 np0005473739 systemd[1]: Started libpod-conmon-6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5.scope.
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.541 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.334s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:15 np0005473739 podman[409248]: 2025-10-07 14:47:15.454155588 +0000 UTC m=+0.039819670 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:47:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:47:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde4ee4c313b7df48efca96669f8358719e3279a3e890d950309393015c5ee4d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde4ee4c313b7df48efca96669f8358719e3279a3e890d950309393015c5ee4d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde4ee4c313b7df48efca96669f8358719e3279a3e890d950309393015c5ee4d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bde4ee4c313b7df48efca96669f8358719e3279a3e890d950309393015c5ee4d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:15 np0005473739 podman[409248]: 2025-10-07 14:47:15.596612083 +0000 UTC m=+0.182276125 container init 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:47:15 np0005473739 podman[409248]: 2025-10-07 14:47:15.608613853 +0000 UTC m=+0.194277905 container start 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:47:15 np0005473739 podman[409248]: 2025-10-07 14:47:15.613604955 +0000 UTC m=+0.199269037 container attach 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.635 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.727 2 DEBUG nova.objects.instance [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid c621ddbd-d6b8-461e-9374-4f7e50d0ca5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.752 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.753 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Ensure instance console log exists: /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.754 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.754 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:15 np0005473739 nova_compute[259550]: 2025-10-07 14:47:15.755 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:16 np0005473739 nova_compute[259550]: 2025-10-07 14:47:16.537 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Successfully created port: d90f9db1-8372-46fb-93ed-9be2902fe85c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]: {
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "osd_id": 2,
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "type": "bluestore"
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:    },
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "osd_id": 1,
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "type": "bluestore"
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:    },
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "osd_id": 0,
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:        "type": "bluestore"
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]:    }
Oct  7 10:47:16 np0005473739 peaceful_kalam[409264]: }
Oct  7 10:47:16 np0005473739 systemd[1]: libpod-6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5.scope: Deactivated successfully.
Oct  7 10:47:16 np0005473739 podman[409248]: 2025-10-07 14:47:16.574852616 +0000 UTC m=+1.160516668 container died 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:47:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-bde4ee4c313b7df48efca96669f8358719e3279a3e890d950309393015c5ee4d-merged.mount: Deactivated successfully.
Oct  7 10:47:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2551: 305 pgs: 305 active+clean; 96 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 714 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  7 10:47:16 np0005473739 podman[409248]: 2025-10-07 14:47:16.623139469 +0000 UTC m=+1.208803521 container remove 6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_kalam, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:47:16 np0005473739 systemd[1]: libpod-conmon-6d9de39a09f22b407917b25525529275bdcc3df100dd86e3409d6ab1e38713e5.scope: Deactivated successfully.
Oct  7 10:47:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:47:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:47:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:47:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:47:16 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0653c403-876e-46e3-8d8d-fcb0705f9b3e does not exist
Oct  7 10:47:16 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 63231e2b-34a9-40ba-9504-14a4ac5e6205 does not exist
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.339 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:47:17 np0005473739 NetworkManager[44949]: <info>  [1759848437.4398] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/607)
Oct  7 10:47:17 np0005473739 NetworkManager[44949]: <info>  [1759848437.4408] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/608)
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.439 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.461 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Successfully updated port: d90f9db1-8372-46fb-93ed-9be2902fe85c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.481 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.481 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.481 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:17Z|01519|binding|INFO|Releasing lport 4030b5ae-6c2e-4b07-9359-70a14ce783de from this chassis (sb_readonly=0)
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.621 2 DEBUG nova.compute.manager [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.622 2 DEBUG nova.compute.manager [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing instance network info cache due to event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.623 2 DEBUG oslo_concurrency.lockutils [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:47:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:47:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:47:17 np0005473739 nova_compute[259550]: 2025-10-07 14:47:17.698 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:47:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:18 np0005473739 nova_compute[259550]: 2025-10-07 14:47:18.213 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2552: 305 pgs: 305 active+clean; 96 MiB data, 986 MiB used, 59 GiB / 60 GiB avail; 698 KiB/s rd, 1.6 MiB/s wr, 37 op/s
Oct  7 10:47:19 np0005473739 nova_compute[259550]: 2025-10-07 14:47:19.715 2 DEBUG nova.compute.manager [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:19 np0005473739 nova_compute[259550]: 2025-10-07 14:47:19.716 2 DEBUG nova.compute.manager [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing instance network info cache due to event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:47:19 np0005473739 nova_compute[259550]: 2025-10-07 14:47:19.717 2 DEBUG oslo_concurrency.lockutils [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:47:19 np0005473739 nova_compute[259550]: 2025-10-07 14:47:19.717 2 DEBUG oslo_concurrency.lockutils [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:47:19 np0005473739 nova_compute[259550]: 2025-10-07 14:47:19.717 2 DEBUG nova.network.neutron [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:47:20 np0005473739 nova_compute[259550]: 2025-10-07 14:47:20.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2553: 305 pgs: 305 active+clean; 134 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.0 MiB/s wr, 104 op/s
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2554: 305 pgs: 305 active+clean; 134 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:47:22
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta', 'default.rgw.control', '.rgw.root', 'images', 'volumes', 'backups', 'cephfs.cephfs.data', 'vms']
Oct  7 10:47:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:47:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:47:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:47:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:47:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:47:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:47:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:47:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:47:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:47:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:47:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:47:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.216 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.635 2 DEBUG nova.network.neutron [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.638 2 DEBUG nova.network.neutron [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updated VIF entry in instance network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.638 2 DEBUG nova.network.neutron [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.664 2 DEBUG oslo_concurrency.lockutils [req-98f9d95a-dae2-4828-a41b-6bec2aaf9be1 req-dfe5ad57-4886-44f2-a222-ee6eaaaf40c7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.665 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.665 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Instance network_info: |[{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.666 2 DEBUG oslo_concurrency.lockutils [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.666 2 DEBUG nova.network.neutron [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.669 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Start _get_guest_xml network_info=[{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.674 2 WARNING nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.681 2 DEBUG nova.virt.libvirt.host [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.682 2 DEBUG nova.virt.libvirt.host [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.691 2 DEBUG nova.virt.libvirt.host [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.692 2 DEBUG nova.virt.libvirt.host [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.693 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.693 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.693 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.694 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.694 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.694 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.695 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.695 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.695 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.695 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.696 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.696 2 DEBUG nova.virt.hardware [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:47:23 np0005473739 nova_compute[259550]: 2025-10-07 14:47:23.699 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:47:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1425288132' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.163 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.192 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.198 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2555: 305 pgs: 305 active+clean; 134 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  7 10:47:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:47:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3049854274' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.642 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.645 2 DEBUG nova.virt.libvirt.vif [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1379651617',display_name='tempest-TestGettingAddress-server-1379651617',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1379651617',id=138,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-wdqfp22g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:14Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=c621ddbd-d6b8-461e-9374-4f7e50d0ca5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.646 2 DEBUG nova.network.os_vif_util [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.650 2 DEBUG nova.network.os_vif_util [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.653 2 DEBUG nova.objects.instance [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid c621ddbd-d6b8-461e-9374-4f7e50d0ca5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.671 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  <uuid>c621ddbd-d6b8-461e-9374-4f7e50d0ca5f</uuid>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  <name>instance-0000008a</name>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-1379651617</nova:name>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:47:23</nova:creationTime>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <nova:port uuid="d90f9db1-8372-46fb-93ed-9be2902fe85c">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fe48:3908" ipVersion="6"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe48:3908" ipVersion="6"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <entry name="serial">c621ddbd-d6b8-461e-9374-4f7e50d0ca5f</entry>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <entry name="uuid">c621ddbd-d6b8-461e-9374-4f7e50d0ca5f</entry>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:48:39:08"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <target dev="tapd90f9db1-83"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/console.log" append="off"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:47:24 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:47:24 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:47:24 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:47:24 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.672 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Preparing to wait for external event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.673 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.673 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.673 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.674 2 DEBUG nova.virt.libvirt.vif [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1379651617',display_name='tempest-TestGettingAddress-server-1379651617',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1379651617',id=138,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-wdqfp22g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:14Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=c621ddbd-d6b8-461e-9374-4f7e50d0ca5f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.674 2 DEBUG nova.network.os_vif_util [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.675 2 DEBUG nova.network.os_vif_util [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.676 2 DEBUG os_vif [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.677 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.678 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.683 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd90f9db1-83, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.684 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd90f9db1-83, col_values=(('external_ids', {'iface-id': 'd90f9db1-8372-46fb-93ed-9be2902fe85c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:39:08', 'vm-uuid': 'c621ddbd-d6b8-461e-9374-4f7e50d0ca5f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:24 np0005473739 NetworkManager[44949]: <info>  [1759848444.6866] manager: (tapd90f9db1-83): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/609)
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.697 2 INFO os_vif [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83')#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.753 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.754 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.755 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:48:39:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.755 2 INFO nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Using config drive#033[00m
Oct  7 10:47:24 np0005473739 nova_compute[259550]: 2025-10-07 14:47:24.778 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:25 np0005473739 nova_compute[259550]: 2025-10-07 14:47:25.965 2 INFO nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Creating config drive at /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config#033[00m
Oct  7 10:47:25 np0005473739 nova_compute[259550]: 2025-10-07 14:47:25.973 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc4njtkmh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.118 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpc4njtkmh" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.140 2 DEBUG nova.storage.rbd_utils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.143 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.293 2 DEBUG oslo_concurrency.processutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.294 2 INFO nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Deleting local config drive /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f/disk.config because it was imported into RBD.#033[00m
Oct  7 10:47:26 np0005473739 kernel: tapd90f9db1-83: entered promiscuous mode
Oct  7 10:47:26 np0005473739 NetworkManager[44949]: <info>  [1759848446.3574] manager: (tapd90f9db1-83): new Tun device (/org/freedesktop/NetworkManager/Devices/610)
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:26Z|01520|binding|INFO|Claiming lport d90f9db1-8372-46fb-93ed-9be2902fe85c for this chassis.
Oct  7 10:47:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:26Z|01521|binding|INFO|d90f9db1-8372-46fb-93ed-9be2902fe85c: Claiming fa:16:3e:48:39:08 10.100.0.13 2001:db8:0:1:f816:3eff:fe48:3908 2001:db8::f816:3eff:fe48:3908
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.367 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.373 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:39:08 10.100.0.13 2001:db8:0:1:f816:3eff:fe48:3908 2001:db8::f816:3eff:fe48:3908'], port_security=['fa:16:3e:48:39:08 10.100.0.13 2001:db8:0:1:f816:3eff:fe48:3908 2001:db8::f816:3eff:fe48:3908'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe48:3908/64 2001:db8::f816:3eff:fe48:3908/64', 'neutron:device_id': 'c621ddbd-d6b8-461e-9374-4f7e50d0ca5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ca8f444-a15b-48d2-afd7-d5447a1f3a63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d90f9db1-8372-46fb-93ed-9be2902fe85c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.374 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d90f9db1-8372-46fb-93ed-9be2902fe85c in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 bound to our chassis#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.375 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 970990f9-7a8a-40de-9a55-f4c40d657453#033[00m
Oct  7 10:47:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:26Z|01522|binding|INFO|Setting lport d90f9db1-8372-46fb-93ed-9be2902fe85c ovn-installed in OVS
Oct  7 10:47:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:26Z|01523|binding|INFO|Setting lport d90f9db1-8372-46fb-93ed-9be2902fe85c up in Southbound
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.387 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cbb8fcce-aa09-4a65-b88c-5b66df688831]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.389 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap970990f9-71 in ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.390 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap970990f9-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.391 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[11dfe8fb-9dbe-4d40-931d-ec770a2319fa]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.391 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b1a212ea-25a2-4654-9eb7-af106659a679]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.407 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[910afb28-be11-403e-8a38-50cf6bef722f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.424 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8979d584-b20a-4680-b48d-fb1b5b40406f]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 systemd-machined[214580]: New machine qemu-172-instance-0000008a.
Oct  7 10:47:26 np0005473739 systemd[1]: Started Virtual Machine qemu-172-instance-0000008a.
Oct  7 10:47:26 np0005473739 systemd-udevd[409604]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.466 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f5fe6d5d-83fb-414a-b1a3-41a0c9761fb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 NetworkManager[44949]: <info>  [1759848446.4694] device (tapd90f9db1-83): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:47:26 np0005473739 NetworkManager[44949]: <info>  [1759848446.4702] device (tapd90f9db1-83): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.472 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[08ed92b9-9617-4abc-b333-645587407a9e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 NetworkManager[44949]: <info>  [1759848446.4738] manager: (tap970990f9-70): new Veth device (/org/freedesktop/NetworkManager/Devices/611)
Oct  7 10:47:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:26Z|00180|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:97:e6:f4 10.100.0.12
Oct  7 10:47:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:26Z|00181|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:97:e6:f4 10.100.0.12
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.509 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[55120493-a884-4fa3-9837-717fd97fd038]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.512 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a776734b-1b07-42b5-8fd1-79960adbf80f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 podman[409567]: 2025-10-07 14:47:26.51995919 +0000 UTC m=+0.119185639 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:47:26 np0005473739 NetworkManager[44949]: <info>  [1759848446.5394] device (tap970990f9-70): carrier: link connected
Oct  7 10:47:26 np0005473739 podman[409569]: 2025-10-07 14:47:26.542072718 +0000 UTC m=+0.139623732 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.544 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[baac336d-659c-4c57-9da7-2f99e644689d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.560 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eceb1bd9-60d1-4d53-949d-ac874d3adf95]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap970990f9-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:9c:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903010, 'reachable_time': 44649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409647, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.579 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6db212-07b8-4c82-ac1f-ed28b9897d78]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:9ccb'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903010, 'tstamp': 903010}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 409648, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.596 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2aac0f4-6204-49d4-a7ad-fb05cb801781]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap970990f9-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:9c:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903010, 'reachable_time': 44649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 409649, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2556: 305 pgs: 305 active+clean; 148 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 3.1 MiB/s wr, 117 op/s
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.640 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[de7e7df5-94cb-4a2c-a072-6d0d5fcd5f61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.708 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f1ee7284-dddb-45a3-8854-15267332514b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970990f9-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.710 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap970990f9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:26 np0005473739 NetworkManager[44949]: <info>  [1759848446.7123] manager: (tap970990f9-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/612)
Oct  7 10:47:26 np0005473739 kernel: tap970990f9-70: entered promiscuous mode
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.718 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap970990f9-70, col_values=(('external_ids', {'iface-id': 'a7ef9e15-2145-4a59-b756-368bcbe72d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:26Z|01524|binding|INFO|Releasing lport a7ef9e15-2145-4a59-b756-368bcbe72d69 from this chassis (sb_readonly=0)
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.721 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/970990f9-7a8a-40de-9a55-f4c40d657453.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/970990f9-7a8a-40de-9a55-f4c40d657453.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.722 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd50617-9564-4be6-9274-9e3c01b252d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.723 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-970990f9-7a8a-40de-9a55-f4c40d657453
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/970990f9-7a8a-40de-9a55-f4c40d657453.pid.haproxy
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 970990f9-7a8a-40de-9a55-f4c40d657453
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:47:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:26.724 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'env', 'PROCESS_TAG=haproxy-970990f9-7a8a-40de-9a55-f4c40d657453', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/970990f9-7a8a-40de-9a55-f4c40d657453.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:47:26 np0005473739 nova_compute[259550]: 2025-10-07 14:47:26.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:27 np0005473739 podman[409723]: 2025-10-07 14:47:27.109540941 +0000 UTC m=+0.048049318 container create 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, io.buildah.version=1.41.3)
Oct  7 10:47:27 np0005473739 systemd[1]: Started libpod-conmon-10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c.scope.
Oct  7 10:47:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:47:27 np0005473739 podman[409723]: 2025-10-07 14:47:27.084173887 +0000 UTC m=+0.022682294 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:47:27 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/47a24921b3c8e4f08e504680464e3b58519f09b02a877bb0e4246cd4ccca4536/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:47:27 np0005473739 podman[409723]: 2025-10-07 14:47:27.194431858 +0000 UTC m=+0.132940235 container init 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:47:27 np0005473739 podman[409723]: 2025-10-07 14:47:27.199632406 +0000 UTC m=+0.138140783 container start 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.219 2 DEBUG nova.compute.manager [req-3fe5b1ae-f0fa-472c-be79-59a7e5295a21 req-10fac8b1-c1b0-45c0-8be9-8ac64de2914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.220 2 DEBUG oslo_concurrency.lockutils [req-3fe5b1ae-f0fa-472c-be79-59a7e5295a21 req-10fac8b1-c1b0-45c0-8be9-8ac64de2914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.220 2 DEBUG oslo_concurrency.lockutils [req-3fe5b1ae-f0fa-472c-be79-59a7e5295a21 req-10fac8b1-c1b0-45c0-8be9-8ac64de2914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.220 2 DEBUG oslo_concurrency.lockutils [req-3fe5b1ae-f0fa-472c-be79-59a7e5295a21 req-10fac8b1-c1b0-45c0-8be9-8ac64de2914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.221 2 DEBUG nova.compute.manager [req-3fe5b1ae-f0fa-472c-be79-59a7e5295a21 req-10fac8b1-c1b0-45c0-8be9-8ac64de2914a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Processing event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:47:27 np0005473739 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [NOTICE]   (409742) : New worker (409744) forked
Oct  7 10:47:27 np0005473739 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [NOTICE]   (409742) : Loading success.
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.228 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848447.227463, c621ddbd-d6b8-461e-9374-4f7e50d0ca5f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.228 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] VM Started (Lifecycle Event)#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.231 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.235 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.240 2 INFO nova.virt.libvirt.driver [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Instance spawned successfully.#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.240 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.256 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.264 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.267 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.267 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.268 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.268 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.268 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.269 2 DEBUG nova.virt.libvirt.driver [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.315 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.315 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848447.227604, c621ddbd-d6b8-461e-9374-4f7e50d0ca5f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.316 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.354 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.357 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848447.2345088, c621ddbd-d6b8-461e-9374-4f7e50d0ca5f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.358 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.366 2 INFO nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Took 12.38 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.366 2 DEBUG nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.380 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.383 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.407 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.441 2 INFO nova.compute.manager [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Took 13.45 seconds to build instance.#033[00m
Oct  7 10:47:27 np0005473739 nova_compute[259550]: 2025-10-07 14:47:27.458 2 DEBUG oslo_concurrency.lockutils [None req-04f59643-d128-4fa2-8f80-05626c303b02 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:28 np0005473739 nova_compute[259550]: 2025-10-07 14:47:28.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:28.462 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=47, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=46) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:47:28 np0005473739 nova_compute[259550]: 2025-10-07 14:47:28.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:28.463 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:47:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2557: 305 pgs: 305 active+clean; 164 MiB data, 995 MiB used, 59 GiB / 60 GiB avail; 3.0 MiB/s rd, 3.5 MiB/s wr, 171 op/s
Oct  7 10:47:28 np0005473739 nova_compute[259550]: 2025-10-07 14:47:28.987 2 DEBUG nova.network.neutron [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updated VIF entry in instance network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:47:28 np0005473739 nova_compute[259550]: 2025-10-07 14:47:28.987 2 DEBUG nova.network.neutron [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:47:29 np0005473739 nova_compute[259550]: 2025-10-07 14:47:29.002 2 DEBUG oslo_concurrency.lockutils [req-6238584c-0eb4-4bef-80bd-055396f6a4eb req-b6057d0c-b279-4752-bbd5-c9a976f58b75 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:47:29 np0005473739 nova_compute[259550]: 2025-10-07 14:47:29.308 2 DEBUG nova.compute.manager [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:29 np0005473739 nova_compute[259550]: 2025-10-07 14:47:29.309 2 DEBUG oslo_concurrency.lockutils [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:29 np0005473739 nova_compute[259550]: 2025-10-07 14:47:29.309 2 DEBUG oslo_concurrency.lockutils [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:29 np0005473739 nova_compute[259550]: 2025-10-07 14:47:29.309 2 DEBUG oslo_concurrency.lockutils [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:29 np0005473739 nova_compute[259550]: 2025-10-07 14:47:29.309 2 DEBUG nova.compute.manager [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] No waiting events found dispatching network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:47:29 np0005473739 nova_compute[259550]: 2025-10-07 14:47:29.310 2 WARNING nova.compute.manager [req-1b2aff16-f71f-49ef-ba6d-a9ca2b84ad36 req-c2aff801-b9f3-45ae-bfee-e28c23b0256e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received unexpected event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c for instance with vm_state active and task_state None.#033[00m
Oct  7 10:47:29 np0005473739 nova_compute[259550]: 2025-10-07 14:47:29.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2558: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 3.4 MiB/s rd, 3.6 MiB/s wr, 202 op/s
Oct  7 10:47:32 np0005473739 nova_compute[259550]: 2025-10-07 14:47:32.116 2 DEBUG nova.compute.manager [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:32 np0005473739 nova_compute[259550]: 2025-10-07 14:47:32.116 2 DEBUG nova.compute.manager [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing instance network info cache due to event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:47:32 np0005473739 nova_compute[259550]: 2025-10-07 14:47:32.117 2 DEBUG oslo_concurrency.lockutils [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:47:32 np0005473739 nova_compute[259550]: 2025-10-07 14:47:32.117 2 DEBUG oslo_concurrency.lockutils [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:47:32 np0005473739 nova_compute[259550]: 2025-10-07 14:47:32.117 2 DEBUG nova.network.neutron [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2559: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 135 op/s
Oct  7 10:47:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:47:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3540941090' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:47:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:47:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3540941090' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0011050157297974865 of space, bias 1.0, pg target 0.33150471893924593 quantized to 32 (current 32)
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:47:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:47:32 np0005473739 nova_compute[259550]: 2025-10-07 14:47:32.792 2 INFO nova.compute.manager [None req-047d5187-286d-4fa9-8709-ae5bd87b32af 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Get console output#033[00m
Oct  7 10:47:32 np0005473739 nova_compute[259550]: 2025-10-07 14:47:32.797 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:47:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:33 np0005473739 nova_compute[259550]: 2025-10-07 14:47:33.218 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:33Z|01525|binding|INFO|Releasing lport a7ef9e15-2145-4a59-b756-368bcbe72d69 from this chassis (sb_readonly=0)
Oct  7 10:47:33 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:33Z|01526|binding|INFO|Releasing lport 4030b5ae-6c2e-4b07-9359-70a14ce783de from this chassis (sb_readonly=0)
Oct  7 10:47:33 np0005473739 nova_compute[259550]: 2025-10-07 14:47:33.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:33 np0005473739 nova_compute[259550]: 2025-10-07 14:47:33.968 2 DEBUG nova.network.neutron [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updated VIF entry in instance network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:47:33 np0005473739 nova_compute[259550]: 2025-10-07 14:47:33.969 2 DEBUG nova.network.neutron [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:47:34 np0005473739 nova_compute[259550]: 2025-10-07 14:47:34.062 2 DEBUG oslo_concurrency.lockutils [req-1d7462e7-02e5-46b8-8e79-1a0c4ca6799f req-b8eb4111-c6b1-443d-8f53-6dbbcb36cdc8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:47:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2560: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.1 MiB/s wr, 137 op/s
Oct  7 10:47:34 np0005473739 nova_compute[259550]: 2025-10-07 14:47:34.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:35 np0005473739 nova_compute[259550]: 2025-10-07 14:47:35.091 2 INFO nova.compute.manager [None req-b81f85ef-9592-47f1-b7c3-a9cdcd64e684 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Get console output#033[00m
Oct  7 10:47:35 np0005473739 nova_compute[259550]: 2025-10-07 14:47:35.096 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:47:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2561: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.2 MiB/s rd, 2.2 MiB/s wr, 137 op/s
Oct  7 10:47:37 np0005473739 nova_compute[259550]: 2025-10-07 14:47:37.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:38 np0005473739 nova_compute[259550]: 2025-10-07 14:47:38.221 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:38 np0005473739 nova_compute[259550]: 2025-10-07 14:47:38.449 2 INFO nova.compute.manager [None req-6c6ead25-02dc-4e56-9390-f4466daa31a4 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Get console output#033[00m
Oct  7 10:47:38 np0005473739 nova_compute[259550]: 2025-10-07 14:47:38.454 29474 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Oct  7 10:47:38 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:38.465 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '47'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2562: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 875 KiB/s wr, 118 op/s
Oct  7 10:47:39 np0005473739 nova_compute[259550]: 2025-10-07 14:47:39.730 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:40 np0005473739 nova_compute[259550]: 2025-10-07 14:47:40.513 2 DEBUG nova.compute.manager [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:40 np0005473739 nova_compute[259550]: 2025-10-07 14:47:40.514 2 DEBUG nova.compute.manager [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing instance network info cache due to event network-changed-b06adbc1-01d9-45e3-b4b6-71bc4f85a659. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:47:40 np0005473739 nova_compute[259550]: 2025-10-07 14:47:40.514 2 DEBUG oslo_concurrency.lockutils [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:47:40 np0005473739 nova_compute[259550]: 2025-10-07 14:47:40.514 2 DEBUG oslo_concurrency.lockutils [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:47:40 np0005473739 nova_compute[259550]: 2025-10-07 14:47:40.515 2 DEBUG nova.network.neutron [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Refreshing network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:47:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2563: 305 pgs: 305 active+clean; 175 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 577 KiB/s rd, 449 KiB/s wr, 51 op/s
Oct  7 10:47:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:40Z|00182|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:39:08 10.100.0.13
Oct  7 10:47:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:40Z|00183|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:39:08 10.100.0.13
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.210 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.210 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.211 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.211 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.211 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.212 2 INFO nova.compute.manager [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Terminating instance#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.213 2 DEBUG nova.compute.manager [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:47:41 np0005473739 kernel: tapb06adbc1-01 (unregistering): left promiscuous mode
Oct  7 10:47:41 np0005473739 NetworkManager[44949]: <info>  [1759848461.2807] device (tapb06adbc1-01): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:41Z|01527|binding|INFO|Releasing lport b06adbc1-01d9-45e3-b4b6-71bc4f85a659 from this chassis (sb_readonly=0)
Oct  7 10:47:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:41Z|01528|binding|INFO|Setting lport b06adbc1-01d9-45e3-b4b6-71bc4f85a659 down in Southbound
Oct  7 10:47:41 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:41Z|01529|binding|INFO|Removing iface tapb06adbc1-01 ovn-installed in OVS
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.336 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:41 np0005473739 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Deactivated successfully.
Oct  7 10:47:41 np0005473739 systemd[1]: machine-qemu\x2d171\x2dinstance\x2d00000089.scope: Consumed 13.392s CPU time.
Oct  7 10:47:41 np0005473739 systemd-machined[214580]: Machine qemu-171-instance-00000089 terminated.
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.459 2 INFO nova.virt.libvirt.driver [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Instance destroyed successfully.#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.460 2 DEBUG nova.objects.instance [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lazy-loading 'resources' on Instance uuid 1262caef-f43e-429a-b613-b4d54273e604 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.565 2 DEBUG nova.virt.libvirt.vif [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:47:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-866328963',display_name='tempest-TestNetworkBasicOps-server-866328963',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-866328963',id=137,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBPS/KeXOUgsVFvz8/06bZGZYUJsL0P8KN5zfOcHd36qPCONG0eiDz9BDiYtOjbD9G91TMKvoW2fltNYdXkyA98S8eOoqdEV3DHQkPpvlOS52YF4JxVXz9leZAC3qrtG9CA==',key_name='tempest-TestNetworkBasicOps-1170467374',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:47:13Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2b72d80a22994265ac649277e01837af',ramdisk_id='',reservation_id='r-uefbav6l',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-306784636',owner_user_name='tempest-TestNetworkBasicOps-306784636-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:47:13Z,user_data=None,user_id='4c50d2bc13fb451fa34788d0157e1827',uuid=1262caef-f43e-429a-b613-b4d54273e604,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.566 2 DEBUG nova.network.os_vif_util [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converting VIF {"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.194", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.567 2 DEBUG nova.network.os_vif_util [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.568 2 DEBUG os_vif [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.572 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb06adbc1-01, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:47:41 np0005473739 nova_compute[259550]: 2025-10-07 14:47:41.579 2 INFO os_vif [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:97:e6:f4,bridge_name='br-int',has_traffic_filtering=True,id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659,network=Network(08f8ca28-b7fe-4840-94a7-78acb08138e1),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb06adbc1-01')#033[00m
Oct  7 10:47:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.668 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:97:e6:f4 10.100.0.12'], port_security=['fa:16:3e:97:e6:f4 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '1262caef-f43e-429a-b613-b4d54273e604', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2b72d80a22994265ac649277e01837af', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3843b3bc-7e2a-472a-b856-d1d145332927', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=14a0a809-1526-4a53-a5bb-23565156e65a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b06adbc1-01d9-45e3-b4b6-71bc4f85a659) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:47:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.670 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b06adbc1-01d9-45e3-b4b6-71bc4f85a659 in datapath 08f8ca28-b7fe-4840-94a7-78acb08138e1 unbound from our chassis#033[00m
Oct  7 10:47:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.672 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 08f8ca28-b7fe-4840-94a7-78acb08138e1, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:47:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.673 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6f0eebba-5525-428e-a122-cfa351f6f28c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.673 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1 namespace which is not needed anymore#033[00m
Oct  7 10:47:41 np0005473739 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [NOTICE]   (408917) : haproxy version is 2.8.14-c23fe91
Oct  7 10:47:41 np0005473739 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [NOTICE]   (408917) : path to executable is /usr/sbin/haproxy
Oct  7 10:47:41 np0005473739 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [WARNING]  (408917) : Exiting Master process...
Oct  7 10:47:41 np0005473739 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [WARNING]  (408917) : Exiting Master process...
Oct  7 10:47:41 np0005473739 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [ALERT]    (408917) : Current worker (408923) exited with code 143 (Terminated)
Oct  7 10:47:41 np0005473739 neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1[408912]: [WARNING]  (408917) : All workers exited. Exiting... (0)
Oct  7 10:47:41 np0005473739 systemd[1]: libpod-c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078.scope: Deactivated successfully.
Oct  7 10:47:41 np0005473739 podman[409807]: 2025-10-07 14:47:41.847369798 +0000 UTC m=+0.051861279 container died c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:47:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078-userdata-shm.mount: Deactivated successfully.
Oct  7 10:47:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cd60f4915bfff9b9f5172271d3ec2486162b140a81a778a520bd35334ab4d772-merged.mount: Deactivated successfully.
Oct  7 10:47:41 np0005473739 podman[409807]: 2025-10-07 14:47:41.916252949 +0000 UTC m=+0.120744420 container cleanup c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:47:41 np0005473739 systemd[1]: libpod-conmon-c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078.scope: Deactivated successfully.
Oct  7 10:47:41 np0005473739 podman[409838]: 2025-10-07 14:47:41.98547532 +0000 UTC m=+0.049572280 container remove c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:47:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.994 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d090e373-5f21-43a5-ad11-167881107ac9]: (4, ('Tue Oct  7 02:47:41 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1 (c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078)\nc37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078\nTue Oct  7 02:47:41 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1 (c37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078)\nc37d8bccf6d4916c2d861b9cf97708b698306983e000ea3a4386f778201c3078\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.998 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ef4d30d7-bdca-4e2d-b3bd-249323e04c20]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:41.999 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap08f8ca28-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.001 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:42 np0005473739 kernel: tap08f8ca28-b0: left promiscuous mode
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.016 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.021 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[741dbda7-d783-4306-861d-cb3a2f412945]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.057 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[36e07ae3-039a-41a4-8bbe-e2851b5c52bd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.059 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bc50ccab-536f-4210-b504-7e165cc761ba]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.076 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba542a92-ba6c-47ff-b460-ce39b226a6d6]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 901617, 'reachable_time': 31542, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 409854, 'error': None, 'target': 'ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:42 np0005473739 systemd[1]: run-netns-ovnmeta\x2d08f8ca28\x2db7fe\x2d4840\x2d94a7\x2d78acb08138e1.mount: Deactivated successfully.
Oct  7 10:47:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.082 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-08f8ca28-b7fe-4840-94a7-78acb08138e1 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:47:42 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:47:42.082 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[62ecc3b5-84d7-4e80-919d-8d7b5b61913e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.154 2 INFO nova.virt.libvirt.driver [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Deleting instance files /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604_del#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.155 2 INFO nova.virt.libvirt.driver [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Deletion of /var/lib/nova/instances/1262caef-f43e-429a-b613-b4d54273e604_del complete#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.336 2 INFO nova.compute.manager [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.337 2 DEBUG oslo.service.loopingcall [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.337 2 DEBUG nova.compute.manager [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.337 2 DEBUG nova.network.neutron [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:47:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2564: 305 pgs: 305 active+clean; 175 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 121 KiB/s rd, 386 KiB/s wr, 20 op/s
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.627 2 DEBUG nova.compute.manager [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-unplugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.627 2 DEBUG oslo_concurrency.lockutils [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.628 2 DEBUG oslo_concurrency.lockutils [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.628 2 DEBUG oslo_concurrency.lockutils [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.628 2 DEBUG nova.compute.manager [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] No waiting events found dispatching network-vif-unplugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:47:42 np0005473739 nova_compute[259550]: 2025-10-07 14:47:42.628 2 DEBUG nova.compute.manager [req-db66c004-f098-4539-820f-6a25faaceb67 req-4f09df87-d402-4018-9fb1-e189607aadd6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-unplugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:47:43 np0005473739 podman[409855]: 2025-10-07 14:47:43.095796092 +0000 UTC m=+0.060026107 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:47:43 np0005473739 podman[409856]: 2025-10-07 14:47:43.095814753 +0000 UTC m=+0.058904948 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:47:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:43 np0005473739 nova_compute[259550]: 2025-10-07 14:47:43.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.157 2 DEBUG nova.network.neutron [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updated VIF entry in instance network info cache for port b06adbc1-01d9-45e3-b4b6-71bc4f85a659. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.157 2 DEBUG nova.network.neutron [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [{"id": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "address": "fa:16:3e:97:e6:f4", "network": {"id": "08f8ca28-b7fe-4840-94a7-78acb08138e1", "bridge": "br-int", "label": "tempest-network-smoke--35940909", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2b72d80a22994265ac649277e01837af", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb06adbc1-01", "ovs_interfaceid": "b06adbc1-01d9-45e3-b4b6-71bc4f85a659", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.306 2 DEBUG nova.network.neutron [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.358 2 DEBUG nova.compute.manager [req-abbf02cf-c52b-473d-a428-66f9ab6d1c83 req-1866ab39-141e-4865-adb3-a0e4e9f99daa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-deleted-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.358 2 INFO nova.compute.manager [req-abbf02cf-c52b-473d-a428-66f9ab6d1c83 req-1866ab39-141e-4865-adb3-a0e4e9f99daa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Neutron deleted interface b06adbc1-01d9-45e3-b4b6-71bc4f85a659; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.358 2 DEBUG nova.network.neutron [req-abbf02cf-c52b-473d-a428-66f9ab6d1c83 req-1866ab39-141e-4865-adb3-a0e4e9f99daa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.476 2 DEBUG oslo_concurrency.lockutils [req-48f34889-77c3-42bd-af6c-b61d5c1b9521 req-35ccd3aa-07fc-44b1-a785-58eb3f47c713 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1262caef-f43e-429a-b613-b4d54273e604" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.520 2 INFO nova.compute.manager [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Took 2.18 seconds to deallocate network for instance.#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.526 2 DEBUG nova.compute.manager [req-abbf02cf-c52b-473d-a428-66f9ab6d1c83 req-1866ab39-141e-4865-adb3-a0e4e9f99daa 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Detach interface failed, port_id=b06adbc1-01d9-45e3-b4b6-71bc4f85a659, reason: Instance 1262caef-f43e-429a-b613-b4d54273e604 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:47:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2565: 305 pgs: 305 active+clean; 159 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 2.1 MiB/s wr, 71 op/s
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.641 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.641 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.825 2 DEBUG oslo_concurrency.processutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.866 2 DEBUG nova.compute.manager [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.866 2 DEBUG oslo_concurrency.lockutils [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1262caef-f43e-429a-b613-b4d54273e604-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.867 2 DEBUG oslo_concurrency.lockutils [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.867 2 DEBUG oslo_concurrency.lockutils [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.867 2 DEBUG nova.compute.manager [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] No waiting events found dispatching network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:47:44 np0005473739 nova_compute[259550]: 2025-10-07 14:47:44.867 2 WARNING nova.compute.manager [req-dd8c6a35-15af-48cb-a38e-6f529176ce1b req-7361e5c6-4a64-4614-b0b1-89272055e4ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Received unexpected event network-vif-plugged-b06adbc1-01d9-45e3-b4b6-71bc4f85a659 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:47:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:47:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2257398268' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:47:45 np0005473739 nova_compute[259550]: 2025-10-07 14:47:45.429 2 DEBUG oslo_concurrency.processutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.604s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:45 np0005473739 nova_compute[259550]: 2025-10-07 14:47:45.434 2 DEBUG nova.compute.provider_tree [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:47:45 np0005473739 nova_compute[259550]: 2025-10-07 14:47:45.517 2 DEBUG nova.scheduler.client.report [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:47:45 np0005473739 nova_compute[259550]: 2025-10-07 14:47:45.650 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 1.009s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:45 np0005473739 nova_compute[259550]: 2025-10-07 14:47:45.900 2 INFO nova.scheduler.client.report [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Deleted allocations for instance 1262caef-f43e-429a-b613-b4d54273e604#033[00m
Oct  7 10:47:46 np0005473739 nova_compute[259550]: 2025-10-07 14:47:46.082 2 DEBUG oslo_concurrency.lockutils [None req-aa2abc4c-9419-4b97-b3f0-7bf965470e0c 4c50d2bc13fb451fa34788d0157e1827 2b72d80a22994265ac649277e01837af - - default default] Lock "1262caef-f43e-429a-b613-b4d54273e604" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.872s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:46 np0005473739 nova_compute[259550]: 2025-10-07 14:47:46.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2566: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct  7 10:47:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:48 np0005473739 nova_compute[259550]: 2025-10-07 14:47:48.225 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2567: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 265 KiB/s rd, 2.1 MiB/s wr, 89 op/s
Oct  7 10:47:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2568: 305 pgs: 305 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 242 KiB/s rd, 2.1 MiB/s wr, 83 op/s
Oct  7 10:47:51 np0005473739 nova_compute[259550]: 2025-10-07 14:47:51.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:52 np0005473739 nova_compute[259550]: 2025-10-07 14:47:52.317 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:52 np0005473739 nova_compute[259550]: 2025-10-07 14:47:52.317 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:52 np0005473739 nova_compute[259550]: 2025-10-07 14:47:52.338 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:47:52 np0005473739 nova_compute[259550]: 2025-10-07 14:47:52.431 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:52 np0005473739 nova_compute[259550]: 2025-10-07 14:47:52.432 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:52 np0005473739 nova_compute[259550]: 2025-10-07 14:47:52.437 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:47:52 np0005473739 nova_compute[259550]: 2025-10-07 14:47:52.437 2 INFO nova.compute.claims [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:47:52 np0005473739 ovn_controller[151684]: 2025-10-07T14:47:52Z|01530|binding|INFO|Releasing lport a7ef9e15-2145-4a59-b756-368bcbe72d69 from this chassis (sb_readonly=0)
Oct  7 10:47:52 np0005473739 nova_compute[259550]: 2025-10-07 14:47:52.562 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:52 np0005473739 nova_compute[259550]: 2025-10-07 14:47:52.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2569: 305 pgs: 305 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 1.8 MiB/s wr, 71 op/s
Oct  7 10:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:47:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:47:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:47:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/335214207' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.013 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.018 2 DEBUG nova.compute.provider_tree [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.036 2 DEBUG nova.scheduler.client.report [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.061 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.062 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.113 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.113 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.134 2 INFO nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.151 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:47:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.240 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.241 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.241 2 INFO nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Creating image(s)#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.260 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.284 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.308 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.314 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.395 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.396 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.397 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.397 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.423 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.427 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:53 np0005473739 nova_compute[259550]: 2025-10-07 14:47:53.496 2 DEBUG nova.policy [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:47:54 np0005473739 nova_compute[259550]: 2025-10-07 14:47:54.276 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.849s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:54 np0005473739 nova_compute[259550]: 2025-10-07 14:47:54.336 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:47:54 np0005473739 nova_compute[259550]: 2025-10-07 14:47:54.565 2 DEBUG nova.objects.instance [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:47:54 np0005473739 nova_compute[259550]: 2025-10-07 14:47:54.586 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:47:54 np0005473739 nova_compute[259550]: 2025-10-07 14:47:54.586 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Ensure instance console log exists: /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:47:54 np0005473739 nova_compute[259550]: 2025-10-07 14:47:54.587 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:54 np0005473739 nova_compute[259550]: 2025-10-07 14:47:54.587 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:54 np0005473739 nova_compute[259550]: 2025-10-07 14:47:54.587 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2570: 305 pgs: 305 active+clean; 121 MiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 194 KiB/s rd, 1.8 MiB/s wr, 72 op/s
Oct  7 10:47:54 np0005473739 nova_compute[259550]: 2025-10-07 14:47:54.733 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Successfully created port: 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.038 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Successfully updated port: 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.106 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.107 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.107 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.210 2 DEBUG nova.compute.manager [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.210 2 DEBUG nova.compute.manager [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing instance network info cache due to event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.211 2 DEBUG oslo_concurrency.lockutils [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.420 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.455 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848461.454498, 1262caef-f43e-429a-b613-b4d54273e604 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.456 2 INFO nova.compute.manager [-] [instance: 1262caef-f43e-429a-b613-b4d54273e604] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.482 2 DEBUG nova.compute.manager [None req-87e5fb10-4843-4e58-90b6-1c7f82ef0ac2 - - - - - -] [instance: 1262caef-f43e-429a-b613-b4d54273e604] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:47:56 np0005473739 nova_compute[259550]: 2025-10-07 14:47:56.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2571: 305 pgs: 305 active+clean; 137 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 25 KiB/s rd, 968 KiB/s wr, 22 op/s
Oct  7 10:47:57 np0005473739 podman[410103]: 2025-10-07 14:47:57.065380699 +0000 UTC m=+0.050486443 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  7 10:47:57 np0005473739 podman[410104]: 2025-10-07 14:47:57.13392011 +0000 UTC m=+0.112080750 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:47:57 np0005473739 nova_compute[259550]: 2025-10-07 14:47:57.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:47:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.388 2 DEBUG nova.network.neutron [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.409 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.410 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Instance network_info: |[{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.411 2 DEBUG oslo_concurrency.lockutils [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.412 2 DEBUG nova.network.neutron [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.420 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Start _get_guest_xml network_info=[{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.428 2 WARNING nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.436 2 DEBUG nova.virt.libvirt.host [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.437 2 DEBUG nova.virt.libvirt.host [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.444 2 DEBUG nova.virt.libvirt.host [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.444 2 DEBUG nova.virt.libvirt.host [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
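The two probes above (v1 miss, v2 hit) come down to checking whether a `cpu` controller is exposed; on a cgroup v2 host that means the `/sys/fs/cgroup/cgroup.controllers` listing. A hedged sketch of that check, with the parsing factored out so it can be exercised without the real file (the helper names are illustrative, not nova's):

```python
from pathlib import Path

def has_cpu_controller(controllers_text: str) -> bool:
    """True if 'cpu' appears as a word in a cgroup.controllers listing.

    split() keeps 'cpuset' from matching as a false positive.
    """
    return "cpu" in controllers_text.split()

def host_has_cpu_controller(root: str = "/sys/fs/cgroup") -> bool:
    """Read the cgroup v2 controllers file; False if it is absent."""
    try:
        return has_cpu_controller((Path(root) / "cgroup.controllers").read_text())
    except OSError:
        return False  # not a unified-hierarchy (cgroup v2) host

print(has_cpu_controller("cpuset cpu io memory pids"))
```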
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.445 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.445 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.446 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.446 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.447 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.447 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.447 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.448 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.448 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.448 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.448 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.449 2 DEBUG nova.virt.hardware [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.451 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2572: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:47:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:47:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2471712782' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.922 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
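Nova fetches the monitor map here by shelling out to `ceph mon dump --format=json` via oslo_concurrency.processutils (0.470 s round trip in this run). A hedged stdlib-only sketch of the same call and its JSON parsing — the function names are illustrative, and the subprocess path assumes a reachable cluster with the ceph CLI on PATH:

```python
import json
import subprocess

def mon_names(dump_json: str):
    """Extract monitor names from a `ceph mon dump --format=json` payload."""
    return [m["name"] for m in json.loads(dump_json).get("mons", [])]

def ceph_mon_names(conf="/etc/ceph/ceph.conf", client="openstack"):
    """Run the same command nova logs above and parse its output."""
    out = subprocess.run(
        ["ceph", "mon", "dump", "--format=json", "--id", client, "--conf", conf],
        check=True, capture_output=True, text=True,
    ).stdout
    return mon_names(out)

# Parsing demo against a minimal payload (no cluster needed):
print(mon_names('{"mons": [{"name": "compute-0", "rank": 0}]}'))
```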
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.959 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:47:58 np0005473739 nova_compute[259550]: 2025-10-07 14:47:58.966 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.019 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:47:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:47:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1872657069' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.431 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.433 2 DEBUG nova.virt.libvirt.vif [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-179829393',display_name='tempest-TestGettingAddress-server-179829393',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-179829393',id=139,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-17ii6zr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:53Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=96b2365c-ce47-4fa4-bff7-51dd2e4ac413,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, 
"ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.433 2 DEBUG nova.network.os_vif_util [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.435 2 DEBUG nova.network.os_vif_util [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.436 2 DEBUG nova.objects.instance [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.454 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  <uuid>96b2365c-ce47-4fa4-bff7-51dd2e4ac413</uuid>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  <name>instance-0000008b</name>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-179829393</nova:name>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:47:58</nova:creationTime>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <nova:port uuid="07e00b31-d9ec-46d0-927f-1d89f6d03bc6">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8:0:1:f816:3eff:fee6:143f" ipVersion="6"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fee6:143f" ipVersion="6"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <entry name="serial">96b2365c-ce47-4fa4-bff7-51dd2e4ac413</entry>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <entry name="uuid">96b2365c-ce47-4fa4-bff7-51dd2e4ac413</entry>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:e6:14:3f"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <target dev="tap07e00b31-d9"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/console.log" append="off"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:47:59 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:47:59 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:47:59 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:47:59 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.454 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Preparing to wait for external event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.455 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.455 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.455 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.456 2 DEBUG nova.virt.libvirt.vif [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:47:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-179829393',display_name='tempest-TestGettingAddress-server-179829393',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-179829393',id=139,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-17ii6zr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:47:53Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=96b2365c-ce47-4fa4-bff7-51dd2e4ac413,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": 
{"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.456 2 DEBUG nova.network.os_vif_util [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif 
/usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.457 2 DEBUG nova.network.os_vif_util [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.457 2 DEBUG os_vif [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.458 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.458 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.462 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap07e00b31-d9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.462 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap07e00b31-d9, col_values=(('external_ids', {'iface-id': '07e00b31-d9ec-46d0-927f-1d89f6d03bc6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e6:14:3f', 'vm-uuid': '96b2365c-ce47-4fa4-bff7-51dd2e4ac413'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:47:59 np0005473739 NetworkManager[44949]: <info>  [1759848479.4658] manager: (tap07e00b31-d9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/613)
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.473 2 INFO os_vif [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9')#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.540 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.540 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.540 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:e6:14:3f, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.541 2 INFO nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Using config drive#033[00m
Oct  7 10:47:59 np0005473739 nova_compute[259550]: 2025-10-07 14:47:59.566 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.081 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.081 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.082 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.121 2 INFO nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Creating config drive at /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.127 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_sb3pk2c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.272 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp_sb3pk2c" returned: 0 in 0.145s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.302 2 DEBUG nova.storage.rbd_utils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.306 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.496 2 DEBUG oslo_concurrency.processutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config 96b2365c-ce47-4fa4-bff7-51dd2e4ac413_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.189s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.496 2 INFO nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Deleting local config drive /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413/disk.config because it was imported into RBD.#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.538 2 DEBUG nova.network.neutron [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updated VIF entry in instance network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.539 2 DEBUG nova.network.neutron [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", 
"profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:48:00 np0005473739 kernel: tap07e00b31-d9: entered promiscuous mode
Oct  7 10:48:00 np0005473739 NetworkManager[44949]: <info>  [1759848480.5633] manager: (tap07e00b31-d9): new Tun device (/org/freedesktop/NetworkManager/Devices/614)
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.563 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:00Z|01531|binding|INFO|Claiming lport 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 for this chassis.
Oct  7 10:48:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:00Z|01532|binding|INFO|07e00b31-d9ec-46d0-927f-1d89f6d03bc6: Claiming fa:16:3e:e6:14:3f 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:143f 2001:db8::f816:3eff:fee6:143f
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.566 2 DEBUG oslo_concurrency.lockutils [req-9e9fb854-5f1f-4521-b51c-1bc7939b580d req-abc02c04-9bf1-45f9-8683-63613976af83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.571 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:14:3f 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:143f 2001:db8::f816:3eff:fee6:143f'], port_security=['fa:16:3e:e6:14:3f 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:143f 2001:db8::f816:3eff:fee6:143f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fee6:143f/64 2001:db8::f816:3eff:fee6:143f/64', 'neutron:device_id': '96b2365c-ce47-4fa4-bff7-51dd2e4ac413', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ca8f444-a15b-48d2-afd7-d5447a1f3a63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=07e00b31-d9ec-46d0-927f-1d89f6d03bc6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.572 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 bound to our chassis#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.573 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 970990f9-7a8a-40de-9a55-f4c40d657453#033[00m
Oct  7 10:48:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:00Z|01533|binding|INFO|Setting lport 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 ovn-installed in OVS
Oct  7 10:48:00 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:00Z|01534|binding|INFO|Setting lport 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 up in Southbound
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:00 np0005473739 systemd-udevd[410285]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7b5f8e5-28b0-4e7f-a606-918f7ebebdc1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:00 np0005473739 systemd-machined[214580]: New machine qemu-173-instance-0000008b.
Oct  7 10:48:00 np0005473739 NetworkManager[44949]: <info>  [1759848480.6087] device (tap07e00b31-d9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:48:00 np0005473739 NetworkManager[44949]: <info>  [1759848480.6105] device (tap07e00b31-d9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:48:00 np0005473739 systemd[1]: Started Virtual Machine qemu-173-instance-0000008b.
Oct  7 10:48:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2573: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.630 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c62abfef-fee1-4ad4-a0a8-a4fa67f3d940]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.633 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4a62b8-95b9-428d-9084-9da85d5f9c6e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.674 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[589f0329-93f3-4090-8fed-3b1fd5e73806]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.694 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5f7be3bd-d3e5-4b4c-ad25-55830b8b689f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap970990f9-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:9c:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 26, 'tx_packets': 5, 'rx_bytes': 2300, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903010, 'reachable_time': 44649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 24, 'inoctets': 1880, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 24, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1880, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 24, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 410297, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.710 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eba998b2-e46b-482d-a71c-1d7f85e4bae6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap970990f9-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903023, 'tstamp': 903023}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410299, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap970990f9-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903026, 'tstamp': 903026}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 410299, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.712 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970990f9-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.716 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap970990f9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.717 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.717 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap970990f9-70, col_values=(('external_ids', {'iface-id': 'a7ef9e15-2145-4a59-b756-368bcbe72d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:48:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:00.718 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.801 2 DEBUG nova.compute.manager [req-1aac3596-ec4c-494a-aee9-8c0949921f4d req-c703a233-b1bf-4786-a10f-3d5d276d05e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.802 2 DEBUG oslo_concurrency.lockutils [req-1aac3596-ec4c-494a-aee9-8c0949921f4d req-c703a233-b1bf-4786-a10f-3d5d276d05e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.802 2 DEBUG oslo_concurrency.lockutils [req-1aac3596-ec4c-494a-aee9-8c0949921f4d req-c703a233-b1bf-4786-a10f-3d5d276d05e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.802 2 DEBUG oslo_concurrency.lockutils [req-1aac3596-ec4c-494a-aee9-8c0949921f4d req-c703a233-b1bf-4786-a10f-3d5d276d05e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:00 np0005473739 nova_compute[259550]: 2025-10-07 14:48:00.802 2 DEBUG nova.compute.manager [req-1aac3596-ec4c-494a-aee9-8c0949921f4d req-c703a233-b1bf-4786-a10f-3d5d276d05e6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Processing event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.566 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.568 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848481.565543, 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.568 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] VM Started (Lifecycle Event)#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.573 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.577 2 INFO nova.virt.libvirt.driver [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Instance spawned successfully.#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.578 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.594 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.601 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.605 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.605 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.606 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.606 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.606 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.607 2 DEBUG nova.virt.libvirt.driver [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.637 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.638 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848481.5659757, 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.638 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.674 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.677 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848481.5709083, 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.677 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.682 2 INFO nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Took 8.44 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.683 2 DEBUG nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.694 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.696 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.727 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.768 2 INFO nova.compute.manager [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Took 9.37 seconds to build instance.#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.787 2 DEBUG oslo_concurrency.lockutils [None req-30b95e2b-8be6-4118-b5fd-4eaafc9629ee d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:48:01 np0005473739 nova_compute[259550]: 2025-10-07 14:48:01.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:48:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2574: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:48:02 np0005473739 nova_compute[259550]: 2025-10-07 14:48:02.911 2 DEBUG nova.compute.manager [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:48:02 np0005473739 nova_compute[259550]: 2025-10-07 14:48:02.911 2 DEBUG oslo_concurrency.lockutils [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:02 np0005473739 nova_compute[259550]: 2025-10-07 14:48:02.912 2 DEBUG oslo_concurrency.lockutils [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:02 np0005473739 nova_compute[259550]: 2025-10-07 14:48:02.912 2 DEBUG oslo_concurrency.lockutils [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:02 np0005473739 nova_compute[259550]: 2025-10-07 14:48:02.913 2 DEBUG nova.compute.manager [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] No waiting events found dispatching network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:48:02 np0005473739 nova_compute[259550]: 2025-10-07 14:48:02.913 2 WARNING nova.compute.manager [req-5490cfd6-be0f-4d3d-bf32-d6dad5083334 req-41b33835-2148-4045-b2f5-4c5072098535 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received unexpected event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:48:02 np0005473739 nova_compute[259550]: 2025-10-07 14:48:02.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:48:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:48:03 np0005473739 nova_compute[259550]: 2025-10-07 14:48:03.230 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:04 np0005473739 nova_compute[259550]: 2025-10-07 14:48:04.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2575: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1000 KiB/s rd, 1.8 MiB/s wr, 68 op/s
Oct  7 10:48:05 np0005473739 nova_compute[259550]: 2025-10-07 14:48:05.095 2 DEBUG nova.compute.manager [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:48:05 np0005473739 nova_compute[259550]: 2025-10-07 14:48:05.096 2 DEBUG nova.compute.manager [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing instance network info cache due to event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:48:05 np0005473739 nova_compute[259550]: 2025-10-07 14:48:05.096 2 DEBUG oslo_concurrency.lockutils [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:48:05 np0005473739 nova_compute[259550]: 2025-10-07 14:48:05.096 2 DEBUG oslo_concurrency.lockutils [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:48:05 np0005473739 nova_compute[259550]: 2025-10-07 14:48:05.096 2 DEBUG nova.network.neutron [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:48:06 np0005473739 nova_compute[259550]: 2025-10-07 14:48:06.574 2 DEBUG nova.network.neutron [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updated VIF entry in instance network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:48:06 np0005473739 nova_compute[259550]: 2025-10-07 14:48:06.575 2 DEBUG nova.network.neutron [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", 
"qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:48:06 np0005473739 nova_compute[259550]: 2025-10-07 14:48:06.595 2 DEBUG oslo_concurrency.lockutils [req-aada1cd2-caad-40c1-980c-93dede33a4a6 req-a5674181-7168-4aef-a44a-2fc862f2120d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:48:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2576: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #126. Immutable memtables: 0.
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.816629) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 75] Flushing memtable with next log file: 126
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487816667, "job": 75, "event": "flush_started", "num_memtables": 1, "num_entries": 750, "num_deletes": 251, "total_data_size": 930792, "memory_usage": 945728, "flush_reason": "Manual Compaction"}
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 75] Level-0 flush table #127: started
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487823373, "cf_name": "default", "job": 75, "event": "table_file_creation", "file_number": 127, "file_size": 921923, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 53670, "largest_seqno": 54419, "table_properties": {"data_size": 918037, "index_size": 1666, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8847, "raw_average_key_size": 19, "raw_value_size": 910236, "raw_average_value_size": 2013, "num_data_blocks": 74, "num_entries": 452, "num_filter_entries": 452, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848429, "oldest_key_time": 1759848429, "file_creation_time": 1759848487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 127, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 75] Flush lasted 6778 microseconds, and 3537 cpu microseconds.
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.823403) [db/flush_job.cc:967] [default] [JOB 75] Level-0 flush table #127: 921923 bytes OK
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.823421) [db/memtable_list.cc:519] [default] Level-0 commit table #127 started
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.825690) [db/memtable_list.cc:722] [default] Level-0 commit table #127: memtable #1 done
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.825704) EVENT_LOG_v1 {"time_micros": 1759848487825699, "job": 75, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.825718) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 75] Try to delete WAL files size 926947, prev total WAL file size 926947, number of live WAL files 2.
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000123.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.826224) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035303230' seq:72057594037927935, type:22 .. '7061786F730035323732' seq:0, type:0; will stop at (end)
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 76] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 75 Base level 0, inputs: [127(900KB)], [125(8877KB)]
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487826257, "job": 76, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [127], "files_L6": [125], "score": -1, "input_data_size": 10012384, "oldest_snapshot_seqno": -1}
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 76] Generated table #128: 7306 keys, 8317491 bytes, temperature: kUnknown
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487879560, "cf_name": "default", "job": 76, "event": "table_file_creation", "file_number": 128, "file_size": 8317491, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8271652, "index_size": 26462, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18309, "raw_key_size": 191347, "raw_average_key_size": 26, "raw_value_size": 8144006, "raw_average_value_size": 1114, "num_data_blocks": 1024, "num_entries": 7306, "num_filter_entries": 7306, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848487, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 128, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.879814) [db/compaction/compaction_job.cc:1663] [default] [JOB 76] Compacted 1@0 + 1@6 files to L6 => 8317491 bytes
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.881618) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 187.6 rd, 155.8 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.7 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(19.9) write-amplify(9.0) OK, records in: 7820, records dropped: 514 output_compression: NoCompression
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.881765) EVENT_LOG_v1 {"time_micros": 1759848487881755, "job": 76, "event": "compaction_finished", "compaction_time_micros": 53381, "compaction_time_cpu_micros": 27117, "output_level": 6, "num_output_files": 1, "total_output_size": 8317491, "num_input_records": 7820, "num_output_records": 7306, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000127.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487882060, "job": 76, "event": "table_file_deletion", "file_number": 127}
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000125.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848487883649, "job": 76, "event": "table_file_deletion", "file_number": 125}
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.826158) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.883720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.883725) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.883727) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.883728) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:48:07 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:48:07.883730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:48:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:48:08 np0005473739 nova_compute[259550]: 2025-10-07 14:48:08.233 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2577: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 897 KiB/s wr, 99 op/s
Oct  7 10:48:09 np0005473739 nova_compute[259550]: 2025-10-07 14:48:09.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2578: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:48:10 np0005473739 nova_compute[259550]: 2025-10-07 14:48:10.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.005 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.036 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.037 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.037 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.037 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.038 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:48:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:48:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3334848461' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.562 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.525s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.637 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.637 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008b as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.642 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.642 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008a as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.848 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.849 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3207MB free_disk=59.92183303833008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.850 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.850 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.948 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance c621ddbd-d6b8-461e-9374-4f7e50d0ca5f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.949 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.950 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:48:11 np0005473739 nova_compute[259550]: 2025-10-07 14:48:11.951 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:48:12 np0005473739 nova_compute[259550]: 2025-10-07 14:48:12.041 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:48:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:48:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2345827177' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:48:12 np0005473739 nova_compute[259550]: 2025-10-07 14:48:12.519 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:48:12 np0005473739 nova_compute[259550]: 2025-10-07 14:48:12.525 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:48:12 np0005473739 nova_compute[259550]: 2025-10-07 14:48:12.546 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:48:12 np0005473739 nova_compute[259550]: 2025-10-07 14:48:12.568 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:48:12 np0005473739 nova_compute[259550]: 2025-10-07 14:48:12.568 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2579: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:48:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:48:13 np0005473739 nova_compute[259550]: 2025-10-07 14:48:13.235 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:13 np0005473739 nova_compute[259550]: 2025-10-07 14:48:13.546 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:48:13 np0005473739 nova_compute[259550]: 2025-10-07 14:48:13.546 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:48:13 np0005473739 nova_compute[259550]: 2025-10-07 14:48:13.546 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:48:13 np0005473739 nova_compute[259550]: 2025-10-07 14:48:13.971 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:48:13 np0005473739 nova_compute[259550]: 2025-10-07 14:48:13.971 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:48:13 np0005473739 nova_compute[259550]: 2025-10-07 14:48:13.971 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:48:13 np0005473739 nova_compute[259550]: 2025-10-07 14:48:13.972 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid c621ddbd-d6b8-461e-9374-4f7e50d0ca5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:48:14 np0005473739 podman[410388]: 2025-10-07 14:48:14.092417623 +0000 UTC m=+0.076538396 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  7 10:48:14 np0005473739 podman[410389]: 2025-10-07 14:48:14.114903751 +0000 UTC m=+0.099256290 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=iscsid, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  7 10:48:14 np0005473739 nova_compute[259550]: 2025-10-07 14:48:14.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2580: 305 pgs: 305 active+clean; 168 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 247 KiB/s wr, 83 op/s
Oct  7 10:48:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:14Z|00184|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e6:14:3f 10.100.0.7
Oct  7 10:48:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:14Z|00185|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e6:14:3f 10.100.0.7
Oct  7 10:48:16 np0005473739 nova_compute[259550]: 2025-10-07 14:48:16.547 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:48:16 np0005473739 nova_compute[259550]: 2025-10-07 14:48:16.564 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:48:16 np0005473739 nova_compute[259550]: 2025-10-07 14:48:16.565 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:48:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2581: 305 pgs: 305 active+clean; 183 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 MiB/s wr, 64 op/s
Oct  7 10:48:16 np0005473739 nova_compute[259550]: 2025-10-07 14:48:16.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:48:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 06b9c6ef-7f49-4a97-906a-0a7869133d43 does not exist
Oct  7 10:48:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1218c77f-5e49-41df-9e30-6748b153180f does not exist
Oct  7 10:48:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a3572ace-65af-4dbf-8180-7cab3b631f56 does not exist
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:48:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:48:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:48:18 np0005473739 nova_compute[259550]: 2025-10-07 14:48:18.236 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:18 np0005473739 podman[410694]: 2025-10-07 14:48:18.21905401 +0000 UTC m=+0.020549197 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:48:18 np0005473739 podman[410694]: 2025-10-07 14:48:18.345181883 +0000 UTC m=+0.146677040 container create 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:48:18 np0005473739 systemd[1]: Started libpod-conmon-42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d.scope.
Oct  7 10:48:18 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:48:18 np0005473739 podman[410694]: 2025-10-07 14:48:18.531639678 +0000 UTC m=+0.333134845 container init 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True)
Oct  7 10:48:18 np0005473739 podman[410694]: 2025-10-07 14:48:18.545692582 +0000 UTC m=+0.347187749 container start 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 10:48:18 np0005473739 systemd[1]: libpod-42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d.scope: Deactivated successfully.
Oct  7 10:48:18 np0005473739 mystifying_buck[410711]: 167 167
Oct  7 10:48:18 np0005473739 conmon[410711]: conmon 42558a698836e7566b7d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d.scope/container/memory.events
Oct  7 10:48:18 np0005473739 podman[410694]: 2025-10-07 14:48:18.584504384 +0000 UTC m=+0.385999571 container attach 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:48:18 np0005473739 podman[410694]: 2025-10-07 14:48:18.586793215 +0000 UTC m=+0.388288372 container died 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:48:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2582: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:48:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8164d63db1cce6b3ca56faaf280cf0d859bc6672ff15746d290ed69d03a37237-merged.mount: Deactivated successfully.
Oct  7 10:48:18 np0005473739 podman[410694]: 2025-10-07 14:48:18.916041426 +0000 UTC m=+0.717536583 container remove 42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_buck, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:48:18 np0005473739 systemd[1]: libpod-conmon-42558a698836e7566b7d5ea3cb45fa2dd52dc2d7625637dc5529e6b763a8c77d.scope: Deactivated successfully.
Oct  7 10:48:19 np0005473739 podman[410735]: 2025-10-07 14:48:19.105753559 +0000 UTC m=+0.048015267 container create 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:48:19 np0005473739 systemd[1]: Started libpod-conmon-1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640.scope.
Oct  7 10:48:19 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:48:19 np0005473739 podman[410735]: 2025-10-07 14:48:19.085694356 +0000 UTC m=+0.027956094 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:48:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:19 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:19 np0005473739 podman[410735]: 2025-10-07 14:48:19.198538586 +0000 UTC m=+0.140800414 container init 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 10:48:19 np0005473739 podman[410735]: 2025-10-07 14:48:19.207448942 +0000 UTC m=+0.149710660 container start 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:48:19 np0005473739 podman[410735]: 2025-10-07 14:48:19.211083369 +0000 UTC m=+0.153345077 container attach 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:48:19 np0005473739 nova_compute[259550]: 2025-10-07 14:48:19.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:20 np0005473739 eager_cori[410751]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:48:20 np0005473739 eager_cori[410751]: --> relative data size: 1.0
Oct  7 10:48:20 np0005473739 eager_cori[410751]: --> All data devices are unavailable
Oct  7 10:48:20 np0005473739 systemd[1]: libpod-1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640.scope: Deactivated successfully.
Oct  7 10:48:20 np0005473739 systemd[1]: libpod-1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640.scope: Consumed 1.154s CPU time.
Oct  7 10:48:20 np0005473739 podman[410735]: 2025-10-07 14:48:20.424475701 +0000 UTC m=+1.366737419 container died 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 10:48:20 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9fedc5864be58e4e5591dca69cc2ffe67b66754934d9aba8c8f9d8400c2b2b74-merged.mount: Deactivated successfully.
Oct  7 10:48:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2583: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:48:20 np0005473739 podman[410735]: 2025-10-07 14:48:20.93232954 +0000 UTC m=+1.874591248 container remove 1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_cori, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:48:20 np0005473739 systemd[1]: libpod-conmon-1247211d0512e7ab20479caa9ff646f595f92242556ce8166611711a93038640.scope: Deactivated successfully.
Oct  7 10:48:21 np0005473739 podman[410933]: 2025-10-07 14:48:21.558976676 +0000 UTC m=+0.024815150 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:48:21 np0005473739 podman[410933]: 2025-10-07 14:48:21.816572763 +0000 UTC m=+0.282411217 container create d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:48:21 np0005473739 systemd[1]: Started libpod-conmon-d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4.scope.
Oct  7 10:48:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:48:22 np0005473739 podman[410933]: 2025-10-07 14:48:22.208776149 +0000 UTC m=+0.674614613 container init d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:48:22 np0005473739 podman[410933]: 2025-10-07 14:48:22.216473213 +0000 UTC m=+0.682311657 container start d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:48:22 np0005473739 silly_poitras[410949]: 167 167
Oct  7 10:48:22 np0005473739 systemd[1]: libpod-d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4.scope: Deactivated successfully.
Oct  7 10:48:22 np0005473739 podman[410933]: 2025-10-07 14:48:22.295651568 +0000 UTC m=+0.761490032 container attach d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:48:22 np0005473739 podman[410933]: 2025-10-07 14:48:22.296848409 +0000 UTC m=+0.762686853 container died d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:48:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-00c88d8eed651f384757c646631e620f75ced2493cc7cfa7c0b1482eaab4e95c-merged.mount: Deactivated successfully.
Oct  7 10:48:22 np0005473739 podman[410933]: 2025-10-07 14:48:22.403526035 +0000 UTC m=+0.869364479 container remove d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_poitras, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:48:22 np0005473739 systemd[1]: libpod-conmon-d8f7a8841df82aab3d7b58652de34806a3e29ea67905172bc06794a72e461df4.scope: Deactivated successfully.
Oct  7 10:48:22 np0005473739 podman[410975]: 2025-10-07 14:48:22.577479349 +0000 UTC m=+0.042469500 container create 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:48:22 np0005473739 systemd[1]: Started libpod-conmon-1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c.scope.
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2584: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:48:22 np0005473739 podman[410975]: 2025-10-07 14:48:22.557356083 +0000 UTC m=+0.022346254 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:48:22 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:48:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68514caba0fb5c24000dc412f345aca78890e232ba6269b7c192ae8b59a1826/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68514caba0fb5c24000dc412f345aca78890e232ba6269b7c192ae8b59a1826/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68514caba0fb5c24000dc412f345aca78890e232ba6269b7c192ae8b59a1826/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d68514caba0fb5c24000dc412f345aca78890e232ba6269b7c192ae8b59a1826/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:22 np0005473739 podman[410975]: 2025-10-07 14:48:22.677769314 +0000 UTC m=+0.142759495 container init 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:48:22 np0005473739 podman[410975]: 2025-10-07 14:48:22.684134163 +0000 UTC m=+0.149124324 container start 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:48:22 np0005473739 podman[410975]: 2025-10-07 14:48:22.689321152 +0000 UTC m=+0.154311323 container attach 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:48:22
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.control', 'vms', 'default.rgw.meta', 'backups', 'default.rgw.log', '.mgr', 'cephfs.cephfs.meta', 'images']
Oct  7 10:48:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.072 2 DEBUG nova.compute.manager [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.074 2 DEBUG nova.compute.manager [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing instance network info cache due to event network-changed-07e00b31-d9ec-46d0-927f-1d89f6d03bc6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.075 2 DEBUG oslo_concurrency.lockutils [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.075 2 DEBUG oslo_concurrency.lockutils [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.075 2 DEBUG nova.network.neutron [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Refreshing network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:48:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:48:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:48:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:48:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:48:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.135 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.136 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:48:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:48:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:48:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.137 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.137 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.138 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.139 2 INFO nova.compute.manager [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Terminating instance#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.140 2 DEBUG nova.compute.manager [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:48:23 np0005473739 kernel: tap07e00b31-d9 (unregistering): left promiscuous mode
Oct  7 10:48:23 np0005473739 NetworkManager[44949]: <info>  [1759848503.2097] device (tap07e00b31-d9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:48:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:48:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:23Z|01535|binding|INFO|Releasing lport 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 from this chassis (sb_readonly=0)
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.226 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:23Z|01536|binding|INFO|Setting lport 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 down in Southbound
Oct  7 10:48:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:23Z|01537|binding|INFO|Removing iface tap07e00b31-d9 ovn-installed in OVS
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.231 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.242 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e6:14:3f 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:143f 2001:db8::f816:3eff:fee6:143f'], port_security=['fa:16:3e:e6:14:3f 10.100.0.7 2001:db8:0:1:f816:3eff:fee6:143f 2001:db8::f816:3eff:fee6:143f'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28 2001:db8:0:1:f816:3eff:fee6:143f/64 2001:db8::f816:3eff:fee6:143f/64', 'neutron:device_id': '96b2365c-ce47-4fa4-bff7-51dd2e4ac413', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ca8f444-a15b-48d2-afd7-d5447a1f3a63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=07e00b31-d9ec-46d0-927f-1d89f6d03bc6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.244 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6 in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 unbound from our chassis#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.245 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 970990f9-7a8a-40de-9a55-f4c40d657453#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.248 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.266 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e1b17e96-c8ec-4666-9ab9-33b581a3abb2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:23 np0005473739 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Deactivated successfully.
Oct  7 10:48:23 np0005473739 systemd[1]: machine-qemu\x2d173\x2dinstance\x2d0000008b.scope: Consumed 13.822s CPU time.
Oct  7 10:48:23 np0005473739 systemd-machined[214580]: Machine qemu-173-instance-0000008b terminated.
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.300 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8ab41ed2-030a-46b4-bc6f-00e21a40490f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.304 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b0d73f3a-3c21-4d75-a9ca-71d5bd475d0e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.337 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a7d12e16-49e7-42c2-8986-8b43373b6ccc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.367 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ccbf752f-dc96-4033-befc-7e46347313ba]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap970990f9-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:9c:cb'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 44, 'tx_packets': 7, 'rx_bytes': 3768, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 438], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903010, 'reachable_time': 44649, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 40, 'inoctets': 3040, 'indelivers': 13, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 40, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 3040, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 40, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 13, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411009, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.386 2 INFO nova.virt.libvirt.driver [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Instance destroyed successfully.#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.386 2 DEBUG nova.objects.instance [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.386 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f40bac61-5722-477a-bdf9-68f43879a00f]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap970990f9-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903023, 'tstamp': 903023}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411017, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap970990f9-71'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 903026, 'tstamp': 903026}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 411017, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.388 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970990f9-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.395 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap970990f9-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.395 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.395 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap970990f9-70, col_values=(('external_ids', {'iface-id': 'a7ef9e15-2145-4a59-b756-368bcbe72d69'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:48:23 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:23.396 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.403 2 DEBUG nova.virt.libvirt.vif [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:47:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-179829393',display_name='tempest-TestGettingAddress-server-179829393',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-179829393',id=139,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:48:01Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-17ii6zr4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:48:01Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=96b2365c-ce47-4fa4-bff7-51dd2e4ac413,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", 
"type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.404 2 DEBUG nova.network.os_vif_util [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.250", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.405 2 DEBUG nova.network.os_vif_util [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.406 2 DEBUG os_vif [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.408 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap07e00b31-d9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.414 2 INFO os_vif [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e6:14:3f,bridge_name='br-int',has_traffic_filtering=True,id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap07e00b31-d9')#033[00m
Oct  7 10:48:23 np0005473739 sad_hoover[410993]: {
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:    "0": [
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:        {
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "devices": [
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "/dev/loop3"
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            ],
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_name": "ceph_lv0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_size": "21470642176",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "name": "ceph_lv0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "tags": {
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.cluster_name": "ceph",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.crush_device_class": "",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.encrypted": "0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.osd_id": "0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.type": "block",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.vdo": "0"
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            },
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "type": "block",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "vg_name": "ceph_vg0"
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:        }
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:    ],
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:    "1": [
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:        {
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "devices": [
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "/dev/loop4"
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            ],
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_name": "ceph_lv1",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_size": "21470642176",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "name": "ceph_lv1",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "tags": {
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.cluster_name": "ceph",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.crush_device_class": "",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.encrypted": "0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.osd_id": "1",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.type": "block",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.vdo": "0"
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            },
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "type": "block",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "vg_name": "ceph_vg1"
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:        }
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:    ],
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:    "2": [
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:        {
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "devices": [
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "/dev/loop5"
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            ],
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_name": "ceph_lv2",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_size": "21470642176",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "name": "ceph_lv2",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "tags": {
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.cluster_name": "ceph",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.crush_device_class": "",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.encrypted": "0",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.osd_id": "2",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.type": "block",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:                "ceph.vdo": "0"
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            },
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "type": "block",
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:            "vg_name": "ceph_vg2"
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:        }
Oct  7 10:48:23 np0005473739 sad_hoover[410993]:    ]
Oct  7 10:48:23 np0005473739 sad_hoover[410993]: }
Oct  7 10:48:23 np0005473739 systemd[1]: libpod-1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c.scope: Deactivated successfully.
Oct  7 10:48:23 np0005473739 podman[410975]: 2025-10-07 14:48:23.542285543 +0000 UTC m=+1.007275714 container died 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:48:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d68514caba0fb5c24000dc412f345aca78890e232ba6269b7c192ae8b59a1826-merged.mount: Deactivated successfully.
Oct  7 10:48:23 np0005473739 podman[410975]: 2025-10-07 14:48:23.611480882 +0000 UTC m=+1.076471033 container remove 1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_hoover, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 10:48:23 np0005473739 systemd[1]: libpod-conmon-1efed685bf531d19e51cc53a4e41a28e9f6490eee4b562d41c84b731c1d6528c.scope: Deactivated successfully.
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.895 2 INFO nova.virt.libvirt.driver [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Deleting instance files /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413_del#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.897 2 INFO nova.virt.libvirt.driver [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Deletion of /var/lib/nova/instances/96b2365c-ce47-4fa4-bff7-51dd2e4ac413_del complete#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.950 2 INFO nova.compute.manager [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Took 0.81 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.951 2 DEBUG oslo.service.loopingcall [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.952 2 DEBUG nova.compute.manager [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:48:23 np0005473739 nova_compute[259550]: 2025-10-07 14:48:23.952 2 DEBUG nova.network.neutron [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:48:24 np0005473739 podman[411200]: 2025-10-07 14:48:24.342886474 +0000 UTC m=+0.109431950 container create 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct  7 10:48:24 np0005473739 podman[411200]: 2025-10-07 14:48:24.259749464 +0000 UTC m=+0.026294930 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:48:24 np0005473739 systemd[1]: Started libpod-conmon-0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a.scope.
Oct  7 10:48:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:48:24 np0005473739 podman[411200]: 2025-10-07 14:48:24.570524614 +0000 UTC m=+0.337070090 container init 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 10:48:24 np0005473739 nova_compute[259550]: 2025-10-07 14:48:24.573 2 DEBUG nova.network.neutron [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:48:24 np0005473739 podman[411200]: 2025-10-07 14:48:24.579464222 +0000 UTC m=+0.346009648 container start 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:48:24 np0005473739 happy_fermat[411216]: 167 167
Oct  7 10:48:24 np0005473739 systemd[1]: libpod-0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a.scope: Deactivated successfully.
Oct  7 10:48:24 np0005473739 nova_compute[259550]: 2025-10-07 14:48:24.593 2 INFO nova.compute.manager [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Took 0.64 seconds to deallocate network for instance.#033[00m
Oct  7 10:48:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2585: 305 pgs: 305 active+clean; 191 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct  7 10:48:24 np0005473739 nova_compute[259550]: 2025-10-07 14:48:24.645 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:24 np0005473739 nova_compute[259550]: 2025-10-07 14:48:24.646 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:24 np0005473739 podman[411200]: 2025-10-07 14:48:24.673669136 +0000 UTC m=+0.440214572 container attach 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 10:48:24 np0005473739 podman[411200]: 2025-10-07 14:48:24.674837017 +0000 UTC m=+0.441382463 container died 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 10:48:24 np0005473739 nova_compute[259550]: 2025-10-07 14:48:24.709 2 DEBUG oslo_concurrency.processutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:48:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3b79d2abfc80da078dfc5c57f4056cb6abfedc148aaf92c64e29a8710788c0c8-merged.mount: Deactivated successfully.
Oct  7 10:48:24 np0005473739 podman[411200]: 2025-10-07 14:48:24.818348061 +0000 UTC m=+0.584893497 container remove 0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_fermat, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:48:24 np0005473739 systemd[1]: libpod-conmon-0af0885e61172a933195f7cb35ac7aa719f6044c1187aa1f184bde3c2f62e32a.scope: Deactivated successfully.
Oct  7 10:48:25 np0005473739 podman[411261]: 2025-10-07 14:48:25.013672074 +0000 UTC m=+0.046112797 container create 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.016 2 DEBUG nova.network.neutron [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updated VIF entry in instance network info cache for port 07e00b31-d9ec-46d0-927f-1d89f6d03bc6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.017 2 DEBUG nova.network.neutron [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [{"id": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "address": "fa:16:3e:e6:14:3f", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee6:143f", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap07e00b31-d9", "ovs_interfaceid": "07e00b31-d9ec-46d0-927f-1d89f6d03bc6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.035 2 DEBUG oslo_concurrency.lockutils [req-d3068b3f-8baa-43d0-b81f-4d7fe49f71dc req-4beab88b-6515-4b5e-80c9-53e720e2834a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-96b2365c-ce47-4fa4-bff7-51dd2e4ac413" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:48:25 np0005473739 systemd[1]: Started libpod-conmon-1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b.scope.
Oct  7 10:48:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:48:25 np0005473739 podman[411261]: 2025-10-07 14:48:24.996545669 +0000 UTC m=+0.028986412 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:48:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f801c86c80fffdfbdbfb67447ac66b6380cdaa3df0b2df87262a8229d8007f73/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f801c86c80fffdfbdbfb67447ac66b6380cdaa3df0b2df87262a8229d8007f73/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f801c86c80fffdfbdbfb67447ac66b6380cdaa3df0b2df87262a8229d8007f73/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f801c86c80fffdfbdbfb67447ac66b6380cdaa3df0b2df87262a8229d8007f73/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:48:25 np0005473739 podman[411261]: 2025-10-07 14:48:25.144555802 +0000 UTC m=+0.176996555 container init 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 10:48:25 np0005473739 podman[411261]: 2025-10-07 14:48:25.157735913 +0000 UTC m=+0.190176626 container start 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 10:48:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:48:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1432347890' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.183 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-vif-unplugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.184 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.184 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.184 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.184 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] No waiting events found dispatching network-vif-unplugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.184 2 WARNING nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received unexpected event network-vif-unplugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.184 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.185 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.185 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.185 2 DEBUG oslo_concurrency.lockutils [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.186 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] No waiting events found dispatching network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.186 2 WARNING nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received unexpected event network-vif-plugged-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.186 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Received event network-vif-deleted-07e00b31-d9ec-46d0-927f-1d89f6d03bc6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.186 2 INFO nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Neutron deleted interface 07e00b31-d9ec-46d0-927f-1d89f6d03bc6; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.186 2 DEBUG nova.network.neutron [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:48:25 np0005473739 podman[411261]: 2025-10-07 14:48:25.191144381 +0000 UTC m=+0.223585104 container attach 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.203 2 DEBUG oslo_concurrency.processutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.210 2 DEBUG nova.compute.provider_tree [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.241 2 DEBUG nova.compute.manager [req-c2840c16-eb7d-428e-881c-f06ae7845a16 req-26458181-3224-444e-8bdb-0323abd764ca 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Detach interface failed, port_id=07e00b31-d9ec-46d0-927f-1d89f6d03bc6, reason: Instance 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.246 2 DEBUG nova.scheduler.client.report [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.265 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.291 2 INFO nova.scheduler.client.report [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 96b2365c-ce47-4fa4-bff7-51dd2e4ac413#033[00m
Oct  7 10:48:25 np0005473739 nova_compute[259550]: 2025-10-07 14:48:25.354 2 DEBUG oslo_concurrency.lockutils [None req-6cb588c3-bcb4-41d6-b466-9597363ac2bd d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "96b2365c-ce47-4fa4-bff7-51dd2e4ac413" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]: {
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "osd_id": 2,
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "type": "bluestore"
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:    },
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "osd_id": 1,
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "type": "bluestore"
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:    },
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "osd_id": 0,
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:        "type": "bluestore"
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]:    }
Oct  7 10:48:26 np0005473739 nervous_hugle[411279]: }
Oct  7 10:48:26 np0005473739 systemd[1]: libpod-1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b.scope: Deactivated successfully.
Oct  7 10:48:26 np0005473739 systemd[1]: libpod-1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b.scope: Consumed 1.101s CPU time.
Oct  7 10:48:26 np0005473739 podman[411261]: 2025-10-07 14:48:26.294233881 +0000 UTC m=+1.326674604 container died 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:48:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2586: 305 pgs: 305 active+clean; 159 MiB data, 1014 MiB used, 59 GiB / 60 GiB avail; 315 KiB/s rd, 1.9 MiB/s wr, 67 op/s
Oct  7 10:48:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f801c86c80fffdfbdbfb67447ac66b6380cdaa3df0b2df87262a8229d8007f73-merged.mount: Deactivated successfully.
Oct  7 10:48:26 np0005473739 podman[411261]: 2025-10-07 14:48:26.943985522 +0000 UTC m=+1.976426245 container remove 1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_hugle, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:48:26 np0005473739 systemd[1]: libpod-conmon-1e0c207eb8369d047d415c5a47dc7229825c54334bf32698185e13bcc2e7ca0b.scope: Deactivated successfully.
Oct  7 10:48:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:48:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:48:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:48:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:48:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 191fd511-8159-4260-b64d-523697365158 does not exist
Oct  7 10:48:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0def557f-5fcf-43e4-bf8d-1eb01d5d5a02 does not exist
Oct  7 10:48:27 np0005473739 podman[411350]: 2025-10-07 14:48:27.229880001 +0000 UTC m=+0.060653663 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:48:27 np0005473739 podman[411351]: 2025-10-07 14:48:27.260093054 +0000 UTC m=+0.088468483 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:48:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:48:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.163 2 DEBUG nova.compute.manager [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.164 2 DEBUG nova.compute.manager [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing instance network info cache due to event network-changed-d90f9db1-8372-46fb-93ed-9be2902fe85c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.165 2 DEBUG oslo_concurrency.lockutils [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.165 2 DEBUG oslo_concurrency.lockutils [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.166 2 DEBUG nova.network.neutron [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Refreshing network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:48:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.250 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.252 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.253 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.253 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.253 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.254 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.255 2 INFO nova.compute.manager [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Terminating instance#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.256 2 DEBUG nova.compute.manager [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:48:28 np0005473739 kernel: tapd90f9db1-83 (unregistering): left promiscuous mode
Oct  7 10:48:28 np0005473739 NetworkManager[44949]: <info>  [1759848508.3236] device (tapd90f9db1-83): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.335 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:28Z|01538|binding|INFO|Releasing lport d90f9db1-8372-46fb-93ed-9be2902fe85c from this chassis (sb_readonly=0)
Oct  7 10:48:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:28Z|01539|binding|INFO|Setting lport d90f9db1-8372-46fb-93ed-9be2902fe85c down in Southbound
Oct  7 10:48:28 np0005473739 ovn_controller[151684]: 2025-10-07T14:48:28Z|01540|binding|INFO|Removing iface tapd90f9db1-83 ovn-installed in OVS
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.340 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.347 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:39:08 10.100.0.13 2001:db8:0:1:f816:3eff:fe48:3908 2001:db8::f816:3eff:fe48:3908'], port_security=['fa:16:3e:48:39:08 10.100.0.13 2001:db8:0:1:f816:3eff:fe48:3908 2001:db8::f816:3eff:fe48:3908'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28 2001:db8:0:1:f816:3eff:fe48:3908/64 2001:db8::f816:3eff:fe48:3908/64', 'neutron:device_id': 'c621ddbd-d6b8-461e-9374-4f7e50d0ca5f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-970990f9-7a8a-40de-9a55-f4c40d657453', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ca8f444-a15b-48d2-afd7-d5447a1f3a63', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=227fc944-7eb8-4e47-9b7f-017eeb7f2711, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=d90f9db1-8372-46fb-93ed-9be2902fe85c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.348 161536 INFO neutron.agent.ovn.metadata.agent [-] Port d90f9db1-8372-46fb-93ed-9be2902fe85c in datapath 970990f9-7a8a-40de-9a55-f4c40d657453 unbound from our chassis#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.349 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 970990f9-7a8a-40de-9a55-f4c40d657453, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.350 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8fd90a10-566e-4bbb-a49f-c27f99a791f6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.350 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453 namespace which is not needed anymore#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:28 np0005473739 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Deactivated successfully.
Oct  7 10:48:28 np0005473739 systemd[1]: machine-qemu\x2d172\x2dinstance\x2d0000008a.scope: Consumed 15.080s CPU time.
Oct  7 10:48:28 np0005473739 systemd-machined[214580]: Machine qemu-172-instance-0000008a terminated.
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.410 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.476 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.495 2 INFO nova.virt.libvirt.driver [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Instance destroyed successfully.#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.496 2 DEBUG nova.objects.instance [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid c621ddbd-d6b8-461e-9374-4f7e50d0ca5f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:48:28 np0005473739 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [NOTICE]   (409742) : haproxy version is 2.8.14-c23fe91
Oct  7 10:48:28 np0005473739 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [NOTICE]   (409742) : path to executable is /usr/sbin/haproxy
Oct  7 10:48:28 np0005473739 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [WARNING]  (409742) : Exiting Master process...
Oct  7 10:48:28 np0005473739 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [WARNING]  (409742) : Exiting Master process...
Oct  7 10:48:28 np0005473739 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [ALERT]    (409742) : Current worker (409744) exited with code 143 (Terminated)
Oct  7 10:48:28 np0005473739 neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453[409738]: [WARNING]  (409742) : All workers exited. Exiting... (0)
Oct  7 10:48:28 np0005473739 systemd[1]: libpod-10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c.scope: Deactivated successfully.
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.512 2 DEBUG nova.virt.libvirt.vif [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:47:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1379651617',display_name='tempest-TestGettingAddress-server-1379651617',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1379651617',id=138,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBEkAt4wtteK91EP3aa6Au9K7yq+N15JSCUefd3a6DRNjmPgvGC0hgDKYUniMgalUA3tACkiPsQDKv7a9b9TFDwqZAEmvf7GWwU8qoBld9UJd4PAomUBnp4Nc81ZIU+LnYw==',key_name='tempest-TestGettingAddress-370921161',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:47:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-wdqfp22g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:47:27Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=c621ddbd-d6b8-461e-9374-4f7e50d0ca5f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.512 2 DEBUG nova.network.os_vif_util [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, 
"delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.514 2 DEBUG nova.network.os_vif_util [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.514 2 DEBUG os_vif [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.516 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.516 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd90f9db1-83, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:48:28 np0005473739 podman[411443]: 2025-10-07 14:48:28.517159947 +0000 UTC m=+0.059646397 container died 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.524 2 INFO os_vif [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:39:08,bridge_name='br-int',has_traffic_filtering=True,id=d90f9db1-8372-46fb-93ed-9be2902fe85c,network=Network(970990f9-7a8a-40de-9a55-f4c40d657453),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd90f9db1-83')#033[00m
Oct  7 10:48:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c-userdata-shm.mount: Deactivated successfully.
Oct  7 10:48:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay-47a24921b3c8e4f08e504680464e3b58519f09b02a877bb0e4246cd4ccca4536-merged.mount: Deactivated successfully.
Oct  7 10:48:28 np0005473739 podman[411443]: 2025-10-07 14:48:28.557600862 +0000 UTC m=+0.100087302 container cleanup 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:48:28 np0005473739 systemd[1]: libpod-conmon-10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c.scope: Deactivated successfully.
Oct  7 10:48:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2587: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 218 KiB/s rd, 978 KiB/s wr, 66 op/s
Oct  7 10:48:28 np0005473739 podman[411501]: 2025-10-07 14:48:28.660719974 +0000 UTC m=+0.076391562 container remove 10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.668 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92178668-3360-4cce-9414-358119271785]: (4, ('Tue Oct  7 02:48:28 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453 (10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c)\n10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c\nTue Oct  7 02:48:28 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453 (10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c)\n10975c5c82ccb569ba7cdbe9db8ba00da4e9ab40681c315d450a12b61b0bb76c\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.670 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[eac524eb-455f-4e46-85e8-7e3746d43bbf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.672 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap970990f9-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.675 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:28 np0005473739 kernel: tap970990f9-70: left promiscuous mode
Oct  7 10:48:28 np0005473739 nova_compute[259550]: 2025-10-07 14:48:28.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.693 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ef2772a4-c674-4d64-90d2-489c8d48a0f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.719 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6e671a80-66d8-440a-acb7-9384ffbc3b3d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.721 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f11aef5a-4ebf-477f-8352-5d4396359bf9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.739 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[922a0276-661c-44f3-a2ff-9670a20e6583]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 903002, 'reachable_time': 18976, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411516, 'error': None, 'target': 'ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:28 np0005473739 systemd[1]: run-netns-ovnmeta\x2d970990f9\x2d7a8a\x2d40de\x2d9a55\x2df4c40d657453.mount: Deactivated successfully.
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.744 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-970990f9-7a8a-40de-9a55-f4c40d657453 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:48:28 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:28.745 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[86e6cbf8-f92d-4ef4-9540-700f20fd30f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.072 2 DEBUG nova.compute.manager [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-unplugged-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.072 2 DEBUG oslo_concurrency.lockutils [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.072 2 DEBUG oslo_concurrency.lockutils [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.072 2 DEBUG oslo_concurrency.lockutils [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.073 2 DEBUG nova.compute.manager [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] No waiting events found dispatching network-vif-unplugged-d90f9db1-8372-46fb-93ed-9be2902fe85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.073 2 DEBUG nova.compute.manager [req-b6320d80-cf59-4219-8315-27b888403cd3 req-fafc5822-92b1-4947-9033-f9af68b9747f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-unplugged-d90f9db1-8372-46fb-93ed-9be2902fe85c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.198 2 INFO nova.virt.libvirt.driver [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Deleting instance files /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_del#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.199 2 INFO nova.virt.libvirt.driver [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Deletion of /var/lib/nova/instances/c621ddbd-d6b8-461e-9374-4f7e50d0ca5f_del complete#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.381 2 INFO nova.compute.manager [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Took 1.12 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.382 2 DEBUG oslo.service.loopingcall [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.382 2 DEBUG nova.compute.manager [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:48:29 np0005473739 nova_compute[259550]: 2025-10-07 14:48:29.382 2 DEBUG nova.network.neutron [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:48:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:30.098 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=48, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=47) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:48:30 np0005473739 nova_compute[259550]: 2025-10-07 14:48:30.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:30.099 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:48:30 np0005473739 nova_compute[259550]: 2025-10-07 14:48:30.466 2 DEBUG nova.network.neutron [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:48:30 np0005473739 nova_compute[259550]: 2025-10-07 14:48:30.499 2 INFO nova.compute.manager [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Took 1.12 seconds to deallocate network for instance.#033[00m
Oct  7 10:48:30 np0005473739 nova_compute[259550]: 2025-10-07 14:48:30.570 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:30 np0005473739 nova_compute[259550]: 2025-10-07 14:48:30.571 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:30 np0005473739 nova_compute[259550]: 2025-10-07 14:48:30.635 2 DEBUG oslo_concurrency.processutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:48:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2588: 305 pgs: 305 active+clean; 102 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 24 KiB/s wr, 48 op/s
Oct  7 10:48:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:48:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/490652022' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.123 2 DEBUG oslo_concurrency.processutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.488s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.130 2 DEBUG nova.compute.provider_tree [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.166 2 DEBUG nova.scheduler.client.report [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.179 2 DEBUG nova.compute.manager [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.180 2 DEBUG oslo_concurrency.lockutils [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.180 2 DEBUG oslo_concurrency.lockutils [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.180 2 DEBUG oslo_concurrency.lockutils [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.180 2 DEBUG nova.compute.manager [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] No waiting events found dispatching network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.181 2 WARNING nova.compute.manager [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received unexpected event network-vif-plugged-d90f9db1-8372-46fb-93ed-9be2902fe85c for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.181 2 DEBUG nova.compute.manager [req-cb66206e-fb6b-497d-afd5-f8d44a74ff2e req-b5c7ea5a-a945-4fed-808d-ed92d907fe6b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Received event network-vif-deleted-d90f9db1-8372-46fb-93ed-9be2902fe85c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.200 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.228 2 INFO nova.scheduler.client.report [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance c621ddbd-d6b8-461e-9374-4f7e50d0ca5f#033[00m
Oct  7 10:48:31 np0005473739 nova_compute[259550]: 2025-10-07 14:48:31.309 2 DEBUG oslo_concurrency.lockutils [None req-f0ec9c61-c93f-4285-99fe-e38064207f7f d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:48:32 np0005473739 nova_compute[259550]: 2025-10-07 14:48:32.311 2 DEBUG nova.network.neutron [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updated VIF entry in instance network info cache for port d90f9db1-8372-46fb-93ed-9be2902fe85c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:48:32 np0005473739 nova_compute[259550]: 2025-10-07 14:48:32.312 2 DEBUG nova.network.neutron [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Updating instance_info_cache with network_info: [{"id": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "address": "fa:16:3e:48:39:08", "network": {"id": "970990f9-7a8a-40de-9a55-f4c40d657453", "bridge": "br-int", "label": "tempest-network-smoke--1947295101", "subnets": [{"cidr": "2001:db8:0:1::/64", "dns": [], "gateway": {"address": "2001:db8:0:1::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8:0:1:f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}, {"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe48:3908", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd90f9db1-83", "ovs_interfaceid": "d90f9db1-8372-46fb-93ed-9be2902fe85c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:48:32 np0005473739 nova_compute[259550]: 2025-10-07 14:48:32.338 2 DEBUG oslo_concurrency.lockutils [req-bdfc2fa5-9b2f-4aa5-816a-26b2d3d84e53 req-d55c51fb-8ee2-4769-8af3-0f9c8938669e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-c621ddbd-d6b8-461e-9374-4f7e50d0ca5f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2589: 305 pgs: 305 active+clean; 102 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 23 KiB/s wr, 47 op/s
Oct  7 10:48:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:48:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2908587486' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:48:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:48:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2908587486' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0006099620694145992 of space, bias 1.0, pg target 0.18298862082437975 quantized to 32 (current 32)
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:48:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:48:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:48:33 np0005473739 nova_compute[259550]: 2025-10-07 14:48:33.251 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:33 np0005473739 nova_compute[259550]: 2025-10-07 14:48:33.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2590: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 44 KiB/s rd, 24 KiB/s wr, 57 op/s
Oct  7 10:48:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2591: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 37 KiB/s rd, 12 KiB/s wr, 49 op/s
Oct  7 10:48:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:37.100 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '48'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:48:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:48:38 np0005473739 nova_compute[259550]: 2025-10-07 14:48:38.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:38 np0005473739 nova_compute[259550]: 2025-10-07 14:48:38.385 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848503.3836818, 96b2365c-ce47-4fa4-bff7-51dd2e4ac413 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:48:38 np0005473739 nova_compute[259550]: 2025-10-07 14:48:38.385 2 INFO nova.compute.manager [-] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:48:38 np0005473739 nova_compute[259550]: 2025-10-07 14:48:38.411 2 DEBUG nova.compute.manager [None req-21808f39-ca3a-4265-a3fe-b968ee0b8469 - - - - - -] [instance: 96b2365c-ce47-4fa4-bff7-51dd2e4ac413] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:48:38 np0005473739 nova_compute[259550]: 2025-10-07 14:48:38.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2592: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 12 KiB/s wr, 43 op/s
Oct  7 10:48:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2593: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 3.5 KiB/s wr, 22 op/s
Oct  7 10:48:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2594: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 852 B/s wr, 9 op/s
Oct  7 10:48:43 np0005473739 nova_compute[259550]: 2025-10-07 14:48:43.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:48:43 np0005473739 nova_compute[259550]: 2025-10-07 14:48:43.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:43 np0005473739 nova_compute[259550]: 2025-10-07 14:48:43.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:43 np0005473739 nova_compute[259550]: 2025-10-07 14:48:43.494 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848508.4918177, c621ddbd-d6b8-461e-9374-4f7e50d0ca5f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:48:43 np0005473739 nova_compute[259550]: 2025-10-07 14:48:43.494 2 INFO nova.compute.manager [-] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:48:43 np0005473739 nova_compute[259550]: 2025-10-07 14:48:43.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:43 np0005473739 nova_compute[259550]: 2025-10-07 14:48:43.596 2 DEBUG nova.compute.manager [None req-c998466e-40f2-470b-83f6-7e425c5df92a - - - - - -] [instance: c621ddbd-d6b8-461e-9374-4f7e50d0ca5f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:48:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2595: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail; 5.4 KiB/s rd, 853 B/s wr, 9 op/s
Oct  7 10:48:45 np0005473739 podman[411543]: 2025-10-07 14:48:45.078773231 +0000 UTC m=+0.064328961 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct  7 10:48:45 np0005473739 podman[411542]: 2025-10-07 14:48:45.08434749 +0000 UTC m=+0.070524457 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 10:48:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2596: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:48:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:48:48 np0005473739 nova_compute[259550]: 2025-10-07 14:48:48.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:48 np0005473739 nova_compute[259550]: 2025-10-07 14:48:48.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2597: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:48:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2598: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:48:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2599: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:48:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:48:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:48:53 np0005473739 nova_compute[259550]: 2025-10-07 14:48:53.258 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:53 np0005473739 nova_compute[259550]: 2025-10-07 14:48:53.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:53.595 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c5:4f:60 10.100.0.2 2001:db8::f816:3eff:fec5:4f60'], port_security=[], type=localport, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': ''}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.2/28 2001:db8::f816:3eff:fec5:4f60/64', 'neutron:device_id': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '3', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba61e351-8cde-4bb5-94c2-d294e74e4b35, chassis=[], tunnel_key=1, gateway_chassis=[], requested_chassis=[], logical_port=c9a8228f-4b9d-4f81-919d-b6781c06cebf) old=Port_Binding(mac=['fa:16:3e:c5:4f:60 10.100.0.2'], external_ids={'neutron:cidrs': '10.100.0.2/28', 'neutron:device_id': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:device_owner': 'network:distributed', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:48:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:53.596 161536 INFO neutron.agent.ovn.metadata.agent [-] Metadata Port c9a8228f-4b9d-4f81-919d-b6781c06cebf in datapath 69026eb8-a969-41bf-9300-c17871babf58 updated#033[00m
Oct  7 10:48:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:53.597 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69026eb8-a969-41bf-9300-c17871babf58, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:48:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:48:53.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd9cf94a-d634-4cf1-8d5d-8b3084e94fe6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:48:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2600: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:48:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2601: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:48:58 np0005473739 podman[411584]: 2025-10-07 14:48:58.087051322 +0000 UTC m=+0.063995421 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Oct  7 10:48:58 np0005473739 podman[411585]: 2025-10-07 14:48:58.118157331 +0000 UTC m=+0.092428507 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:48:58 np0005473739 nova_compute[259550]: 2025-10-07 14:48:58.260 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:58 np0005473739 nova_compute[259550]: 2025-10-07 14:48:58.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:48:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2602: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:48:58 np0005473739 nova_compute[259550]: 2025-10-07 14:48:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:48:59 np0005473739 nova_compute[259550]: 2025-10-07 14:48:59.326 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:59 np0005473739 nova_compute[259550]: 2025-10-07 14:48:59.327 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:59 np0005473739 nova_compute[259550]: 2025-10-07 14:48:59.346 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:48:59 np0005473739 nova_compute[259550]: 2025-10-07 14:48:59.435 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:48:59 np0005473739 nova_compute[259550]: 2025-10-07 14:48:59.435 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:48:59 np0005473739 nova_compute[259550]: 2025-10-07 14:48:59.444 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:48:59 np0005473739 nova_compute[259550]: 2025-10-07 14:48:59.444 2 INFO nova.compute.claims [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:48:59 np0005473739 nova_compute[259550]: 2025-10-07 14:48:59.539 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:48:59 np0005473739 nova_compute[259550]: 2025-10-07 14:48:59.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:49:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:00.082 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:00.083 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:00.083 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:00 np0005473739 ceph-mds[100686]: mds.beacon.cephfs.compute-0.xpofvx missed beacon ack from the monitors
Oct  7 10:49:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2603: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:49:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:49:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3294263181' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.504 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.965s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.512 2 DEBUG nova.compute.provider_tree [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.536 2 DEBUG nova.scheduler.client.report [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.563 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 2.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.564 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.613 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.613 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.645 2 INFO nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.677 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.776 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.778 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.778 2 INFO nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Creating image(s)#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.807 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.837 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.865 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.870 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.962 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.964 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.965 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.965 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.991 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:01 np0005473739 nova_compute[259550]: 2025-10-07 14:49:01.995 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f8ed40fc-237a-46e5-9557-c128fe833cea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:02 np0005473739 nova_compute[259550]: 2025-10-07 14:49:02.043 2 DEBUG nova.policy [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:49:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2604: 305 pgs: 305 active+clean; 41 MiB data, 948 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:49:02 np0005473739 nova_compute[259550]: 2025-10-07 14:49:02.759 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Successfully created port: b3e49a3d-2116-49cf-832f-f0126541c3aa _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.261 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.615 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Successfully updated port: b3e49a3d-2116-49cf-832f-f0126541c3aa _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.635 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.635 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.635 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.711 2 DEBUG nova.compute.manager [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.712 2 DEBUG nova.compute.manager [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing instance network info cache due to event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.712 2 DEBUG oslo_concurrency.lockutils [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.798 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:49:03 np0005473739 nova_compute[259550]: 2025-10-07 14:49:03.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:49:04 np0005473739 nova_compute[259550]: 2025-10-07 14:49:04.369 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f8ed40fc-237a-46e5-9557-c128fe833cea_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.374s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:04 np0005473739 nova_compute[259550]: 2025-10-07 14:49:04.433 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:49:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2605: 305 pgs: 305 active+clean; 43 MiB data, 952 MiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 92 KiB/s wr, 1 op/s
Oct  7 10:49:04 np0005473739 nova_compute[259550]: 2025-10-07 14:49:04.885 2 DEBUG nova.network.neutron [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updating instance_info_cache with network_info: [{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:49:04 np0005473739 nova_compute[259550]: 2025-10-07 14:49:04.913 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:49:04 np0005473739 nova_compute[259550]: 2025-10-07 14:49:04.914 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Instance network_info: |[{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:49:04 np0005473739 nova_compute[259550]: 2025-10-07 14:49:04.915 2 DEBUG oslo_concurrency.lockutils [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:49:04 np0005473739 nova_compute[259550]: 2025-10-07 14:49:04.915 2 DEBUG nova.network.neutron [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:49:04 np0005473739 nova_compute[259550]: 2025-10-07 14:49:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.120 2 DEBUG nova.objects.instance [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid f8ed40fc-237a-46e5-9557-c128fe833cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.135 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.136 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Ensure instance console log exists: /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.136 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.136 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.137 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.139 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Start _get_guest_xml network_info=[{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.144 2 WARNING nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.149 2 DEBUG nova.virt.libvirt.host [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.150 2 DEBUG nova.virt.libvirt.host [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.154 2 DEBUG nova.virt.libvirt.host [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.155 2 DEBUG nova.virt.libvirt.host [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.155 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.155 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.156 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.156 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.156 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.156 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.157 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.157 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.157 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.157 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.158 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.158 2 DEBUG nova.virt.hardware [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.161 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:49:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3030086265' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.665 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.691 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:05 np0005473739 nova_compute[259550]: 2025-10-07 14:49:05.696 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:49:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/562497360' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.141 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.143 2 DEBUG nova.virt.libvirt.vif [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-332055908',display_name='tempest-TestGettingAddress-server-332055908',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-332055908',id=140,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fp0trybq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:49:01Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=f8ed40fc-237a-46e5-9557-c128fe833cea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.143 2 DEBUG nova.network.os_vif_util [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.144 2 DEBUG nova.network.os_vif_util [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.145 2 DEBUG nova.objects.instance [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid f8ed40fc-237a-46e5-9557-c128fe833cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.172 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  <uuid>f8ed40fc-237a-46e5-9557-c128fe833cea</uuid>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  <name>instance-0000008c</name>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-332055908</nova:name>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:49:05</nova:creationTime>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <nova:port uuid="b3e49a3d-2116-49cf-832f-f0126541c3aa">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.14" ipVersion="4"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fe53:473d" ipVersion="6"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <entry name="serial">f8ed40fc-237a-46e5-9557-c128fe833cea</entry>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <entry name="uuid">f8ed40fc-237a-46e5-9557-c128fe833cea</entry>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f8ed40fc-237a-46e5-9557-c128fe833cea_disk">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:53:47:3d"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <target dev="tapb3e49a3d-21"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/console.log" append="off"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:49:06 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:49:06 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:49:06 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:49:06 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.174 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Preparing to wait for external event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.175 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.175 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.175 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.176 2 DEBUG nova.virt.libvirt.vif [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-332055908',display_name='tempest-TestGettingAddress-server-332055908',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-332055908',id=140,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fp0trybq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:49:01Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=f8ed40fc-237a-46e5-9557-c128fe833cea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.177 2 DEBUG nova.network.os_vif_util [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.177 2 DEBUG nova.network.os_vif_util [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.178 2 DEBUG os_vif [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.179 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.180 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.180 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.184 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb3e49a3d-21, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.185 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb3e49a3d-21, col_values=(('external_ids', {'iface-id': 'b3e49a3d-2116-49cf-832f-f0126541c3aa', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:53:47:3d', 'vm-uuid': 'f8ed40fc-237a-46e5-9557-c128fe833cea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.187 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:06 np0005473739 NetworkManager[44949]: <info>  [1759848546.1877] manager: (tapb3e49a3d-21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/615)
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.189 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.194 2 INFO os_vif [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21')#033[00m
Oct  7 10:49:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.282 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.283 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.283 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:53:47:3d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.284 2 INFO nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Using config drive#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.304 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2606: 305 pgs: 305 active+clean; 68 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 16 KiB/s rd, 857 KiB/s wr, 25 op/s
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.716 2 INFO nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Creating config drive at /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.721 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxv15dtvt execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.870 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpxv15dtvt" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.895 2 DEBUG nova.storage.rbd_utils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:06 np0005473739 nova_compute[259550]: 2025-10-07 14:49:06.899 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.046 2 DEBUG nova.network.neutron [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updated VIF entry in instance network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.047 2 DEBUG nova.network.neutron [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updating instance_info_cache with network_info: [{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.070 2 DEBUG oslo_concurrency.lockutils [req-181e6895-26c7-4a73-8c35-cd5fa41c199c req-6678d108-bfea-4559-a228-913d17ea7b07 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.263 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.326 2 DEBUG oslo_concurrency.processutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config f8ed40fc-237a-46e5-9557-c128fe833cea_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.326 2 INFO nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Deleting local config drive /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea/disk.config because it was imported into RBD.#033[00m
Oct  7 10:49:08 np0005473739 kernel: tapb3e49a3d-21: entered promiscuous mode
Oct  7 10:49:08 np0005473739 NetworkManager[44949]: <info>  [1759848548.3777] manager: (tapb3e49a3d-21): new Tun device (/org/freedesktop/NetworkManager/Devices/616)
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:08Z|01541|binding|INFO|Claiming lport b3e49a3d-2116-49cf-832f-f0126541c3aa for this chassis.
Oct  7 10:49:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:08Z|01542|binding|INFO|b3e49a3d-2116-49cf-832f-f0126541c3aa: Claiming fa:16:3e:53:47:3d 10.100.0.14 2001:db8::f816:3eff:fe53:473d
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.396 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:47:3d 10.100.0.14 2001:db8::f816:3eff:fe53:473d'], port_security=['fa:16:3e:53:47:3d 10.100.0.14 2001:db8::f816:3eff:fe53:473d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe53:473d/64', 'neutron:device_id': 'f8ed40fc-237a-46e5-9557-c128fe833cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '131e11cc-a041-4277-a4c1-b09bd1db8909', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba61e351-8cde-4bb5-94c2-d294e74e4b35, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b3e49a3d-2116-49cf-832f-f0126541c3aa) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.397 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b3e49a3d-2116-49cf-832f-f0126541c3aa in datapath 69026eb8-a969-41bf-9300-c17871babf58 bound to our chassis#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.399 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69026eb8-a969-41bf-9300-c17871babf58#033[00m
Oct  7 10:49:08 np0005473739 systemd-machined[214580]: New machine qemu-174-instance-0000008c.
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.413 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6733876e-a006-4d0b-8dfc-3bbec74b3e0e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.414 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap69026eb8-a1 in ovnmeta-69026eb8-a969-41bf-9300-c17871babf58 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.416 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap69026eb8-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.416 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[666c85d6-c63c-4c53-b05b-6424b3a8ec40]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.416 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[414384ea-7a02-41c5-9214-0ebaae1d6704]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 systemd[1]: Started Virtual Machine qemu-174-instance-0000008c.
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.429 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[295ae2c0-c8af-43db-ab03-98d3544ae494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 systemd-udevd[411953]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.456 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[945db05b-88dc-4a58-b6e4-44cd9a213c0b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 NetworkManager[44949]: <info>  [1759848548.4632] device (tapb3e49a3d-21): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:49:08 np0005473739 NetworkManager[44949]: <info>  [1759848548.4643] device (tapb3e49a3d-21): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:08Z|01543|binding|INFO|Setting lport b3e49a3d-2116-49cf-832f-f0126541c3aa ovn-installed in OVS
Oct  7 10:49:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:08Z|01544|binding|INFO|Setting lport b3e49a3d-2116-49cf-832f-f0126541c3aa up in Southbound
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.474 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.496 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[82bb5606-2ac0-416d-b193-ce1fefc82c5b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.501 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b80d00fb-75ed-40a3-8a00-0cd81b8013db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 NetworkManager[44949]: <info>  [1759848548.5029] manager: (tap69026eb8-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/617)
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.541 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a002c675-cfd9-4c7a-afe5-a9f15e64e140]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.546 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b61b7e4e-def9-4943-a4f6-68332a279fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 NetworkManager[44949]: <info>  [1759848548.5767] device (tap69026eb8-a0): carrier: link connected
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.583 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d4c3300b-ed3c-4ffc-b472-38e16cae3ed3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.604 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fca98c6c-939c-4839-a9bb-355571d60c92]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69026eb8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:4f:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913214, 'reachable_time': 15041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 411987, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e0915857-4f40-482c-b2fd-871fd0cfe5bc]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec5:4f60'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913214, 'tstamp': 913214}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 412000, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.641 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6bda8aad-3258-4edb-8732-e4742b9ac5a8]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69026eb8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:4f:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913214, 'reachable_time': 15041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 412003, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2607: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.674 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a56c15bd-d87f-4013-aa09-45d2eb7627a7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.741 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[93084394-a690-49af-b156-93b2480912da]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.743 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69026eb8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.744 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.744 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69026eb8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:08 np0005473739 NetworkManager[44949]: <info>  [1759848548.7473] manager: (tap69026eb8-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/618)
Oct  7 10:49:08 np0005473739 kernel: tap69026eb8-a0: entered promiscuous mode
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.750 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69026eb8-a0, col_values=(('external_ids', {'iface-id': 'c9a8228f-4b9d-4f81-919d-b6781c06cebf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.751 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:08 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:08Z|01545|binding|INFO|Releasing lport c9a8228f-4b9d-4f81-919d-b6781c06cebf from this chassis (sb_readonly=0)
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.753 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/69026eb8-a969-41bf-9300-c17871babf58.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/69026eb8-a969-41bf-9300-c17871babf58.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.754 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c10fd891-a8de-4fd8-9f9c-c2a78e7d5d04]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.755 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-69026eb8-a969-41bf-9300-c17871babf58
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/69026eb8-a969-41bf-9300-c17871babf58.pid.haproxy
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 69026eb8-a969-41bf-9300-c17871babf58
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:49:08 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:08.756 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'env', 'PROCESS_TAG=haproxy-69026eb8-a969-41bf-9300-c17871babf58', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/69026eb8-a969-41bf-9300-c17871babf58.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:49:08 np0005473739 nova_compute[259550]: 2025-10-07 14:49:08.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:09 np0005473739 podman[412059]: 2025-10-07 14:49:09.110918941 +0000 UTC m=+0.026056150 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.219 2 DEBUG nova.compute.manager [req-1002c955-d2e3-43f6-a884-20488ac222be req-3864af0d-9b38-4185-8eb0-4c780bda004e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.219 2 DEBUG oslo_concurrency.lockutils [req-1002c955-d2e3-43f6-a884-20488ac222be req-3864af0d-9b38-4185-8eb0-4c780bda004e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.220 2 DEBUG oslo_concurrency.lockutils [req-1002c955-d2e3-43f6-a884-20488ac222be req-3864af0d-9b38-4185-8eb0-4c780bda004e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.220 2 DEBUG oslo_concurrency.lockutils [req-1002c955-d2e3-43f6-a884-20488ac222be req-3864af0d-9b38-4185-8eb0-4c780bda004e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.220 2 DEBUG nova.compute.manager [req-1002c955-d2e3-43f6-a884-20488ac222be req-3864af0d-9b38-4185-8eb0-4c780bda004e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Processing event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.393 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.394 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848549.3933198, f8ed40fc-237a-46e5-9557-c128fe833cea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.395 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] VM Started (Lifecycle Event)#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.399 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.403 2 INFO nova.virt.libvirt.driver [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Instance spawned successfully.#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.404 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.426 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.432 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.435 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.436 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.436 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.437 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.437 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.438 2 DEBUG nova.virt.libvirt.driver [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.462 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.462 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848549.3942733, f8ed40fc-237a-46e5-9557-c128fe833cea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.462 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.496 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.499 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848549.398714, f8ed40fc-237a-46e5-9557-c128fe833cea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.499 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.502 2 INFO nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Took 7.73 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.503 2 DEBUG nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.517 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.519 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.540 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.575 2 INFO nova.compute.manager [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Took 10.18 seconds to build instance.#033[00m
Oct  7 10:49:09 np0005473739 nova_compute[259550]: 2025-10-07 14:49:09.593 2 DEBUG oslo_concurrency.lockutils [None req-b9db62d7-2635-46ef-937c-683bfe836ed3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.266s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:09 np0005473739 podman[412059]: 2025-10-07 14:49:09.967174622 +0000 UTC m=+0.882311801 container create 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 10:49:10 np0005473739 systemd[1]: Started libpod-conmon-08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848.scope.
Oct  7 10:49:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:49:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5d721598f91972f193adad12221f9b62ad2f40fb31ec50195e4d2557053f117c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:10 np0005473739 podman[412059]: 2025-10-07 14:49:10.287178295 +0000 UTC m=+1.202315484 container init 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  7 10:49:10 np0005473739 podman[412059]: 2025-10-07 14:49:10.296022414 +0000 UTC m=+1.211159593 container start 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Oct  7 10:49:10 np0005473739 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [NOTICE]   (412078) : New worker (412080) forked
Oct  7 10:49:10 np0005473739 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [NOTICE]   (412078) : Loading success.
Oct  7 10:49:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2608: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct  7 10:49:11 np0005473739 nova_compute[259550]: 2025-10-07 14:49:11.188 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:11 np0005473739 nova_compute[259550]: 2025-10-07 14:49:11.317 2 DEBUG nova.compute.manager [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:49:11 np0005473739 nova_compute[259550]: 2025-10-07 14:49:11.318 2 DEBUG oslo_concurrency.lockutils [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:11 np0005473739 nova_compute[259550]: 2025-10-07 14:49:11.318 2 DEBUG oslo_concurrency.lockutils [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:11 np0005473739 nova_compute[259550]: 2025-10-07 14:49:11.318 2 DEBUG oslo_concurrency.lockutils [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:11 np0005473739 nova_compute[259550]: 2025-10-07 14:49:11.319 2 DEBUG nova.compute.manager [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] No waiting events found dispatching network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:49:11 np0005473739 nova_compute[259550]: 2025-10-07 14:49:11.319 2 WARNING nova.compute.manager [req-71b0e18f-6f34-4a55-be25-01d9967c301a req-b52adefc-7545-4a4d-844c-de29c7542fdb 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received unexpected event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa for instance with vm_state active and task_state None.#033[00m
Oct  7 10:49:11 np0005473739 nova_compute[259550]: 2025-10-07 14:49:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.014 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.014 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.014 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.015 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:49:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2650259811' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.516 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2609: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 680 KiB/s rd, 1.8 MiB/s wr, 57 op/s
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.715 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.716 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.859 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.860 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3489MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.860 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:12 np0005473739 nova_compute[259550]: 2025-10-07 14:49:12.860 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:13 np0005473739 nova_compute[259550]: 2025-10-07 14:49:13.264 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:13 np0005473739 nova_compute[259550]: 2025-10-07 14:49:13.647 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance f8ed40fc-237a-46e5-9557-c128fe833cea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:49:13 np0005473739 nova_compute[259550]: 2025-10-07 14:49:13.648 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:49:13 np0005473739 nova_compute[259550]: 2025-10-07 14:49:13.648 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.015 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:14Z|01546|binding|INFO|Releasing lport c9a8228f-4b9d-4f81-919d-b6781c06cebf from this chassis (sb_readonly=0)
Oct  7 10:49:14 np0005473739 NetworkManager[44949]: <info>  [1759848554.0410] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/619)
Oct  7 10:49:14 np0005473739 NetworkManager[44949]: <info>  [1759848554.0422] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/620)
Oct  7 10:49:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:14Z|01547|binding|INFO|Releasing lport c9a8228f-4b9d-4f81-919d-b6781c06cebf from this chassis (sb_readonly=0)
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.075 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.448 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.449 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.465 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.474 2 DEBUG nova.compute.manager [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.474 2 DEBUG nova.compute.manager [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing instance network info cache due to event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.474 2 DEBUG oslo_concurrency.lockutils [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.475 2 DEBUG oslo_concurrency.lockutils [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.475 2 DEBUG nova.network.neutron [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.489 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 10:49:14 np0005473739 nova_compute[259550]: 2025-10-07 14:49:14.526 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2610: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 1.8 MiB/s wr, 92 op/s
Oct  7 10:49:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:49:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/276797032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:49:15 np0005473739 nova_compute[259550]: 2025-10-07 14:49:15.015 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:15 np0005473739 nova_compute[259550]: 2025-10-07 14:49:15.021 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:49:15 np0005473739 nova_compute[259550]: 2025-10-07 14:49:15.073 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:49:15 np0005473739 nova_compute[259550]: 2025-10-07 14:49:15.106 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:49:15 np0005473739 nova_compute[259550]: 2025-10-07 14:49:15.107 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:16 np0005473739 podman[412136]: 2025-10-07 14:49:16.074334517 +0000 UTC m=+0.058868560 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:49:16 np0005473739 podman[412135]: 2025-10-07 14:49:16.083874234 +0000 UTC m=+0.071787827 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:49:16 np0005473739 nova_compute[259550]: 2025-10-07 14:49:16.107 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:49:16 np0005473739 nova_compute[259550]: 2025-10-07 14:49:16.108 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:49:16 np0005473739 nova_compute[259550]: 2025-10-07 14:49:16.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:16 np0005473739 nova_compute[259550]: 2025-10-07 14:49:16.211 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:49:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2611: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.7 MiB/s wr, 99 op/s
Oct  7 10:49:17 np0005473739 nova_compute[259550]: 2025-10-07 14:49:17.169 2 DEBUG nova.network.neutron [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updated VIF entry in instance network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:49:17 np0005473739 nova_compute[259550]: 2025-10-07 14:49:17.170 2 DEBUG nova.network.neutron [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updating instance_info_cache with network_info: [{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:49:17 np0005473739 nova_compute[259550]: 2025-10-07 14:49:17.193 2 DEBUG oslo_concurrency.lockutils [req-6eba4f43-68d4-4ab7-b78a-a9c76167848a req-528fee86-4715-417c-a3e0-28901f4b1f19 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:49:17 np0005473739 nova_compute[259550]: 2025-10-07 14:49:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:49:18 np0005473739 nova_compute[259550]: 2025-10-07 14:49:18.266 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2612: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 970 KiB/s wr, 75 op/s
Oct  7 10:49:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2613: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 72 op/s
Oct  7 10:49:21 np0005473739 nova_compute[259550]: 2025-10-07 14:49:21.195 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2614: 305 pgs: 305 active+clean; 88 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 47 op/s
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:49:22
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.meta', 'default.rgw.control', 'backups', 'default.rgw.log', 'images', 'vms', '.mgr', 'cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'volumes']
Oct  7 10:49:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:49:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:49:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:49:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:49:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:49:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:49:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:23Z|00186|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:53:47:3d 10.100.0.14
Oct  7 10:49:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:49:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:49:23 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:23Z|00187|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:53:47:3d 10.100.0.14
Oct  7 10:49:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:49:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:49:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:49:23 np0005473739 nova_compute[259550]: 2025-10-07 14:49:23.268 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2615: 305 pgs: 305 active+clean; 113 MiB data, 981 MiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 1.3 MiB/s wr, 69 op/s
Oct  7 10:49:26 np0005473739 nova_compute[259550]: 2025-10-07 14:49:26.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2616: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 627 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:49:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b51d6463-6aea-429a-b2bb-de1738e2dd14 does not exist
Oct  7 10:49:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b54900a6-be00-43b2-ba61-68eee045047e does not exist
Oct  7 10:49:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 73604d0f-7308-4f33-929c-18770bf0092b does not exist
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:49:28 np0005473739 nova_compute[259550]: 2025-10-07 14:49:28.270 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:28 np0005473739 podman[412330]: 2025-10-07 14:49:28.321299502 +0000 UTC m=+0.072751309 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:49:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:49:28 np0005473739 podman[412331]: 2025-10-07 14:49:28.356670242 +0000 UTC m=+0.102300721 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  7 10:49:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2617: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:49:28 np0005473739 podman[412491]: 2025-10-07 14:49:28.784562728 +0000 UTC m=+0.041210101 container create 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 10:49:28 np0005473739 systemd[1]: Started libpod-conmon-31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41.scope.
Oct  7 10:49:28 np0005473739 podman[412491]: 2025-10-07 14:49:28.763541428 +0000 UTC m=+0.020188831 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:49:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:49:28 np0005473739 podman[412491]: 2025-10-07 14:49:28.897946451 +0000 UTC m=+0.154593824 container init 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 10:49:28 np0005473739 podman[412491]: 2025-10-07 14:49:28.906286459 +0000 UTC m=+0.162933832 container start 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:49:28 np0005473739 podman[412491]: 2025-10-07 14:49:28.910550981 +0000 UTC m=+0.167198354 container attach 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 10:49:28 np0005473739 infallible_brahmagupta[412508]: 167 167
Oct  7 10:49:28 np0005473739 systemd[1]: libpod-31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41.scope: Deactivated successfully.
Oct  7 10:49:28 np0005473739 conmon[412508]: conmon 31b8eed3a41daaaa1fb4 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41.scope/container/memory.events
Oct  7 10:49:28 np0005473739 podman[412491]: 2025-10-07 14:49:28.915658911 +0000 UTC m=+0.172306284 container died 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:49:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7e5233ace8eb59972be32f50ae3db0e647ac9cdbca680df0f9e10d55302defb9-merged.mount: Deactivated successfully.
Oct  7 10:49:28 np0005473739 podman[412491]: 2025-10-07 14:49:28.957816644 +0000 UTC m=+0.214464077 container remove 31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_brahmagupta, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:49:28 np0005473739 systemd[1]: libpod-conmon-31b8eed3a41daaaa1fb42f490d2488ea795ac8add286d377e63d65fcc8ff7a41.scope: Deactivated successfully.
Oct  7 10:49:29 np0005473739 podman[412532]: 2025-10-07 14:49:29.133438336 +0000 UTC m=+0.042026660 container create cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:49:29 np0005473739 systemd[1]: Started libpod-conmon-cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d.scope.
Oct  7 10:49:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:49:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:29 np0005473739 podman[412532]: 2025-10-07 14:49:29.115857067 +0000 UTC m=+0.024445431 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:49:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:29 np0005473739 podman[412532]: 2025-10-07 14:49:29.22705931 +0000 UTC m=+0.135647654 container init cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:49:29 np0005473739 podman[412532]: 2025-10-07 14:49:29.233254367 +0000 UTC m=+0.141842691 container start cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:49:29 np0005473739 podman[412532]: 2025-10-07 14:49:29.238608784 +0000 UTC m=+0.147197138 container attach cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 10:49:30 np0005473739 silly_elgamal[412548]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:49:30 np0005473739 silly_elgamal[412548]: --> relative data size: 1.0
Oct  7 10:49:30 np0005473739 silly_elgamal[412548]: --> All data devices are unavailable
Oct  7 10:49:30 np0005473739 systemd[1]: libpod-cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d.scope: Deactivated successfully.
Oct  7 10:49:30 np0005473739 podman[412532]: 2025-10-07 14:49:30.389108546 +0000 UTC m=+1.297696880 container died cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:49:30 np0005473739 systemd[1]: libpod-cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d.scope: Consumed 1.098s CPU time.
Oct  7 10:49:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2ba27fe8ee89b7843e35688e480b1422abb8077aa5d4f575b42478ed6fef31f4-merged.mount: Deactivated successfully.
Oct  7 10:49:30 np0005473739 podman[412532]: 2025-10-07 14:49:30.466409702 +0000 UTC m=+1.374998026 container remove cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=silly_elgamal, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:49:30 np0005473739 systemd[1]: libpod-conmon-cd4376d17dd0a1b95db853bdf98f624ef945f3a2e650849682f9dbea9385ae0d.scope: Deactivated successfully.
Oct  7 10:49:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2618: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 351 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:49:31 np0005473739 podman[412729]: 2025-10-07 14:49:31.102482633 +0000 UTC m=+0.048614545 container create 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:49:31 np0005473739 podman[412729]: 2025-10-07 14:49:31.077222823 +0000 UTC m=+0.023354745 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:49:31 np0005473739 systemd[1]: Started libpod-conmon-041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca.scope.
Oct  7 10:49:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:49:31 np0005473739 nova_compute[259550]: 2025-10-07 14:49:31.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:31 np0005473739 podman[412729]: 2025-10-07 14:49:31.268588319 +0000 UTC m=+0.214720251 container init 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:49:31 np0005473739 podman[412729]: 2025-10-07 14:49:31.276059436 +0000 UTC m=+0.222191328 container start 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:49:31 np0005473739 vigorous_hellman[412746]: 167 167
Oct  7 10:49:31 np0005473739 systemd[1]: libpod-041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca.scope: Deactivated successfully.
Oct  7 10:49:31 np0005473739 podman[412729]: 2025-10-07 14:49:31.285520111 +0000 UTC m=+0.231652033 container attach 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 10:49:31 np0005473739 podman[412729]: 2025-10-07 14:49:31.285872209 +0000 UTC m=+0.232004111 container died 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:49:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5ccce8dfb7b84a1ef09a73dfd8e1c977058d619558a90fae3cb5c886c08802f3-merged.mount: Deactivated successfully.
Oct  7 10:49:31 np0005473739 podman[412729]: 2025-10-07 14:49:31.525570344 +0000 UTC m=+0.471702246 container remove 041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_hellman, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct  7 10:49:31 np0005473739 systemd[1]: libpod-conmon-041d6ad8efbc052610b6026e27aa36e55043064c20a012fcc3906076ecf32fca.scope: Deactivated successfully.
Oct  7 10:49:31 np0005473739 podman[412772]: 2025-10-07 14:49:31.727298267 +0000 UTC m=+0.072684408 container create 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:49:31 np0005473739 podman[412772]: 2025-10-07 14:49:31.676665893 +0000 UTC m=+0.022052024 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:49:31 np0005473739 systemd[1]: Started libpod-conmon-066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15.scope.
Oct  7 10:49:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:49:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e647979171da4c7639b41f46b7629fcf9bf0bc7e5059d03e8b399baa01a22a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e647979171da4c7639b41f46b7629fcf9bf0bc7e5059d03e8b399baa01a22a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e647979171da4c7639b41f46b7629fcf9bf0bc7e5059d03e8b399baa01a22a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e3e647979171da4c7639b41f46b7629fcf9bf0bc7e5059d03e8b399baa01a22a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:32 np0005473739 podman[412772]: 2025-10-07 14:49:32.016467216 +0000 UTC m=+0.361853357 container init 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:49:32 np0005473739 podman[412772]: 2025-10-07 14:49:32.023907772 +0000 UTC m=+0.369293883 container start 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 10:49:32 np0005473739 podman[412772]: 2025-10-07 14:49:32.130217478 +0000 UTC m=+0.475603589 container attach 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2619: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 340 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct  7 10:49:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:49:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/562905878' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:49:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:49:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/562905878' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007587643257146578 of space, bias 1.0, pg target 0.22762929771439736 quantized to 32 (current 32)
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:49:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:49:32 np0005473739 sad_wiles[412788]: {
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:    "0": [
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:        {
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "devices": [
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "/dev/loop3"
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            ],
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_name": "ceph_lv0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_size": "21470642176",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "name": "ceph_lv0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "tags": {
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.cluster_name": "ceph",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.crush_device_class": "",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.encrypted": "0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.osd_id": "0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.type": "block",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.vdo": "0"
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            },
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "type": "block",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "vg_name": "ceph_vg0"
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:        }
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:    ],
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:    "1": [
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:        {
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "devices": [
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "/dev/loop4"
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            ],
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_name": "ceph_lv1",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_size": "21470642176",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "name": "ceph_lv1",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "tags": {
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.cluster_name": "ceph",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.crush_device_class": "",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.encrypted": "0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.osd_id": "1",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.type": "block",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.vdo": "0"
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            },
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "type": "block",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "vg_name": "ceph_vg1"
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:        }
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:    ],
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:    "2": [
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:        {
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "devices": [
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "/dev/loop5"
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            ],
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_name": "ceph_lv2",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_size": "21470642176",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "name": "ceph_lv2",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "tags": {
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.cluster_name": "ceph",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.crush_device_class": "",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.encrypted": "0",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.osd_id": "2",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.type": "block",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:                "ceph.vdo": "0"
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            },
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "type": "block",
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:            "vg_name": "ceph_vg2"
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:        }
Oct  7 10:49:32 np0005473739 sad_wiles[412788]:    ]
Oct  7 10:49:32 np0005473739 sad_wiles[412788]: }
Oct  7 10:49:32 np0005473739 systemd[1]: libpod-066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15.scope: Deactivated successfully.
Oct  7 10:49:32 np0005473739 podman[412772]: 2025-10-07 14:49:32.792877051 +0000 UTC m=+1.138263162 container died 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:49:32 np0005473739 nova_compute[259550]: 2025-10-07 14:49:32.980 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:32 np0005473739 nova_compute[259550]: 2025-10-07 14:49:32.982 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:32 np0005473739 nova_compute[259550]: 2025-10-07 14:49:32.997 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:49:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e3e647979171da4c7639b41f46b7629fcf9bf0bc7e5059d03e8b399baa01a22a-merged.mount: Deactivated successfully.
Oct  7 10:49:33 np0005473739 podman[412772]: 2025-10-07 14:49:33.035889754 +0000 UTC m=+1.381275865 container remove 066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_wiles, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:49:33 np0005473739 systemd[1]: libpod-conmon-066ece02b6fc141a8cd31314b55e0a854b0705247e15e70f7190a780f4ef0b15.scope: Deactivated successfully.
Oct  7 10:49:33 np0005473739 nova_compute[259550]: 2025-10-07 14:49:33.082 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:33 np0005473739 nova_compute[259550]: 2025-10-07 14:49:33.083 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:33 np0005473739 nova_compute[259550]: 2025-10-07 14:49:33.094 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:49:33 np0005473739 nova_compute[259550]: 2025-10-07 14:49:33.095 2 INFO nova.compute.claims [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:49:33 np0005473739 nova_compute[259550]: 2025-10-07 14:49:33.238 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:33 np0005473739 nova_compute[259550]: 2025-10-07 14:49:33.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:33 np0005473739 podman[412972]: 2025-10-07 14:49:33.682455084 +0000 UTC m=+0.043062214 container create 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:49:33 np0005473739 systemd[1]: Started libpod-conmon-120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5.scope.
Oct  7 10:49:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:49:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3931066364' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:49:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:49:33 np0005473739 nova_compute[259550]: 2025-10-07 14:49:33.754 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.516s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:33 np0005473739 podman[412972]: 2025-10-07 14:49:33.663392821 +0000 UTC m=+0.023999981 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:49:33 np0005473739 podman[412972]: 2025-10-07 14:49:33.763978001 +0000 UTC m=+0.124585151 container init 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 10:49:33 np0005473739 nova_compute[259550]: 2025-10-07 14:49:33.763 2 DEBUG nova.compute.provider_tree [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:49:33 np0005473739 podman[412972]: 2025-10-07 14:49:33.771823157 +0000 UTC m=+0.132430287 container start 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:49:33 np0005473739 podman[412972]: 2025-10-07 14:49:33.776463397 +0000 UTC m=+0.137070567 container attach 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 10:49:33 np0005473739 upbeat_cerf[412989]: 167 167
Oct  7 10:49:33 np0005473739 systemd[1]: libpod-120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5.scope: Deactivated successfully.
Oct  7 10:49:33 np0005473739 podman[412972]: 2025-10-07 14:49:33.778399534 +0000 UTC m=+0.139006684 container died 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  7 10:49:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0e87213fe89ff7cc3f43911b8920999c37256905e3e42c357446da86dfaaf3d1-merged.mount: Deactivated successfully.
Oct  7 10:49:33 np0005473739 podman[412972]: 2025-10-07 14:49:33.83170864 +0000 UTC m=+0.192315770 container remove 120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_cerf, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:49:33 np0005473739 systemd[1]: libpod-conmon-120d2dd6b147dbc2e9952e76ae73c775f0764c2af21d54ec198761114ccb6fa5.scope: Deactivated successfully.
Oct  7 10:49:33 np0005473739 nova_compute[259550]: 2025-10-07 14:49:33.984 2 DEBUG nova.scheduler.client.report [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.019 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.936s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.020 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:49:34 np0005473739 podman[413014]: 2025-10-07 14:49:34.025912713 +0000 UTC m=+0.043718809 container create 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.067 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.069 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:49:34 np0005473739 systemd[1]: Started libpod-conmon-2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8.scope.
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.093 2 INFO nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:49:34 np0005473739 podman[413014]: 2025-10-07 14:49:34.009021812 +0000 UTC m=+0.026827938 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:49:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:49:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f4a91ad3f1af05aac1a62c987dd9cfdd0ebad2a4ae13a91b61e3139d46478b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.114 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:49:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f4a91ad3f1af05aac1a62c987dd9cfdd0ebad2a4ae13a91b61e3139d46478b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f4a91ad3f1af05aac1a62c987dd9cfdd0ebad2a4ae13a91b61e3139d46478b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b4f4a91ad3f1af05aac1a62c987dd9cfdd0ebad2a4ae13a91b61e3139d46478b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:49:34 np0005473739 podman[413014]: 2025-10-07 14:49:34.135984268 +0000 UTC m=+0.153790394 container init 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 10:49:34 np0005473739 podman[413014]: 2025-10-07 14:49:34.147798549 +0000 UTC m=+0.165604645 container start 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:49:34 np0005473739 podman[413014]: 2025-10-07 14:49:34.152110361 +0000 UTC m=+0.169916487 container attach 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.210 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.212 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.212 2 INFO nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Creating image(s)#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.244 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.270 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.294 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.299 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.386 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.388 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.388 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.389 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.415 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.419 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2620: 305 pgs: 305 active+clean; 121 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 341 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.837 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.418s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:34 np0005473739 nova_compute[259550]: 2025-10-07 14:49:34.908 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] resizing rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:49:35 np0005473739 nova_compute[259550]: 2025-10-07 14:49:35.021 2 DEBUG nova.objects.instance [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'migration_context' on Instance uuid 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:49:35 np0005473739 nova_compute[259550]: 2025-10-07 14:49:35.028 2 DEBUG nova.policy [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'd385c9b3a9ee47cdb1425cac9b13ed1a', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '574d256d67124b08812e14c4c1d87ace', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:49:35 np0005473739 nova_compute[259550]: 2025-10-07 14:49:35.053 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:49:35 np0005473739 nova_compute[259550]: 2025-10-07 14:49:35.054 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Ensure instance console log exists: /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:49:35 np0005473739 nova_compute[259550]: 2025-10-07 14:49:35.054 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:35 np0005473739 nova_compute[259550]: 2025-10-07 14:49:35.055 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:35 np0005473739 nova_compute[259550]: 2025-10-07 14:49:35.055 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:35 np0005473739 magical_shtern[413029]: {
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "osd_id": 2,
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "type": "bluestore"
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:    },
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "osd_id": 1,
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "type": "bluestore"
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:    },
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "osd_id": 0,
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:        "type": "bluestore"
Oct  7 10:49:35 np0005473739 magical_shtern[413029]:    }
Oct  7 10:49:35 np0005473739 magical_shtern[413029]: }
Oct  7 10:49:35 np0005473739 systemd[1]: libpod-2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8.scope: Deactivated successfully.
Oct  7 10:49:35 np0005473739 systemd[1]: libpod-2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8.scope: Consumed 1.118s CPU time.
Oct  7 10:49:35 np0005473739 podman[413228]: 2025-10-07 14:49:35.3277274 +0000 UTC m=+0.027268868 container died 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:49:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b4f4a91ad3f1af05aac1a62c987dd9cfdd0ebad2a4ae13a91b61e3139d46478b-merged.mount: Deactivated successfully.
Oct  7 10:49:35 np0005473739 podman[413228]: 2025-10-07 14:49:35.392672742 +0000 UTC m=+0.092214170 container remove 2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_shtern, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:49:35 np0005473739 systemd[1]: libpod-conmon-2bf64008cad87eb1635a3d585e7c09f5a3a18a77ce886455e687456a257814c8.scope: Deactivated successfully.
Oct  7 10:49:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:49:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:49:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:49:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:49:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e5d2f3a-73b8-401e-ac23-e21abf915910 does not exist
Oct  7 10:49:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4eb57069-362a-4acd-b653-7a8adb223785 does not exist
Oct  7 10:49:36 np0005473739 nova_compute[259550]: 2025-10-07 14:49:36.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:49:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:49:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:36.585 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=49, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=48) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:49:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:36.586 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:49:36 np0005473739 nova_compute[259550]: 2025-10-07 14:49:36.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:36 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:36.587 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '49'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2621: 305 pgs: 305 active+clean; 132 MiB data, 992 MiB used, 59 GiB / 60 GiB avail; 208 KiB/s rd, 940 KiB/s wr, 38 op/s
Oct  7 10:49:36 np0005473739 nova_compute[259550]: 2025-10-07 14:49:36.671 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Successfully created port: b6c3792a-487a-43d7-969c-5f2b969e1390 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:49:38 np0005473739 nova_compute[259550]: 2025-10-07 14:49:38.278 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2622: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:49:40 np0005473739 nova_compute[259550]: 2025-10-07 14:49:40.047 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Successfully updated port: b6c3792a-487a-43d7-969c-5f2b969e1390 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:49:40 np0005473739 nova_compute[259550]: 2025-10-07 14:49:40.076 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:49:40 np0005473739 nova_compute[259550]: 2025-10-07 14:49:40.076 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquired lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:49:40 np0005473739 nova_compute[259550]: 2025-10-07 14:49:40.077 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:49:40 np0005473739 nova_compute[259550]: 2025-10-07 14:49:40.161 2 DEBUG nova.compute.manager [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:49:40 np0005473739 nova_compute[259550]: 2025-10-07 14:49:40.162 2 DEBUG nova.compute.manager [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing instance network info cache due to event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:49:40 np0005473739 nova_compute[259550]: 2025-10-07 14:49:40.162 2 DEBUG oslo_concurrency.lockutils [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:49:40 np0005473739 nova_compute[259550]: 2025-10-07 14:49:40.252 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:49:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2623: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:49:41 np0005473739 nova_compute[259550]: 2025-10-07 14:49:41.253 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.633 2 DEBUG nova.network.neutron [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updating instance_info_cache with network_info: [{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.660 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Releasing lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.661 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Instance network_info: |[{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.662 2 DEBUG oslo_concurrency.lockutils [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.662 2 DEBUG nova.network.neutron [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.665 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Start _get_guest_xml network_info=[{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.673 2 WARNING nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:49:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2624: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.683 2 DEBUG nova.virt.libvirt.host [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.684 2 DEBUG nova.virt.libvirt.host [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.689 2 DEBUG nova.virt.libvirt.host [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.690 2 DEBUG nova.virt.libvirt.host [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.690 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.691 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.691 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.691 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.692 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.692 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.692 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.692 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.692 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.693 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.693 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.693 2 DEBUG nova.virt.hardware [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:49:42 np0005473739 nova_compute[259550]: 2025-10-07 14:49:42.696 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:49:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/289082590' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.163 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.186 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.191 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:49:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/46692623' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.659 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.661 2 DEBUG nova.virt.libvirt.vif [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1636457952',display_name='tempest-TestGettingAddress-server-1636457952',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1636457952',id=141,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-rg0e899h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:49:34Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6d966826-1c6c-4205-9ee0-3a69c9e1c2d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", 
"dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.662 2 DEBUG nova.network.os_vif_util [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.663 2 DEBUG nova.network.os_vif_util [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.664 2 DEBUG nova.objects.instance [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'pci_devices' on Instance uuid 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.682 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  <uuid>6d966826-1c6c-4205-9ee0-3a69c9e1c2d5</uuid>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  <name>instance-0000008d</name>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestGettingAddress-server-1636457952</nova:name>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:49:42</nova:creationTime>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <nova:user uuid="d385c9b3a9ee47cdb1425cac9b13ed1a">tempest-TestGettingAddress-9217867-project-member</nova:user>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <nova:project uuid="574d256d67124b08812e14c4c1d87ace">tempest-TestGettingAddress-9217867</nova:project>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <nova:port uuid="b6c3792a-487a-43d7-969c-5f2b969e1390">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="2001:db8::f816:3eff:fee2:42c6" ipVersion="6"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <entry name="serial">6d966826-1c6c-4205-9ee0-3a69c9e1c2d5</entry>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <entry name="uuid">6d966826-1c6c-4205-9ee0-3a69c9e1c2d5</entry>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:e2:42:c6"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <target dev="tapb6c3792a-48"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/console.log" append="off"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:49:43 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:49:43 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:49:43 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:49:43 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.684 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Preparing to wait for external event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.684 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.684 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.685 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.686 2 DEBUG nova.virt.libvirt.vif [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1636457952',display_name='tempest-TestGettingAddress-server-1636457952',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1636457952',id=141,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-rg0e899h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:49:34Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6d966826-1c6c-4205-9ee0-3a69c9e1c2d5,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": 
"2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.686 2 DEBUG nova.network.os_vif_util [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.687 2 DEBUG nova.network.os_vif_util [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.687 2 DEBUG os_vif [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.688 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.689 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb6c3792a-48, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.695 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb6c3792a-48, col_values=(('external_ids', {'iface-id': 'b6c3792a-487a-43d7-969c-5f2b969e1390', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e2:42:c6', 'vm-uuid': '6d966826-1c6c-4205-9ee0-3a69c9e1c2d5'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:43 np0005473739 NetworkManager[44949]: <info>  [1759848583.6990] manager: (tapb6c3792a-48): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/621)
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.703 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.710 2 INFO os_vif [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48')#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.766 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.766 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.767 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] No VIF found with MAC fa:16:3e:e2:42:c6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.767 2 INFO nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Using config drive#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.787 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.911 2 DEBUG nova.network.neutron [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updated VIF entry in instance network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.912 2 DEBUG nova.network.neutron [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updating instance_info_cache with network_info: [{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:49:43 np0005473739 nova_compute[259550]: 2025-10-07 14:49:43.926 2 DEBUG oslo_concurrency.lockutils [req-7183ada6-f127-44e2-be89-39e684553c21 req-3a3f6344-ecb6-4ec7-b2c3-0d2303d8aa81 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:49:44 np0005473739 nova_compute[259550]: 2025-10-07 14:49:44.572 2 INFO nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Creating config drive at /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config#033[00m
Oct  7 10:49:44 np0005473739 nova_compute[259550]: 2025-10-07 14:49:44.578 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar2rfiix execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2625: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:49:44 np0005473739 nova_compute[259550]: 2025-10-07 14:49:44.727 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar2rfiix" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:44 np0005473739 nova_compute[259550]: 2025-10-07 14:49:44.755 2 DEBUG nova.storage.rbd_utils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] rbd image 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:49:44 np0005473739 nova_compute[259550]: 2025-10-07 14:49:44.759 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.095 2 DEBUG oslo_concurrency.processutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.336s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.096 2 INFO nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Deleting local config drive /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5/disk.config because it was imported into RBD.#033[00m
Oct  7 10:49:46 np0005473739 kernel: tapb6c3792a-48: entered promiscuous mode
Oct  7 10:49:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:46Z|01548|binding|INFO|Claiming lport b6c3792a-487a-43d7-969c-5f2b969e1390 for this chassis.
Oct  7 10:49:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:46Z|01549|binding|INFO|b6c3792a-487a-43d7-969c-5f2b969e1390: Claiming fa:16:3e:e2:42:c6 10.100.0.8 2001:db8::f816:3eff:fee2:42c6
Oct  7 10:49:46 np0005473739 NetworkManager[44949]: <info>  [1759848586.1657] manager: (tapb6c3792a-48): new Tun device (/org/freedesktop/NetworkManager/Devices/622)
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.164 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.172 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:42:c6 10.100.0.8 2001:db8::f816:3eff:fee2:42c6'], port_security=['fa:16:3e:e2:42:c6 10.100.0.8 2001:db8::f816:3eff:fee2:42c6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fee2:42c6/64', 'neutron:device_id': '6d966826-1c6c-4205-9ee0-3a69c9e1c2d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '2', 'neutron:security_group_ids': '131e11cc-a041-4277-a4c1-b09bd1db8909', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba61e351-8cde-4bb5-94c2-d294e74e4b35, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b6c3792a-487a-43d7-969c-5f2b969e1390) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.173 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b6c3792a-487a-43d7-969c-5f2b969e1390 in datapath 69026eb8-a969-41bf-9300-c17871babf58 bound to our chassis#033[00m
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.174 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69026eb8-a969-41bf-9300-c17871babf58#033[00m
Oct  7 10:49:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:46Z|01550|binding|INFO|Setting lport b6c3792a-487a-43d7-969c-5f2b969e1390 up in Southbound
Oct  7 10:49:46 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:46Z|01551|binding|INFO|Setting lport b6c3792a-487a-43d7-969c-5f2b969e1390 ovn-installed in OVS
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.191 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.194 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0fc3d5-c4f1-4a7a-a033-59307ed587ea]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:46 np0005473739 systemd-machined[214580]: New machine qemu-175-instance-0000008d.
Oct  7 10:49:46 np0005473739 systemd[1]: Started Virtual Machine qemu-175-instance-0000008d.
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.229 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c521452a-5cf6-4795-acaf-51a1d4b164b3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.232 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bdc6b008-753a-40a9-99bc-4b988d749e91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:46 np0005473739 systemd-udevd[413451]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:49:46 np0005473739 NetworkManager[44949]: <info>  [1759848586.2551] device (tapb6c3792a-48): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:49:46 np0005473739 NetworkManager[44949]: <info>  [1759848586.2561] device (tapb6c3792a-48): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.270 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9597fe75-1a98-42c5-bced-559a47958474]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:46 np0005473739 podman[413425]: 2025-10-07 14:49:46.275392459 +0000 UTC m=+0.075005464 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible)
Oct  7 10:49:46 np0005473739 podman[413423]: 2025-10-07 14:49:46.277710234 +0000 UTC m=+0.069679236 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.295 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d7a7161e-ce15-42e5-bf20-4e944735063a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69026eb8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:4f:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 22, 'tx_packets': 5, 'rx_bytes': 1956, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913214, 'reachable_time': 15041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 20, 'inoctets': 1592, 'indelivers': 4, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 20, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1592, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 20, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 4, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413474, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.312 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b96c7d18-6e8d-4e5e-877e-4abb0bb2dc4d]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap69026eb8-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913226, 'tstamp': 913226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413477, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69026eb8-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913229, 'tstamp': 913229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413477, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.314 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69026eb8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.317 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69026eb8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.318 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.318 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69026eb8-a0, col_values=(('external_ids', {'iface-id': 'c9a8228f-4b9d-4f81-919d-b6781c06cebf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:49:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:49:46.318 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.413 2 DEBUG nova.compute.manager [req-ff0b314d-a592-4270-988a-c74054e53835 req-97b920e0-b824-4ffd-b764-e43d6f9af808 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.413 2 DEBUG oslo_concurrency.lockutils [req-ff0b314d-a592-4270-988a-c74054e53835 req-97b920e0-b824-4ffd-b764-e43d6f9af808 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.414 2 DEBUG oslo_concurrency.lockutils [req-ff0b314d-a592-4270-988a-c74054e53835 req-97b920e0-b824-4ffd-b764-e43d6f9af808 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.414 2 DEBUG oslo_concurrency.lockutils [req-ff0b314d-a592-4270-988a-c74054e53835 req-97b920e0-b824-4ffd-b764-e43d6f9af808 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:46 np0005473739 nova_compute[259550]: 2025-10-07 14:49:46.414 2 DEBUG nova.compute.manager [req-ff0b314d-a592-4270-988a-c74054e53835 req-97b920e0-b824-4ffd-b764-e43d6f9af808 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Processing event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:49:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2626: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 29 op/s
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.256 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848587.2561026, 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.256 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] VM Started (Lifecycle Event)#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.259 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.262 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.265 2 INFO nova.virt.libvirt.driver [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Instance spawned successfully.#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.266 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.277 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.280 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.337 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.337 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848587.256277, 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.337 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.344 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.344 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.345 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.345 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.346 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.346 2 DEBUG nova.virt.libvirt.driver [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.464 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.467 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848587.2611039, 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.467 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.501 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.506 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.519 2 INFO nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Took 13.31 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.519 2 DEBUG nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.532 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.601 2 INFO nova.compute.manager [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Took 14.55 seconds to build instance.#033[00m
Oct  7 10:49:47 np0005473739 nova_compute[259550]: 2025-10-07 14:49:47.621 2 DEBUG oslo_concurrency.lockutils [None req-8f268ba4-5b2b-4b64-a4ed-5edcee1a5e51 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:48 np0005473739 nova_compute[259550]: 2025-10-07 14:49:48.282 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:48 np0005473739 nova_compute[259550]: 2025-10-07 14:49:48.495 2 DEBUG nova.compute.manager [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:49:48 np0005473739 nova_compute[259550]: 2025-10-07 14:49:48.495 2 DEBUG oslo_concurrency.lockutils [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:49:48 np0005473739 nova_compute[259550]: 2025-10-07 14:49:48.495 2 DEBUG oslo_concurrency.lockutils [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:49:48 np0005473739 nova_compute[259550]: 2025-10-07 14:49:48.495 2 DEBUG oslo_concurrency.lockutils [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:49:48 np0005473739 nova_compute[259550]: 2025-10-07 14:49:48.496 2 DEBUG nova.compute.manager [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] No waiting events found dispatching network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:49:48 np0005473739 nova_compute[259550]: 2025-10-07 14:49:48.496 2 WARNING nova.compute.manager [req-3f762281-0374-4008-ae03-1dea144f190a req-27bdb0eb-08fe-4092-91ec-32f8bf7eedf1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received unexpected event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:49:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2627: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.7 MiB/s wr, 94 op/s
Oct  7 10:49:48 np0005473739 nova_compute[259550]: 2025-10-07 14:49:48.698 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2628: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:49:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2629: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:49:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:49:53 np0005473739 nova_compute[259550]: 2025-10-07 14:49:53.157 2 DEBUG nova.compute.manager [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:49:53 np0005473739 nova_compute[259550]: 2025-10-07 14:49:53.157 2 DEBUG nova.compute.manager [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing instance network info cache due to event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:49:53 np0005473739 nova_compute[259550]: 2025-10-07 14:49:53.158 2 DEBUG oslo_concurrency.lockutils [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:49:53 np0005473739 nova_compute[259550]: 2025-10-07 14:49:53.158 2 DEBUG oslo_concurrency.lockutils [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:49:53 np0005473739 nova_compute[259550]: 2025-10-07 14:49:53.158 2 DEBUG nova.network.neutron [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:49:53 np0005473739 nova_compute[259550]: 2025-10-07 14:49:53.285 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:53 np0005473739 nova_compute[259550]: 2025-10-07 14:49:53.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:54 np0005473739 nova_compute[259550]: 2025-10-07 14:49:54.542 2 DEBUG nova.network.neutron [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updated VIF entry in instance network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:49:54 np0005473739 nova_compute[259550]: 2025-10-07 14:49:54.543 2 DEBUG nova.network.neutron [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updating instance_info_cache with network_info: [{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:49:54 np0005473739 nova_compute[259550]: 2025-10-07 14:49:54.577 2 DEBUG oslo_concurrency.lockutils [req-2cd6db21-befe-44b2-a514-e77ed4dff719 req-5ba7c562-e949-4b6b-ae4e-c39c03e85770 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:49:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2630: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 73 op/s
Oct  7 10:49:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:49:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2631: 305 pgs: 305 active+clean; 167 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 15 KiB/s wr, 74 op/s
Oct  7 10:49:58 np0005473739 nova_compute[259550]: 2025-10-07 14:49:58.286 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:58 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct  7 10:49:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2632: 305 pgs: 305 active+clean; 174 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 741 KiB/s wr, 84 op/s
Oct  7 10:49:58 np0005473739 nova_compute[259550]: 2025-10-07 14:49:58.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:49:59 np0005473739 podman[413521]: 2025-10-07 14:49:59.093134806 +0000 UTC m=+0.081853026 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:49:59 np0005473739 podman[413522]: 2025-10-07 14:49:59.105021848 +0000 UTC m=+0.088847342 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:49:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:59Z|00188|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e2:42:c6 10.100.0.8
Oct  7 10:49:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:49:59Z|00189|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e2:42:c6 10.100.0.8
Oct  7 10:49:59 np0005473739 nova_compute[259550]: 2025-10-07 14:49:59.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:50:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:00.084 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:50:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:00.085 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:50:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:00.086 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:50:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2633: 305 pgs: 305 active+clean; 183 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 149 KiB/s rd, 1.3 MiB/s wr, 25 op/s
Oct  7 10:50:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:01 np0005473739 nova_compute[259550]: 2025-10-07 14:50:01.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:50:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2634: 305 pgs: 305 active+clean; 183 MiB data, 1018 MiB used, 59 GiB / 60 GiB avail; 40 KiB/s rd, 1.3 MiB/s wr, 19 op/s
Oct  7 10:50:03 np0005473739 nova_compute[259550]: 2025-10-07 14:50:03.289 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:03 np0005473739 nova_compute[259550]: 2025-10-07 14:50:03.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2635: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:50:04 np0005473739 nova_compute[259550]: 2025-10-07 14:50:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:50:04 np0005473739 nova_compute[259550]: 2025-10-07 14:50:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:50:04 np0005473739 nova_compute[259550]: 2025-10-07 14:50:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:50:04 np0005473739 nova_compute[259550]: 2025-10-07 14:50:04.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:50:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2636: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:50:06 np0005473739 nova_compute[259550]: 2025-10-07 14:50:06.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:50:08 np0005473739 nova_compute[259550]: 2025-10-07 14:50:08.292 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2637: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:50:08 np0005473739 nova_compute[259550]: 2025-10-07 14:50:08.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.476 2 DEBUG nova.compute.manager [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.477 2 DEBUG nova.compute.manager [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing instance network info cache due to event network-changed-b6c3792a-487a-43d7-969c-5f2b969e1390. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.477 2 DEBUG oslo_concurrency.lockutils [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.477 2 DEBUG oslo_concurrency.lockutils [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.477 2 DEBUG nova.network.neutron [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Refreshing network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.564 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.564 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.564 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.565 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.565 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.566 2 INFO nova.compute.manager [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Terminating instance#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.567 2 DEBUG nova.compute.manager [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:50:10 np0005473739 kernel: tapb6c3792a-48 (unregistering): left promiscuous mode
Oct  7 10:50:10 np0005473739 NetworkManager[44949]: <info>  [1759848610.6371] device (tapb6c3792a-48): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:50:10Z|01552|binding|INFO|Releasing lport b6c3792a-487a-43d7-969c-5f2b969e1390 from this chassis (sb_readonly=0)
Oct  7 10:50:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:50:10Z|01553|binding|INFO|Setting lport b6c3792a-487a-43d7-969c-5f2b969e1390 down in Southbound
Oct  7 10:50:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:50:10Z|01554|binding|INFO|Removing iface tapb6c3792a-48 ovn-installed in OVS
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.669 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e2:42:c6 10.100.0.8 2001:db8::f816:3eff:fee2:42c6'], port_security=['fa:16:3e:e2:42:c6 10.100.0.8 2001:db8::f816:3eff:fee2:42c6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28 2001:db8::f816:3eff:fee2:42c6/64', 'neutron:device_id': '6d966826-1c6c-4205-9ee0-3a69c9e1c2d5', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '131e11cc-a041-4277-a4c1-b09bd1db8909', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba61e351-8cde-4bb5-94c2-d294e74e4b35, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b6c3792a-487a-43d7-969c-5f2b969e1390) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.670 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b6c3792a-487a-43d7-969c-5f2b969e1390 in datapath 69026eb8-a969-41bf-9300-c17871babf58 unbound from our chassis#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.671 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 69026eb8-a969-41bf-9300-c17871babf58#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2638: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 303 KiB/s rd, 1.4 MiB/s wr, 52 op/s
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.691 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5ab2b4ea-b1b6-41db-a9b5-0327838aa780]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:10 np0005473739 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Deactivated successfully.
Oct  7 10:50:10 np0005473739 systemd[1]: machine-qemu\x2d175\x2dinstance\x2d0000008d.scope: Consumed 13.624s CPU time.
Oct  7 10:50:10 np0005473739 systemd-machined[214580]: Machine qemu-175-instance-0000008d terminated.
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.723 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5af3368a-3ba6-442d-a2a3-5215d04dd20b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.727 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1f2f7ff5-d238-4f41-beed-b4a2eab0c5e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.762 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[87eacac8-c9f1-4f36-b352-b8d8644e7c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.783 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69cc8106-fbe6-4874-8531-0b68215d42a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap69026eb8-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c5:4f:60'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 36, 'tx_packets': 7, 'rx_bytes': 3080, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 444], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913214, 'reachable_time': 15041, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 32, 'inoctets': 2464, 'indelivers': 7, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 32, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 2464, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 32, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 7, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413576, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.802 2 INFO nova.virt.libvirt.driver [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Instance destroyed successfully.#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.803 2 DEBUG nova.objects.instance [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.805 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[54c60d3b-e914-4f3d-a035-bd68ede93a0e]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap69026eb8-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913226, 'tstamp': 913226}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413582, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap69026eb8-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 913229, 'tstamp': 913229}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 413582, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.807 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69026eb8-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.815 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.814 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap69026eb8-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.814 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.815 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap69026eb8-a0, col_values=(('external_ids', {'iface-id': 'c9a8228f-4b9d-4f81-919d-b6781c06cebf'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:50:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:10.815 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.828 2 DEBUG nova.virt.libvirt.vif [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:49:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-1636457952',display_name='tempest-TestGettingAddress-server-1636457952',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-1636457952',id=141,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:49:47Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-rg0e899h',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:49:47Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=6d966826-1c6c-4205-9ee0-3a69c9e1c2d5,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.830 2 DEBUG nova.network.os_vif_util [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.175", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.831 2 DEBUG nova.network.os_vif_util [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.831 2 DEBUG os_vif [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.834 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb6c3792a-48, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.835 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.839 2 INFO os_vif [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e2:42:c6,bridge_name='br-int',has_traffic_filtering=True,id=b6c3792a-487a-43d7-969c-5f2b969e1390,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb6c3792a-48')#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.859 2 DEBUG nova.compute.manager [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-unplugged-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.860 2 DEBUG oslo_concurrency.lockutils [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.860 2 DEBUG oslo_concurrency.lockutils [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.860 2 DEBUG oslo_concurrency.lockutils [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.860 2 DEBUG nova.compute.manager [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] No waiting events found dispatching network-vif-unplugged-b6c3792a-487a-43d7-969c-5f2b969e1390 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:50:10 np0005473739 nova_compute[259550]: 2025-10-07 14:50:10.861 2 DEBUG nova.compute.manager [req-a3c5a33e-dcd2-4b4a-b861-93965fc0b6c6 req-3781a92b-4fa3-4f9c-a05b-2e010218bf84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-unplugged-b6c3792a-487a-43d7-969c-5f2b969e1390 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:50:11 np0005473739 nova_compute[259550]: 2025-10-07 14:50:11.300 2 INFO nova.virt.libvirt.driver [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Deleting instance files /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_del#033[00m
Oct  7 10:50:11 np0005473739 nova_compute[259550]: 2025-10-07 14:50:11.301 2 INFO nova.virt.libvirt.driver [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Deletion of /var/lib/nova/instances/6d966826-1c6c-4205-9ee0-3a69c9e1c2d5_del complete#033[00m
Oct  7 10:50:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:11 np0005473739 nova_compute[259550]: 2025-10-07 14:50:11.350 2 INFO nova.compute.manager [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:50:11 np0005473739 nova_compute[259550]: 2025-10-07 14:50:11.352 2 DEBUG oslo.service.loopingcall [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:50:11 np0005473739 nova_compute[259550]: 2025-10-07 14:50:11.352 2 DEBUG nova.compute.manager [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:50:11 np0005473739 nova_compute[259550]: 2025-10-07 14:50:11.353 2 DEBUG nova.network.neutron [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.003 2 DEBUG nova.network.neutron [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updated VIF entry in instance network info cache for port b6c3792a-487a-43d7-969c-5f2b969e1390. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.004 2 DEBUG nova.network.neutron [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updating instance_info_cache with network_info: [{"id": "b6c3792a-487a-43d7-969c-5f2b969e1390", "address": "fa:16:3e:e2:42:c6", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fee2:42c6", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb6c3792a-48", "ovs_interfaceid": "b6c3792a-487a-43d7-969c-5f2b969e1390", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.022 2 DEBUG nova.network.neutron [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.024 2 DEBUG oslo_concurrency.lockutils [req-c4f51eab-4362-4bf2-8a11-0f9b0e34583b req-f851ae8e-944d-4c0b-9f42-7c4b94c3da84 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.037 2 INFO nova.compute.manager [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Took 0.68 seconds to deallocate network for instance.#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.074 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.075 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.144 2 DEBUG oslo_concurrency.processutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.556 2 DEBUG nova.compute.manager [req-fd779f44-3d9b-4e34-bcb5-303522981ee1 req-c89e1209-4bb5-4a1f-af11-9ce3bdc12e97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-deleted-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:50:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:50:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1366973841' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.607 2 DEBUG oslo_concurrency.processutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.613 2 DEBUG nova.compute.provider_tree [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.631 2 DEBUG nova.scheduler.client.report [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.653 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.677 2 INFO nova.scheduler.client.report [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5#033[00m
Oct  7 10:50:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2639: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 286 KiB/s rd, 921 KiB/s wr, 45 op/s
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.737 2 DEBUG oslo_concurrency.lockutils [None req-c1786457-8a30-4788-acec-c8b11e9bfce3 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.173s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.937 2 DEBUG nova.compute.manager [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.938 2 DEBUG oslo_concurrency.lockutils [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.939 2 DEBUG oslo_concurrency.lockutils [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.939 2 DEBUG oslo_concurrency.lockutils [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "6d966826-1c6c-4205-9ee0-3a69c9e1c2d5-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.940 2 DEBUG nova.compute.manager [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] No waiting events found dispatching network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.940 2 WARNING nova.compute.manager [req-8ea4a2c9-34a5-43cc-a7ec-f380d168f542 req-0cce5372-e5d3-44c6-8d40-86e7f510a66c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Received unexpected event network-vif-plugged-b6c3792a-487a-43d7-969c-5f2b969e1390 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:50:12 np0005473739 nova_compute[259550]: 2025-10-07 14:50:12.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.006 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.007 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.008 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.008 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.009 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:50:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1845816187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.470 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.550 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.551 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008c as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.710 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.711 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3421MB free_disk=59.89718246459961GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.712 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.712 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.778 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance f8ed40fc-237a-46e5-9557-c128fe833cea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.778 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.779 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:50:13 np0005473739 nova_compute[259550]: 2025-10-07 14:50:13.813 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.070 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.071 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.071 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.072 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.072 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.073 2 INFO nova.compute.manager [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Terminating instance#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.075 2 DEBUG nova.compute.manager [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:50:14 np0005473739 kernel: tapb3e49a3d-21 (unregistering): left promiscuous mode
Oct  7 10:50:14 np0005473739 NetworkManager[44949]: <info>  [1759848614.1413] device (tapb3e49a3d-21): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:50:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:50:14Z|01555|binding|INFO|Releasing lport b3e49a3d-2116-49cf-832f-f0126541c3aa from this chassis (sb_readonly=0)
Oct  7 10:50:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:50:14Z|01556|binding|INFO|Setting lport b3e49a3d-2116-49cf-832f-f0126541c3aa down in Southbound
Oct  7 10:50:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:50:14Z|01557|binding|INFO|Removing iface tapb3e49a3d-21 ovn-installed in OVS
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.157 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:53:47:3d 10.100.0.14 2001:db8::f816:3eff:fe53:473d'], port_security=['fa:16:3e:53:47:3d 10.100.0.14 2001:db8::f816:3eff:fe53:473d'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.14/28 2001:db8::f816:3eff:fe53:473d/64', 'neutron:device_id': 'f8ed40fc-237a-46e5-9557-c128fe833cea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-69026eb8-a969-41bf-9300-c17871babf58', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '574d256d67124b08812e14c4c1d87ace', 'neutron:revision_number': '4', 'neutron:security_group_ids': '131e11cc-a041-4277-a4c1-b09bd1db8909', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ba61e351-8cde-4bb5-94c2-d294e74e4b35, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=b3e49a3d-2116-49cf-832f-f0126541c3aa) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.159 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.158 161536 INFO neutron.agent.ovn.metadata.agent [-] Port b3e49a3d-2116-49cf-832f-f0126541c3aa in datapath 69026eb8-a969-41bf-9300-c17871babf58 unbound from our chassis#033[00m
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.159 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 69026eb8-a969-41bf-9300-c17871babf58, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.160 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c1c88b-7798-4362-b10a-0671723b37ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.161 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-69026eb8-a969-41bf-9300-c17871babf58 namespace which is not needed anymore#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.176 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:14 np0005473739 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Deactivated successfully.
Oct  7 10:50:14 np0005473739 systemd[1]: machine-qemu\x2d174\x2dinstance\x2d0000008c.scope: Consumed 15.135s CPU time.
Oct  7 10:50:14 np0005473739 systemd-machined[214580]: Machine qemu-174-instance-0000008c terminated.
Oct  7 10:50:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:50:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3725484053' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.294 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:14 np0005473739 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [NOTICE]   (412078) : haproxy version is 2.8.14-c23fe91
Oct  7 10:50:14 np0005473739 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [NOTICE]   (412078) : path to executable is /usr/sbin/haproxy
Oct  7 10:50:14 np0005473739 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [WARNING]  (412078) : Exiting Master process...
Oct  7 10:50:14 np0005473739 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [WARNING]  (412078) : Exiting Master process...
Oct  7 10:50:14 np0005473739 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [ALERT]    (412078) : Current worker (412080) exited with code 143 (Terminated)
Oct  7 10:50:14 np0005473739 neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58[412072]: [WARNING]  (412078) : All workers exited. Exiting... (0)
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:14 np0005473739 systemd[1]: libpod-08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848.scope: Deactivated successfully.
Oct  7 10:50:14 np0005473739 conmon[412072]: conmon 08b45ab11c8c74752dfc <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848.scope/container/memory.events
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.308 2 INFO nova.virt.libvirt.driver [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Instance destroyed successfully.#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.308 2 DEBUG nova.objects.instance [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lazy-loading 'resources' on Instance uuid f8ed40fc-237a-46e5-9557-c128fe833cea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:50:14 np0005473739 podman[413698]: 2025-10-07 14:50:14.310395737 +0000 UTC m=+0.049158559 container died 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.317 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.322 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.339 2 DEBUG nova.virt.libvirt.vif [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:48:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestGettingAddress-server-332055908',display_name='tempest-TestGettingAddress-server-332055908',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testgettingaddress-server-332055908',id=140,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKnFc7oNzLOWPggfVYh6NCn5dPRqTtk1lzxcoCVefWPMaEDbpbiTeEgrtbvchkPj1nQBZHZbolxqW+9TzmB8BHZOPL0u9Rkk75pOIuFTC4DXuvzWlGpX2j66d1jp31XwzQ==',key_name='tempest-TestGettingAddress-1287985769',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:49:09Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='574d256d67124b08812e14c4c1d87ace',ramdisk_id='',reservation_id='r-fp0trybq',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestGettingAddress-9217867',owner_user_name='tempest-TestGettingAddress-9217867-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:49:09Z,user_data=None,user_id='d385c9b3a9ee47cdb1425cac9b13ed1a',uuid=f8ed40fc-237a-46e5-9557-c128fe833cea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, 
"meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.340 2 DEBUG nova.network.os_vif_util [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converting VIF {"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:50:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5d721598f91972f193adad12221f9b62ad2f40fb31ec50195e4d2557053f117c-merged.mount: Deactivated successfully.
Oct  7 10:50:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848-userdata-shm.mount: Deactivated successfully.
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.341 2 DEBUG nova.network.os_vif_util [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.341 2 DEBUG os_vif [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.345 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb3e49a3d-21, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.346 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:14 np0005473739 podman[413698]: 2025-10-07 14:50:14.350211802 +0000 UTC m=+0.088974624 container cleanup 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.352 2 INFO os_vif [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:53:47:3d,bridge_name='br-int',has_traffic_filtering=True,id=b3e49a3d-2116-49cf-832f-f0126541c3aa,network=Network(69026eb8-a969-41bf-9300-c17871babf58),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb3e49a3d-21')#033[00m
Oct  7 10:50:14 np0005473739 systemd[1]: libpod-conmon-08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848.scope: Deactivated successfully.
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.390 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.391 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.679s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:50:14 np0005473739 podman[413736]: 2025-10-07 14:50:14.426156527 +0000 UTC m=+0.052816256 container remove 08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.433 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[532cfdc0-f897-485b-b316-1aee31c3689f]: (4, ('Tue Oct  7 02:50:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58 (08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848)\n08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848\nTue Oct  7 02:50:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-69026eb8-a969-41bf-9300-c17871babf58 (08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848)\n08b45ab11c8c74752dfc5d06b9514c1ac01a4fc0ebb80cc4797523706e9a1848\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.435 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e1d3de90-bce4-4e6f-aec7-a534a86dac24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.435 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap69026eb8-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.437 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:14 np0005473739 kernel: tap69026eb8-a0: left promiscuous mode
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.442 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6db400fd-960b-423e-903b-661149ab240d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.470 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7bc1b30b-0adb-4a4c-949b-42d0be9b2f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.472 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2f409d21-1969-44d2-a529-6b88bbd83ef8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.488 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b5b92baa-bcfc-4579-a701-748d6d739287]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 913205, 'reachable_time': 22125, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 413770, 'error': None, 'target': 'ovnmeta-69026eb8-a969-41bf-9300-c17871babf58', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.491 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-69026eb8-a969-41bf-9300-c17871babf58 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:50:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:14.491 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[7711c11b-1006-4d99-8d93-7c6c67928c01]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:50:14 np0005473739 systemd[1]: run-netns-ovnmeta\x2d69026eb8\x2da969\x2d41bf\x2d9300\x2dc17871babf58.mount: Deactivated successfully.
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.635 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.638 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing instance network info cache due to event network-changed-b3e49a3d-2116-49cf-832f-f0126541c3aa. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.638 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.638 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.639 2 DEBUG nova.network.neutron [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Refreshing network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:50:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2640: 305 pgs: 305 active+clean; 141 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 922 KiB/s wr, 64 op/s
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.805 2 INFO nova.virt.libvirt.driver [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Deleting instance files /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea_del#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.807 2 INFO nova.virt.libvirt.driver [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Deletion of /var/lib/nova/instances/f8ed40fc-237a-46e5-9557-c128fe833cea_del complete#033[00m
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.888 2 INFO nova.compute.manager [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Took 0.81 seconds to destroy the instance on the hypervisor.
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.889 2 DEBUG oslo.service.loopingcall [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.890 2 DEBUG nova.compute.manager [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:50:14 np0005473739 nova_compute[259550]: 2025-10-07 14:50:14.890 2 DEBUG nova.network.neutron [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:50:15 np0005473739 nova_compute[259550]: 2025-10-07 14:50:15.599 2 DEBUG nova.network.neutron [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:50:15 np0005473739 nova_compute[259550]: 2025-10-07 14:50:15.617 2 INFO nova.compute.manager [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Took 0.73 seconds to deallocate network for instance.
Oct  7 10:50:15 np0005473739 nova_compute[259550]: 2025-10-07 14:50:15.658 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:50:15 np0005473739 nova_compute[259550]: 2025-10-07 14:50:15.658 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:50:15 np0005473739 nova_compute[259550]: 2025-10-07 14:50:15.726 2 DEBUG oslo_concurrency.processutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:50:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:50:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612406044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.196 2 DEBUG oslo_concurrency.processutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.206 2 DEBUG nova.compute.provider_tree [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.227 2 DEBUG nova.scheduler.client.report [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.247 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.589s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.270 2 INFO nova.scheduler.client.report [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Deleted allocations for instance f8ed40fc-237a-46e5-9557-c128fe833cea
Oct  7 10:50:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.343 2 DEBUG oslo_concurrency.lockutils [None req-b5ec9812-c530-4d10-b5ec-90e3ec67e607 d385c9b3a9ee47cdb1425cac9b13ed1a 574d256d67124b08812e14c4c1d87ace - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.374 2 DEBUG nova.network.neutron [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updated VIF entry in instance network info cache for port b3e49a3d-2116-49cf-832f-f0126541c3aa. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.375 2 DEBUG nova.network.neutron [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Updating instance_info_cache with network_info: [{"id": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "address": "fa:16:3e:53:47:3d", "network": {"id": "69026eb8-a969-41bf-9300-c17871babf58", "bridge": "br-int", "label": "tempest-network-smoke--2097231794", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.14", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}, {"cidr": "2001:db8::/64", "dns": [], "gateway": {"address": "2001:db8::", "type": "gateway", "version": 6, "meta": {}}, "ips": [{"address": "2001:db8::f816:3eff:fe53:473d", "type": "fixed", "version": 6, "meta": {}, "floating_ips": []}], "routes": [], "version": 6, "meta": {"enable_dhcp": true, "ipv6_address_mode": "slaac"}}], "meta": {"injected": false, "tenant_id": "574d256d67124b08812e14c4c1d87ace", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb3e49a3d-21", "ovs_interfaceid": "b3e49a3d-2116-49cf-832f-f0126541c3aa", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.392 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.396 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-f8ed40fc-237a-46e5-9557-c128fe833cea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.396 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-unplugged-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.397 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.397 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.398 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.398 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] No waiting events found dispatching network-vif-unplugged-b3e49a3d-2116-49cf-832f-f0126541c3aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.398 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-unplugged-b3e49a3d-2116-49cf-832f-f0126541c3aa for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.398 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.399 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.399 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.399 2 DEBUG oslo_concurrency.lockutils [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "f8ed40fc-237a-46e5-9557-c128fe833cea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.400 2 DEBUG nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] No waiting events found dispatching network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.400 2 WARNING nova.compute.manager [req-21e4b7a1-edcd-4f98-9b02-d57950722a31 req-43402001-1092-496b-9b60-405d9027eafe 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received unexpected event network-vif-plugged-b3e49a3d-2116-49cf-832f-f0126541c3aa for instance with vm_state active and task_state deleting.
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.408 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.408 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.408 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.420 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  7 10:50:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2641: 305 pgs: 305 active+clean; 88 MiB data, 990 MiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 22 KiB/s wr, 43 op/s
Oct  7 10:50:16 np0005473739 nova_compute[259550]: 2025-10-07 14:50:16.699 2 DEBUG nova.compute.manager [req-791c0356-eeef-4b10-9862-49f2cffe2d91 req-cf181dfd-709b-410d-87ef-faead0f39e56 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Received event network-vif-deleted-b3e49a3d-2116-49cf-832f-f0126541c3aa external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:50:17 np0005473739 podman[413794]: 2025-10-07 14:50:17.068894749 +0000 UTC m=+0.055746886 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 10:50:17 np0005473739 podman[413795]: 2025-10-07 14:50:17.074669336 +0000 UTC m=+0.059132935 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 10:50:17 np0005473739 nova_compute[259550]: 2025-10-07 14:50:17.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:50:18 np0005473739 nova_compute[259550]: 2025-10-07 14:50:18.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2642: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 23 KiB/s wr, 58 op/s
Oct  7 10:50:19 np0005473739 nova_compute[259550]: 2025-10-07 14:50:19.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2643: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 11 KiB/s wr, 57 op/s
Oct  7 10:50:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2644: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:50:22
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.data', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', '.mgr', 'images', 'default.rgw.meta', 'volumes', 'vms', '.rgw.root']
Oct  7 10:50:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:50:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:50:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:50:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:50:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:50:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:50:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:50:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:50:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:50:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:50:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:50:23 np0005473739 nova_compute[259550]: 2025-10-07 14:50:23.299 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:24 np0005473739 nova_compute[259550]: 2025-10-07 14:50:24.350 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2645: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 2.3 KiB/s wr, 56 op/s
Oct  7 10:50:24 np0005473739 nova_compute[259550]: 2025-10-07 14:50:24.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:24 np0005473739 nova_compute[259550]: 2025-10-07 14:50:24.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:25 np0005473739 nova_compute[259550]: 2025-10-07 14:50:25.802 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848610.8006272, 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:50:25 np0005473739 nova_compute[259550]: 2025-10-07 14:50:25.802 2 INFO nova.compute.manager [-] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] VM Stopped (Lifecycle Event)
Oct  7 10:50:25 np0005473739 nova_compute[259550]: 2025-10-07 14:50:25.826 2 DEBUG nova.compute.manager [None req-7dac03c7-a698-4ec6-92b3-086b70525b77 - - - - - -] [instance: 6d966826-1c6c-4205-9ee0-3a69c9e1c2d5] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:50:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2646: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.2 KiB/s wr, 37 op/s
Oct  7 10:50:28 np0005473739 nova_compute[259550]: 2025-10-07 14:50:28.302 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2647: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 852 B/s wr, 14 op/s
Oct  7 10:50:29 np0005473739 nova_compute[259550]: 2025-10-07 14:50:29.307 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848614.3047776, f8ed40fc-237a-46e5-9557-c128fe833cea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:50:29 np0005473739 nova_compute[259550]: 2025-10-07 14:50:29.307 2 INFO nova.compute.manager [-] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] VM Stopped (Lifecycle Event)
Oct  7 10:50:29 np0005473739 nova_compute[259550]: 2025-10-07 14:50:29.331 2 DEBUG nova.compute.manager [None req-7b7ce7f8-f2f5-4ac4-afbf-769a5a7b9da1 - - - - - -] [instance: f8ed40fc-237a-46e5-9557-c128fe833cea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:50:29 np0005473739 nova_compute[259550]: 2025-10-07 14:50:29.352 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:30 np0005473739 podman[413835]: 2025-10-07 14:50:30.073970496 +0000 UTC m=+0.060501579 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:50:30 np0005473739 podman[413836]: 2025-10-07 14:50:30.112652324 +0000 UTC m=+0.089556898 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 10:50:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2648: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2649: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:50:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1371807605' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:50:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:50:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1371807605' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:50:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:50:33 np0005473739 nova_compute[259550]: 2025-10-07 14:50:33.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:34 np0005473739 nova_compute[259550]: 2025-10-07 14:50:34.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2650: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:50:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 12K writes, 55K keys, 12K commit groups, 1.0 writes per commit group, ingest: 0.07 GB, 0.02 MB/s#012Cumulative WAL: 12K writes, 12K syncs, 1.00 writes per sync, written: 0.07 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1362 writes, 6184 keys, 1362 commit groups, 1.0 writes per commit group, ingest: 8.69 MB, 0.01 MB/s#012Interval WAL: 1362 writes, 1362 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     76.9      0.86              0.21        38    0.023       0      0       0.0       0.0#012  L6      1/0    7.93 MB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   4.5    143.4    120.1      2.50              0.85        37    0.068    224K    20K       0.0       0.0#012 Sum      1/0    7.93 MB   0.0      0.3     0.1      0.3       0.4      0.1       0.0   5.5    106.7    109.1      3.36              1.06        75    0.045    224K    20K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   7.0    148.2    148.3      0.35              0.15        10    0.035     38K   2533       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.3     0.1      0.3       0.3      0.0       0.0   0.0    143.4    120.1      2.50              0.85        37    0.068    224K    20K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     77.2      0.85              0.21        37    0.023       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 4800.0 total, 600.0 interval#012Flush(GB): cumulative 0.065, interval 0.007#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.36 GB write, 0.08 MB/s write, 0.35 GB read, 0.07 MB/s read, 3.4 seconds#012Interval compaction: 0.05 GB write, 0.09 MB/s write, 0.05 GB read, 0.09 MB/s read, 0.3 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 40.75 MB table_size: 0 occupancy: 18446744073709551615 collections: 9 last_copies: 0 last_secs: 0.000297 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(2648,39.10 MB,12.8633%) FilterBlock(76,632.36 KB,0.203138%) IndexBlock(76,1.03 MB,0.338158%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:50:36 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1575957d-f4ec-46bf-b164-4041e986014a does not exist
Oct  7 10:50:36 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 975a8f92-78af-4835-a1a0-ed515622348f does not exist
Oct  7 10:50:36 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e2040ad4-f5ed-42c3-834d-b42088e5fa76 does not exist
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:50:36 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:50:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2651: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:36 np0005473739 podman[414151]: 2025-10-07 14:50:36.981559304 +0000 UTC m=+0.046948616 container create 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 10:50:37 np0005473739 systemd[1]: Started libpod-conmon-2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055.scope.
Oct  7 10:50:37 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:50:37 np0005473739 podman[414151]: 2025-10-07 14:50:36.962359377 +0000 UTC m=+0.027748719 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:50:37 np0005473739 podman[414151]: 2025-10-07 14:50:37.074077532 +0000 UTC m=+0.139466864 container init 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:50:37 np0005473739 podman[414151]: 2025-10-07 14:50:37.081110129 +0000 UTC m=+0.146499451 container start 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:50:37 np0005473739 podman[414151]: 2025-10-07 14:50:37.084614692 +0000 UTC m=+0.150004024 container attach 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:50:37 np0005473739 brave_fermi[414167]: 167 167
Oct  7 10:50:37 np0005473739 systemd[1]: libpod-2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055.scope: Deactivated successfully.
Oct  7 10:50:37 np0005473739 podman[414151]: 2025-10-07 14:50:37.088888564 +0000 UTC m=+0.154277886 container died 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 10:50:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cbcfeb77df84228bf29b99ed1c3df5904a5395542f2f5889427cd279f9279b85-merged.mount: Deactivated successfully.
Oct  7 10:50:37 np0005473739 podman[414151]: 2025-10-07 14:50:37.13044534 +0000 UTC m=+0.195834652 container remove 2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_fermi, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:50:37 np0005473739 systemd[1]: libpod-conmon-2a73dc31d08b34dd6aed3db1cc13771b94d87a729d6e526684e6952ac8c53055.scope: Deactivated successfully.
Oct  7 10:50:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:37.213 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=50, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=49) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:50:37 np0005473739 nova_compute[259550]: 2025-10-07 14:50:37.214 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:37.216 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:50:37 np0005473739 podman[414190]: 2025-10-07 14:50:37.305676904 +0000 UTC m=+0.046654730 container create 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 10:50:37 np0005473739 systemd[1]: Started libpod-conmon-6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c.scope.
Oct  7 10:50:37 np0005473739 podman[414190]: 2025-10-07 14:50:37.285094285 +0000 UTC m=+0.026072141 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:50:37 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:50:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:37 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:37 np0005473739 podman[414190]: 2025-10-07 14:50:37.40358577 +0000 UTC m=+0.144563616 container init 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:50:37 np0005473739 podman[414190]: 2025-10-07 14:50:37.412064282 +0000 UTC m=+0.153042108 container start 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 10:50:37 np0005473739 podman[414190]: 2025-10-07 14:50:37.418819241 +0000 UTC m=+0.159797067 container attach 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True)
Oct  7 10:50:38 np0005473739 nova_compute[259550]: 2025-10-07 14:50:38.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:38 np0005473739 infallible_almeida[414207]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:50:38 np0005473739 infallible_almeida[414207]: --> relative data size: 1.0
Oct  7 10:50:38 np0005473739 infallible_almeida[414207]: --> All data devices are unavailable
Oct  7 10:50:38 np0005473739 systemd[1]: libpod-6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c.scope: Deactivated successfully.
Oct  7 10:50:38 np0005473739 systemd[1]: libpod-6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c.scope: Consumed 1.132s CPU time.
Oct  7 10:50:38 np0005473739 podman[414190]: 2025-10-07 14:50:38.600479813 +0000 UTC m=+1.341457659 container died 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:50:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6c13297a2b7c42f41ca8f41016d799d1a011d312ac73e95a14285775f21bdf8f-merged.mount: Deactivated successfully.
Oct  7 10:50:38 np0005473739 podman[414190]: 2025-10-07 14:50:38.666058542 +0000 UTC m=+1.407036368 container remove 6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_almeida, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 10:50:38 np0005473739 systemd[1]: libpod-conmon-6360c78890b65084f4e4ab21119f7b05939b32fafb6a847bac9d21016b6a740c.scope: Deactivated successfully.
Oct  7 10:50:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2652: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:39 np0005473739 podman[414388]: 2025-10-07 14:50:39.368643843 +0000 UTC m=+0.071496680 container create 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 10:50:39 np0005473739 nova_compute[259550]: 2025-10-07 14:50:39.390 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:39 np0005473739 podman[414388]: 2025-10-07 14:50:39.322973177 +0000 UTC m=+0.025826004 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:50:39 np0005473739 systemd[1]: Started libpod-conmon-72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f.scope.
Oct  7 10:50:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:50:39 np0005473739 podman[414388]: 2025-10-07 14:50:39.612483476 +0000 UTC m=+0.315336293 container init 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 10:50:39 np0005473739 podman[414388]: 2025-10-07 14:50:39.619720528 +0000 UTC m=+0.322573335 container start 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:50:39 np0005473739 systemd[1]: libpod-72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f.scope: Deactivated successfully.
Oct  7 10:50:39 np0005473739 vibrant_ride[414404]: 167 167
Oct  7 10:50:39 np0005473739 conmon[414404]: conmon 72a33e3025f96325173a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f.scope/container/memory.events
Oct  7 10:50:39 np0005473739 podman[414388]: 2025-10-07 14:50:39.700995228 +0000 UTC m=+0.403848035 container attach 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:50:39 np0005473739 podman[414388]: 2025-10-07 14:50:39.701770287 +0000 UTC m=+0.404623124 container died 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:50:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-14974cce87bda83f2ca23232cc3f4636139ac1d0edb0fc515cff0e201014618b-merged.mount: Deactivated successfully.
Oct  7 10:50:40 np0005473739 podman[414388]: 2025-10-07 14:50:40.073757535 +0000 UTC m=+0.776610342 container remove 72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_ride, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:50:40 np0005473739 systemd[1]: libpod-conmon-72a33e3025f96325173a3d9c5a072571754bebb85f228a6ff3b37ca68a25838f.scope: Deactivated successfully.
Oct  7 10:50:40 np0005473739 podman[414427]: 2025-10-07 14:50:40.261761231 +0000 UTC m=+0.059558726 container create 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:50:40 np0005473739 systemd[1]: Started libpod-conmon-772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2.scope.
Oct  7 10:50:40 np0005473739 podman[414427]: 2025-10-07 14:50:40.229100835 +0000 UTC m=+0.026898410 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:50:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:50:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd10e1e6c9fa60614292ad4fecbfd9b97f53506e6d41113ab214d69fb4584ebe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd10e1e6c9fa60614292ad4fecbfd9b97f53506e6d41113ab214d69fb4584ebe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd10e1e6c9fa60614292ad4fecbfd9b97f53506e6d41113ab214d69fb4584ebe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cd10e1e6c9fa60614292ad4fecbfd9b97f53506e6d41113ab214d69fb4584ebe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:40 np0005473739 podman[414427]: 2025-10-07 14:50:40.348675446 +0000 UTC m=+0.146472961 container init 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 10:50:40 np0005473739 podman[414427]: 2025-10-07 14:50:40.359557674 +0000 UTC m=+0.157355159 container start 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 10:50:40 np0005473739 podman[414427]: 2025-10-07 14:50:40.363081518 +0000 UTC m=+0.160879033 container attach 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 10:50:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2653: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]: {
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:    "0": [
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:        {
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "devices": [
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "/dev/loop3"
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            ],
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_name": "ceph_lv0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_size": "21470642176",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "name": "ceph_lv0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "tags": {
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.cluster_name": "ceph",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.crush_device_class": "",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.encrypted": "0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.osd_id": "0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.type": "block",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.vdo": "0"
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            },
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "type": "block",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "vg_name": "ceph_vg0"
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:        }
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:    ],
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:    "1": [
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:        {
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "devices": [
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "/dev/loop4"
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            ],
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_name": "ceph_lv1",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_size": "21470642176",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "name": "ceph_lv1",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "tags": {
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.cluster_name": "ceph",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.crush_device_class": "",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.encrypted": "0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.osd_id": "1",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.type": "block",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.vdo": "0"
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            },
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "type": "block",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "vg_name": "ceph_vg1"
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:        }
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:    ],
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:    "2": [
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:        {
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "devices": [
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "/dev/loop5"
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            ],
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_name": "ceph_lv2",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_size": "21470642176",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "name": "ceph_lv2",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "tags": {
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.cluster_name": "ceph",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.crush_device_class": "",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.encrypted": "0",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.osd_id": "2",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.type": "block",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:                "ceph.vdo": "0"
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            },
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "type": "block",
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:            "vg_name": "ceph_vg2"
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:        }
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]:    ]
Oct  7 10:50:41 np0005473739 quizzical_payne[414443]: }
Oct  7 10:50:41 np0005473739 systemd[1]: libpod-772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2.scope: Deactivated successfully.
Oct  7 10:50:41 np0005473739 podman[414427]: 2025-10-07 14:50:41.187114004 +0000 UTC m=+0.984911569 container died 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 10:50:41 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:50:41.218 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '50'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:50:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cd10e1e6c9fa60614292ad4fecbfd9b97f53506e6d41113ab214d69fb4584ebe-merged.mount: Deactivated successfully.
Oct  7 10:50:42 np0005473739 podman[414427]: 2025-10-07 14:50:42.394667371 +0000 UTC m=+2.192464866 container remove 772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_payne, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 10:50:42 np0005473739 systemd[1]: libpod-conmon-772a8a159a18e92639b51548a85f72d935f9a18c6a8a2db2be832c7dc36449a2.scope: Deactivated successfully.
Oct  7 10:50:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2654: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:42 np0005473739 podman[414602]: 2025-10-07 14:50:42.984087154 +0000 UTC m=+0.036722664 container create 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:50:43 np0005473739 systemd[1]: Started libpod-conmon-7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66.scope.
Oct  7 10:50:43 np0005473739 podman[414602]: 2025-10-07 14:50:42.968831231 +0000 UTC m=+0.021466771 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:50:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:50:43 np0005473739 podman[414602]: 2025-10-07 14:50:43.080238848 +0000 UTC m=+0.132874378 container init 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:50:43 np0005473739 podman[414602]: 2025-10-07 14:50:43.089405405 +0000 UTC m=+0.142040915 container start 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:50:43 np0005473739 podman[414602]: 2025-10-07 14:50:43.093024162 +0000 UTC m=+0.145659702 container attach 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:50:43 np0005473739 recursing_torvalds[414618]: 167 167
Oct  7 10:50:43 np0005473739 systemd[1]: libpod-7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66.scope: Deactivated successfully.
Oct  7 10:50:43 np0005473739 podman[414623]: 2025-10-07 14:50:43.13504889 +0000 UTC m=+0.025172259 container died 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:50:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-160997dc104eaf9e71a547fa1d0eff32dff80b957537c57d5c4b7ff08b52e2a0-merged.mount: Deactivated successfully.
Oct  7 10:50:43 np0005473739 podman[414623]: 2025-10-07 14:50:43.16745929 +0000 UTC m=+0.057582619 container remove 7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=recursing_torvalds, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:50:43 np0005473739 systemd[1]: libpod-conmon-7afd718dfa4940c04619893c3e6086ec27141347cdd823c8642a08fb16a71f66.scope: Deactivated successfully.
Oct  7 10:50:43 np0005473739 nova_compute[259550]: 2025-10-07 14:50:43.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:43 np0005473739 podman[414644]: 2025-10-07 14:50:43.333785791 +0000 UTC m=+0.042809368 container create 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 10:50:43 np0005473739 systemd[1]: Started libpod-conmon-75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182.scope.
Oct  7 10:50:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:50:43 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0842d37c1e684ee0fca45ed57a46edd6ddb90354acc6c6f2c9c057ae10c16d0/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:43 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0842d37c1e684ee0fca45ed57a46edd6ddb90354acc6c6f2c9c057ae10c16d0/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:43 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0842d37c1e684ee0fca45ed57a46edd6ddb90354acc6c6f2c9c057ae10c16d0/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:43 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0842d37c1e684ee0fca45ed57a46edd6ddb90354acc6c6f2c9c057ae10c16d0/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:50:43 np0005473739 podman[414644]: 2025-10-07 14:50:43.317662368 +0000 UTC m=+0.026685975 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:50:43 np0005473739 podman[414644]: 2025-10-07 14:50:43.414916309 +0000 UTC m=+0.123939906 container init 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 10:50:43 np0005473739 podman[414644]: 2025-10-07 14:50:43.423242127 +0000 UTC m=+0.132265704 container start 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 10:50:43 np0005473739 podman[414644]: 2025-10-07 14:50:43.427445827 +0000 UTC m=+0.136469434 container attach 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:50:44 np0005473739 nova_compute[259550]: 2025-10-07 14:50:44.393 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]: {
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "osd_id": 2,
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "type": "bluestore"
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:    },
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "osd_id": 1,
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "type": "bluestore"
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:    },
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "osd_id": 0,
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:        "type": "bluestore"
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]:    }
Oct  7 10:50:44 np0005473739 jovial_kepler[414661]: }
Oct  7 10:50:44 np0005473739 systemd[1]: libpod-75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182.scope: Deactivated successfully.
Oct  7 10:50:44 np0005473739 systemd[1]: libpod-75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182.scope: Consumed 1.085s CPU time.
Oct  7 10:50:44 np0005473739 podman[414694]: 2025-10-07 14:50:44.54690128 +0000 UTC m=+0.029327647 container died 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 10:50:44 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d0842d37c1e684ee0fca45ed57a46edd6ddb90354acc6c6f2c9c057ae10c16d0-merged.mount: Deactivated successfully.
Oct  7 10:50:44 np0005473739 podman[414694]: 2025-10-07 14:50:44.608138225 +0000 UTC m=+0.090564512 container remove 75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kepler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 10:50:44 np0005473739 systemd[1]: libpod-conmon-75a28899d2f8395591499ba0a553479ddaa742ff4a4878af2e58f675c6653182.scope: Deactivated successfully.
Oct  7 10:50:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:50:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:50:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:50:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:50:44 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 903fd8e5-7cb8-4c48-8bf8-f30e21a4b7ec does not exist
Oct  7 10:50:44 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 508c81a4-a460-4720-ad59-e857c21d5f56 does not exist
Oct  7 10:50:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2655: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:45 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:50:45 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:50:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2656: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:48 np0005473739 podman[414758]: 2025-10-07 14:50:48.081665994 +0000 UTC m=+0.066189003 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  7 10:50:48 np0005473739 podman[414759]: 2025-10-07 14:50:48.08320034 +0000 UTC m=+0.066725006 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 10:50:48 np0005473739 nova_compute[259550]: 2025-10-07 14:50:48.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2657: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:49 np0005473739 nova_compute[259550]: 2025-10-07 14:50:49.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2658: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:50:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:50:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2659: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:52 np0005473739 nova_compute[259550]: 2025-10-07 14:50:52.827 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:50:52 np0005473739 nova_compute[259550]: 2025-10-07 14:50:52.827 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:50:52 np0005473739 nova_compute[259550]: 2025-10-07 14:50:52.847 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:50:52 np0005473739 nova_compute[259550]: 2025-10-07 14:50:52.924 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:50:52 np0005473739 nova_compute[259550]: 2025-10-07 14:50:52.925 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:50:52 np0005473739 nova_compute[259550]: 2025-10-07 14:50:52.935 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:50:52 np0005473739 nova_compute[259550]: 2025-10-07 14:50:52.935 2 INFO nova.compute.claims [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.033 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:50:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/881193533' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.516 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.483s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.523 2 DEBUG nova.compute.provider_tree [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.548 2 DEBUG nova.scheduler.client.report [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.576 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.652s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.578 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.651 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.652 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.678 2 INFO nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.726 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.819 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.821 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.821 2 INFO nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Creating image(s)
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.843 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.867 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.888 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.893 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.934 2 DEBUG nova.policy [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.972 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.974 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.975 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.975 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:50:53 np0005473739 nova_compute[259550]: 2025-10-07 14:50:53.999 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:50:54 np0005473739 nova_compute[259550]: 2025-10-07 14:50:54.003 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:50:54 np0005473739 nova_compute[259550]: 2025-10-07 14:50:54.367 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.364s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:50:54 np0005473739 nova_compute[259550]: 2025-10-07 14:50:54.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:50:54 np0005473739 nova_compute[259550]: 2025-10-07 14:50:54.433 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:50:54 np0005473739 nova_compute[259550]: 2025-10-07 14:50:54.524 2 DEBUG nova.objects.instance [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:50:54 np0005473739 nova_compute[259550]: 2025-10-07 14:50:54.578 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Oct  7 10:50:54 np0005473739 nova_compute[259550]: 2025-10-07 14:50:54.579 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Ensure instance console log exists: /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Oct  7 10:50:54 np0005473739 nova_compute[259550]: 2025-10-07 14:50:54.579 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:50:54 np0005473739 nova_compute[259550]: 2025-10-07 14:50:54.580 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:50:54 np0005473739 nova_compute[259550]: 2025-10-07 14:50:54.580 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:50:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2660: 305 pgs: 305 active+clean; 41 MiB data, 962 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:50:56 np0005473739 nova_compute[259550]: 2025-10-07 14:50:56.133 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Successfully created port: 144643c5-91ec-4bd7-a646-3d64339b6691 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Oct  7 10:50:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:50:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2661: 305 pgs: 305 active+clean; 57 MiB data, 968 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 572 KiB/s wr, 24 op/s
Oct  7 10:50:58 np0005473739 nova_compute[259550]: 2025-10-07 14:50:58.268 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Successfully updated port: 144643c5-91ec-4bd7-a646-3d64339b6691 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:50:58 np0005473739 nova_compute[259550]: 2025-10-07 14:50:58.286 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:50:58 np0005473739 nova_compute[259550]: 2025-10-07 14:50:58.286 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:50:58 np0005473739 nova_compute[259550]: 2025-10-07 14:50:58.286 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:50:58 np0005473739 nova_compute[259550]: 2025-10-07 14:50:58.313 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:58 np0005473739 nova_compute[259550]: 2025-10-07 14:50:58.406 2 DEBUG nova.compute.manager [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:50:58 np0005473739 nova_compute[259550]: 2025-10-07 14:50:58.406 2 DEBUG nova.compute.manager [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing instance network info cache due to event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:50:58 np0005473739 nova_compute[259550]: 2025-10-07 14:50:58.406 2 DEBUG oslo_concurrency.lockutils [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:50:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2662: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.015 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.405 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.660 2 DEBUG nova.network.neutron [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.686 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.686 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Instance network_info: |[{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.687 2 DEBUG oslo_concurrency.lockutils [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.687 2 DEBUG nova.network.neutron [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.691 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Start _get_guest_xml network_info=[{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.695 2 WARNING nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.702 2 DEBUG nova.virt.libvirt.host [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.702 2 DEBUG nova.virt.libvirt.host [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.706 2 DEBUG nova.virt.libvirt.host [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.706 2 DEBUG nova.virt.libvirt.host [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.707 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.707 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.707 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.708 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.708 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.708 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.708 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.708 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.709 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.709 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.709 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.709 2 DEBUG nova.virt.hardware [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:50:59 np0005473739 nova_compute[259550]: 2025-10-07 14:50:59.712 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:51:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:00.085 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:51:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:00.086 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:51:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:00.086 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:51:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:51:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2726827845' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.183 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.213 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.220 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:51:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:51:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3938573712' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.682 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.684 2 DEBUG nova.virt.libvirt.vif [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:50:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=142,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOW4CdHXcKKzCR3ZI3nv4/KGv+Q9vtn7hxRZelXQfKtVpIrtTAVdZ3Df4UTU8EaZDakLPU1Olzl+Z8Q7yW10W/8sO5gvQt5UY4ZcGXGzLqm+i3yJBS0R+/jppzmQO8xBdQ==',key_name='tempest-TestSecurityGroupsBasicOps-1150608020',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-0j1o8fjx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:50:53Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=9b571b54-001b-4da1-8b91-2659b1fbaac6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.685 2 DEBUG nova.network.os_vif_util [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.685 2 DEBUG nova.network.os_vif_util [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.686 2 DEBUG nova.objects.instance [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.704 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  <uuid>9b571b54-001b-4da1-8b91-2659b1fbaac6</uuid>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  <name>instance-0000008e</name>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859</nova:name>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:50:59</nova:creationTime>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <nova:port uuid="144643c5-91ec-4bd7-a646-3d64339b6691">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <entry name="serial">9b571b54-001b-4da1-8b91-2659b1fbaac6</entry>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <entry name="uuid">9b571b54-001b-4da1-8b91-2659b1fbaac6</entry>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/9b571b54-001b-4da1-8b91-2659b1fbaac6_disk">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:24:52:85"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <target dev="tap144643c5-91"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/console.log" append="off"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:51:00 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:51:00 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:51:00 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:51:00 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:51:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2663: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.704 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Preparing to wait for external event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.705 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.705 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.706 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.706 2 DEBUG nova.virt.libvirt.vif [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:50:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=142,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOW4CdHXcKKzCR3ZI3nv4/KGv+Q9vtn7hxRZelXQfKtVpIrtTAVdZ3Df4UTU8EaZDakLPU1Olzl+Z8Q7yW10W/8sO5gvQt5UY4ZcGXGzLqm+i3yJBS0R+/jppzmQO8xBdQ==',key_name='tempest-TestSecurityGroupsBasicOps-1150608020',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-0j1o8fjx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:50:53Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=9b571b54-001b-4da1-8b91-2659b1fbaac6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.707 2 DEBUG nova.network.os_vif_util [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.707 2 DEBUG nova.network.os_vif_util [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.708 2 DEBUG os_vif [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.709 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.712 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap144643c5-91, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap144643c5-91, col_values=(('external_ids', {'iface-id': '144643c5-91ec-4bd7-a646-3d64339b6691', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:24:52:85', 'vm-uuid': '9b571b54-001b-4da1-8b91-2659b1fbaac6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:00 np0005473739 NetworkManager[44949]: <info>  [1759848660.7154] manager: (tap144643c5-91): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/623)
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.722 2 INFO os_vif [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91')#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.791 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.792 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.792 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:24:52:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.793 2 INFO nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Using config drive#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.827 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:51:00 np0005473739 podman[415049]: 2025-10-07 14:51:00.839327919 +0000 UTC m=+0.078717552 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  7 10:51:00 np0005473739 podman[415051]: 2025-10-07 14:51:00.854184121 +0000 UTC m=+0.092618201 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.863 2 DEBUG nova.network.neutron [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updated VIF entry in instance network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.864 2 DEBUG nova.network.neutron [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.945 2 DEBUG oslo_concurrency.lockutils [req-883a8dff-c6e4-4d65-af4a-76c9bd6cc461 req-a08f3a59-45c4-4cbb-b7a4-f0aaa32b8f59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:51:00 np0005473739 nova_compute[259550]: 2025-10-07 14:51:00.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.129 2 INFO nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Creating config drive at /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.135 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnip4at91 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.285 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpnip4at91" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.311 2 DEBUG nova.storage.rbd_utils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:51:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.315 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.487 2 DEBUG oslo_concurrency.processutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config 9b571b54-001b-4da1-8b91-2659b1fbaac6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.171s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.488 2 INFO nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Deleting local config drive /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6/disk.config because it was imported into RBD.#033[00m
Oct  7 10:51:01 np0005473739 kernel: tap144643c5-91: entered promiscuous mode
Oct  7 10:51:01 np0005473739 NetworkManager[44949]: <info>  [1759848661.5525] manager: (tap144643c5-91): new Tun device (/org/freedesktop/NetworkManager/Devices/624)
Oct  7 10:51:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:01Z|01558|binding|INFO|Claiming lport 144643c5-91ec-4bd7-a646-3d64339b6691 for this chassis.
Oct  7 10:51:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:01Z|01559|binding|INFO|144643c5-91ec-4bd7-a646-3d64339b6691: Claiming fa:16:3e:24:52:85 10.100.0.7
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.553 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.568 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:52:85 10.100.0.7'], port_security=['fa:16:3e:24:52:85 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9b571b54-001b-4da1-8b91-2659b1fbaac6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d6dba35-f647-4d4e-a538-87de70ead701', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5665608a-5872-470d-ac7c-b6ba91a59eac e6cb9e17-5a87-48e7-beaf-ada7a8c060a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=459b9ec0-d62a-47af-8167-94e59b8e15bc, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=144643c5-91ec-4bd7-a646-3d64339b6691) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.570 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 144643c5-91ec-4bd7-a646-3d64339b6691 in datapath 7d6dba35-f647-4d4e-a538-87de70ead701 bound to our chassis#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.572 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7d6dba35-f647-4d4e-a538-87de70ead701#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.592 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f5768da1-4d40-41e7-874f-c198261c0aad]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.594 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7d6dba35-f1 in ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.598 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7d6dba35-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.598 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[074ee520-53ae-48cf-b170-ed88cf8b5445]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 systemd-machined[214580]: New machine qemu-176-instance-0000008e.
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.600 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1683de15-df04-4cd2-9d84-2c3c34b207c9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 systemd-udevd[415163]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.615 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[ad766a94-922d-437a-8ba7-bc22beaffd2c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 NetworkManager[44949]: <info>  [1759848661.6179] device (tap144643c5-91): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:51:01 np0005473739 NetworkManager[44949]: <info>  [1759848661.6193] device (tap144643c5-91): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.625 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:01 np0005473739 systemd[1]: Started Virtual Machine qemu-176-instance-0000008e.
Oct  7 10:51:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:01Z|01560|binding|INFO|Setting lport 144643c5-91ec-4bd7-a646-3d64339b6691 ovn-installed in OVS
Oct  7 10:51:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:01Z|01561|binding|INFO|Setting lport 144643c5-91ec-4bd7-a646-3d64339b6691 up in Southbound
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.632 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[53ee17e1-7d48-457b-9f8d-0510984ec249]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.665 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[c8851b64-c3f6-41fa-b462-6da69acfdb4f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 NetworkManager[44949]: <info>  [1759848661.6713] manager: (tap7d6dba35-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/625)
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.669 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5aca7b83-1162-4e68-b3dc-14d53a5c15e5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.706 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d20e6b19-6a9e-4f27-b505-4e59dca7c05a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.710 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f6a8ea29-f930-4d9e-a3b9-0d92480443c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 NetworkManager[44949]: <info>  [1759848661.7397] device (tap7d6dba35-f0): carrier: link connected
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.746 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[683f301d-ad14-4f05-8d9a-bb56e8c14827]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.765 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7853c0d-1976-4cfc-8f59-abc1bdafc232]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7d6dba35-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:31:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 924530, 'reachable_time': 28816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 415195, 'error': None, 'target': 'ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.780 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b1398a94-f0fa-4555-8142-5121e8901ce9]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea2:31a5'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 924530, 'tstamp': 924530}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 415196, 'error': None, 'target': 'ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.802 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd32287-f1ba-4f0d-a710-1b2a9fa13810]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7d6dba35-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a2:31:a5'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 449], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 924530, 'reachable_time': 28816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 415197, 'error': None, 'target': 'ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.821 2 DEBUG nova.compute.manager [req-2dc0fa44-b0f8-47a3-b7eb-1e212623608f req-b9ec8e9c-5e01-4765-9ea7-bb936fa48a5e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.822 2 DEBUG oslo_concurrency.lockutils [req-2dc0fa44-b0f8-47a3-b7eb-1e212623608f req-b9ec8e9c-5e01-4765-9ea7-bb936fa48a5e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.822 2 DEBUG oslo_concurrency.lockutils [req-2dc0fa44-b0f8-47a3-b7eb-1e212623608f req-b9ec8e9c-5e01-4765-9ea7-bb936fa48a5e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.822 2 DEBUG oslo_concurrency.lockutils [req-2dc0fa44-b0f8-47a3-b7eb-1e212623608f req-b9ec8e9c-5e01-4765-9ea7-bb936fa48a5e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.822 2 DEBUG nova.compute.manager [req-2dc0fa44-b0f8-47a3-b7eb-1e212623608f req-b9ec8e9c-5e01-4765-9ea7-bb936fa48a5e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Processing event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.840 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[61c0ab09-2aa3-42c0-852a-f6c9acac90d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.914 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6c3e676c-3938-41d0-b0f7-918eac41bd6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.916 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d6dba35-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.917 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.917 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7d6dba35-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.918 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:01 np0005473739 NetworkManager[44949]: <info>  [1759848661.9193] manager: (tap7d6dba35-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/626)
Oct  7 10:51:01 np0005473739 kernel: tap7d6dba35-f0: entered promiscuous mode
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.921 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.923 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7d6dba35-f0, col_values=(('external_ids', {'iface-id': '271db248-560d-4b3a-bd65-947c8ea27522'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:01Z|01562|binding|INFO|Releasing lport 271db248-560d-4b3a-bd65-947c8ea27522 from this chassis (sb_readonly=0)
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.926 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7d6dba35-f647-4d4e-a538-87de70ead701.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7d6dba35-f647-4d4e-a538-87de70ead701.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.930 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e175ed51-1056-4df3-87fd-a8631d36ea0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.931 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-7d6dba35-f647-4d4e-a538-87de70ead701
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/7d6dba35-f647-4d4e-a538-87de70ead701.pid.haproxy
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 7d6dba35-f647-4d4e-a538-87de70ead701
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:51:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:01.932 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701', 'env', 'PROCESS_TAG=haproxy-7d6dba35-f647-4d4e-a538-87de70ead701', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7d6dba35-f647-4d4e-a538-87de70ead701.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:51:01 np0005473739 nova_compute[259550]: 2025-10-07 14:51:01.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:02 np0005473739 podman[415271]: 2025-10-07 14:51:02.331420375 +0000 UTC m=+0.049422535 container create 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 10:51:02 np0005473739 systemd[1]: Started libpod-conmon-9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b.scope.
Oct  7 10:51:02 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:51:02 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c990b5ffee34bb0ee8d8fa81835ad0deb9249f2ad983045d67c79c75ff5af772/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:02 np0005473739 podman[415271]: 2025-10-07 14:51:02.302674323 +0000 UTC m=+0.020676503 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:51:02 np0005473739 podman[415271]: 2025-10-07 14:51:02.400494727 +0000 UTC m=+0.118496917 container init 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  7 10:51:02 np0005473739 podman[415271]: 2025-10-07 14:51:02.40608419 +0000 UTC m=+0.124086350 container start 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, tcib_managed=true)
Oct  7 10:51:02 np0005473739 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [NOTICE]   (415290) : New worker (415292) forked
Oct  7 10:51:02 np0005473739 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [NOTICE]   (415290) : Loading success.
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.462 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.463 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848662.462222, 9b571b54-001b-4da1-8b91-2659b1fbaac6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.464 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] VM Started (Lifecycle Event)#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.466 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.470 2 INFO nova.virt.libvirt.driver [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Instance spawned successfully.#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.470 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.494 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.501 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.501 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.502 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.502 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.502 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.503 2 DEBUG nova.virt.libvirt.driver [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.508 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.543 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.544 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848662.4624236, 9b571b54-001b-4da1-8b91-2659b1fbaac6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.544 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.584 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.588 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848662.4658854, 9b571b54-001b-4da1-8b91-2659b1fbaac6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.588 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.601 2 INFO nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Took 8.78 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.602 2 DEBUG nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.614 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.618 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.656 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.683 2 INFO nova.compute.manager [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Took 9.79 seconds to build instance.#033[00m
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.704 2 DEBUG oslo_concurrency.lockutils [None req-2bf6d535-0883-4940-95e9-e5209b0832e5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:51:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2664: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:51:02 np0005473739 nova_compute[259550]: 2025-10-07 14:51:02.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:51:03 np0005473739 nova_compute[259550]: 2025-10-07 14:51:03.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:03 np0005473739 nova_compute[259550]: 2025-10-07 14:51:03.932 2 DEBUG nova.compute.manager [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:51:03 np0005473739 nova_compute[259550]: 2025-10-07 14:51:03.933 2 DEBUG oslo_concurrency.lockutils [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:51:03 np0005473739 nova_compute[259550]: 2025-10-07 14:51:03.933 2 DEBUG oslo_concurrency.lockutils [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:51:03 np0005473739 nova_compute[259550]: 2025-10-07 14:51:03.933 2 DEBUG oslo_concurrency.lockutils [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:51:03 np0005473739 nova_compute[259550]: 2025-10-07 14:51:03.939 2 DEBUG nova.compute.manager [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] No waiting events found dispatching network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:51:03 np0005473739 nova_compute[259550]: 2025-10-07 14:51:03.939 2 WARNING nova.compute.manager [req-12163b53-0f8e-43dd-8363-2340f1bcb0a8 req-43791f4e-511e-449f-b41c-23d912266732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received unexpected event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:51:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2665: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 67 op/s
Oct  7 10:51:04 np0005473739 nova_compute[259550]: 2025-10-07 14:51:04.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:51:05 np0005473739 nova_compute[259550]: 2025-10-07 14:51:05.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:05 np0005473739 nova_compute[259550]: 2025-10-07 14:51:05.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:51:05 np0005473739 nova_compute[259550]: 2025-10-07 14:51:05.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:51:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2666: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.8 MiB/s wr, 96 op/s
Oct  7 10:51:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:06Z|01563|binding|INFO|Releasing lport 271db248-560d-4b3a-bd65-947c8ea27522 from this chassis (sb_readonly=0)
Oct  7 10:51:06 np0005473739 nova_compute[259550]: 2025-10-07 14:51:06.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:06 np0005473739 NetworkManager[44949]: <info>  [1759848666.8179] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/627)
Oct  7 10:51:06 np0005473739 NetworkManager[44949]: <info>  [1759848666.8186] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/628)
Oct  7 10:51:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:06Z|01564|binding|INFO|Releasing lport 271db248-560d-4b3a-bd65-947c8ea27522 from this chassis (sb_readonly=0)
Oct  7 10:51:06 np0005473739 nova_compute[259550]: 2025-10-07 14:51:06.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:06 np0005473739 nova_compute[259550]: 2025-10-07 14:51:06.852 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:06 np0005473739 nova_compute[259550]: 2025-10-07 14:51:06.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:51:06 np0005473739 nova_compute[259550]: 2025-10-07 14:51:06.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:51:08 np0005473739 nova_compute[259550]: 2025-10-07 14:51:08.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:08 np0005473739 nova_compute[259550]: 2025-10-07 14:51:08.571 2 DEBUG nova.compute.manager [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:51:08 np0005473739 nova_compute[259550]: 2025-10-07 14:51:08.572 2 DEBUG nova.compute.manager [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing instance network info cache due to event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:51:08 np0005473739 nova_compute[259550]: 2025-10-07 14:51:08.572 2 DEBUG oslo_concurrency.lockutils [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:51:08 np0005473739 nova_compute[259550]: 2025-10-07 14:51:08.572 2 DEBUG oslo_concurrency.lockutils [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:51:08 np0005473739 nova_compute[259550]: 2025-10-07 14:51:08.573 2 DEBUG nova.network.neutron [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:51:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2667: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 75 op/s
Oct  7 10:51:09 np0005473739 nova_compute[259550]: 2025-10-07 14:51:09.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_shelved_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:51:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2668: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:51:10 np0005473739 nova_compute[259550]: 2025-10-07 14:51:10.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2669: 305 pgs: 305 active+clean; 88 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:51:12 np0005473739 nova_compute[259550]: 2025-10-07 14:51:12.994 2 DEBUG nova.network.neutron [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updated VIF entry in instance network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:51:12 np0005473739 nova_compute[259550]: 2025-10-07 14:51:12.995 2 DEBUG nova.network.neutron [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:51:13 np0005473739 nova_compute[259550]: 2025-10-07 14:51:13.149 2 DEBUG oslo_concurrency.lockutils [req-3116c987-dbbd-442c-9ca4-0d64dadcd0a6 req-70ba1d85-d699-4e02-98ea-9a92c8f8c369 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:51:13 np0005473739 nova_compute[259550]: 2025-10-07 14:51:13.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:13 np0005473739 nova_compute[259550]: 2025-10-07 14:51:13.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:51:14 np0005473739 nova_compute[259550]: 2025-10-07 14:51:14.051 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:51:14 np0005473739 nova_compute[259550]: 2025-10-07 14:51:14.052 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:51:14 np0005473739 nova_compute[259550]: 2025-10-07 14:51:14.052 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:51:14 np0005473739 nova_compute[259550]: 2025-10-07 14:51:14.052 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:51:14 np0005473739 nova_compute[259550]: 2025-10-07 14:51:14.053 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:51:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:14Z|00190|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:24:52:85 10.100.0.7
Oct  7 10:51:14 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:14Z|00191|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:24:52:85 10.100.0.7
Oct  7 10:51:14 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:51:14 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842761179' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:51:14 np0005473739 nova_compute[259550]: 2025-10-07 14:51:14.551 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:51:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2670: 305 pgs: 305 active+clean; 91 MiB data, 983 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 455 KiB/s wr, 80 op/s
Oct  7 10:51:14 np0005473739 nova_compute[259550]: 2025-10-07 14:51:14.838 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:51:14 np0005473739 nova_compute[259550]: 2025-10-07 14:51:14.838 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.012 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3468MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.159 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 9b571b54-001b-4da1-8b91-2659b1fbaac6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.160 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.160 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.197 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:51:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:51:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1763728664' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.676 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.682 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:15 np0005473739 nova_compute[259550]: 2025-10-07 14:51:15.768 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:51:16 np0005473739 nova_compute[259550]: 2025-10-07 14:51:16.057 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:51:16 np0005473739 nova_compute[259550]: 2025-10-07 14:51:16.058 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:51:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2671: 305 pgs: 305 active+clean; 95 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 970 KiB/s rd, 746 KiB/s wr, 64 op/s
Oct  7 10:51:18 np0005473739 nova_compute[259550]: 2025-10-07 14:51:18.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2672: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 461 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct  7 10:51:19 np0005473739 nova_compute[259550]: 2025-10-07 14:51:19.059 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:51:19 np0005473739 nova_compute[259550]: 2025-10-07 14:51:19.059 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:51:19 np0005473739 nova_compute[259550]: 2025-10-07 14:51:19.059 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:51:19 np0005473739 podman[415350]: 2025-10-07 14:51:19.076709723 +0000 UTC m=+0.058733136 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 10:51:19 np0005473739 podman[415349]: 2025-10-07 14:51:19.082055891 +0000 UTC m=+0.066702086 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Oct  7 10:51:20 np0005473739 nova_compute[259550]: 2025-10-07 14:51:20.062 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:51:20 np0005473739 nova_compute[259550]: 2025-10-07 14:51:20.063 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:51:20 np0005473739 nova_compute[259550]: 2025-10-07 14:51:20.063 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:51:20 np0005473739 nova_compute[259550]: 2025-10-07 14:51:20.063 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:51:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2673: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  7 10:51:20 np0005473739 nova_compute[259550]: 2025-10-07 14:51:20.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:22 np0005473739 nova_compute[259550]: 2025-10-07 14:51:22.668 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2674: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:51:22
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'vms', 'images', 'default.rgw.meta', 'volumes', 'backups', '.mgr', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'cephfs.cephfs.data']
Oct  7 10:51:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:51:22 np0005473739 nova_compute[259550]: 2025-10-07 14:51:22.817 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:51:22 np0005473739 nova_compute[259550]: 2025-10-07 14:51:22.818 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:51:22 np0005473739 nova_compute[259550]: 2025-10-07 14:51:22.818 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:51:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:51:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:51:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:51:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:51:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:51:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:51:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:51:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:51:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:51:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:51:23 np0005473739 nova_compute[259550]: 2025-10-07 14:51:23.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2675: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 326 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:51:25 np0005473739 nova_compute[259550]: 2025-10-07 14:51:25.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:25 np0005473739 nova_compute[259550]: 2025-10-07 14:51:25.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2676: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 1.7 MiB/s wr, 56 op/s
Oct  7 10:51:28 np0005473739 nova_compute[259550]: 2025-10-07 14:51:28.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2677: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 224 KiB/s rd, 1.4 MiB/s wr, 32 op/s
Oct  7 10:51:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2678: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Oct  7 10:51:30 np0005473739 nova_compute[259550]: 2025-10-07 14:51:30.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:31 np0005473739 podman[415387]: 2025-10-07 14:51:31.071320213 +0000 UTC m=+0.057846026 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS)
Oct  7 10:51:31 np0005473739 podman[415388]: 2025-10-07 14:51:31.094883223 +0000 UTC m=+0.078684371 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 10:51:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:51:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1933739025' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:51:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:51:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1933739025' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2679: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075901868854594 of space, bias 1.0, pg target 0.227705606563782 quantized to 32 (current 32)
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:51:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:51:33 np0005473739 nova_compute[259550]: 2025-10-07 14:51:33.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2680: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 13 KiB/s wr, 0 op/s
Oct  7 10:51:35 np0005473739 nova_compute[259550]: 2025-10-07 14:51:35.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2681: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct  7 10:51:37 np0005473739 nova_compute[259550]: 2025-10-07 14:51:37.508 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:38 np0005473739 nova_compute[259550]: 2025-10-07 14:51:38.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2682: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  7 10:51:40 np0005473739 nova_compute[259550]: 2025-10-07 14:51:40.271 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:40.271 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=51, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=50) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:51:40 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:40.273 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:51:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2683: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  7 10:51:40 np0005473739 nova_compute[259550]: 2025-10-07 14:51:40.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2684: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  7 10:51:43 np0005473739 nova_compute[259550]: 2025-10-07 14:51:43.330 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2685: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:51:45 np0005473739 nova_compute[259550]: 2025-10-07 14:51:45.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:51:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 23b38127-8c77-4021-8ac6-4c1eba8d7e52 does not exist
Oct  7 10:51:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 56564b26-f2b0-402b-ab5c-7011145e9b57 does not exist
Oct  7 10:51:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ca1acce2-3704-462a-a24e-ae3103fed068 does not exist
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:51:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:51:45 np0005473739 nova_compute[259550]: 2025-10-07 14:51:45.799 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:51:45 np0005473739 nova_compute[259550]: 2025-10-07 14:51:45.799 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:51:45 np0005473739 nova_compute[259550]: 2025-10-07 14:51:45.841 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:51:45 np0005473739 nova_compute[259550]: 2025-10-07 14:51:45.966 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:51:45 np0005473739 nova_compute[259550]: 2025-10-07 14:51:45.967 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:51:45 np0005473739 nova_compute[259550]: 2025-10-07 14:51:45.978 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:51:45 np0005473739 nova_compute[259550]: 2025-10-07 14:51:45.979 2 INFO nova.compute.claims [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:51:46 np0005473739 nova_compute[259550]: 2025-10-07 14:51:46.262 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:51:46 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:46.275 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '51'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:51:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:51:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:51:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:51:46 np0005473739 podman[415700]: 2025-10-07 14:51:46.428210817 +0000 UTC m=+0.101697228 container create b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:51:46 np0005473739 podman[415700]: 2025-10-07 14:51:46.352970629 +0000 UTC m=+0.026457070 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:51:46 np0005473739 systemd[1]: Started libpod-conmon-b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497.scope.
Oct  7 10:51:46 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:51:46 np0005473739 podman[415700]: 2025-10-07 14:51:46.696187463 +0000 UTC m=+0.369673904 container init b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:51:46 np0005473739 podman[415700]: 2025-10-07 14:51:46.705600806 +0000 UTC m=+0.379087207 container start b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 10:51:46 np0005473739 festive_williamson[415735]: 167 167
Oct  7 10:51:46 np0005473739 systemd[1]: libpod-b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497.scope: Deactivated successfully.
Oct  7 10:51:46 np0005473739 conmon[415735]: conmon b37509a12264de23ee8a <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497.scope/container/memory.events
Oct  7 10:51:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2686: 305 pgs: 305 active+clean; 121 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  7 10:51:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:51:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/132995398' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:51:46 np0005473739 podman[415700]: 2025-10-07 14:51:46.777020313 +0000 UTC m=+0.450506734 container attach b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:51:46 np0005473739 podman[415700]: 2025-10-07 14:51:46.777450003 +0000 UTC m=+0.450936444 container died b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 10:51:46 np0005473739 nova_compute[259550]: 2025-10-07 14:51:46.780 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:51:46 np0005473739 nova_compute[259550]: 2025-10-07 14:51:46.790 2 DEBUG nova.compute.provider_tree [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:51:46 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b93e86c5e0702438d9fb1d06871ce67d37e05cfd56ea222b83c5201d68fe325e-merged.mount: Deactivated successfully.
Oct  7 10:51:46 np0005473739 nova_compute[259550]: 2025-10-07 14:51:46.843 2 DEBUG nova.scheduler.client.report [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:51:46 np0005473739 podman[415700]: 2025-10-07 14:51:46.866161211 +0000 UTC m=+0.539647632 container remove b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=festive_williamson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:51:46 np0005473739 systemd[1]: libpod-conmon-b37509a12264de23ee8aff3997fc84023090aaa914762ce5ee1c80835bfc5497.scope: Deactivated successfully.
Oct  7 10:51:47 np0005473739 podman[415761]: 2025-10-07 14:51:47.039278003 +0000 UTC m=+0.024703497 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:51:47 np0005473739 podman[415761]: 2025-10-07 14:51:47.137327412 +0000 UTC m=+0.122752886 container create 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.149 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 1.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.150 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:51:47 np0005473739 systemd[1]: Started libpod-conmon-140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d.scope.
Oct  7 10:51:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:51:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:47 np0005473739 podman[415761]: 2025-10-07 14:51:47.269413421 +0000 UTC m=+0.254838885 container init 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:51:47 np0005473739 podman[415761]: 2025-10-07 14:51:47.280528575 +0000 UTC m=+0.265954049 container start 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 10:51:47 np0005473739 podman[415761]: 2025-10-07 14:51:47.302981338 +0000 UTC m=+0.288406802 container attach 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.373 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.374 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.523 2 INFO nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.722 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.931 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.933 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.934 2 INFO nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Creating image(s)
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.954 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.975 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:51:47 np0005473739 nova_compute[259550]: 2025-10-07 14:51:47.998 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.002 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.090 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.091 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.092 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.092 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.113 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.119 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 89da30f4-cbe5-4f70-8597-b215d57427e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.164 2 DEBUG nova.policy [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'f756e2b18f7246a48f99aaa3bb77a5c3', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '969793770e3f43a48c49fbd115a24ce2', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.332 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:51:48 np0005473739 magical_euler[415778]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:51:48 np0005473739 magical_euler[415778]: --> relative data size: 1.0
Oct  7 10:51:48 np0005473739 magical_euler[415778]: --> All data devices are unavailable
Oct  7 10:51:48 np0005473739 systemd[1]: libpod-140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d.scope: Deactivated successfully.
Oct  7 10:51:48 np0005473739 systemd[1]: libpod-140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d.scope: Consumed 1.059s CPU time.
Oct  7 10:51:48 np0005473739 podman[415761]: 2025-10-07 14:51:48.417444844 +0000 UTC m=+1.402870338 container died 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:51:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1937279c8bf8f6999014c95b338431a049ef9133ef1f3381bbb042b992cedea4-merged.mount: Deactivated successfully.
Oct  7 10:51:48 np0005473739 podman[415761]: 2025-10-07 14:51:48.604411255 +0000 UTC m=+1.589836729 container remove 140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:51:48 np0005473739 systemd[1]: libpod-conmon-140875aab2b111b045744d4f98dff9f7750a33fea23fa44b5e3ee08d4ea8468d.scope: Deactivated successfully.
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.625 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 89da30f4-cbe5-4f70-8597-b215d57427e0_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.506s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.702 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] resizing rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:51:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2687: 305 pgs: 305 active+clean; 143 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 8.1 KiB/s rd, 560 KiB/s wr, 14 op/s
Oct  7 10:51:48 np0005473739 nova_compute[259550]: 2025-10-07 14:51:48.898 2 DEBUG nova.objects.instance [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lazy-loading 'migration_context' on Instance uuid 89da30f4-cbe5-4f70-8597-b215d57427e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:51:49 np0005473739 nova_compute[259550]: 2025-10-07 14:51:49.164 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:51:49 np0005473739 nova_compute[259550]: 2025-10-07 14:51:49.164 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Ensure instance console log exists: /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:51:49 np0005473739 nova_compute[259550]: 2025-10-07 14:51:49.165 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:51:49 np0005473739 nova_compute[259550]: 2025-10-07 14:51:49.165 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:51:49 np0005473739 nova_compute[259550]: 2025-10-07 14:51:49.165 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:51:49 np0005473739 podman[416123]: 2025-10-07 14:51:49.245857384 +0000 UTC m=+0.037599115 container create 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:51:49 np0005473739 systemd[1]: Started libpod-conmon-678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca.scope.
Oct  7 10:51:49 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:51:49 np0005473739 podman[416123]: 2025-10-07 14:51:49.318733765 +0000 UTC m=+0.110475526 container init 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:51:49 np0005473739 podman[416123]: 2025-10-07 14:51:49.229544597 +0000 UTC m=+0.021286348 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:51:49 np0005473739 podman[416123]: 2025-10-07 14:51:49.327482013 +0000 UTC m=+0.119223754 container start 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:51:49 np0005473739 podman[416123]: 2025-10-07 14:51:49.332277587 +0000 UTC m=+0.124019318 container attach 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:51:49 np0005473739 romantic_gagarin[416141]: 167 167
Oct  7 10:51:49 np0005473739 systemd[1]: libpod-678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca.scope: Deactivated successfully.
Oct  7 10:51:49 np0005473739 podman[416123]: 2025-10-07 14:51:49.336969998 +0000 UTC m=+0.128711739 container died 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 10:51:49 np0005473739 podman[416140]: 2025-10-07 14:51:49.348098232 +0000 UTC m=+0.065390373 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:51:49 np0005473739 podman[416137]: 2025-10-07 14:51:49.350801487 +0000 UTC m=+0.068371636 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:51:49 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a380edfa45d7f4dd9ce1a931a7a0cec048f4db85680172a96cba3284d96f4ff5-merged.mount: Deactivated successfully.
Oct  7 10:51:49 np0005473739 podman[416123]: 2025-10-07 14:51:49.45153721 +0000 UTC m=+0.243278941 container remove 678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_gagarin, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:51:49 np0005473739 systemd[1]: libpod-conmon-678aa6568cd73e1be1be9bde5e238df212096a41120892efea6bfb5092e4beca.scope: Deactivated successfully.
Oct  7 10:51:49 np0005473739 podman[416201]: 2025-10-07 14:51:49.624335556 +0000 UTC m=+0.049556419 container create b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:51:49 np0005473739 systemd[1]: Started libpod-conmon-b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7.scope.
Oct  7 10:51:49 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:51:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e90f6e2dd1b058541592aa495f3ea0c5f5c829fb339b7b75bdfe96209a2d499/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e90f6e2dd1b058541592aa495f3ea0c5f5c829fb339b7b75bdfe96209a2d499/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e90f6e2dd1b058541592aa495f3ea0c5f5c829fb339b7b75bdfe96209a2d499/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:49 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4e90f6e2dd1b058541592aa495f3ea0c5f5c829fb339b7b75bdfe96209a2d499/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:49 np0005473739 podman[416201]: 2025-10-07 14:51:49.604814182 +0000 UTC m=+0.030035075 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:51:49 np0005473739 podman[416201]: 2025-10-07 14:51:49.709953309 +0000 UTC m=+0.135174202 container init b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:51:49 np0005473739 podman[416201]: 2025-10-07 14:51:49.718962903 +0000 UTC m=+0.144183766 container start b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 10:51:49 np0005473739 podman[416201]: 2025-10-07 14:51:49.72304665 +0000 UTC m=+0.148267513 container attach b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct  7 10:51:50 np0005473739 nova_compute[259550]: 2025-10-07 14:51:50.151 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Successfully created port: 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]: {
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:    "0": [
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:        {
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "devices": [
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "/dev/loop3"
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            ],
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_name": "ceph_lv0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_size": "21470642176",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "name": "ceph_lv0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "tags": {
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.cluster_name": "ceph",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.crush_device_class": "",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.encrypted": "0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.osd_id": "0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.type": "block",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.vdo": "0"
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            },
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "type": "block",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "vg_name": "ceph_vg0"
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:        }
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:    ],
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:    "1": [
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:        {
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "devices": [
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "/dev/loop4"
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            ],
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_name": "ceph_lv1",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_size": "21470642176",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "name": "ceph_lv1",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "tags": {
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.cluster_name": "ceph",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.crush_device_class": "",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.encrypted": "0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.osd_id": "1",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.type": "block",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.vdo": "0"
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            },
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "type": "block",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "vg_name": "ceph_vg1"
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:        }
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:    ],
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:    "2": [
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:        {
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "devices": [
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "/dev/loop5"
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            ],
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_name": "ceph_lv2",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_size": "21470642176",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "name": "ceph_lv2",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "tags": {
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.cluster_name": "ceph",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.crush_device_class": "",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.encrypted": "0",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.osd_id": "2",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.type": "block",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:                "ceph.vdo": "0"
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            },
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "type": "block",
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:            "vg_name": "ceph_vg2"
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:        }
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]:    ]
Oct  7 10:51:50 np0005473739 sharp_torvalds[416218]: }
Oct  7 10:51:50 np0005473739 podman[416201]: 2025-10-07 14:51:50.50933063 +0000 UTC m=+0.934551503 container died b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 10:51:50 np0005473739 systemd[1]: libpod-b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7.scope: Deactivated successfully.
Oct  7 10:51:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4e90f6e2dd1b058541592aa495f3ea0c5f5c829fb339b7b75bdfe96209a2d499-merged.mount: Deactivated successfully.
Oct  7 10:51:50 np0005473739 podman[416201]: 2025-10-07 14:51:50.574738053 +0000 UTC m=+0.999958916 container remove b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_torvalds, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 10:51:50 np0005473739 systemd[1]: libpod-conmon-b0df088568a1969bb296bde69c9a2b87a05b70d75e0d77cbe85cc4baa5912da7.scope: Deactivated successfully.
Oct  7 10:51:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2688: 305 pgs: 305 active+clean; 167 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Oct  7 10:51:50 np0005473739 nova_compute[259550]: 2025-10-07 14:51:50.755 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:51 np0005473739 podman[416379]: 2025-10-07 14:51:51.189498137 +0000 UTC m=+0.040163054 container create d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 10:51:51 np0005473739 systemd[1]: Started libpod-conmon-d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696.scope.
Oct  7 10:51:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:51:51 np0005473739 podman[416379]: 2025-10-07 14:51:51.173999699 +0000 UTC m=+0.024664636 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:51:51 np0005473739 podman[416379]: 2025-10-07 14:51:51.271848434 +0000 UTC m=+0.122513381 container init d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:51:51 np0005473739 podman[416379]: 2025-10-07 14:51:51.279153378 +0000 UTC m=+0.129818295 container start d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:51:51 np0005473739 podman[416379]: 2025-10-07 14:51:51.283118911 +0000 UTC m=+0.133783848 container attach d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:51:51 np0005473739 dazzling_thompson[416395]: 167 167
Oct  7 10:51:51 np0005473739 systemd[1]: libpod-d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696.scope: Deactivated successfully.
Oct  7 10:51:51 np0005473739 podman[416379]: 2025-10-07 14:51:51.286594645 +0000 UTC m=+0.137259592 container died d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 10:51:51 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3b7adf8e52656e9c1301aaa4ed37cb2cd20574e1758dde5b37566df63f487a17-merged.mount: Deactivated successfully.
Oct  7 10:51:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:51 np0005473739 podman[416379]: 2025-10-07 14:51:51.326760249 +0000 UTC m=+0.177425166 container remove d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=dazzling_thompson, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:51:51 np0005473739 systemd[1]: libpod-conmon-d1e2658deed5eac7f46a0038f45cdca2048eaa7e7caf8994d90b24f4e8c4f696.scope: Deactivated successfully.
Oct  7 10:51:51 np0005473739 nova_compute[259550]: 2025-10-07 14:51:51.384 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Successfully updated port: 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:51:51 np0005473739 nova_compute[259550]: 2025-10-07 14:51:51.430 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:51:51 np0005473739 nova_compute[259550]: 2025-10-07 14:51:51.431 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquired lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:51:51 np0005473739 nova_compute[259550]: 2025-10-07 14:51:51.431 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:51:51 np0005473739 podman[416418]: 2025-10-07 14:51:51.501175062 +0000 UTC m=+0.040741389 container create b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:51:51 np0005473739 systemd[1]: Started libpod-conmon-b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac.scope.
Oct  7 10:51:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:51:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3504e3312785d7c1a0a1ac352201bf43c509a1297ae5c04a535bc874fcae1c72/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3504e3312785d7c1a0a1ac352201bf43c509a1297ae5c04a535bc874fcae1c72/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3504e3312785d7c1a0a1ac352201bf43c509a1297ae5c04a535bc874fcae1c72/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3504e3312785d7c1a0a1ac352201bf43c509a1297ae5c04a535bc874fcae1c72/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:51 np0005473739 podman[416418]: 2025-10-07 14:51:51.484368143 +0000 UTC m=+0.023934480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:51:51 np0005473739 podman[416418]: 2025-10-07 14:51:51.589142872 +0000 UTC m=+0.128709229 container init b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True)
Oct  7 10:51:51 np0005473739 podman[416418]: 2025-10-07 14:51:51.59747898 +0000 UTC m=+0.137045317 container start b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 10:51:51 np0005473739 podman[416418]: 2025-10-07 14:51:51.603702067 +0000 UTC m=+0.143268404 container attach b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:51:51 np0005473739 nova_compute[259550]: 2025-10-07 14:51:51.633 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:51:51 np0005473739 nova_compute[259550]: 2025-10-07 14:51:51.680 2 DEBUG nova.compute.manager [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:51:51 np0005473739 nova_compute[259550]: 2025-10-07 14:51:51.680 2 DEBUG nova.compute.manager [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing instance network info cache due to event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:51:51 np0005473739 nova_compute[259550]: 2025-10-07 14:51:51.680 2 DEBUG oslo_concurrency.lockutils [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]: {
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "osd_id": 2,
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "type": "bluestore"
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:    },
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "osd_id": 1,
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "type": "bluestore"
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:    },
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "osd_id": 0,
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:        "type": "bluestore"
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]:    }
Oct  7 10:51:52 np0005473739 determined_satoshi[416434]: }
Oct  7 10:51:52 np0005473739 systemd[1]: libpod-b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac.scope: Deactivated successfully.
Oct  7 10:51:52 np0005473739 podman[416418]: 2025-10-07 14:51:52.560718983 +0000 UTC m=+1.100285320 container died b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 10:51:52 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3504e3312785d7c1a0a1ac352201bf43c509a1297ae5c04a535bc874fcae1c72-merged.mount: Deactivated successfully.
Oct  7 10:51:52 np0005473739 podman[416418]: 2025-10-07 14:51:52.619409808 +0000 UTC m=+1.158976145 container remove b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_satoshi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 10:51:52 np0005473739 systemd[1]: libpod-conmon-b4b7c8dbe4346ba614b15264d1257e47527fd0c84f5a9a742c49069bdefd90ac.scope: Deactivated successfully.
Oct  7 10:51:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:51:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:51:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:51:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:51:52 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e36d1196-efed-4b03-8ce2-99ba3b4e2113 does not exist
Oct  7 10:51:52 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0b4aba54-208b-4412-a8f8-b6a889c554e5 does not exist
Oct  7 10:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:51:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:51:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2689: 305 pgs: 305 active+clean; 167 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 8.2 KiB/s rd, 1.8 MiB/s wr, 16 op/s
Oct  7 10:51:53 np0005473739 nova_compute[259550]: 2025-10-07 14:51:53.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:53 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:51:53 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:51:53 np0005473739 nova_compute[259550]: 2025-10-07 14:51:53.964 2 DEBUG nova.network.neutron [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updating instance_info_cache with network_info: [{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.034 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Releasing lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.035 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Instance network_info: |[{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.035 2 DEBUG oslo_concurrency.lockutils [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.035 2 DEBUG nova.network.neutron [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.039 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Start _get_guest_xml network_info=[{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.043 2 WARNING nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.050 2 DEBUG nova.virt.libvirt.host [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.051 2 DEBUG nova.virt.libvirt.host [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.055 2 DEBUG nova.virt.libvirt.host [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.056 2 DEBUG nova.virt.libvirt.host [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.056 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.056 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.057 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.057 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.057 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.057 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.057 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.058 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.058 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.058 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.058 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.058 2 DEBUG nova.virt.hardware [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.061 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:51:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:51:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2298750681' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.535 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.560 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:51:54 np0005473739 nova_compute[259550]: 2025-10-07 14:51:54.564 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:51:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2690: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:51:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:51:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1626967430' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.008 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.010 2 DEBUG nova.virt.libvirt.vif [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-987945970-acc',id=143,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKkGL7CpbVdMuxEtVgVuwQZuX1JagsFc33SU+kYXfEMvDW2NlVQUlEd18hOTn1D9VxBpXbBu9ycn+w26VHLMwz7Ov9eXBoA6pi1tNnQXvgt/I9xVMdBkIMeLhNwY3Vzj3A==',key_name='tempest-TestSecurityGroupsBasicOps-1225276649',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='969793770e3f43a48c49fbd115a24ce2',ramdisk_id='',reservation_id='r-iv0cq43w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-987945970',owner_user_name='tempest-TestSecurityGroupsBasicOps-987945970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:51:47Z,user_data=None,user_id='f756e2b18f7246a48f99aaa3bb77a5c3',uuid=89da30f4-cbe5-4f70-8597-b215d57427e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.011 2 DEBUG nova.network.os_vif_util [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converting VIF {"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.011 2 DEBUG nova.network.os_vif_util [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.013 2 DEBUG nova.objects.instance [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89da30f4-cbe5-4f70-8597-b215d57427e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.103 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  <uuid>89da30f4-cbe5-4f70-8597-b215d57427e0</uuid>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  <name>instance-0000008f</name>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505</nova:name>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:51:54</nova:creationTime>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <nova:user uuid="f756e2b18f7246a48f99aaa3bb77a5c3">tempest-TestSecurityGroupsBasicOps-987945970-project-member</nova:user>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <nova:project uuid="969793770e3f43a48c49fbd115a24ce2">tempest-TestSecurityGroupsBasicOps-987945970</nova:project>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <nova:port uuid="5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <entry name="serial">89da30f4-cbe5-4f70-8597-b215d57427e0</entry>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <entry name="uuid">89da30f4-cbe5-4f70-8597-b215d57427e0</entry>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/89da30f4-cbe5-4f70-8597-b215d57427e0_disk">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:a4:e9:5b"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <target dev="tap5ad3e5c8-f8"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/console.log" append="off"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:51:55 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:51:55 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:51:55 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:51:55 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.104 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Preparing to wait for external event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.104 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.105 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.105 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.105 2 DEBUG nova.virt.libvirt.vif [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-987945970-acc',id=143,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKkGL7CpbVdMuxEtVgVuwQZuX1JagsFc33SU+kYXfEMvDW2NlVQUlEd18hOTn1D9VxBpXbBu9ycn+w26VHLMwz7Ov9eXBoA6pi1tNnQXvgt/I9xVMdBkIMeLhNwY3Vzj3A==',key_name='tempest-TestSecurityGroupsBasicOps-1225276649',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='969793770e3f43a48c49fbd115a24ce2',ramdisk_id='',reservation_id='r-iv0cq43w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-987945970',owner_user_name='tempest-TestSecurityGroupsBasicOps-987945970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:51:47Z,user_data=None,user_id='f756e2b18f7246a48f99aaa3bb77a5c3',uuid=89da30f4-cbe5-4f70-8597-b215d57427e0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.106 2 DEBUG nova.network.os_vif_util [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converting VIF {"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.106 2 DEBUG nova.network.os_vif_util [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.107 2 DEBUG os_vif [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.107 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.108 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.113 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5ad3e5c8-f8, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.114 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap5ad3e5c8-f8, col_values=(('external_ids', {'iface-id': '5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:e9:5b', 'vm-uuid': '89da30f4-cbe5-4f70-8597-b215d57427e0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.115 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:55 np0005473739 NetworkManager[44949]: <info>  [1759848715.1171] manager: (tap5ad3e5c8-f8): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/629)
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.126 2 INFO os_vif [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8')#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.217 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.217 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.217 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] No VIF found with MAC fa:16:3e:a4:e9:5b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.218 2 INFO nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Using config drive#033[00m
Oct  7 10:51:55 np0005473739 nova_compute[259550]: 2025-10-07 14:51:55.240 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:51:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2691: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #129. Immutable memtables: 0.
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.790652) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 77] Flushing memtable with next log file: 129
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716790734, "job": 77, "event": "flush_started", "num_memtables": 1, "num_entries": 2063, "num_deletes": 251, "total_data_size": 3430374, "memory_usage": 3490128, "flush_reason": "Manual Compaction"}
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 77] Level-0 flush table #130: started
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716810880, "cf_name": "default", "job": 77, "event": "table_file_creation", "file_number": 130, "file_size": 3363784, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 54420, "largest_seqno": 56482, "table_properties": {"data_size": 3354361, "index_size": 5980, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 19058, "raw_average_key_size": 20, "raw_value_size": 3335654, "raw_average_value_size": 3533, "num_data_blocks": 265, "num_entries": 944, "num_filter_entries": 944, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848488, "oldest_key_time": 1759848488, "file_creation_time": 1759848716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 130, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 77] Flush lasted 20613 microseconds, and 7300 cpu microseconds.
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.811273) [db/flush_job.cc:967] [default] [JOB 77] Level-0 flush table #130: 3363784 bytes OK
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.811303) [db/memtable_list.cc:519] [default] Level-0 commit table #130 started
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.813956) [db/memtable_list.cc:722] [default] Level-0 commit table #130: memtable #1 done
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.813971) EVENT_LOG_v1 {"time_micros": 1759848716813966, "job": 77, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.813986) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 77] Try to delete WAL files size 3421706, prev total WAL file size 3421706, number of live WAL files 2.
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000126.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.814854) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035323731' seq:72057594037927935, type:22 .. '7061786F730035353233' seq:0, type:0; will stop at (end)
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 78] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 77 Base level 0, inputs: [130(3284KB)], [128(8122KB)]
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716814886, "job": 78, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [130], "files_L6": [128], "score": -1, "input_data_size": 11681275, "oldest_snapshot_seqno": -1}
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 78] Generated table #131: 7736 keys, 10006781 bytes, temperature: kUnknown
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716873631, "cf_name": "default", "job": 78, "event": "table_file_creation", "file_number": 131, "file_size": 10006781, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9956386, "index_size": 29930, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19397, "raw_key_size": 200978, "raw_average_key_size": 25, "raw_value_size": 9819570, "raw_average_value_size": 1269, "num_data_blocks": 1169, "num_entries": 7736, "num_filter_entries": 7736, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848716, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 131, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.873853) [db/compaction/compaction_job.cc:1663] [default] [JOB 78] Compacted 1@0 + 1@6 files to L6 => 10006781 bytes
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.876748) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 198.6 rd, 170.2 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.9 +0.0 blob) out(9.5 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8250, records dropped: 514 output_compression: NoCompression
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.876767) EVENT_LOG_v1 {"time_micros": 1759848716876758, "job": 78, "event": "compaction_finished", "compaction_time_micros": 58807, "compaction_time_cpu_micros": 24269, "output_level": 6, "num_output_files": 1, "total_output_size": 10006781, "num_input_records": 8250, "num_output_records": 7736, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000130.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716877504, "job": 78, "event": "table_file_deletion", "file_number": 130}
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000128.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848716879292, "job": 78, "event": "table_file_deletion", "file_number": 128}
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.814764) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.879345) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.879352) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.879354) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.879356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:51:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:51:56.879358) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:51:56 np0005473739 nova_compute[259550]: 2025-10-07 14:51:56.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:51:56 np0005473739 nova_compute[259550]: 2025-10-07 14:51:56.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 10:51:57 np0005473739 nova_compute[259550]: 2025-10-07 14:51:57.144 2 INFO nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Creating config drive at /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config#033[00m
Oct  7 10:51:57 np0005473739 nova_compute[259550]: 2025-10-07 14:51:57.149 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp92zjtbvh execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:51:57 np0005473739 nova_compute[259550]: 2025-10-07 14:51:57.299 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp92zjtbvh" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:51:57 np0005473739 nova_compute[259550]: 2025-10-07 14:51:57.328 2 DEBUG nova.storage.rbd_utils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] rbd image 89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:51:57 np0005473739 nova_compute[259550]: 2025-10-07 14:51:57.332 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config 89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:51:57 np0005473739 nova_compute[259550]: 2025-10-07 14:51:57.377 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 10:51:57 np0005473739 nova_compute[259550]: 2025-10-07 14:51:57.513 2 DEBUG oslo_concurrency.processutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config 89da30f4-cbe5-4f70-8597-b215d57427e0_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.181s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:51:57 np0005473739 nova_compute[259550]: 2025-10-07 14:51:57.514 2 INFO nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Deleting local config drive /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0/disk.config because it was imported into RBD.#033[00m
Oct  7 10:51:57 np0005473739 kernel: tap5ad3e5c8-f8: entered promiscuous mode
Oct  7 10:51:57 np0005473739 NetworkManager[44949]: <info>  [1759848717.5702] manager: (tap5ad3e5c8-f8): new Tun device (/org/freedesktop/NetworkManager/Devices/630)
Oct  7 10:51:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:57Z|01565|binding|INFO|Claiming lport 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a for this chassis.
Oct  7 10:51:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:57Z|01566|binding|INFO|5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a: Claiming fa:16:3e:a4:e9:5b 10.100.0.9
Oct  7 10:51:57 np0005473739 nova_compute[259550]: 2025-10-07 14:51:57.571 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:57Z|01567|binding|INFO|Setting lport 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a ovn-installed in OVS
Oct  7 10:51:57 np0005473739 nova_compute[259550]: 2025-10-07 14:51:57.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:57 np0005473739 nova_compute[259550]: 2025-10-07 14:51:57.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:57 np0005473739 systemd-udevd[416661]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:51:57 np0005473739 NetworkManager[44949]: <info>  [1759848717.6167] device (tap5ad3e5c8-f8): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:51:57 np0005473739 NetworkManager[44949]: <info>  [1759848717.6180] device (tap5ad3e5c8-f8): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:51:57 np0005473739 systemd-machined[214580]: New machine qemu-177-instance-0000008f.
Oct  7 10:51:57 np0005473739 systemd[1]: Started Virtual Machine qemu-177-instance-0000008f.
Oct  7 10:51:57 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:57Z|01568|binding|INFO|Setting lport 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a up in Southbound
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.762 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:e9:5b 10.100.0.9'], port_security=['fa:16:3e:a4:e9:5b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '89da30f4-cbe5-4f70-8597-b215d57427e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0572f71-e177-44ba-b973-868f94fd5edc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '969793770e3f43a48c49fbd115a24ce2', 'neutron:revision_number': '2', 'neutron:security_group_ids': '246aff8e-3934-45b5-b52a-548706ef8690 bc0fccda-f204-449b-a489-9db51a7ff6e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9fa489b-3f19-42ed-a2d8-cef4ea357478, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.763 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a in datapath d0572f71-e177-44ba-b973-868f94fd5edc bound to our chassis#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.765 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d0572f71-e177-44ba-b973-868f94fd5edc#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.778 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[30acecf5-2b0b-4ec1-a2b8-9f9de41925d9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.779 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd0572f71-e1 in ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.784 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd0572f71-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.785 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e057764a-50c3-443d-9296-937c28148720]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.785 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fdd5891-6350-44ff-8c38-88f0fb9e5432]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.800 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[1c24016b-d18f-44f9-9ef8-090044e462f5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.829 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5358ed78-0356-47fb-89b1-e680af184fe4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.855 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7aa63b11-9511-4463-bbb1-dbfdd9a82a80]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.860 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee891db6-03a4-4458-9a14-29ae37776fb4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 NetworkManager[44949]: <info>  [1759848717.8622] manager: (tapd0572f71-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/631)
Oct  7 10:51:57 np0005473739 systemd-udevd[416665]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.896 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[559acc35-3756-472e-ae84-a4b4ec036c98]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.899 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[eca288f1-2a23-4617-b335-7181683837bb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 NetworkManager[44949]: <info>  [1759848717.9293] device (tapd0572f71-e0): carrier: link connected
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.936 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d67d8f70-2b5c-4b96-8542-ed42e62710b5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.958 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0cae99da-1cfd-4521-b832-4eeb4b7d0094]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0572f71-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:76:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 451], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930149, 'reachable_time': 19606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 416697, 'error': None, 'target': 'ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.975 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[761f0934-4264-47ba-9770-63d782bd78d8]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe43:768d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 930149, 'tstamp': 930149}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 416698, 'error': None, 'target': 'ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:57 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:57.993 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7883cbc9-52ff-4b17-8b76-cf4a7978254a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd0572f71-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:43:76:8d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 451], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930149, 'reachable_time': 19606, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 416699, 'error': None, 'target': 'ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.020 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b3eab527-beca-446b-90df-d2e9ed3e7a03]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.099 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[dd46ac78-c38a-4d13-8667-9d1ade8e99cb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.100 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0572f71-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.100 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.101 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd0572f71-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:58 np0005473739 NetworkManager[44949]: <info>  [1759848718.1036] manager: (tapd0572f71-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/632)
Oct  7 10:51:58 np0005473739 kernel: tapd0572f71-e0: entered promiscuous mode
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.105 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd0572f71-e0, col_values=(('external_ids', {'iface-id': 'd9a91367-648e-44ca-a4fb-a9159c3c71a6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:51:58 np0005473739 ovn_controller[151684]: 2025-10-07T14:51:58Z|01569|binding|INFO|Releasing lport d9a91367-648e-44ca-a4fb-a9159c3c71a6 from this chassis (sb_readonly=0)
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.121 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d0572f71-e177-44ba-b973-868f94fd5edc.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d0572f71-e177-44ba-b973-868f94fd5edc.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.122 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[51a0b905-cfa9-4bbd-8150-5779a096a8f4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.123 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-d0572f71-e177-44ba-b973-868f94fd5edc
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/d0572f71-e177-44ba-b973-868f94fd5edc.pid.haproxy
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID d0572f71-e177-44ba-b973-868f94fd5edc
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:51:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:51:58.123 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc', 'env', 'PROCESS_TAG=haproxy-d0572f71-e177-44ba-b973-868f94fd5edc', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d0572f71-e177-44ba-b973-868f94fd5edc.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.335 2 DEBUG nova.compute.manager [req-d601dca1-0bd8-421b-980e-e21fe632f460 req-4b959a63-770c-4c35-9a42-966d446f92a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.335 2 DEBUG oslo_concurrency.lockutils [req-d601dca1-0bd8-421b-980e-e21fe632f460 req-4b959a63-770c-4c35-9a42-966d446f92a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.336 2 DEBUG oslo_concurrency.lockutils [req-d601dca1-0bd8-421b-980e-e21fe632f460 req-4b959a63-770c-4c35-9a42-966d446f92a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.336 2 DEBUG oslo_concurrency.lockutils [req-d601dca1-0bd8-421b-980e-e21fe632f460 req-4b959a63-770c-4c35-9a42-966d446f92a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.336 2 DEBUG nova.compute.manager [req-d601dca1-0bd8-421b-980e-e21fe632f460 req-4b959a63-770c-4c35-9a42-966d446f92a6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Processing event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.337 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:51:58 np0005473739 podman[416772]: 2025-10-07 14:51:58.553418568 +0000 UTC m=+0.099876703 container create a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  7 10:51:58 np0005473739 podman[416772]: 2025-10-07 14:51:58.476753497 +0000 UTC m=+0.023211652 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:51:58 np0005473739 systemd[1]: Started libpod-conmon-a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620.scope.
Oct  7 10:51:58 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:51:58 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5b1bb2847302638c83cc4ed2dbe5103686edd071cff651ce463463f585fcfd30/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:51:58 np0005473739 podman[416772]: 2025-10-07 14:51:58.642914525 +0000 UTC m=+0.189372700 container init a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:51:58 np0005473739 podman[416772]: 2025-10-07 14:51:58.651600641 +0000 UTC m=+0.198058776 container start a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3)
Oct  7 10:51:58 np0005473739 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [NOTICE]   (416791) : New worker (416793) forked
Oct  7 10:51:58 np0005473739 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [NOTICE]   (416791) : Loading success.
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.677 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.678 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848718.6778185, 89da30f4-cbe5-4f70-8597-b215d57427e0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.678 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] VM Started (Lifecycle Event)#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.686 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.693 2 INFO nova.virt.libvirt.driver [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Instance spawned successfully.#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.693 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:51:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2692: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 37 op/s
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.893 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.897 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.969 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.970 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.970 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.970 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.971 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:58 np0005473739 nova_compute[259550]: 2025-10-07 14:51:58.971 2 DEBUG nova.virt.libvirt.driver [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.046 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.047 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848718.67813, 89da30f4-cbe5-4f70-8597-b215d57427e0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.047 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.099 2 DEBUG nova.network.neutron [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updated VIF entry in instance network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.100 2 DEBUG nova.network.neutron [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updating instance_info_cache with network_info: [{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.510 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.512 2 DEBUG oslo_concurrency.lockutils [req-f9282397-1606-4021-8a5a-9efb3dace988 req-cd6cca8b-796f-42eb-aa64-ca85fb1b0e59 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.515 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848718.6808052, 89da30f4-cbe5-4f70-8597-b215d57427e0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.515 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.537 2 INFO nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Took 11.61 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.538 2 DEBUG nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.575 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.578 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.622 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.641 2 INFO nova.compute.manager [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Took 13.71 seconds to build instance.#033[00m
Oct  7 10:51:59 np0005473739 nova_compute[259550]: 2025-10-07 14:51:59.692 2 DEBUG oslo_concurrency.lockutils [None req-27b9ed63-2aa2-4e61-99b4-4b0e1eb32174 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.893s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:52:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:00.086 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:52:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:00.087 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:52:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:00.088 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:00 np0005473739 nova_compute[259550]: 2025-10-07 14:52:00.116 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:00 np0005473739 nova_compute[259550]: 2025-10-07 14:52:00.465 2 DEBUG nova.compute.manager [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:52:00 np0005473739 nova_compute[259550]: 2025-10-07 14:52:00.466 2 DEBUG oslo_concurrency.lockutils [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:52:00 np0005473739 nova_compute[259550]: 2025-10-07 14:52:00.466 2 DEBUG oslo_concurrency.lockutils [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:52:00 np0005473739 nova_compute[259550]: 2025-10-07 14:52:00.467 2 DEBUG oslo_concurrency.lockutils [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:00 np0005473739 nova_compute[259550]: 2025-10-07 14:52:00.467 2 DEBUG nova.compute.manager [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] No waiting events found dispatching network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:52:00 np0005473739 nova_compute[259550]: 2025-10-07 14:52:00.468 2 WARNING nova.compute.manager [req-29c03693-d7b8-401b-b42e-330791c466f6 req-61788503-cf5f-4c7d-90bd-ca469f782e79 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received unexpected event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a for instance with vm_state active and task_state None.
Oct  7 10:52:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2693: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 1.2 MiB/s wr, 64 op/s
Oct  7 10:52:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:01 np0005473739 nova_compute[259550]: 2025-10-07 14:52:01.377 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:52:02 np0005473739 podman[416802]: 2025-10-07 14:52:02.072809707 +0000 UTC m=+0.060500149 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Oct  7 10:52:02 np0005473739 podman[416803]: 2025-10-07 14:52:02.098303613 +0000 UTC m=+0.083100716 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:52:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2694: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 13 KiB/s wr, 62 op/s
Oct  7 10:52:02 np0005473739 nova_compute[259550]: 2025-10-07 14:52:02.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:52:03 np0005473739 nova_compute[259550]: 2025-10-07 14:52:03.339 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2695: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 84 op/s
Oct  7 10:52:05 np0005473739 nova_compute[259550]: 2025-10-07 14:52:05.119 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:05 np0005473739 nova_compute[259550]: 2025-10-07 14:52:05.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:52:05 np0005473739 nova_compute[259550]: 2025-10-07 14:52:05.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:52:05 np0005473739 nova_compute[259550]: 2025-10-07 14:52:05.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Oct  7 10:52:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2696: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct  7 10:52:07 np0005473739 nova_compute[259550]: 2025-10-07 14:52:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:52:08 np0005473739 nova_compute[259550]: 2025-10-07 14:52:08.021 2 DEBUG nova.compute.manager [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:52:08 np0005473739 nova_compute[259550]: 2025-10-07 14:52:08.022 2 DEBUG nova.compute.manager [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing instance network info cache due to event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:52:08 np0005473739 nova_compute[259550]: 2025-10-07 14:52:08.022 2 DEBUG oslo_concurrency.lockutils [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:52:08 np0005473739 nova_compute[259550]: 2025-10-07 14:52:08.022 2 DEBUG oslo_concurrency.lockutils [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:52:08 np0005473739 nova_compute[259550]: 2025-10-07 14:52:08.023 2 DEBUG nova.network.neutron [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:52:08 np0005473739 nova_compute[259550]: 2025-10-07 14:52:08.341 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2697: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 21 KiB/s wr, 74 op/s
Oct  7 10:52:08 np0005473739 nova_compute[259550]: 2025-10-07 14:52:08.901 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:52:09 np0005473739 nova_compute[259550]: 2025-10-07 14:52:09.083 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  7 10:52:09 np0005473739 nova_compute[259550]: 2025-10-07 14:52:09.084 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Triggering sync for uuid 89da30f4-cbe5-4f70-8597-b215d57427e0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Oct  7 10:52:09 np0005473739 nova_compute[259550]: 2025-10-07 14:52:09.084 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:52:09 np0005473739 nova_compute[259550]: 2025-10-07 14:52:09.084 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:52:09 np0005473739 nova_compute[259550]: 2025-10-07 14:52:09.085 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:52:09 np0005473739 nova_compute[259550]: 2025-10-07 14:52:09.085 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:52:09 np0005473739 nova_compute[259550]: 2025-10-07 14:52:09.085 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:52:09 np0005473739 nova_compute[259550]: 2025-10-07 14:52:09.344 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:09 np0005473739 nova_compute[259550]: 2025-10-07 14:52:09.345 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.261s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:09 np0005473739 nova_compute[259550]: 2025-10-07 14:52:09.426 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:52:10 np0005473739 nova_compute[259550]: 2025-10-07 14:52:10.120 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:10 np0005473739 nova_compute[259550]: 2025-10-07 14:52:10.496 2 DEBUG nova.network.neutron [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updated VIF entry in instance network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:52:10 np0005473739 nova_compute[259550]: 2025-10-07 14:52:10.497 2 DEBUG nova.network.neutron [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updating instance_info_cache with network_info: [{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:52:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2698: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 930 KiB/s wr, 77 op/s
Oct  7 10:52:10 np0005473739 nova_compute[259550]: 2025-10-07 14:52:10.769 2 DEBUG oslo_concurrency.lockutils [req-f6c6af4a-d97b-41fb-8657-e122586735dd req-5e72d569-f201-4bca-940d-84cd65af43d7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:52:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:52:11Z|00192|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:a4:e9:5b 10.100.0.9
Oct  7 10:52:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:52:11Z|00193|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:a4:e9:5b 10.100.0.9
Oct  7 10:52:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2699: 305 pgs: 305 active+clean; 174 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 755 KiB/s rd, 930 KiB/s wr, 35 op/s
Oct  7 10:52:13 np0005473739 nova_compute[259550]: 2025-10-07 14:52:13.343 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:52:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.0 total, 600.0 interval#012Cumulative writes: 41K writes, 167K keys, 41K commit groups, 1.0 writes per commit group, ingest: 0.17 GB, 0.04 MB/s#012Cumulative WAL: 41K writes, 14K syncs, 2.80 writes per sync, written: 0.17 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 5090 writes, 20K keys, 5090 commit groups, 1.0 writes per commit group, ingest: 24.09 MB, 0.04 MB/s#012Interval WAL: 5090 writes, 1904 syncs, 2.67 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:52:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2700: 305 pgs: 305 active+clean; 198 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 914 KiB/s rd, 2.1 MiB/s wr, 81 op/s
Oct  7 10:52:15 np0005473739 nova_compute[259550]: 2025-10-07 14:52:15.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:15 np0005473739 nova_compute[259550]: 2025-10-07 14:52:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.081 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.081 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.081 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.082 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.082 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:52:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:52:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3413451029' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.555 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.727 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.727 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008e as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.731 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.732 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-0000008f as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231
Oct  7 10:52:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2701: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.903 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.904 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3235MB free_disk=59.89751052856445GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.904 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:52:16 np0005473739 nova_compute[259550]: 2025-10-07 14:52:16.904 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.246 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 9b571b54-001b-4da1-8b91-2659b1fbaac6 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.246 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 89da30f4-cbe5-4f70-8597-b215d57427e0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.246 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.247 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.317 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:52:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:52:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/371424317' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.886 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.569s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.893 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.918 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.944 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.944 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.040s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.945 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:52:17 np0005473739 nova_compute[259550]: 2025-10-07 14:52:17.945 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 10:52:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:52:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s#012Cumulative WAL: 46K writes, 16K syncs, 2.77 writes per sync, written: 0.18 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4220 writes, 19K keys, 4220 commit groups, 1.0 writes per commit group, ingest: 23.74 MB, 0.04 MB/s#012Interval WAL: 4220 writes, 1544 syncs, 2.73 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:52:18 np0005473739 nova_compute[259550]: 2025-10-07 14:52:18.346 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2702: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:52:19 np0005473739 nova_compute[259550]: 2025-10-07 14:52:19.962 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:52:19 np0005473739 nova_compute[259550]: 2025-10-07 14:52:19.962 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:52:19 np0005473739 nova_compute[259550]: 2025-10-07 14:52:19.963 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:52:20 np0005473739 podman[416893]: 2025-10-07 14:52:20.074566916 +0000 UTC m=+0.058577943 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=iscsid)
Oct  7 10:52:20 np0005473739 podman[416892]: 2025-10-07 14:52:20.109949326 +0000 UTC m=+0.096737139 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 10:52:20 np0005473739 nova_compute[259550]: 2025-10-07 14:52:20.125 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:20 np0005473739 nova_compute[259550]: 2025-10-07 14:52:20.272 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:52:20 np0005473739 nova_compute[259550]: 2025-10-07 14:52:20.272 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:52:20 np0005473739 nova_compute[259550]: 2025-10-07 14:52:20.272 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:52:20 np0005473739 nova_compute[259550]: 2025-10-07 14:52:20.272 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:52:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2703: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:52:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:22 np0005473739 nova_compute[259550]: 2025-10-07 14:52:22.199 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:52:22 np0005473739 nova_compute[259550]: 2025-10-07 14:52:22.228 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:52:22 np0005473739 nova_compute[259550]: 2025-10-07 14:52:22.228 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:52:22 np0005473739 nova_compute[259550]: 2025-10-07 14:52:22.229 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:52:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 10:52:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 33K writes, 134K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 33K writes, 11K syncs, 2.82 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2785 writes, 10K keys, 2785 commit groups, 1.0 writes per commit group, ingest: 11.72 MB, 0.02 MB/s#012Interval WAL: 2785 writes, 1097 syncs, 2.54 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2704: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 1.2 MiB/s wr, 51 op/s
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:52:22
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'backups', '.mgr', '.rgw.root', 'images', 'volumes', 'default.rgw.meta', 'vms']
Oct  7 10:52:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:52:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:52:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:52:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:52:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:52:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:52:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:52:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:52:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:52:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:52:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:52:23 np0005473739 nova_compute[259550]: 2025-10-07 14:52:23.245 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:52:23 np0005473739 nova_compute[259550]: 2025-10-07 14:52:23.348 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:24 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 10:52:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2705: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 206 KiB/s rd, 1.2 MiB/s wr, 51 op/s
Oct  7 10:52:25 np0005473739 nova_compute[259550]: 2025-10-07 14:52:25.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:26.442 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=52, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=51) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:52:26 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:26.444 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:52:26 np0005473739 nova_compute[259550]: 2025-10-07 14:52:26.471 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2706: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 28 KiB/s wr, 6 op/s
Oct  7 10:52:28 np0005473739 nova_compute[259550]: 2025-10-07 14:52:28.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2707: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 12 KiB/s wr, 0 op/s
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.290 2 DEBUG nova.compute.manager [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.290 2 DEBUG nova.compute.manager [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing instance network info cache due to event network-changed-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.290 2 DEBUG oslo_concurrency.lockutils [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.291 2 DEBUG oslo_concurrency.lockutils [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.291 2 DEBUG nova.network.neutron [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Refreshing network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.358 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.358 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.359 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.359 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.359 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.361 2 INFO nova.compute.manager [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Terminating instance#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.362 2 DEBUG nova.compute.manager [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:52:29 np0005473739 kernel: tap5ad3e5c8-f8 (unregistering): left promiscuous mode
Oct  7 10:52:29 np0005473739 NetworkManager[44949]: <info>  [1759848749.4837] device (tap5ad3e5c8-f8): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:52:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:52:29Z|01570|binding|INFO|Releasing lport 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a from this chassis (sb_readonly=0)
Oct  7 10:52:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:52:29Z|01571|binding|INFO|Setting lport 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a down in Southbound
Oct  7 10:52:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:52:29Z|01572|binding|INFO|Removing iface tap5ad3e5c8-f8 ovn-installed in OVS
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:29.505 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:e9:5b 10.100.0.9'], port_security=['fa:16:3e:a4:e9:5b 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '89da30f4-cbe5-4f70-8597-b215d57427e0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d0572f71-e177-44ba-b973-868f94fd5edc', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '969793770e3f43a48c49fbd115a24ce2', 'neutron:revision_number': '4', 'neutron:security_group_ids': '246aff8e-3934-45b5-b52a-548706ef8690 bc0fccda-f204-449b-a489-9db51a7ff6e3', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=e9fa489b-3f19-42ed-a2d8-cef4ea357478, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:52:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:29.506 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a in datapath d0572f71-e177-44ba-b973-868f94fd5edc unbound from our chassis#033[00m
Oct  7 10:52:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:29.507 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d0572f71-e177-44ba-b973-868f94fd5edc, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:52:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:29.509 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c82d11cd-877d-43d5-9661-5773a588010b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:29 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:29.510 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc namespace which is not needed anymore#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:29 np0005473739 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Deactivated successfully.
Oct  7 10:52:29 np0005473739 systemd[1]: machine-qemu\x2d177\x2dinstance\x2d0000008f.scope: Consumed 14.840s CPU time.
Oct  7 10:52:29 np0005473739 systemd-machined[214580]: Machine qemu-177-instance-0000008f terminated.
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.598 2 INFO nova.virt.libvirt.driver [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Instance destroyed successfully.#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.600 2 DEBUG nova.objects.instance [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lazy-loading 'resources' on Instance uuid 89da30f4-cbe5-4f70-8597-b215d57427e0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.669 2 DEBUG nova.virt.libvirt.vif [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:51:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-987945970-access_point-838105505',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-987945970-acc',id=143,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBKkGL7CpbVdMuxEtVgVuwQZuX1JagsFc33SU+kYXfEMvDW2NlVQUlEd18hOTn1D9VxBpXbBu9ycn+w26VHLMwz7Ov9eXBoA6pi1tNnQXvgt/I9xVMdBkIMeLhNwY3Vzj3A==',key_name='tempest-TestSecurityGroupsBasicOps-1225276649',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:51:59Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='969793770e3f43a48c49fbd115a24ce2',ramdisk_id='',reservation_id='r-iv0cq43w',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-987945970',owner_user_name='tempest-TestSecurityGroupsBasicOps-987945970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:51:59Z,user_data=None,user_id='f756e2b18f7246a48f99aaa3bb77a5c3',uuid=89da30f4-cbe5-4f70-8597-b215d57427e0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.671 2 DEBUG nova.network.os_vif_util [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converting VIF {"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.182", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.672 2 DEBUG nova.network.os_vif_util [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.673 2 DEBUG os_vif [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.675 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5ad3e5c8-f8, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:52:29 np0005473739 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [NOTICE]   (416791) : haproxy version is 2.8.14-c23fe91
Oct  7 10:52:29 np0005473739 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [NOTICE]   (416791) : path to executable is /usr/sbin/haproxy
Oct  7 10:52:29 np0005473739 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [WARNING]  (416791) : Exiting Master process...
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.682 2 INFO os_vif [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:a4:e9:5b,bridge_name='br-int',has_traffic_filtering=True,id=5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a,network=Network(d0572f71-e177-44ba-b973-868f94fd5edc),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap5ad3e5c8-f8')#033[00m
Oct  7 10:52:29 np0005473739 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [ALERT]    (416791) : Current worker (416793) exited with code 143 (Terminated)
Oct  7 10:52:29 np0005473739 neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc[416787]: [WARNING]  (416791) : All workers exited. Exiting... (0)
Oct  7 10:52:29 np0005473739 systemd[1]: libpod-a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620.scope: Deactivated successfully.
Oct  7 10:52:29 np0005473739 podman[416965]: 2025-10-07 14:52:29.692856634 +0000 UTC m=+0.063461848 container died a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:52:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620-userdata-shm.mount: Deactivated successfully.
Oct  7 10:52:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5b1bb2847302638c83cc4ed2dbe5103686edd071cff651ce463463f585fcfd30-merged.mount: Deactivated successfully.
Oct  7 10:52:29 np0005473739 podman[416965]: 2025-10-07 14:52:29.849350362 +0000 UTC m=+0.219955576 container cleanup a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:52:29 np0005473739 systemd[1]: libpod-conmon-a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620.scope: Deactivated successfully.
Oct  7 10:52:29 np0005473739 nova_compute[259550]: 2025-10-07 14:52:29.936 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:52:30 np0005473739 podman[417015]: 2025-10-07 14:52:30.087742375 +0000 UTC m=+0.213366950 container remove a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:52:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.093 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d53713b1-e943-48d1-b264-0b4469d1ab96]: (4, ('Tue Oct  7 02:52:29 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc (a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620)\na7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620\nTue Oct  7 02:52:29 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc (a7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620)\na7d66d512f2640c97531c717df2609292f3640a021be42969d14a9b7ee299620\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.095 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[511d8bc8-285e-4b87-aea4-3336e7b35342]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.096 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd0572f71-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:52:30 np0005473739 kernel: tapd0572f71-e0: left promiscuous mode
Oct  7 10:52:30 np0005473739 nova_compute[259550]: 2025-10-07 14:52:30.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:30 np0005473739 nova_compute[259550]: 2025-10-07 14:52:30.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.120 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f6757687-6570-4173-99c4-6dd2755c2d37]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.152 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[cba6f9e7-11ff-44c9-9e2c-9c81cd2506f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.155 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[be6dd664-ed3a-441d-a1f7-03c7738a01bf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.179 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d41de374-d22f-4c89-95fb-e51defefb6c9]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 930141, 'reachable_time': 18977, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417029, 'error': None, 'target': 'ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.183 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d0572f71-e177-44ba-b973-868f94fd5edc deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:52:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:30.183 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[8d6aa528-408e-4bee-bdef-92b2d444a032]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:30 np0005473739 systemd[1]: run-netns-ovnmeta\x2dd0572f71\x2de177\x2d44ba\x2db973\x2d868f94fd5edc.mount: Deactivated successfully.
Oct  7 10:52:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2708: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 3.1 KiB/s wr, 1 op/s
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #132. Immutable memtables: 0.
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.335105) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 79] Flushing memtable with next log file: 132
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751335150, "job": 79, "event": "flush_started", "num_memtables": 1, "num_entries": 510, "num_deletes": 257, "total_data_size": 491971, "memory_usage": 501880, "flush_reason": "Manual Compaction"}
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 79] Level-0 flush table #133: started
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751343517, "cf_name": "default", "job": 79, "event": "table_file_creation", "file_number": 133, "file_size": 487662, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56483, "largest_seqno": 56992, "table_properties": {"data_size": 484835, "index_size": 862, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 901, "raw_key_size": 6463, "raw_average_key_size": 18, "raw_value_size": 479217, "raw_average_value_size": 1346, "num_data_blocks": 39, "num_entries": 356, "num_filter_entries": 356, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848717, "oldest_key_time": 1759848717, "file_creation_time": 1759848751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 133, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 79] Flush lasted 8477 microseconds, and 2275 cpu microseconds.
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.343574) [db/flush_job.cc:967] [default] [JOB 79] Level-0 flush table #133: 487662 bytes OK
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.343603) [db/memtable_list.cc:519] [default] Level-0 commit table #133 started
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.351837) [db/memtable_list.cc:722] [default] Level-0 commit table #133: memtable #1 done
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.351888) EVENT_LOG_v1 {"time_micros": 1759848751351877, "job": 79, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.351919) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 79] Try to delete WAL files size 489008, prev total WAL file size 489008, number of live WAL files 2.
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000129.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.352523) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032323633' seq:72057594037927935, type:22 .. '6C6F676D0032353136' seq:0, type:0; will stop at (end)
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 80] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 79 Base level 0, inputs: [133(476KB)], [131(9772KB)]
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751352586, "job": 80, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [133], "files_L6": [131], "score": -1, "input_data_size": 10494443, "oldest_snapshot_seqno": -1}
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 80] Generated table #134: 7570 keys, 10391651 bytes, temperature: kUnknown
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751422885, "cf_name": "default", "job": 80, "event": "table_file_creation", "file_number": 134, "file_size": 10391651, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10341355, "index_size": 30279, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 18949, "raw_key_size": 198447, "raw_average_key_size": 26, "raw_value_size": 10206364, "raw_average_value_size": 1348, "num_data_blocks": 1181, "num_entries": 7570, "num_filter_entries": 7570, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848751, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 134, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.423273) [db/compaction/compaction_job.cc:1663] [default] [JOB 80] Compacted 1@0 + 1@6 files to L6 => 10391651 bytes
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.428358) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 148.9 rd, 147.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 9.5 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(42.8) write-amplify(21.3) OK, records in: 8092, records dropped: 522 output_compression: NoCompression
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.428421) EVENT_LOG_v1 {"time_micros": 1759848751428406, "job": 80, "event": "compaction_finished", "compaction_time_micros": 70457, "compaction_time_cpu_micros": 27363, "output_level": 6, "num_output_files": 1, "total_output_size": 10391651, "num_input_records": 8092, "num_output_records": 7570, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000133.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751428710, "job": 80, "event": "table_file_deletion", "file_number": 133}
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000131.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848751430942, "job": 80, "event": "table_file_deletion", "file_number": 131}
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.352413) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.431056) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.431062) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.431064) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.431066) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:52:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:52:31.431068) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.519 2 INFO nova.virt.libvirt.driver [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Deleting instance files /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0_del
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.520 2 INFO nova.virt.libvirt.driver [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Deletion of /var/lib/nova/instances/89da30f4-cbe5-4f70-8597-b215d57427e0_del complete
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.584 2 DEBUG nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-unplugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.584 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.585 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.585 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.585 2 DEBUG nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] No waiting events found dispatching network-vif-unplugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.585 2 DEBUG nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-unplugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.586 2 DEBUG nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.586 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.586 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.586 2 DEBUG oslo_concurrency.lockutils [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.587 2 DEBUG nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] No waiting events found dispatching network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.587 2 WARNING nova.compute.manager [req-5f90406c-1582-4f01-ae39-90d03f60bf39 req-c9c1a6aa-04b3-46a1-870f-a3c79b86513f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received unexpected event network-vif-plugged-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a for instance with vm_state active and task_state deleting.
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.790 2 INFO nova.compute.manager [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Took 2.43 seconds to destroy the instance on the hypervisor.
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.791 2 DEBUG oslo.service.loopingcall [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.791 2 DEBUG nova.compute.manager [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:52:31 np0005473739 nova_compute[259550]: 2025-10-07 14:52:31.791 2 DEBUG nova.network.neutron [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:52:32 np0005473739 nova_compute[259550]: 2025-10-07 14:52:32.679 2 DEBUG nova.network.neutron [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updated VIF entry in instance network info cache for port 5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:52:32 np0005473739 nova_compute[259550]: 2025-10-07 14:52:32.680 2 DEBUG nova.network.neutron [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updating instance_info_cache with network_info: [{"id": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "address": "fa:16:3e:a4:e9:5b", "network": {"id": "d0572f71-e177-44ba-b973-868f94fd5edc", "bridge": "br-int", "label": "tempest-network-smoke--1569490364", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "969793770e3f43a48c49fbd115a24ce2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap5ad3e5c8-f8", "ovs_interfaceid": "5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:52:32 np0005473739 nova_compute[259550]: 2025-10-07 14:52:32.708 2 DEBUG oslo_concurrency.lockutils [req-4f00321b-6305-4138-8bc5-7dac172af786 req-51f4c246-0bf5-43dd-a74d-d5278df3f181 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-89da30f4-cbe5-4f70-8597-b215d57427e0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:52:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:52:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4204438902' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:52:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:52:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4204438902' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2709: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 2.1 KiB/s wr, 1 op/s
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0015185461027544442 of space, bias 1.0, pg target 0.4555638308263333 quantized to 32 (current 32)
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:52:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:52:32 np0005473739 nova_compute[259550]: 2025-10-07 14:52:32.940 2 DEBUG nova.network.neutron [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:52:32 np0005473739 nova_compute[259550]: 2025-10-07 14:52:32.958 2 INFO nova.compute.manager [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Took 1.17 seconds to deallocate network for instance.
Oct  7 10:52:33 np0005473739 nova_compute[259550]: 2025-10-07 14:52:33.017 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:52:33 np0005473739 nova_compute[259550]: 2025-10-07 14:52:33.017 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:52:33 np0005473739 nova_compute[259550]: 2025-10-07 14:52:33.020 2 DEBUG nova.compute.manager [req-6342257f-f45e-4a31-bafa-ae8775dcea85 req-740aa6e3-4905-4643-8fb2-b073de7df6b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Received event network-vif-deleted-5ad3e5c8-f889-4e3c-8c73-ce015ca7e56a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:52:33 np0005473739 podman[417032]: 2025-10-07 14:52:33.075125234 +0000 UTC m=+0.061243116 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  7 10:52:33 np0005473739 nova_compute[259550]: 2025-10-07 14:52:33.094 2 DEBUG oslo_concurrency.processutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:52:33 np0005473739 podman[417033]: 2025-10-07 14:52:33.112714338 +0000 UTC m=+0.092081069 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:52:33 np0005473739 nova_compute[259550]: 2025-10-07 14:52:33.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:52:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3172131990' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:52:33 np0005473739 nova_compute[259550]: 2025-10-07 14:52:33.595 2 DEBUG oslo_concurrency.processutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.501s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:52:33 np0005473739 nova_compute[259550]: 2025-10-07 14:52:33.602 2 DEBUG nova.compute.provider_tree [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:52:33 np0005473739 nova_compute[259550]: 2025-10-07 14:52:33.620 2 DEBUG nova.scheduler.client.report [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:52:33 np0005473739 nova_compute[259550]: 2025-10-07 14:52:33.654 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.636s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:33 np0005473739 nova_compute[259550]: 2025-10-07 14:52:33.679 2 INFO nova.scheduler.client.report [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Deleted allocations for instance 89da30f4-cbe5-4f70-8597-b215d57427e0
Oct  7 10:52:33 np0005473739 nova_compute[259550]: 2025-10-07 14:52:33.762 2 DEBUG oslo_concurrency.lockutils [None req-ac5c515d-be1d-48cc-b7a3-b8f6994225f1 f756e2b18f7246a48f99aaa3bb77a5c3 969793770e3f43a48c49fbd115a24ce2 - - default default] Lock "89da30f4-cbe5-4f70-8597-b215d57427e0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.403s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:34 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:34.446 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '52'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Oct  7 10:52:34 np0005473739 nova_compute[259550]: 2025-10-07 14:52:34.677 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2710: 305 pgs: 305 active+clean; 154 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 KiB/s wr, 27 op/s
Oct  7 10:52:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2711: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.6 KiB/s wr, 28 op/s
Oct  7 10:52:38 np0005473739 nova_compute[259550]: 2025-10-07 14:52:38.356 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2712: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct  7 10:52:39 np0005473739 nova_compute[259550]: 2025-10-07 14:52:39.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2713: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 3.5 KiB/s wr, 28 op/s
Oct  7 10:52:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2714: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Oct  7 10:52:43 np0005473739 nova_compute[259550]: 2025-10-07 14:52:43.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:44 np0005473739 ovn_controller[151684]: 2025-10-07T14:52:44Z|01573|binding|INFO|Releasing lport 271db248-560d-4b3a-bd65-947c8ea27522 from this chassis (sb_readonly=0)
Oct  7 10:52:44 np0005473739 nova_compute[259550]: 2025-10-07 14:52:44.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:44 np0005473739 nova_compute[259550]: 2025-10-07 14:52:44.597 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848749.5961843, 89da30f4-cbe5-4f70-8597-b215d57427e0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 10:52:44 np0005473739 nova_compute[259550]: 2025-10-07 14:52:44.598 2 INFO nova.compute.manager [-] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] VM Stopped (Lifecycle Event)
Oct  7 10:52:44 np0005473739 nova_compute[259550]: 2025-10-07 14:52:44.634 2 DEBUG nova.compute.manager [None req-6ba61f49-eb71-4e9e-893d-44f269675c91 - - - - - -] [instance: 89da30f4-cbe5-4f70-8597-b215d57427e0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 10:52:44 np0005473739 nova_compute[259550]: 2025-10-07 14:52:44.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2715: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.5 KiB/s wr, 27 op/s
Oct  7 10:52:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2716: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 426 B/s rd, 0 B/s wr, 1 op/s
Oct  7 10:52:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.370 2 DEBUG nova.compute.manager [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.370 2 DEBUG nova.compute.manager [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing instance network info cache due to event network-changed-144643c5-91ec-4bd7-a646-3d64339b6691. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.371 2 DEBUG oslo_concurrency.lockutils [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.371 2 DEBUG oslo_concurrency.lockutils [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.371 2 DEBUG nova.network.neutron [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Refreshing network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.427 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.428 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.428 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.431 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.431 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.433 2 INFO nova.compute.manager [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Terminating instance
Oct  7 10:52:47 np0005473739 nova_compute[259550]: 2025-10-07 14:52:47.434 2 DEBUG nova.compute.manager [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 10:52:48 np0005473739 kernel: tap144643c5-91 (unregistering): left promiscuous mode
Oct  7 10:52:48 np0005473739 NetworkManager[44949]: <info>  [1759848768.0449] device (tap144643c5-91): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:52:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:52:48Z|01574|binding|INFO|Releasing lport 144643c5-91ec-4bd7-a646-3d64339b6691 from this chassis (sb_readonly=0)
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.053 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:52:48Z|01575|binding|INFO|Setting lport 144643c5-91ec-4bd7-a646-3d64339b6691 down in Southbound
Oct  7 10:52:48 np0005473739 ovn_controller[151684]: 2025-10-07T14:52:48Z|01576|binding|INFO|Removing iface tap144643c5-91 ovn-installed in OVS
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:48.071 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:24:52:85 10.100.0.7'], port_security=['fa:16:3e:24:52:85 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '9b571b54-001b-4da1-8b91-2659b1fbaac6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7d6dba35-f647-4d4e-a538-87de70ead701', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5665608a-5872-470d-ac7c-b6ba91a59eac e6cb9e17-5a87-48e7-beaf-ada7a8c060a5', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=459b9ec0-d62a-47af-8167-94e59b8e15bc, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=144643c5-91ec-4bd7-a646-3d64339b6691) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:48.073 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 144643c5-91ec-4bd7-a646-3d64339b6691 in datapath 7d6dba35-f647-4d4e-a538-87de70ead701 unbound from our chassis#033[00m
Oct  7 10:52:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:48.075 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7d6dba35-f647-4d4e-a538-87de70ead701, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:52:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:48.076 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1bc8b623-c2bc-4847-b245-e6f23ee63509]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:48.077 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701 namespace which is not needed anymore#033[00m
Oct  7 10:52:48 np0005473739 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Deactivated successfully.
Oct  7 10:52:48 np0005473739 systemd[1]: machine-qemu\x2d176\x2dinstance\x2d0000008e.scope: Consumed 16.856s CPU time.
Oct  7 10:52:48 np0005473739 systemd-machined[214580]: Machine qemu-176-instance-0000008e terminated.
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.274 2 INFO nova.virt.libvirt.driver [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Instance destroyed successfully.#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.275 2 DEBUG nova.objects.instance [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 9b571b54-001b-4da1-8b91-2659b1fbaac6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.294 2 DEBUG nova.virt.libvirt.vif [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:50:51Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-29238859',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=142,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBOW4CdHXcKKzCR3ZI3nv4/KGv+Q9vtn7hxRZelXQfKtVpIrtTAVdZ3Df4UTU8EaZDakLPU1Olzl+Z8Q7yW10W/8sO5gvQt5UY4ZcGXGzLqm+i3yJBS0R+/jppzmQO8xBdQ==',key_name='tempest-TestSecurityGroupsBasicOps-1150608020',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:51:02Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-0j1o8fjx',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:51:02Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=9b571b54-001b-4da1-8b91-2659b1fbaac6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.295 2 DEBUG nova.network.os_vif_util [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.240", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.296 2 DEBUG nova.network.os_vif_util [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.296 2 DEBUG os_vif [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.298 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.298 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap144643c5-91, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.304 2 INFO os_vif [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:24:52:85,bridge_name='br-int',has_traffic_filtering=True,id=144643c5-91ec-4bd7-a646-3d64339b6691,network=Network(7d6dba35-f647-4d4e-a538-87de70ead701),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap144643c5-91')#033[00m
Oct  7 10:52:48 np0005473739 nova_compute[259550]: 2025-10-07 14:52:48.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:48 np0005473739 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [NOTICE]   (415290) : haproxy version is 2.8.14-c23fe91
Oct  7 10:52:48 np0005473739 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [NOTICE]   (415290) : path to executable is /usr/sbin/haproxy
Oct  7 10:52:48 np0005473739 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [WARNING]  (415290) : Exiting Master process...
Oct  7 10:52:48 np0005473739 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [ALERT]    (415290) : Current worker (415292) exited with code 143 (Terminated)
Oct  7 10:52:48 np0005473739 neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701[415286]: [WARNING]  (415290) : All workers exited. Exiting... (0)
Oct  7 10:52:48 np0005473739 systemd[1]: libpod-9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b.scope: Deactivated successfully.
Oct  7 10:52:48 np0005473739 podman[417123]: 2025-10-07 14:52:48.45866551 +0000 UTC m=+0.293507084 container died 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:52:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2717: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  7 10:52:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b-userdata-shm.mount: Deactivated successfully.
Oct  7 10:52:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c990b5ffee34bb0ee8d8fa81835ad0deb9249f2ad983045d67c79c75ff5af772-merged.mount: Deactivated successfully.
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.569 2 DEBUG nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-unplugged-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.569 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.570 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.570 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.570 2 DEBUG nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] No waiting events found dispatching network-vif-unplugged-144643c5-91ec-4bd7-a646-3d64339b6691 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.570 2 DEBUG nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-unplugged-144643c5-91ec-4bd7-a646-3d64339b6691 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.570 2 DEBUG nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.571 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.571 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.571 2 DEBUG oslo_concurrency.lockutils [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.571 2 DEBUG nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] No waiting events found dispatching network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:52:49 np0005473739 nova_compute[259550]: 2025-10-07 14:52:49.572 2 WARNING nova.compute.manager [req-ebb50811-b824-41e8-9ba6-42160a7b47d6 req-5b91998d-9c25-46ec-a85d-a96a42ebbd2e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received unexpected event network-vif-plugged-144643c5-91ec-4bd7-a646-3d64339b6691 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:52:49 np0005473739 podman[417123]: 2025-10-07 14:52:49.889803149 +0000 UTC m=+1.724644743 container cleanup 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 10:52:49 np0005473739 systemd[1]: libpod-conmon-9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b.scope: Deactivated successfully.
Oct  7 10:52:50 np0005473739 podman[417182]: 2025-10-07 14:52:50.197687114 +0000 UTC m=+0.282429121 container remove 9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  7 10:52:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.204 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f54a677-92b7-4d16-93f2-45686c6ccec8]: (4, ('Tue Oct  7 02:52:48 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701 (9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b)\n9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b\nTue Oct  7 02:52:49 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701 (9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b)\n9fb5ebf328239ac1030542ecf880cc7adb3a1b5e0a3eafdc6ed0888bf8a3c62b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.206 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8e89be9b-c580-4bad-8bd5-1315e130bb83]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.207 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7d6dba35-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:52:50 np0005473739 nova_compute[259550]: 2025-10-07 14:52:50.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:50 np0005473739 kernel: tap7d6dba35-f0: left promiscuous mode
Oct  7 10:52:50 np0005473739 nova_compute[259550]: 2025-10-07 14:52:50.222 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.225 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c1e39cbe-50d3-47ba-ba11-63b3bdaa3447]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.258 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4bde41a-36fa-4d12-babc-7b07da383077]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.260 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6ecb8acf-366b-4f2b-a0dc-20355d098fff]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.276 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4a2a6604-4253-4456-85c2-68df8442f21d]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 924522, 'reachable_time': 35533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 417220, 'error': None, 'target': 'ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.278 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7d6dba35-f647-4d4e-a538-87de70ead701 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:52:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:52:50.278 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[c80e73d8-14ef-4255-97f2-f2f69e08de9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:52:50 np0005473739 systemd[1]: run-netns-ovnmeta\x2d7d6dba35\x2df647\x2d4d4e\x2da538\x2d87de70ead701.mount: Deactivated successfully.
Oct  7 10:52:50 np0005473739 podman[417196]: 2025-10-07 14:52:50.305814162 +0000 UTC m=+0.062329171 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 10:52:50 np0005473739 podman[417195]: 2025-10-07 14:52:50.310023542 +0000 UTC m=+0.067814291 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:52:50 np0005473739 nova_compute[259550]: 2025-10-07 14:52:50.401 2 DEBUG nova.network.neutron [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updated VIF entry in instance network info cache for port 144643c5-91ec-4bd7-a646-3d64339b6691. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:52:50 np0005473739 nova_compute[259550]: 2025-10-07 14:52:50.402 2 DEBUG nova.network.neutron [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [{"id": "144643c5-91ec-4bd7-a646-3d64339b6691", "address": "fa:16:3e:24:52:85", "network": {"id": "7d6dba35-f647-4d4e-a538-87de70ead701", "bridge": "br-int", "label": "tempest-network-smoke--1685403353", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap144643c5-91", "ovs_interfaceid": "144643c5-91ec-4bd7-a646-3d64339b6691", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:52:50 np0005473739 nova_compute[259550]: 2025-10-07 14:52:50.431 2 DEBUG oslo_concurrency.lockutils [req-1ce3c26c-3412-4584-98d8-98b7b7459068 req-7c638f74-df93-4c22-a02a-d8156b39489f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-9b571b54-001b-4da1-8b91-2659b1fbaac6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:52:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2718: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 7 op/s
Oct  7 10:52:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:52:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:52:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2719: 305 pgs: 305 active+clean; 121 MiB data, 1012 MiB used, 59 GiB / 60 GiB avail; 5.7 KiB/s rd, 0 B/s wr, 7 op/s
Oct  7 10:52:52 np0005473739 nova_compute[259550]: 2025-10-07 14:52:52.992 2 INFO nova.virt.libvirt.driver [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Deleting instance files /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6_del#033[00m
Oct  7 10:52:52 np0005473739 nova_compute[259550]: 2025-10-07 14:52:52.993 2 INFO nova.virt.libvirt.driver [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Deletion of /var/lib/nova/instances/9b571b54-001b-4da1-8b91-2659b1fbaac6_del complete#033[00m
Oct  7 10:52:53 np0005473739 nova_compute[259550]: 2025-10-07 14:52:53.100 2 INFO nova.compute.manager [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Took 5.67 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:52:53 np0005473739 nova_compute[259550]: 2025-10-07 14:52:53.101 2 DEBUG oslo.service.loopingcall [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:52:53 np0005473739 nova_compute[259550]: 2025-10-07 14:52:53.101 2 DEBUG nova.compute.manager [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:52:53 np0005473739 nova_compute[259550]: 2025-10-07 14:52:53.101 2 DEBUG nova.network.neutron [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:52:53 np0005473739 nova_compute[259550]: 2025-10-07 14:52:53.301 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:53 np0005473739 nova_compute[259550]: 2025-10-07 14:52:53.362 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:52:53 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 20dfece1-da1b-4b5d-ad94-19368b28688e does not exist
Oct  7 10:52:53 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 728a550a-dc26-4184-96fe-0fc44982ea0e does not exist
Oct  7 10:52:53 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 11b0b58d-2bcc-465e-b0f2-a45d61ee376f does not exist
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:52:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:52:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:52:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:52:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:52:54 np0005473739 podman[417512]: 2025-10-07 14:52:54.472885887 +0000 UTC m=+0.101063551 container create d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:52:54 np0005473739 podman[417512]: 2025-10-07 14:52:54.402004784 +0000 UTC m=+0.030182468 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:52:54 np0005473739 systemd[1]: Started libpod-conmon-d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5.scope.
Oct  7 10:52:54 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:52:54 np0005473739 podman[417512]: 2025-10-07 14:52:54.702833901 +0000 UTC m=+0.331011615 container init d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:52:54 np0005473739 podman[417512]: 2025-10-07 14:52:54.713396341 +0000 UTC m=+0.341574005 container start d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True)
Oct  7 10:52:54 np0005473739 podman[417512]: 2025-10-07 14:52:54.722350915 +0000 UTC m=+0.350528599 container attach d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:52:54 np0005473739 gallant_liskov[417528]: 167 167
Oct  7 10:52:54 np0005473739 systemd[1]: libpod-d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5.scope: Deactivated successfully.
Oct  7 10:52:54 np0005473739 conmon[417528]: conmon d044e40c74198f23a3e6 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5.scope/container/memory.events
Oct  7 10:52:54 np0005473739 podman[417512]: 2025-10-07 14:52:54.72471257 +0000 UTC m=+0.352890244 container died d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:52:54 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8bb9ab64c6327eae4b24f894f32cbab2fbf2f45303ca0ff98a302e57fd411d58-merged.mount: Deactivated successfully.
Oct  7 10:52:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2720: 305 pgs: 305 active+clean; 61 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 511 B/s wr, 19 op/s
Oct  7 10:52:54 np0005473739 podman[417512]: 2025-10-07 14:52:54.766337629 +0000 UTC m=+0.394515303 container remove d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_liskov, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:52:54 np0005473739 systemd[1]: libpod-conmon-d044e40c74198f23a3e69925ae427146b0a3a9a809f4197298403437324bfcf5.scope: Deactivated successfully.
Oct  7 10:52:54 np0005473739 podman[417551]: 2025-10-07 14:52:54.94777807 +0000 UTC m=+0.052245033 container create 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:52:54 np0005473739 systemd[1]: Started libpod-conmon-21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995.scope.
Oct  7 10:52:55 np0005473739 podman[417551]: 2025-10-07 14:52:54.923885692 +0000 UTC m=+0.028352685 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:52:55 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:52:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:55 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:55 np0005473739 podman[417551]: 2025-10-07 14:52:55.044180529 +0000 UTC m=+0.148647522 container init 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 10:52:55 np0005473739 podman[417551]: 2025-10-07 14:52:55.050904199 +0000 UTC m=+0.155371152 container start 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:52:55 np0005473739 podman[417551]: 2025-10-07 14:52:55.054186068 +0000 UTC m=+0.158653031 container attach 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:52:55 np0005473739 nova_compute[259550]: 2025-10-07 14:52:55.274 2 DEBUG nova.network.neutron [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:52:55 np0005473739 nova_compute[259550]: 2025-10-07 14:52:55.309 2 INFO nova.compute.manager [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Took 2.21 seconds to deallocate network for instance.#033[00m
Oct  7 10:52:55 np0005473739 nova_compute[259550]: 2025-10-07 14:52:55.394 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:52:55 np0005473739 nova_compute[259550]: 2025-10-07 14:52:55.394 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:52:55 np0005473739 nova_compute[259550]: 2025-10-07 14:52:55.396 2 DEBUG nova.compute.manager [req-e983b98f-a03e-4bee-9a75-1c7e7543272c req-59b8671b-a94c-4e16-ac40-4d1959cde7b6 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Received event network-vif-deleted-144643c5-91ec-4bd7-a646-3d64339b6691 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:52:55 np0005473739 nova_compute[259550]: 2025-10-07 14:52:55.451 2 DEBUG oslo_concurrency.processutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:52:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:52:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3705558032' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:52:55 np0005473739 nova_compute[259550]: 2025-10-07 14:52:55.974 2 DEBUG oslo_concurrency.processutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.523s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:52:55 np0005473739 nova_compute[259550]: 2025-10-07 14:52:55.982 2 DEBUG nova.compute.provider_tree [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:52:56 np0005473739 nova_compute[259550]: 2025-10-07 14:52:56.000 2 DEBUG nova.scheduler.client.report [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:52:56 np0005473739 nova_compute[259550]: 2025-10-07 14:52:56.021 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.627s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:56 np0005473739 nova_compute[259550]: 2025-10-07 14:52:56.066 2 INFO nova.scheduler.client.report [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 9b571b54-001b-4da1-8b91-2659b1fbaac6
Oct  7 10:52:56 np0005473739 nova_compute[259550]: 2025-10-07 14:52:56.152 2 DEBUG oslo_concurrency.lockutils [None req-48efb72d-96e6-49e5-b685-bdbc6a300900 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "9b571b54-001b-4da1-8b91-2659b1fbaac6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 8.725s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:52:56 np0005473739 priceless_aryabhata[417568]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:52:56 np0005473739 priceless_aryabhata[417568]: --> relative data size: 1.0
Oct  7 10:52:56 np0005473739 priceless_aryabhata[417568]: --> All data devices are unavailable
Oct  7 10:52:56 np0005473739 systemd[1]: libpod-21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995.scope: Deactivated successfully.
Oct  7 10:52:56 np0005473739 podman[417551]: 2025-10-07 14:52:56.244446514 +0000 UTC m=+1.348913487 container died 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:52:56 np0005473739 systemd[1]: libpod-21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995.scope: Consumed 1.133s CPU time.
Oct  7 10:52:56 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2b1fff7612ad704c86e5d73741555c9f1959046eb11e96f565beba79c2488f54-merged.mount: Deactivated successfully.
Oct  7 10:52:56 np0005473739 podman[417551]: 2025-10-07 14:52:56.525922471 +0000 UTC m=+1.630389434 container remove 21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_aryabhata, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:52:56 np0005473739 systemd[1]: libpod-conmon-21a72adee503ca01b85692c7eef3a2615ba8fe2921ec423051490a9b92d81995.scope: Deactivated successfully.
Oct  7 10:52:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2721: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:52:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:52:57 np0005473739 podman[417773]: 2025-10-07 14:52:57.173147536 +0000 UTC m=+0.046323851 container create 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 10:52:57 np0005473739 systemd[1]: Started libpod-conmon-22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d.scope.
Oct  7 10:52:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:52:57 np0005473739 podman[417773]: 2025-10-07 14:52:57.152807112 +0000 UTC m=+0.025983407 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:52:57 np0005473739 podman[417773]: 2025-10-07 14:52:57.268326167 +0000 UTC m=+0.141502462 container init 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 10:52:57 np0005473739 podman[417773]: 2025-10-07 14:52:57.276728347 +0000 UTC m=+0.149904652 container start 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:52:57 np0005473739 peaceful_joliot[417789]: 167 167
Oct  7 10:52:57 np0005473739 systemd[1]: libpod-22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d.scope: Deactivated successfully.
Oct  7 10:52:57 np0005473739 podman[417773]: 2025-10-07 14:52:57.290149456 +0000 UTC m=+0.163325771 container attach 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 10:52:57 np0005473739 podman[417773]: 2025-10-07 14:52:57.291667982 +0000 UTC m=+0.164844277 container died 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:52:57 np0005473739 systemd[1]: var-lib-containers-storage-overlay-551965e319eb2bb7eae6de6ddd54c5de3843457b5bada0b088bd6a6502cd73d4-merged.mount: Deactivated successfully.
Oct  7 10:52:57 np0005473739 podman[417773]: 2025-10-07 14:52:57.399720739 +0000 UTC m=+0.272897024 container remove 22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=peaceful_joliot, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:52:57 np0005473739 systemd[1]: libpod-conmon-22972adfcdae01a150ee6ea001e6375f59c46cd7bd14cf233ca7c4f183d8388d.scope: Deactivated successfully.
Oct  7 10:52:57 np0005473739 podman[417815]: 2025-10-07 14:52:57.558677905 +0000 UTC m=+0.036969079 container create 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:52:57 np0005473739 systemd[1]: Started libpod-conmon-78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7.scope.
Oct  7 10:52:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:52:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e15cfabbceaa6b9cbd0ceaa9b3789df4d8e7aa9f28df4c73f79c9de39f9cad/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e15cfabbceaa6b9cbd0ceaa9b3789df4d8e7aa9f28df4c73f79c9de39f9cad/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e15cfabbceaa6b9cbd0ceaa9b3789df4d8e7aa9f28df4c73f79c9de39f9cad/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86e15cfabbceaa6b9cbd0ceaa9b3789df4d8e7aa9f28df4c73f79c9de39f9cad/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:57 np0005473739 podman[417815]: 2025-10-07 14:52:57.543180267 +0000 UTC m=+0.021471461 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:52:57 np0005473739 podman[417815]: 2025-10-07 14:52:57.658173508 +0000 UTC m=+0.136464692 container init 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:52:57 np0005473739 podman[417815]: 2025-10-07 14:52:57.665021621 +0000 UTC m=+0.143312795 container start 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:52:57 np0005473739 podman[417815]: 2025-10-07 14:52:57.66874498 +0000 UTC m=+0.147036164 container attach 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:52:58 np0005473739 nova_compute[259550]: 2025-10-07 14:52:58.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:58 np0005473739 nova_compute[259550]: 2025-10-07 14:52:58.364 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]: {
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:    "0": [
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:        {
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "devices": [
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "/dev/loop3"
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            ],
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_name": "ceph_lv0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_size": "21470642176",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "name": "ceph_lv0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "tags": {
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.cluster_name": "ceph",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.crush_device_class": "",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.encrypted": "0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.osd_id": "0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.type": "block",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.vdo": "0"
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            },
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "type": "block",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "vg_name": "ceph_vg0"
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:        }
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:    ],
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:    "1": [
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:        {
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "devices": [
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "/dev/loop4"
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            ],
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_name": "ceph_lv1",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_size": "21470642176",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "name": "ceph_lv1",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "tags": {
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.cluster_name": "ceph",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.crush_device_class": "",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.encrypted": "0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.osd_id": "1",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.type": "block",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.vdo": "0"
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            },
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "type": "block",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "vg_name": "ceph_vg1"
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:        }
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:    ],
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:    "2": [
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:        {
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "devices": [
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "/dev/loop5"
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            ],
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_name": "ceph_lv2",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_size": "21470642176",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "name": "ceph_lv2",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "tags": {
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.cluster_name": "ceph",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.crush_device_class": "",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.encrypted": "0",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.osd_id": "2",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.type": "block",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:                "ceph.vdo": "0"
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            },
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "type": "block",
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:            "vg_name": "ceph_vg2"
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:        }
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]:    ]
Oct  7 10:52:58 np0005473739 optimistic_lichterman[417830]: }
Oct  7 10:52:58 np0005473739 systemd[1]: libpod-78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7.scope: Deactivated successfully.
Oct  7 10:52:58 np0005473739 podman[417815]: 2025-10-07 14:52:58.576846823 +0000 UTC m=+1.055138037 container died 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:52:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay-86e15cfabbceaa6b9cbd0ceaa9b3789df4d8e7aa9f28df4c73f79c9de39f9cad-merged.mount: Deactivated successfully.
Oct  7 10:52:58 np0005473739 podman[417815]: 2025-10-07 14:52:58.652831508 +0000 UTC m=+1.131122682 container remove 78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_lichterman, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:52:58 np0005473739 systemd[1]: libpod-conmon-78ffab58952f447ad18d9ff5d35f3859a6f826de353fa24455acb6d40e1d28d7.scope: Deactivated successfully.
Oct  7 10:52:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2722: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:52:59 np0005473739 podman[417988]: 2025-10-07 14:52:59.273912272 +0000 UTC m=+0.057628690 container create df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:52:59 np0005473739 systemd[1]: Started libpod-conmon-df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67.scope.
Oct  7 10:52:59 np0005473739 podman[417988]: 2025-10-07 14:52:59.240675504 +0000 UTC m=+0.024391962 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:52:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:52:59 np0005473739 podman[417988]: 2025-10-07 14:52:59.374591074 +0000 UTC m=+0.158307502 container init df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 10:52:59 np0005473739 podman[417988]: 2025-10-07 14:52:59.383434335 +0000 UTC m=+0.167150753 container start df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:52:59 np0005473739 youthful_meninsky[418004]: 167 167
Oct  7 10:52:59 np0005473739 systemd[1]: libpod-df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67.scope: Deactivated successfully.
Oct  7 10:52:59 np0005473739 podman[417988]: 2025-10-07 14:52:59.395091291 +0000 UTC m=+0.178807739 container attach df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:52:59 np0005473739 podman[417988]: 2025-10-07 14:52:59.396418283 +0000 UTC m=+0.180134711 container died df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 10:52:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1ee07a0a313637955428a4c59c1d559b99f989d9a9ad07865dc0445082d4e64f-merged.mount: Deactivated successfully.
Oct  7 10:52:59 np0005473739 podman[417988]: 2025-10-07 14:52:59.439690371 +0000 UTC m=+0.223406789 container remove df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=youthful_meninsky, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 10:52:59 np0005473739 systemd[1]: libpod-conmon-df58cf3e53077e06a1e9cc935c52046c0622435780a50eba4bfa365664865e67.scope: Deactivated successfully.
Oct  7 10:52:59 np0005473739 podman[418028]: 2025-10-07 14:52:59.585654489 +0000 UTC m=+0.035750631 container create 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:52:59 np0005473739 systemd[1]: Started libpod-conmon-1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed.scope.
Oct  7 10:52:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:52:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc41776cddad7fb7cb995d805cb42a2b630a8595ab91eb4a09964777de963564/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc41776cddad7fb7cb995d805cb42a2b630a8595ab91eb4a09964777de963564/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc41776cddad7fb7cb995d805cb42a2b630a8595ab91eb4a09964777de963564/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fc41776cddad7fb7cb995d805cb42a2b630a8595ab91eb4a09964777de963564/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:52:59 np0005473739 podman[418028]: 2025-10-07 14:52:59.65979783 +0000 UTC m=+0.109893982 container init 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:52:59 np0005473739 podman[418028]: 2025-10-07 14:52:59.570330724 +0000 UTC m=+0.020426886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:52:59 np0005473739 podman[418028]: 2025-10-07 14:52:59.667041872 +0000 UTC m=+0.117138014 container start 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:52:59 np0005473739 podman[418028]: 2025-10-07 14:52:59.670482584 +0000 UTC m=+0.120578736 container attach 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 10:53:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:00.089 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:00.090 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:00.090 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:00 np0005473739 nova_compute[259550]: 2025-10-07 14:53:00.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:00 np0005473739 nova_compute[259550]: 2025-10-07 14:53:00.398 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]: {
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "osd_id": 2,
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "type": "bluestore"
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:    },
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "osd_id": 1,
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "type": "bluestore"
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:    },
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "osd_id": 0,
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:        "type": "bluestore"
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]:    }
Oct  7 10:53:00 np0005473739 pensive_dewdney[418044]: }
Oct  7 10:53:00 np0005473739 systemd[1]: libpod-1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed.scope: Deactivated successfully.
Oct  7 10:53:00 np0005473739 systemd[1]: libpod-1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed.scope: Consumed 1.044s CPU time.
Oct  7 10:53:00 np0005473739 podman[418028]: 2025-10-07 14:53:00.710844409 +0000 UTC m=+1.160940551 container died 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 10:53:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2723: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:53:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-fc41776cddad7fb7cb995d805cb42a2b630a8595ab91eb4a09964777de963564-merged.mount: Deactivated successfully.
Oct  7 10:53:00 np0005473739 podman[418028]: 2025-10-07 14:53:00.836406442 +0000 UTC m=+1.286502584 container remove 1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_dewdney, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:53:00 np0005473739 systemd[1]: libpod-conmon-1828ebc10f5b7ee13c45a6f7cbce7069ee8f0104139476d6f636f36e5f0a0fed.scope: Deactivated successfully.
Oct  7 10:53:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:53:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:53:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:53:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:53:00 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ed5d41c6-30e5-40e1-9e5a-357ed82c489a does not exist
Oct  7 10:53:00 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0197eda3-e98c-4439-802b-6d05cf2f4c78 does not exist
Oct  7 10:53:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:53:01 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:53:02 np0005473739 nova_compute[259550]: 2025-10-07 14:53:02.012 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:53:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2724: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Oct  7 10:53:03 np0005473739 nova_compute[259550]: 2025-10-07 14:53:03.273 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848768.2726712, 9b571b54-001b-4da1-8b91-2659b1fbaac6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:53:03 np0005473739 nova_compute[259550]: 2025-10-07 14:53:03.274 2 INFO nova.compute.manager [-] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:53:03 np0005473739 nova_compute[259550]: 2025-10-07 14:53:03.309 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:03 np0005473739 nova_compute[259550]: 2025-10-07 14:53:03.421 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:03 np0005473739 nova_compute[259550]: 2025-10-07 14:53:03.446 2 DEBUG nova.compute.manager [None req-202c3388-2b9d-43ae-b9d8-b14cde7f7dab - - - - - -] [instance: 9b571b54-001b-4da1-8b91-2659b1fbaac6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:53:03 np0005473739 nova_compute[259550]: 2025-10-07 14:53:03.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:53:04 np0005473739 podman[418142]: 2025-10-07 14:53:04.094343678 +0000 UTC m=+0.079504800 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:53:04 np0005473739 podman[418143]: 2025-10-07 14:53:04.108961796 +0000 UTC m=+0.094531708 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:53:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2725: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.2 KiB/s wr, 20 op/s
Oct  7 10:53:05 np0005473739 nova_compute[259550]: 2025-10-07 14:53:05.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:53:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2726: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail; 5.2 KiB/s rd, 682 B/s wr, 8 op/s
Oct  7 10:53:07 np0005473739 nova_compute[259550]: 2025-10-07 14:53:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:53:07 np0005473739 nova_compute[259550]: 2025-10-07 14:53:07.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:53:07 np0005473739 nova_compute[259550]: 2025-10-07 14:53:07.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:53:08 np0005473739 nova_compute[259550]: 2025-10-07 14:53:08.311 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:08 np0005473739 nova_compute[259550]: 2025-10-07 14:53:08.424 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2727: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:53:08 np0005473739 nova_compute[259550]: 2025-10-07 14:53:08.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:53:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2728: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:53:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2729: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:53:13 np0005473739 nova_compute[259550]: 2025-10-07 14:53:13.315 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:13 np0005473739 nova_compute[259550]: 2025-10-07 14:53:13.427 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2730: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:53:14 np0005473739 nova_compute[259550]: 2025-10-07 14:53:14.790 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:14 np0005473739 nova_compute[259550]: 2025-10-07 14:53:14.791 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:14 np0005473739 nova_compute[259550]: 2025-10-07 14:53:14.836 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:53:14 np0005473739 nova_compute[259550]: 2025-10-07 14:53:14.971 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:14 np0005473739 nova_compute[259550]: 2025-10-07 14:53:14.972 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:14 np0005473739 nova_compute[259550]: 2025-10-07 14:53:14.983 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:53:14 np0005473739 nova_compute[259550]: 2025-10-07 14:53:14.984 2 INFO nova.compute.claims [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:53:15 np0005473739 nova_compute[259550]: 2025-10-07 14:53:15.264 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:53:15 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1864059452' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:53:15 np0005473739 nova_compute[259550]: 2025-10-07 14:53:15.711 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:15 np0005473739 nova_compute[259550]: 2025-10-07 14:53:15.718 2 DEBUG nova.compute.provider_tree [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:53:15 np0005473739 nova_compute[259550]: 2025-10-07 14:53:15.758 2 DEBUG nova.scheduler.client.report [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:53:15 np0005473739 nova_compute[259550]: 2025-10-07 14:53:15.787 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.815s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:15 np0005473739 nova_compute[259550]: 2025-10-07 14:53:15.788 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:53:15 np0005473739 nova_compute[259550]: 2025-10-07 14:53:15.871 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:53:15 np0005473739 nova_compute[259550]: 2025-10-07 14:53:15.873 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:53:15 np0005473739 nova_compute[259550]: 2025-10-07 14:53:15.899 2 INFO nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:53:15 np0005473739 nova_compute[259550]: 2025-10-07 14:53:15.918 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:53:15 np0005473739 nova_compute[259550]: 2025-10-07 14:53:15.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.033 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.034 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.034 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.034 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.035 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.085 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.087 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.087 2 INFO nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Creating image(s)#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.112 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.135 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.162 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.167 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.234 2 DEBUG nova.policy [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.248 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.249 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.250 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.250 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.276 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.280 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5360ad21-c174-468a-9be2-d82d672f2911_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:53:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2455657705' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.531 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.708 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.710 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3635MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.710 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.710 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2731: 305 pgs: 305 active+clean; 41 MiB data, 965 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.778 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 5360ad21-c174-468a-9be2-d82d672f2911 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.779 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.779 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:53:16 np0005473739 nova_compute[259550]: 2025-10-07 14:53:16.813 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.004 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5360ad21-c174-468a-9be2-d82d672f2911_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.725s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.066 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:53:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:53:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1142625864' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.273 2 DEBUG nova.objects.instance [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 5360ad21-c174-468a-9be2-d82d672f2911 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.288 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.289 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Ensure instance console log exists: /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.289 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.290 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.290 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.291 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.478s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.297 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.314 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.339 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:53:17 np0005473739 nova_compute[259550]: 2025-10-07 14:53:17.340 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:18 np0005473739 nova_compute[259550]: 2025-10-07 14:53:18.242 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Successfully created port: 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:53:18 np0005473739 nova_compute[259550]: 2025-10-07 14:53:18.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:18 np0005473739 nova_compute[259550]: 2025-10-07 14:53:18.429 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2732: 305 pgs: 305 active+clean; 72 MiB data, 982 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.4 MiB/s wr, 25 op/s
Oct  7 10:53:19 np0005473739 nova_compute[259550]: 2025-10-07 14:53:19.359 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Successfully updated port: 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:53:19 np0005473739 nova_compute[259550]: 2025-10-07 14:53:19.412 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:53:19 np0005473739 nova_compute[259550]: 2025-10-07 14:53:19.412 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:53:19 np0005473739 nova_compute[259550]: 2025-10-07 14:53:19.412 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:53:19 np0005473739 nova_compute[259550]: 2025-10-07 14:53:19.444 2 DEBUG nova.compute.manager [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:53:19 np0005473739 nova_compute[259550]: 2025-10-07 14:53:19.444 2 DEBUG nova.compute.manager [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing instance network info cache due to event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:53:19 np0005473739 nova_compute[259550]: 2025-10-07 14:53:19.444 2 DEBUG oslo_concurrency.lockutils [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:53:19 np0005473739 nova_compute[259550]: 2025-10-07 14:53:19.575 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:53:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2733: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:53:21 np0005473739 podman[418422]: 2025-10-07 14:53:21.079753868 +0000 UTC m=+0.061973573 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd)
Oct  7 10:53:21 np0005473739 podman[418423]: 2025-10-07 14:53:21.102201121 +0000 UTC m=+0.082335426 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:53:21 np0005473739 nova_compute[259550]: 2025-10-07 14:53:21.342 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:53:21 np0005473739 nova_compute[259550]: 2025-10-07 14:53:21.342 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:53:21 np0005473739 nova_compute[259550]: 2025-10-07 14:53:21.503 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:53:21 np0005473739 nova_compute[259550]: 2025-10-07 14:53:21.504 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:53:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.215 2 DEBUG nova.network.neutron [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.409 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.410 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Instance network_info: |[{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.411 2 DEBUG oslo_concurrency.lockutils [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.411 2 DEBUG nova.network.neutron [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.419 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Start _get_guest_xml network_info=[{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.424 2 WARNING nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.430 2 DEBUG nova.virt.libvirt.host [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.430 2 DEBUG nova.virt.libvirt.host [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.434 2 DEBUG nova.virt.libvirt.host [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.435 2 DEBUG nova.virt.libvirt.host [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.435 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.435 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.436 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.436 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.436 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.437 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.437 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.437 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.437 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.438 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.438 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.438 2 DEBUG nova.virt.hardware [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.441 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:53:22
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'default.rgw.meta', 'images', 'volumes', '.rgw.root', '.mgr', 'vms', 'backups', 'default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:53:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2734: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:53:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:53:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3403038772' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.929 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.970 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:22 np0005473739 nova_compute[259550]: 2025-10-07 14:53:22.975 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:53:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:53:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:53:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:53:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:53:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:53:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:53:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:53:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:53:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:53:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/891966138' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.431 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.439 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.440 2 DEBUG nova.virt.libvirt.vif [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=144,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-p3qp2bxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:53:15Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5360ad21-c174-468a-9be2-d82d672f2911,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.441 2 DEBUG nova.network.os_vif_util [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.442 2 DEBUG nova.network.os_vif_util [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.443 2 DEBUG nova.objects.instance [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5360ad21-c174-468a-9be2-d82d672f2911 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.468 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  <uuid>5360ad21-c174-468a-9be2-d82d672f2911</uuid>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  <name>instance-00000090</name>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091</nova:name>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:53:22</nova:creationTime>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <nova:port uuid="78d82e05-f97f-4cd4-aed0-b65c80e18ec5">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <entry name="serial">5360ad21-c174-468a-9be2-d82d672f2911</entry>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <entry name="uuid">5360ad21-c174-468a-9be2-d82d672f2911</entry>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5360ad21-c174-468a-9be2-d82d672f2911_disk">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5360ad21-c174-468a-9be2-d82d672f2911_disk.config">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:7f:b1:b3"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <target dev="tap78d82e05-f9"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/console.log" append="off"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:53:23 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:53:23 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:53:23 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:53:23 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.469 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Preparing to wait for external event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.469 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.470 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.470 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.471 2 DEBUG nova.virt.libvirt.vif [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=144,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-p3qp2bxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:53:15Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5360ad21-c174-468a-9be2-d82d672f2911,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.471 2 DEBUG nova.network.os_vif_util [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.472 2 DEBUG nova.network.os_vif_util [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.472 2 DEBUG os_vif [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.473 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap78d82e05-f9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.478 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap78d82e05-f9, col_values=(('external_ids', {'iface-id': '78d82e05-f97f-4cd4-aed0-b65c80e18ec5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7f:b1:b3', 'vm-uuid': '5360ad21-c174-468a-9be2-d82d672f2911'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:23 np0005473739 NetworkManager[44949]: <info>  [1759848803.4813] manager: (tap78d82e05-f9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/633)
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.487 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.489 2 INFO os_vif [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9')#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.541 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.541 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.542 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:7f:b1:b3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.542 2 INFO nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Using config drive#033[00m
Oct  7 10:53:23 np0005473739 nova_compute[259550]: 2025-10-07 14:53:23.566 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.190 2 INFO nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Creating config drive at /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.196 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0e626y2h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.350 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp0e626y2h" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.378 2 DEBUG nova.storage.rbd_utils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5360ad21-c174-468a-9be2-d82d672f2911_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.382 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config 5360ad21-c174-468a-9be2-d82d672f2911_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.427 2 DEBUG nova.network.neutron [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updated VIF entry in instance network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.427 2 DEBUG nova.network.neutron [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.443 2 DEBUG oslo_concurrency.lockutils [req-02f527be-cd24-45f9-8c9a-276f91a508f4 req-8c5d51ea-a99e-4e01-b1dd-504d77af6bc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.558 2 DEBUG oslo_concurrency.processutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config 5360ad21-c174-468a-9be2-d82d672f2911_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.175s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.558 2 INFO nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Deleting local config drive /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911/disk.config because it was imported into RBD.#033[00m
Oct  7 10:53:24 np0005473739 kernel: tap78d82e05-f9: entered promiscuous mode
Oct  7 10:53:24 np0005473739 NetworkManager[44949]: <info>  [1759848804.6051] manager: (tap78d82e05-f9): new Tun device (/org/freedesktop/NetworkManager/Devices/634)
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:53:24Z|01577|binding|INFO|Claiming lport 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 for this chassis.
Oct  7 10:53:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:53:24Z|01578|binding|INFO|78d82e05-f97f-4cd4-aed0-b65c80e18ec5: Claiming fa:16:3e:7f:b1:b3 10.100.0.6
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.622 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:b1:b3 10.100.0.6'], port_security=['fa:16:3e:7f:b1:b3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5360ad21-c174-468a-9be2-d82d672f2911', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7d1e85a4-dffc-45c3-a3d8-ec6178189662 fb064d27-a242-4db9-8bed-79adb9fb8cf9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8255c2c5-7755-43fa-85c6-c4d3fcdeecda, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=78d82e05-f97f-4cd4-aed0-b65c80e18ec5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.623 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 in datapath b7b9d978-a319-4f4b-b8e1-4891fcd559d0 bound to our chassis#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.624 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7b9d978-a319-4f4b-b8e1-4891fcd559d0#033[00m
Oct  7 10:53:24 np0005473739 systemd-udevd[418595]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.639 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43e753f2-df80-4f68-acce-6f5c67bf71d0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.641 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb7b9d978-a1 in ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.643 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb7b9d978-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.643 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8ff57bad-1245-49fe-9988-45a89a2cdd65]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.644 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[90847988-6af7-490c-8f8d-e1dcf2db716a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 systemd-machined[214580]: New machine qemu-178-instance-00000090.
Oct  7 10:53:24 np0005473739 NetworkManager[44949]: <info>  [1759848804.6534] device (tap78d82e05-f9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:53:24 np0005473739 NetworkManager[44949]: <info>  [1759848804.6551] device (tap78d82e05-f9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.661 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[1b8165ea-c041-4d2c-8ddb-4390a8d07c52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.672 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:24 np0005473739 systemd[1]: Started Virtual Machine qemu-178-instance-00000090.
Oct  7 10:53:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:53:24Z|01579|binding|INFO|Setting lport 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 ovn-installed in OVS
Oct  7 10:53:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:53:24Z|01580|binding|INFO|Setting lport 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 up in Southbound
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.688 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1d7968a7-1973-4044-b333-a6775cacfc66]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.719 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2ebdaef3-cfd0-4268-a2f2-5e4be6400c79]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.724 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e7e92d98-8c63-4dc5-a8a7-48441ec0f62b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 NetworkManager[44949]: <info>  [1759848804.7258] manager: (tapb7b9d978-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/635)
Oct  7 10:53:24 np0005473739 systemd-udevd[418599]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.760 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[a28d9d99-a9c7-4014-aa3a-1f0d3f71f508]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.766 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[71935adc-cd62-40cc-a40d-a36ba8a7dfe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2735: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:53:24 np0005473739 NetworkManager[44949]: <info>  [1759848804.7916] device (tapb7b9d978-a0): carrier: link connected
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.799 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b3febb33-764f-475f-9904-0b26e1ac2c5e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.820 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df25c65b-5dae-4285-96df-00f6547872d7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7b9d978-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:47:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938835, 'reachable_time': 42170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 418628, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.840 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[95e1b576-cc48-4479-bb1f-9d71fdbfce15]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feac:471e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 938835, 'tstamp': 938835}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 418629, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.861 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[df3f8ec7-73cf-4e3a-acbd-b8da7330289f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7b9d978-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:47:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938835, 'reachable_time': 42170, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 418630, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.904 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9904787b-2056-49cc-8445-e0296a7ba0d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.931 2 DEBUG nova.compute.manager [req-2b383cb7-56f8-48fc-b0db-8ff8d69fb781 req-a9a57a7a-8c75-423d-812d-f4dc8b599e16 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.932 2 DEBUG oslo_concurrency.lockutils [req-2b383cb7-56f8-48fc-b0db-8ff8d69fb781 req-a9a57a7a-8c75-423d-812d-f4dc8b599e16 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.932 2 DEBUG oslo_concurrency.lockutils [req-2b383cb7-56f8-48fc-b0db-8ff8d69fb781 req-a9a57a7a-8c75-423d-812d-f4dc8b599e16 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.932 2 DEBUG oslo_concurrency.lockutils [req-2b383cb7-56f8-48fc-b0db-8ff8d69fb781 req-a9a57a7a-8c75-423d-812d-f4dc8b599e16 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:24 np0005473739 nova_compute[259550]: 2025-10-07 14:53:24.933 2 DEBUG nova.compute.manager [req-2b383cb7-56f8-48fc-b0db-8ff8d69fb781 req-a9a57a7a-8c75-423d-812d-f4dc8b599e16 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Processing event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.970 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3a328995-f97a-4369-b041-6b8536d9e5ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7b9d978-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:53:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:24.972 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7b9d978-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:53:25 np0005473739 NetworkManager[44949]: <info>  [1759848805.0686] manager: (tapb7b9d978-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/636)
Oct  7 10:53:25 np0005473739 kernel: tapb7b9d978-a0: entered promiscuous mode
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:25.072 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7b9d978-a0, col_values=(('external_ids', {'iface-id': '844b358c-727e-4661-9ab8-5cf03a82eb82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:53:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:53:25Z|01581|binding|INFO|Releasing lport 844b358c-727e-4661-9ab8-5cf03a82eb82 from this chassis (sb_readonly=0)
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:25.086 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b7b9d978-a319-4f4b-b8e1-4891fcd559d0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b7b9d978-a319-4f4b-b8e1-4891fcd559d0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:25.087 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[beb8872f-858e-41f2-8aa1-db0165f8455a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:25.087 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-b7b9d978-a319-4f4b-b8e1-4891fcd559d0
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/b7b9d978-a319-4f4b-b8e1-4891fcd559d0.pid.haproxy
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID b7b9d978-a319-4f4b-b8e1-4891fcd559d0
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:53:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:25.088 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'env', 'PROCESS_TAG=haproxy-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b7b9d978-a319-4f4b-b8e1-4891fcd559d0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:53:25 np0005473739 podman[418703]: 2025-10-07 14:53:25.476356067 +0000 UTC m=+0.063907289 container create b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  7 10:53:25 np0005473739 systemd[1]: Started libpod-conmon-b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047.scope.
Oct  7 10:53:25 np0005473739 podman[418703]: 2025-10-07 14:53:25.439328447 +0000 UTC m=+0.026879689 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:53:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:53:25 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ced5dd1aff1b8cd3c8452ea0fb4c0158258e892a5ae20c9c27856c33d2c1bdf8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:53:25 np0005473739 podman[418703]: 2025-10-07 14:53:25.580626694 +0000 UTC m=+0.168177916 container init b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:53:25 np0005473739 podman[418703]: 2025-10-07 14:53:25.588807729 +0000 UTC m=+0.176358951 container start b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:53:25 np0005473739 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [NOTICE]   (418723) : New worker (418725) forked
Oct  7 10:53:25 np0005473739 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [NOTICE]   (418723) : Loading success.
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.918 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848805.9184005, 5360ad21-c174-468a-9be2-d82d672f2911 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.919 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] VM Started (Lifecycle Event)#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.921 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.926 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.930 2 INFO nova.virt.libvirt.driver [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Instance spawned successfully.#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.930 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.948 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.956 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.960 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.960 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.961 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.961 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.961 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.961 2 DEBUG nova.virt.libvirt.driver [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.997 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.998 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848805.921185, 5360ad21-c174-468a-9be2-d82d672f2911 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:53:25 np0005473739 nova_compute[259550]: 2025-10-07 14:53:25.998 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:53:26 np0005473739 nova_compute[259550]: 2025-10-07 14:53:26.033 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:53:26 np0005473739 nova_compute[259550]: 2025-10-07 14:53:26.036 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848805.9243731, 5360ad21-c174-468a-9be2-d82d672f2911 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:53:26 np0005473739 nova_compute[259550]: 2025-10-07 14:53:26.037 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:53:26 np0005473739 nova_compute[259550]: 2025-10-07 14:53:26.040 2 INFO nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Took 9.95 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:53:26 np0005473739 nova_compute[259550]: 2025-10-07 14:53:26.041 2 DEBUG nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:53:26 np0005473739 nova_compute[259550]: 2025-10-07 14:53:26.053 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:53:26 np0005473739 nova_compute[259550]: 2025-10-07 14:53:26.056 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:53:26 np0005473739 nova_compute[259550]: 2025-10-07 14:53:26.094 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:53:26 np0005473739 nova_compute[259550]: 2025-10-07 14:53:26.115 2 INFO nova.compute.manager [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Took 11.18 seconds to build instance.#033[00m
Oct  7 10:53:26 np0005473739 nova_compute[259550]: 2025-10-07 14:53:26.132 2 DEBUG oslo_concurrency.lockutils [None req-9114d0ae-1525-4fac-9122-8374b44a0515 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.342s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2736: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.8 MiB/s wr, 31 op/s
Oct  7 10:53:27 np0005473739 nova_compute[259550]: 2025-10-07 14:53:27.021 2 DEBUG nova.compute.manager [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:53:27 np0005473739 nova_compute[259550]: 2025-10-07 14:53:27.021 2 DEBUG oslo_concurrency.lockutils [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:27 np0005473739 nova_compute[259550]: 2025-10-07 14:53:27.022 2 DEBUG oslo_concurrency.lockutils [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:27 np0005473739 nova_compute[259550]: 2025-10-07 14:53:27.022 2 DEBUG oslo_concurrency.lockutils [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:27 np0005473739 nova_compute[259550]: 2025-10-07 14:53:27.022 2 DEBUG nova.compute.manager [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] No waiting events found dispatching network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:53:27 np0005473739 nova_compute[259550]: 2025-10-07 14:53:27.023 2 WARNING nova.compute.manager [req-67bd9490-0b3e-4bc9-a3da-eb909d0ad750 req-64202359-85f2-4644-a250-202275262732 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received unexpected event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:53:28 np0005473739 nova_compute[259550]: 2025-10-07 14:53:28.434 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:28 np0005473739 nova_compute[259550]: 2025-10-07 14:53:28.479 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2737: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 85 op/s
Oct  7 10:53:29 np0005473739 NetworkManager[44949]: <info>  [1759848809.5035] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/637)
Oct  7 10:53:29 np0005473739 NetworkManager[44949]: <info>  [1759848809.5045] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/638)
Oct  7 10:53:29 np0005473739 nova_compute[259550]: 2025-10-07 14:53:29.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:53:29Z|01582|binding|INFO|Releasing lport 844b358c-727e-4661-9ab8-5cf03a82eb82 from this chassis (sb_readonly=0)
Oct  7 10:53:29 np0005473739 nova_compute[259550]: 2025-10-07 14:53:29.557 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:29 np0005473739 ovn_controller[151684]: 2025-10-07T14:53:29Z|01583|binding|INFO|Releasing lport 844b358c-727e-4661-9ab8-5cf03a82eb82 from this chassis (sb_readonly=0)
Oct  7 10:53:29 np0005473739 nova_compute[259550]: 2025-10-07 14:53:29.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:30 np0005473739 nova_compute[259550]: 2025-10-07 14:53:30.014 2 DEBUG nova.compute.manager [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:53:30 np0005473739 nova_compute[259550]: 2025-10-07 14:53:30.014 2 DEBUG nova.compute.manager [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing instance network info cache due to event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:53:30 np0005473739 nova_compute[259550]: 2025-10-07 14:53:30.014 2 DEBUG oslo_concurrency.lockutils [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:53:30 np0005473739 nova_compute[259550]: 2025-10-07 14:53:30.015 2 DEBUG oslo_concurrency.lockutils [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:53:30 np0005473739 nova_compute[259550]: 2025-10-07 14:53:30.015 2 DEBUG nova.network.neutron [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:53:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:30.038 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=53, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=52) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:53:30 np0005473739 nova_compute[259550]: 2025-10-07 14:53:30.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:30 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:30.040 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:53:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2738: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 379 KiB/s wr, 76 op/s
Oct  7 10:53:31 np0005473739 nova_compute[259550]: 2025-10-07 14:53:31.463 2 DEBUG nova.network.neutron [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updated VIF entry in instance network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:53:31 np0005473739 nova_compute[259550]: 2025-10-07 14:53:31.464 2 DEBUG nova.network.neutron [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:53:31 np0005473739 nova_compute[259550]: 2025-10-07 14:53:31.645 2 DEBUG oslo_concurrency.lockutils [req-227aae49-99d4-491a-847a-d41d7d9dfcef req-926d1705-ea86-4c23-ad1f-996c067248d3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:53:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:53:32.042 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '53'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:53:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:53:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/97945509' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:53:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:53:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/97945509' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2739: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 75 op/s
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:53:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:53:33 np0005473739 nova_compute[259550]: 2025-10-07 14:53:33.436 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:33 np0005473739 nova_compute[259550]: 2025-10-07 14:53:33.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2740: 305 pgs: 305 active+clean; 88 MiB data, 987 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 86 op/s
Oct  7 10:53:35 np0005473739 podman[418736]: 2025-10-07 14:53:35.071881084 +0000 UTC m=+0.061927972 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 10:53:35 np0005473739 podman[418737]: 2025-10-07 14:53:35.117556458 +0000 UTC m=+0.102263110 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:53:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2741: 305 pgs: 305 active+clean; 88 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 92 op/s
Oct  7 10:53:38 np0005473739 nova_compute[259550]: 2025-10-07 14:53:38.440 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:38 np0005473739 nova_compute[259550]: 2025-10-07 14:53:38.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2742: 305 pgs: 305 active+clean; 96 MiB data, 1002 MiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 931 KiB/s wr, 111 op/s
Oct  7 10:53:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:53:40Z|00194|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7f:b1:b3 10.100.0.6
Oct  7 10:53:40 np0005473739 ovn_controller[151684]: 2025-10-07T14:53:40Z|00195|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7f:b1:b3 10.100.0.6
Oct  7 10:53:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2743: 305 pgs: 305 active+clean; 101 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 483 KiB/s rd, 1.6 MiB/s wr, 71 op/s
Oct  7 10:53:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2744: 305 pgs: 305 active+clean; 101 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 56 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Oct  7 10:53:43 np0005473739 nova_compute[259550]: 2025-10-07 14:53:43.442 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:43 np0005473739 nova_compute[259550]: 2025-10-07 14:53:43.484 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2745: 305 pgs: 305 active+clean; 117 MiB data, 1015 MiB used, 59 GiB / 60 GiB avail; 354 KiB/s rd, 2.1 MiB/s wr, 111 op/s
Oct  7 10:53:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2746: 305 pgs: 305 active+clean; 121 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 108 op/s
Oct  7 10:53:48 np0005473739 nova_compute[259550]: 2025-10-07 14:53:48.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:48 np0005473739 nova_compute[259550]: 2025-10-07 14:53:48.485 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2747: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 350 KiB/s rd, 2.1 MiB/s wr, 105 op/s
Oct  7 10:53:50 np0005473739 nova_compute[259550]: 2025-10-07 14:53:50.512 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:50 np0005473739 nova_compute[259550]: 2025-10-07 14:53:50.513 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:50 np0005473739 nova_compute[259550]: 2025-10-07 14:53:50.533 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:53:50 np0005473739 nova_compute[259550]: 2025-10-07 14:53:50.657 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:50 np0005473739 nova_compute[259550]: 2025-10-07 14:53:50.657 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:50 np0005473739 nova_compute[259550]: 2025-10-07 14:53:50.672 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:53:50 np0005473739 nova_compute[259550]: 2025-10-07 14:53:50.672 2 INFO nova.compute.claims [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:53:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2748: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 319 KiB/s rd, 1.2 MiB/s wr, 82 op/s
Oct  7 10:53:50 np0005473739 nova_compute[259550]: 2025-10-07 14:53:50.785 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:53:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2043452326' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.246 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.255 2 DEBUG nova.compute.provider_tree [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.297 2 DEBUG nova.scheduler.client.report [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.444 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.444 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.643 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.643 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.693 2 INFO nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.737 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:53:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.862 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.863 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.864 2 INFO nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Creating image(s)#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.889 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.926 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.948 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:51 np0005473739 nova_compute[259550]: 2025-10-07 14:53:51.951 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:52 np0005473739 nova_compute[259550]: 2025-10-07 14:53:52.033 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:52 np0005473739 nova_compute[259550]: 2025-10-07 14:53:52.034 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:52 np0005473739 nova_compute[259550]: 2025-10-07 14:53:52.035 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:52 np0005473739 nova_compute[259550]: 2025-10-07 14:53:52.035 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:52 np0005473739 nova_compute[259550]: 2025-10-07 14:53:52.061 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:52 np0005473739 nova_compute[259550]: 2025-10-07 14:53:52.068 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0d0450f4-efd0-4508-8aca-28b5d858c397_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:52 np0005473739 podman[418861]: 2025-10-07 14:53:52.084607246 +0000 UTC m=+0.067031203 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251001)
Oct  7 10:53:52 np0005473739 podman[418862]: 2025-10-07 14:53:52.113120803 +0000 UTC m=+0.093813659 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Oct  7 10:53:52 np0005473739 nova_compute[259550]: 2025-10-07 14:53:52.210 2 DEBUG nova.policy [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:53:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:53:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2749: 305 pgs: 305 active+clean; 121 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 305 KiB/s rd, 599 KiB/s wr, 68 op/s
Oct  7 10:53:53 np0005473739 nova_compute[259550]: 2025-10-07 14:53:53.125 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 0d0450f4-efd0-4508-8aca-28b5d858c397_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:53 np0005473739 nova_compute[259550]: 2025-10-07 14:53:53.184 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:53:53 np0005473739 nova_compute[259550]: 2025-10-07 14:53:53.299 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Successfully created port: ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:53:53 np0005473739 nova_compute[259550]: 2025-10-07 14:53:53.446 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:53 np0005473739 nova_compute[259550]: 2025-10-07 14:53:53.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:53 np0005473739 nova_compute[259550]: 2025-10-07 14:53:53.864 2 DEBUG nova.objects.instance [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 0d0450f4-efd0-4508-8aca-28b5d858c397 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:53:53 np0005473739 nova_compute[259550]: 2025-10-07 14:53:53.945 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:53:53 np0005473739 nova_compute[259550]: 2025-10-07 14:53:53.946 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Ensure instance console log exists: /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:53:53 np0005473739 nova_compute[259550]: 2025-10-07 14:53:53.947 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:53 np0005473739 nova_compute[259550]: 2025-10-07 14:53:53.947 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:53 np0005473739 nova_compute[259550]: 2025-10-07 14:53:53.947 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2750: 305 pgs: 305 active+clean; 142 MiB data, 1019 MiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 1.8 MiB/s wr, 78 op/s
Oct  7 10:53:56 np0005473739 nova_compute[259550]: 2025-10-07 14:53:56.408 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Successfully updated port: ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:53:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:53:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2751: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 22 KiB/s rd, 1.8 MiB/s wr, 35 op/s
Oct  7 10:53:56 np0005473739 nova_compute[259550]: 2025-10-07 14:53:56.823 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:53:56 np0005473739 nova_compute[259550]: 2025-10-07 14:53:56.824 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:53:56 np0005473739 nova_compute[259550]: 2025-10-07 14:53:56.824 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:53:56 np0005473739 nova_compute[259550]: 2025-10-07 14:53:56.853 2 DEBUG nova.compute.manager [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-changed-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:53:56 np0005473739 nova_compute[259550]: 2025-10-07 14:53:56.854 2 DEBUG nova.compute.manager [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Refreshing instance network info cache due to event network-changed-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:53:56 np0005473739 nova_compute[259550]: 2025-10-07 14:53:56.854 2 DEBUG oslo_concurrency.lockutils [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:53:57 np0005473739 nova_compute[259550]: 2025-10-07 14:53:57.182 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.187 2 DEBUG nova.network.neutron [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Updating instance_info_cache with network_info: [{"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.455 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.456 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Instance network_info: |[{"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.456 2 DEBUG oslo_concurrency.lockutils [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.456 2 DEBUG nova.network.neutron [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Refreshing network info cache for port ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.460 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Start _get_guest_xml network_info=[{"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.466 2 WARNING nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.474 2 DEBUG nova.virt.libvirt.host [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.476 2 DEBUG nova.virt.libvirt.host [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.481 2 DEBUG nova.virt.libvirt.host [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.481 2 DEBUG nova.virt.libvirt.host [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.482 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.483 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.483 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.484 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.484 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.485 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.485 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.486 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.486 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.487 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.487 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.488 2 DEBUG nova.virt.hardware [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.493 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2752: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct  7 10:53:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:53:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2310903720' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:53:58 np0005473739 nova_compute[259550]: 2025-10-07 14:53:58.979 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.486s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.004 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.008 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:53:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:53:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2387551868' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.455 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.447s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.457 2 DEBUG nova.virt.libvirt.vif [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:53:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=145,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-rvfyeg7d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:53:51Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=0d0450f4-efd0-4508-8aca-28b5d858c397,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.457 2 DEBUG nova.network.os_vif_util [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.458 2 DEBUG nova.network.os_vif_util [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.459 2 DEBUG nova.objects.instance [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 0d0450f4-efd0-4508-8aca-28b5d858c397 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.564 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  <uuid>0d0450f4-efd0-4508-8aca-28b5d858c397</uuid>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  <name>instance-00000091</name>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577</nova:name>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:53:58</nova:creationTime>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <nova:port uuid="ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <entry name="serial">0d0450f4-efd0-4508-8aca-28b5d858c397</entry>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <entry name="uuid">0d0450f4-efd0-4508-8aca-28b5d858c397</entry>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/0d0450f4-efd0-4508-8aca-28b5d858c397_disk">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:8a:bd:28"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <target dev="tapba7ffd6d-6a"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/console.log" append="off"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:53:59 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:53:59 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:53:59 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:53:59 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.566 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Preparing to wait for external event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.567 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.567 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.567 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.568 2 DEBUG nova.virt.libvirt.vif [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:53:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=145,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-rvfyeg7d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:53:51Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=0d0450f4-efd0-4508-8aca-28b5d858c397,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.568 2 DEBUG nova.network.os_vif_util [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.569 2 DEBUG nova.network.os_vif_util [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.570 2 DEBUG os_vif [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.571 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.571 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapba7ffd6d-6a, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.577 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapba7ffd6d-6a, col_values=(('external_ids', {'iface-id': 'ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:8a:bd:28', 'vm-uuid': '0d0450f4-efd0-4508-8aca-28b5d858c397'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:53:59 np0005473739 NetworkManager[44949]: <info>  [1759848839.5805] manager: (tapba7ffd6d-6a): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/639)
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.586 2 INFO os_vif [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a')#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.948 2 DEBUG nova.network.neutron [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Updated VIF entry in instance network info cache for port ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:53:59 np0005473739 nova_compute[259550]: 2025-10-07 14:53:59.949 2 DEBUG nova.network.neutron [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Updating instance_info_cache with network_info: [{"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:54:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:00.090 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:00.091 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:00.092 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:00 np0005473739 nova_compute[259550]: 2025-10-07 14:54:00.118 2 DEBUG oslo_concurrency.lockutils [req-22c3e85c-cab0-4641-a8e9-f1cb262b3b82 req-55affff8-18a7-410e-9d2c-223f656cb75f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-0d0450f4-efd0-4508-8aca-28b5d858c397" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:54:00 np0005473739 nova_compute[259550]: 2025-10-07 14:54:00.273 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:54:00 np0005473739 nova_compute[259550]: 2025-10-07 14:54:00.273 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:54:00 np0005473739 nova_compute[259550]: 2025-10-07 14:54:00.274 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:8a:bd:28, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:54:00 np0005473739 nova_compute[259550]: 2025-10-07 14:54:00.275 2 INFO nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Using config drive#033[00m
Oct  7 10:54:00 np0005473739 nova_compute[259550]: 2025-10-07 14:54:00.305 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:54:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2753: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:54:01 np0005473739 nova_compute[259550]: 2025-10-07 14:54:01.311 2 INFO nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Creating config drive at /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config#033[00m
Oct  7 10:54:01 np0005473739 nova_compute[259550]: 2025-10-07 14:54:01.316 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpjlzgu0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:54:01 np0005473739 nova_compute[259550]: 2025-10-07 14:54:01.458 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphpjlzgu0" returned: 0 in 0.142s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:54:01 np0005473739 nova_compute[259550]: 2025-10-07 14:54:01.490 2 DEBUG nova.storage.rbd_utils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:54:01 np0005473739 nova_compute[259550]: 2025-10-07 14:54:01.496 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config 0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:01 np0005473739 nova_compute[259550]: 2025-10-07 14:54:01.803 2 DEBUG oslo_concurrency.processutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config 0d0450f4-efd0-4508-8aca-28b5d858c397_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.307s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:54:01 np0005473739 nova_compute[259550]: 2025-10-07 14:54:01.805 2 INFO nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Deleting local config drive /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397/disk.config because it was imported into RBD.#033[00m
Oct  7 10:54:01 np0005473739 NetworkManager[44949]: <info>  [1759848841.8627] manager: (tapba7ffd6d-6a): new Tun device (/org/freedesktop/NetworkManager/Devices/640)
Oct  7 10:54:01 np0005473739 kernel: tapba7ffd6d-6a: entered promiscuous mode
Oct  7 10:54:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:01Z|01584|binding|INFO|Claiming lport ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 for this chassis.
Oct  7 10:54:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:01Z|01585|binding|INFO|ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57: Claiming fa:16:3e:8a:bd:28 10.100.0.8
Oct  7 10:54:01 np0005473739 nova_compute[259550]: 2025-10-07 14:54:01.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:01 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:01Z|01586|binding|INFO|Setting lport ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 ovn-installed in OVS
Oct  7 10:54:01 np0005473739 nova_compute[259550]: 2025-10-07 14:54:01.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:01 np0005473739 nova_compute[259550]: 2025-10-07 14:54:01.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:01 np0005473739 systemd-machined[214580]: New machine qemu-179-instance-00000091.
Oct  7 10:54:01 np0005473739 systemd[1]: Started Virtual Machine qemu-179-instance-00000091.
Oct  7 10:54:01 np0005473739 systemd-udevd[419278]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:54:01 np0005473739 NetworkManager[44949]: <info>  [1759848841.9307] device (tapba7ffd6d-6a): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:54:01 np0005473739 NetworkManager[44949]: <info>  [1759848841.9315] device (tapba7ffd6d-6a): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:54:01 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cb020fe9-7f22-4ebb-812f-141189117c6b does not exist
Oct  7 10:54:01 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 6d1f556c-d670-4b7d-872f-f9eb687b22be does not exist
Oct  7 10:54:01 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 6f33e89c-6fac-43c5-adab-dde97e6dadfe does not exist
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:54:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:54:02 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:02Z|01587|binding|INFO|Setting lport ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 up in Southbound
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.191 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:bd:28 10.100.0.8'], port_security=['fa:16:3e:8a:bd:28 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0d0450f4-efd0-4508-8aca-28b5d858c397', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'fb064d27-a242-4db9-8bed-79adb9fb8cf9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8255c2c5-7755-43fa-85c6-c4d3fcdeecda, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.193 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 in datapath b7b9d978-a319-4f4b-b8e1-4891fcd559d0 bound to our chassis#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.194 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7b9d978-a319-4f4b-b8e1-4891fcd559d0#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.212 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d4a0ce-7376-48cd-9936-c52b5ec4f91a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.242 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[31fb85e9-526b-4282-a811-fa8358404e32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.245 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5af994a5-d43a-47ec-bf87-3bf918c6f29a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.273 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5631bc03-d1c2-4c8b-a156-8f97afb0cae7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.293 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[07354c73-177f-4b3e-bf84-222408f7a0c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7b9d978-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:47:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938835, 'reachable_time': 44639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 419392, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.315 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43e7c066-a140-40af-9bf5-346bfff99733]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb7b9d978-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 938849, 'tstamp': 938849}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419393, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb7b9d978-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 938852, 'tstamp': 938852}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 419393, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.317 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7b9d978-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:54:02 np0005473739 nova_compute[259550]: 2025-10-07 14:54:02.319 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.320 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7b9d978-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.321 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.321 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7b9d978-a0, col_values=(('external_ids', {'iface-id': '844b358c-727e-4661-9ab8-5cf03a82eb82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:54:02 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:02.321 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:54:02 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:54:02 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:54:02 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:54:02 np0005473739 podman[419435]: 2025-10-07 14:54:02.525172296 +0000 UTC m=+0.021847870 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:54:02 np0005473739 podman[419435]: 2025-10-07 14:54:02.676161363 +0000 UTC m=+0.172836927 container create d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True)
Oct  7 10:54:02 np0005473739 systemd[1]: Started libpod-conmon-d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f.scope.
Oct  7 10:54:02 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:54:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2754: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:54:02 np0005473739 podman[419435]: 2025-10-07 14:54:02.818841003 +0000 UTC m=+0.315516577 container init d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:54:02 np0005473739 podman[419435]: 2025-10-07 14:54:02.826266959 +0000 UTC m=+0.322942513 container start d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:54:02 np0005473739 elegant_mayer[419487]: 167 167
Oct  7 10:54:02 np0005473739 systemd[1]: libpod-d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f.scope: Deactivated successfully.
Oct  7 10:54:02 np0005473739 podman[419435]: 2025-10-07 14:54:02.842952915 +0000 UTC m=+0.339628489 container attach d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:54:02 np0005473739 podman[419435]: 2025-10-07 14:54:02.843324095 +0000 UTC m=+0.339999649 container died d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:54:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-85db81374a0c3de4464ff227b48d241810a9bf53fba1032719a273f2cfa98ed9-merged.mount: Deactivated successfully.
Oct  7 10:54:02 np0005473739 podman[419435]: 2025-10-07 14:54:02.919026053 +0000 UTC m=+0.415701607 container remove d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_mayer, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:54:02 np0005473739 systemd[1]: libpod-conmon-d84ed425172dea7c8d736e5c009e58a6b008b215d6a5404025e61fa3a436350f.scope: Deactivated successfully.
Oct  7 10:54:02 np0005473739 nova_compute[259550]: 2025-10-07 14:54:02.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:54:03 np0005473739 podman[419517]: 2025-10-07 14:54:03.116716009 +0000 UTC m=+0.055499639 container create 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:54:03 np0005473739 systemd[1]: Started libpod-conmon-13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558.scope.
Oct  7 10:54:03 np0005473739 podman[419517]: 2025-10-07 14:54:03.088452318 +0000 UTC m=+0.027235948 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:54:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:54:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:03 np0005473739 podman[419517]: 2025-10-07 14:54:03.286056122 +0000 UTC m=+0.224839782 container init 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 10:54:03 np0005473739 podman[419517]: 2025-10-07 14:54:03.296102951 +0000 UTC m=+0.234886581 container start 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 10:54:03 np0005473739 podman[419517]: 2025-10-07 14:54:03.326050032 +0000 UTC m=+0.264833692 container attach 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.353 2 DEBUG nova.compute.manager [req-86c4675d-e2a9-4038-8b10-f763a0b736cb req-da25cdf6-f85b-462e-b788-950d3d93921c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.355 2 DEBUG oslo_concurrency.lockutils [req-86c4675d-e2a9-4038-8b10-f763a0b736cb req-da25cdf6-f85b-462e-b788-950d3d93921c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.355 2 DEBUG oslo_concurrency.lockutils [req-86c4675d-e2a9-4038-8b10-f763a0b736cb req-da25cdf6-f85b-462e-b788-950d3d93921c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.356 2 DEBUG oslo_concurrency.lockutils [req-86c4675d-e2a9-4038-8b10-f763a0b736cb req-da25cdf6-f85b-462e-b788-950d3d93921c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.356 2 DEBUG nova.compute.manager [req-86c4675d-e2a9-4038-8b10-f763a0b736cb req-da25cdf6-f85b-462e-b788-950d3d93921c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Processing event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.378 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848843.3777244, 0d0450f4-efd0-4508-8aca-28b5d858c397 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.379 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] VM Started (Lifecycle Event)#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.380 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.390 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.394 2 INFO nova.virt.libvirt.driver [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Instance spawned successfully.#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.394 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.412 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.417 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.452 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.631 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.632 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848843.3778427, 0d0450f4-efd0-4508-8aca-28b5d858c397 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.632 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.640 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.642 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.642 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.643 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.643 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.644 2 DEBUG nova.virt.libvirt.driver [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.943 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.945 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848843.3844037, 0d0450f4-efd0-4508-8aca-28b5d858c397 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:54:03 np0005473739 nova_compute[259550]: 2025-10-07 14:54:03.946 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:54:04 np0005473739 nova_compute[259550]: 2025-10-07 14:54:04.094 2 INFO nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Took 12.23 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:54:04 np0005473739 nova_compute[259550]: 2025-10-07 14:54:04.095 2 DEBUG nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:54:04 np0005473739 nova_compute[259550]: 2025-10-07 14:54:04.103 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:54:04 np0005473739 nova_compute[259550]: 2025-10-07 14:54:04.110 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:54:04 np0005473739 blissful_lumiere[419533]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:54:04 np0005473739 blissful_lumiere[419533]: --> relative data size: 1.0
Oct  7 10:54:04 np0005473739 blissful_lumiere[419533]: --> All data devices are unavailable
Oct  7 10:54:04 np0005473739 systemd[1]: libpod-13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558.scope: Deactivated successfully.
Oct  7 10:54:04 np0005473739 podman[419517]: 2025-10-07 14:54:04.491666733 +0000 UTC m=+1.430450373 container died 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 10:54:04 np0005473739 systemd[1]: libpod-13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558.scope: Consumed 1.111s CPU time.
Oct  7 10:54:04 np0005473739 nova_compute[259550]: 2025-10-07 14:54:04.537 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:54:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ba2116272e1e114924649c3e361a99f5e2910d3887817702d0fc803eca3d9eb5-merged.mount: Deactivated successfully.
Oct  7 10:54:04 np0005473739 nova_compute[259550]: 2025-10-07 14:54:04.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:04 np0005473739 podman[419517]: 2025-10-07 14:54:04.591128786 +0000 UTC m=+1.529912416 container remove 13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=blissful_lumiere, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0)
Oct  7 10:54:04 np0005473739 systemd[1]: libpod-conmon-13bf054d4e24be125c302f621a256692ee78747694e6cedd35f013255da66558.scope: Deactivated successfully.
Oct  7 10:54:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2755: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.8 MiB/s wr, 36 op/s
Oct  7 10:54:04 np0005473739 nova_compute[259550]: 2025-10-07 14:54:04.890 2 INFO nova.compute.manager [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Took 14.31 seconds to build instance.#033[00m
Oct  7 10:54:04 np0005473739 nova_compute[259550]: 2025-10-07 14:54:04.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:54:05 np0005473739 nova_compute[259550]: 2025-10-07 14:54:05.135 2 DEBUG oslo_concurrency.lockutils [None req-b5d27895-b006-4cc7-b362-ae670a5ea862 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:05 np0005473739 podman[419713]: 2025-10-07 14:54:05.226056169 +0000 UTC m=+0.053593524 container create e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:54:05 np0005473739 systemd[1]: Started libpod-conmon-e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256.scope.
Oct  7 10:54:05 np0005473739 podman[419713]: 2025-10-07 14:54:05.195818931 +0000 UTC m=+0.023356296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:54:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:54:05 np0005473739 podman[419713]: 2025-10-07 14:54:05.333207375 +0000 UTC m=+0.160744760 container init e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:54:05 np0005473739 podman[419713]: 2025-10-07 14:54:05.341666176 +0000 UTC m=+0.169203531 container start e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 10:54:05 np0005473739 modest_ellis[419748]: 167 167
Oct  7 10:54:05 np0005473739 systemd[1]: libpod-e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256.scope: Deactivated successfully.
Oct  7 10:54:05 np0005473739 podman[419713]: 2025-10-07 14:54:05.365563494 +0000 UTC m=+0.193100849 container attach e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:54:05 np0005473739 podman[419713]: 2025-10-07 14:54:05.366596968 +0000 UTC m=+0.194134323 container died e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:54:05 np0005473739 podman[419730]: 2025-10-07 14:54:05.367211683 +0000 UTC m=+0.101790830 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251001)
Oct  7 10:54:05 np0005473739 podman[419727]: 2025-10-07 14:54:05.390130148 +0000 UTC m=+0.124845088 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:54:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-86474028c34c60b1a022316c5d7a5d4e3550e08a00a079251a43017268332cd2-merged.mount: Deactivated successfully.
Oct  7 10:54:05 np0005473739 podman[419713]: 2025-10-07 14:54:05.453997624 +0000 UTC m=+0.281534989 container remove e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_ellis, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:54:05 np0005473739 systemd[1]: libpod-conmon-e283de28d54604c047c9ffe19dea22bf55bd80931a89bf9090e54fb81f97f256.scope: Deactivated successfully.
Oct  7 10:54:05 np0005473739 nova_compute[259550]: 2025-10-07 14:54:05.465 2 DEBUG nova.compute.manager [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:54:05 np0005473739 nova_compute[259550]: 2025-10-07 14:54:05.467 2 DEBUG oslo_concurrency.lockutils [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:05 np0005473739 nova_compute[259550]: 2025-10-07 14:54:05.467 2 DEBUG oslo_concurrency.lockutils [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:05 np0005473739 nova_compute[259550]: 2025-10-07 14:54:05.467 2 DEBUG oslo_concurrency.lockutils [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:05 np0005473739 nova_compute[259550]: 2025-10-07 14:54:05.468 2 DEBUG nova.compute.manager [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] No waiting events found dispatching network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:54:05 np0005473739 nova_compute[259550]: 2025-10-07 14:54:05.468 2 WARNING nova.compute.manager [req-7c496edb-aa6e-4dc7-b3dc-792a5fd82a3b req-01905b56-83fc-4dc8-bdce-362414fca55f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received unexpected event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:54:05 np0005473739 podman[419795]: 2025-10-07 14:54:05.698910063 +0000 UTC m=+0.080183646 container create 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:54:05 np0005473739 podman[419795]: 2025-10-07 14:54:05.643171448 +0000 UTC m=+0.024445051 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:54:05 np0005473739 systemd[1]: Started libpod-conmon-44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33.scope.
Oct  7 10:54:05 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:54:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d47deb399f6f906785e83a76f35f316f5579eef63db850682d1edd043a5e41/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d47deb399f6f906785e83a76f35f316f5579eef63db850682d1edd043a5e41/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d47deb399f6f906785e83a76f35f316f5579eef63db850682d1edd043a5e41/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:05 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49d47deb399f6f906785e83a76f35f316f5579eef63db850682d1edd043a5e41/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:05 np0005473739 podman[419795]: 2025-10-07 14:54:05.878078219 +0000 UTC m=+0.259351822 container init 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 10:54:05 np0005473739 podman[419795]: 2025-10-07 14:54:05.884819509 +0000 UTC m=+0.266093092 container start 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:54:05 np0005473739 podman[419795]: 2025-10-07 14:54:05.924160114 +0000 UTC m=+0.305433697 container attach 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]: {
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:    "0": [
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:        {
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "devices": [
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "/dev/loop3"
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            ],
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_name": "ceph_lv0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_size": "21470642176",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "name": "ceph_lv0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "tags": {
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.cluster_name": "ceph",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.crush_device_class": "",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.encrypted": "0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.osd_id": "0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.type": "block",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.vdo": "0"
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            },
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "type": "block",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "vg_name": "ceph_vg0"
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:        }
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:    ],
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:    "1": [
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:        {
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "devices": [
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "/dev/loop4"
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            ],
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_name": "ceph_lv1",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_size": "21470642176",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "name": "ceph_lv1",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "tags": {
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.cluster_name": "ceph",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.crush_device_class": "",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.encrypted": "0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.osd_id": "1",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.type": "block",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.vdo": "0"
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            },
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "type": "block",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "vg_name": "ceph_vg1"
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:        }
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:    ],
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:    "2": [
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:        {
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "devices": [
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "/dev/loop5"
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            ],
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_name": "ceph_lv2",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_size": "21470642176",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "name": "ceph_lv2",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "tags": {
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.cluster_name": "ceph",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.crush_device_class": "",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.encrypted": "0",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.osd_id": "2",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.type": "block",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:                "ceph.vdo": "0"
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            },
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "type": "block",
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:            "vg_name": "ceph_vg2"
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:        }
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]:    ]
Oct  7 10:54:06 np0005473739 admiring_bhaskara[419811]: }
Oct  7 10:54:06 np0005473739 systemd[1]: libpod-44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33.scope: Deactivated successfully.
Oct  7 10:54:06 np0005473739 podman[419795]: 2025-10-07 14:54:06.71912481 +0000 UTC m=+1.100398403 container died 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 10:54:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-49d47deb399f6f906785e83a76f35f316f5579eef63db850682d1edd043a5e41-merged.mount: Deactivated successfully.
Oct  7 10:54:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2756: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 558 KiB/s rd, 556 KiB/s wr, 45 op/s
Oct  7 10:54:06 np0005473739 podman[419795]: 2025-10-07 14:54:06.801394277 +0000 UTC m=+1.182667860 container remove 44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=admiring_bhaskara, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:54:06 np0005473739 systemd[1]: libpod-conmon-44da6b4cb9a01aac28943f1a10fdb454cc0c4359a77b0030f9d3c87f1246af33.scope: Deactivated successfully.
Oct  7 10:54:06 np0005473739 nova_compute[259550]: 2025-10-07 14:54:06.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:54:07 np0005473739 podman[419971]: 2025-10-07 14:54:07.441762655 +0000 UTC m=+0.075125760 container create bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:54:07 np0005473739 systemd[1]: Started libpod-conmon-bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f.scope.
Oct  7 10:54:07 np0005473739 podman[419971]: 2025-10-07 14:54:07.387853248 +0000 UTC m=+0.021216373 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:54:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:54:07 np0005473739 podman[419971]: 2025-10-07 14:54:07.54133829 +0000 UTC m=+0.174701415 container init bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:54:07 np0005473739 podman[419971]: 2025-10-07 14:54:07.549666221 +0000 UTC m=+0.183029326 container start bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 10:54:07 np0005473739 nervous_wu[419988]: 167 167
Oct  7 10:54:07 np0005473739 systemd[1]: libpod-bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f.scope: Deactivated successfully.
Oct  7 10:54:07 np0005473739 podman[419971]: 2025-10-07 14:54:07.5587132 +0000 UTC m=+0.192076325 container attach bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 10:54:07 np0005473739 podman[419971]: 2025-10-07 14:54:07.559378008 +0000 UTC m=+0.192741143 container died bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:54:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b18319f23850e9963f6d70c560db6198039b407c617d3fd315f8fb6293d8f698-merged.mount: Deactivated successfully.
Oct  7 10:54:07 np0005473739 podman[419971]: 2025-10-07 14:54:07.670854377 +0000 UTC m=+0.304217482 container remove bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_wu, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:54:07 np0005473739 systemd[1]: libpod-conmon-bd905385f3a9ca6bb569464591839d0fb5ebcd465bb2697b504b6ab2e86f019f.scope: Deactivated successfully.
Oct  7 10:54:07 np0005473739 podman[420013]: 2025-10-07 14:54:07.895609746 +0000 UTC m=+0.082332180 container create d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 10:54:07 np0005473739 podman[420013]: 2025-10-07 14:54:07.835802253 +0000 UTC m=+0.022524717 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:54:07 np0005473739 systemd[1]: Started libpod-conmon-d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc.scope.
Oct  7 10:54:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:54:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b0fd7663a9a9fc792743db45a464d0a7a02f689d21d6040518b4cc13ecf5e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b0fd7663a9a9fc792743db45a464d0a7a02f689d21d6040518b4cc13ecf5e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b0fd7663a9a9fc792743db45a464d0a7a02f689d21d6040518b4cc13ecf5e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f54b0fd7663a9a9fc792743db45a464d0a7a02f689d21d6040518b4cc13ecf5e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:54:07 np0005473739 nova_compute[259550]: 2025-10-07 14:54:07.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:54:08 np0005473739 podman[420013]: 2025-10-07 14:54:08.012685275 +0000 UTC m=+0.199407729 container init d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 10:54:08 np0005473739 podman[420013]: 2025-10-07 14:54:08.021016915 +0000 UTC m=+0.207739349 container start d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 10:54:08 np0005473739 podman[420013]: 2025-10-07 14:54:08.029870299 +0000 UTC m=+0.216592743 container attach d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:54:08 np0005473739 nova_compute[259550]: 2025-10-07 14:54:08.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2757: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 76 op/s
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]: {
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "osd_id": 2,
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "type": "bluestore"
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:    },
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "osd_id": 1,
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "type": "bluestore"
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:    },
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "osd_id": 0,
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:        "type": "bluestore"
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]:    }
Oct  7 10:54:09 np0005473739 adoring_mayer[420030]: }
Oct  7 10:54:09 np0005473739 systemd[1]: libpod-d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc.scope: Deactivated successfully.
Oct  7 10:54:09 np0005473739 systemd[1]: libpod-d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc.scope: Consumed 1.011s CPU time.
Oct  7 10:54:09 np0005473739 podman[420013]: 2025-10-07 14:54:09.040016632 +0000 UTC m=+1.226739076 container died d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 10:54:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f54b0fd7663a9a9fc792743db45a464d0a7a02f689d21d6040518b4cc13ecf5e-merged.mount: Deactivated successfully.
Oct  7 10:54:09 np0005473739 podman[420013]: 2025-10-07 14:54:09.168288197 +0000 UTC m=+1.355010631 container remove d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=adoring_mayer, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:54:09 np0005473739 systemd[1]: libpod-conmon-d2c00db30c5340e2a82a503eab9c9b0bd309e014f5a3a429ecb2932564e3eebc.scope: Deactivated successfully.
Oct  7 10:54:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:54:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:54:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:54:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:54:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 2a655268-3c0c-4254-bd7c-bc48764b8e01 does not exist
Oct  7 10:54:09 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 85d0db88-df97-4cde-96c7-02b195fcd014 does not exist
Oct  7 10:54:09 np0005473739 nova_compute[259550]: 2025-10-07 14:54:09.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:09 np0005473739 nova_compute[259550]: 2025-10-07 14:54:09.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:54:09 np0005473739 nova_compute[259550]: 2025-10-07 14:54:09.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:54:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:54:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:54:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2758: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  7 10:54:10 np0005473739 nova_compute[259550]: 2025-10-07 14:54:10.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:54:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2759: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:54:13 np0005473739 nova_compute[259550]: 2025-10-07 14:54:13.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:14 np0005473739 nova_compute[259550]: 2025-10-07 14:54:14.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2760: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 14 KiB/s wr, 74 op/s
Oct  7 10:54:15 np0005473739 nova_compute[259550]: 2025-10-07 14:54:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:54:16 np0005473739 nova_compute[259550]: 2025-10-07 14:54:16.098 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:16 np0005473739 nova_compute[259550]: 2025-10-07 14:54:16.099 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:16 np0005473739 nova_compute[259550]: 2025-10-07 14:54:16.099 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:16 np0005473739 nova_compute[259550]: 2025-10-07 14:54:16.099 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:54:16 np0005473739 nova_compute[259550]: 2025-10-07 14:54:16.099 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:54:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:54:16 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3612550605' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:54:16 np0005473739 nova_compute[259550]: 2025-10-07 14:54:16.566 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.467s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:54:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2761: 305 pgs: 305 active+clean; 169 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 473 KiB/s wr, 76 op/s
Oct  7 10:54:16 np0005473739 nova_compute[259550]: 2025-10-07 14:54:16.837 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:54:16 np0005473739 nova_compute[259550]: 2025-10-07 14:54:16.839 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000091 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:54:16 np0005473739 nova_compute[259550]: 2025-10-07 14:54:16.843 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:54:16 np0005473739 nova_compute[259550]: 2025-10-07 14:54:16.843 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000090 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.042 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.044 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3244MB free_disk=59.92183303833008GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.044 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.044 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.528 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 5360ad21-c174-468a-9be2-d82d672f2911 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.529 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 0d0450f4-efd0-4508-8aca-28b5d858c397 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.529 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.529 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.585 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.656 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.656 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.671 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.691 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 10:54:17 np0005473739 nova_compute[259550]: 2025-10-07 14:54:17.753 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:54:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:17Z|00196|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:8a:bd:28 10.100.0.8
Oct  7 10:54:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:17Z|00197|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:8a:bd:28 10.100.0.8
Oct  7 10:54:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:54:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3417912071' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:54:18 np0005473739 nova_compute[259550]: 2025-10-07 14:54:18.215 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:54:18 np0005473739 nova_compute[259550]: 2025-10-07 14:54:18.223 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:54:18 np0005473739 nova_compute[259550]: 2025-10-07 14:54:18.256 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:54:18 np0005473739 nova_compute[259550]: 2025-10-07 14:54:18.447 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:54:18 np0005473739 nova_compute[259550]: 2025-10-07 14:54:18.448 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:18 np0005473739 nova_compute[259550]: 2025-10-07 14:54:18.456 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2762: 305 pgs: 305 active+clean; 191 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.0 MiB/s wr, 90 op/s
Oct  7 10:54:19 np0005473739 nova_compute[259550]: 2025-10-07 14:54:19.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2763: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct  7 10:54:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:54:22
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'default.rgw.log', '.rgw.root', 'backups', 'images', 'default.rgw.meta', 'cephfs.cephfs.data', '.mgr', 'volumes', 'default.rgw.control', 'cephfs.cephfs.meta']
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:54:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2764: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct  7 10:54:23 np0005473739 podman[420171]: 2025-10-07 14:54:23.084521319 +0000 UTC m=+0.068477943 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:54:23 np0005473739 podman[420172]: 2025-10-07 14:54:23.084828267 +0000 UTC m=+0.068728609 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  7 10:54:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:54:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:54:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:54:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:54:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:54:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:54:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:54:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:54:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:54:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:54:23 np0005473739 nova_compute[259550]: 2025-10-07 14:54:23.444 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:54:23 np0005473739 nova_compute[259550]: 2025-10-07 14:54:23.458 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:23 np0005473739 nova_compute[259550]: 2025-10-07 14:54:23.484 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:54:23 np0005473739 nova_compute[259550]: 2025-10-07 14:54:23.484 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:54:23 np0005473739 nova_compute[259550]: 2025-10-07 14:54:23.485 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:54:23 np0005473739 nova_compute[259550]: 2025-10-07 14:54:23.822 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:54:23 np0005473739 nova_compute[259550]: 2025-10-07 14:54:23.823 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:54:23 np0005473739 nova_compute[259550]: 2025-10-07 14:54:23.823 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:54:23 np0005473739 nova_compute[259550]: 2025-10-07 14:54:23.823 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 5360ad21-c174-468a-9be2-d82d672f2911 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.304 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.305 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.305 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.305 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.306 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.307 2 INFO nova.compute.manager [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Terminating instance#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.308 2 DEBUG nova.compute.manager [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:54:24 np0005473739 kernel: tapba7ffd6d-6a (unregistering): left promiscuous mode
Oct  7 10:54:24 np0005473739 NetworkManager[44949]: <info>  [1759848864.4356] device (tapba7ffd6d-6a): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:24Z|01588|binding|INFO|Releasing lport ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 from this chassis (sb_readonly=0)
Oct  7 10:54:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:24Z|01589|binding|INFO|Setting lport ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 down in Southbound
Oct  7 10:54:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:24Z|01590|binding|INFO|Removing iface tapba7ffd6d-6a ovn-installed in OVS
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.448 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.453 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:8a:bd:28 10.100.0.8'], port_security=['fa:16:3e:8a:bd:28 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '0d0450f4-efd0-4508-8aca-28b5d858c397', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'fb064d27-a242-4db9-8bed-79adb9fb8cf9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8255c2c5-7755-43fa-85c6-c4d3fcdeecda, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.455 161536 INFO neutron.agent.ovn.metadata.agent [-] Port ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 in datapath b7b9d978-a319-4f4b-b8e1-4891fcd559d0 unbound from our chassis#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.456 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b7b9d978-a319-4f4b-b8e1-4891fcd559d0#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.461 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.478 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2ee11dfb-557c-4e86-bd3e-750248cb5401]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:24 np0005473739 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Deactivated successfully.
Oct  7 10:54:24 np0005473739 systemd[1]: machine-qemu\x2d179\x2dinstance\x2d00000091.scope: Consumed 14.369s CPU time.
Oct  7 10:54:24 np0005473739 systemd-machined[214580]: Machine qemu-179-instance-00000091 terminated.
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.511 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1a5ca538-de8b-437a-9418-68c1a29fe952]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.515 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9e2d32a9-52cc-4689-9e5e-a9a1500f3e0c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.549 2 INFO nova.virt.libvirt.driver [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Instance destroyed successfully.#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.549 2 DEBUG nova.objects.instance [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 0d0450f4-efd0-4508-8aca-28b5d858c397 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.550 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8a797fb3-5df8-44f4-b5a6-8257d05d6cf6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.565 2 DEBUG nova.virt.libvirt.vif [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:53:49Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-0-2089167577',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=145,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:54:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-rvfyeg7d',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:54:04Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=0d0450f4-efd0-4508-8aca-28b5d858c397,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.566 2 DEBUG nova.network.os_vif_util [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "address": "fa:16:3e:8a:bd:28", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapba7ffd6d-6a", "ovs_interfaceid": "ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.567 2 DEBUG nova.network.os_vif_util [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.567 2 DEBUG os_vif [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.569 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapba7ffd6d-6a, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.573 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.575 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3b790696-f45c-47fb-ad24-a1f5127718c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb7b9d978-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:ac:47:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 455], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938835, 'reachable_time': 44639, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420231, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.578 2 INFO os_vif [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:8a:bd:28,bridge_name='br-int',has_traffic_filtering=True,id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapba7ffd6d-6a')#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.591 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a10ff121-bf5a-490d-a8de-a9b1746c3ca8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapb7b9d978-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 938849, 'tstamp': 938849}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420232, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb7b9d978-a1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 938852, 'tstamp': 938852}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 420232, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.593 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7b9d978-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb7b9d978-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.596 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb7b9d978-a0, col_values=(('external_ids', {'iface-id': '844b358c-727e-4661-9ab8-5cf03a82eb82'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:54:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:24.597 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.604 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.701 2 DEBUG nova.compute.manager [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-unplugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.701 2 DEBUG oslo_concurrency.lockutils [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.701 2 DEBUG oslo_concurrency.lockutils [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.701 2 DEBUG oslo_concurrency.lockutils [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.702 2 DEBUG nova.compute.manager [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] No waiting events found dispatching network-vif-unplugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:54:24 np0005473739 nova_compute[259550]: 2025-10-07 14:54:24.702 2 DEBUG nova.compute.manager [req-773df6bb-ea4a-45bd-b218-281ce30a54ca req-b5aca8ad-a895-45d8-9300-7774eeb037ce 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-unplugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:54:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2765: 305 pgs: 305 active+clean; 200 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 2.1 MiB/s wr, 67 op/s
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.214 2 INFO nova.virt.libvirt.driver [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Deleting instance files /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397_del#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.215 2 INFO nova.virt.libvirt.driver [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Deletion of /var/lib/nova/instances/0d0450f4-efd0-4508-8aca-28b5d858c397_del complete#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.244 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.359 2 INFO nova.compute.manager [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Took 2.05 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.359 2 DEBUG oslo.service.loopingcall [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.360 2 DEBUG nova.compute.manager [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.360 2 DEBUG nova.network.neutron [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.362 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.362 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.363 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:54:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2766: 305 pgs: 305 active+clean; 165 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 383 KiB/s rd, 2.1 MiB/s wr, 72 op/s
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.825 2 DEBUG nova.compute.manager [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.825 2 DEBUG oslo_concurrency.lockutils [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.825 2 DEBUG oslo_concurrency.lockutils [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.825 2 DEBUG oslo_concurrency.lockutils [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.826 2 DEBUG nova.compute.manager [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] No waiting events found dispatching network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:54:26 np0005473739 nova_compute[259550]: 2025-10-07 14:54:26.826 2 WARNING nova.compute.manager [req-70d58a5f-d45f-4392-945f-06cea39bbe90 req-f260dc27-7551-44f1-bd7b-01a4326aba4f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received unexpected event network-vif-plugged-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:54:27 np0005473739 nova_compute[259550]: 2025-10-07 14:54:27.967 2 DEBUG nova.network.neutron [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:54:28 np0005473739 nova_compute[259550]: 2025-10-07 14:54:28.104 2 INFO nova.compute.manager [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Took 1.74 seconds to deallocate network for instance.#033[00m
Oct  7 10:54:28 np0005473739 nova_compute[259550]: 2025-10-07 14:54:28.114 2 DEBUG nova.compute.manager [req-545dab1c-c77f-4b6b-9ba8-6488f72cb71b req-75211747-bcf0-47e2-9112-a28122d53667 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Received event network-vif-deleted-ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:54:28 np0005473739 nova_compute[259550]: 2025-10-07 14:54:28.114 2 INFO nova.compute.manager [req-545dab1c-c77f-4b6b-9ba8-6488f72cb71b req-75211747-bcf0-47e2-9112-a28122d53667 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Neutron deleted interface ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:54:28 np0005473739 nova_compute[259550]: 2025-10-07 14:54:28.114 2 DEBUG nova.network.neutron [req-545dab1c-c77f-4b6b-9ba8-6488f72cb71b req-75211747-bcf0-47e2-9112-a28122d53667 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:54:28 np0005473739 nova_compute[259550]: 2025-10-07 14:54:28.339 2 DEBUG nova.compute.manager [req-545dab1c-c77f-4b6b-9ba8-6488f72cb71b req-75211747-bcf0-47e2-9112-a28122d53667 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Detach interface failed, port_id=ba7ffd6d-6a44-40c4-8555-4f7f3fe69e57, reason: Instance 0d0450f4-efd0-4508-8aca-28b5d858c397 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:54:28 np0005473739 nova_compute[259550]: 2025-10-07 14:54:28.457 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:28 np0005473739 nova_compute[259550]: 2025-10-07 14:54:28.458 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:28 np0005473739 nova_compute[259550]: 2025-10-07 14:54:28.521 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:28 np0005473739 nova_compute[259550]: 2025-10-07 14:54:28.568 2 DEBUG oslo_concurrency.processutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:54:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2767: 305 pgs: 305 active+clean; 121 MiB data, 1017 MiB used, 59 GiB / 60 GiB avail; 381 KiB/s rd, 1.7 MiB/s wr, 83 op/s
Oct  7 10:54:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:54:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2272273109' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:54:29 np0005473739 nova_compute[259550]: 2025-10-07 14:54:29.039 2 DEBUG oslo_concurrency.processutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:54:29 np0005473739 nova_compute[259550]: 2025-10-07 14:54:29.046 2 DEBUG nova.compute.provider_tree [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:54:29 np0005473739 nova_compute[259550]: 2025-10-07 14:54:29.162 2 DEBUG nova.scheduler.client.report [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:54:29 np0005473739 nova_compute[259550]: 2025-10-07 14:54:29.372 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.914s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:29 np0005473739 nova_compute[259550]: 2025-10-07 14:54:29.574 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:29 np0005473739 nova_compute[259550]: 2025-10-07 14:54:29.586 2 INFO nova.scheduler.client.report [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 0d0450f4-efd0-4508-8aca-28b5d858c397#033[00m
Oct  7 10:54:29 np0005473739 nova_compute[259550]: 2025-10-07 14:54:29.738 2 DEBUG oslo_concurrency.lockutils [None req-7054006c-2e29-4dd0-9f4a-d422e59b6189 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "0d0450f4-efd0-4508-8aca-28b5d858c397" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.433s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2768: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 172 KiB/s wr, 51 op/s
Oct  7 10:54:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.418 2 DEBUG nova.compute.manager [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.418 2 DEBUG nova.compute.manager [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing instance network info cache due to event network-changed-78d82e05-f97f-4cd4-aed0-b65c80e18ec5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.419 2 DEBUG oslo_concurrency.lockutils [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.419 2 DEBUG oslo_concurrency.lockutils [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.419 2 DEBUG nova.network.neutron [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Refreshing network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.561 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.561 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.561 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.562 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.562 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.563 2 INFO nova.compute.manager [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Terminating instance#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.564 2 DEBUG nova.compute.manager [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:54:32 np0005473739 kernel: tap78d82e05-f9 (unregistering): left promiscuous mode
Oct  7 10:54:32 np0005473739 NetworkManager[44949]: <info>  [1759848872.6310] device (tap78d82e05-f9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:54:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:32Z|01591|binding|INFO|Releasing lport 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 from this chassis (sb_readonly=0)
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:32Z|01592|binding|INFO|Setting lport 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 down in Southbound
Oct  7 10:54:32 np0005473739 ovn_controller[151684]: 2025-10-07T14:54:32Z|01593|binding|INFO|Removing iface tap78d82e05-f9 ovn-installed in OVS
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.665 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:32 np0005473739 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Deactivated successfully.
Oct  7 10:54:32 np0005473739 systemd[1]: machine-qemu\x2d178\x2dinstance\x2d00000090.scope: Consumed 16.091s CPU time.
Oct  7 10:54:32 np0005473739 systemd-machined[214580]: Machine qemu-178-instance-00000090 terminated.
Oct  7 10:54:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.757 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7f:b1:b3 10.100.0.6'], port_security=['fa:16:3e:7f:b1:b3 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '5360ad21-c174-468a-9be2-d82d672f2911', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7d1e85a4-dffc-45c3-a3d8-ec6178189662 fb064d27-a242-4db9-8bed-79adb9fb8cf9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8255c2c5-7755-43fa-85c6-c4d3fcdeecda, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=78d82e05-f97f-4cd4-aed0-b65c80e18ec5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:54:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.758 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5 in datapath b7b9d978-a319-4f4b-b8e1-4891fcd559d0 unbound from our chassis#033[00m
Oct  7 10:54:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.759 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b7b9d978-a319-4f4b-b8e1-4891fcd559d0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:54:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.760 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a8b4125-2048-4521-8ea5-f987883e1397]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.761 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0 namespace which is not needed anymore#033[00m
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2769: 305 pgs: 305 active+clean; 121 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 22 KiB/s wr, 29 op/s
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.807 2 INFO nova.virt.libvirt.driver [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Instance destroyed successfully.#033[00m
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.808 2 DEBUG nova.objects.instance [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 5360ad21-c174-468a-9be2-d82d672f2911 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007600997305788891 of space, bias 1.0, pg target 0.22802991917366675 quantized to 32 (current 32)
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:54:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.884 2 DEBUG nova.virt.libvirt.vif [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:53:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-945928091',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=144,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIvYEBw++0H23CPU/EQE9HWaLr7vi4X6+2UQNCQ0N5Tz2y4ny5PBn9cmCdgcJlcQrK39s79mDb9nDQdopbqGy2qQX711Rlve/xjg+NPmBBGEtxIddIA3GNH/TrG7DZF1Rw==',key_name='tempest-TestSecurityGroupsBasicOps-276938345',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:53:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-p3qp2bxs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:53:26Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5360ad21-c174-468a-9be2-d82d672f2911,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.885 2 DEBUG nova.network.os_vif_util [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.228", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.886 2 DEBUG nova.network.os_vif_util [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.887 2 DEBUG os_vif [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.891 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap78d82e05-f9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:54:32 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:32.909 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=54, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=53) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.940 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:54:32 np0005473739 nova_compute[259550]: 2025-10-07 14:54:32.942 2 INFO os_vif [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7f:b1:b3,bridge_name='br-int',has_traffic_filtering=True,id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5,network=Network(b7b9d978-a319-4f4b-b8e1-4891fcd559d0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap78d82e05-f9')#033[00m
Oct  7 10:54:32 np0005473739 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [NOTICE]   (418723) : haproxy version is 2.8.14-c23fe91
Oct  7 10:54:32 np0005473739 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [NOTICE]   (418723) : path to executable is /usr/sbin/haproxy
Oct  7 10:54:32 np0005473739 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [WARNING]  (418723) : Exiting Master process...
Oct  7 10:54:32 np0005473739 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [ALERT]    (418723) : Current worker (418725) exited with code 143 (Terminated)
Oct  7 10:54:32 np0005473739 neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0[418719]: [WARNING]  (418723) : All workers exited. Exiting... (0)
Oct  7 10:54:32 np0005473739 systemd[1]: libpod-b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047.scope: Deactivated successfully.
Oct  7 10:54:32 np0005473739 podman[420310]: 2025-10-07 14:54:32.98668017 +0000 UTC m=+0.127381212 container died b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:54:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047-userdata-shm.mount: Deactivated successfully.
Oct  7 10:54:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ced5dd1aff1b8cd3c8452ea0fb4c0158258e892a5ae20c9c27856c33d2c1bdf8-merged.mount: Deactivated successfully.
Oct  7 10:54:33 np0005473739 podman[420310]: 2025-10-07 14:54:33.210577055 +0000 UTC m=+0.351278087 container cleanup b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:54:33 np0005473739 systemd[1]: libpod-conmon-b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047.scope: Deactivated successfully.
Oct  7 10:54:33 np0005473739 nova_compute[259550]: 2025-10-07 14:54:33.248 2 DEBUG nova.compute.manager [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-unplugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:54:33 np0005473739 nova_compute[259550]: 2025-10-07 14:54:33.249 2 DEBUG oslo_concurrency.lockutils [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:33 np0005473739 nova_compute[259550]: 2025-10-07 14:54:33.249 2 DEBUG oslo_concurrency.lockutils [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:33 np0005473739 nova_compute[259550]: 2025-10-07 14:54:33.250 2 DEBUG oslo_concurrency.lockutils [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:33 np0005473739 nova_compute[259550]: 2025-10-07 14:54:33.250 2 DEBUG nova.compute.manager [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] No waiting events found dispatching network-vif-unplugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:54:33 np0005473739 nova_compute[259550]: 2025-10-07 14:54:33.250 2 DEBUG nova.compute.manager [req-ad172bb7-fbae-4c13-8701-20ec6629c2ad req-aa206efe-c1bf-4eec-af49-29193adc2c58 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-unplugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:54:33 np0005473739 podman[420355]: 2025-10-07 14:54:33.344142021 +0000 UTC m=+0.109096239 container remove b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:54:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.352 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1a52b9-c873-44f6-9014-2a11d0a46c96]: (4, ('Tue Oct  7 02:54:32 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0 (b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047)\nb699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047\nTue Oct  7 02:54:33 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0 (b699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047)\nb699b848157b0954413b7b2fb218f9d7ac8b4a6ed3b999bea271ca7125610047\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.355 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ff48cae1-045e-4e39-8922-565b1897bfb7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.357 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb7b9d978-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:54:33 np0005473739 nova_compute[259550]: 2025-10-07 14:54:33.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:33 np0005473739 kernel: tapb7b9d978-a0: left promiscuous mode
Oct  7 10:54:33 np0005473739 nova_compute[259550]: 2025-10-07 14:54:33.377 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.380 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5986fb-4a51-4c2b-8e61-97f19370929f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.408 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[539ec8b5-5b67-4ba8-aca2-67808e2ab5ca]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.409 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[48672143-568c-4617-a358-9959d4a28cca]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.427 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c9427f58-47c6-4549-a2dd-67052d883f14]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 938827, 'reachable_time': 30503, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 420371, 'error': None, 'target': 'ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:33 np0005473739 systemd[1]: run-netns-ovnmeta\x2db7b9d978\x2da319\x2d4f4b\x2db8e1\x2d4891fcd559d0.mount: Deactivated successfully.
Oct  7 10:54:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.429 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b7b9d978-a319-4f4b-b8e1-4891fcd559d0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:54:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.429 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5d622e86-c635-46fd-aaf5-0b902689239b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:54:33 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:33.431 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:54:33 np0005473739 nova_compute[259550]: 2025-10-07 14:54:33.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:34 np0005473739 nova_compute[259550]: 2025-10-07 14:54:34.094 2 DEBUG nova.network.neutron [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updated VIF entry in instance network info cache for port 78d82e05-f97f-4cd4-aed0-b65c80e18ec5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:54:34 np0005473739 nova_compute[259550]: 2025-10-07 14:54:34.094 2 DEBUG nova.network.neutron [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [{"id": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "address": "fa:16:3e:7f:b1:b3", "network": {"id": "b7b9d978-a319-4f4b-b8e1-4891fcd559d0", "bridge": "br-int", "label": "tempest-network-smoke--2077534412", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap78d82e05-f9", "ovs_interfaceid": "78d82e05-f97f-4cd4-aed0-b65c80e18ec5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:54:34 np0005473739 nova_compute[259550]: 2025-10-07 14:54:34.285 2 DEBUG oslo_concurrency.lockutils [req-378d4042-a469-493e-acd7-deba626eb9fd req-534441a3-2a65-4acb-816d-bd883dfa7344 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5360ad21-c174-468a-9be2-d82d672f2911" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:54:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2770: 305 pgs: 305 active+clean; 113 MiB data, 1007 MiB used, 59 GiB / 60 GiB avail; 27 KiB/s rd, 22 KiB/s wr, 39 op/s
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.111 2 INFO nova.virt.libvirt.driver [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Deleting instance files /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911_del#033[00m
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.111 2 INFO nova.virt.libvirt.driver [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Deletion of /var/lib/nova/instances/5360ad21-c174-468a-9be2-d82d672f2911_del complete#033[00m
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.240 2 INFO nova.compute.manager [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Took 2.68 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.241 2 DEBUG oslo.service.loopingcall [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.243 2 DEBUG nova.compute.manager [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.243 2 DEBUG nova.network.neutron [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:54:35 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:54:35.433 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '54'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.913 2 DEBUG nova.compute.manager [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.914 2 DEBUG oslo_concurrency.lockutils [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5360ad21-c174-468a-9be2-d82d672f2911-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.914 2 DEBUG oslo_concurrency.lockutils [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.914 2 DEBUG oslo_concurrency.lockutils [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.915 2 DEBUG nova.compute.manager [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] No waiting events found dispatching network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:54:35 np0005473739 nova_compute[259550]: 2025-10-07 14:54:35.915 2 WARNING nova.compute.manager [req-8198aaeb-f273-43db-9ba4-0130bcacd92a req-10fad4e9-efe6-4a75-afa8-1f421b105ab9 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received unexpected event network-vif-plugged-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:54:36 np0005473739 podman[420375]: 2025-10-07 14:54:36.067045672 +0000 UTC m=+0.058596922 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  7 10:54:36 np0005473739 podman[420376]: 2025-10-07 14:54:36.099983134 +0000 UTC m=+0.091478441 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Oct  7 10:54:36 np0005473739 nova_compute[259550]: 2025-10-07 14:54:36.330 2 DEBUG nova.network.neutron [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:54:36 np0005473739 nova_compute[259550]: 2025-10-07 14:54:36.369 2 DEBUG nova.compute.manager [req-f91a80c1-c8e3-42e7-9502-4caca56a3bd4 req-9eec0fc1-f03f-4816-acd8-2991b9a6fd72 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Received event network-vif-deleted-78d82e05-f97f-4cd4-aed0-b65c80e18ec5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:54:36 np0005473739 nova_compute[259550]: 2025-10-07 14:54:36.370 2 INFO nova.compute.manager [req-f91a80c1-c8e3-42e7-9502-4caca56a3bd4 req-9eec0fc1-f03f-4816-acd8-2991b9a6fd72 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Neutron deleted interface 78d82e05-f97f-4cd4-aed0-b65c80e18ec5; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:54:36 np0005473739 nova_compute[259550]: 2025-10-07 14:54:36.370 2 DEBUG nova.network.neutron [req-f91a80c1-c8e3-42e7-9502-4caca56a3bd4 req-9eec0fc1-f03f-4816-acd8-2991b9a6fd72 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:54:36 np0005473739 nova_compute[259550]: 2025-10-07 14:54:36.392 2 INFO nova.compute.manager [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Took 1.15 seconds to deallocate network for instance.#033[00m
Oct  7 10:54:36 np0005473739 nova_compute[259550]: 2025-10-07 14:54:36.461 2 DEBUG nova.compute.manager [req-f91a80c1-c8e3-42e7-9502-4caca56a3bd4 req-9eec0fc1-f03f-4816-acd8-2991b9a6fd72 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Detach interface failed, port_id=78d82e05-f97f-4cd4-aed0-b65c80e18ec5, reason: Instance 5360ad21-c174-468a-9be2-d82d672f2911 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:54:36 np0005473739 nova_compute[259550]: 2025-10-07 14:54:36.697 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:54:36 np0005473739 nova_compute[259550]: 2025-10-07 14:54:36.698 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:54:36 np0005473739 nova_compute[259550]: 2025-10-07 14:54:36.754 2 DEBUG oslo_concurrency.processutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:54:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2771: 305 pgs: 305 active+clean; 88 MiB data, 991 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 22 KiB/s wr, 49 op/s
Oct  7 10:54:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:54:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3549538094' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:54:37 np0005473739 nova_compute[259550]: 2025-10-07 14:54:37.224 2 DEBUG oslo_concurrency.processutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.470s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:54:37 np0005473739 nova_compute[259550]: 2025-10-07 14:54:37.232 2 DEBUG nova.compute.provider_tree [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:54:37 np0005473739 nova_compute[259550]: 2025-10-07 14:54:37.283 2 DEBUG nova.scheduler.client.report [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:54:37 np0005473739 nova_compute[259550]: 2025-10-07 14:54:37.366 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.668s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:37 np0005473739 nova_compute[259550]: 2025-10-07 14:54:37.434 2 INFO nova.scheduler.client.report [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 5360ad21-c174-468a-9be2-d82d672f2911#033[00m
Oct  7 10:54:37 np0005473739 nova_compute[259550]: 2025-10-07 14:54:37.806 2 DEBUG oslo_concurrency.lockutils [None req-9015fcb4-2767-48c3-b278-68a6b94c0f95 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5360ad21-c174-468a-9be2-d82d672f2911" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:54:37 np0005473739 nova_compute[259550]: 2025-10-07 14:54:37.937 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:38 np0005473739 nova_compute[259550]: 2025-10-07 14:54:38.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2772: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 11 KiB/s wr, 51 op/s
Oct  7 10:54:39 np0005473739 nova_compute[259550]: 2025-10-07 14:54:39.547 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848864.5441399, 0d0450f4-efd0-4508-8aca-28b5d858c397 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:54:39 np0005473739 nova_compute[259550]: 2025-10-07 14:54:39.548 2 INFO nova.compute.manager [-] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:54:39 np0005473739 nova_compute[259550]: 2025-10-07 14:54:39.581 2 DEBUG nova.compute.manager [None req-b81f6eb5-4aa7-45e3-ba4d-c1aaa61e0060 - - - - - -] [instance: 0d0450f4-efd0-4508-8aca-28b5d858c397] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:54:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2773: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 28 op/s
Oct  7 10:54:41 np0005473739 nova_compute[259550]: 2025-10-07 14:54:41.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:41 np0005473739 nova_compute[259550]: 2025-10-07 14:54:41.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2774: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:54:42 np0005473739 nova_compute[259550]: 2025-10-07 14:54:42.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:43 np0005473739 nova_compute[259550]: 2025-10-07 14:54:43.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2775: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:54:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2776: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 938 B/s wr, 17 op/s
Oct  7 10:54:47 np0005473739 nova_compute[259550]: 2025-10-07 14:54:47.807 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848872.805726, 5360ad21-c174-468a-9be2-d82d672f2911 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:54:47 np0005473739 nova_compute[259550]: 2025-10-07 14:54:47.808 2 INFO nova.compute.manager [-] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:54:47 np0005473739 nova_compute[259550]: 2025-10-07 14:54:47.925 2 DEBUG nova.compute.manager [None req-6790ce36-407e-4acc-810d-3cbb694b78c9 - - - - - -] [instance: 5360ad21-c174-468a-9be2-d82d672f2911] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:54:47 np0005473739 nova_compute[259550]: 2025-10-07 14:54:47.943 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:48 np0005473739 nova_compute[259550]: 2025-10-07 14:54:48.531 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2777: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail; 3.5 KiB/s rd, 938 B/s wr, 7 op/s
Oct  7 10:54:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2778: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:54:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:54:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:54:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2779: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:54:52 np0005473739 nova_compute[259550]: 2025-10-07 14:54:52.946 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:53 np0005473739 nova_compute[259550]: 2025-10-07 14:54:53.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:54 np0005473739 podman[420441]: 2025-10-07 14:54:54.071099097 +0000 UTC m=+0.061009886 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:54:54 np0005473739 podman[420440]: 2025-10-07 14:54:54.075118314 +0000 UTC m=+0.065627948 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:54:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2780: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:54:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2781: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:54:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:54:57 np0005473739 nova_compute[259550]: 2025-10-07 14:54:57.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:58 np0005473739 nova_compute[259550]: 2025-10-07 14:54:58.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:54:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2782: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:55:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:00.091 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:00.091 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:00.091 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2783: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:55:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:01 np0005473739 nova_compute[259550]: 2025-10-07 14:55:01.921 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:01 np0005473739 nova_compute[259550]: 2025-10-07 14:55:01.921 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:02 np0005473739 nova_compute[259550]: 2025-10-07 14:55:02.064 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:55:02 np0005473739 nova_compute[259550]: 2025-10-07 14:55:02.200 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:02 np0005473739 nova_compute[259550]: 2025-10-07 14:55:02.201 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:02 np0005473739 nova_compute[259550]: 2025-10-07 14:55:02.211 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:55:02 np0005473739 nova_compute[259550]: 2025-10-07 14:55:02.212 2 INFO nova.compute.claims [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:55:02 np0005473739 nova_compute[259550]: 2025-10-07 14:55:02.501 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:55:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2784: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:55:02 np0005473739 nova_compute[259550]: 2025-10-07 14:55:02.950 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:55:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2902939224' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:55:02 np0005473739 nova_compute[259550]: 2025-10-07 14:55:02.975 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:55:02 np0005473739 nova_compute[259550]: 2025-10-07 14:55:02.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:55:02 np0005473739 nova_compute[259550]: 2025-10-07 14:55:02.984 2 DEBUG nova.compute.provider_tree [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.011 2 DEBUG nova.scheduler.client.report [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.174 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.973s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.175 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.302 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.302 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.326 2 INFO nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.344 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.436 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.437 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.437 2 INFO nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Creating image(s)#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.462 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.492 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.516 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.520 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.560 2 DEBUG nova.policy [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.596 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.597 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.598 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.598 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.621 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:55:03 np0005473739 nova_compute[259550]: 2025-10-07 14:55:03.625 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:55:04 np0005473739 nova_compute[259550]: 2025-10-07 14:55:04.495 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Successfully created port: 0f10619b-f7b9-4cfb-a093-06e296ce6a63 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:55:04 np0005473739 nova_compute[259550]: 2025-10-07 14:55:04.579 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.955s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:55:04 np0005473739 nova_compute[259550]: 2025-10-07 14:55:04.649 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:55:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2785: 305 pgs: 305 active+clean; 41 MiB data, 961 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.121 2 DEBUG nova.objects.instance [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid b49c6d60-6c44-4afd-8dbc-521e4c1c31ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.274 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.275 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Ensure instance console log exists: /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.276 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.277 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.277 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.589 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Successfully updated port: 0f10619b-f7b9-4cfb-a093-06e296ce6a63 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.724 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.724 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.724 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.735 2 DEBUG nova.compute.manager [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.735 2 DEBUG nova.compute.manager [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing instance network info cache due to event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.736 2 DEBUG oslo_concurrency.lockutils [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:55:05 np0005473739 nova_compute[259550]: 2025-10-07 14:55:05.909 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:55:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2786: 305 pgs: 305 active+clean; 68 MiB data, 975 MiB used, 59 GiB / 60 GiB avail; 170 B/s rd, 1.1 MiB/s wr, 1 op/s
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.820 2 DEBUG nova.network.neutron [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updating instance_info_cache with network_info: [{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.919 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.920 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Instance network_info: |[{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.920 2 DEBUG oslo_concurrency.lockutils [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.920 2 DEBUG nova.network.neutron [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.923 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Start _get_guest_xml network_info=[{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.927 2 WARNING nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.932 2 DEBUG nova.virt.libvirt.host [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.933 2 DEBUG nova.virt.libvirt.host [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.936 2 DEBUG nova.virt.libvirt.host [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.937 2 DEBUG nova.virt.libvirt.host [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.938 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.938 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.938 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.939 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.940 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.940 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.940 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.940 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.941 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.941 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.941 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.941 2 DEBUG nova.virt.hardware [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.944 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:55:06 np0005473739 nova_compute[259550]: 2025-10-07 14:55:06.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:55:07 np0005473739 podman[420668]: 2025-10-07 14:55:07.06985089 +0000 UTC m=+0.054117944 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:55:07 np0005473739 podman[420669]: 2025-10-07 14:55:07.106951441 +0000 UTC m=+0.092651033 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  7 10:55:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:55:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2642530535' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.426 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.482s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.451 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.455 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:55:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:55:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1521135202' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.901 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.904 2 DEBUG nova.virt.libvirt.vif [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:55:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=146,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBECPh1pByCCitV1nn5SXhUBD70Y3hmlmSyk96xTNnlcIfpBRz1+IZ9Aq3nwoGJZoH4mSlndssl/oL5+TBJGoYqXuFU3nK+musxCwWhV9dKumK0LA7mZMLKLnViO0sCqyFA==',key_name='tempest-TestSecurityGroupsBasicOps-911554559',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-5q94rep4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:55:03Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b49c6d60-6c44-4afd-8dbc-521e4c1c31ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.904 2 DEBUG nova.network.os_vif_util [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.905 2 DEBUG nova.network.os_vif_util [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.906 2 DEBUG nova.objects.instance [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid b49c6d60-6c44-4afd-8dbc-521e4c1c31ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.952 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  <uuid>b49c6d60-6c44-4afd-8dbc-521e4c1c31ac</uuid>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  <name>instance-00000092</name>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502</nova:name>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:55:06</nova:creationTime>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <nova:port uuid="0f10619b-f7b9-4cfb-a093-06e296ce6a63">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <entry name="serial">b49c6d60-6c44-4afd-8dbc-521e4c1c31ac</entry>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <entry name="uuid">b49c6d60-6c44-4afd-8dbc-521e4c1c31ac</entry>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:e0:c5:45"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <target dev="tap0f10619b-f7"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/console.log" append="off"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:55:07 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:55:07 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:55:07 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:55:07 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.954 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Preparing to wait for external event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.954 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.954 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.954 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.955 2 DEBUG nova.virt.libvirt.vif [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:55:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=146,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBECPh1pByCCitV1nn5SXhUBD70Y3hmlmSyk96xTNnlcIfpBRz1+IZ9Aq3nwoGJZoH4mSlndssl/oL5+TBJGoYqXuFU3nK+musxCwWhV9dKumK0LA7mZMLKLnViO0sCqyFA==',key_name='tempest-TestSecurityGroupsBasicOps-911554559',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-5q94rep4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:55:03Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b49c6d60-6c44-4afd-8dbc-521e4c1c31ac,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.955 2 DEBUG nova.network.os_vif_util [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.956 2 DEBUG nova.network.os_vif_util [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.956 2 DEBUG os_vif [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.957 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.958 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.958 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.959 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.962 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.963 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap0f10619b-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.963 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap0f10619b-f7, col_values=(('external_ids', {'iface-id': '0f10619b-f7b9-4cfb-a093-06e296ce6a63', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:e0:c5:45', 'vm-uuid': 'b49c6d60-6c44-4afd-8dbc-521e4c1c31ac'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.965 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:07 np0005473739 NetworkManager[44949]: <info>  [1759848907.9659] manager: (tap0f10619b-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/641)
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.971 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:07 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.973 2 INFO os_vif [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7')#033[00m
Oct  7 10:55:07 np0005473739 nova_compute[259550]: 2025-10-07 14:55:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:55:07 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 10:55:08 np0005473739 nova_compute[259550]: 2025-10-07 14:55:08.062 2 DEBUG nova.network.neutron [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updated VIF entry in instance network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:55:08 np0005473739 nova_compute[259550]: 2025-10-07 14:55:08.063 2 DEBUG nova.network.neutron [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updating instance_info_cache with network_info: [{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:55:08 np0005473739 nova_compute[259550]: 2025-10-07 14:55:08.147 2 DEBUG oslo_concurrency.lockutils [req-892d9197-7c1a-4ba1-b552-6e81e08b961f req-d432e146-d30e-489c-aa63-8d33caa92b97 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:55:08 np0005473739 nova_compute[259550]: 2025-10-07 14:55:08.152 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:55:08 np0005473739 nova_compute[259550]: 2025-10-07 14:55:08.152 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:55:08 np0005473739 nova_compute[259550]: 2025-10-07 14:55:08.152 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:e0:c5:45, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:55:08 np0005473739 nova_compute[259550]: 2025-10-07 14:55:08.153 2 INFO nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Using config drive#033[00m
Oct  7 10:55:08 np0005473739 nova_compute[259550]: 2025-10-07 14:55:08.176 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:55:08 np0005473739 nova_compute[259550]: 2025-10-07 14:55:08.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2787: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:55:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:55:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:55:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:55:09 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.212 2 INFO nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Creating config drive at /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config#033[00m
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.219 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2hbmpblw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.380 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp2hbmpblw" returned: 0 in 0.161s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.403 2 DEBUG nova.storage.rbd_utils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.406 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.583 2 DEBUG oslo_concurrency.processutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.177s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.585 2 INFO nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Deleting local config drive /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac/disk.config because it was imported into RBD.#033[00m
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:55:10 np0005473739 kernel: tap0f10619b-f7: entered promiscuous mode
Oct  7 10:55:10 np0005473739 NetworkManager[44949]: <info>  [1759848910.6787] manager: (tap0f10619b-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/642)
Oct  7 10:55:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:10Z|01594|binding|INFO|Claiming lport 0f10619b-f7b9-4cfb-a093-06e296ce6a63 for this chassis.
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:55:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:10Z|01595|binding|INFO|0f10619b-f7b9-4cfb-a093-06e296ce6a63: Claiming fa:16:3e:e0:c5:45 10.100.0.6
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.733 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:c5:45 10.100.0.6'], port_security=['fa:16:3e:e0:c5:45 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b49c6d60-6c44-4afd-8dbc-521e4c1c31ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7162764-2824-4b84-93f3-e59b2b2d0365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': '1fd27387-7887-47bf-8a95-e87828fb4a63 9aa4f30d-9ce8-4452-a71d-172dedad2762', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c238a0b0-d3a1-4d4e-9703-74187cbb70ac, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0f10619b-f7b9-4cfb-a093-06e296ce6a63) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.734 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0f10619b-f7b9-4cfb-a093-06e296ce6a63 in datapath d7162764-2824-4b84-93f3-e59b2b2d0365 bound to our chassis#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.735 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network d7162764-2824-4b84-93f3-e59b2b2d0365#033[00m
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:55:10 np0005473739 systemd-udevd[421097]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:55:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9d87e256-a54a-413d-85b5-9396f7b09200 does not exist
Oct  7 10:55:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8d6c3a39-01b8-4eab-8b75-27c71f89d804 does not exist
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.748 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[6a2a29c1-edaa-4f01-bda4-95b649b44f44]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.750 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapd7162764-21 in ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:55:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7e26ee2b-a2f2-41c7-9e2c-aa6fa318f8f7 does not exist
Oct  7 10:55:10 np0005473739 systemd-machined[214580]: New machine qemu-180-instance-00000092.
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.752 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapd7162764-20 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.753 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[087bc853-f573-4ca8-adb5-899a1b73f853]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.754 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[97112972-756b-46f8-8648-0012c666803d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:55:10 np0005473739 NetworkManager[44949]: <info>  [1759848910.7650] device (tap0f10619b-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:55:10 np0005473739 NetworkManager[44949]: <info>  [1759848910.7663] device (tap0f10619b-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.766 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3b00a56a-0e75-4988-b617-c65f08a0cd73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:55:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:55:10 np0005473739 systemd[1]: Started Virtual Machine qemu-180-instance-00000092.
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.788 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[807673f2-3277-4220-b5bb-40e8f42e5fe9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:10Z|01596|binding|INFO|Setting lport 0f10619b-f7b9-4cfb-a093-06e296ce6a63 ovn-installed in OVS
Oct  7 10:55:10 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:10Z|01597|binding|INFO|Setting lport 0f10619b-f7b9-4cfb-a093-06e296ce6a63 up in Southbound
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2788: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.827 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[4059cc30-431a-4f5d-89ca-06cdf703878b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 systemd-udevd[421101]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:55:10 np0005473739 NetworkManager[44949]: <info>  [1759848910.8341] manager: (tapd7162764-20): new Veth device (/org/freedesktop/NetworkManager/Devices/643)
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.833 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8700d2e6-7b17-471d-bb56-2442098b0494]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.871 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f314d3b9-9739-436b-8802-16d6029c992a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.877 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[15ddeb0b-e17c-4ad9-9e6f-4b3bc84dea35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 NetworkManager[44949]: <info>  [1759848910.9230] device (tapd7162764-20): carrier: link connected
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.929 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6d4fe259-8a3c-49b2-965e-a0272406abe9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.955 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[54c3370e-be8b-4a0b-ae76-8eb581016a28]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7162764-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:68:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 949448, 'reachable_time': 31641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 421180, 'error': None, 'target': 'ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.975 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9a6b2af2-7ea2-4143-b6ab-8a9733ddbf37]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe02:68d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 949448, 'tstamp': 949448}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 421187, 'error': None, 'target': 'ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:55:10 np0005473739 nova_compute[259550]: 2025-10-07 14:55:10.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:55:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:10.995 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[84846f98-e8dc-4ce6-98a4-561e60436e15]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapd7162764-21'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:02:68:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 460], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 949448, 'reachable_time': 31641, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 421203, 'error': None, 'target': 'ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.034 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a2fcbe32-ae6e-4320-9549-672f04a76867]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.100 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[adfe4a55-1ea9-482d-b441-87d6eee5ffcb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.101 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7162764-20, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.102 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.102 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd7162764-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:55:11 np0005473739 NetworkManager[44949]: <info>  [1759848911.1057] manager: (tapd7162764-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/644)
Oct  7 10:55:11 np0005473739 kernel: tapd7162764-20: entered promiscuous mode
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.108 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapd7162764-20, col_values=(('external_ids', {'iface-id': 'b1fef780-f55f-4980-93e7-c37cceea4385'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:55:11 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:11Z|01598|binding|INFO|Releasing lport b1fef780-f55f-4980-93e7-c37cceea4385 from this chassis (sb_readonly=0)
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.132 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/d7162764-2824-4b84-93f3-e59b2b2d0365.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/d7162764-2824-4b84-93f3-e59b2b2d0365.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.133 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[546e3c02-329d-4046-9289-0d1d52a75257]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.134 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-d7162764-2824-4b84-93f3-e59b2b2d0365
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/d7162764-2824-4b84-93f3-e59b2b2d0365.pid.haproxy
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID d7162764-2824-4b84-93f3-e59b2b2d0365
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:55:11 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:11.136 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365', 'env', 'PROCESS_TAG=haproxy-d7162764-2824-4b84-93f3-e59b2b2d0365', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/d7162764-2824-4b84-93f3-e59b2b2d0365.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:55:11 np0005473739 podman[421321]: 2025-10-07 14:55:11.45337468 +0000 UTC m=+0.054583846 container create ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.484 2 DEBUG nova.compute.manager [req-b75f3c15-89c1-402d-ae6e-f7892906d98f req-a378aab2-4b0a-429c-9362-082454f0a22e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.487 2 DEBUG oslo_concurrency.lockutils [req-b75f3c15-89c1-402d-ae6e-f7892906d98f req-a378aab2-4b0a-429c-9362-082454f0a22e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.487 2 DEBUG oslo_concurrency.lockutils [req-b75f3c15-89c1-402d-ae6e-f7892906d98f req-a378aab2-4b0a-429c-9362-082454f0a22e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.488 2 DEBUG oslo_concurrency.lockutils [req-b75f3c15-89c1-402d-ae6e-f7892906d98f req-a378aab2-4b0a-429c-9362-082454f0a22e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.488 2 DEBUG nova.compute.manager [req-b75f3c15-89c1-402d-ae6e-f7892906d98f req-a378aab2-4b0a-429c-9362-082454f0a22e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Processing event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:55:11 np0005473739 systemd[1]: Started libpod-conmon-ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467.scope.
Oct  7 10:55:11 np0005473739 podman[421321]: 2025-10-07 14:55:11.432289711 +0000 UTC m=+0.033498917 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:55:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:55:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 10:55:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:55:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:55:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:55:11 np0005473739 podman[421321]: 2025-10-07 14:55:11.559325133 +0000 UTC m=+0.160534349 container init ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:55:11 np0005473739 podman[421321]: 2025-10-07 14:55:11.567234092 +0000 UTC m=+0.168443278 container start ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default)
Oct  7 10:55:11 np0005473739 podman[421357]: 2025-10-07 14:55:11.569431051 +0000 UTC m=+0.068649028 container create 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:55:11 np0005473739 inspiring_yonath[421364]: 167 167
Oct  7 10:55:11 np0005473739 podman[421321]: 2025-10-07 14:55:11.575385998 +0000 UTC m=+0.176595184 container attach ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 10:55:11 np0005473739 systemd[1]: libpod-ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467.scope: Deactivated successfully.
Oct  7 10:55:11 np0005473739 podman[421321]: 2025-10-07 14:55:11.581777917 +0000 UTC m=+0.182987093 container died ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 10:55:11 np0005473739 systemd[1]: Started libpod-conmon-14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275.scope.
Oct  7 10:55:11 np0005473739 podman[421357]: 2025-10-07 14:55:11.534397564 +0000 UTC m=+0.033615561 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:55:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7a3a6c09898b294b675b805c74e6ee2d718cbf7403086fb24883c8713ed47c8e-merged.mount: Deactivated successfully.
Oct  7 10:55:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:55:11 np0005473739 podman[421321]: 2025-10-07 14:55:11.653383943 +0000 UTC m=+0.254593129 container remove ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_yonath, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:55:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/980dd10139229014c16cb6749bcc4d9b0aa60136ed3872e78879cb5d137c8d02/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:11 np0005473739 systemd[1]: libpod-conmon-ce315dc818ff4a6324f0917c7f97fd4429063e0a41461cbcb6cd465f10911467.scope: Deactivated successfully.
Oct  7 10:55:11 np0005473739 podman[421357]: 2025-10-07 14:55:11.671633706 +0000 UTC m=+0.170851713 container init 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS)
Oct  7 10:55:11 np0005473739 podman[421357]: 2025-10-07 14:55:11.678265001 +0000 UTC m=+0.177482988 container start 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:55:11 np0005473739 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [NOTICE]   (421396) : New worker (421398) forked
Oct  7 10:55:11 np0005473739 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [NOTICE]   (421396) : Loading success.
Oct  7 10:55:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:11 np0005473739 podman[421412]: 2025-10-07 14:55:11.83881681 +0000 UTC m=+0.044918400 container create 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.846 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.848 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848911.8475258, b49c6d60-6c44-4afd-8dbc-521e4c1c31ac => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.848 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] VM Started (Lifecycle Event)#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.851 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.855 2 INFO nova.virt.libvirt.driver [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Instance spawned successfully.#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.856 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:55:11 np0005473739 systemd[1]: Started libpod-conmon-0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276.scope.
Oct  7 10:55:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:55:11 np0005473739 podman[421412]: 2025-10-07 14:55:11.819741215 +0000 UTC m=+0.025842825 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:55:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:11 np0005473739 podman[421412]: 2025-10-07 14:55:11.933830244 +0000 UTC m=+0.139931854 container init 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.939 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.945 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:55:11 np0005473739 podman[421412]: 2025-10-07 14:55:11.946382937 +0000 UTC m=+0.152484527 container start 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.950 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.951 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:55:11 np0005473739 podman[421412]: 2025-10-07 14:55:11.951358228 +0000 UTC m=+0.157459818 container attach 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.951 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.952 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.953 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.953 2 DEBUG nova.virt.libvirt.driver [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.991 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.992 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848911.8476467, b49c6d60-6c44-4afd-8dbc-521e4c1c31ac => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:55:11 np0005473739 nova_compute[259550]: 2025-10-07 14:55:11.992 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:55:12 np0005473739 nova_compute[259550]: 2025-10-07 14:55:12.050 2 INFO nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Took 8.61 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:55:12 np0005473739 nova_compute[259550]: 2025-10-07 14:55:12.050 2 DEBUG nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:55:12 np0005473739 nova_compute[259550]: 2025-10-07 14:55:12.058 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:55:12 np0005473739 nova_compute[259550]: 2025-10-07 14:55:12.062 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848911.8507066, b49c6d60-6c44-4afd-8dbc-521e4c1c31ac => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:55:12 np0005473739 nova_compute[259550]: 2025-10-07 14:55:12.062 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:55:12 np0005473739 nova_compute[259550]: 2025-10-07 14:55:12.110 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:55:12 np0005473739 nova_compute[259550]: 2025-10-07 14:55:12.114 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:55:12 np0005473739 nova_compute[259550]: 2025-10-07 14:55:12.228 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:55:12 np0005473739 nova_compute[259550]: 2025-10-07 14:55:12.287 2 INFO nova.compute.manager [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Took 10.12 seconds to build instance.#033[00m
Oct  7 10:55:12 np0005473739 nova_compute[259550]: 2025-10-07 14:55:12.348 2 DEBUG oslo_concurrency.lockutils [None req-bf6c3819-935d-42d8-ab23-87576e4d2bcc 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.427s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2789: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:55:12 np0005473739 nova_compute[259550]: 2025-10-07 14:55:12.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:55:13 np0005473739 nova_compute[259550]: 2025-10-07 14:55:13.009 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:13 np0005473739 gifted_jepsen[421429]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:55:13 np0005473739 gifted_jepsen[421429]: --> relative data size: 1.0
Oct  7 10:55:13 np0005473739 gifted_jepsen[421429]: --> All data devices are unavailable
Oct  7 10:55:13 np0005473739 systemd[1]: libpod-0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276.scope: Deactivated successfully.
Oct  7 10:55:13 np0005473739 systemd[1]: libpod-0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276.scope: Consumed 1.058s CPU time.
Oct  7 10:55:13 np0005473739 conmon[421429]: conmon 0cdda4a018b220ce555c <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276.scope/container/memory.events
Oct  7 10:55:13 np0005473739 podman[421412]: 2025-10-07 14:55:13.134577762 +0000 UTC m=+1.340679382 container died 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 10:55:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7ffd2ef8981cf299fb82c8efa0b726fff752d840e7d7ec05292c502398879269-merged.mount: Deactivated successfully.
Oct  7 10:55:13 np0005473739 podman[421412]: 2025-10-07 14:55:13.199198103 +0000 UTC m=+1.405299693 container remove 0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_jepsen, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 10:55:13 np0005473739 systemd[1]: libpod-conmon-0cdda4a018b220ce555c54d680b1285178198819a7d9a97a5b589cf025b7e276.scope: Deactivated successfully.
Oct  7 10:55:13 np0005473739 nova_compute[259550]: 2025-10-07 14:55:13.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:13 np0005473739 nova_compute[259550]: 2025-10-07 14:55:13.618 2 DEBUG nova.compute.manager [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:55:13 np0005473739 nova_compute[259550]: 2025-10-07 14:55:13.619 2 DEBUG oslo_concurrency.lockutils [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:13 np0005473739 nova_compute[259550]: 2025-10-07 14:55:13.620 2 DEBUG oslo_concurrency.lockutils [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:13 np0005473739 nova_compute[259550]: 2025-10-07 14:55:13.620 2 DEBUG oslo_concurrency.lockutils [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:13 np0005473739 nova_compute[259550]: 2025-10-07 14:55:13.620 2 DEBUG nova.compute.manager [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] No waiting events found dispatching network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:55:13 np0005473739 nova_compute[259550]: 2025-10-07 14:55:13.621 2 WARNING nova.compute.manager [req-1fd54054-cccc-4b4b-bfff-84f66bf7e6aa req-75fce8c5-1c0d-49e5-96f1-e4c59d08fa4c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received unexpected event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:55:13 np0005473739 podman[421609]: 2025-10-07 14:55:13.799958592 +0000 UTC m=+0.050824627 container create a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:55:13 np0005473739 systemd[1]: Started libpod-conmon-a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6.scope.
Oct  7 10:55:13 np0005473739 podman[421609]: 2025-10-07 14:55:13.777366934 +0000 UTC m=+0.028232989 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:55:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:55:13 np0005473739 podman[421609]: 2025-10-07 14:55:13.913176628 +0000 UTC m=+0.164042853 container init a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default)
Oct  7 10:55:13 np0005473739 podman[421609]: 2025-10-07 14:55:13.920571264 +0000 UTC m=+0.171437299 container start a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 10:55:13 np0005473739 serene_johnson[421626]: 167 167
Oct  7 10:55:13 np0005473739 systemd[1]: libpod-a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6.scope: Deactivated successfully.
Oct  7 10:55:13 np0005473739 podman[421609]: 2025-10-07 14:55:13.969322394 +0000 UTC m=+0.220188429 container attach a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:55:13 np0005473739 podman[421609]: 2025-10-07 14:55:13.969766285 +0000 UTC m=+0.220632320 container died a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:55:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c020eb36c4a1e0be63906847e3f187419bf70ce928a1ca37e6cad4def9b18d10-merged.mount: Deactivated successfully.
Oct  7 10:55:14 np0005473739 podman[421609]: 2025-10-07 14:55:14.23746636 +0000 UTC m=+0.488332385 container remove a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_johnson, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2)
Oct  7 10:55:14 np0005473739 systemd[1]: libpod-conmon-a7512749695c8a26342ff0b803d02892f1e43af6666f8e6c29257ee601b2f2e6.scope: Deactivated successfully.
Oct  7 10:55:14 np0005473739 podman[421649]: 2025-10-07 14:55:14.416849957 +0000 UTC m=+0.049034189 container create 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 10:55:14 np0005473739 systemd[1]: Started libpod-conmon-2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b.scope.
Oct  7 10:55:14 np0005473739 podman[421649]: 2025-10-07 14:55:14.391881826 +0000 UTC m=+0.024066078 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:55:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:55:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04d2b2561392ebbff6cc7a67da3fbf8e4551788b552f2e88e213a2822b39c57/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04d2b2561392ebbff6cc7a67da3fbf8e4551788b552f2e88e213a2822b39c57/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04d2b2561392ebbff6cc7a67da3fbf8e4551788b552f2e88e213a2822b39c57/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e04d2b2561392ebbff6cc7a67da3fbf8e4551788b552f2e88e213a2822b39c57/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:14 np0005473739 podman[421649]: 2025-10-07 14:55:14.525558414 +0000 UTC m=+0.157742666 container init 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:55:14 np0005473739 podman[421649]: 2025-10-07 14:55:14.534453899 +0000 UTC m=+0.166638131 container start 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:55:14 np0005473739 podman[421649]: 2025-10-07 14:55:14.537714296 +0000 UTC m=+0.169898528 container attach 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:55:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2790: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 701 KiB/s rd, 1.8 MiB/s wr, 58 op/s
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]: {
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:    "0": [
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:        {
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "devices": [
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "/dev/loop3"
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            ],
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_name": "ceph_lv0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_size": "21470642176",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "name": "ceph_lv0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "tags": {
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.cluster_name": "ceph",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.crush_device_class": "",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.encrypted": "0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.osd_id": "0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.type": "block",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.vdo": "0"
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            },
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "type": "block",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "vg_name": "ceph_vg0"
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:        }
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:    ],
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:    "1": [
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:        {
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "devices": [
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "/dev/loop4"
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            ],
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_name": "ceph_lv1",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_size": "21470642176",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "name": "ceph_lv1",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "tags": {
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.cluster_name": "ceph",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.crush_device_class": "",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.encrypted": "0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.osd_id": "1",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.type": "block",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.vdo": "0"
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            },
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "type": "block",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "vg_name": "ceph_vg1"
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:        }
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:    ],
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:    "2": [
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:        {
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "devices": [
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "/dev/loop5"
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            ],
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_name": "ceph_lv2",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_size": "21470642176",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "name": "ceph_lv2",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "tags": {
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.cluster_name": "ceph",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.crush_device_class": "",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.encrypted": "0",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.osd_id": "2",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.type": "block",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:                "ceph.vdo": "0"
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            },
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "type": "block",
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:            "vg_name": "ceph_vg2"
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:        }
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]:    ]
Oct  7 10:55:15 np0005473739 sad_zhukovsky[421665]: }
Oct  7 10:55:15 np0005473739 systemd[1]: libpod-2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b.scope: Deactivated successfully.
Oct  7 10:55:15 np0005473739 podman[421674]: 2025-10-07 14:55:15.406399506 +0000 UTC m=+0.026569644 container died 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 10:55:15 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e04d2b2561392ebbff6cc7a67da3fbf8e4551788b552f2e88e213a2822b39c57-merged.mount: Deactivated successfully.
Oct  7 10:55:15 np0005473739 podman[421674]: 2025-10-07 14:55:15.505897459 +0000 UTC m=+0.126067577 container remove 2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sad_zhukovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 10:55:15 np0005473739 systemd[1]: libpod-conmon-2e7a97f95a936937b80c7059099c252a0fd888d0b63a44ad4db99b746598e08b.scope: Deactivated successfully.
Oct  7 10:55:15 np0005473739 NetworkManager[44949]: <info>  [1759848915.6161] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/645)
Oct  7 10:55:15 np0005473739 NetworkManager[44949]: <info>  [1759848915.6173] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/646)
Oct  7 10:55:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:15Z|01599|binding|INFO|Releasing lport b1fef780-f55f-4980-93e7-c37cceea4385 from this chassis (sb_readonly=0)
Oct  7 10:55:15 np0005473739 nova_compute[259550]: 2025-10-07 14:55:15.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:15 np0005473739 nova_compute[259550]: 2025-10-07 14:55:15.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:15 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:15Z|01600|binding|INFO|Releasing lport b1fef780-f55f-4980-93e7-c37cceea4385 from this chassis (sb_readonly=0)
Oct  7 10:55:15 np0005473739 nova_compute[259550]: 2025-10-07 14:55:15.666 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:16 np0005473739 nova_compute[259550]: 2025-10-07 14:55:16.043 2 DEBUG nova.compute.manager [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:55:16 np0005473739 nova_compute[259550]: 2025-10-07 14:55:16.044 2 DEBUG nova.compute.manager [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing instance network info cache due to event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:55:16 np0005473739 nova_compute[259550]: 2025-10-07 14:55:16.044 2 DEBUG oslo_concurrency.lockutils [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:55:16 np0005473739 nova_compute[259550]: 2025-10-07 14:55:16.045 2 DEBUG oslo_concurrency.lockutils [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:55:16 np0005473739 nova_compute[259550]: 2025-10-07 14:55:16.045 2 DEBUG nova.network.neutron [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:55:16 np0005473739 podman[421830]: 2025-10-07 14:55:16.186155472 +0000 UTC m=+0.043576544 container create c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 10:55:16 np0005473739 systemd[1]: Started libpod-conmon-c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730.scope.
Oct  7 10:55:16 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:55:16 np0005473739 podman[421830]: 2025-10-07 14:55:16.165137696 +0000 UTC m=+0.022558788 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:55:16 np0005473739 podman[421830]: 2025-10-07 14:55:16.273811822 +0000 UTC m=+0.131232924 container init c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:55:16 np0005473739 podman[421830]: 2025-10-07 14:55:16.282303187 +0000 UTC m=+0.139724259 container start c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct  7 10:55:16 np0005473739 practical_aryabhata[421846]: 167 167
Oct  7 10:55:16 np0005473739 systemd[1]: libpod-c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730.scope: Deactivated successfully.
Oct  7 10:55:16 np0005473739 podman[421830]: 2025-10-07 14:55:16.290235726 +0000 UTC m=+0.147656788 container attach c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:55:16 np0005473739 podman[421830]: 2025-10-07 14:55:16.290586825 +0000 UTC m=+0.148007897 container died c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 10:55:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-652074d54567be9ad2f1b150a95f4a9e814c4a5f4b92d69b6760724d5b6b16c0-merged.mount: Deactivated successfully.
Oct  7 10:55:16 np0005473739 podman[421830]: 2025-10-07 14:55:16.342997793 +0000 UTC m=+0.200418865 container remove c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_aryabhata, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 10:55:16 np0005473739 systemd[1]: libpod-conmon-c21a57fe040a339129e7cbd3c6c1a1effde780919b2571a0b5e2b62818b85730.scope: Deactivated successfully.
Oct  7 10:55:16 np0005473739 podman[421870]: 2025-10-07 14:55:16.547214748 +0000 UTC m=+0.058609933 container create 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:55:16 np0005473739 systemd[1]: Started libpod-conmon-99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea.scope.
Oct  7 10:55:16 np0005473739 podman[421870]: 2025-10-07 14:55:16.51933266 +0000 UTC m=+0.030727865 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:55:16 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:55:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5256b296090ad58d86ad45a70a0551452ce545bddcd655fc157fa6c1bd1666/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5256b296090ad58d86ad45a70a0551452ce545bddcd655fc157fa6c1bd1666/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5256b296090ad58d86ad45a70a0551452ce545bddcd655fc157fa6c1bd1666/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ce5256b296090ad58d86ad45a70a0551452ce545bddcd655fc157fa6c1bd1666/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:55:16 np0005473739 podman[421870]: 2025-10-07 14:55:16.638785281 +0000 UTC m=+0.150180486 container init 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507)
Oct  7 10:55:16 np0005473739 podman[421870]: 2025-10-07 14:55:16.6455694 +0000 UTC m=+0.156964585 container start 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:55:16 np0005473739 podman[421870]: 2025-10-07 14:55:16.65046218 +0000 UTC m=+0.161857375 container attach 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:55:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2791: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 100 op/s
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]: {
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "osd_id": 2,
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "type": "bluestore"
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:    },
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "osd_id": 1,
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "type": "bluestore"
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:    },
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "osd_id": 0,
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:        "type": "bluestore"
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]:    }
Oct  7 10:55:17 np0005473739 gallant_hellman[421888]: }
Oct  7 10:55:17 np0005473739 nova_compute[259550]: 2025-10-07 14:55:17.698 2 DEBUG nova.network.neutron [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updated VIF entry in instance network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:55:17 np0005473739 nova_compute[259550]: 2025-10-07 14:55:17.701 2 DEBUG nova.network.neutron [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updating instance_info_cache with network_info: [{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:55:17 np0005473739 nova_compute[259550]: 2025-10-07 14:55:17.723 2 DEBUG oslo_concurrency.lockutils [req-bfecb761-31dc-494b-adb3-1bb52219e98e req-ece69685-52f4-4f0b-9001-8ba726eea02c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:55:17 np0005473739 systemd[1]: libpod-99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea.scope: Deactivated successfully.
Oct  7 10:55:17 np0005473739 systemd[1]: libpod-99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea.scope: Consumed 1.086s CPU time.
Oct  7 10:55:17 np0005473739 podman[421870]: 2025-10-07 14:55:17.732027463 +0000 UTC m=+1.243422658 container died 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:55:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ce5256b296090ad58d86ad45a70a0551452ce545bddcd655fc157fa6c1bd1666-merged.mount: Deactivated successfully.
Oct  7 10:55:17 np0005473739 podman[421870]: 2025-10-07 14:55:17.881275914 +0000 UTC m=+1.392671099 container remove 99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gallant_hellman, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True)
Oct  7 10:55:17 np0005473739 systemd[1]: libpod-conmon-99604ba6346721800959ae3121d2ff04af7771c518eb243a86b85fdc13bab9ea.scope: Deactivated successfully.
Oct  7 10:55:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:55:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:55:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:55:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:55:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 615bafa6-9496-4a09-adfe-a38ae51cafa2 does not exist
Oct  7 10:55:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1dea4b98-2bcc-4bb9-9e1f-f286fb85a19a does not exist
Oct  7 10:55:17 np0005473739 nova_compute[259550]: 2025-10-07 14:55:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.011 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.021 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.021 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.022 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.022 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:55:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:55:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2056866912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.496 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.542 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.584 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.585 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000092 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.739 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.741 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3449MB free_disk=59.96738052368164GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.742 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.742 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2792: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 654 KiB/s wr, 99 op/s
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.870 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b49c6d60-6c44-4afd-8dbc-521e4c1c31ac actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.871 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.871 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:55:18 np0005473739 nova_compute[259550]: 2025-10-07 14:55:18.918 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:55:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:55:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:55:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:55:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2917906744' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:55:19 np0005473739 nova_compute[259550]: 2025-10-07 14:55:19.439 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.521s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:55:19 np0005473739 nova_compute[259550]: 2025-10-07 14:55:19.447 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:55:19 np0005473739 nova_compute[259550]: 2025-10-07 14:55:19.471 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:55:19 np0005473739 nova_compute[259550]: 2025-10-07 14:55:19.525 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:55:19 np0005473739 nova_compute[259550]: 2025-10-07 14:55:19.526 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.783s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2793: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:55:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:55:22
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'cephfs.cephfs.meta', 'cephfs.cephfs.data', 'images', 'backups', 'default.rgw.meta', 'vms', '.rgw.root', '.mgr', 'volumes', 'default.rgw.log']
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:55:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2794: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:55:23 np0005473739 nova_compute[259550]: 2025-10-07 14:55:23.014 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:55:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:55:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:55:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:55:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:55:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:55:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:55:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:55:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:55:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:55:23 np0005473739 nova_compute[259550]: 2025-10-07 14:55:23.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2795: 305 pgs: 305 active+clean; 88 MiB data, 979 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct  7 10:55:25 np0005473739 podman[422030]: 2025-10-07 14:55:25.092821336 +0000 UTC m=+0.070392243 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 10:55:25 np0005473739 podman[422031]: 2025-10-07 14:55:25.110293069 +0000 UTC m=+0.087784684 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 10:55:25 np0005473739 nova_compute[259550]: 2025-10-07 14:55:25.527 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:55:25 np0005473739 nova_compute[259550]: 2025-10-07 14:55:25.528 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:55:25 np0005473739 nova_compute[259550]: 2025-10-07 14:55:25.553 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:55:25 np0005473739 nova_compute[259550]: 2025-10-07 14:55:25.554 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:55:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2796: 305 pgs: 305 active+clean; 91 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 527 KiB/s wr, 51 op/s
Oct  7 10:55:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:26Z|00198|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:e0:c5:45 10.100.0.6
Oct  7 10:55:26 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:26Z|00199|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:e0:c5:45 10.100.0.6
Oct  7 10:55:28 np0005473739 nova_compute[259550]: 2025-10-07 14:55:28.017 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:28 np0005473739 nova_compute[259550]: 2025-10-07 14:55:28.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2797: 305 pgs: 305 active+clean; 117 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  7 10:55:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2798: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #135. Immutable memtables: 0.
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.824015) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 81] Flushing memtable with next log file: 135
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931824102, "job": 81, "event": "flush_started", "num_memtables": 1, "num_entries": 1706, "num_deletes": 250, "total_data_size": 2765726, "memory_usage": 2818840, "flush_reason": "Manual Compaction"}
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 81] Level-0 flush table #136: started
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931835380, "cf_name": "default", "job": 81, "event": "table_file_creation", "file_number": 136, "file_size": 1601559, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 56993, "largest_seqno": 58698, "table_properties": {"data_size": 1595821, "index_size": 2812, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1861, "raw_key_size": 15007, "raw_average_key_size": 20, "raw_value_size": 1583216, "raw_average_value_size": 2189, "num_data_blocks": 129, "num_entries": 723, "num_filter_entries": 723, "num_deletions": 250, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848752, "oldest_key_time": 1759848752, "file_creation_time": 1759848931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 136, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 81] Flush lasted 11410 microseconds, and 4162 cpu microseconds.
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.835434) [db/flush_job.cc:967] [default] [JOB 81] Level-0 flush table #136: 1601559 bytes OK
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.835455) [db/memtable_list.cc:519] [default] Level-0 commit table #136 started
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.838392) [db/memtable_list.cc:722] [default] Level-0 commit table #136: memtable #1 done
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.838407) EVENT_LOG_v1 {"time_micros": 1759848931838402, "job": 81, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.838425) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 81] Try to delete WAL files size 2758382, prev total WAL file size 2758382, number of live WAL files 2.
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000132.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.839721) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032323533' seq:72057594037927935, type:22 .. '6D6772737461740032353034' seq:0, type:0; will stop at (end)
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 82] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 81 Base level 0, inputs: [136(1564KB)], [134(10148KB)]
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931839807, "job": 82, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [136], "files_L6": [134], "score": -1, "input_data_size": 11993210, "oldest_snapshot_seqno": -1}
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 82] Generated table #137: 7865 keys, 9787612 bytes, temperature: kUnknown
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931915091, "cf_name": "default", "job": 82, "event": "table_file_creation", "file_number": 137, "file_size": 9787612, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9737630, "index_size": 29197, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19717, "raw_key_size": 204741, "raw_average_key_size": 26, "raw_value_size": 9599807, "raw_average_value_size": 1220, "num_data_blocks": 1143, "num_entries": 7865, "num_filter_entries": 7865, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848931, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 137, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.915626) [db/compaction/compaction_job.cc:1663] [default] [JOB 82] Compacted 1@0 + 1@6 files to L6 => 9787612 bytes
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.917489) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 159.2 rd, 129.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.5, 9.9 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(13.6) write-amplify(6.1) OK, records in: 8293, records dropped: 428 output_compression: NoCompression
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.917513) EVENT_LOG_v1 {"time_micros": 1759848931917502, "job": 82, "event": "compaction_finished", "compaction_time_micros": 75346, "compaction_time_cpu_micros": 26599, "output_level": 6, "num_output_files": 1, "total_output_size": 9787612, "num_input_records": 8293, "num_output_records": 7865, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000136.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931918180, "job": 82, "event": "table_file_deletion", "file_number": 136}
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000134.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848931920781, "job": 82, "event": "table_file_deletion", "file_number": 134}
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.839540) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.920868) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.920887) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.920889) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.920891) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:31 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:31.920893) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:55:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2728124249' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:55:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:55:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2728124249' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2799: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:55:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:55:33 np0005473739 nova_compute[259550]: 2025-10-07 14:55:33.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:33 np0005473739 nova_compute[259550]: 2025-10-07 14:55:33.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2800: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:55:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2801: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 309 KiB/s rd, 2.1 MiB/s wr, 60 op/s
Oct  7 10:55:38 np0005473739 nova_compute[259550]: 2025-10-07 14:55:38.031 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:38 np0005473739 podman[422073]: 2025-10-07 14:55:38.073745746 +0000 UTC m=+0.055271714 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:55:38 np0005473739 podman[422074]: 2025-10-07 14:55:38.104768477 +0000 UTC m=+0.084583860 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  7 10:55:38 np0005473739 nova_compute[259550]: 2025-10-07 14:55:38.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2802: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 291 KiB/s rd, 1.6 MiB/s wr, 54 op/s
Oct  7 10:55:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2803: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 682 B/s rd, 62 KiB/s wr, 2 op/s
Oct  7 10:55:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2804: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct  7 10:55:43 np0005473739 nova_compute[259550]: 2025-10-07 14:55:43.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #138. Immutable memtables: 0.
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.568429) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 83] Flushing memtable with next log file: 138
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943568534, "job": 83, "event": "flush_started", "num_memtables": 1, "num_entries": 344, "num_deletes": 251, "total_data_size": 182906, "memory_usage": 189576, "flush_reason": "Manual Compaction"}
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 83] Level-0 flush table #139: started
Oct  7 10:55:43 np0005473739 nova_compute[259550]: 2025-10-07 14:55:43.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943696133, "cf_name": "default", "job": 83, "event": "table_file_creation", "file_number": 139, "file_size": 181444, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 58699, "largest_seqno": 59042, "table_properties": {"data_size": 179264, "index_size": 343, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 773, "raw_key_size": 5402, "raw_average_key_size": 18, "raw_value_size": 175033, "raw_average_value_size": 599, "num_data_blocks": 15, "num_entries": 292, "num_filter_entries": 292, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848932, "oldest_key_time": 1759848932, "file_creation_time": 1759848943, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 139, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 83] Flush lasted 127753 microseconds, and 1983 cpu microseconds.
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.696188) [db/flush_job.cc:967] [default] [JOB 83] Level-0 flush table #139: 181444 bytes OK
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.696213) [db/memtable_list.cc:519] [default] Level-0 commit table #139 started
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.760296) [db/memtable_list.cc:722] [default] Level-0 commit table #139: memtable #1 done
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.760348) EVENT_LOG_v1 {"time_micros": 1759848943760337, "job": 83, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.760374) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 83] Try to delete WAL files size 180559, prev total WAL file size 180559, number of live WAL files 2.
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000135.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.760985) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035353232' seq:72057594037927935, type:22 .. '7061786F730035373734' seq:0, type:0; will stop at (end)
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 84] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 83 Base level 0, inputs: [139(177KB)], [137(9558KB)]
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943761016, "job": 84, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [139], "files_L6": [137], "score": -1, "input_data_size": 9969056, "oldest_snapshot_seqno": -1}
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 84] Generated table #140: 7648 keys, 8238760 bytes, temperature: kUnknown
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943883053, "cf_name": "default", "job": 84, "event": "table_file_creation", "file_number": 140, "file_size": 8238760, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8191734, "index_size": 26817, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19141, "raw_key_size": 200924, "raw_average_key_size": 26, "raw_value_size": 8059230, "raw_average_value_size": 1053, "num_data_blocks": 1034, "num_entries": 7648, "num_filter_entries": 7648, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759848943, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 140, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.883290) [db/compaction/compaction_job.cc:1663] [default] [JOB 84] Compacted 1@0 + 1@6 files to L6 => 8238760 bytes
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.941311) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 81.6 rd, 67.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.2, 9.3 +0.0 blob) out(7.9 +0.0 blob), read-write-amplify(100.3) write-amplify(45.4) OK, records in: 8157, records dropped: 509 output_compression: NoCompression
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.941346) EVENT_LOG_v1 {"time_micros": 1759848943941333, "job": 84, "event": "compaction_finished", "compaction_time_micros": 122116, "compaction_time_cpu_micros": 22938, "output_level": 6, "num_output_files": 1, "total_output_size": 8238760, "num_input_records": 8157, "num_output_records": 7648, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000139.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943941565, "job": 84, "event": "table_file_deletion", "file_number": 139}
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000137.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759848943943306, "job": 84, "event": "table_file_deletion", "file_number": 137}
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.760840) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.943351) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.943356) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.943357) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.943359) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:43 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:55:43.943361) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:55:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2805: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct  7 10:55:45 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:45Z|01601|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Oct  7 10:55:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2806: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 15 KiB/s wr, 0 op/s
Oct  7 10:55:48 np0005473739 nova_compute[259550]: 2025-10-07 14:55:48.070 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:48 np0005473739 nova_compute[259550]: 2025-10-07 14:55:48.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2807: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s wr, 0 op/s
Oct  7 10:55:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2808: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 4.3 KiB/s wr, 0 op/s
Oct  7 10:55:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:55:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.734 2 DEBUG nova.compute.manager [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.734 2 DEBUG nova.compute.manager [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing instance network info cache due to event network-changed-0f10619b-f7b9-4cfb-a093-06e296ce6a63. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.735 2 DEBUG oslo_concurrency.lockutils [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.735 2 DEBUG oslo_concurrency.lockutils [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.735 2 DEBUG nova.network.neutron [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Refreshing network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.824 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.825 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.825 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.825 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.825 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.827 2 INFO nova.compute.manager [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Terminating instance#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.827 2 DEBUG nova.compute.manager [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:55:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2809: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 1023 B/s wr, 0 op/s
Oct  7 10:55:52 np0005473739 kernel: tap0f10619b-f7 (unregistering): left promiscuous mode
Oct  7 10:55:52 np0005473739 NetworkManager[44949]: <info>  [1759848952.8881] device (tap0f10619b-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:55:52 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:52Z|01602|binding|INFO|Releasing lport 0f10619b-f7b9-4cfb-a093-06e296ce6a63 from this chassis (sb_readonly=0)
Oct  7 10:55:52 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:52Z|01603|binding|INFO|Setting lport 0f10619b-f7b9-4cfb-a093-06e296ce6a63 down in Southbound
Oct  7 10:55:52 np0005473739 ovn_controller[151684]: 2025-10-07T14:55:52Z|01604|binding|INFO|Removing iface tap0f10619b-f7 ovn-installed in OVS
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.897 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:52.902 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:e0:c5:45 10.100.0.6'], port_security=['fa:16:3e:e0:c5:45 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': 'b49c6d60-6c44-4afd-8dbc-521e4c1c31ac', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-d7162764-2824-4b84-93f3-e59b2b2d0365', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': '1fd27387-7887-47bf-8a95-e87828fb4a63 9aa4f30d-9ce8-4452-a71d-172dedad2762', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c238a0b0-d3a1-4d4e-9703-74187cbb70ac, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=0f10619b-f7b9-4cfb-a093-06e296ce6a63) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:55:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:52.904 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 0f10619b-f7b9-4cfb-a093-06e296ce6a63 in datapath d7162764-2824-4b84-93f3-e59b2b2d0365 unbound from our chassis#033[00m
Oct  7 10:55:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:52.904 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network d7162764-2824-4b84-93f3-e59b2b2d0365, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:55:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:52.905 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[aefd6fef-7d14-4c5a-9576-32e995795a6b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:52 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:52.906 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365 namespace which is not needed anymore#033[00m
Oct  7 10:55:52 np0005473739 nova_compute[259550]: 2025-10-07 14:55:52.920 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:52 np0005473739 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Deactivated successfully.
Oct  7 10:55:52 np0005473739 systemd[1]: machine-qemu\x2d180\x2dinstance\x2d00000092.scope: Consumed 15.120s CPU time.
Oct  7 10:55:52 np0005473739 systemd-machined[214580]: Machine qemu-180-instance-00000092 terminated.
Oct  7 10:55:53 np0005473739 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [NOTICE]   (421396) : haproxy version is 2.8.14-c23fe91
Oct  7 10:55:53 np0005473739 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [NOTICE]   (421396) : path to executable is /usr/sbin/haproxy
Oct  7 10:55:53 np0005473739 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [WARNING]  (421396) : Exiting Master process...
Oct  7 10:55:53 np0005473739 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [WARNING]  (421396) : Exiting Master process...
Oct  7 10:55:53 np0005473739 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [ALERT]    (421396) : Current worker (421398) exited with code 143 (Terminated)
Oct  7 10:55:53 np0005473739 neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365[421390]: [WARNING]  (421396) : All workers exited. Exiting... (0)
Oct  7 10:55:53 np0005473739 systemd[1]: libpod-14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275.scope: Deactivated successfully.
Oct  7 10:55:53 np0005473739 podman[422143]: 2025-10-07 14:55:53.065320939 +0000 UTC m=+0.063196864 container died 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.068 2 INFO nova.virt.libvirt.driver [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Instance destroyed successfully.#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.068 2 DEBUG nova.objects.instance [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid b49c6d60-6c44-4afd-8dbc-521e4c1c31ac obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.072 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.090 2 DEBUG nova.virt.libvirt.vif [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:55:00Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-17551502',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=146,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBECPh1pByCCitV1nn5SXhUBD70Y3hmlmSyk96xTNnlcIfpBRz1+IZ9Aq3nwoGJZoH4mSlndssl/oL5+TBJGoYqXuFU3nK+musxCwWhV9dKumK0LA7mZMLKLnViO0sCqyFA==',key_name='tempest-TestSecurityGroupsBasicOps-911554559',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:55:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-5q94rep4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:55:12Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b49c6d60-6c44-4afd-8dbc-521e4c1c31ac,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.091 2 DEBUG nova.network.os_vif_util [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.207", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.092 2 DEBUG nova.network.os_vif_util [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.092 2 DEBUG os_vif [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.095 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap0f10619b-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.097 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.099 2 INFO os_vif [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:e0:c5:45,bridge_name='br-int',has_traffic_filtering=True,id=0f10619b-f7b9-4cfb-a093-06e296ce6a63,network=Network(d7162764-2824-4b84-93f3-e59b2b2d0365),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap0f10619b-f7')#033[00m
Oct  7 10:55:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275-userdata-shm.mount: Deactivated successfully.
Oct  7 10:55:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-980dd10139229014c16cb6749bcc4d9b0aa60136ed3872e78879cb5d137c8d02-merged.mount: Deactivated successfully.
Oct  7 10:55:53 np0005473739 podman[422143]: 2025-10-07 14:55:53.163107547 +0000 UTC m=+0.160983472 container cleanup 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 10:55:53 np0005473739 systemd[1]: libpod-conmon-14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275.scope: Deactivated successfully.
Oct  7 10:55:53 np0005473739 podman[422200]: 2025-10-07 14:55:53.245008115 +0000 UTC m=+0.059342292 container remove 14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:55:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.251 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b00ca7ad-e0e2-4d78-8a81-ddbd1f6784e5]: (4, ('Tue Oct  7 02:55:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365 (14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275)\n14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275\nTue Oct  7 02:55:53 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365 (14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275)\n14e297b5a2df633b613b56babe25250ad4c2a29c79a349a6de10c58522dec275\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.253 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9fe5b746-1eff-47e3-b6b7-398319859c38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.254 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd7162764-20, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:53 np0005473739 kernel: tapd7162764-20: left promiscuous mode
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.280 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.283 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[70e6c077-9715-463a-a4c3-cf485f6bba82]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.309 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0c6d3aca-a344-4ea4-81fa-7d9d9802d781]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.311 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5a645e4e-e344-4a37-94e0-1d2806181544]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.326 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[031766d2-57cc-47d2-9306-8497dfba1830]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 949438, 'reachable_time': 23730, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 422215, 'error': None, 'target': 'ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:53 np0005473739 systemd[1]: run-netns-ovnmeta\x2dd7162764\x2d2824\x2d4b84\x2d93f3\x2de59b2b2d0365.mount: Deactivated successfully.
Oct  7 10:55:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.330 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-d7162764-2824-4b84-93f3-e59b2b2d0365 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:55:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:53.330 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[6d1cd18e-0520-4044-a2e1-df346a266d7e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.723 2 INFO nova.virt.libvirt.driver [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Deleting instance files /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_del#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.724 2 INFO nova.virt.libvirt.driver [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Deletion of /var/lib/nova/instances/b49c6d60-6c44-4afd-8dbc-521e4c1c31ac_del complete#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.785 2 INFO nova.compute.manager [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Took 0.96 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.786 2 DEBUG oslo.service.loopingcall [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.787 2 DEBUG nova.compute.manager [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:55:53 np0005473739 nova_compute[259550]: 2025-10-07 14:55:53.787 2 DEBUG nova.network.neutron [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.243 2 DEBUG nova.network.neutron [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updated VIF entry in instance network info cache for port 0f10619b-f7b9-4cfb-a093-06e296ce6a63. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.244 2 DEBUG nova.network.neutron [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updating instance_info_cache with network_info: [{"id": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "address": "fa:16:3e:e0:c5:45", "network": {"id": "d7162764-2824-4b84-93f3-e59b2b2d0365", "bridge": "br-int", "label": "tempest-network-smoke--1444228734", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap0f10619b-f7", "ovs_interfaceid": "0f10619b-f7b9-4cfb-a093-06e296ce6a63", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.267 2 DEBUG oslo_concurrency.lockutils [req-334ff3d6-7d9b-4a1b-bef3-d769d270b523 req-87b1d5f5-8020-4710-981f-9312f936afaf 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.594 2 DEBUG nova.network.neutron [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.616 2 INFO nova.compute.manager [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Took 0.83 seconds to deallocate network for instance.#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.658 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.659 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.720 2 DEBUG oslo_concurrency.processutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.811 2 DEBUG nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-vif-unplugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.812 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.813 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.813 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.813 2 DEBUG nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] No waiting events found dispatching network-vif-unplugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.813 2 WARNING nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received unexpected event network-vif-unplugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.814 2 DEBUG nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.814 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.814 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.815 2 DEBUG oslo_concurrency.lockutils [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.815 2 DEBUG nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] No waiting events found dispatching network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.815 2 WARNING nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received unexpected event network-vif-plugged-0f10619b-f7b9-4cfb-a093-06e296ce6a63 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:55:54 np0005473739 nova_compute[259550]: 2025-10-07 14:55:54.815 2 DEBUG nova.compute.manager [req-7c47d14d-c5a0-4d1f-a3ee-da0baefa8f5d req-0446478f-40b5-4766-b063-758b36ff644e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Received event network-vif-deleted-0f10619b-f7b9-4cfb-a093-06e296ce6a63 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:55:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2810: 305 pgs: 305 active+clean; 75 MiB data, 1008 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 KiB/s wr, 22 op/s
Oct  7 10:55:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:55:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204438503' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:55:55 np0005473739 nova_compute[259550]: 2025-10-07 14:55:55.160 2 DEBUG oslo_concurrency.processutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.440s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:55:55 np0005473739 nova_compute[259550]: 2025-10-07 14:55:55.166 2 DEBUG nova.compute.provider_tree [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:55:55 np0005473739 nova_compute[259550]: 2025-10-07 14:55:55.184 2 DEBUG nova.scheduler.client.report [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:55:55 np0005473739 nova_compute[259550]: 2025-10-07 14:55:55.204 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:55 np0005473739 nova_compute[259550]: 2025-10-07 14:55:55.229 2 INFO nova.scheduler.client.report [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance b49c6d60-6c44-4afd-8dbc-521e4c1c31ac#033[00m
Oct  7 10:55:55 np0005473739 nova_compute[259550]: 2025-10-07 14:55:55.309 2 DEBUG oslo_concurrency.lockutils [None req-28673ef0-ceb6-4582-801b-568b74847f14 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b49c6d60-6c44-4afd-8dbc-521e4c1c31ac" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.484s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:55:56 np0005473739 podman[422240]: 2025-10-07 14:55:56.068081927 +0000 UTC m=+0.058389237 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:55:56 np0005473739 podman[422239]: 2025-10-07 14:55:56.069166056 +0000 UTC m=+0.061304634 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd)
Oct  7 10:55:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:56.308 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=55, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=54) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:55:56 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:55:56.309 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:55:56 np0005473739 nova_compute[259550]: 2025-10-07 14:55:56.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:55:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2811: 305 pgs: 305 active+clean; 41 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 2.2 KiB/s wr, 27 op/s
Oct  7 10:55:58 np0005473739 nova_compute[259550]: 2025-10-07 14:55:58.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:58 np0005473739 nova_compute[259550]: 2025-10-07 14:55:58.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:58 np0005473739 nova_compute[259550]: 2025-10-07 14:55:58.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:58 np0005473739 nova_compute[259550]: 2025-10-07 14:55:58.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:55:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2812: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:56:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:00.093 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:00.094 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:00.094 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2813: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:56:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2814: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:56:02 np0005473739 nova_compute[259550]: 2025-10-07 14:56:02.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:56:03 np0005473739 nova_compute[259550]: 2025-10-07 14:56:03.109 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:03.311 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '55'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:56:03 np0005473739 nova_compute[259550]: 2025-10-07 14:56:03.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2815: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:56:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2816: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 341 B/s wr, 5 op/s
Oct  7 10:56:06 np0005473739 nova_compute[259550]: 2025-10-07 14:56:06.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:56:07 np0005473739 nova_compute[259550]: 2025-10-07 14:56:07.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:56:07 np0005473739 nova_compute[259550]: 2025-10-07 14:56:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:56:08 np0005473739 nova_compute[259550]: 2025-10-07 14:56:08.067 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759848953.0660236, b49c6d60-6c44-4afd-8dbc-521e4c1c31ac => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:56:08 np0005473739 nova_compute[259550]: 2025-10-07 14:56:08.067 2 INFO nova.compute.manager [-] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:56:08 np0005473739 nova_compute[259550]: 2025-10-07 14:56:08.093 2 DEBUG nova.compute.manager [None req-5da3938b-6b65-48fb-9a81-a31d70c9924c - - - - - -] [instance: b49c6d60-6c44-4afd-8dbc-521e4c1c31ac] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:56:08 np0005473739 nova_compute[259550]: 2025-10-07 14:56:08.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:08 np0005473739 nova_compute[259550]: 2025-10-07 14:56:08.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2817: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 85 B/s rd, 0 B/s wr, 0 op/s
Oct  7 10:56:09 np0005473739 podman[422281]: 2025-10-07 14:56:09.054779999 +0000 UTC m=+0.047729914 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 10:56:09 np0005473739 podman[422282]: 2025-10-07 14:56:09.082873443 +0000 UTC m=+0.075321014 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 10:56:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2818: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:56:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:11 np0005473739 nova_compute[259550]: 2025-10-07 14:56:11.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:56:11 np0005473739 nova_compute[259550]: 2025-10-07 14:56:11.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:56:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2819: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:56:13 np0005473739 nova_compute[259550]: 2025-10-07 14:56:13.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:13 np0005473739 nova_compute[259550]: 2025-10-07 14:56:13.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:13 np0005473739 nova_compute[259550]: 2025-10-07 14:56:13.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:56:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2820: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:56:16 np0005473739 nova_compute[259550]: 2025-10-07 14:56:16.803 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:16 np0005473739 nova_compute[259550]: 2025-10-07 14:56:16.803 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2821: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:56:16 np0005473739 nova_compute[259550]: 2025-10-07 14:56:16.939 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.210 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.211 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.220 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.220 2 INFO nova.compute.claims [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.362 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:56:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/418693607' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.833 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.838 2 DEBUG nova.compute.provider_tree [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.858 2 DEBUG nova.scheduler.client.report [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.887 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.676s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.888 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.944 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.945 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.967 2 INFO nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:56:17 np0005473739 nova_compute[259550]: 2025-10-07 14:56:17.987 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.076 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.077 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.078 2 INFO nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Creating image(s)#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.102 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.128 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.153 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.158 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.193 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.237 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.239 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.239 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.240 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.263 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.267 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.302 2 DEBUG nova.policy [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2822: 305 pgs: 305 active+clean; 41 MiB data, 977 MiB used, 59 GiB / 60 GiB avail; 2.2 KiB/s rd, 170 B/s wr, 3 op/s
Oct  7 10:56:18 np0005473739 nova_compute[259550]: 2025-10-07 14:56:18.992 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Successfully created port: 6b0ce98f-f482-4715-a60b-a5b393b822f1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:56:19 np0005473739 podman[422612]: 2025-10-07 14:56:19.057688207 +0000 UTC m=+0.295381998 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:56:19 np0005473739 podman[422634]: 2025-10-07 14:56:19.422285666 +0000 UTC m=+0.237897627 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:56:19 np0005473739 podman[422612]: 2025-10-07 14:56:19.629697375 +0000 UTC m=+0.867391186 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:56:19 np0005473739 nova_compute[259550]: 2025-10-07 14:56:19.760 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:19 np0005473739 nova_compute[259550]: 2025-10-07 14:56:19.824 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Successfully updated port: 6b0ce98f-f482-4715-a60b-a5b393b822f1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:56:19 np0005473739 nova_compute[259550]: 2025-10-07 14:56:19.867 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:56:19 np0005473739 nova_compute[259550]: 2025-10-07 14:56:19.991 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:56:19 np0005473739 nova_compute[259550]: 2025-10-07 14:56:19.993 2 DEBUG nova.compute.manager [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:56:19 np0005473739 nova_compute[259550]: 2025-10-07 14:56:19.993 2 DEBUG nova.compute.manager [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing instance network info cache due to event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:56:19 np0005473739 nova_compute[259550]: 2025-10-07 14:56:19.993 2 DEBUG oslo_concurrency.lockutils [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:56:19 np0005473739 nova_compute[259550]: 2025-10-07 14:56:19.994 2 DEBUG oslo_concurrency.lockutils [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:56:19 np0005473739 nova_compute[259550]: 2025-10-07 14:56:19.994 2 DEBUG nova.network.neutron [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing network info cache for port 6b0ce98f-f482-4715-a60b-a5b393b822f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:56:19 np0005473739 nova_compute[259550]: 2025-10-07 14:56:19.996 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.019 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.019 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.020 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.020 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.223 2 DEBUG nova.objects.instance [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.231 2 DEBUG nova.network.neutron [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.241 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.242 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Ensure instance console log exists: /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.242 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.242 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.243 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:56:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:56:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:56:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:56:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:56:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2049733192' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.556 2 DEBUG nova.network.neutron [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.575 2 DEBUG oslo_concurrency.lockutils [req-1bff8f07-95a2-46c2-a2d3-c6d76646be7e req-b1ecb655-3da0-4684-b2a3-1e1ae71057e0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.577 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.577 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.579 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.559s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.752 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.763 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.764 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3628MB free_disk=59.98827362060547GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.764 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.764 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.831 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.832 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.833 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:56:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2823: 305 pgs: 305 active+clean; 58 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 636 KiB/s wr, 14 op/s
Oct  7 10:56:20 np0005473739 nova_compute[259550]: 2025-10-07 14:56:20.871 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:56:21 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1b7453e9-a58e-4451-85a9-6bfc45015c8b does not exist
Oct  7 10:56:21 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ee918ae6-2cb2-4019-9761-daaa0b126d47 does not exist
Oct  7 10:56:21 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f6d4539b-8f88-4708-965a-289cde9acb3f does not exist
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/699554650' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.348 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.355 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.374 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.399 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.400 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.635s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.594 2 DEBUG nova.network.neutron [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.618 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.619 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance network_info: |[{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.623 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Start _get_guest_xml network_info=[{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.630 2 WARNING nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.637 2 DEBUG nova.virt.libvirt.host [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.638 2 DEBUG nova.virt.libvirt.host [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.645 2 DEBUG nova.virt.libvirt.host [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.646 2 DEBUG nova.virt.libvirt.host [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.646 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.646 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.647 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.647 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.647 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.648 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.648 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.648 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.648 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.649 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.649 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.649 2 DEBUG nova.virt.hardware [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:56:21 np0005473739 nova_compute[259550]: 2025-10-07 14:56:21.653 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:21 np0005473739 podman[423157]: 2025-10-07 14:56:21.832364078 +0000 UTC m=+0.030579480 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:56:21 np0005473739 podman[423157]: 2025-10-07 14:56:21.933840964 +0000 UTC m=+0.132056336 container create 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:56:22 np0005473739 systemd[1]: Started libpod-conmon-3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613.scope.
Oct  7 10:56:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:56:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2988443510' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:56:22 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.144 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.492s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.169 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.173 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:22 np0005473739 podman[423157]: 2025-10-07 14:56:22.223455609 +0000 UTC m=+0.421671001 container init 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:56:22 np0005473739 podman[423157]: 2025-10-07 14:56:22.231616654 +0000 UTC m=+0.429832026 container start 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:56:22 np0005473739 beautiful_bell[423193]: 167 167
Oct  7 10:56:22 np0005473739 systemd[1]: libpod-3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613.scope: Deactivated successfully.
Oct  7 10:56:22 np0005473739 podman[423157]: 2025-10-07 14:56:22.304500673 +0000 UTC m=+0.502716065 container attach 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 10:56:22 np0005473739 podman[423157]: 2025-10-07 14:56:22.304880974 +0000 UTC m=+0.503096346 container died 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True)
Oct  7 10:56:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a88d8996b1a84837a668bdde9ef353111b808a7270c4a00c8faa2f0d5e7bcede-merged.mount: Deactivated successfully.
Oct  7 10:56:22 np0005473739 podman[423157]: 2025-10-07 14:56:22.610140303 +0000 UTC m=+0.808355675 container remove 3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=beautiful_bell, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:56:22 np0005473739 systemd[1]: libpod-conmon-3eaebbb00b2195f4a8dab11ef1bb2e0dc49bfb756db1b0c3155e177c010fa613.scope: Deactivated successfully.
Oct  7 10:56:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:56:22 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1639954895' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.691 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.695 2 DEBUG nova.virt.libvirt.vif [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=147,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-v02jgwcv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:56:18Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=1713ed1d-a673-4fc9-ac2b-14c7fda46d8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.695 2 DEBUG nova.network.os_vif_util [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.697 2 DEBUG nova.network.os_vif_util [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.699 2 DEBUG nova.objects.instance [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.722 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  <uuid>1713ed1d-a673-4fc9-ac2b-14c7fda46d8b</uuid>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  <name>instance-00000093</name>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046</nova:name>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:56:21</nova:creationTime>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <nova:port uuid="6b0ce98f-f482-4715-a60b-a5b393b822f1">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <entry name="serial">1713ed1d-a673-4fc9-ac2b-14c7fda46d8b</entry>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <entry name="uuid">1713ed1d-a673-4fc9-ac2b-14c7fda46d8b</entry>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:87:48:40"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <target dev="tap6b0ce98f-f4"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/console.log" append="off"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:56:22 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:56:22 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:56:22 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:56:22 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.724 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Preparing to wait for external event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.724 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.725 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.725 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.726 2 DEBUG nova.virt.libvirt.vif [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=147,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-v02jgwcv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:56:18Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=1713ed1d-a673-4fc9-ac2b-14c7fda46d8b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.726 2 DEBUG nova.network.os_vif_util [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.727 2 DEBUG nova.network.os_vif_util [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.727 2 DEBUG os_vif [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.729 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.730 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6b0ce98f-f4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.734 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6b0ce98f-f4, col_values=(('external_ids', {'iface-id': '6b0ce98f-f482-4715-a60b-a5b393b822f1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:87:48:40', 'vm-uuid': '1713ed1d-a673-4fc9-ac2b-14c7fda46d8b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:22 np0005473739 NetworkManager[44949]: <info>  [1759848982.7386] manager: (tap6b0ce98f-f4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/647)
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.739 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.751 2 INFO os_vif [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4')#033[00m
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:56:22
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', 'vms', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', '.mgr', 'images', 'backups', 'default.rgw.meta']
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.802 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.803 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.803 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:87:48:40, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.804 2 INFO nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Using config drive#033[00m
Oct  7 10:56:22 np0005473739 nova_compute[259550]: 2025-10-07 14:56:22.840 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:56:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2824: 305 pgs: 305 active+clean; 58 MiB data, 985 MiB used, 59 GiB / 60 GiB avail; 8.7 KiB/s rd, 636 KiB/s wr, 14 op/s
Oct  7 10:56:22 np0005473739 podman[423261]: 2025-10-07 14:56:22.86022458 +0000 UTC m=+0.079822853 container create da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:56:22 np0005473739 podman[423261]: 2025-10-07 14:56:22.814428219 +0000 UTC m=+0.034026502 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:56:22 np0005473739 systemd[1]: Started libpod-conmon-da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923.scope.
Oct  7 10:56:22 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:56:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:23 np0005473739 podman[423261]: 2025-10-07 14:56:23.122087821 +0000 UTC m=+0.341686114 container init da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:56:23 np0005473739 podman[423261]: 2025-10-07 14:56:23.133614937 +0000 UTC m=+0.353213210 container start da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 10:56:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:56:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:56:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:56:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:56:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:56:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:56:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:56:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:56:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:56:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:56:23 np0005473739 podman[423261]: 2025-10-07 14:56:23.300151354 +0000 UTC m=+0.519749627 container attach da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 10:56:23 np0005473739 nova_compute[259550]: 2025-10-07 14:56:23.304 2 INFO nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Creating config drive at /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config#033[00m
Oct  7 10:56:23 np0005473739 nova_compute[259550]: 2025-10-07 14:56:23.312 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy2iphfz0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:23 np0005473739 nova_compute[259550]: 2025-10-07 14:56:23.389 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:56:23 np0005473739 nova_compute[259550]: 2025-10-07 14:56:23.477 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpy2iphfz0" returned: 0 in 0.165s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:23 np0005473739 nova_compute[259550]: 2025-10-07 14:56:23.503 2 DEBUG nova.storage.rbd_utils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:56:23 np0005473739 nova_compute[259550]: 2025-10-07 14:56:23.508 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:23 np0005473739 nova_compute[259550]: 2025-10-07 14:56:23.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:24 np0005473739 pedantic_bouman[423295]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:56:24 np0005473739 pedantic_bouman[423295]: --> relative data size: 1.0
Oct  7 10:56:24 np0005473739 pedantic_bouman[423295]: --> All data devices are unavailable
Oct  7 10:56:24 np0005473739 systemd[1]: libpod-da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923.scope: Deactivated successfully.
Oct  7 10:56:24 np0005473739 systemd[1]: libpod-da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923.scope: Consumed 1.095s CPU time.
Oct  7 10:56:24 np0005473739 podman[423261]: 2025-10-07 14:56:24.647761968 +0000 UTC m=+1.867360251 container died da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:56:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2825: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:56:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-842c7698d42687d2c75dc6e095d4a0fddcfb41710b5580961f3d47925b4e407f-merged.mount: Deactivated successfully.
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.182 2 DEBUG oslo_concurrency.processutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.674s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.184 2 INFO nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Deleting local config drive /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b/disk.config because it was imported into RBD.#033[00m
Oct  7 10:56:25 np0005473739 kernel: tap6b0ce98f-f4: entered promiscuous mode
Oct  7 10:56:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:56:25Z|01605|binding|INFO|Claiming lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 for this chassis.
Oct  7 10:56:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:56:25Z|01606|binding|INFO|6b0ce98f-f482-4715-a60b-a5b393b822f1: Claiming fa:16:3e:87:48:40 10.100.0.8
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.249 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:25 np0005473739 NetworkManager[44949]: <info>  [1759848985.2564] manager: (tap6b0ce98f-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/648)
Oct  7 10:56:25 np0005473739 podman[423261]: 2025-10-07 14:56:25.272656725 +0000 UTC m=+2.492254998 container remove da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_bouman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3)
Oct  7 10:56:25 np0005473739 systemd[1]: libpod-conmon-da529cd8bfc8d5e5e5623d04bda1126d8dfd1a3e12bfd2333c9dbe5af15f1923.scope: Deactivated successfully.
Oct  7 10:56:25 np0005473739 systemd-machined[214580]: New machine qemu-181-instance-00000093.
Oct  7 10:56:25 np0005473739 systemd-udevd[423390]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:56:25 np0005473739 systemd[1]: Started Virtual Machine qemu-181-instance-00000093.
Oct  7 10:56:25 np0005473739 NetworkManager[44949]: <info>  [1759848985.3130] device (tap6b0ce98f-f4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:56:25 np0005473739 NetworkManager[44949]: <info>  [1759848985.3144] device (tap6b0ce98f-f4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:56:25Z|01607|binding|INFO|Setting lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 ovn-installed in OVS
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.334 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:56:25Z|01608|binding|INFO|Setting lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 up in Southbound
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.344 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:48:40 10.100.0.8'], port_security=['fa:16:3e:87:48:40 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1713ed1d-a673-4fc9-ac2b-14c7fda46d8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfc0506d-4bd3-43b3-bacb-108e14651f30 ce5386dc-6551-4933-ba47-e2cefb87e49d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6b0ce98f-f482-4715-a60b-a5b393b822f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.345 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0ce98f-f482-4715-a60b-a5b393b822f1 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 bound to our chassis#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.346 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de9e106e-fa40-4336-9ad7-f811682b66d2#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.363 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[57ea48d4-1c35-4f4e-a3c7-49034052b8c5]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.364 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapde9e106e-f1 in ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.366 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapde9e106e-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.366 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2de0944f-7d62-4e9d-b5c2-2873dfad0534]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.367 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5054d5e9-b9c9-4586-8d5c-fcb941f73b82]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.380 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[5184d496-6a00-4752-823d-e4d1495167af]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.409 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee95c29b-e588-4eb9-a0ae-ba14902163b3]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.447 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d5f3069a-52a4-4020-a3b6-0f052a1169db]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.452 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9a9724f-3653-45b8-95ee-4b9098244a24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 NetworkManager[44949]: <info>  [1759848985.4538] manager: (tapde9e106e-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/649)
Oct  7 10:56:25 np0005473739 systemd-udevd[423392]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.498 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2395a4ef-813b-4e54-8f3a-02615800612d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.502 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3f63a79e-f3c8-4128-924d-b42b87711093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 NetworkManager[44949]: <info>  [1759848985.5278] device (tapde9e106e-f0): carrier: link connected
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.533 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[3aef7bea-3ede-437a-94df-c1632cdda675]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.552 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b53d2196-dccc-42cb-b83d-0a099419ff66]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde9e106e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a2:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956909, 'reachable_time': 21559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 423496, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.568 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bd116021-a68d-4c88-a2a1-bac1fa9d54fd]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed9:a2b1'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 956909, 'tstamp': 956909}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 423499, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.585 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[bf84466c-ce08-40f4-8fa9-829f61bde85f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde9e106e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a2:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956909, 'reachable_time': 21559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 423507, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.623 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[134d219a-f522-43a5-b2bd-aebfd3be00d3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.707 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[4fc8c12c-7a16-422a-a1e4-b33328dc75a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde9e106e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.709 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde9e106e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:25 np0005473739 NetworkManager[44949]: <info>  [1759848985.7126] manager: (tapde9e106e-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/650)
Oct  7 10:56:25 np0005473739 kernel: tapde9e106e-f0: entered promiscuous mode
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.714 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde9e106e-f0, col_values=(('external_ids', {'iface-id': 'df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:25 np0005473739 ovn_controller[151684]: 2025-10-07T14:56:25Z|01609|binding|INFO|Releasing lport df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b from this chassis (sb_readonly=0)
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.732 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/de9e106e-fa40-4336-9ad7-f811682b66d2.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/de9e106e-fa40-4336-9ad7-f811682b66d2.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.734 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a9afdba2-a0fa-4d5c-bbca-a54e2fc45919]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.734 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-de9e106e-fa40-4336-9ad7-f811682b66d2
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/de9e106e-fa40-4336-9ad7-f811682b66d2.pid.haproxy
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID de9e106e-fa40-4336-9ad7-f811682b66d2
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:56:25 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:56:25.735 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'env', 'PROCESS_TAG=haproxy-de9e106e-fa40-4336-9ad7-f811682b66d2', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/de9e106e-fa40-4336-9ad7-f811682b66d2.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:56:25 np0005473739 nova_compute[259550]: 2025-10-07 14:56:25.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.007 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.007 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.007 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:56:26 np0005473739 podman[423614]: 2025-10-07 14:56:26.015961587 +0000 UTC m=+0.062226937 container create c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:56:26 np0005473739 systemd[1]: Started libpod-conmon-c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3.scope.
Oct  7 10:56:26 np0005473739 podman[423614]: 2025-10-07 14:56:25.978508635 +0000 UTC m=+0.024774015 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:56:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:56:26 np0005473739 podman[423614]: 2025-10-07 14:56:26.177523763 +0000 UTC m=+0.223789143 container init c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:56:26 np0005473739 podman[423614]: 2025-10-07 14:56:26.189035148 +0000 UTC m=+0.235300498 container start c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 10:56:26 np0005473739 stoic_borg[423649]: 167 167
Oct  7 10:56:26 np0005473739 systemd[1]: libpod-c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3.scope: Deactivated successfully.
Oct  7 10:56:26 np0005473739 podman[423650]: 2025-10-07 14:56:26.129503922 +0000 UTC m=+0.046535953 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:56:26 np0005473739 podman[423614]: 2025-10-07 14:56:26.208640437 +0000 UTC m=+0.254905817 container attach c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:56:26 np0005473739 podman[423614]: 2025-10-07 14:56:26.211133123 +0000 UTC m=+0.257398483 container died c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 10:56:26 np0005473739 podman[423650]: 2025-10-07 14:56:26.236987677 +0000 UTC m=+0.154019718 container create a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  7 10:56:26 np0005473739 systemd[1]: Started libpod-conmon-a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511.scope.
Oct  7 10:56:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1ec8982c3297fd74b8e18954d96ec6a24520b1c950eb8b1e62bb2b7bd1ad61e0-merged.mount: Deactivated successfully.
Oct  7 10:56:26 np0005473739 podman[423664]: 2025-10-07 14:56:26.297807187 +0000 UTC m=+0.158528448 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 10:56:26 np0005473739 podman[423614]: 2025-10-07 14:56:26.307841702 +0000 UTC m=+0.354107052 container remove c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_borg, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 10:56:26 np0005473739 systemd[1]: libpod-conmon-c86efd6e1e7f0ded719f40e4e7befa0037065cbb4c108b56d1b2a2b3511674f3.scope: Deactivated successfully.
Oct  7 10:56:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:56:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/febe76e8dd5c30f47b13b78fd42131538300a58f00e7858de7bede20aa45a62a/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:26 np0005473739 podman[423650]: 2025-10-07 14:56:26.363625859 +0000 UTC m=+0.280657910 container init a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:56:26 np0005473739 podman[423650]: 2025-10-07 14:56:26.3723638 +0000 UTC m=+0.289395841 container start a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.schema-version=1.0)
Oct  7 10:56:26 np0005473739 podman[423663]: 2025-10-07 14:56:26.388957389 +0000 UTC m=+0.254326032 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.392 2 DEBUG nova.compute.manager [req-6cba2ebc-ac3a-44f8-aa0f-be04ccd9637b req-5fc26886-cefd-46a4-afe2-cfc2efdcf240 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.392 2 DEBUG oslo_concurrency.lockutils [req-6cba2ebc-ac3a-44f8-aa0f-be04ccd9637b req-5fc26886-cefd-46a4-afe2-cfc2efdcf240 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.392 2 DEBUG oslo_concurrency.lockutils [req-6cba2ebc-ac3a-44f8-aa0f-be04ccd9637b req-5fc26886-cefd-46a4-afe2-cfc2efdcf240 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.393 2 DEBUG oslo_concurrency.lockutils [req-6cba2ebc-ac3a-44f8-aa0f-be04ccd9637b req-5fc26886-cefd-46a4-afe2-cfc2efdcf240 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.393 2 DEBUG nova.compute.manager [req-6cba2ebc-ac3a-44f8-aa0f-be04ccd9637b req-5fc26886-cefd-46a4-afe2-cfc2efdcf240 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Processing event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:56:26 np0005473739 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [NOTICE]   (423728) : New worker (423731) forked
Oct  7 10:56:26 np0005473739 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [NOTICE]   (423728) : Loading success.
Oct  7 10:56:26 np0005473739 podman[423745]: 2025-10-07 14:56:26.510905486 +0000 UTC m=+0.047037266 container create 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 10:56:26 np0005473739 systemd[1]: Started libpod-conmon-7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35.scope.
Oct  7 10:56:26 np0005473739 podman[423745]: 2025-10-07 14:56:26.4921956 +0000 UTC m=+0.028327400 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:56:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:56:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98efbbce7c83cc5b5ef65b8af85c7885d4aaadb7b9f8b81954cfc624439196f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98efbbce7c83cc5b5ef65b8af85c7885d4aaadb7b9f8b81954cfc624439196f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98efbbce7c83cc5b5ef65b8af85c7885d4aaadb7b9f8b81954cfc624439196f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f98efbbce7c83cc5b5ef65b8af85c7885d4aaadb7b9f8b81954cfc624439196f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.631 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848986.6305578, 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.632 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] VM Started (Lifecycle Event)#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.636 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:56:26 np0005473739 podman[423745]: 2025-10-07 14:56:26.641389209 +0000 UTC m=+0.177521019 container init 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.641 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.647 2 INFO nova.virt.libvirt.driver [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance spawned successfully.#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.647 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:56:26 np0005473739 podman[423745]: 2025-10-07 14:56:26.651409744 +0000 UTC m=+0.187541524 container start 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.656 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:56:26 np0005473739 podman[423745]: 2025-10-07 14:56:26.658918923 +0000 UTC m=+0.195050723 container attach 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.665 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.670 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.670 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.671 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.671 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.671 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.672 2 DEBUG nova.virt.libvirt.driver [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.699 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.700 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848986.6307867, 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.700 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.726 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.731 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759848986.6400173, 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.731 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.739 2 INFO nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Took 8.66 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.740 2 DEBUG nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.750 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.753 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.786 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.820 2 INFO nova.compute.manager [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Took 9.64 seconds to build instance.#033[00m
Oct  7 10:56:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:26 np0005473739 nova_compute[259550]: 2025-10-07 14:56:26.840 2 DEBUG oslo_concurrency.lockutils [None req-cee98efe-f6e4-4b32-9712-89454beb238b 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.036s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2826: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.8 MiB/s wr, 28 op/s
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]: {
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:    "0": [
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:        {
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "devices": [
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "/dev/loop3"
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            ],
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_name": "ceph_lv0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_size": "21470642176",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "name": "ceph_lv0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "tags": {
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.cluster_name": "ceph",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.crush_device_class": "",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.encrypted": "0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.osd_id": "0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.type": "block",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.vdo": "0"
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            },
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "type": "block",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "vg_name": "ceph_vg0"
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:        }
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:    ],
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:    "1": [
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:        {
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "devices": [
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "/dev/loop4"
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            ],
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_name": "ceph_lv1",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_size": "21470642176",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "name": "ceph_lv1",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "tags": {
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.cluster_name": "ceph",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.crush_device_class": "",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.encrypted": "0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.osd_id": "1",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.type": "block",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.vdo": "0"
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            },
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "type": "block",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "vg_name": "ceph_vg1"
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:        }
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:    ],
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:    "2": [
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:        {
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "devices": [
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "/dev/loop5"
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            ],
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_name": "ceph_lv2",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_size": "21470642176",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "name": "ceph_lv2",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "tags": {
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.cluster_name": "ceph",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.crush_device_class": "",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.encrypted": "0",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.osd_id": "2",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.type": "block",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:                "ceph.vdo": "0"
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            },
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "type": "block",
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:            "vg_name": "ceph_vg2"
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:        }
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]:    ]
Oct  7 10:56:27 np0005473739 interesting_nightingale[423760]: }
Oct  7 10:56:27 np0005473739 systemd[1]: libpod-7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35.scope: Deactivated successfully.
Oct  7 10:56:27 np0005473739 podman[423745]: 2025-10-07 14:56:27.613329561 +0000 UTC m=+1.149461341 container died 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:56:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f98efbbce7c83cc5b5ef65b8af85c7885d4aaadb7b9f8b81954cfc624439196f-merged.mount: Deactivated successfully.
Oct  7 10:56:27 np0005473739 podman[423745]: 2025-10-07 14:56:27.677747676 +0000 UTC m=+1.213879456 container remove 7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_nightingale, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:56:27 np0005473739 systemd[1]: libpod-conmon-7ee4802ecb91746232c491f96aa2796896cbe94f1a25436aae181b37257b8c35.scope: Deactivated successfully.
Oct  7 10:56:27 np0005473739 nova_compute[259550]: 2025-10-07 14:56:27.738 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:28 np0005473739 podman[423918]: 2025-10-07 14:56:28.428484104 +0000 UTC m=+0.048731451 container create 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:56:28 np0005473739 systemd[1]: Started libpod-conmon-74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e.scope.
Oct  7 10:56:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:56:28 np0005473739 podman[423918]: 2025-10-07 14:56:28.407490929 +0000 UTC m=+0.027738296 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:56:28 np0005473739 podman[423918]: 2025-10-07 14:56:28.514761718 +0000 UTC m=+0.135009095 container init 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 10:56:28 np0005473739 podman[423918]: 2025-10-07 14:56:28.52316224 +0000 UTC m=+0.143409587 container start 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 10:56:28 np0005473739 podman[423918]: 2025-10-07 14:56:28.527250848 +0000 UTC m=+0.147498195 container attach 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:56:28 np0005473739 nervous_tharp[423934]: 167 167
Oct  7 10:56:28 np0005473739 systemd[1]: libpod-74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e.scope: Deactivated successfully.
Oct  7 10:56:28 np0005473739 podman[423918]: 2025-10-07 14:56:28.533093123 +0000 UTC m=+0.153340470 container died 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 10:56:28 np0005473739 nova_compute[259550]: 2025-10-07 14:56:28.550 2 DEBUG nova.compute.manager [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:56:28 np0005473739 nova_compute[259550]: 2025-10-07 14:56:28.550 2 DEBUG oslo_concurrency.lockutils [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:28 np0005473739 nova_compute[259550]: 2025-10-07 14:56:28.550 2 DEBUG oslo_concurrency.lockutils [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:28 np0005473739 nova_compute[259550]: 2025-10-07 14:56:28.551 2 DEBUG oslo_concurrency.lockutils [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:28 np0005473739 nova_compute[259550]: 2025-10-07 14:56:28.551 2 DEBUG nova.compute.manager [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] No waiting events found dispatching network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:56:28 np0005473739 nova_compute[259550]: 2025-10-07 14:56:28.551 2 WARNING nova.compute.manager [req-441390ef-1dc8-4000-955d-724ab15f8f40 req-9652ee3c-637a-4655-a867-1c51bc637f8d 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received unexpected event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:56:28 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4069cfee9c1f7f7607216075dd0c90b2654f4aa67053720b052db6f962833216-merged.mount: Deactivated successfully.
Oct  7 10:56:28 np0005473739 podman[423918]: 2025-10-07 14:56:28.582244334 +0000 UTC m=+0.202491691 container remove 74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_tharp, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 10:56:28 np0005473739 systemd[1]: libpod-conmon-74a1482b41c05a2d8c5d3a4bfe7875f51a4ea83cbf99a6ce2ca2f949e739411e.scope: Deactivated successfully.
Oct  7 10:56:28 np0005473739 nova_compute[259550]: 2025-10-07 14:56:28.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:28 np0005473739 podman[423960]: 2025-10-07 14:56:28.771899453 +0000 UTC m=+0.047793376 container create 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:56:28 np0005473739 systemd[1]: Started libpod-conmon-00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67.scope.
Oct  7 10:56:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:56:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677536e0f10ec528ca46b74831e35873909e225dd734c4489c011ecbb7afbb4a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677536e0f10ec528ca46b74831e35873909e225dd734c4489c011ecbb7afbb4a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677536e0f10ec528ca46b74831e35873909e225dd734c4489c011ecbb7afbb4a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/677536e0f10ec528ca46b74831e35873909e225dd734c4489c011ecbb7afbb4a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:56:28 np0005473739 podman[423960]: 2025-10-07 14:56:28.75136783 +0000 UTC m=+0.027261783 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:56:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2827: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.2 MiB/s rd, 1.8 MiB/s wr, 73 op/s
Oct  7 10:56:28 np0005473739 podman[423960]: 2025-10-07 14:56:28.864761501 +0000 UTC m=+0.140655444 container init 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  7 10:56:28 np0005473739 podman[423960]: 2025-10-07 14:56:28.875125815 +0000 UTC m=+0.151019738 container start 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:56:28 np0005473739 podman[423960]: 2025-10-07 14:56:28.883749323 +0000 UTC m=+0.159643266 container attach 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]: {
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "osd_id": 2,
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "type": "bluestore"
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:    },
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "osd_id": 1,
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "type": "bluestore"
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:    },
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "osd_id": 0,
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:        "type": "bluestore"
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]:    }
Oct  7 10:56:29 np0005473739 kind_mccarthy[423977]: }
Oct  7 10:56:29 np0005473739 systemd[1]: libpod-00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67.scope: Deactivated successfully.
Oct  7 10:56:29 np0005473739 systemd[1]: libpod-00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67.scope: Consumed 1.119s CPU time.
Oct  7 10:56:29 np0005473739 podman[423960]: 2025-10-07 14:56:29.996302567 +0000 UTC m=+1.272196500 container died 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:56:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-677536e0f10ec528ca46b74831e35873909e225dd734c4489c011ecbb7afbb4a-merged.mount: Deactivated successfully.
Oct  7 10:56:30 np0005473739 podman[423960]: 2025-10-07 14:56:30.062010566 +0000 UTC m=+1.337904489 container remove 00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_mccarthy, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 10:56:30 np0005473739 systemd[1]: libpod-conmon-00161ea453763a499e4713972e7e2cc8edececa0a9a44823935c4e3b5fe7fc67.scope: Deactivated successfully.
Oct  7 10:56:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:56:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:56:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:56:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:56:30 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8a8ff7e7-d0b9-41e1-bb64-8a1732029abe does not exist
Oct  7 10:56:30 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev fd470abd-1253-46ad-8048-72530a644d75 does not exist
Oct  7 10:56:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:56:30 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:56:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2828: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 97 op/s
Oct  7 10:56:31 np0005473739 NetworkManager[44949]: <info>  [1759848991.3571] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/651)
Oct  7 10:56:31 np0005473739 NetworkManager[44949]: <info>  [1759848991.3581] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/652)
Oct  7 10:56:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:56:31Z|01610|binding|INFO|Releasing lport df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b from this chassis (sb_readonly=0)
Oct  7 10:56:31 np0005473739 nova_compute[259550]: 2025-10-07 14:56:31.360 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:56:31Z|01611|binding|INFO|Releasing lport df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b from this chassis (sb_readonly=0)
Oct  7 10:56:31 np0005473739 nova_compute[259550]: 2025-10-07 14:56:31.401 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:31 np0005473739 nova_compute[259550]: 2025-10-07 14:56:31.408 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:31 np0005473739 nova_compute[259550]: 2025-10-07 14:56:31.810 2 DEBUG nova.compute.manager [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:56:31 np0005473739 nova_compute[259550]: 2025-10-07 14:56:31.811 2 DEBUG nova.compute.manager [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing instance network info cache due to event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:56:31 np0005473739 nova_compute[259550]: 2025-10-07 14:56:31.811 2 DEBUG oslo_concurrency.lockutils [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:56:31 np0005473739 nova_compute[259550]: 2025-10-07 14:56:31.811 2 DEBUG oslo_concurrency.lockutils [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:56:31 np0005473739 nova_compute[259550]: 2025-10-07 14:56:31.811 2 DEBUG nova.network.neutron [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing network info cache for port 6b0ce98f-f482-4715-a60b-a5b393b822f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:56:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:32 np0005473739 nova_compute[259550]: 2025-10-07 14:56:32.742 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:56:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1491135164' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:56:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:56:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1491135164' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00034841348814872695 of space, bias 1.0, pg target 0.10452404644461809 quantized to 32 (current 32)
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:56:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2829: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Oct  7 10:56:33 np0005473739 nova_compute[259550]: 2025-10-07 14:56:33.253 2 DEBUG nova.network.neutron [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updated VIF entry in instance network info cache for port 6b0ce98f-f482-4715-a60b-a5b393b822f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:56:33 np0005473739 nova_compute[259550]: 2025-10-07 14:56:33.254 2 DEBUG nova.network.neutron [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:56:33 np0005473739 nova_compute[259550]: 2025-10-07 14:56:33.285 2 DEBUG oslo_concurrency.lockutils [req-0ad6a2d5-a77b-40fb-a2e5-53e34a6a2e94 req-510d23b5-193f-4dfb-a283-4ad7d629a5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:56:33 np0005473739 nova_compute[259550]: 2025-10-07 14:56:33.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2830: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.2 MiB/s wr, 85 op/s
Oct  7 10:56:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2831: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:56:37 np0005473739 nova_compute[259550]: 2025-10-07 14:56:37.744 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:38 np0005473739 nova_compute[259550]: 2025-10-07 14:56:38.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2832: 305 pgs: 305 active+clean; 88 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 341 B/s wr, 75 op/s
Oct  7 10:56:40 np0005473739 podman[424073]: 2025-10-07 14:56:40.108030604 +0000 UTC m=+0.087579710 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct  7 10:56:40 np0005473739 podman[424074]: 2025-10-07 14:56:40.112784899 +0000 UTC m=+0.089274823 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  7 10:56:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2833: 305 pgs: 305 active+clean; 96 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 821 KiB/s rd, 647 KiB/s wr, 42 op/s
Oct  7 10:56:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:56:42Z|00200|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:87:48:40 10.100.0.8
Oct  7 10:56:42 np0005473739 ovn_controller[151684]: 2025-10-07T14:56:42Z|00201|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:87:48:40 10.100.0.8
Oct  7 10:56:42 np0005473739 nova_compute[259550]: 2025-10-07 14:56:42.746 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2834: 305 pgs: 305 active+clean; 96 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 647 KiB/s wr, 15 op/s
Oct  7 10:56:43 np0005473739 nova_compute[259550]: 2025-10-07 14:56:43.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2835: 305 pgs: 305 active+clean; 109 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 225 KiB/s rd, 2.0 MiB/s wr, 40 op/s
Oct  7 10:56:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2836: 305 pgs: 305 active+clean; 115 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 301 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  7 10:56:47 np0005473739 nova_compute[259550]: 2025-10-07 14:56:47.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:48 np0005473739 nova_compute[259550]: 2025-10-07 14:56:48.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2837: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 65 op/s
Oct  7 10:56:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2838: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 312 KiB/s rd, 2.1 MiB/s wr, 63 op/s
Oct  7 10:56:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:56:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:56:52 np0005473739 nova_compute[259550]: 2025-10-07 14:56:52.750 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2839: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Oct  7 10:56:53 np0005473739 nova_compute[259550]: 2025-10-07 14:56:53.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:53 np0005473739 nova_compute[259550]: 2025-10-07 14:56:53.675 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:53 np0005473739 nova_compute[259550]: 2025-10-07 14:56:53.676 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:53 np0005473739 nova_compute[259550]: 2025-10-07 14:56:53.710 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:56:53 np0005473739 nova_compute[259550]: 2025-10-07 14:56:53.835 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:53 np0005473739 nova_compute[259550]: 2025-10-07 14:56:53.835 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:53 np0005473739 nova_compute[259550]: 2025-10-07 14:56:53.844 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:56:53 np0005473739 nova_compute[259550]: 2025-10-07 14:56:53.845 2 INFO nova.compute.claims [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:56:53 np0005473739 nova_compute[259550]: 2025-10-07 14:56:53.985 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:56:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1382234774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.444 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.459s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.452 2 DEBUG nova.compute.provider_tree [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.482 2 DEBUG nova.scheduler.client.report [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.527 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.528 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.614 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.615 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.664 2 INFO nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.691 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.819 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.821 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.821 2 INFO nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Creating image(s)#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.845 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:56:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2840: 305 pgs: 305 active+clean; 121 MiB data, 1021 MiB used, 59 GiB / 60 GiB avail; 295 KiB/s rd, 1.5 MiB/s wr, 50 op/s
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.881 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.914 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:56:54 np0005473739 nova_compute[259550]: 2025-10-07 14:56:54.922 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:55 np0005473739 nova_compute[259550]: 2025-10-07 14:56:55.014 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:55 np0005473739 nova_compute[259550]: 2025-10-07 14:56:55.015 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:55 np0005473739 nova_compute[259550]: 2025-10-07 14:56:55.016 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:55 np0005473739 nova_compute[259550]: 2025-10-07 14:56:55.016 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:55 np0005473739 nova_compute[259550]: 2025-10-07 14:56:55.042 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:56:55 np0005473739 nova_compute[259550]: 2025-10-07 14:56:55.048 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:56:55 np0005473739 nova_compute[259550]: 2025-10-07 14:56:55.280 2 DEBUG nova.policy [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:56:56 np0005473739 nova_compute[259550]: 2025-10-07 14:56:56.063 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.015s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:56:56 np0005473739 nova_compute[259550]: 2025-10-07 14:56:56.129 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:56:56 np0005473739 nova_compute[259550]: 2025-10-07 14:56:56.451 2 DEBUG nova.objects.instance [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid b02ddcfd-0f68-4cc3-9c20-24cac0572be8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:56:56 np0005473739 nova_compute[259550]: 2025-10-07 14:56:56.487 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:56:56 np0005473739 nova_compute[259550]: 2025-10-07 14:56:56.487 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Ensure instance console log exists: /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:56:56 np0005473739 nova_compute[259550]: 2025-10-07 14:56:56.488 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:56:56 np0005473739 nova_compute[259550]: 2025-10-07 14:56:56.488 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:56:56 np0005473739 nova_compute[259550]: 2025-10-07 14:56:56.489 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:56:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:56:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2841: 305 pgs: 305 active+clean; 121 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 105 KiB/s rd, 155 KiB/s wr, 34 op/s
Oct  7 10:56:57 np0005473739 podman[424306]: 2025-10-07 14:56:57.066010486 +0000 UTC m=+0.054577666 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid)
Oct  7 10:56:57 np0005473739 podman[424305]: 2025-10-07 14:56:57.065995795 +0000 UTC m=+0.056432605 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 10:56:57 np0005473739 nova_compute[259550]: 2025-10-07 14:56:57.752 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:57 np0005473739 nova_compute[259550]: 2025-10-07 14:56:57.766 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Successfully created port: 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:56:58 np0005473739 nova_compute[259550]: 2025-10-07 14:56:58.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:56:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2842: 305 pgs: 305 active+clean; 143 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 39 KiB/s rd, 817 KiB/s wr, 25 op/s
Oct  7 10:56:58 np0005473739 nova_compute[259550]: 2025-10-07 14:56:58.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:56:58 np0005473739 nova_compute[259550]: 2025-10-07 14:56:58.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 10:56:59 np0005473739 nova_compute[259550]: 2025-10-07 14:56:59.037 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 10:56:59 np0005473739 nova_compute[259550]: 2025-10-07 14:56:59.569 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Successfully updated port: 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:56:59 np0005473739 nova_compute[259550]: 2025-10-07 14:56:59.608 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:56:59 np0005473739 nova_compute[259550]: 2025-10-07 14:56:59.609 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:56:59 np0005473739 nova_compute[259550]: 2025-10-07 14:56:59.609 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:56:59 np0005473739 nova_compute[259550]: 2025-10-07 14:56:59.744 2 DEBUG nova.compute.manager [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:56:59 np0005473739 nova_compute[259550]: 2025-10-07 14:56:59.744 2 DEBUG nova.compute.manager [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing instance network info cache due to event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:56:59 np0005473739 nova_compute[259550]: 2025-10-07 14:56:59.745 2 DEBUG oslo_concurrency.lockutils [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:56:59 np0005473739 nova_compute[259550]: 2025-10-07 14:56:59.822 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:57:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:00.093 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:00.094 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:00.094 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2843: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:57:00 np0005473739 nova_compute[259550]: 2025-10-07 14:57:00.933 2 DEBUG nova.network.neutron [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updating instance_info_cache with network_info: [{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.057 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.058 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Instance network_info: |[{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.059 2 DEBUG oslo_concurrency.lockutils [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.059 2 DEBUG nova.network.neutron [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.062 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Start _get_guest_xml network_info=[{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.067 2 WARNING nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.072 2 DEBUG nova.virt.libvirt.host [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.073 2 DEBUG nova.virt.libvirt.host [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.078 2 DEBUG nova.virt.libvirt.host [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.078 2 DEBUG nova.virt.libvirt.host [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.079 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.079 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.080 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.080 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.080 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.080 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.080 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.081 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.081 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.081 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.081 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.081 2 DEBUG nova.virt.hardware [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.084 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:57:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:57:01 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3389181530' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.589 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.504s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.611 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:57:01 np0005473739 nova_compute[259550]: 2025-10-07 14:57:01.615 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:57:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:57:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1344967948' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.054 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.439s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.056 2 DEBUG nova.virt.libvirt.vif [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:56:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=148,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-hydkwpjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:56:54Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b02ddcfd-0f68-4cc3-9c20-24cac0572be8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.057 2 DEBUG nova.network.os_vif_util [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.058 2 DEBUG nova.network.os_vif_util [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.059 2 DEBUG nova.objects.instance [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid b02ddcfd-0f68-4cc3-9c20-24cac0572be8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.081 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  <uuid>b02ddcfd-0f68-4cc3-9c20-24cac0572be8</uuid>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  <name>instance-00000094</name>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825</nova:name>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:57:01</nova:creationTime>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <nova:port uuid="791c5fbe-b06e-4554-b2ad-f3f5c1709fc8">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.5" ipVersion="4"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <entry name="serial">b02ddcfd-0f68-4cc3-9c20-24cac0572be8</entry>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <entry name="uuid">b02ddcfd-0f68-4cc3-9c20-24cac0572be8</entry>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:9f:5e:cf"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <target dev="tap791c5fbe-b0"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/console.log" append="off"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:57:02 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:57:02 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:57:02 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:57:02 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.082 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Preparing to wait for external event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.082 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.083 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.083 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.084 2 DEBUG nova.virt.libvirt.vif [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:56:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=148,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-hydkwpjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:56:54Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b02ddcfd-0f68-4cc3-9c20-24cac0572be8,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.084 2 DEBUG nova.network.os_vif_util [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.085 2 DEBUG nova.network.os_vif_util [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.085 2 DEBUG os_vif [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.086 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.086 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.090 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap791c5fbe-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.090 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap791c5fbe-b0, col_values=(('external_ids', {'iface-id': '791c5fbe-b06e-4554-b2ad-f3f5c1709fc8', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9f:5e:cf', 'vm-uuid': 'b02ddcfd-0f68-4cc3-9c20-24cac0572be8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.092 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:02 np0005473739 NetworkManager[44949]: <info>  [1759849022.0930] manager: (tap791c5fbe-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/653)
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.098 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.100 2 INFO os_vif [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0')#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.227 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.228 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.228 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:9f:5e:cf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.228 2 INFO nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Using config drive#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.257 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.800 2 INFO nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Creating config drive at /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.806 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcqdow3ne execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:57:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2844: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.947 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpcqdow3ne" returned: 0 in 0.141s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.982 2 DEBUG nova.storage.rbd_utils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:57:02 np0005473739 nova_compute[259550]: 2025-10-07 14:57:02.988 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:57:03 np0005473739 nova_compute[259550]: 2025-10-07 14:57:03.345 2 DEBUG nova.network.neutron [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updated VIF entry in instance network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:57:03 np0005473739 nova_compute[259550]: 2025-10-07 14:57:03.346 2 DEBUG nova.network.neutron [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updating instance_info_cache with network_info: [{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:57:03 np0005473739 nova_compute[259550]: 2025-10-07 14:57:03.371 2 DEBUG oslo_concurrency.lockutils [req-65634a2d-3280-4ed4-88c9-0bf765b13e11 req-39154594-fcc5-4776-b251-9788c3a9ee9e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:57:03 np0005473739 nova_compute[259550]: 2025-10-07 14:57:03.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:03 np0005473739 nova_compute[259550]: 2025-10-07 14:57:03.871 2 DEBUG oslo_concurrency.processutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config b02ddcfd-0f68-4cc3-9c20-24cac0572be8_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.883s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:57:03 np0005473739 nova_compute[259550]: 2025-10-07 14:57:03.871 2 INFO nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Deleting local config drive /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8/disk.config because it was imported into RBD.#033[00m
Oct  7 10:57:03 np0005473739 kernel: tap791c5fbe-b0: entered promiscuous mode
Oct  7 10:57:03 np0005473739 NetworkManager[44949]: <info>  [1759849023.9246] manager: (tap791c5fbe-b0): new Tun device (/org/freedesktop/NetworkManager/Devices/654)
Oct  7 10:57:03 np0005473739 nova_compute[259550]: 2025-10-07 14:57:03.926 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:03Z|01612|binding|INFO|Claiming lport 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 for this chassis.
Oct  7 10:57:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:03Z|01613|binding|INFO|791c5fbe-b06e-4554-b2ad-f3f5c1709fc8: Claiming fa:16:3e:9f:5e:cf 10.100.0.5
Oct  7 10:57:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:03.936 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:5e:cf 10.100.0.5'], port_security=['fa:16:3e:9f:5e:cf 10.100.0.5'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b02ddcfd-0f68-4cc3-9c20-24cac0572be8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'bfc0506d-4bd3-43b3-bacb-108e14651f30', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:57:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:03.937 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 bound to our chassis#033[00m
Oct  7 10:57:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:03.938 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de9e106e-fa40-4336-9ad7-f811682b66d2#033[00m
Oct  7 10:57:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:03Z|01614|binding|INFO|Setting lport 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 ovn-installed in OVS
Oct  7 10:57:03 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:03Z|01615|binding|INFO|Setting lport 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 up in Southbound
Oct  7 10:57:03 np0005473739 nova_compute[259550]: 2025-10-07 14:57:03.944 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:03 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:03.956 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[fd39ac61-5dab-4689-a1f6-9f3db7cfd698]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:03 np0005473739 systemd-udevd[424483]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:57:03 np0005473739 systemd-machined[214580]: New machine qemu-182-instance-00000094.
Oct  7 10:57:03 np0005473739 NetworkManager[44949]: <info>  [1759849023.9768] device (tap791c5fbe-b0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:57:03 np0005473739 NetworkManager[44949]: <info>  [1759849023.9780] device (tap791c5fbe-b0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:57:03 np0005473739 systemd[1]: Started Virtual Machine qemu-182-instance-00000094.
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.000 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[86abc616-349a-4bde-9f3f-8ebd40168e60]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.003 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d55f427e-e946-4f2a-ab66-39e457e3e1fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.035 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[5efb5385-abc4-4040-a423-41d498967870]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.054 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2dd46687-2e20-47ea-9450-1650533248ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde9e106e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a2:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956909, 'reachable_time': 21559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424496, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.070 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[badfc6e6-8316-4e90-ac44-2ccdd833e216]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapde9e106e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 956922, 'tstamp': 956922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424497, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapde9e106e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 956926, 'tstamp': 956926}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424497, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.072 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde9e106e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:04 np0005473739 nova_compute[259550]: 2025-10-07 14:57:04.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.075 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde9e106e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.075 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.076 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde9e106e-f0, col_values=(('external_ids', {'iface-id': 'df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.076 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:57:04 np0005473739 nova_compute[259550]: 2025-10-07 14:57:04.535 2 DEBUG nova.compute.manager [req-dcb361d8-3eb3-4e80-a11d-36f89c208985 req-f66c4482-e818-4525-82f7-a4dcc31d4b3c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:57:04 np0005473739 nova_compute[259550]: 2025-10-07 14:57:04.536 2 DEBUG oslo_concurrency.lockutils [req-dcb361d8-3eb3-4e80-a11d-36f89c208985 req-f66c4482-e818-4525-82f7-a4dcc31d4b3c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:04 np0005473739 nova_compute[259550]: 2025-10-07 14:57:04.536 2 DEBUG oslo_concurrency.lockutils [req-dcb361d8-3eb3-4e80-a11d-36f89c208985 req-f66c4482-e818-4525-82f7-a4dcc31d4b3c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:04 np0005473739 nova_compute[259550]: 2025-10-07 14:57:04.536 2 DEBUG oslo_concurrency.lockutils [req-dcb361d8-3eb3-4e80-a11d-36f89c208985 req-f66c4482-e818-4525-82f7-a4dcc31d4b3c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:04 np0005473739 nova_compute[259550]: 2025-10-07 14:57:04.536 2 DEBUG nova.compute.manager [req-dcb361d8-3eb3-4e80-a11d-36f89c208985 req-f66c4482-e818-4525-82f7-a4dcc31d4b3c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Processing event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.648 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=56, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=55) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:57:04 np0005473739 nova_compute[259550]: 2025-10-07 14:57:04.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:04.649 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:57:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2845: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.037 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.327 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.328 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849025.327855, b02ddcfd-0f68-4cc3-9c20-24cac0572be8 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.328 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] VM Started (Lifecycle Event)#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.331 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.335 2 INFO nova.virt.libvirt.driver [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Instance spawned successfully.#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.336 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.371 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.376 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.377 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.377 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.377 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.378 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.378 2 DEBUG nova.virt.libvirt.driver [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.386 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.442 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.443 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849025.3286908, b02ddcfd-0f68-4cc3-9c20-24cac0572be8 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.443 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.467 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.471 2 INFO nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Took 10.65 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.472 2 DEBUG nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.474 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849025.3306851, b02ddcfd-0f68-4cc3-9c20-24cac0572be8 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.474 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.496 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.500 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.545 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.562 2 INFO nova.compute.manager [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Took 11.77 seconds to build instance.#033[00m
Oct  7 10:57:05 np0005473739 nova_compute[259550]: 2025-10-07 14:57:05.587 2 DEBUG oslo_concurrency.lockutils [None req-82775bb2-1163-4a64-9b12-a8e0601727f5 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.911s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:06 np0005473739 nova_compute[259550]: 2025-10-07 14:57:06.643 2 DEBUG nova.compute.manager [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:57:06 np0005473739 nova_compute[259550]: 2025-10-07 14:57:06.643 2 DEBUG oslo_concurrency.lockutils [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:06 np0005473739 nova_compute[259550]: 2025-10-07 14:57:06.644 2 DEBUG oslo_concurrency.lockutils [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:06 np0005473739 nova_compute[259550]: 2025-10-07 14:57:06.644 2 DEBUG oslo_concurrency.lockutils [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:06 np0005473739 nova_compute[259550]: 2025-10-07 14:57:06.644 2 DEBUG nova.compute.manager [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] No waiting events found dispatching network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:57:06 np0005473739 nova_compute[259550]: 2025-10-07 14:57:06.644 2 WARNING nova.compute.manager [req-7b71bf87-06e7-46cf-b7e1-5b99bbe1c58f req-afc3fd89-7a66-4b46-a23d-19c0d2687de7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received unexpected event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:57:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2846: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  7 10:57:07 np0005473739 nova_compute[259550]: 2025-10-07 14:57:07.094 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:07 np0005473739 nova_compute[259550]: 2025-10-07 14:57:07.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:57:08 np0005473739 nova_compute[259550]: 2025-10-07 14:57:08.638 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:08 np0005473739 nova_compute[259550]: 2025-10-07 14:57:08.733 2 DEBUG nova.compute.manager [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:57:08 np0005473739 nova_compute[259550]: 2025-10-07 14:57:08.734 2 DEBUG nova.compute.manager [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing instance network info cache due to event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:57:08 np0005473739 nova_compute[259550]: 2025-10-07 14:57:08.735 2 DEBUG oslo_concurrency.lockutils [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:57:08 np0005473739 nova_compute[259550]: 2025-10-07 14:57:08.735 2 DEBUG oslo_concurrency.lockutils [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:57:08 np0005473739 nova_compute[259550]: 2025-10-07 14:57:08.735 2 DEBUG nova.network.neutron [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:57:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2847: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 824 KiB/s rd, 1.8 MiB/s wr, 56 op/s
Oct  7 10:57:08 np0005473739 nova_compute[259550]: 2025-10-07 14:57:08.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:57:08 np0005473739 nova_compute[259550]: 2025-10-07 14:57:08.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:57:10 np0005473739 nova_compute[259550]: 2025-10-07 14:57:10.262 2 DEBUG nova.network.neutron [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updated VIF entry in instance network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:57:10 np0005473739 nova_compute[259550]: 2025-10-07 14:57:10.263 2 DEBUG nova.network.neutron [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updating instance_info_cache with network_info: [{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:57:10 np0005473739 nova_compute[259550]: 2025-10-07 14:57:10.289 2 DEBUG oslo_concurrency.lockutils [req-de73354e-585f-42a9-8e24-b0db98f36f6e req-2f260ea0-c051-4b23-bda4-d6c3f20cc226 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:57:10 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:10.652 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '56'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:10 np0005473739 nova_compute[259550]: 2025-10-07 14:57:10.828 2 DEBUG nova.compute.manager [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:57:10 np0005473739 nova_compute[259550]: 2025-10-07 14:57:10.829 2 DEBUG nova.compute.manager [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing instance network info cache due to event network-changed-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:57:10 np0005473739 nova_compute[259550]: 2025-10-07 14:57:10.830 2 DEBUG oslo_concurrency.lockutils [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:57:10 np0005473739 nova_compute[259550]: 2025-10-07 14:57:10.830 2 DEBUG oslo_concurrency.lockutils [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:57:10 np0005473739 nova_compute[259550]: 2025-10-07 14:57:10.830 2 DEBUG nova.network.neutron [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Refreshing network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:57:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2848: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 79 op/s
Oct  7 10:57:11 np0005473739 podman[424540]: 2025-10-07 14:57:11.100820396 +0000 UTC m=+0.086524910 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Oct  7 10:57:11 np0005473739 podman[424541]: 2025-10-07 14:57:11.136498921 +0000 UTC m=+0.119983637 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Oct  7 10:57:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:11 np0005473739 nova_compute[259550]: 2025-10-07 14:57:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:57:12 np0005473739 nova_compute[259550]: 2025-10-07 14:57:12.096 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:12 np0005473739 nova_compute[259550]: 2025-10-07 14:57:12.271 2 DEBUG nova.network.neutron [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updated VIF entry in instance network info cache for port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:57:12 np0005473739 nova_compute[259550]: 2025-10-07 14:57:12.272 2 DEBUG nova.network.neutron [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updating instance_info_cache with network_info: [{"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:57:12 np0005473739 nova_compute[259550]: 2025-10-07 14:57:12.295 2 DEBUG oslo_concurrency.lockutils [req-d5ed69a9-19c9-49c6-a9f1-0d05b362a2df req-b7752085-a970-4d74-938f-63bf79e07f4e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-b02ddcfd-0f68-4cc3-9c20-24cac0572be8" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:57:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2849: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  7 10:57:13 np0005473739 nova_compute[259550]: 2025-10-07 14:57:13.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:14 np0005473739 nova_compute[259550]: 2025-10-07 14:57:14.004 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:57:14 np0005473739 nova_compute[259550]: 2025-10-07 14:57:14.004 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:57:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2850: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  7 10:57:15 np0005473739 nova_compute[259550]: 2025-10-07 14:57:15.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:57:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2851: 305 pgs: 305 active+clean; 167 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 4.4 KiB/s wr, 71 op/s
Oct  7 10:57:16 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  7 10:57:17 np0005473739 nova_compute[259550]: 2025-10-07 14:57:17.099 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:17Z|00202|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9f:5e:cf 10.100.0.5
Oct  7 10:57:17 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:17Z|00203|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9f:5e:cf 10.100.0.5
Oct  7 10:57:17 np0005473739 nova_compute[259550]: 2025-10-07 14:57:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:57:17 np0005473739 nova_compute[259550]: 2025-10-07 14:57:17.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 10:57:18 np0005473739 nova_compute[259550]: 2025-10-07 14:57:18.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2852: 305 pgs: 305 active+clean; 189 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 1.0 MiB/s wr, 106 op/s
Oct  7 10:57:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2853: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 2.1 MiB/s wr, 100 op/s
Oct  7 10:57:21 np0005473739 nova_compute[259550]: 2025-10-07 14:57:21.131 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:57:21 np0005473739 nova_compute[259550]: 2025-10-07 14:57:21.324 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:21 np0005473739 nova_compute[259550]: 2025-10-07 14:57:21.325 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:21 np0005473739 nova_compute[259550]: 2025-10-07 14:57:21.325 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:21 np0005473739 nova_compute[259550]: 2025-10-07 14:57:21.325 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:57:21 np0005473739 nova_compute[259550]: 2025-10-07 14:57:21.326 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:57:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:57:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2621711359' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:57:21 np0005473739 nova_compute[259550]: 2025-10-07 14:57:21.882 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.556s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.092 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.092 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000093 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.096 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.097 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000094 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.313 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.314 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3206MB free_disk=59.897308349609375GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.315 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.315 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.551 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.552 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance b02ddcfd-0f68-4cc3-9c20-24cac0572be8 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.552 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.552 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:57:22 np0005473739 nova_compute[259550]: 2025-10-07 14:57:22.610 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:57:22
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.control', 'cephfs.cephfs.meta', 'images', 'backups', 'default.rgw.meta', 'volumes', '.mgr', 'vms', 'default.rgw.log', '.rgw.root']
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:57:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2854: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  7 10:57:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:57:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1097252514' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:57:23 np0005473739 nova_compute[259550]: 2025-10-07 14:57:23.108 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.498s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:57:23 np0005473739 nova_compute[259550]: 2025-10-07 14:57:23.115 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:57:23 np0005473739 nova_compute[259550]: 2025-10-07 14:57:23.131 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:57:23 np0005473739 nova_compute[259550]: 2025-10-07 14:57:23.155 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:57:23 np0005473739 nova_compute[259550]: 2025-10-07 14:57:23.155 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.840s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:57:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:57:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:57:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:57:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:57:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:57:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:57:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:57:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:57:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:57:23 np0005473739 nova_compute[259550]: 2025-10-07 14:57:23.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.348 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.349 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.349 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.349 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.349 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.351 2 INFO nova.compute.manager [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Terminating instance#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.352 2 DEBUG nova.compute.manager [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:57:24 np0005473739 kernel: tap791c5fbe-b0 (unregistering): left promiscuous mode
Oct  7 10:57:24 np0005473739 NetworkManager[44949]: <info>  [1759849044.4504] device (tap791c5fbe-b0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:24Z|01616|binding|INFO|Releasing lport 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 from this chassis (sb_readonly=0)
Oct  7 10:57:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:24Z|01617|binding|INFO|Setting lport 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 down in Southbound
Oct  7 10:57:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:24Z|01618|binding|INFO|Removing iface tap791c5fbe-b0 ovn-installed in OVS
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.463 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.476 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9f:5e:cf 10.100.0.5', 'unknown'], port_security=[], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.5/28', 'neutron:device_id': 'b02ddcfd-0f68-4cc3-9c20-24cac0572be8', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '6', 'neutron:security_group_ids': '', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.477 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 unbound from our chassis#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.479 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network de9e106e-fa40-4336-9ad7-f811682b66d2#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.498 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[20629c84-4e92-437f-9238-ad1136108cd9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:24 np0005473739 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Deactivated successfully.
Oct  7 10:57:24 np0005473739 systemd[1]: machine-qemu\x2d182\x2dinstance\x2d00000094.scope: Consumed 13.794s CPU time.
Oct  7 10:57:24 np0005473739 systemd-machined[214580]: Machine qemu-182-instance-00000094 terminated.
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.536 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[13573cea-a176-48f5-a8e4-116014577d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.539 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[2a1f7d61-0d05-4dd2-9739-b8408943d0c3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.573 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[955398a2-64b0-4655-8b1b-76393ccf3e09]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.586 2 INFO nova.virt.libvirt.driver [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Instance destroyed successfully.#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.586 2 DEBUG nova.objects.instance [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid b02ddcfd-0f68-4cc3-9c20-24cac0572be8 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.596 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d9c128df-9d3f-4869-a447-92bf659ecd01]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapde9e106e-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d9:a2:b1'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 21, 'tx_packets': 7, 'rx_bytes': 1670, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 463], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956909, 'reachable_time': 21559, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 15, 'inoctets': 1208, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 15, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 1208, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 15, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 424648, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.609 2 DEBUG nova.virt.libvirt.vif [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:56:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1074090825',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=148,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:57:05Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-hydkwpjc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:57:05Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=b02ddcfd-0f68-4cc3-9c20-24cac0572be8,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.611 2 DEBUG nova.network.os_vif_util [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "address": "fa:16:3e:9f:5e:cf", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.5", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap791c5fbe-b0", "ovs_interfaceid": "791c5fbe-b06e-4554-b2ad-f3f5c1709fc8", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.613 2 DEBUG nova.network.os_vif_util [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.613 2 DEBUG os_vif [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.617 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap791c5fbe-b0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.616 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[652f1550-7ed3-483e-b923-52b7d845e21c]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tapde9e106e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 956922, 'tstamp': 956922}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424653, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapde9e106e-f1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 956926, 'tstamp': 956926}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 424653, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.618 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde9e106e-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapde9e106e-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapde9e106e-f0, col_values=(('external_ids', {'iface-id': 'df6ec9ae-b5a3-4dbd-9ad0-fe284c253a7b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:24 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:24.622 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:57:24 np0005473739 nova_compute[259550]: 2025-10-07 14:57:24.624 2 INFO os_vif [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9f:5e:cf,bridge_name='br-int',has_traffic_filtering=True,id=791c5fbe-b06e-4554-b2ad-f3f5c1709fc8,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap791c5fbe-b0')#033[00m
Oct  7 10:57:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2855: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 335 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  7 10:57:25 np0005473739 nova_compute[259550]: 2025-10-07 14:57:25.527 2 DEBUG nova.compute.manager [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-unplugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:57:25 np0005473739 nova_compute[259550]: 2025-10-07 14:57:25.528 2 DEBUG oslo_concurrency.lockutils [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:25 np0005473739 nova_compute[259550]: 2025-10-07 14:57:25.529 2 DEBUG oslo_concurrency.lockutils [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:25 np0005473739 nova_compute[259550]: 2025-10-07 14:57:25.529 2 DEBUG oslo_concurrency.lockutils [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:25 np0005473739 nova_compute[259550]: 2025-10-07 14:57:25.529 2 DEBUG nova.compute.manager [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] No waiting events found dispatching network-vif-unplugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:57:25 np0005473739 nova_compute[259550]: 2025-10-07 14:57:25.529 2 DEBUG nova.compute.manager [req-7b0775ca-5150-466e-b452-58d7940d68d6 req-ebb9f46b-e85e-4901-9f0a-878716719bc5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-unplugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:57:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2856: 305 pgs: 305 active+clean; 165 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 338 KiB/s rd, 2.1 MiB/s wr, 69 op/s
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.677 2 DEBUG nova.compute.manager [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.678 2 DEBUG oslo_concurrency.lockutils [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.678 2 DEBUG oslo_concurrency.lockutils [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.678 2 DEBUG oslo_concurrency.lockutils [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.678 2 DEBUG nova.compute.manager [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] No waiting events found dispatching network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.678 2 WARNING nova.compute.manager [req-6473191f-e484-418c-9486-e56983e0345f req-732af724-5549-4ff0-bc53-4796085eb5d2 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received unexpected event network-vif-plugged-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 for instance with vm_state active and task_state deleting.#033[00m
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.756 2 INFO nova.virt.libvirt.driver [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Deleting instance files /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8_del#033[00m
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.757 2 INFO nova.virt.libvirt.driver [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Deletion of /var/lib/nova/instances/b02ddcfd-0f68-4cc3-9c20-24cac0572be8_del complete#033[00m
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.816 2 INFO nova.compute.manager [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Took 3.46 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.817 2 DEBUG oslo.service.loopingcall [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.817 2 DEBUG nova.compute.manager [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:57:27 np0005473739 nova_compute[259550]: 2025-10-07 14:57:27.818 2 DEBUG nova.network.neutron [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:57:28 np0005473739 podman[424673]: 2025-10-07 14:57:28.07397667 +0000 UTC m=+0.058910930 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  7 10:57:28 np0005473739 podman[424674]: 2025-10-07 14:57:28.073969779 +0000 UTC m=+0.058855628 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 10:57:28 np0005473739 nova_compute[259550]: 2025-10-07 14:57:28.646 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:28 np0005473739 nova_compute[259550]: 2025-10-07 14:57:28.718 2 DEBUG nova.network.neutron [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:57:28 np0005473739 nova_compute[259550]: 2025-10-07 14:57:28.752 2 INFO nova.compute.manager [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Took 0.93 seconds to deallocate network for instance.#033[00m
Oct  7 10:57:28 np0005473739 nova_compute[259550]: 2025-10-07 14:57:28.820 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:28 np0005473739 nova_compute[259550]: 2025-10-07 14:57:28.821 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2857: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 352 KiB/s rd, 2.1 MiB/s wr, 88 op/s
Oct  7 10:57:28 np0005473739 nova_compute[259550]: 2025-10-07 14:57:28.908 2 DEBUG oslo_concurrency.processutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.007 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.008 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.008 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.252 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.252 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.252 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.253 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:57:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:57:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1658601581' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.371 2 DEBUG oslo_concurrency.processutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.463s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.377 2 DEBUG nova.compute.provider_tree [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.398 2 DEBUG nova.scheduler.client.report [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.423 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.602s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.457 2 INFO nova.scheduler.client.report [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance b02ddcfd-0f68-4cc3-9c20-24cac0572be8#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.537 2 DEBUG oslo_concurrency.lockutils [None req-7a8a553b-29d2-4320-a807-eb4a7cc29739 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "b02ddcfd-0f68-4cc3-9c20-24cac0572be8" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.620 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:29 np0005473739 nova_compute[259550]: 2025-10-07 14:57:29.773 2 DEBUG nova.compute.manager [req-f400b7d2-6df9-4c2d-a76a-629a357600d0 req-e399b02c-1dda-4826-b7ee-18142cf9cc47 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Received event network-vif-deleted-791c5fbe-b06e-4554-b2ad-f3f5c1709fc8 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:57:30 np0005473739 nova_compute[259550]: 2025-10-07 14:57:30.700 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:30 np0005473739 nova_compute[259550]: 2025-10-07 14:57:30.702 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:30 np0005473739 nova_compute[259550]: 2025-10-07 14:57:30.703 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:30 np0005473739 nova_compute[259550]: 2025-10-07 14:57:30.703 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:30 np0005473739 nova_compute[259550]: 2025-10-07 14:57:30.703 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:30 np0005473739 nova_compute[259550]: 2025-10-07 14:57:30.704 2 INFO nova.compute.manager [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Terminating instance#033[00m
Oct  7 10:57:30 np0005473739 nova_compute[259550]: 2025-10-07 14:57:30.705 2 DEBUG nova.compute.manager [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:57:30 np0005473739 nova_compute[259550]: 2025-10-07 14:57:30.861 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:57:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2858: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 118 KiB/s rd, 1.1 MiB/s wr, 54 op/s
Oct  7 10:57:30 np0005473739 nova_compute[259550]: 2025-10-07 14:57:30.884 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:57:30 np0005473739 nova_compute[259550]: 2025-10-07 14:57:30.884 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 10:57:30 np0005473739 nova_compute[259550]: 2025-10-07 14:57:30.885 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:57:31 np0005473739 kernel: tap6b0ce98f-f4 (unregistering): left promiscuous mode
Oct  7 10:57:31 np0005473739 NetworkManager[44949]: <info>  [1759849051.0679] device (tap6b0ce98f-f4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:31Z|01619|binding|INFO|Releasing lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 from this chassis (sb_readonly=0)
Oct  7 10:57:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:31Z|01620|binding|INFO|Setting lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 down in Southbound
Oct  7 10:57:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:31Z|01621|binding|INFO|Removing iface tap6b0ce98f-f4 ovn-installed in OVS
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.095 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:57:31 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 21145e5a-9e8c-47ef-a52f-5f4b02122ed5 does not exist
Oct  7 10:57:31 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev fc8394f9-c185-4100-8d5e-fe80f6a9cc4f does not exist
Oct  7 10:57:31 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8b146528-28b3-4788-82ce-d53c74b1c21d does not exist
Oct  7 10:57:31 np0005473739 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Deactivated successfully.
Oct  7 10:57:31 np0005473739 systemd[1]: machine-qemu\x2d181\x2dinstance\x2d00000093.scope: Consumed 17.000s CPU time.
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:57:31 np0005473739 systemd-machined[214580]: Machine qemu-181-instance-00000093 terminated.
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.195 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:48:40 10.100.0.8'], port_security=['fa:16:3e:87:48:40 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1713ed1d-a673-4fc9-ac2b-14c7fda46d8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfc0506d-4bd3-43b3-bacb-108e14651f30 ce5386dc-6551-4933-ba47-e2cefb87e49d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6b0ce98f-f482-4715-a60b-a5b393b822f1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.196 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0ce98f-f482-4715-a60b-a5b393b822f1 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 unbound from our chassis#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.197 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de9e106e-fa40-4336-9ad7-f811682b66d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.198 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2208f2b4-b2e1-4847-a2f1-f819b890a98d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.199 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2 namespace which is not needed anymore#033[00m
Oct  7 10:57:31 np0005473739 kernel: tap6b0ce98f-f4: entered promiscuous mode
Oct  7 10:57:31 np0005473739 NetworkManager[44949]: <info>  [1759849051.3269] manager: (tap6b0ce98f-f4): new Tun device (/org/freedesktop/NetworkManager/Devices/655)
Oct  7 10:57:31 np0005473739 systemd-udevd[424868]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:57:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:31Z|01622|binding|INFO|Claiming lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 for this chassis.
Oct  7 10:57:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:31Z|01623|binding|INFO|6b0ce98f-f482-4715-a60b-a5b393b822f1: Claiming fa:16:3e:87:48:40 10.100.0.8
Oct  7 10:57:31 np0005473739 kernel: tap6b0ce98f-f4 (unregistering): left promiscuous mode
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.333 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:31 np0005473739 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [NOTICE]   (423728) : haproxy version is 2.8.14-c23fe91
Oct  7 10:57:31 np0005473739 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [NOTICE]   (423728) : path to executable is /usr/sbin/haproxy
Oct  7 10:57:31 np0005473739 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [WARNING]  (423728) : Exiting Master process...
Oct  7 10:57:31 np0005473739 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [WARNING]  (423728) : Exiting Master process...
Oct  7 10:57:31 np0005473739 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [ALERT]    (423728) : Current worker (423731) exited with code 143 (Terminated)
Oct  7 10:57:31 np0005473739 neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2[423710]: [WARNING]  (423728) : All workers exited. Exiting... (0)
Oct  7 10:57:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:31Z|01624|binding|INFO|Setting lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 ovn-installed in OVS
Oct  7 10:57:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:31Z|01625|if_status|INFO|Dropped 2 log messages in last 797 seconds (most recently, 797 seconds ago) due to excessive rate
Oct  7 10:57:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:31Z|01626|if_status|INFO|Not setting lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 down as sb is readonly
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:31 np0005473739 systemd[1]: libpod-a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511.scope: Deactivated successfully.
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.366 2 INFO nova.virt.libvirt.driver [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance destroyed successfully.#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.367 2 DEBUG nova.objects.instance [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:57:31 np0005473739 podman[424941]: 2025-10-07 14:57:31.36891027 +0000 UTC m=+0.067775035 container died a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.369 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:48:40 10.100.0.8'], port_security=['fa:16:3e:87:48:40 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1713ed1d-a673-4fc9-ac2b-14c7fda46d8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfc0506d-4bd3-43b3-bacb-108e14651f30 ce5386dc-6551-4933-ba47-e2cefb87e49d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6b0ce98f-f482-4715-a60b-a5b393b822f1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:57:31 np0005473739 ovn_controller[151684]: 2025-10-07T14:57:31Z|01627|binding|INFO|Releasing lport 6b0ce98f-f482-4715-a60b-a5b393b822f1 from this chassis (sb_readonly=0)
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.388 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:87:48:40 10.100.0.8'], port_security=['fa:16:3e:87:48:40 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': '1713ed1d-a673-4fc9-ac2b-14c7fda46d8b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-de9e106e-fa40-4336-9ad7-f811682b66d2', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'bfc0506d-4bd3-43b3-bacb-108e14651f30 ce5386dc-6551-4933-ba47-e2cefb87e49d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=8921e707-2004-4697-abc9-199bf17e9157, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6b0ce98f-f482-4715-a60b-a5b393b822f1) old=Port_Binding(chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.388 2 DEBUG nova.virt.libvirt.vif [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:56:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1451855046',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=147,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBAN4U7sP6SJsynS4Ra+moKlo0gRKOYuYBnoBAljs45JBWYzT3XXBVBWAQMcJqohTF8wfY170SJ+ijw1ZYS57Fc+FviOO7AVF8Pk5bQI1pIusmI737U1/kZZxNMNFXAeEBQ==',key_name='tempest-TestSecurityGroupsBasicOps-1693465167',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:56:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-v02jgwcv',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:56:26Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=1713ed1d-a673-4fc9-ac2b-14c7fda46d8b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.389 2 DEBUG nova.network.os_vif_util [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.198", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.390 2 DEBUG nova.network.os_vif_util [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.390 2 DEBUG os_vif [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.392 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.392 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6b0ce98f-f4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.394 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.397 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.402 2 INFO os_vif [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:87:48:40,bridge_name='br-int',has_traffic_filtering=True,id=6b0ce98f-f482-4715-a60b-a5b393b822f1,network=Network(de9e106e-fa40-4336-9ad7-f811682b66d2),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6b0ce98f-f4')#033[00m
Oct  7 10:57:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511-userdata-shm.mount: Deactivated successfully.
Oct  7 10:57:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-febe76e8dd5c30f47b13b78fd42131538300a58f00e7858de7bede20aa45a62a-merged.mount: Deactivated successfully.
Oct  7 10:57:31 np0005473739 podman[424941]: 2025-10-07 14:57:31.495221882 +0000 UTC m=+0.194086657 container cleanup a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 10:57:31 np0005473739 systemd[1]: libpod-conmon-a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511.scope: Deactivated successfully.
Oct  7 10:57:31 np0005473739 podman[425037]: 2025-10-07 14:57:31.611111449 +0000 UTC m=+0.091250455 container remove a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.618 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0024fb33-38d9-497a-84e9-d964a15ddda4]: (4, ('Tue Oct  7 02:57:31 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2 (a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511)\na5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511\nTue Oct  7 02:57:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2 (a5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511)\na5bcbf9dad9bef2e6ee03a8350e136775670e7111e8f39f82b827c76ff9c0511\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.620 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c1b26da1-ebab-4e4a-9477-3373bab63876]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.621 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapde9e106e-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:31 np0005473739 kernel: tapde9e106e-f0: left promiscuous mode
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.639 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.642 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[d4573e2a-c144-4de8-8d56-df36a3357538]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.679 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[32c0a4ac-4ade-47c8-bfbd-60c1c6000db5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.681 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e750dcb0-d7ab-4fa1-868a-17118cf085ec]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.697 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e4f1c559-94b8-49de-9217-0376a7233245]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 956900, 'reachable_time': 33371, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 425075, 'error': None, 'target': 'ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.699 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-de9e106e-fa40-4336-9ad7-f811682b66d2 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.699 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[f053f989-756c-4543-b92a-b2216f5f0a38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.700 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0ce98f-f482-4715-a60b-a5b393b822f1 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 unbound from our chassis#033[00m
Oct  7 10:57:31 np0005473739 systemd[1]: run-netns-ovnmeta\x2dde9e106e\x2dfa40\x2d4336\x2d9ad7\x2df811682b66d2.mount: Deactivated successfully.
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.701 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de9e106e-fa40-4336-9ad7-f811682b66d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.702 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0fc214-f517-4213-ab22-11a020d0f678]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.703 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6b0ce98f-f482-4715-a60b-a5b393b822f1 in datapath de9e106e-fa40-4336-9ad7-f811682b66d2 unbound from our chassis#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.703 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network de9e106e-fa40-4336-9ad7-f811682b66d2, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:57:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:57:31.704 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c5baac45-ba0f-4f8b-b72f-488622644134]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:57:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:31 np0005473739 podman[425090]: 2025-10-07 14:57:31.87107109 +0000 UTC m=+0.073679972 container create 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.879 2 DEBUG nova.compute.manager [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.879 2 DEBUG nova.compute.manager [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing instance network info cache due to event network-changed-6b0ce98f-f482-4715-a60b-a5b393b822f1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.880 2 DEBUG oslo_concurrency.lockutils [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.880 2 DEBUG oslo_concurrency.lockutils [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:57:31 np0005473739 nova_compute[259550]: 2025-10-07 14:57:31.880 2 DEBUG nova.network.neutron [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Refreshing network info cache for port 6b0ce98f-f482-4715-a60b-a5b393b822f1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:57:31 np0005473739 podman[425090]: 2025-10-07 14:57:31.828549074 +0000 UTC m=+0.031157976 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:57:31 np0005473739 systemd[1]: Started libpod-conmon-7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817.scope.
Oct  7 10:57:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:57:32 np0005473739 podman[425090]: 2025-10-07 14:57:32.046136693 +0000 UTC m=+0.248745605 container init 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 10:57:32 np0005473739 podman[425090]: 2025-10-07 14:57:32.054432672 +0000 UTC m=+0.257041554 container start 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 10:57:32 np0005473739 infallible_kowalevski[425107]: 167 167
Oct  7 10:57:32 np0005473739 systemd[1]: libpod-7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817.scope: Deactivated successfully.
Oct  7 10:57:32 np0005473739 podman[425090]: 2025-10-07 14:57:32.067367934 +0000 UTC m=+0.269976816 container attach 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 10:57:32 np0005473739 podman[425090]: 2025-10-07 14:57:32.067852077 +0000 UTC m=+0.270460959 container died 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True)
Oct  7 10:57:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-04e419fc3e7ba72985f34118c48bfcd8ccd9ffca2179d39b3652355d43ac1673-merged.mount: Deactivated successfully.
Oct  7 10:57:32 np0005473739 podman[425090]: 2025-10-07 14:57:32.280594317 +0000 UTC m=+0.483203199 container remove 7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:57:32 np0005473739 systemd[1]: libpod-conmon-7bd01e376cc44ae1b253894bcf2b00e8cf0c72fd65139d0671d590e8c7cd9817.scope: Deactivated successfully.
Oct  7 10:57:32 np0005473739 podman[425131]: 2025-10-07 14:57:32.492747402 +0000 UTC m=+0.068979587 container create ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 10:57:32 np0005473739 podman[425131]: 2025-10-07 14:57:32.450259067 +0000 UTC m=+0.026491282 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:57:32 np0005473739 systemd[1]: Started libpod-conmon-ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d.scope.
Oct  7 10:57:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:57:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:32 np0005473739 podman[425131]: 2025-10-07 14:57:32.622661581 +0000 UTC m=+0.198893776 container init ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 10:57:32 np0005473739 podman[425131]: 2025-10-07 14:57:32.62982569 +0000 UTC m=+0.206057875 container start ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef)
Oct  7 10:57:32 np0005473739 podman[425131]: 2025-10-07 14:57:32.643777439 +0000 UTC m=+0.220009644 container attach ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.723 2 INFO nova.virt.libvirt.driver [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Deleting instance files /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_del#033[00m
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.725 2 INFO nova.virt.libvirt.driver [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Deletion of /var/lib/nova/instances/1713ed1d-a673-4fc9-ac2b-14c7fda46d8b_del complete#033[00m
Oct  7 10:57:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:57:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2688123210' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:57:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:57:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2688123210' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.812 2 DEBUG nova.compute.manager [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-unplugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.813 2 DEBUG oslo_concurrency.lockutils [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.813 2 DEBUG oslo_concurrency.lockutils [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.814 2 DEBUG oslo_concurrency.lockutils [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.814 2 DEBUG nova.compute.manager [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] No waiting events found dispatching network-vif-unplugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.814 2 DEBUG nova.compute.manager [req-6293952f-5447-42a8-8252-adb8e42128ab req-07750b11-8536-4316-b445-e11dda7d762c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-unplugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007600997305788891 of space, bias 1.0, pg target 0.22802991917366675 quantized to 32 (current 32)
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.853 2 INFO nova.compute.manager [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Took 2.15 seconds to destroy the instance on the hypervisor.
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.853 2 DEBUG oslo.service.loopingcall [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.854 2 DEBUG nova.compute.manager [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 10:57:32 np0005473739 nova_compute[259550]: 2025-10-07 14:57:32.854 2 DEBUG nova.network.neutron [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 10:57:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2859: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 15 KiB/s wr, 28 op/s
Oct  7 10:57:33 np0005473739 nova_compute[259550]: 2025-10-07 14:57:33.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:57:33 np0005473739 happy_engelbart[425148]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:57:33 np0005473739 happy_engelbart[425148]: --> relative data size: 1.0
Oct  7 10:57:33 np0005473739 happy_engelbart[425148]: --> All data devices are unavailable
Oct  7 10:57:33 np0005473739 systemd[1]: libpod-ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d.scope: Deactivated successfully.
Oct  7 10:57:33 np0005473739 systemd[1]: libpod-ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d.scope: Consumed 1.082s CPU time.
Oct  7 10:57:33 np0005473739 podman[425131]: 2025-10-07 14:57:33.775267784 +0000 UTC m=+1.351499979 container died ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 10:57:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2356f2d3c08e8421ff718535658ad0a4afd95ea7db90eda43eef6965ab69a431-merged.mount: Deactivated successfully.
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.006 2 DEBUG nova.network.neutron [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.024 2 INFO nova.compute.manager [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Took 1.17 seconds to deallocate network for instance.
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.073 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.074 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.120 2 DEBUG oslo_concurrency.processutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.158 2 DEBUG nova.network.neutron [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updated VIF entry in instance network info cache for port 6b0ce98f-f482-4715-a60b-a5b393b822f1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.159 2 DEBUG nova.network.neutron [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Updating instance_info_cache with network_info: [{"id": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "address": "fa:16:3e:87:48:40", "network": {"id": "de9e106e-fa40-4336-9ad7-f811682b66d2", "bridge": "br-int", "label": "tempest-network-smoke--1983386276", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6b0ce98f-f4", "ovs_interfaceid": "6b0ce98f-f482-4715-a60b-a5b393b822f1", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.179 2 DEBUG oslo_concurrency.lockutils [req-080092dd-e9aa-4164-b2aa-c9e1214d8c69 req-fae4bcac-2759-46f7-ab51-c848d10580ba 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 10:57:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:57:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3985743802' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.656 2 DEBUG oslo_concurrency.processutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.536s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:57:34 np0005473739 podman[425131]: 2025-10-07 14:57:34.658075607 +0000 UTC m=+2.234307792 container remove ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=happy_engelbart, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.669 2 DEBUG nova.compute.provider_tree [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.689 2 DEBUG nova.scheduler.client.report [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.723 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.649s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.750 2 INFO nova.scheduler.client.report [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b
Oct  7 10:57:34 np0005473739 systemd[1]: libpod-conmon-ca7ac2767441dd8f391acfe043a3acdd05fdaa4f2bf1f1f7d49c62758dc70e6d.scope: Deactivated successfully.
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.819 2 DEBUG oslo_concurrency.lockutils [None req-bab348fe-28e7-438c-9257-53ff2748da5d 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:57:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2860: 305 pgs: 305 active+clean; 72 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 16 KiB/s wr, 51 op/s
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.928 2 DEBUG nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.928 2 DEBUG oslo_concurrency.lockutils [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.929 2 DEBUG oslo_concurrency.lockutils [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.929 2 DEBUG oslo_concurrency.lockutils [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "1713ed1d-a673-4fc9-ac2b-14c7fda46d8b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.929 2 DEBUG nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] No waiting events found dispatching network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.929 2 WARNING nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received unexpected event network-vif-plugged-6b0ce98f-f482-4715-a60b-a5b393b822f1 for instance with vm_state deleted and task_state None.
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.929 2 DEBUG nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Received event network-vif-deleted-6b0ce98f-f482-4715-a60b-a5b393b822f1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.930 2 INFO nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Neutron deleted interface 6b0ce98f-f482-4715-a60b-a5b393b822f1; detaching it from the instance and deleting it from the info cache
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.930 2 DEBUG nova.network.neutron [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106
Oct  7 10:57:34 np0005473739 nova_compute[259550]: 2025-10-07 14:57:34.931 2 DEBUG nova.compute.manager [req-40702152-21bc-4cbd-8e86-d87683f754a7 req-ee877b80-fd8a-4a8c-af02-fa93790034f7 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Detach interface failed, port_id=6b0ce98f-f482-4715-a60b-a5b393b822f1, reason: Instance 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882
Oct  7 10:57:35 np0005473739 podman[425352]: 2025-10-07 14:57:35.276226237 +0000 UTC m=+0.022201739 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:57:35 np0005473739 podman[425352]: 2025-10-07 14:57:35.541535118 +0000 UTC m=+0.287510620 container create f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:57:35 np0005473739 systemd[1]: Started libpod-conmon-f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86.scope.
Oct  7 10:57:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:57:35 np0005473739 podman[425352]: 2025-10-07 14:57:35.870633647 +0000 UTC m=+0.616609149 container init f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 10:57:35 np0005473739 podman[425352]: 2025-10-07 14:57:35.881542876 +0000 UTC m=+0.627518358 container start f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 10:57:35 np0005473739 friendly_lamarr[425368]: 167 167
Oct  7 10:57:35 np0005473739 systemd[1]: libpod-f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86.scope: Deactivated successfully.
Oct  7 10:57:35 np0005473739 podman[425352]: 2025-10-07 14:57:35.894024297 +0000 UTC m=+0.639999799 container attach f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True)
Oct  7 10:57:35 np0005473739 podman[425352]: 2025-10-07 14:57:35.894377776 +0000 UTC m=+0.640353268 container died f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 10:57:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b68ee51a63dc5020fb555501a4909a41aab83d14eb61a082f2681d8924e5f736-merged.mount: Deactivated successfully.
Oct  7 10:57:36 np0005473739 podman[425352]: 2025-10-07 14:57:36.025154307 +0000 UTC m=+0.771129789 container remove f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_lamarr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 10:57:36 np0005473739 systemd[1]: libpod-conmon-f7e6c8a75d20e6fbfaff589d153014631573833100743af09390eff27a202a86.scope: Deactivated successfully.
Oct  7 10:57:36 np0005473739 podman[425393]: 2025-10-07 14:57:36.202380187 +0000 UTC m=+0.055268003 container create 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 10:57:36 np0005473739 podman[425393]: 2025-10-07 14:57:36.169036704 +0000 UTC m=+0.021924550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:57:36 np0005473739 systemd[1]: Started libpod-conmon-5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911.scope.
Oct  7 10:57:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:57:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3e7d9e28048cabb42c4161ab21b8d1d19736b2a17e8f9174e4a38be74acbb4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3e7d9e28048cabb42c4161ab21b8d1d19736b2a17e8f9174e4a38be74acbb4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3e7d9e28048cabb42c4161ab21b8d1d19736b2a17e8f9174e4a38be74acbb4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3a3e7d9e28048cabb42c4161ab21b8d1d19736b2a17e8f9174e4a38be74acbb4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:36 np0005473739 podman[425393]: 2025-10-07 14:57:36.328585317 +0000 UTC m=+0.181473133 container init 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 10:57:36 np0005473739 podman[425393]: 2025-10-07 14:57:36.337570035 +0000 UTC m=+0.190457851 container start 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 10:57:36 np0005473739 podman[425393]: 2025-10-07 14:57:36.360097191 +0000 UTC m=+0.212984997 container attach 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:57:36 np0005473739 nova_compute[259550]: 2025-10-07 14:57:36.396 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:57:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2861: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]: {
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:    "0": [
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:        {
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "devices": [
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "/dev/loop3"
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            ],
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_name": "ceph_lv0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_size": "21470642176",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "name": "ceph_lv0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "tags": {
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.cluster_name": "ceph",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.crush_device_class": "",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.encrypted": "0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.osd_id": "0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.type": "block",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.vdo": "0"
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            },
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "type": "block",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "vg_name": "ceph_vg0"
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:        }
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:    ],
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:    "1": [
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:        {
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "devices": [
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "/dev/loop4"
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            ],
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_name": "ceph_lv1",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_size": "21470642176",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "name": "ceph_lv1",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "tags": {
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.cluster_name": "ceph",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.crush_device_class": "",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.encrypted": "0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.osd_id": "1",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.type": "block",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.vdo": "0"
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            },
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "type": "block",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "vg_name": "ceph_vg1"
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:        }
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:    ],
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:    "2": [
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:        {
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "devices": [
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "/dev/loop5"
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            ],
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_name": "ceph_lv2",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_size": "21470642176",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "name": "ceph_lv2",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "tags": {
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.cluster_name": "ceph",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.crush_device_class": "",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.encrypted": "0",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.osd_id": "2",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.type": "block",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:                "ceph.vdo": "0"
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            },
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "type": "block",
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:            "vg_name": "ceph_vg2"
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:        }
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]:    ]
Oct  7 10:57:37 np0005473739 interesting_yalow[425409]: }
Oct  7 10:57:37 np0005473739 systemd[1]: libpod-5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911.scope: Deactivated successfully.
Oct  7 10:57:37 np0005473739 podman[425393]: 2025-10-07 14:57:37.248670027 +0000 UTC m=+1.101557853 container died 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 10:57:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3a3e7d9e28048cabb42c4161ab21b8d1d19736b2a17e8f9174e4a38be74acbb4-merged.mount: Deactivated successfully.
Oct  7 10:57:37 np0005473739 podman[425393]: 2025-10-07 14:57:37.319022449 +0000 UTC m=+1.171910265 container remove 5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:57:37 np0005473739 systemd[1]: libpod-conmon-5a4fbbd7ccf785a680f67e276c884c3a7f36d1c3539e699f7803f40fdf703911.scope: Deactivated successfully.
Oct  7 10:57:38 np0005473739 podman[425569]: 2025-10-07 14:57:38.000078363 +0000 UTC m=+0.049608963 container create fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3)
Oct  7 10:57:38 np0005473739 systemd[1]: Started libpod-conmon-fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371.scope.
Oct  7 10:57:38 np0005473739 podman[425569]: 2025-10-07 14:57:37.974389194 +0000 UTC m=+0.023919804 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:57:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:57:38 np0005473739 podman[425569]: 2025-10-07 14:57:38.122239896 +0000 UTC m=+0.171770516 container init fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:57:38 np0005473739 podman[425569]: 2025-10-07 14:57:38.130267409 +0000 UTC m=+0.179797999 container start fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 10:57:38 np0005473739 sharp_elion[425585]: 167 167
Oct  7 10:57:38 np0005473739 systemd[1]: libpod-fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371.scope: Deactivated successfully.
Oct  7 10:57:38 np0005473739 podman[425569]: 2025-10-07 14:57:38.148161502 +0000 UTC m=+0.197692112 container attach fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:57:38 np0005473739 podman[425569]: 2025-10-07 14:57:38.148616894 +0000 UTC m=+0.198147494 container died fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 10:57:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-188799ff27af07fbda60f5e5039199cdef13593cd4055227f4773bdc7d0726d3-merged.mount: Deactivated successfully.
Oct  7 10:57:38 np0005473739 podman[425569]: 2025-10-07 14:57:38.201786431 +0000 UTC m=+0.251317021 container remove fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_elion, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:57:38 np0005473739 systemd[1]: libpod-conmon-fd53e7e68bdb752b5edf063fbc779f353fd6a157eec06ad47febddce1c588371.scope: Deactivated successfully.
Oct  7 10:57:38 np0005473739 podman[425611]: 2025-10-07 14:57:38.371116183 +0000 UTC m=+0.047393196 container create ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:57:38 np0005473739 systemd[1]: Started libpod-conmon-ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5.scope.
Oct  7 10:57:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:57:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a1c618c922e4a6c80befe6df6fd5840341f13fe34a340d23e025e21e9c6c00/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a1c618c922e4a6c80befe6df6fd5840341f13fe34a340d23e025e21e9c6c00/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a1c618c922e4a6c80befe6df6fd5840341f13fe34a340d23e025e21e9c6c00/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/56a1c618c922e4a6c80befe6df6fd5840341f13fe34a340d23e025e21e9c6c00/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:57:38 np0005473739 podman[425611]: 2025-10-07 14:57:38.348296399 +0000 UTC m=+0.024573452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:57:38 np0005473739 podman[425611]: 2025-10-07 14:57:38.447946516 +0000 UTC m=+0.124223529 container init ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:57:38 np0005473739 podman[425611]: 2025-10-07 14:57:38.456700857 +0000 UTC m=+0.132977870 container start ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 10:57:38 np0005473739 podman[425611]: 2025-10-07 14:57:38.463092327 +0000 UTC m=+0.139369360 container attach ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 10:57:38 np0005473739 nova_compute[259550]: 2025-10-07 14:57:38.649 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2862: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 14 KiB/s wr, 50 op/s
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]: {
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "osd_id": 2,
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "type": "bluestore"
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:    },
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "osd_id": 1,
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "type": "bluestore"
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:    },
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "osd_id": 0,
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:        "type": "bluestore"
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]:    }
Oct  7 10:57:39 np0005473739 suspicious_pasteur[425628]: }
Oct  7 10:57:39 np0005473739 systemd[1]: libpod-ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5.scope: Deactivated successfully.
Oct  7 10:57:39 np0005473739 systemd[1]: libpod-ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5.scope: Consumed 1.052s CPU time.
Oct  7 10:57:39 np0005473739 podman[425611]: 2025-10-07 14:57:39.505497184 +0000 UTC m=+1.181774207 container died ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:57:39 np0005473739 nova_compute[259550]: 2025-10-07 14:57:39.583 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849044.581324, b02ddcfd-0f68-4cc3-9c20-24cac0572be8 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:57:39 np0005473739 nova_compute[259550]: 2025-10-07 14:57:39.583 2 INFO nova.compute.manager [-] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:57:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-56a1c618c922e4a6c80befe6df6fd5840341f13fe34a340d23e025e21e9c6c00-merged.mount: Deactivated successfully.
Oct  7 10:57:39 np0005473739 nova_compute[259550]: 2025-10-07 14:57:39.606 2 DEBUG nova.compute.manager [None req-e4a76e7a-113c-4331-852f-9e7587e475e7 - - - - - -] [instance: b02ddcfd-0f68-4cc3-9c20-24cac0572be8] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:57:39 np0005473739 podman[425611]: 2025-10-07 14:57:39.689150525 +0000 UTC m=+1.365427538 container remove ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_pasteur, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:57:39 np0005473739 systemd[1]: libpod-conmon-ec21a69087f044022da04c6209f3b6531424c863fab3ddad0abfa450e36f56b5.scope: Deactivated successfully.
Oct  7 10:57:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:57:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:57:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:57:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:57:39 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4b33b418-87ad-4034-b70d-df0984820eb9 does not exist
Oct  7 10:57:39 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4e02ce49-6cd0-4918-8843-0d3f98af91ee does not exist
Oct  7 10:57:40 np0005473739 nova_compute[259550]: 2025-10-07 14:57:40.469 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:40 np0005473739 nova_compute[259550]: 2025-10-07 14:57:40.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:40 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:57:40 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:57:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2863: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 31 op/s
Oct  7 10:57:41 np0005473739 nova_compute[259550]: 2025-10-07 14:57:41.402 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:42 np0005473739 podman[425726]: 2025-10-07 14:57:42.080239044 +0000 UTC m=+0.070215789 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:57:42 np0005473739 podman[425727]: 2025-10-07 14:57:42.111099531 +0000 UTC m=+0.099977396 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:57:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2864: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:57:43 np0005473739 nova_compute[259550]: 2025-10-07 14:57:43.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2865: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:57:46 np0005473739 nova_compute[259550]: 2025-10-07 14:57:46.358 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849051.3568137, 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:57:46 np0005473739 nova_compute[259550]: 2025-10-07 14:57:46.358 2 INFO nova.compute.manager [-] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:57:46 np0005473739 nova_compute[259550]: 2025-10-07 14:57:46.381 2 DEBUG nova.compute.manager [None req-2d8155e4-0242-4b3f-a53d-382ef24014fd - - - - - -] [instance: 1713ed1d-a673-4fc9-ac2b-14c7fda46d8b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:57:46 np0005473739 nova_compute[259550]: 2025-10-07 14:57:46.406 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2866: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail; 2.6 KiB/s rd, 341 B/s wr, 5 op/s
Oct  7 10:57:48 np0005473739 nova_compute[259550]: 2025-10-07 14:57:48.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2867: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:57:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2868: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:57:51 np0005473739 nova_compute[259550]: 2025-10-07 14:57:51.409 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:57:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:57:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2869: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:57:53 np0005473739 nova_compute[259550]: 2025-10-07 14:57:53.654 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:53 np0005473739 nova_compute[259550]: 2025-10-07 14:57:53.857 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:53 np0005473739 nova_compute[259550]: 2025-10-07 14:57:53.858 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:54 np0005473739 nova_compute[259550]: 2025-10-07 14:57:54.013 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:57:54 np0005473739 nova_compute[259550]: 2025-10-07 14:57:54.108 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:54 np0005473739 nova_compute[259550]: 2025-10-07 14:57:54.109 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:54 np0005473739 nova_compute[259550]: 2025-10-07 14:57:54.120 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:57:54 np0005473739 nova_compute[259550]: 2025-10-07 14:57:54.120 2 INFO nova.compute.claims [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:57:54 np0005473739 nova_compute[259550]: 2025-10-07 14:57:54.353 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:57:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:57:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4263650502' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:57:54 np0005473739 nova_compute[259550]: 2025-10-07 14:57:54.854 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:57:54 np0005473739 nova_compute[259550]: 2025-10-07 14:57:54.862 2 DEBUG nova.compute.provider_tree [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:57:54 np0005473739 nova_compute[259550]: 2025-10-07 14:57:54.885 2 DEBUG nova.scheduler.client.report [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:57:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2870: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:57:54 np0005473739 nova_compute[259550]: 2025-10-07 14:57:54.922 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.813s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:54 np0005473739 nova_compute[259550]: 2025-10-07 14:57:54.923 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.030 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.030 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.124 2 INFO nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.160 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.254 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.255 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.256 2 INFO nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Creating image(s)#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.282 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.311 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.336 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.340 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.382 2 DEBUG nova.policy [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.428 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.429 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.430 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.430 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.454 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:57:55 np0005473739 nova_compute[259550]: 2025-10-07 14:57:55.458 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5dbed140-a0ed-4dbb-b782-da386ad68471_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:57:56 np0005473739 nova_compute[259550]: 2025-10-07 14:57:56.411 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:57:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2871: 305 pgs: 305 active+clean; 41 MiB data, 1010 MiB used, 59 GiB / 60 GiB avail
Oct  7 10:57:58 np0005473739 nova_compute[259550]: 2025-10-07 14:57:58.188 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 5dbed140-a0ed-4dbb-b782-da386ad68471_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 2.730s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:57:58 np0005473739 nova_compute[259550]: 2025-10-07 14:57:58.261 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:57:58 np0005473739 nova_compute[259550]: 2025-10-07 14:57:58.477 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Successfully created port: 6d023b7b-1ffe-467f-b731-8b53fc063b54 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:57:58 np0005473739 nova_compute[259550]: 2025-10-07 14:57:58.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:57:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2872: 305 pgs: 305 active+clean; 58 MiB data, 1023 MiB used, 59 GiB / 60 GiB avail; 14 KiB/s rd, 1.1 MiB/s wr, 21 op/s
Oct  7 10:57:59 np0005473739 podman[425939]: 2025-10-07 14:57:59.079313383 +0000 UTC m=+0.065569596 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:57:59 np0005473739 podman[425940]: 2025-10-07 14:57:59.082880368 +0000 UTC m=+0.066090770 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:57:59 np0005473739 nova_compute[259550]: 2025-10-07 14:57:59.129 2 DEBUG nova.objects.instance [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 5dbed140-a0ed-4dbb-b782-da386ad68471 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:57:59 np0005473739 nova_compute[259550]: 2025-10-07 14:57:59.190 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:57:59 np0005473739 nova_compute[259550]: 2025-10-07 14:57:59.191 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Ensure instance console log exists: /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:57:59 np0005473739 nova_compute[259550]: 2025-10-07 14:57:59.192 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:57:59 np0005473739 nova_compute[259550]: 2025-10-07 14:57:59.192 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:57:59 np0005473739 nova_compute[259550]: 2025-10-07 14:57:59.192 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:58:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:00.095 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:58:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:00.095 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:58:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:00.095 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:58:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2873: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct  7 10:58:01 np0005473739 nova_compute[259550]: 2025-10-07 14:58:01.154 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Successfully updated port: 6d023b7b-1ffe-467f-b731-8b53fc063b54 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:58:01 np0005473739 nova_compute[259550]: 2025-10-07 14:58:01.179 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:58:01 np0005473739 nova_compute[259550]: 2025-10-07 14:58:01.179 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:58:01 np0005473739 nova_compute[259550]: 2025-10-07 14:58:01.180 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:58:01 np0005473739 nova_compute[259550]: 2025-10-07 14:58:01.274 2 DEBUG nova.compute.manager [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:58:01 np0005473739 nova_compute[259550]: 2025-10-07 14:58:01.275 2 DEBUG nova.compute.manager [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing instance network info cache due to event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:58:01 np0005473739 nova_compute[259550]: 2025-10-07 14:58:01.275 2 DEBUG oslo_concurrency.lockutils [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:58:01 np0005473739 nova_compute[259550]: 2025-10-07 14:58:01.383 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:58:01 np0005473739 nova_compute[259550]: 2025-10-07 14:58:01.414 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.492 2 DEBUG nova.network.neutron [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updating instance_info_cache with network_info: [{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.606 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.606 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance network_info: |[{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.606 2 DEBUG oslo_concurrency.lockutils [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.607 2 DEBUG nova.network.neutron [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.610 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Start _get_guest_xml network_info=[{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.613 2 WARNING nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.618 2 DEBUG nova.virt.libvirt.host [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.619 2 DEBUG nova.virt.libvirt.host [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.624 2 DEBUG nova.virt.libvirt.host [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.625 2 DEBUG nova.virt.libvirt.host [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.625 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.625 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.626 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.626 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.626 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.626 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.627 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.627 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.627 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.627 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.627 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.628 2 DEBUG nova.virt.hardware [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:58:02 np0005473739 nova_compute[259550]: 2025-10-07 14:58:02.630 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:58:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2874: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 15 KiB/s rd, 1.8 MiB/s wr, 24 op/s
Oct  7 10:58:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:58:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/590169032' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.131 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.161 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.166 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:58:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:58:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2218428295' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.658 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.673 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.507s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.675 2 DEBUG nova.virt.libvirt.vif [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:57:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=149,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-d2b8ujzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:57:55Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5dbed140-a0ed-4dbb-b782-da386ad68471,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.676 2 DEBUG nova.network.os_vif_util [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.678 2 DEBUG nova.network.os_vif_util [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.680 2 DEBUG nova.objects.instance [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 5dbed140-a0ed-4dbb-b782-da386ad68471 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.700 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  <uuid>5dbed140-a0ed-4dbb-b782-da386ad68471</uuid>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  <name>instance-00000095</name>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973</nova:name>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:58:02</nova:creationTime>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <nova:port uuid="6d023b7b-1ffe-467f-b731-8b53fc063b54">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.7" ipVersion="4"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <entry name="serial">5dbed140-a0ed-4dbb-b782-da386ad68471</entry>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <entry name="uuid">5dbed140-a0ed-4dbb-b782-da386ad68471</entry>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5dbed140-a0ed-4dbb-b782-da386ad68471_disk">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:b5:43:f9"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <target dev="tap6d023b7b-1f"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/console.log" append="off"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:58:03 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:58:03 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:58:03 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:58:03 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.702 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Preparing to wait for external event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.703 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.703 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.703 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.704 2 DEBUG nova.virt.libvirt.vif [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:57:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=149,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-d2b8ujzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:57:55Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5dbed140-a0ed-4dbb-b782-da386ad68471,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": 
{"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.704 2 DEBUG nova.network.os_vif_util [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.705 2 DEBUG nova.network.os_vif_util [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.705 2 DEBUG os_vif [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.706 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.707 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.711 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6d023b7b-1f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.711 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6d023b7b-1f, col_values=(('external_ids', {'iface-id': '6d023b7b-1ffe-467f-b731-8b53fc063b54', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b5:43:f9', 'vm-uuid': '5dbed140-a0ed-4dbb-b782-da386ad68471'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:03 np0005473739 NetworkManager[44949]: <info>  [1759849083.7140] manager: (tap6d023b7b-1f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/656)
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.723 2 INFO os_vif [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f')#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.794 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.795 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.795 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:b5:43:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.796 2 INFO nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Using config drive#033[00m
Oct  7 10:58:03 np0005473739 nova_compute[259550]: 2025-10-07 14:58:03.879 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:58:04 np0005473739 nova_compute[259550]: 2025-10-07 14:58:04.376 2 DEBUG nova.network.neutron [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updated VIF entry in instance network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:58:04 np0005473739 nova_compute[259550]: 2025-10-07 14:58:04.376 2 DEBUG nova.network.neutron [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updating instance_info_cache with network_info: [{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:58:04 np0005473739 nova_compute[259550]: 2025-10-07 14:58:04.398 2 DEBUG oslo_concurrency.lockutils [req-51f0797b-3c47-461b-95b1-6173094b93da req-fd4c34dc-92a0-4d51-995b-e4eddebde560 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:58:04 np0005473739 nova_compute[259550]: 2025-10-07 14:58:04.480 2 INFO nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Creating config drive at /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config#033[00m
Oct  7 10:58:04 np0005473739 nova_compute[259550]: 2025-10-07 14:58:04.486 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpza97nzpl execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:58:04 np0005473739 nova_compute[259550]: 2025-10-07 14:58:04.629 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpza97nzpl" returned: 0 in 0.143s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:58:04 np0005473739 nova_compute[259550]: 2025-10-07 14:58:04.658 2 DEBUG nova.storage.rbd_utils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:58:04 np0005473739 nova_compute[259550]: 2025-10-07 14:58:04.663 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config 5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:58:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2875: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:58:04 np0005473739 nova_compute[259550]: 2025-10-07 14:58:04.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.156 2 DEBUG oslo_concurrency.processutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config 5dbed140-a0ed-4dbb-b782-da386ad68471_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.494s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.158 2 INFO nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Deleting local config drive /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471/disk.config because it was imported into RBD.#033[00m
Oct  7 10:58:06 np0005473739 kernel: tap6d023b7b-1f: entered promiscuous mode
Oct  7 10:58:06 np0005473739 NetworkManager[44949]: <info>  [1759849086.2217] manager: (tap6d023b7b-1f): new Tun device (/org/freedesktop/NetworkManager/Devices/657)
Oct  7 10:58:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:06Z|01628|binding|INFO|Claiming lport 6d023b7b-1ffe-467f-b731-8b53fc063b54 for this chassis.
Oct  7 10:58:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:06Z|01629|binding|INFO|6d023b7b-1ffe-467f-b731-8b53fc063b54: Claiming fa:16:3e:b5:43:f9 10.100.0.7
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.223 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.228 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.239 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:43:f9 10.100.0.7'], port_security=['fa:16:3e:b5:43:f9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5dbed140-a0ed-4dbb-b782-da386ad68471', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': '21c14362-d44c-45e9-a632-669517d5fa0c ce726bee-ed69-45a1-b3aa-f723c1140437', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262e3557-a916-444a-8a9a-3fa6e70341a2, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6d023b7b-1ffe-467f-b731-8b53fc063b54) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.241 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6d023b7b-1ffe-467f-b731-8b53fc063b54 in datapath 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a bound to our chassis#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.242 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.259 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[62dadfbb-5b0f-48a8-bbf4-b2c4508e494a]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.260 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap249c8beb-31 in ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:58:06 np0005473739 systemd-udevd[426132]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.262 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap249c8beb-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.262 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[92cad7fb-54a4-4264-b0fc-dd09a1ecdc21]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 systemd-machined[214580]: New machine qemu-183-instance-00000095.
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.263 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a0e7d7d4-8447-4351-a23c-5c0223e4b812]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.276 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[95cc23fb-78b5-4c03-af6f-f55f04fa6144]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 NetworkManager[44949]: <info>  [1759849086.2802] device (tap6d023b7b-1f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:58:06 np0005473739 NetworkManager[44949]: <info>  [1759849086.2810] device (tap6d023b7b-1f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.289 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[128586e2-bc08-418b-b9c4-f88affacbefa]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.290 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:06Z|01630|binding|INFO|Setting lport 6d023b7b-1ffe-467f-b731-8b53fc063b54 ovn-installed in OVS
Oct  7 10:58:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:06Z|01631|binding|INFO|Setting lport 6d023b7b-1ffe-467f-b731-8b53fc063b54 up in Southbound
Oct  7 10:58:06 np0005473739 systemd[1]: Started Virtual Machine qemu-183-instance-00000095.
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.297 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.325 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[d33da4e7-17ce-4ad6-9306-f55151da28d5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.330 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c48f0640-95a1-43ef-ad3d-5fa770ef493a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 NetworkManager[44949]: <info>  [1759849086.3317] manager: (tap249c8beb-30): new Veth device (/org/freedesktop/NetworkManager/Devices/658)
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.366 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[544d6ece-3f49-4904-bb58-e57fecebbb55]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.369 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[8b8836d4-da90-4419-9f20-397cfc91d641]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 NetworkManager[44949]: <info>  [1759849086.3985] device (tap249c8beb-30): carrier: link connected
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.406 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[528985a8-2fb3-4106-83e3-885be617cc7a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.425 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[81776116-a964-49a0-ac47-a38ec7b421af]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap249c8beb-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:fe:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966995, 'reachable_time': 19037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 426165, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.448 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c7cd46d4-998d-4d35-9239-ee09c09fcc9b]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe37:fe18'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 966995, 'tstamp': 966995}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 426166, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.472 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e91e5b60-e6ee-437b-9975-a3c012ab1cae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap249c8beb-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:fe:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966995, 'reachable_time': 19037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 426167, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.512 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ca1f8b74-670d-4ca5-b523-2e04b2be8789]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.557 2 DEBUG nova.compute.manager [req-6502b19d-77a7-4f80-812e-04ead8dcb032 req-2564ac80-db7e-4473-8dfe-77727a13922a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.558 2 DEBUG oslo_concurrency.lockutils [req-6502b19d-77a7-4f80-812e-04ead8dcb032 req-2564ac80-db7e-4473-8dfe-77727a13922a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.558 2 DEBUG oslo_concurrency.lockutils [req-6502b19d-77a7-4f80-812e-04ead8dcb032 req-2564ac80-db7e-4473-8dfe-77727a13922a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.558 2 DEBUG oslo_concurrency.lockutils [req-6502b19d-77a7-4f80-812e-04ead8dcb032 req-2564ac80-db7e-4473-8dfe-77727a13922a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.558 2 DEBUG nova.compute.manager [req-6502b19d-77a7-4f80-812e-04ead8dcb032 req-2564ac80-db7e-4473-8dfe-77727a13922a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Processing event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.582 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2938115d-9e73-4aeb-bf3c-bdb8c93a92ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.584 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap249c8beb-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.584 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.585 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap249c8beb-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:06 np0005473739 NetworkManager[44949]: <info>  [1759849086.5872] manager: (tap249c8beb-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/659)
Oct  7 10:58:06 np0005473739 kernel: tap249c8beb-30: entered promiscuous mode
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.592 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap249c8beb-30, col_values=(('external_ids', {'iface-id': 'ae63080e-cf28-419c-b3fe-501836a95f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:06 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:06Z|01632|binding|INFO|Releasing lport ae63080e-cf28-419c-b3fe-501836a95f3b from this chassis (sb_readonly=0)
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.609 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/249c8beb-3ee0-47fa-bbca-7ba20e00ad6a.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/249c8beb-3ee0-47fa-bbca-7ba20e00ad6a.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.610 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[11977b1f-949c-401f-9a9f-3dfc0d893563]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.611 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/249c8beb-3ee0-47fa-bbca-7ba20e00ad6a.pid.haproxy
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.612 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'env', 'PROCESS_TAG=haproxy-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/249c8beb-3ee0-47fa-bbca-7ba20e00ad6a.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:58:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:06.680 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=57, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=56) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:58:06 np0005473739 nova_compute[259550]: 2025-10-07 14:58:06.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2876: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:58:07 np0005473739 podman[426217]: 2025-10-07 14:58:06.949129667 +0000 UTC m=+0.022071985 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:58:07 np0005473739 podman[426217]: 2025-10-07 14:58:07.540102977 +0000 UTC m=+0.613045305 container create e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:58:07 np0005473739 systemd[1]: Started libpod-conmon-e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df.scope.
Oct  7 10:58:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:58:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e1e09a94c0c4d5c30406c651a2371f59d3e0389c901886c205db67c6aa86d5a6/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:07 np0005473739 podman[426217]: 2025-10-07 14:58:07.716763032 +0000 UTC m=+0.789705330 container init e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  7 10:58:07 np0005473739 podman[426217]: 2025-10-07 14:58:07.725097673 +0000 UTC m=+0.798039981 container start e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 10:58:07 np0005473739 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [NOTICE]   (426261) : New worker (426263) forked
Oct  7 10:58:07 np0005473739 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [NOTICE]   (426261) : Loading success.
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.837 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.838 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849087.8368425, 5dbed140-a0ed-4dbb-b782-da386ad68471 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.838 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] VM Started (Lifecycle Event)#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.844 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.849 2 INFO nova.virt.libvirt.driver [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance spawned successfully.#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.850 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.857 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.861 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:58:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:07.866 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.870 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.871 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.872 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.872 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.873 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.873 2 DEBUG nova.virt.libvirt.driver [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.903 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.904 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849087.8370867, 5dbed140-a0ed-4dbb-b782-da386ad68471 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.904 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.931 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.934 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849087.8430917, 5dbed140-a0ed-4dbb-b782-da386ad68471 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.935 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.941 2 INFO nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Took 12.69 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.942 2 DEBUG nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.953 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.956 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.981 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:58:07 np0005473739 nova_compute[259550]: 2025-10-07 14:58:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:58:08 np0005473739 nova_compute[259550]: 2025-10-07 14:58:08.012 2 INFO nova.compute.manager [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Took 13.93 seconds to build instance.#033[00m
Oct  7 10:58:08 np0005473739 nova_compute[259550]: 2025-10-07 14:58:08.093 2 DEBUG oslo_concurrency.lockutils [None req-9151e3e8-588f-442d-89f9-c5d0a22f2990 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 14.235s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:58:08 np0005473739 nova_compute[259550]: 2025-10-07 14:58:08.663 2 DEBUG nova.compute.manager [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:58:08 np0005473739 nova_compute[259550]: 2025-10-07 14:58:08.664 2 DEBUG oslo_concurrency.lockutils [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:58:08 np0005473739 nova_compute[259550]: 2025-10-07 14:58:08.664 2 DEBUG oslo_concurrency.lockutils [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:58:08 np0005473739 nova_compute[259550]: 2025-10-07 14:58:08.664 2 DEBUG oslo_concurrency.lockutils [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:58:08 np0005473739 nova_compute[259550]: 2025-10-07 14:58:08.665 2 DEBUG nova.compute.manager [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] No waiting events found dispatching network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:58:08 np0005473739 nova_compute[259550]: 2025-10-07 14:58:08.665 2 WARNING nova.compute.manager [req-cc5063f1-c130-488e-a8c5-c3b84f8bd43e req-58b2aeba-042e-4951-ae4c-d912b6f2c754 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received unexpected event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:58:08 np0005473739 nova_compute[259550]: 2025-10-07 14:58:08.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:08 np0005473739 nova_compute[259550]: 2025-10-07 14:58:08.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2877: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 130 KiB/s rd, 1.8 MiB/s wr, 39 op/s
Oct  7 10:58:08 np0005473739 nova_compute[259550]: 2025-10-07 14:58:08.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:58:09 np0005473739 nova_compute[259550]: 2025-10-07 14:58:09.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:58:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2878: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 702 KiB/s wr, 45 op/s
Oct  7 10:58:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:12Z|01633|binding|INFO|Releasing lport ae63080e-cf28-419c-b3fe-501836a95f3b from this chassis (sb_readonly=0)
Oct  7 10:58:12 np0005473739 NetworkManager[44949]: <info>  [1759849092.6828] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/660)
Oct  7 10:58:12 np0005473739 NetworkManager[44949]: <info>  [1759849092.6842] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/661)
Oct  7 10:58:12 np0005473739 nova_compute[259550]: 2025-10-07 14:58:12.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:12 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:12Z|01634|binding|INFO|Releasing lport ae63080e-cf28-419c-b3fe-501836a95f3b from this chassis (sb_readonly=0)
Oct  7 10:58:12 np0005473739 nova_compute[259550]: 2025-10-07 14:58:12.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:12 np0005473739 nova_compute[259550]: 2025-10-07 14:58:12.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2879: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 895 KiB/s rd, 12 KiB/s wr, 42 op/s
Oct  7 10:58:13 np0005473739 podman[426273]: 2025-10-07 14:58:13.062175489 +0000 UTC m=+0.052108150 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Oct  7 10:58:13 np0005473739 podman[426274]: 2025-10-07 14:58:13.101653453 +0000 UTC m=+0.089512740 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Oct  7 10:58:13 np0005473739 nova_compute[259550]: 2025-10-07 14:58:13.290 2 DEBUG nova.compute.manager [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:58:13 np0005473739 nova_compute[259550]: 2025-10-07 14:58:13.291 2 DEBUG nova.compute.manager [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing instance network info cache due to event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:58:13 np0005473739 nova_compute[259550]: 2025-10-07 14:58:13.292 2 DEBUG oslo_concurrency.lockutils [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:58:13 np0005473739 nova_compute[259550]: 2025-10-07 14:58:13.293 2 DEBUG oslo_concurrency.lockutils [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:58:13 np0005473739 nova_compute[259550]: 2025-10-07 14:58:13.294 2 DEBUG nova.network.neutron [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:58:13 np0005473739 nova_compute[259550]: 2025-10-07 14:58:13.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:13 np0005473739 nova_compute[259550]: 2025-10-07 14:58:13.715 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:13 np0005473739 nova_compute[259550]: 2025-10-07 14:58:13.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:58:13 np0005473739 nova_compute[259550]: 2025-10-07 14:58:13.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:58:14 np0005473739 nova_compute[259550]: 2025-10-07 14:58:14.789 2 DEBUG nova.network.neutron [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updated VIF entry in instance network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:58:14 np0005473739 nova_compute[259550]: 2025-10-07 14:58:14.790 2 DEBUG nova.network.neutron [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updating instance_info_cache with network_info: [{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:58:14 np0005473739 nova_compute[259550]: 2025-10-07 14:58:14.823 2 DEBUG oslo_concurrency.lockutils [req-093c4ead-f4a0-4be8-8b24-3f4d56325f89 req-6952a686-4546-4b4a-9a87-2d571c28d9a1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:58:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2880: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct  7 10:58:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:16.868 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '57'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2881: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:58:16 np0005473739 nova_compute[259550]: 2025-10-07 14:58:16.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:58:18 np0005473739 nova_compute[259550]: 2025-10-07 14:58:18.668 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:18 np0005473739 nova_compute[259550]: 2025-10-07 14:58:18.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2882: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 10:58:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2883: 305 pgs: 305 active+clean; 93 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 822 KiB/s wr, 74 op/s
Oct  7 10:58:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:58:22
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'images', 'default.rgw.control', 'default.rgw.meta', 'vms', '.mgr', 'default.rgw.log', 'cephfs.cephfs.meta', 'backups', 'volumes', '.rgw.root']
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:58:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2884: 305 pgs: 305 active+clean; 93 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 810 KiB/s wr, 46 op/s
Oct  7 10:58:22 np0005473739 nova_compute[259550]: 2025-10-07 14:58:22.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.110 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.110 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.111 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.111 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.111 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:58:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:58:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:58:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:58:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:58:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:58:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:58:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:58:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:58:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:58:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:58:23 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:58:23 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4179381753' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.602 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.491s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.670 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.707 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.708 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000095 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.944 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.946 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3433MB free_disk=59.95820236206055GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.946 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:58:23 np0005473739 nova_compute[259550]: 2025-10-07 14:58:23.946 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:58:24 np0005473739 nova_compute[259550]: 2025-10-07 14:58:24.033 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 5dbed140-a0ed-4dbb-b782-da386ad68471 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Oct  7 10:58:24 np0005473739 nova_compute[259550]: 2025-10-07 14:58:24.034 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  7 10:58:24 np0005473739 nova_compute[259550]: 2025-10-07 14:58:24.035 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  7 10:58:24 np0005473739 nova_compute[259550]: 2025-10-07 14:58:24.073 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:58:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:24Z|00204|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b5:43:f9 10.100.0.7
Oct  7 10:58:24 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:24Z|00205|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b5:43:f9 10.100.0.7
Oct  7 10:58:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:58:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2520959172' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:58:24 np0005473739 nova_compute[259550]: 2025-10-07 14:58:24.592 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.520s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:58:24 np0005473739 nova_compute[259550]: 2025-10-07 14:58:24.600 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:58:24 np0005473739 nova_compute[259550]: 2025-10-07 14:58:24.618 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:58:24 np0005473739 nova_compute[259550]: 2025-10-07 14:58:24.644 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  7 10:58:24 np0005473739 nova_compute[259550]: 2025-10-07 14:58:24.644 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.698s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:58:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2885: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 2.0 MiB/s wr, 62 op/s
Oct  7 10:58:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2886: 305 pgs: 305 active+clean; 113 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 226 KiB/s rd, 2.1 MiB/s wr, 48 op/s
Oct  7 10:58:27 np0005473739 nova_compute[259550]: 2025-10-07 14:58:27.641 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:58:27 np0005473739 nova_compute[259550]: 2025-10-07 14:58:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:58:27 np0005473739 nova_compute[259550]: 2025-10-07 14:58:27.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  7 10:58:28 np0005473739 nova_compute[259550]: 2025-10-07 14:58:28.025 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  7 10:58:28 np0005473739 nova_compute[259550]: 2025-10-07 14:58:28.025 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 10:58:28 np0005473739 nova_compute[259550]: 2025-10-07 14:58:28.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:58:28 np0005473739 nova_compute[259550]: 2025-10-07 14:58:28.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:58:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2887: 305 pgs: 305 active+clean; 119 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 273 KiB/s rd, 2.1 MiB/s wr, 55 op/s
Oct  7 10:58:30 np0005473739 podman[426362]: 2025-10-07 14:58:30.062961834 +0000 UTC m=+0.057106323 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:58:30 np0005473739 podman[426363]: 2025-10-07 14:58:30.062989364 +0000 UTC m=+0.051353040 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:58:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2888: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 290 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  7 10:58:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:58:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138065770' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:58:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:58:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4138065770' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00075666583235658 of space, bias 1.0, pg target 0.226999749706974 quantized to 32 (current 32)
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:58:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2889: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.3 MiB/s wr, 47 op/s
Oct  7 10:58:33 np0005473739 nova_compute[259550]: 2025-10-07 14:58:33.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:58:33 np0005473739 nova_compute[259550]: 2025-10-07 14:58:33.720 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 10:58:34 np0005473739 nova_compute[259550]: 2025-10-07 14:58:34.337 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:58:34 np0005473739 nova_compute[259550]: 2025-10-07 14:58:34.338 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:58:34 np0005473739 nova_compute[259550]: 2025-10-07 14:58:34.356 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Oct  7 10:58:34 np0005473739 nova_compute[259550]: 2025-10-07 14:58:34.430 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:58:34 np0005473739 nova_compute[259550]: 2025-10-07 14:58:34.431 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:58:34 np0005473739 nova_compute[259550]: 2025-10-07 14:58:34.440 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Oct  7 10:58:34 np0005473739 nova_compute[259550]: 2025-10-07 14:58:34.441 2 INFO nova.compute.claims [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Claim successful on node compute-0.ctlplane.example.com
Oct  7 10:58:34 np0005473739 nova_compute[259550]: 2025-10-07 14:58:34.569 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:58:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2890: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 261 KiB/s rd, 1.3 MiB/s wr, 47 op/s
Oct  7 10:58:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:58:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1828718441' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.079 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.511s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.087 2 DEBUG nova.compute.provider_tree [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.113 2 DEBUG nova.scheduler.client.report [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.149 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.719s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.150 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.211 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.211 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.234 2 INFO nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.263 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.358 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.360 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.360 2 INFO nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Creating image(s)
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.387 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.409 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.430 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.434 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.509 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.510 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.511 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.511 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.555 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:58:35 np0005473739 nova_compute[259550]: 2025-10-07 14:58:35.560 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 27840587-4b28-416e-a84d-176918007fb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:58:36 np0005473739 nova_compute[259550]: 2025-10-07 14:58:36.359 2 DEBUG nova.policy [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '229f8f54ad8b4adcb7d392a6d730edbd', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '2dd1166031634469bed4993a4eb97989', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Oct  7 10:58:36 np0005473739 nova_compute[259550]: 2025-10-07 14:58:36.571 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 27840587-4b28-416e-a84d-176918007fb6_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 1.011s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:58:36 np0005473739 nova_compute[259550]: 2025-10-07 14:58:36.644 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] resizing rbd image 27840587-4b28-416e-a84d-176918007fb6_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288
Oct  7 10:58:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2891: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 192 KiB/s rd, 177 KiB/s wr, 32 op/s
Oct  7 10:58:37 np0005473739 nova_compute[259550]: 2025-10-07 14:58:37.056 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Successfully created port: 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:58:37 np0005473739 nova_compute[259550]: 2025-10-07 14:58:37.339 2 DEBUG nova.objects.instance [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'migration_context' on Instance uuid 27840587-4b28-416e-a84d-176918007fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:58:37 np0005473739 nova_compute[259550]: 2025-10-07 14:58:37.361 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:58:37 np0005473739 nova_compute[259550]: 2025-10-07 14:58:37.361 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Ensure instance console log exists: /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:58:37 np0005473739 nova_compute[259550]: 2025-10-07 14:58:37.362 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:58:37 np0005473739 nova_compute[259550]: 2025-10-07 14:58:37.362 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:58:37 np0005473739 nova_compute[259550]: 2025-10-07 14:58:37.363 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:58:38 np0005473739 nova_compute[259550]: 2025-10-07 14:58:38.679 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:38 np0005473739 nova_compute[259550]: 2025-10-07 14:58:38.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2892: 305 pgs: 305 active+clean; 153 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 74 KiB/s rd, 1.3 MiB/s wr, 28 op/s
Oct  7 10:58:39 np0005473739 nova_compute[259550]: 2025-10-07 14:58:39.300 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Successfully updated port: 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:58:39 np0005473739 nova_compute[259550]: 2025-10-07 14:58:39.324 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:58:39 np0005473739 nova_compute[259550]: 2025-10-07 14:58:39.325 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquired lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:58:39 np0005473739 nova_compute[259550]: 2025-10-07 14:58:39.325 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:58:39 np0005473739 nova_compute[259550]: 2025-10-07 14:58:39.445 2 DEBUG nova.compute.manager [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-changed-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:58:39 np0005473739 nova_compute[259550]: 2025-10-07 14:58:39.446 2 DEBUG nova.compute.manager [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Refreshing instance network info cache due to event network-changed-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:58:39 np0005473739 nova_compute[259550]: 2025-10-07 14:58:39.446 2 DEBUG oslo_concurrency.lockutils [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:58:40 np0005473739 nova_compute[259550]: 2025-10-07 14:58:40.291 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:58:40 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 52e41822-2fe5-41fd-beec-ee57505e7aed does not exist
Oct  7 10:58:40 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 99dfcdc1-1df7-4ada-828f-49350d41ec63 does not exist
Oct  7 10:58:40 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b91a160a-1170-4233-931e-3badd4382d30 does not exist
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:58:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:58:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2893: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 1.8 MiB/s wr, 32 op/s
Oct  7 10:58:41 np0005473739 podman[426862]: 2025-10-07 14:58:41.360580044 +0000 UTC m=+0.039650750 container create eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:58:41 np0005473739 systemd[1]: Started libpod-conmon-eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8.scope.
Oct  7 10:58:41 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:58:41 np0005473739 podman[426862]: 2025-10-07 14:58:41.342250589 +0000 UTC m=+0.021321315 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:58:41 np0005473739 podman[426862]: 2025-10-07 14:58:41.456445171 +0000 UTC m=+0.135515897 container init eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 10:58:41 np0005473739 podman[426862]: 2025-10-07 14:58:41.465164782 +0000 UTC m=+0.144235488 container start eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:58:41 np0005473739 tender_lamarr[426878]: 167 167
Oct  7 10:58:41 np0005473739 systemd[1]: libpod-eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8.scope: Deactivated successfully.
Oct  7 10:58:41 np0005473739 podman[426862]: 2025-10-07 14:58:41.477235861 +0000 UTC m=+0.156306577 container attach eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 10:58:41 np0005473739 podman[426862]: 2025-10-07 14:58:41.477878559 +0000 UTC m=+0.156949265 container died eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 10:58:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e1f49412782a21f12ec51e1efb0361f64790c4a629d2725e61593700b85952ef-merged.mount: Deactivated successfully.
Oct  7 10:58:41 np0005473739 podman[426862]: 2025-10-07 14:58:41.548138068 +0000 UTC m=+0.227208784 container remove eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=tender_lamarr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:58:41 np0005473739 systemd[1]: libpod-conmon-eb387fe70071221fe69b5ad6529114a31da4fc4d53e1083744fc45d158ff42e8.scope: Deactivated successfully.
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.593 2 DEBUG nova.network.neutron [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updating instance_info_cache with network_info: [{"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.626 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Releasing lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.627 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Instance network_info: |[{"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.627 2 DEBUG oslo_concurrency.lockutils [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.628 2 DEBUG nova.network.neutron [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Refreshing network info cache for port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.631 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Start _get_guest_xml network_info=[{"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.637 2 WARNING nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.642 2 DEBUG nova.virt.libvirt.host [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.643 2 DEBUG nova.virt.libvirt.host [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.646 2 DEBUG nova.virt.libvirt.host [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.647 2 DEBUG nova.virt.libvirt.host [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.647 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.647 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.648 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.648 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.648 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.648 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.648 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.649 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.649 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.649 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.649 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.650 2 DEBUG nova.virt.hardware [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 10:58:41 np0005473739 nova_compute[259550]: 2025-10-07 14:58:41.652 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:58:41 np0005473739 podman[426902]: 2025-10-07 14:58:41.740780327 +0000 UTC m=+0.059369223 container create d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:58:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:58:41 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:58:41 np0005473739 systemd[1]: Started libpod-conmon-d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905.scope.
Oct  7 10:58:41 np0005473739 podman[426902]: 2025-10-07 14:58:41.713921665 +0000 UTC m=+0.032510591 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:58:41 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:58:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:41 np0005473739 podman[426902]: 2025-10-07 14:58:41.844539002 +0000 UTC m=+0.163127898 container init d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 10:58:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:41 np0005473739 podman[426902]: 2025-10-07 14:58:41.853175411 +0000 UTC m=+0.171764317 container start d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True)
Oct  7 10:58:41 np0005473739 podman[426902]: 2025-10-07 14:58:41.856827987 +0000 UTC m=+0.175416903 container attach d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 10:58:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:58:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4075044755' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.171 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.519s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.199 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.204 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 10:58:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:58:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4055149411' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.653 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.449s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.655 2 DEBUG nova.virt.libvirt.vif [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:58:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=150,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-le2uqduc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:58:35Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=27840587-4b28-416e-a84d-176918007fb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.656 2 DEBUG nova.network.os_vif_util [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.657 2 DEBUG nova.network.os_vif_util [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.658 2 DEBUG nova.objects.instance [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'pci_devices' on Instance uuid 27840587-4b28-416e-a84d-176918007fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.675 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  <uuid>27840587-4b28-416e-a84d-176918007fb6</uuid>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  <name>instance-00000096</name>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <nova:name>tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404</nova:name>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:58:41</nova:creationTime>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <nova:user uuid="229f8f54ad8b4adcb7d392a6d730edbd">tempest-TestSecurityGroupsBasicOps-1946829349-project-member</nova:user>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <nova:project uuid="2dd1166031634469bed4993a4eb97989">tempest-TestSecurityGroupsBasicOps-1946829349</nova:project>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <nova:port uuid="345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <entry name="serial">27840587-4b28-416e-a84d-176918007fb6</entry>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <entry name="uuid">27840587-4b28-416e-a84d-176918007fb6</entry>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/27840587-4b28-416e-a84d-176918007fb6_disk">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/27840587-4b28-416e-a84d-176918007fb6_disk.config">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:dc:6f:6c"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <target dev="tap345470b2-e0"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/console.log" append="off"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:58:42 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:58:42 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:58:42 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:58:42 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
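Once the `Oct  7 10:58:42 np0005473739 nova_compute[259550]: ` journald prefix is stripped from each line, the `_get_guest_xml` dump above is an ordinary libvirt domain definition. A minimal sketch of pulling out the identifying fields with Python's stdlib; `DOMAIN_XML` is a reduced copy of the logged document, not the full domain:

```python
import xml.etree.ElementTree as ET

# Reduced copy of the domain XML logged above (journald prefixes removed).
DOMAIN_XML = """<domain type="kvm">
  <uuid>27840587-4b28-416e-a84d-176918007fb6</uuid>
  <name>instance-00000096</name>
  <memory>131072</memory>
  <vcpu>1</vcpu>
</domain>"""

def summarize_domain(xml_text: str) -> dict:
    """Extract a few identifying fields from a libvirt domain definition."""
    root = ET.fromstring(xml_text)
    return {
        "uuid": root.findtext("uuid"),
        "name": root.findtext("name"),
        "memory_kib": int(root.findtext("memory")),  # libvirt <memory> is in KiB
        "vcpus": int(root.findtext("vcpu")),
    }

print(summarize_domain(DOMAIN_XML))
```

The same approach extends to the `<metadata>` block, which is namespaced (`xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1"`) and so needs namespace-qualified lookups.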
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.676 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Preparing to wait for external event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.676 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.677 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.677 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.677 2 DEBUG nova.virt.libvirt.vif [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:58:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=150,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-le2uqduc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:58:35Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=27840587-4b28-416e-a84d-176918007fb6,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.678 2 DEBUG nova.network.os_vif_util [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.679 2 DEBUG nova.network.os_vif_util [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.679 2 DEBUG os_vif [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.681 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.687 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap345470b2-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.687 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap345470b2-e0, col_values=(('external_ids', {'iface-id': '345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:6f:6c', 'vm-uuid': '27840587-4b28-416e-a84d-176918007fb6'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:42 np0005473739 NetworkManager[44949]: <info>  [1759849122.7298] manager: (tap345470b2-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/662)
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.737 2 INFO os_vif [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0')#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.785 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.785 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.786 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] No VIF found with MAC fa:16:3e:dc:6f:6c, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.786 2 INFO nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Using config drive#033[00m
Oct  7 10:58:42 np0005473739 nova_compute[259550]: 2025-10-07 14:58:42.808 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:58:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2894: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:58:42 np0005473739 vigilant_hofstadter[426919]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:58:42 np0005473739 vigilant_hofstadter[426919]: --> relative data size: 1.0
Oct  7 10:58:42 np0005473739 vigilant_hofstadter[426919]: --> All data devices are unavailable
Oct  7 10:58:43 np0005473739 systemd[1]: libpod-d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905.scope: Deactivated successfully.
Oct  7 10:58:43 np0005473739 podman[426902]: 2025-10-07 14:58:43.016782715 +0000 UTC m=+1.335371621 container died d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:58:43 np0005473739 systemd[1]: libpod-d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905.scope: Consumed 1.055s CPU time.
Oct  7 10:58:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-561c7e64f816ace0dead067c0ef02ad8fae7bedfb2c565bcdf76dfdde9d561bb-merged.mount: Deactivated successfully.
Oct  7 10:58:43 np0005473739 podman[426902]: 2025-10-07 14:58:43.093763933 +0000 UTC m=+1.412352829 container remove d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_hofstadter, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:58:43 np0005473739 systemd[1]: libpod-conmon-d43249a4d5ef42e3c1657728801482c4ca5ba9a92794e06e38ef254832cbb905.scope: Deactivated successfully.
Oct  7 10:58:43 np0005473739 podman[427043]: 2025-10-07 14:58:43.224722938 +0000 UTC m=+0.081373815 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true)
Oct  7 10:58:43 np0005473739 podman[427044]: 2025-10-07 14:58:43.235392281 +0000 UTC m=+0.092038277 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.271 2 INFO nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Creating config drive at /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config#033[00m
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.276 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq4_huy2s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.416 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpq4_huy2s" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.468 2 DEBUG nova.storage.rbd_utils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] rbd image 27840587-4b28-416e-a84d-176918007fb6_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.472 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config 27840587-4b28-416e-a84d-176918007fb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.511 2 DEBUG nova.network.neutron [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updated VIF entry in instance network info cache for port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.512 2 DEBUG nova.network.neutron [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updating instance_info_cache with network_info: [{"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.535 2 DEBUG oslo_concurrency.lockutils [req-2f609867-a634-4ac9-98bd-30caceda9953 req-73cc1bae-abd9-42dc-bdba-c3a3302998e1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:58:43 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #51. Immutable memtables: 8.
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.665 2 DEBUG oslo_concurrency.processutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config 27840587-4b28-416e-a84d-176918007fb6_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.193s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.667 2 INFO nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Deleting local config drive /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6/disk.config because it was imported into RBD.#033[00m
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:43 np0005473739 kernel: tap345470b2-e0: entered promiscuous mode
Oct  7 10:58:43 np0005473739 NetworkManager[44949]: <info>  [1759849123.7234] manager: (tap345470b2-e0): new Tun device (/org/freedesktop/NetworkManager/Devices/663)
Oct  7 10:58:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:43Z|01635|binding|INFO|Claiming lport 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 for this chassis.
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.726 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:43Z|01636|binding|INFO|345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4: Claiming fa:16:3e:dc:6f:6c 10.100.0.13
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.737 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:6f:6c 10.100.0.13'], port_security=['fa:16:3e:dc:6f:6c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '27840587-4b28-416e-a84d-176918007fb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ce726bee-ed69-45a1-b3aa-f723c1140437', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262e3557-a916-444a-8a9a-3fa6e70341a2, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.738 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 in datapath 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a bound to our chassis#033[00m
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.739 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a#033[00m
Oct  7 10:58:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:43Z|01637|binding|INFO|Setting lport 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 ovn-installed in OVS
Oct  7 10:58:43 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:43Z|01638|binding|INFO|Setting lport 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 up in Southbound
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.748 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.761 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1238e0bf-6454-4c9e-bbdd-e460b2975c17]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:43 np0005473739 systemd-udevd[427289]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:58:43 np0005473739 systemd-machined[214580]: New machine qemu-184-instance-00000096.
Oct  7 10:58:43 np0005473739 podman[427266]: 2025-10-07 14:58:43.773186053 +0000 UTC m=+0.067303312 container create 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 10:58:43 np0005473739 systemd[1]: Started Virtual Machine qemu-184-instance-00000096.
Oct  7 10:58:43 np0005473739 NetworkManager[44949]: <info>  [1759849123.7780] device (tap345470b2-e0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:58:43 np0005473739 NetworkManager[44949]: <info>  [1759849123.7801] device (tap345470b2-e0): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.799 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[56bf1dcb-4b36-4a33-863d-7536b9c3f461]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.804 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6a256198-fc3d-4e43-adba-11688287d745]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:43 np0005473739 systemd[1]: Started libpod-conmon-774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d.scope.
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.834 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[6efa1990-7ef9-43b3-9e7e-93105fb3a53e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:43 np0005473739 podman[427266]: 2025-10-07 14:58:43.753444261 +0000 UTC m=+0.047561550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:58:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.852 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a3e950d6-c117-470e-85fe-62c827c70fce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap249c8beb-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:fe:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 
0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966995, 'reachable_time': 19037, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427306, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:43 np0005473739 podman[427266]: 2025-10-07 14:58:43.859725184 +0000 UTC m=+0.153842443 container init 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 10:58:43 np0005473739 podman[427266]: 2025-10-07 14:58:43.868538787 +0000 UTC m=+0.162656046 container start 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.867 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[9abe13fa-5b25-4dbf-90d0-86681466af15]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap249c8beb-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 967010, 'tstamp': 967010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427308, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap249c8beb-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 967014, 'tstamp': 967014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427308, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.870 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap249c8beb-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.873 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap249c8beb-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.873 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:58:43 np0005473739 nova_compute[259550]: 2025-10-07 14:58:43.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.873 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap249c8beb-30, col_values=(('external_ids', {'iface-id': 'ae63080e-cf28-419c-b3fe-501836a95f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:58:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:58:43.874 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:58:43 np0005473739 podman[427266]: 2025-10-07 14:58:43.87505768 +0000 UTC m=+0.169175019 container attach 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 10:58:43 np0005473739 quizzical_borg[427298]: 167 167
Oct  7 10:58:43 np0005473739 systemd[1]: libpod-774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d.scope: Deactivated successfully.
Oct  7 10:58:43 np0005473739 conmon[427298]: conmon 774f6d9cef435ed253c0 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d.scope/container/memory.events
Oct  7 10:58:43 np0005473739 podman[427266]: 2025-10-07 14:58:43.879439436 +0000 UTC m=+0.173556695 container died 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 10:58:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ce3a7fa9ad6ebb397fcdfcb7d355630e82e8430ce3e21621f1f42cbf34f919fe-merged.mount: Deactivated successfully.
Oct  7 10:58:43 np0005473739 podman[427266]: 2025-10-07 14:58:43.946667825 +0000 UTC m=+0.240785084 container remove 774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_borg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 10:58:43 np0005473739 systemd[1]: libpod-conmon-774f6d9cef435ed253c0e3dacb6a18bdfa15be65436d1f275f3a3ce26d97c09d.scope: Deactivated successfully.
Oct  7 10:58:44 np0005473739 podman[427331]: 2025-10-07 14:58:44.134763943 +0000 UTC m=+0.053169569 container create ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:58:44 np0005473739 systemd[1]: Started libpod-conmon-ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097.scope.
Oct  7 10:58:44 np0005473739 podman[427331]: 2025-10-07 14:58:44.105474977 +0000 UTC m=+0.023880623 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:58:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:58:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c691f69fa29c64e1eeb2d9ca4c154906ef5ab47c27e1fcfb2614ab3abcf2a01d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c691f69fa29c64e1eeb2d9ca4c154906ef5ab47c27e1fcfb2614ab3abcf2a01d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c691f69fa29c64e1eeb2d9ca4c154906ef5ab47c27e1fcfb2614ab3abcf2a01d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c691f69fa29c64e1eeb2d9ca4c154906ef5ab47c27e1fcfb2614ab3abcf2a01d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:44 np0005473739 podman[427331]: 2025-10-07 14:58:44.313828911 +0000 UTC m=+0.232234537 container init ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 10:58:44 np0005473739 podman[427331]: 2025-10-07 14:58:44.321187977 +0000 UTC m=+0.239593603 container start ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:58:44 np0005473739 podman[427331]: 2025-10-07 14:58:44.32662896 +0000 UTC m=+0.245034606 container attach ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.428 2 DEBUG nova.compute.manager [req-e1b058b8-4e8b-4afe-91b7-a1e1c413576e req-1a3750d2-37cd-4391-ae08-f902c654f95f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.430 2 DEBUG oslo_concurrency.lockutils [req-e1b058b8-4e8b-4afe-91b7-a1e1c413576e req-1a3750d2-37cd-4391-ae08-f902c654f95f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.430 2 DEBUG oslo_concurrency.lockutils [req-e1b058b8-4e8b-4afe-91b7-a1e1c413576e req-1a3750d2-37cd-4391-ae08-f902c654f95f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.431 2 DEBUG oslo_concurrency.lockutils [req-e1b058b8-4e8b-4afe-91b7-a1e1c413576e req-1a3750d2-37cd-4391-ae08-f902c654f95f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.431 2 DEBUG nova.compute.manager [req-e1b058b8-4e8b-4afe-91b7-a1e1c413576e req-1a3750d2-37cd-4391-ae08-f902c654f95f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Processing event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.810 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849124.8101046, 27840587-4b28-416e-a84d-176918007fb6 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.812 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] VM Started (Lifecycle Event)#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.814 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.819 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.823 2 INFO nova.virt.libvirt.driver [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] Instance spawned successfully.#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.823 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.835 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.840 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.846 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.846 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.847 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.847 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.848 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.848 2 DEBUG nova.virt.libvirt.driver [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.863 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.864 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849124.8104093, 27840587-4b28-416e-a84d-176918007fb6 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.864 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.894 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.899 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849124.816494, 27840587-4b28-416e-a84d-176918007fb6 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.899 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:58:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2895: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.8 MiB/s wr, 30 op/s
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.917 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.921 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.925 2 INFO nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Took 9.57 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.925 2 DEBUG nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.951 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:58:44 np0005473739 nova_compute[259550]: 2025-10-07 14:58:44.984 2 INFO nova.compute.manager [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Took 10.58 seconds to build instance.#033[00m
Oct  7 10:58:45 np0005473739 nova_compute[259550]: 2025-10-07 14:58:45.002 2 DEBUG oslo_concurrency.lockutils [None req-10e1d66e-246b-4523-8422-88c54ddea8ad 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]: {
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:    "0": [
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:        {
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "devices": [
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "/dev/loop3"
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            ],
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_name": "ceph_lv0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_size": "21470642176",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "name": "ceph_lv0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "tags": {
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.cluster_name": "ceph",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.crush_device_class": "",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.encrypted": "0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.osd_id": "0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.type": "block",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.vdo": "0"
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            },
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "type": "block",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "vg_name": "ceph_vg0"
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:        }
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:    ],
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:    "1": [
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:        {
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "devices": [
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "/dev/loop4"
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            ],
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_name": "ceph_lv1",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_size": "21470642176",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "name": "ceph_lv1",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "tags": {
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.cluster_name": "ceph",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.crush_device_class": "",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.encrypted": "0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.osd_id": "1",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.type": "block",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.vdo": "0"
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            },
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "type": "block",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "vg_name": "ceph_vg1"
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:        }
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:    ],
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:    "2": [
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:        {
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "devices": [
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "/dev/loop5"
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            ],
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_name": "ceph_lv2",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_size": "21470642176",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "name": "ceph_lv2",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "tags": {
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.cluster_name": "ceph",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.crush_device_class": "",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.encrypted": "0",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.osd_id": "2",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.type": "block",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:                "ceph.vdo": "0"
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            },
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "type": "block",
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:            "vg_name": "ceph_vg2"
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:        }
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]:    ]
Oct  7 10:58:45 np0005473739 priceless_wilbur[427381]: }
Oct  7 10:58:45 np0005473739 systemd[1]: libpod-ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097.scope: Deactivated successfully.
Oct  7 10:58:45 np0005473739 conmon[427381]: conmon ad4259bf764e87834929 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097.scope/container/memory.events
Oct  7 10:58:45 np0005473739 podman[427331]: 2025-10-07 14:58:45.180688903 +0000 UTC m=+1.099094529 container died ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 10:58:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c691f69fa29c64e1eeb2d9ca4c154906ef5ab47c27e1fcfb2614ab3abcf2a01d-merged.mount: Deactivated successfully.
Oct  7 10:58:45 np0005473739 podman[427331]: 2025-10-07 14:58:45.545092217 +0000 UTC m=+1.463497843 container remove ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_wilbur, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:58:45 np0005473739 systemd[1]: libpod-conmon-ad4259bf764e87834929306e17fb6850bed6b5ea990c5bd7f966eb36b4dd0097.scope: Deactivated successfully.
Oct  7 10:58:46 np0005473739 podman[427552]: 2025-10-07 14:58:46.165032023 +0000 UTC m=+0.023146913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:58:46 np0005473739 podman[427552]: 2025-10-07 14:58:46.337041286 +0000 UTC m=+0.195156176 container create a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:58:46 np0005473739 systemd[1]: Started libpod-conmon-a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d.scope.
Oct  7 10:58:46 np0005473739 nova_compute[259550]: 2025-10-07 14:58:46.581 2 DEBUG nova.compute.manager [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:58:46 np0005473739 nova_compute[259550]: 2025-10-07 14:58:46.583 2 DEBUG oslo_concurrency.lockutils [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:58:46 np0005473739 nova_compute[259550]: 2025-10-07 14:58:46.583 2 DEBUG oslo_concurrency.lockutils [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:58:46 np0005473739 nova_compute[259550]: 2025-10-07 14:58:46.583 2 DEBUG oslo_concurrency.lockutils [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:58:46 np0005473739 nova_compute[259550]: 2025-10-07 14:58:46.583 2 DEBUG nova.compute.manager [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] No waiting events found dispatching network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:58:46 np0005473739 nova_compute[259550]: 2025-10-07 14:58:46.584 2 WARNING nova.compute.manager [req-2abfe958-9249-425e-a940-2bdf36a50a04 req-2f46fe51-af74-47d4-b85d-58f63327c899 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received unexpected event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:58:46 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:58:46 np0005473739 podman[427552]: 2025-10-07 14:58:46.663238968 +0000 UTC m=+0.521353828 container init a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 10:58:46 np0005473739 podman[427552]: 2025-10-07 14:58:46.677085335 +0000 UTC m=+0.535200195 container start a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 10:58:46 np0005473739 eager_kalam[427568]: 167 167
Oct  7 10:58:46 np0005473739 systemd[1]: libpod-a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d.scope: Deactivated successfully.
Oct  7 10:58:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:46 np0005473739 podman[427552]: 2025-10-07 14:58:46.857262144 +0000 UTC m=+0.715377054 container attach a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 10:58:46 np0005473739 podman[427552]: 2025-10-07 14:58:46.85790044 +0000 UTC m=+0.716015310 container died a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, io.buildah.version=1.39.3)
Oct  7 10:58:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2896: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 190 KiB/s rd, 1.8 MiB/s wr, 38 op/s
Oct  7 10:58:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5b45993188fc29c8052779cf74b67ce92b348422d1563e704de348436e9b9fa7-merged.mount: Deactivated successfully.
Oct  7 10:58:47 np0005473739 podman[427552]: 2025-10-07 14:58:47.099750101 +0000 UTC m=+0.957864961 container remove a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_kalam, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:58:47 np0005473739 systemd[1]: libpod-conmon-a995f98709b46071d1f0897493b025c3d56305c0f75a4daeb89714033751824d.scope: Deactivated successfully.
Oct  7 10:58:47 np0005473739 podman[427592]: 2025-10-07 14:58:47.308159587 +0000 UTC m=+0.070621231 container create 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:58:47 np0005473739 podman[427592]: 2025-10-07 14:58:47.26107874 +0000 UTC m=+0.023540414 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:58:47 np0005473739 systemd[1]: Started libpod-conmon-913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c.scope.
Oct  7 10:58:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:58:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee18d7f1181c2bb0f43b02559bd92122357d2255839032f0f6f5189a0216fb7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee18d7f1181c2bb0f43b02559bd92122357d2255839032f0f6f5189a0216fb7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee18d7f1181c2bb0f43b02559bd92122357d2255839032f0f6f5189a0216fb7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dee18d7f1181c2bb0f43b02559bd92122357d2255839032f0f6f5189a0216fb7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:58:47 np0005473739 podman[427592]: 2025-10-07 14:58:47.44167467 +0000 UTC m=+0.204136314 container init 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:58:47 np0005473739 podman[427592]: 2025-10-07 14:58:47.449916908 +0000 UTC m=+0.212378552 container start 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 10:58:47 np0005473739 podman[427592]: 2025-10-07 14:58:47.515657088 +0000 UTC m=+0.278118742 container attach 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:58:47 np0005473739 nova_compute[259550]: 2025-10-07 14:58:47.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:48 np0005473739 kind_davinci[427609]: {
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "osd_id": 2,
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "type": "bluestore"
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:    },
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "osd_id": 1,
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "type": "bluestore"
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:    },
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "osd_id": 0,
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:        "type": "bluestore"
Oct  7 10:58:48 np0005473739 kind_davinci[427609]:    }
Oct  7 10:58:48 np0005473739 kind_davinci[427609]: }
Oct  7 10:58:48 np0005473739 systemd[1]: libpod-913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c.scope: Deactivated successfully.
Oct  7 10:58:48 np0005473739 systemd[1]: libpod-913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c.scope: Consumed 1.062s CPU time.
Oct  7 10:58:48 np0005473739 podman[427592]: 2025-10-07 14:58:48.506921092 +0000 UTC m=+1.269382736 container died 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:58:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-dee18d7f1181c2bb0f43b02559bd92122357d2255839032f0f6f5189a0216fb7-merged.mount: Deactivated successfully.
Oct  7 10:58:48 np0005473739 nova_compute[259550]: 2025-10-07 14:58:48.682 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:48 np0005473739 podman[427592]: 2025-10-07 14:58:48.713569611 +0000 UTC m=+1.476031255 container remove 913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_davinci, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 10:58:48 np0005473739 systemd[1]: libpod-conmon-913611efd9e2ed5aaed5a0e11f3194a17261ab0f49a43b123cd01fe4e3f3a86c.scope: Deactivated successfully.
Oct  7 10:58:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:58:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:58:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:58:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:58:48 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c3df6443-03b5-4591-9a68-e9ea0e21a8f4 does not exist
Oct  7 10:58:48 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 78888be8-3a13-42ae-95db-7ff0727ff714 does not exist
Oct  7 10:58:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2897: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.5 MiB/s rd, 1.8 MiB/s wr, 86 op/s
Oct  7 10:58:49 np0005473739 nova_compute[259550]: 2025-10-07 14:58:49.661 2 DEBUG nova.compute.manager [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-changed-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:58:49 np0005473739 nova_compute[259550]: 2025-10-07 14:58:49.662 2 DEBUG nova.compute.manager [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Refreshing instance network info cache due to event network-changed-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:58:49 np0005473739 nova_compute[259550]: 2025-10-07 14:58:49.662 2 DEBUG oslo_concurrency.lockutils [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:58:49 np0005473739 nova_compute[259550]: 2025-10-07 14:58:49.663 2 DEBUG oslo_concurrency.lockutils [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:58:49 np0005473739 nova_compute[259550]: 2025-10-07 14:58:49.663 2 DEBUG nova.network.neutron [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Refreshing network info cache for port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:58:49 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:58:49 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:58:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2898: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 520 KiB/s wr, 86 op/s
Oct  7 10:58:51 np0005473739 nova_compute[259550]: 2025-10-07 14:58:51.337 2 DEBUG nova.network.neutron [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updated VIF entry in instance network info cache for port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:58:51 np0005473739 nova_compute[259550]: 2025-10-07 14:58:51.338 2 DEBUG nova.network.neutron [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updating instance_info_cache with network_info: [{"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:58:51 np0005473739 nova_compute[259550]: 2025-10-07 14:58:51.376 2 DEBUG oslo_concurrency.lockutils [req-4ce7b2d1-e6ea-4672-b991-c0fd6e09d415 req-51ef9f2f-1e9c-4a20-85b4-9451257ed3ef 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-27840587-4b28-416e-a84d-176918007fb6" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:58:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:58:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:58:52 np0005473739 nova_compute[259550]: 2025-10-07 14:58:52.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2899: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  7 10:58:53 np0005473739 nova_compute[259550]: 2025-10-07 14:58:53.684 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2900: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  7 10:58:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:58:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2901: 305 pgs: 305 active+clean; 167 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 3.7 KiB/s wr, 71 op/s
Oct  7 10:58:57 np0005473739 nova_compute[259550]: 2025-10-07 14:58:57.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:58 np0005473739 nova_compute[259550]: 2025-10-07 14:58:58.686 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:58:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2902: 305 pgs: 305 active+clean; 187 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.8 MiB/s rd, 1.4 MiB/s wr, 91 op/s
Oct  7 10:58:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:59Z|00206|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:dc:6f:6c 10.100.0.13
Oct  7 10:58:59 np0005473739 ovn_controller[151684]: 2025-10-07T14:58:59Z|00207|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:dc:6f:6c 10.100.0.13
Oct  7 10:59:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:00.096 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:00.096 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:00.097 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2903: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 773 KiB/s rd, 2.1 MiB/s wr, 78 op/s
Oct  7 10:59:01 np0005473739 podman[427707]: 2025-10-07 14:59:01.083722717 +0000 UTC m=+0.067461697 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 10:59:01 np0005473739 podman[427706]: 2025-10-07 14:59:01.085717079 +0000 UTC m=+0.069457319 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 10:59:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:02 np0005473739 nova_compute[259550]: 2025-10-07 14:59:02.736 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2904: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 327 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  7 10:59:03 np0005473739 nova_compute[259550]: 2025-10-07 14:59:03.688 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.445 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.446 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.446 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.446 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.447 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.449 2 INFO nova.compute.manager [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Terminating instance#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.450 2 DEBUG nova.compute.manager [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:59:04 np0005473739 kernel: tap345470b2-e0 (unregistering): left promiscuous mode
Oct  7 10:59:04 np0005473739 NetworkManager[44949]: <info>  [1759849144.5477] device (tap345470b2-e0): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:59:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:04Z|01639|binding|INFO|Releasing lport 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 from this chassis (sb_readonly=0)
Oct  7 10:59:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:04Z|01640|binding|INFO|Setting lport 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 down in Southbound
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:04 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:04Z|01641|binding|INFO|Removing iface tap345470b2-e0 ovn-installed in OVS
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.571 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:6f:6c 10.100.0.13'], port_security=['fa:16:3e:dc:6f:6c 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '27840587-4b28-416e-a84d-176918007fb6', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '5', 'neutron:security_group_ids': 'abaa319c-3a20-4f59-984b-df0700b81045', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262e3557-a916-444a-8a9a-3fa6e70341a2, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.572 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 in datapath 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a unbound from our chassis#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.573 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.577 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.594 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f42870e5-5e3e-4d2b-a157-3f7e2e488abf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.626 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[952b07a8-5d60-4f47-851a-53cdabb475a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:04 np0005473739 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Deactivated successfully.
Oct  7 10:59:04 np0005473739 systemd[1]: machine-qemu\x2d184\x2dinstance\x2d00000096.scope: Consumed 14.327s CPU time.
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.629 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[9373ba5f-1df1-4a2c-a4d7-a90044bba633]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:04 np0005473739 systemd-machined[214580]: Machine qemu-184-instance-00000096 terminated.
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.656 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[b3ca8754-979a-4cf0-8bad-e6d089db066f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.673 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.677 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[43f3d6d6-778f-4cfb-9257-fc95cfe8cd0d]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap249c8beb-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:37:fe:18'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 7, 'rx_bytes': 958, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 468], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966995, 'reachable_time': 38312, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427759, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.692 2 INFO nova.virt.libvirt.driver [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] Instance destroyed successfully.#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.693 2 DEBUG nova.objects.instance [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 27840587-4b28-416e-a84d-176918007fb6 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.695 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[756d78df-9ebd-419c-b7ff-4a1d4c59e43b]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap249c8beb-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 967010, 'tstamp': 967010}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427767, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap249c8beb-31'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 967014, 'tstamp': 967014}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 427767, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.696 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap249c8beb-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.697 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.703 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap249c8beb-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.703 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.703 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap249c8beb-30, col_values=(('external_ids', {'iface-id': 'ae63080e-cf28-419c-b3fe-501836a95f3b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:04 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:04.704 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.706 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.708 2 DEBUG nova.virt.libvirt.vif [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:58:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-gen-1-1089669404',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ge',id=150,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:58:44Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-le2uqduc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:58:44Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=27840587-4b28-416e-a84d-176918007fb6,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.709 2 DEBUG nova.network.os_vif_util [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "address": "fa:16:3e:dc:6f:6c", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap345470b2-e0", "ovs_interfaceid": "345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.709 2 DEBUG nova.network.os_vif_util [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.710 2 DEBUG os_vif [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.713 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap345470b2-e0, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.719 2 INFO os_vif [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:dc:6f:6c,bridge_name='br-int',has_traffic_filtering=True,id=345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap345470b2-e0')#033[00m
Oct  7 10:59:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2905: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 328 KiB/s rd, 2.1 MiB/s wr, 64 op/s
Oct  7 10:59:04 np0005473739 nova_compute[259550]: 2025-10-07 14:59:04.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.393 2 INFO nova.virt.libvirt.driver [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Deleting instance files /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6_del#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.394 2 INFO nova.virt.libvirt.driver [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Deletion of /var/lib/nova/instances/27840587-4b28-416e-a84d-176918007fb6_del complete#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.454 2 INFO nova.compute.manager [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.456 2 DEBUG oslo.service.loopingcall [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.456 2 DEBUG nova.compute.manager [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.456 2 DEBUG nova.network.neutron [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.480 2 DEBUG nova.compute.manager [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-unplugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.481 2 DEBUG oslo_concurrency.lockutils [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.481 2 DEBUG oslo_concurrency.lockutils [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.482 2 DEBUG oslo_concurrency.lockutils [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.482 2 DEBUG nova.compute.manager [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] No waiting events found dispatching network-vif-unplugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:59:05 np0005473739 nova_compute[259550]: 2025-10-07 14:59:05.482 2 DEBUG nova.compute.manager [req-0e5e1fe2-fba6-4401-aa3e-a77a6d27807c req-04b7eed6-b0e9-4c66-ab0b-04528d586c98 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-unplugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:59:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:06.708 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=58, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=57) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:59:06 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:06.709 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 10:59:06 np0005473739 nova_compute[259550]: 2025-10-07 14:59:06.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:06 np0005473739 nova_compute[259550]: 2025-10-07 14:59:06.736 2 DEBUG nova.network.neutron [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:59:06 np0005473739 nova_compute[259550]: 2025-10-07 14:59:06.759 2 INFO nova.compute.manager [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] Took 1.30 seconds to deallocate network for instance.#033[00m
Oct  7 10:59:06 np0005473739 nova_compute[259550]: 2025-10-07 14:59:06.802 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:06 np0005473739 nova_compute[259550]: 2025-10-07 14:59:06.802 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:06 np0005473739 nova_compute[259550]: 2025-10-07 14:59:06.850 2 DEBUG nova.compute.manager [req-b0f3c282-8c61-47d4-8b91-78388f442310 req-fee7adee-e552-4141-9601-5282d9102dc3 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-deleted-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:59:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:06 np0005473739 nova_compute[259550]: 2025-10-07 14:59:06.879 2 DEBUG oslo_concurrency.processutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:59:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2906: 305 pgs: 305 active+clean; 173 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 332 KiB/s rd, 2.1 MiB/s wr, 73 op/s
Oct  7 10:59:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:59:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/442935267' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.316 2 DEBUG oslo_concurrency.processutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.324 2 DEBUG nova.compute.provider_tree [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.565 2 DEBUG nova.scheduler.client.report [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.588 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.786s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.611 2 INFO nova.scheduler.client.report [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 27840587-4b28-416e-a84d-176918007fb6#033[00m
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.622 2 DEBUG nova.compute.manager [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.623 2 DEBUG oslo_concurrency.lockutils [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "27840587-4b28-416e-a84d-176918007fb6-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.623 2 DEBUG oslo_concurrency.lockutils [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.623 2 DEBUG oslo_concurrency.lockutils [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.624 2 DEBUG nova.compute.manager [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] No waiting events found dispatching network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.624 2 WARNING nova.compute.manager [req-caff3358-1307-4193-bd25-7c398b682f88 req-7c4178fb-c45b-45a0-ae6f-de97de2d6133 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 27840587-4b28-416e-a84d-176918007fb6] Received unexpected event network-vif-plugged-345470b2-e0a9-4f28-a9fa-b0f3c0a9efa4 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:59:07 np0005473739 nova_compute[259550]: 2025-10-07 14:59:07.689 2 DEBUG oslo_concurrency.lockutils [None req-d77f0c10-0acf-4df8-ae54-870a998aa92f 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "27840587-4b28-416e-a84d-176918007fb6" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.243s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:08 np0005473739 nova_compute[259550]: 2025-10-07 14:59:08.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2907: 305 pgs: 305 active+clean; 121 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 346 KiB/s rd, 2.1 MiB/s wr, 92 op/s
Oct  7 10:59:08 np0005473739 nova_compute[259550]: 2025-10-07 14:59:08.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:59:08 np0005473739 nova_compute[259550]: 2025-10-07 14:59:08.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.055 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.056 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.056 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.057 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.057 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.058 2 INFO nova.compute.manager [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Terminating instance#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.059 2 DEBUG nova.compute.manager [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 10:59:09 np0005473739 kernel: tap6d023b7b-1f (unregistering): left promiscuous mode
Oct  7 10:59:09 np0005473739 NetworkManager[44949]: <info>  [1759849149.1803] device (tap6d023b7b-1f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 10:59:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:09Z|01642|binding|INFO|Releasing lport 6d023b7b-1ffe-467f-b731-8b53fc063b54 from this chassis (sb_readonly=0)
Oct  7 10:59:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:09Z|01643|binding|INFO|Setting lport 6d023b7b-1ffe-467f-b731-8b53fc063b54 down in Southbound
Oct  7 10:59:09 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:09Z|01644|binding|INFO|Removing iface tap6d023b7b-1f ovn-installed in OVS
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.186 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.192 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b5:43:f9 10.100.0.7'], port_security=['fa:16:3e:b5:43:f9 10.100.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.7/28', 'neutron:device_id': '5dbed140-a0ed-4dbb-b782-da386ad68471', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '2dd1166031634469bed4993a4eb97989', 'neutron:revision_number': '4', 'neutron:security_group_ids': '21c14362-d44c-45e9-a632-669517d5fa0c ce726bee-ed69-45a1-b3aa-f723c1140437', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=262e3557-a916-444a-8a9a-3fa6e70341a2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=6d023b7b-1ffe-467f-b731-8b53fc063b54) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.193 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 6d023b7b-1ffe-467f-b731-8b53fc063b54 in datapath 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a unbound from our chassis#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.194 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.196 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[1bb9cd75-f985-4dc1-8dc9-9924c679e7fb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.197 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a namespace which is not needed anymore#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:09 np0005473739 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Deactivated successfully.
Oct  7 10:59:09 np0005473739 systemd[1]: machine-qemu\x2d183\x2dinstance\x2d00000095.scope: Consumed 15.666s CPU time.
Oct  7 10:59:09 np0005473739 systemd-machined[214580]: Machine qemu-183-instance-00000095 terminated.
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.297 2 INFO nova.virt.libvirt.driver [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance destroyed successfully.#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.298 2 DEBUG nova.objects.instance [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lazy-loading 'resources' on Instance uuid 5dbed140-a0ed-4dbb-b782-da386ad68471 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.311 2 DEBUG nova.virt.libvirt.vif [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:57:52Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',display_name='tempest-server-tempest-TestSecurityGroupsBasicOps-1946829349-access_point-1531578973',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-server-tempest-testsecuritygroupsbasicops-1946829349-ac',id=149,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBDSFN/mFQx9H7dlmOCtth1LrSvWe++YjThy0IEMqu/uGYCDYUjvf7c45GqSO97LvsDFzXDoa1ytH4+EWghxiErPTj2Cb9endnZ6Iv01/bnsmOGiX5Tg0V7s6thKKuGN4cw==',key_name='tempest-TestSecurityGroupsBasicOps-206963290',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:58:07Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='2dd1166031634469bed4993a4eb97989',ramdisk_id='',reservation_id='r-d2b8ujzr',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSecurityGroupsBasicOps-1946829349',owner_user_name='tempest-TestSecurityGroupsBasicOps-1946829349-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T14:58:07Z,user_data=None,user_id='229f8f54ad8b4adcb7d392a6d730edbd',uuid=5dbed140-a0ed-4dbb-b782-da386ad68471,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.311 2 DEBUG nova.network.os_vif_util [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converting VIF {"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.237", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.312 2 DEBUG nova.network.os_vif_util [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.312 2 DEBUG os_vif [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.314 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6d023b7b-1f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.320 2 INFO os_vif [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b5:43:f9,bridge_name='br-int',has_traffic_filtering=True,id=6d023b7b-1ffe-467f-b731-8b53fc063b54,network=Network(249c8beb-3ee0-47fa-bbca-7ba20e00ad6a),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6d023b7b-1f')#033[00m
Oct  7 10:59:09 np0005473739 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [NOTICE]   (426261) : haproxy version is 2.8.14-c23fe91
Oct  7 10:59:09 np0005473739 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [NOTICE]   (426261) : path to executable is /usr/sbin/haproxy
Oct  7 10:59:09 np0005473739 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [WARNING]  (426261) : Exiting Master process...
Oct  7 10:59:09 np0005473739 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [WARNING]  (426261) : Exiting Master process...
Oct  7 10:59:09 np0005473739 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [ALERT]    (426261) : Current worker (426263) exited with code 143 (Terminated)
Oct  7 10:59:09 np0005473739 neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a[426257]: [WARNING]  (426261) : All workers exited. Exiting... (0)
Oct  7 10:59:09 np0005473739 systemd[1]: libpod-e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df.scope: Deactivated successfully.
Oct  7 10:59:09 np0005473739 podman[427842]: 2025-10-07 14:59:09.366372647 +0000 UTC m=+0.063536533 container died e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:59:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df-userdata-shm.mount: Deactivated successfully.
Oct  7 10:59:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e1e09a94c0c4d5c30406c651a2371f59d3e0389c901886c205db67c6aa86d5a6-merged.mount: Deactivated successfully.
Oct  7 10:59:09 np0005473739 podman[427842]: 2025-10-07 14:59:09.482077139 +0000 UTC m=+0.179241025 container cleanup e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.vendor=CentOS)
Oct  7 10:59:09 np0005473739 systemd[1]: libpod-conmon-e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df.scope: Deactivated successfully.
Oct  7 10:59:09 np0005473739 podman[427895]: 2025-10-07 14:59:09.570572971 +0000 UTC m=+0.065845834 container remove e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.576 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[166c430d-3d4d-4c6e-b6f3-78146d3c1efb]: (4, ('Tue Oct  7 02:59:09 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a (e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df)\ne648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df\nTue Oct  7 02:59:09 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a (e648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df)\ne648e12853acb179dad41b807bed29ed2d623750faac3637080762a89d0b11df\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.578 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[a49eb191-faca-4438-b7d9-0318978c7803]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.579 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap249c8beb-30, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.580 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:09 np0005473739 kernel: tap249c8beb-30: left promiscuous mode
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.584 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.586 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c95d7af3-0f53-44a2-8a54-8c558ec2687f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.598 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.621 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0251e70d-e8db-4a00-a381-e7fefb1fd46e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.622 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[0773d553-4685-4055-887d-7b2c3e70ce68]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.641 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3ad5e0d2-3d90-47fa-8f72-d51ec25f6da8]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 966988, 'reachable_time': 43718, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 427910, 'error': None, 'target': 'ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:09 np0005473739 systemd[1]: run-netns-ovnmeta\x2d249c8beb\x2d3ee0\x2d47fa\x2dbbca\x2d7ba20e00ad6a.mount: Deactivated successfully.
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.646 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-249c8beb-3ee0-47fa-bbca-7ba20e00ad6a deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 10:59:09 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:09.646 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[cc5628c0-85f4-4984-812c-b54faffc3a68]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.699 2 DEBUG nova.compute.manager [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.699 2 DEBUG nova.compute.manager [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing instance network info cache due to event network-changed-6d023b7b-1ffe-467f-b731-8b53fc063b54. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.699 2 DEBUG oslo_concurrency.lockutils [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.700 2 DEBUG oslo_concurrency.lockutils [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.700 2 DEBUG nova.network.neutron [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Refreshing network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.791 2 DEBUG nova.compute.manager [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-unplugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.792 2 DEBUG oslo_concurrency.lockutils [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.792 2 DEBUG oslo_concurrency.lockutils [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.792 2 DEBUG oslo_concurrency.lockutils [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.793 2 DEBUG nova.compute.manager [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] No waiting events found dispatching network-vif-unplugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:59:09 np0005473739 nova_compute[259550]: 2025-10-07 14:59:09.793 2 DEBUG nova.compute.manager [req-6b5b1440-c2e3-4e51-98a6-271dca525a13 req-0789ad6d-6636-4b92-bcf8-9f213fa01eb0 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-unplugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 10:59:10 np0005473739 nova_compute[259550]: 2025-10-07 14:59:10.179 2 INFO nova.virt.libvirt.driver [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Deleting instance files /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471_del#033[00m
Oct  7 10:59:10 np0005473739 nova_compute[259550]: 2025-10-07 14:59:10.179 2 INFO nova.virt.libvirt.driver [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Deletion of /var/lib/nova/instances/5dbed140-a0ed-4dbb-b782-da386ad68471_del complete#033[00m
Oct  7 10:59:10 np0005473739 nova_compute[259550]: 2025-10-07 14:59:10.240 2 INFO nova.compute.manager [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Took 1.18 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 10:59:10 np0005473739 nova_compute[259550]: 2025-10-07 14:59:10.240 2 DEBUG oslo.service.loopingcall [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 10:59:10 np0005473739 nova_compute[259550]: 2025-10-07 14:59:10.241 2 DEBUG nova.compute.manager [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 10:59:10 np0005473739 nova_compute[259550]: 2025-10-07 14:59:10.241 2 DEBUG nova.network.neutron [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 10:59:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2908: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 258 KiB/s rd, 817 KiB/s wr, 67 op/s
Oct  7 10:59:10 np0005473739 nova_compute[259550]: 2025-10-07 14:59:10.993 2 DEBUG nova.network.neutron [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.017 2 INFO nova.compute.manager [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Took 0.78 seconds to deallocate network for instance.#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.074 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.074 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.133 2 DEBUG oslo_concurrency.processutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.351 2 DEBUG nova.network.neutron [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updated VIF entry in instance network info cache for port 6d023b7b-1ffe-467f-b731-8b53fc063b54. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.352 2 DEBUG nova.network.neutron [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Updating instance_info_cache with network_info: [{"id": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "address": "fa:16:3e:b5:43:f9", "network": {"id": "249c8beb-3ee0-47fa-bbca-7ba20e00ad6a", "bridge": "br-int", "label": "tempest-network-smoke--1513367118", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "2dd1166031634469bed4993a4eb97989", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6d023b7b-1f", "ovs_interfaceid": "6d023b7b-1ffe-467f-b731-8b53fc063b54", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.412 2 DEBUG oslo_concurrency.lockutils [req-4178abf0-084d-4dbc-85a6-217fef9ffd8c req-645aea56-622d-4fab-bf98-1396f6d5d653 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-5dbed140-a0ed-4dbb-b782-da386ad68471" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:59:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:59:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1034001136' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.600 2 DEBUG oslo_concurrency.processutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.468s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.607 2 DEBUG nova.compute.provider_tree [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.620 2 DEBUG nova.scheduler.client.report [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.641 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.663 2 INFO nova.scheduler.client.report [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Deleted allocations for instance 5dbed140-a0ed-4dbb-b782-da386ad68471#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.760 2 DEBUG oslo_concurrency.lockutils [None req-d6170650-2cd1-4082-a79e-4a870c7832b3 229f8f54ad8b4adcb7d392a6d730edbd 2dd1166031634469bed4993a4eb97989 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.704s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.874 2 DEBUG nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.874 2 DEBUG oslo_concurrency.lockutils [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.875 2 DEBUG oslo_concurrency.lockutils [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.875 2 DEBUG oslo_concurrency.lockutils [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "5dbed140-a0ed-4dbb-b782-da386ad68471-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.876 2 DEBUG nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] No waiting events found dispatching network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.876 2 WARNING nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received unexpected event network-vif-plugged-6d023b7b-1ffe-467f-b731-8b53fc063b54 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.877 2 DEBUG nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Received event network-vif-deleted-6d023b7b-1ffe-467f-b731-8b53fc063b54 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.877 2 INFO nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Neutron deleted interface 6d023b7b-1ffe-467f-b731-8b53fc063b54; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.878 2 DEBUG nova.network.neutron [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.881 2 DEBUG nova.compute.manager [req-f27ce18c-2df7-41e8-a38d-285522fae585 req-07678292-3485-4453-8c4a-93104cee42fc 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Detach interface failed, port_id=6d023b7b-1ffe-467f-b731-8b53fc063b54, reason: Instance 5dbed140-a0ed-4dbb-b782-da386ad68471 could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 10:59:11 np0005473739 nova_compute[259550]: 2025-10-07 14:59:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:59:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2909: 305 pgs: 305 active+clean; 83 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 20 KiB/s rd, 15 KiB/s wr, 31 op/s
Oct  7 10:59:13 np0005473739 nova_compute[259550]: 2025-10-07 14:59:13.692 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:14 np0005473739 podman[427935]: 2025-10-07 14:59:14.102733604 +0000 UTC m=+0.088118623 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:59:14 np0005473739 podman[427934]: 2025-10-07 14:59:14.103106844 +0000 UTC m=+0.087839556 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Oct  7 10:59:14 np0005473739 nova_compute[259550]: 2025-10-07 14:59:14.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2910: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct  7 10:59:15 np0005473739 nova_compute[259550]: 2025-10-07 14:59:15.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:15 np0005473739 nova_compute[259550]: 2025-10-07 14:59:15.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:15 np0005473739 nova_compute[259550]: 2025-10-07 14:59:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:59:15 np0005473739 nova_compute[259550]: 2025-10-07 14:59:15.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 10:59:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:16.711 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '58'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2911: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 38 KiB/s rd, 16 KiB/s wr, 56 op/s
Oct  7 10:59:16 np0005473739 nova_compute[259550]: 2025-10-07 14:59:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:59:18 np0005473739 nova_compute[259550]: 2025-10-07 14:59:18.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2912: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 13 KiB/s wr, 47 op/s
Oct  7 10:59:19 np0005473739 nova_compute[259550]: 2025-10-07 14:59:19.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:19 np0005473739 nova_compute[259550]: 2025-10-07 14:59:19.691 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849144.690233, 27840587-4b28-416e-a84d-176918007fb6 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:59:19 np0005473739 nova_compute[259550]: 2025-10-07 14:59:19.691 2 INFO nova.compute.manager [-] [instance: 27840587-4b28-416e-a84d-176918007fb6] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:59:19 np0005473739 nova_compute[259550]: 2025-10-07 14:59:19.719 2 DEBUG nova.compute.manager [None req-9b1ca84a-24a2-4f0b-bf9f-7adaad1cc013 - - - - - -] [instance: 27840587-4b28-416e-a84d-176918007fb6] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:59:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2913: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 19 KiB/s rd, 1.2 KiB/s wr, 27 op/s
Oct  7 10:59:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_14:59:22
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'vms', 'default.rgw.log', 'default.rgw.control', '.mgr', 'backups', 'cephfs.cephfs.meta', 'volumes', 'cephfs.cephfs.data', 'default.rgw.meta', '.rgw.root']
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 10:59:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2914: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Oct  7 10:59:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 10:59:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:59:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 10:59:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 10:59:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:59:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 10:59:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:59:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 10:59:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:59:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 10:59:23 np0005473739 nova_compute[259550]: 2025-10-07 14:59:23.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:24 np0005473739 nova_compute[259550]: 2025-10-07 14:59:24.295 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849149.2936287, 5dbed140-a0ed-4dbb-b782-da386ad68471 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:59:24 np0005473739 nova_compute[259550]: 2025-10-07 14:59:24.295 2 INFO nova.compute.manager [-] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] VM Stopped (Lifecycle Event)#033[00m
Oct  7 10:59:24 np0005473739 nova_compute[259550]: 2025-10-07 14:59:24.316 2 DEBUG nova.compute.manager [None req-c3b6f157-05da-4124-b253-22904e492994 - - - - - -] [instance: 5dbed140-a0ed-4dbb-b782-da386ad68471] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:59:24 np0005473739 nova_compute[259550]: 2025-10-07 14:59:24.320 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2915: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.2 KiB/s wr, 25 op/s
Oct  7 10:59:24 np0005473739 nova_compute[259550]: 2025-10-07 14:59:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.014 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.014 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:59:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:59:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/302169912' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.476 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.462s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.641 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.643 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3653MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.643 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.643 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.862 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.863 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.923 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.996 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 10:59:25 np0005473739 nova_compute[259550]: 2025-10-07 14:59:25.996 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 10:59:26 np0005473739 nova_compute[259550]: 2025-10-07 14:59:26.011 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 10:59:26 np0005473739 nova_compute[259550]: 2025-10-07 14:59:26.031 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 10:59:26 np0005473739 nova_compute[259550]: 2025-10-07 14:59:26.051 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:59:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:59:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2045185835' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:59:26 np0005473739 nova_compute[259550]: 2025-10-07 14:59:26.488 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.437s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:59:26 np0005473739 nova_compute[259550]: 2025-10-07 14:59:26.494 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:59:26 np0005473739 nova_compute[259550]: 2025-10-07 14:59:26.516 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:59:26 np0005473739 nova_compute[259550]: 2025-10-07 14:59:26.541 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 10:59:26 np0005473739 nova_compute[259550]: 2025-10-07 14:59:26.541 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.898s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2916: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #141. Immutable memtables: 0.
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.695501) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 85] Flushing memtable with next log file: 141
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167695547, "job": 85, "event": "flush_started", "num_memtables": 1, "num_entries": 2052, "num_deletes": 251, "total_data_size": 3387322, "memory_usage": 3448632, "flush_reason": "Manual Compaction"}
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 85] Level-0 flush table #142: started
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167730629, "cf_name": "default", "job": 85, "event": "table_file_creation", "file_number": 142, "file_size": 3320738, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 59043, "largest_seqno": 61094, "table_properties": {"data_size": 3311418, "index_size": 5877, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2373, "raw_key_size": 18811, "raw_average_key_size": 20, "raw_value_size": 3292958, "raw_average_value_size": 3521, "num_data_blocks": 261, "num_entries": 935, "num_filter_entries": 935, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759848944, "oldest_key_time": 1759848944, "file_creation_time": 1759849167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 142, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 85] Flush lasted 35189 microseconds, and 7551 cpu microseconds.
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.730685) [db/flush_job.cc:967] [default] [JOB 85] Level-0 flush table #142: 3320738 bytes OK
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.730711) [db/memtable_list.cc:519] [default] Level-0 commit table #142 started
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.752689) [db/memtable_list.cc:722] [default] Level-0 commit table #142: memtable #1 done
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.752737) EVENT_LOG_v1 {"time_micros": 1759849167752726, "job": 85, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.752763) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 85] Try to delete WAL files size 3378731, prev total WAL file size 3379368, number of live WAL files 2.
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000138.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.757412) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730035373733' seq:72057594037927935, type:22 .. '7061786F730036303235' seq:0, type:0; will stop at (end)
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 86] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 85 Base level 0, inputs: [142(3242KB)], [140(8045KB)]
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167757464, "job": 86, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [142], "files_L6": [140], "score": -1, "input_data_size": 11559498, "oldest_snapshot_seqno": -1}
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 86] Generated table #143: 8069 keys, 9825013 bytes, temperature: kUnknown
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167903903, "cf_name": "default", "job": 86, "event": "table_file_creation", "file_number": 143, "file_size": 9825013, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9773608, "index_size": 30108, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20229, "raw_key_size": 210308, "raw_average_key_size": 26, "raw_value_size": 9632309, "raw_average_value_size": 1193, "num_data_blocks": 1170, "num_entries": 8069, "num_filter_entries": 8069, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849167, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 143, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.904342) [db/compaction/compaction_job.cc:1663] [default] [JOB 86] Compacted 1@0 + 1@6 files to L6 => 9825013 bytes
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.930327) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 78.8 rd, 67.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(3.2, 7.9 +0.0 blob) out(9.4 +0.0 blob), read-write-amplify(6.4) write-amplify(3.0) OK, records in: 8583, records dropped: 514 output_compression: NoCompression
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.930390) EVENT_LOG_v1 {"time_micros": 1759849167930367, "job": 86, "event": "compaction_finished", "compaction_time_micros": 146621, "compaction_time_cpu_micros": 27754, "output_level": 6, "num_output_files": 1, "total_output_size": 9825013, "num_input_records": 8583, "num_output_records": 8069, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000142.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167931477, "job": 86, "event": "table_file_deletion", "file_number": 142}
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000140.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849167933600, "job": 86, "event": "table_file_deletion", "file_number": 140}
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.757260) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.933686) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.933694) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.933696) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.933698) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:27 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:27.933700) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:28 np0005473739 nova_compute[259550]: 2025-10-07 14:59:28.707 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2917: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 10:59:29 np0005473739 nova_compute[259550]: 2025-10-07 14:59:29.322 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2918: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 10:59:31 np0005473739 nova_compute[259550]: 2025-10-07 14:59:31.542 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:59:31 np0005473739 nova_compute[259550]: 2025-10-07 14:59:31.542 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 10:59:31 np0005473739 nova_compute[259550]: 2025-10-07 14:59:31.543 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 10:59:31 np0005473739 nova_compute[259550]: 2025-10-07 14:59:31.560 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 10:59:31 np0005473739 nova_compute[259550]: 2025-10-07 14:59:31.561 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 10:59:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:32 np0005473739 podman[428026]: 2025-10-07 14:59:32.070646325 +0000 UTC m=+0.062117605 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 10:59:32 np0005473739 podman[428027]: 2025-10-07 14:59:32.072635677 +0000 UTC m=+0.058472778 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0)
Oct  7 10:59:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 10:59:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3777546043' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 10:59:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 10:59:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3777546043' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 10:59:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2919: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 10:59:33 np0005473739 nova_compute[259550]: 2025-10-07 14:59:33.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:34 np0005473739 nova_compute[259550]: 2025-10-07 14:59:34.324 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2920: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 10:59:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2921: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 10:59:38 np0005473739 nova_compute[259550]: 2025-10-07 14:59:38.710 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2922: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 10:59:39 np0005473739 nova_compute[259550]: 2025-10-07 14:59:39.326 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2923: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 10:59:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2924: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 10:59:43 np0005473739 nova_compute[259550]: 2025-10-07 14:59:43.635 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:43 np0005473739 nova_compute[259550]: 2025-10-07 14:59:43.635 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:43 np0005473739 nova_compute[259550]: 2025-10-07 14:59:43.652 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 10:59:43 np0005473739 nova_compute[259550]: 2025-10-07 14:59:43.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:43 np0005473739 nova_compute[259550]: 2025-10-07 14:59:43.721 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:43 np0005473739 nova_compute[259550]: 2025-10-07 14:59:43.722 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:43 np0005473739 nova_compute[259550]: 2025-10-07 14:59:43.732 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 10:59:43 np0005473739 nova_compute[259550]: 2025-10-07 14:59:43.733 2 INFO nova.compute.claims [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 10:59:43 np0005473739 nova_compute[259550]: 2025-10-07 14:59:43.819 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:59:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 10:59:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2626730701' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.318 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.499s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.324 2 DEBUG nova.compute.provider_tree [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.328 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.345 2 DEBUG nova.scheduler.client.report [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.374 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.651s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.375 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.417 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.417 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.439 2 INFO nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.455 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.545 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.546 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.546 2 INFO nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Creating image(s)#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.563 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.584 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.603 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.607 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.644 2 DEBUG nova.policy [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a7552cec1354175be418fba9a7588af', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.680 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.681 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.682 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.682 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.704 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:59:44 np0005473739 nova_compute[259550]: 2025-10-07 14:59:44.707 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a6b18035-4aef-4825-90e6-799173979626_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:59:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2925: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 10:59:45 np0005473739 podman[428183]: 2025-10-07 14:59:45.109844357 +0000 UTC m=+0.090135247 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:59:45 np0005473739 podman[428184]: 2025-10-07 14:59:45.114727076 +0000 UTC m=+0.096346171 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 10:59:45 np0005473739 nova_compute[259550]: 2025-10-07 14:59:45.465 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 a6b18035-4aef-4825-90e6-799173979626_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.758s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:59:45 np0005473739 nova_compute[259550]: 2025-10-07 14:59:45.517 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] resizing rbd image a6b18035-4aef-4825-90e6-799173979626_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 10:59:45 np0005473739 nova_compute[259550]: 2025-10-07 14:59:45.734 2 DEBUG nova.objects.instance [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'migration_context' on Instance uuid a6b18035-4aef-4825-90e6-799173979626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:59:45 np0005473739 nova_compute[259550]: 2025-10-07 14:59:45.748 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 10:59:45 np0005473739 nova_compute[259550]: 2025-10-07 14:59:45.749 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Ensure instance console log exists: /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 10:59:45 np0005473739 nova_compute[259550]: 2025-10-07 14:59:45.749 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:45 np0005473739 nova_compute[259550]: 2025-10-07 14:59:45.749 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:45 np0005473739 nova_compute[259550]: 2025-10-07 14:59:45.750 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:46 np0005473739 nova_compute[259550]: 2025-10-07 14:59:46.338 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Successfully created port: 9e613de1-4d71-4293-836f-5f1e121f0bb5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 10:59:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2926: 305 pgs: 305 active+clean; 64 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.5 KiB/s rd, 766 KiB/s wr, 13 op/s
Oct  7 10:59:47 np0005473739 nova_compute[259550]: 2025-10-07 14:59:47.493 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Successfully updated port: 9e613de1-4d71-4293-836f-5f1e121f0bb5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 10:59:47 np0005473739 nova_compute[259550]: 2025-10-07 14:59:47.508 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:59:47 np0005473739 nova_compute[259550]: 2025-10-07 14:59:47.508 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:59:47 np0005473739 nova_compute[259550]: 2025-10-07 14:59:47.509 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 10:59:47 np0005473739 nova_compute[259550]: 2025-10-07 14:59:47.579 2 DEBUG nova.compute.manager [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:59:47 np0005473739 nova_compute[259550]: 2025-10-07 14:59:47.579 2 DEBUG nova.compute.manager [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing instance network info cache due to event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:59:47 np0005473739 nova_compute[259550]: 2025-10-07 14:59:47.579 2 DEBUG oslo_concurrency.lockutils [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:59:47 np0005473739 nova_compute[259550]: 2025-10-07 14:59:47.628 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.728 2 DEBUG nova.network.neutron [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.752 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.752 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Instance network_info: |[{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.753 2 DEBUG oslo_concurrency.lockutils [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.753 2 DEBUG nova.network.neutron [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.755 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Start _get_guest_xml network_info=[{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.759 2 WARNING nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.764 2 DEBUG nova.virt.libvirt.host [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.764 2 DEBUG nova.virt.libvirt.host [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.772 2 DEBUG nova.virt.libvirt.host [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.773 2 DEBUG nova.virt.libvirt.host [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.774 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.774 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.775 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.775 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.775 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.775 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.775 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.776 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.776 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.776 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.777 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.777 2 DEBUG nova.virt.hardware [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 10:59:48 np0005473739 nova_compute[259550]: 2025-10-07 14:59:48.780 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:59:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2927: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 25 op/s
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3398001066' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.294 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.513s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.315 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.320 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.355 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:59:49 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7150cb73-734b-4e70-befa-e81b51a9ae47 does not exist
Oct  7 10:59:49 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0261e093-26f1-4ab7-8475-25462c3b8b2a does not exist
Oct  7 10:59:49 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bfb38346-fab7-4632-9653-228a509f115b does not exist
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 10:59:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1256694600' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.807 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.808 2 DEBUG nova.virt.libvirt.vif [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-821190433',display_name='tempest-TestSnapshotPattern-server-821190433',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-821190433',id=151,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-vnbjscds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:59:44Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=a6b18035-4aef-4825-90e6-799173979626,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], 
"meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.808 2 DEBUG nova.network.os_vif_util [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.809 2 DEBUG nova.network.os_vif_util [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.811 2 DEBUG nova.objects.instance [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid a6b18035-4aef-4825-90e6-799173979626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.830 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] End _get_guest_xml xml=<domain type="kvm">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  <uuid>a6b18035-4aef-4825-90e6-799173979626</uuid>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  <name>instance-00000097</name>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestSnapshotPattern-server-821190433</nova:name>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 14:59:48</nova:creationTime>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <nova:user uuid="1a7552cec1354175be418fba9a7588af">tempest-TestSnapshotPattern-1480624877-project-member</nova:user>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <nova:project uuid="aa7bd91eb3b040c89929aa23c9775dc9">tempest-TestSnapshotPattern-1480624877</nova:project>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <nova:port uuid="9e613de1-4d71-4293-836f-5f1e121f0bb5">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.8" ipVersion="4"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <system>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <entry name="serial">a6b18035-4aef-4825-90e6-799173979626</entry>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <entry name="uuid">a6b18035-4aef-4825-90e6-799173979626</entry>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    </system>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  <os>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  </os>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  <features>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  </features>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  </clock>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  <devices>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a6b18035-4aef-4825-90e6-799173979626_disk">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/a6b18035-4aef-4825-90e6-799173979626_disk.config">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      </source>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      </auth>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    </disk>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:b8:68:98"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <target dev="tap9e613de1-4d"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    </interface>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/console.log" append="off"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    </serial>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <video>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    </video>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    </rng>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 10:59:49 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 10:59:49 np0005473739 nova_compute[259550]:  </devices>
Oct  7 10:59:49 np0005473739 nova_compute[259550]: </domain>
Oct  7 10:59:49 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.833 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Preparing to wait for external event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.833 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.833 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.834 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.834 2 DEBUG nova.virt.libvirt.vif [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T14:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-821190433',display_name='tempest-TestSnapshotPattern-server-821190433',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-821190433',id=151,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-vnbjscds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T14:59:44Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=a6b18035-4aef-4825-90e6-799173979626,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": 
true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.835 2 DEBUG nova.network.os_vif_util [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.835 2 DEBUG nova.network.os_vif_util [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.836 2 DEBUG os_vif [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.836 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.837 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.837 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.841 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9e613de1-4d, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.842 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9e613de1-4d, col_values=(('external_ids', {'iface-id': '9e613de1-4d71-4293-836f-5f1e121f0bb5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:68:98', 'vm-uuid': 'a6b18035-4aef-4825-90e6-799173979626'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.843 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:49 np0005473739 NetworkManager[44949]: <info>  [1759849189.8447] manager: (tap9e613de1-4d): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/664)
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.852 2 INFO os_vif [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d')#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.894 2 DEBUG nova.network.neutron [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updated VIF entry in instance network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.895 2 DEBUG nova.network.neutron [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.921 2 DEBUG oslo_concurrency.lockutils [req-72e2ac2c-e41b-443f-b929-c5b43ec53803 req-82a1be52-6326-45c7-9732-6a155b1d8d83 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.925 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.925 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.925 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No VIF found with MAC fa:16:3e:b8:68:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.925 2 INFO nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Using config drive#033[00m
Oct  7 10:59:49 np0005473739 nova_compute[259550]: 2025-10-07 14:59:49.947 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:59:50 np0005473739 podman[428650]: 2025-10-07 14:59:50.231358727 +0000 UTC m=+0.048152715 container create df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:59:50 np0005473739 podman[428650]: 2025-10-07 14:59:50.203418167 +0000 UTC m=+0.020212175 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:59:50 np0005473739 systemd[1]: Started libpod-conmon-df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b.scope.
Oct  7 10:59:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:59:50 np0005473739 podman[428650]: 2025-10-07 14:59:50.384100979 +0000 UTC m=+0.200895017 container init df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:59:50 np0005473739 podman[428650]: 2025-10-07 14:59:50.391762642 +0000 UTC m=+0.208556630 container start df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:59:50 np0005473739 quirky_ishizaka[428666]: 167 167
Oct  7 10:59:50 np0005473739 systemd[1]: libpod-df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b.scope: Deactivated successfully.
Oct  7 10:59:50 np0005473739 podman[428650]: 2025-10-07 14:59:50.407783036 +0000 UTC m=+0.224577024 container attach df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 10:59:50 np0005473739 podman[428650]: 2025-10-07 14:59:50.409031059 +0000 UTC m=+0.225825047 container died df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 10:59:50 np0005473739 nova_compute[259550]: 2025-10-07 14:59:50.461 2 INFO nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Creating config drive at /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config#033[00m
Oct  7 10:59:50 np0005473739 nova_compute[259550]: 2025-10-07 14:59:50.467 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp8xqf73c execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:59:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 10:59:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:59:50 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 10:59:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7bcee7878689e2f91fb4e4297519b7d022c959de8cb14dd51107eff5e7954a42-merged.mount: Deactivated successfully.
Oct  7 10:59:50 np0005473739 nova_compute[259550]: 2025-10-07 14:59:50.616 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp8xqf73c" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:59:50 np0005473739 nova_compute[259550]: 2025-10-07 14:59:50.642 2 DEBUG nova.storage.rbd_utils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image a6b18035-4aef-4825-90e6-799173979626_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 10:59:50 np0005473739 nova_compute[259550]: 2025-10-07 14:59:50.646 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config a6b18035-4aef-4825-90e6-799173979626_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 10:59:50 np0005473739 podman[428650]: 2025-10-07 14:59:50.66147266 +0000 UTC m=+0.478266648 container remove df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quirky_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 10:59:50 np0005473739 systemd[1]: libpod-conmon-df3b210dadaed2e071f88f9f9ffd2855a91f5e104866858546698785bb40201b.scope: Deactivated successfully.
Oct  7 10:59:50 np0005473739 podman[428728]: 2025-10-07 14:59:50.83867941 +0000 UTC m=+0.045863795 container create f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 10:59:50 np0005473739 nova_compute[259550]: 2025-10-07 14:59:50.874 2 DEBUG oslo_concurrency.processutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config a6b18035-4aef-4825-90e6-799173979626_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.228s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 10:59:50 np0005473739 nova_compute[259550]: 2025-10-07 14:59:50.875 2 INFO nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Deleting local config drive /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626/disk.config because it was imported into RBD.#033[00m
Oct  7 10:59:50 np0005473739 systemd[1]: Started libpod-conmon-f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe.scope.
Oct  7 10:59:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:59:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:50 np0005473739 podman[428728]: 2025-10-07 14:59:50.817352135 +0000 UTC m=+0.024536550 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:59:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2928: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:59:50 np0005473739 kernel: tap9e613de1-4d: entered promiscuous mode
Oct  7 10:59:50 np0005473739 podman[428728]: 2025-10-07 14:59:50.939326093 +0000 UTC m=+0.146510488 container init f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 10:59:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:50Z|01645|binding|INFO|Claiming lport 9e613de1-4d71-4293-836f-5f1e121f0bb5 for this chassis.
Oct  7 10:59:50 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:50Z|01646|binding|INFO|9e613de1-4d71-4293-836f-5f1e121f0bb5: Claiming fa:16:3e:b8:68:98 10.100.0.8
Oct  7 10:59:50 np0005473739 NetworkManager[44949]: <info>  [1759849190.9449] manager: (tap9e613de1-4d): new Tun device (/org/freedesktop/NetworkManager/Devices/665)
Oct  7 10:59:50 np0005473739 nova_compute[259550]: 2025-10-07 14:59:50.942 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:50 np0005473739 podman[428728]: 2025-10-07 14:59:50.949833362 +0000 UTC m=+0.157017747 container start f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:59:50 np0005473739 podman[428728]: 2025-10-07 14:59:50.955621205 +0000 UTC m=+0.162805590 container attach f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:59:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.960 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:68:98 10.100.0.8'], port_security=['fa:16:3e:b8:68:98 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6b18035-4aef-4825-90e6-799173979626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5714012d-b182-4fef-9241-3afcb9c700d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5a9b7c48-78a6-4a08-9a38-f3e228e68bdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f64d47ed-3333-494f-a3a9-07b1b0158b50, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9e613de1-4d71-4293-836f-5f1e121f0bb5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 10:59:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.962 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9e613de1-4d71-4293-836f-5f1e121f0bb5 in datapath 5714012d-b182-4fef-9241-3afcb9c700d6 bound to our chassis#033[00m
Oct  7 10:59:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.964 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5714012d-b182-4fef-9241-3afcb9c700d6#033[00m
Oct  7 10:59:50 np0005473739 systemd-udevd[428766]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:59:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.979 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2b3b0929-0e5f-4fad-9da4-835b2f9edd29]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.980 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5714012d-b1 in ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Oct  7 10:59:50 np0005473739 NetworkManager[44949]: <info>  [1759849190.9927] device (tap9e613de1-4d): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 10:59:50 np0005473739 NetworkManager[44949]: <info>  [1759849190.9941] device (tap9e613de1-4d): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 10:59:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.984 275502 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5714012d-b0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Oct  7 10:59:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.985 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[2f22d0fa-9a97-4b4b-a10f-87a34446e540]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:50 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:50.987 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[335887ec-c008-43d4-ba4c-790952277c11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:50 np0005473739 systemd-machined[214580]: New machine qemu-185-instance-00000097.
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.001 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[228ae7ce-e9f5-4ca8-aeb8-098182970c32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 nova_compute[259550]: 2025-10-07 14:59:51.025 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:51 np0005473739 systemd[1]: Started Virtual Machine qemu-185-instance-00000097.
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.030 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e575df02-254e-450b-8096-783077102a26]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:51Z|01647|binding|INFO|Setting lport 9e613de1-4d71-4293-836f-5f1e121f0bb5 ovn-installed in OVS
Oct  7 10:59:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:51Z|01648|binding|INFO|Setting lport 9e613de1-4d71-4293-836f-5f1e121f0bb5 up in Southbound
Oct  7 10:59:51 np0005473739 nova_compute[259550]: 2025-10-07 14:59:51.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.060 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[49062261-605e-44c8-9dbe-b734bb857b25]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 systemd-udevd[428770]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 10:59:51 np0005473739 NetworkManager[44949]: <info>  [1759849191.0691] manager: (tap5714012d-b0): new Veth device (/org/freedesktop/NetworkManager/Devices/666)
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.068 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b4b22645-9df5-4ae8-abb6-88fb0951c0df]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.106 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[bdd5a55d-18dc-4de4-9df1-2ac527db3e63]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.111 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[0d384bb9-3a3f-4b13-b15e-c36c7797e720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 NetworkManager[44949]: <info>  [1759849191.1378] device (tap5714012d-b0): carrier: link connected
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.140 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[cb70f400-e95f-4735-9b2f-17e1708e6183]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.160 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[b22d82c8-e6cd-4f32-8511-1ca35c35da1c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5714012d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:84:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977469, 'reachable_time': 27955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 428799, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.178 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[f3241a81-2367-4b16-b6d0-efe9165e2dbe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe98:84e6'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977469, 'tstamp': 977469}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 428800, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.200 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7db7ba1b-cc1b-416b-bcf2-dd1e43229dcd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5714012d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:84:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 110, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977469, 'reachable_time': 27955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 96, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 96, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 428801, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.233 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[173e6b7c-d844-40a5-bfaf-0c419f1e393f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.287 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[db3ca3c5-f6ea-42b9-a810-b678cdf30b5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.288 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5714012d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.289 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.289 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5714012d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:51 np0005473739 nova_compute[259550]: 2025-10-07 14:59:51.291 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:51 np0005473739 NetworkManager[44949]: <info>  [1759849191.2955] manager: (tap5714012d-b0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/667)
Oct  7 10:59:51 np0005473739 kernel: tap5714012d-b0: entered promiscuous mode
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.301 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5714012d-b0, col_values=(('external_ids', {'iface-id': '8ae40a35-baff-4538-b31d-4c05f61bc2b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 10:59:51 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:51Z|01649|binding|INFO|Releasing lport 8ae40a35-baff-4538-b31d-4c05f61bc2b8 from this chassis (sb_readonly=0)
Oct  7 10:59:51 np0005473739 nova_compute[259550]: 2025-10-07 14:59:51.304 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:51 np0005473739 nova_compute[259550]: 2025-10-07 14:59:51.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:51 np0005473739 nova_compute[259550]: 2025-10-07 14:59:51.323 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.323 161536 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5714012d-b182-4fef-9241-3afcb9c700d6.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5714012d-b182-4fef-9241-3afcb9c700d6.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.324 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[3675b6fb-ca3e-4b9c-80e0-dcdff84b1111]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.325 161536 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: global
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    log         /dev/log local0 debug
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    log-tag     haproxy-metadata-proxy-5714012d-b182-4fef-9241-3afcb9c700d6
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    user        root
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    group       root
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    maxconn     1024
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    pidfile     /var/lib/neutron/external/pids/5714012d-b182-4fef-9241-3afcb9c700d6.pid.haproxy
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    daemon
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: defaults
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    log global
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    mode http
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    option httplog
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    option dontlognull
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    option http-server-close
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    option forwardfor
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    retries                 3
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    timeout http-request    30s
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    timeout connect         30s
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    timeout client          32s
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    timeout server          32s
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    timeout http-keep-alive 30s
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: listen listener
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    bind 169.254.169.254:80
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    server metadata /var/lib/neutron/metadata_proxy
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]:    http-request add-header X-OVN-Network-ID 5714012d-b182-4fef-9241-3afcb9c700d6
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Oct  7 10:59:51 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 14:59:51.326 161536 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'env', 'PROCESS_TAG=haproxy-5714012d-b182-4fef-9241-3afcb9c700d6', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5714012d-b182-4fef-9241-3afcb9c700d6.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Oct  7 10:59:51 np0005473739 nova_compute[259550]: 2025-10-07 14:59:51.465 2 DEBUG nova.compute.manager [req-17658c69-58c5-42b2-a45c-6e91b25f8e3c req-e6729576-6feb-4f1e-952a-6e0bddb021b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:59:51 np0005473739 nova_compute[259550]: 2025-10-07 14:59:51.466 2 DEBUG oslo_concurrency.lockutils [req-17658c69-58c5-42b2-a45c-6e91b25f8e3c req-e6729576-6feb-4f1e-952a-6e0bddb021b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:51 np0005473739 nova_compute[259550]: 2025-10-07 14:59:51.466 2 DEBUG oslo_concurrency.lockutils [req-17658c69-58c5-42b2-a45c-6e91b25f8e3c req-e6729576-6feb-4f1e-952a-6e0bddb021b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:51 np0005473739 nova_compute[259550]: 2025-10-07 14:59:51.466 2 DEBUG oslo_concurrency.lockutils [req-17658c69-58c5-42b2-a45c-6e91b25f8e3c req-e6729576-6feb-4f1e-952a-6e0bddb021b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:51 np0005473739 nova_compute[259550]: 2025-10-07 14:59:51.466 2 DEBUG nova.compute.manager [req-17658c69-58c5-42b2-a45c-6e91b25f8e3c req-e6729576-6feb-4f1e-952a-6e0bddb021b1 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Processing event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 10:59:51 np0005473739 podman[428836]: 2025-10-07 14:59:51.720658701 +0000 UTC m=+0.058372606 container create c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 10:59:51 np0005473739 systemd[1]: Started libpod-conmon-c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7.scope.
Oct  7 10:59:51 np0005473739 podman[428836]: 2025-10-07 14:59:51.685375438 +0000 UTC m=+0.023089363 image pull 26280da617d52ac64ac1fa9a18a315d65ac237c1373028f8064008a821dbfd8d quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7
Oct  7 10:59:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:59:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0737f8a91a4236f462b27e601747872518f52d58de889974962d2146e39c7d2/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:51 np0005473739 podman[428836]: 2025-10-07 14:59:51.813651472 +0000 UTC m=+0.151365407 container init c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 10:59:51 np0005473739 podman[428836]: 2025-10-07 14:59:51.821354786 +0000 UTC m=+0.159068701 container start c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:59:51 np0005473739 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [NOTICE]   (428907) : New worker (428909) forked
Oct  7 10:59:51 np0005473739 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [NOTICE]   (428907) : Loading success.
Oct  7 10:59:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:52 np0005473739 exciting_lumiere[428748]: --> passed data devices: 0 physical, 3 LVM
Oct  7 10:59:52 np0005473739 exciting_lumiere[428748]: --> relative data size: 1.0
Oct  7 10:59:52 np0005473739 exciting_lumiere[428748]: --> All data devices are unavailable
Oct  7 10:59:52 np0005473739 systemd[1]: libpod-f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe.scope: Deactivated successfully.
Oct  7 10:59:52 np0005473739 systemd[1]: libpod-f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe.scope: Consumed 1.012s CPU time.
Oct  7 10:59:52 np0005473739 podman[428728]: 2025-10-07 14:59:52.046227228 +0000 UTC m=+1.253411623 container died f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:59:52 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4d7e5f2aae8ce8f3763efb924d10457e38aa722f7c4b742a28b3f0019845c509-merged.mount: Deactivated successfully.
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.295 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849192.295185, a6b18035-4aef-4825-90e6-799173979626 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.297 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] VM Started (Lifecycle Event)#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.299 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.306 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.310 2 INFO nova.virt.libvirt.driver [-] [instance: a6b18035-4aef-4825-90e6-799173979626] Instance spawned successfully.#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.311 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.376 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.379 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.406 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.406 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.406 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.407 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.407 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.407 2 DEBUG nova.virt.libvirt.driver [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Oct  7 10:59:52 np0005473739 podman[428728]: 2025-10-07 14:59:52.426337237 +0000 UTC m=+1.633521622 container remove f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_lumiere, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 10:59:52 np0005473739 systemd[1]: libpod-conmon-f8a11c588ee77294b5af95b271e98403862c7f25642e6ef3296a8b956cf5fdbe.scope: Deactivated successfully.
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.453 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.453 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849192.2952929, a6b18035-4aef-4825-90e6-799173979626 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.453 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] VM Paused (Lifecycle Event)#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.480 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.486 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849192.306175, a6b18035-4aef-4825-90e6-799173979626 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.486 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] VM Resumed (Lifecycle Event)#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.489 2 INFO nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Took 7.94 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.489 2 DEBUG nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.655 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.661 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.681 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.695 2 INFO nova.compute.manager [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Took 9.00 seconds to build instance.#033[00m
Oct  7 10:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 10:59:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 10:59:52 np0005473739 nova_compute[259550]: 2025-10-07 14:59:52.713 2 DEBUG oslo_concurrency.lockutils [None req-97bddce1-af45-4a08-9ce0-1e72291fe73a 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.078s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2929: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 17 KiB/s rd, 1.8 MiB/s wr, 27 op/s
Oct  7 10:59:53 np0005473739 podman[429083]: 2025-10-07 14:59:53.075084206 +0000 UTC m=+0.043666457 container create 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:59:53 np0005473739 systemd[1]: Started libpod-conmon-3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7.scope.
Oct  7 10:59:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:59:53 np0005473739 podman[429083]: 2025-10-07 14:59:53.056420632 +0000 UTC m=+0.025002913 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:59:53 np0005473739 podman[429083]: 2025-10-07 14:59:53.166720161 +0000 UTC m=+0.135302442 container init 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 10:59:53 np0005473739 podman[429083]: 2025-10-07 14:59:53.175259897 +0000 UTC m=+0.143842158 container start 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:59:53 np0005473739 podman[429083]: 2025-10-07 14:59:53.179838578 +0000 UTC m=+0.148420919 container attach 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:59:53 np0005473739 optimistic_neumann[429100]: 167 167
Oct  7 10:59:53 np0005473739 systemd[1]: libpod-3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7.scope: Deactivated successfully.
Oct  7 10:59:53 np0005473739 conmon[429100]: conmon 3b2fd1f2e66c6189a99e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7.scope/container/memory.events
Oct  7 10:59:53 np0005473739 podman[429083]: 2025-10-07 14:59:53.184328017 +0000 UTC m=+0.152910288 container died 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:59:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b63eb7c8616cdd72a656982b3b992999736a5950b42ef08ea922a5ec3841dc35-merged.mount: Deactivated successfully.
Oct  7 10:59:53 np0005473739 podman[429083]: 2025-10-07 14:59:53.249011209 +0000 UTC m=+0.217593470 container remove 3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=optimistic_neumann, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 10:59:53 np0005473739 systemd[1]: libpod-conmon-3b2fd1f2e66c6189a99e2bd99e45f1b2bdf4fbee35e9b5fce455976f492d88d7.scope: Deactivated successfully.
Oct  7 10:59:53 np0005473739 podman[429124]: 2025-10-07 14:59:53.440890267 +0000 UTC m=+0.041829828 container create 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 10:59:53 np0005473739 systemd[1]: Started libpod-conmon-473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9.scope.
Oct  7 10:59:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:59:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c756079f35c9a12409fbb3e7865ec690fec67aa364ec2a48b8443cdab0b8682/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c756079f35c9a12409fbb3e7865ec690fec67aa364ec2a48b8443cdab0b8682/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c756079f35c9a12409fbb3e7865ec690fec67aa364ec2a48b8443cdab0b8682/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4c756079f35c9a12409fbb3e7865ec690fec67aa364ec2a48b8443cdab0b8682/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:53 np0005473739 podman[429124]: 2025-10-07 14:59:53.423412794 +0000 UTC m=+0.024352375 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:59:53 np0005473739 podman[429124]: 2025-10-07 14:59:53.517765212 +0000 UTC m=+0.118704783 container init 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3)
Oct  7 10:59:53 np0005473739 podman[429124]: 2025-10-07 14:59:53.526113483 +0000 UTC m=+0.127053034 container start 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:59:53 np0005473739 podman[429124]: 2025-10-07 14:59:53.530580511 +0000 UTC m=+0.131520102 container attach 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True)
Oct  7 10:59:53 np0005473739 nova_compute[259550]: 2025-10-07 14:59:53.558 2 DEBUG nova.compute.manager [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:59:53 np0005473739 nova_compute[259550]: 2025-10-07 14:59:53.560 2 DEBUG oslo_concurrency.lockutils [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 10:59:53 np0005473739 nova_compute[259550]: 2025-10-07 14:59:53.560 2 DEBUG oslo_concurrency.lockutils [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 10:59:53 np0005473739 nova_compute[259550]: 2025-10-07 14:59:53.560 2 DEBUG oslo_concurrency.lockutils [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 10:59:53 np0005473739 nova_compute[259550]: 2025-10-07 14:59:53.560 2 DEBUG nova.compute.manager [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] No waiting events found dispatching network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 10:59:53 np0005473739 nova_compute[259550]: 2025-10-07 14:59:53.561 2 WARNING nova.compute.manager [req-210b6217-e434-4dc2-a8d5-e4169c42813c req-a272c014-5e30-41b2-887d-1dd97e3a798c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received unexpected event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 for instance with vm_state active and task_state None.#033[00m
Oct  7 10:59:53 np0005473739 nova_compute[259550]: 2025-10-07 14:59:53.717 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]: {
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:    "0": [
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:        {
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "devices": [
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "/dev/loop3"
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            ],
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_name": "ceph_lv0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_size": "21470642176",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "name": "ceph_lv0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "tags": {
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.cluster_name": "ceph",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.crush_device_class": "",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.encrypted": "0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.osd_id": "0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.type": "block",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.vdo": "0"
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            },
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "type": "block",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "vg_name": "ceph_vg0"
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:        }
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:    ],
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:    "1": [
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:        {
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "devices": [
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "/dev/loop4"
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            ],
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_name": "ceph_lv1",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_size": "21470642176",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "name": "ceph_lv1",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "tags": {
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.cluster_name": "ceph",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.crush_device_class": "",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.encrypted": "0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.osd_id": "1",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.type": "block",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.vdo": "0"
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            },
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "type": "block",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "vg_name": "ceph_vg1"
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:        }
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:    ],
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:    "2": [
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:        {
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "devices": [
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "/dev/loop5"
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            ],
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_name": "ceph_lv2",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_size": "21470642176",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "name": "ceph_lv2",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "tags": {
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.cephx_lockbox_secret": "",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.cluster_name": "ceph",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.crush_device_class": "",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.encrypted": "0",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.osd_id": "2",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.type": "block",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:                "ceph.vdo": "0"
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            },
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "type": "block",
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:            "vg_name": "ceph_vg2"
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:        }
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]:    ]
Oct  7 10:59:54 np0005473739 vibrant_mccarthy[429141]: }
Oct  7 10:59:54 np0005473739 systemd[1]: libpod-473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9.scope: Deactivated successfully.
Oct  7 10:59:54 np0005473739 podman[429150]: 2025-10-07 14:59:54.400080992 +0000 UTC m=+0.026947155 container died 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:59:54 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4c756079f35c9a12409fbb3e7865ec690fec67aa364ec2a48b8443cdab0b8682-merged.mount: Deactivated successfully.
Oct  7 10:59:54 np0005473739 nova_compute[259550]: 2025-10-07 14:59:54.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:54 np0005473739 NetworkManager[44949]: <info>  [1759849194.8525] manager: (patch-br-int-to-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/668)
Oct  7 10:59:54 np0005473739 NetworkManager[44949]: <info>  [1759849194.8534] manager: (patch-provnet-fee451c8-553b-4b1e-ac42-8a95db610ae1-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/669)
Oct  7 10:59:54 np0005473739 nova_compute[259550]: 2025-10-07 14:59:54.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2930: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.8 MiB/s wr, 70 op/s
Oct  7 10:59:54 np0005473739 nova_compute[259550]: 2025-10-07 14:59:54.941 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:54 np0005473739 ovn_controller[151684]: 2025-10-07T14:59:54Z|01650|binding|INFO|Releasing lport 8ae40a35-baff-4538-b31d-4c05f61bc2b8 from this chassis (sb_readonly=0)
Oct  7 10:59:54 np0005473739 nova_compute[259550]: 2025-10-07 14:59:54.949 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:55 np0005473739 nova_compute[259550]: 2025-10-07 14:59:55.272 2 DEBUG nova.compute.manager [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 10:59:55 np0005473739 nova_compute[259550]: 2025-10-07 14:59:55.273 2 DEBUG nova.compute.manager [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing instance network info cache due to event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 10:59:55 np0005473739 nova_compute[259550]: 2025-10-07 14:59:55.273 2 DEBUG oslo_concurrency.lockutils [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 10:59:55 np0005473739 nova_compute[259550]: 2025-10-07 14:59:55.274 2 DEBUG oslo_concurrency.lockutils [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 10:59:55 np0005473739 nova_compute[259550]: 2025-10-07 14:59:55.274 2 DEBUG nova.network.neutron [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 10:59:55 np0005473739 podman[429150]: 2025-10-07 14:59:55.419713887 +0000 UTC m=+1.046580030 container remove 473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vibrant_mccarthy, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 10:59:55 np0005473739 systemd[1]: libpod-conmon-473b55c7ca030a9c782031dfb9462e504133bf725c14b0b6114c3a95725c4ef9.scope: Deactivated successfully.
Oct  7 10:59:56 np0005473739 podman[429306]: 2025-10-07 14:59:56.058953044 +0000 UTC m=+0.037460672 container create 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:59:56 np0005473739 systemd[1]: Started libpod-conmon-7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44.scope.
Oct  7 10:59:56 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:59:56 np0005473739 podman[429306]: 2025-10-07 14:59:56.042780316 +0000 UTC m=+0.021287954 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:59:56 np0005473739 podman[429306]: 2025-10-07 14:59:56.154695448 +0000 UTC m=+0.133203116 container init 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True)
Oct  7 10:59:56 np0005473739 podman[429306]: 2025-10-07 14:59:56.160741478 +0000 UTC m=+0.139249106 container start 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:59:56 np0005473739 pedantic_pasteur[429323]: 167 167
Oct  7 10:59:56 np0005473739 podman[429306]: 2025-10-07 14:59:56.164750744 +0000 UTC m=+0.143258372 container attach 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:59:56 np0005473739 systemd[1]: libpod-7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44.scope: Deactivated successfully.
Oct  7 10:59:56 np0005473739 podman[429306]: 2025-10-07 14:59:56.166908641 +0000 UTC m=+0.145416269 container died 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 10:59:56 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1532678f27fd4fa9a72350c65d3906fec359699e1c1bc49167ce332d057521d3-merged.mount: Deactivated successfully.
Oct  7 10:59:56 np0005473739 nova_compute[259550]: 2025-10-07 14:59:56.203 2 DEBUG nova.network.neutron [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updated VIF entry in instance network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 10:59:56 np0005473739 nova_compute[259550]: 2025-10-07 14:59:56.205 2 DEBUG nova.network.neutron [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 10:59:56 np0005473739 podman[429306]: 2025-10-07 14:59:56.222179794 +0000 UTC m=+0.200687422 container remove 7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_pasteur, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 10:59:56 np0005473739 nova_compute[259550]: 2025-10-07 14:59:56.223 2 DEBUG oslo_concurrency.lockutils [req-4b7ce120-6321-4a47-9fa0-1b98c0de9b98 req-6f8d2663-ff95-4482-b563-dba42271f98a 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 10:59:56 np0005473739 systemd[1]: libpod-conmon-7db6af7b105132c438df236ab3e07e22eb8491fd523fd7cc70e409708a5d6c44.scope: Deactivated successfully.
Oct  7 10:59:56 np0005473739 podman[429347]: 2025-10-07 14:59:56.377090654 +0000 UTC m=+0.037739180 container create 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 10:59:56 np0005473739 systemd[1]: Started libpod-conmon-6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f.scope.
Oct  7 10:59:56 np0005473739 systemd[1]: Started libcrun container.
Oct  7 10:59:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300897d88ac58b7bc37499001b7428874fb3d14637811830f411c1280195a44/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:56 np0005473739 podman[429347]: 2025-10-07 14:59:56.361779949 +0000 UTC m=+0.022428495 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 10:59:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300897d88ac58b7bc37499001b7428874fb3d14637811830f411c1280195a44/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300897d88ac58b7bc37499001b7428874fb3d14637811830f411c1280195a44/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:56 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2300897d88ac58b7bc37499001b7428874fb3d14637811830f411c1280195a44/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 10:59:56 np0005473739 podman[429347]: 2025-10-07 14:59:56.477011568 +0000 UTC m=+0.137660124 container init 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 10:59:56 np0005473739 podman[429347]: 2025-10-07 14:59:56.484584189 +0000 UTC m=+0.145232715 container start 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 10:59:56 np0005473739 podman[429347]: 2025-10-07 14:59:56.491966184 +0000 UTC m=+0.152614710 container attach 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #144. Immutable memtables: 0.
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.875996) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 87] Flushing memtable with next log file: 144
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196876285, "job": 87, "event": "flush_started", "num_memtables": 1, "num_entries": 503, "num_deletes": 257, "total_data_size": 420171, "memory_usage": 429656, "flush_reason": "Manual Compaction"}
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 87] Level-0 flush table #145: started
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196881960, "cf_name": "default", "job": 87, "event": "table_file_creation", "file_number": 145, "file_size": 416094, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61095, "largest_seqno": 61597, "table_properties": {"data_size": 413308, "index_size": 757, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 965, "raw_key_size": 6642, "raw_average_key_size": 18, "raw_value_size": 407658, "raw_average_value_size": 1129, "num_data_blocks": 35, "num_entries": 361, "num_filter_entries": 361, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849167, "oldest_key_time": 1759849167, "file_creation_time": 1759849196, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 145, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 87] Flush lasted 6009 microseconds, and 2334 cpu microseconds.
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.882013) [db/flush_job.cc:967] [default] [JOB 87] Level-0 flush table #145: 416094 bytes OK
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.882034) [db/memtable_list.cc:519] [default] Level-0 commit table #145 started
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.883486) [db/memtable_list.cc:722] [default] Level-0 commit table #145: memtable #1 done
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.883500) EVENT_LOG_v1 {"time_micros": 1759849196883496, "job": 87, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.883517) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 87] Try to delete WAL files size 417209, prev total WAL file size 417209, number of live WAL files 2.
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000141.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.884076) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032353135' seq:72057594037927935, type:22 .. '6C6F676D0032373638' seq:0, type:0; will stop at (end)
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 88] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 87 Base level 0, inputs: [145(406KB)], [143(9594KB)]
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196884148, "job": 88, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [145], "files_L6": [143], "score": -1, "input_data_size": 10241107, "oldest_snapshot_seqno": -1}
Oct  7 10:59:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2931: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 99 op/s
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 88] Generated table #146: 7906 keys, 10129329 bytes, temperature: kUnknown
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196963213, "cf_name": "default", "job": 88, "event": "table_file_creation", "file_number": 146, "file_size": 10129329, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10078130, "index_size": 30350, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 19781, "raw_key_size": 207880, "raw_average_key_size": 26, "raw_value_size": 9938710, "raw_average_value_size": 1257, "num_data_blocks": 1178, "num_entries": 7906, "num_filter_entries": 7906, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849196, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 146, "seqno_to_time_mapping": "N/A"}}
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.963408) [db/compaction/compaction_job.cc:1663] [default] [JOB 88] Compacted 1@0 + 1@6 files to L6 => 10129329 bytes
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.969219) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 129.4 rd, 128.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.4, 9.4 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(49.0) write-amplify(24.3) OK, records in: 8430, records dropped: 524 output_compression: NoCompression
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.969239) EVENT_LOG_v1 {"time_micros": 1759849196969230, "job": 88, "event": "compaction_finished", "compaction_time_micros": 79116, "compaction_time_cpu_micros": 35192, "output_level": 6, "num_output_files": 1, "total_output_size": 10129329, "num_input_records": 8430, "num_output_records": 7906, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000145.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196969422, "job": 88, "event": "table_file_deletion", "file_number": 145}
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000143.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849196971154, "job": 88, "event": "table_file_deletion", "file_number": 143}
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.883858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.971201) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.971207) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.971208) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.971210) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-14:59:56.971211) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]: {
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "osd_id": 2,
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "type": "bluestore"
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:    },
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "osd_id": 1,
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "type": "bluestore"
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:    },
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "osd_id": 0,
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:        "type": "bluestore"
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]:    }
Oct  7 10:59:57 np0005473739 determined_leavitt[429364]: }
Oct  7 10:59:57 np0005473739 systemd[1]: libpod-6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f.scope: Deactivated successfully.
Oct  7 10:59:57 np0005473739 systemd[1]: libpod-6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f.scope: Consumed 1.013s CPU time.
Oct  7 10:59:57 np0005473739 podman[429347]: 2025-10-07 14:59:57.499012305 +0000 UTC m=+1.159660831 container died 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 10:59:57 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2300897d88ac58b7bc37499001b7428874fb3d14637811830f411c1280195a44-merged.mount: Deactivated successfully.
Oct  7 10:59:57 np0005473739 podman[429347]: 2025-10-07 14:59:57.564431537 +0000 UTC m=+1.225080063 container remove 6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=determined_leavitt, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 10:59:57 np0005473739 systemd[1]: libpod-conmon-6c877f712303aa2358e01d2285bea2836253997c0a826a6583e79640f10db16f.scope: Deactivated successfully.
Oct  7 10:59:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 10:59:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:59:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 10:59:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:59:57 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 97819306-58ca-4909-9a4b-248c36972704 does not exist
Oct  7 10:59:57 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c4a142a5-7ec5-452b-9704-c8432c8d2778 does not exist
Oct  7 10:59:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:59:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 10:59:58 np0005473739 nova_compute[259550]: 2025-10-07 14:59:58.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 10:59:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2932: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.0 MiB/s wr, 87 op/s
Oct  7 10:59:59 np0005473739 nova_compute[259550]: 2025-10-07 14:59:59.848 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:00.097 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:00:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:00.097 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:00:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:00.098 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:00:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2933: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 74 op/s
Oct  7 11:00:01 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2934: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 73 op/s
Oct  7 11:00:03 np0005473739 podman[429461]: 2025-10-07 15:00:03.080104509 +0000 UTC m=+0.065461994 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:00:03 np0005473739 podman[429460]: 2025-10-07 15:00:03.080187521 +0000 UTC m=+0.068751980 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 11:00:03 np0005473739 nova_compute[259550]: 2025-10-07 15:00:03.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:04 np0005473739 nova_compute[259550]: 2025-10-07 15:00:04.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2935: 305 pgs: 305 active+clean; 91 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 465 KiB/s wr, 77 op/s
Oct  7 11:00:05 np0005473739 nova_compute[259550]: 2025-10-07 15:00:05.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:00:06 np0005473739 ovn_controller[151684]: 2025-10-07T15:00:06Z|00208|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:68:98 10.100.0.8
Oct  7 11:00:06 np0005473739 ovn_controller[151684]: 2025-10-07T15:00:06Z|00209|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:68:98 10.100.0.8
Oct  7 11:00:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2936: 305 pgs: 305 active+clean; 104 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.4 MiB/s wr, 71 op/s
Oct  7 11:00:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:08 np0005473739 nova_compute[259550]: 2025-10-07 15:00:08.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2937: 305 pgs: 305 active+clean; 117 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 333 KiB/s rd, 2.1 MiB/s wr, 52 op/s
Oct  7 11:00:09 np0005473739 nova_compute[259550]: 2025-10-07 15:00:09.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:09 np0005473739 nova_compute[259550]: 2025-10-07 15:00:09.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:00:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2938: 305 pgs: 305 active+clean; 120 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  7 11:00:10 np0005473739 nova_compute[259550]: 2025-10-07 15:00:10.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:00:11 np0005473739 nova_compute[259550]: 2025-10-07 15:00:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:00:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2939: 305 pgs: 305 active+clean; 120 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 313 KiB/s rd, 2.1 MiB/s wr, 61 op/s
Oct  7 11:00:13 np0005473739 nova_compute[259550]: 2025-10-07 15:00:13.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:14 np0005473739 nova_compute[259550]: 2025-10-07 15:00:14.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2940: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 325 KiB/s rd, 2.1 MiB/s wr, 62 op/s
Oct  7 11:00:15 np0005473739 nova_compute[259550]: 2025-10-07 15:00:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:00:15 np0005473739 nova_compute[259550]: 2025-10-07 15:00:15.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:00:16 np0005473739 podman[429498]: 2025-10-07 15:00:16.064056119 +0000 UTC m=+0.049755968 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  7 11:00:16 np0005473739 podman[429499]: 2025-10-07 15:00:16.128303739 +0000 UTC m=+0.111667355 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251001)
Oct  7 11:00:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2941: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 322 KiB/s rd, 1.7 MiB/s wr, 59 op/s
Oct  7 11:00:16 np0005473739 nova_compute[259550]: 2025-10-07 15:00:16.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:00:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:18 np0005473739 nova_compute[259550]: 2025-10-07 15:00:18.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2942: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 96 KiB/s rd, 775 KiB/s wr, 22 op/s
Oct  7 11:00:19 np0005473739 nova_compute[259550]: 2025-10-07 15:00:19.859 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2943: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 56 KiB/s wr, 12 op/s
Oct  7 11:00:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:00:22
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.rgw.root', 'backups', 'default.rgw.log', 'cephfs.cephfs.meta', 'images', 'cephfs.cephfs.data', 'default.rgw.meta', '.mgr', 'volumes', 'default.rgw.control', 'vms']
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:00:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2944: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 1 op/s
Oct  7 11:00:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:00:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:00:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:00:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:00:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:00:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:00:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:00:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:00:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:00:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:00:23 np0005473739 nova_compute[259550]: 2025-10-07 15:00:23.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:24 np0005473739 ovn_controller[151684]: 2025-10-07T15:00:24Z|01651|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Oct  7 11:00:24 np0005473739 nova_compute[259550]: 2025-10-07 15:00:24.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2945: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 14 KiB/s wr, 1 op/s
Oct  7 11:00:25 np0005473739 nova_compute[259550]: 2025-10-07 15:00:25.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:00:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2946: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s wr, 0 op/s
Oct  7 11:00:26 np0005473739 nova_compute[259550]: 2025-10-07 15:00:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.025 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.026 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.026 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.026 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.027 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:00:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:00:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3712701967' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.478 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.576 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.577 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.813 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.815 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3420MB free_disk=59.942745208740234GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.815 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.815 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.888 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a6b18035-4aef-4825-90e6-799173979626 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.889 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.889 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:00:27 np0005473739 nova_compute[259550]: 2025-10-07 15:00:27.933 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:00:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:00:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3550013779' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:00:28 np0005473739 nova_compute[259550]: 2025-10-07 15:00:28.500 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.566s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:00:28 np0005473739 nova_compute[259550]: 2025-10-07 15:00:28.506 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:00:28 np0005473739 nova_compute[259550]: 2025-10-07 15:00:28.530 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:00:28 np0005473739 nova_compute[259550]: 2025-10-07 15:00:28.551 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:00:28 np0005473739 nova_compute[259550]: 2025-10-07 15:00:28.551 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.736s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:00:28 np0005473739 nova_compute[259550]: 2025-10-07 15:00:28.732 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2947: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.7 KiB/s rd, 19 KiB/s wr, 1 op/s
Oct  7 11:00:29 np0005473739 nova_compute[259550]: 2025-10-07 15:00:29.223 2 DEBUG nova.compute.manager [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 11:00:29 np0005473739 nova_compute[259550]: 2025-10-07 15:00:29.272 2 INFO nova.compute.manager [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] instance snapshotting#033[00m
Oct  7 11:00:29 np0005473739 nova_compute[259550]: 2025-10-07 15:00:29.547 2 INFO nova.virt.libvirt.driver [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Beginning live snapshot process#033[00m
Oct  7 11:00:29 np0005473739 nova_compute[259550]: 2025-10-07 15:00:29.692 2 DEBUG nova.virt.libvirt.imagebackend [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No parent info for 1c7e024e-3dd7-433b-91ff-f363a3d5a581; asking the Image API where its store is _get_parent_pool /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1163#033[00m
Oct  7 11:00:29 np0005473739 nova_compute[259550]: 2025-10-07 15:00:29.864 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:29 np0005473739 nova_compute[259550]: 2025-10-07 15:00:29.938 2 DEBUG nova.storage.rbd_utils [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] creating snapshot(f3fa92b8d0f948688249e80b56b0cd9a) on rbd image(a6b18035-4aef-4825-90e6-799173979626_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 11:00:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e284 do_prune osdmap full prune enabled
Oct  7 11:00:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e285 e285: 3 total, 3 up, 3 in
Oct  7 11:00:30 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e285: 3 total, 3 up, 3 in
Oct  7 11:00:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2949: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.8 KiB/s rd, 23 KiB/s wr, 4 op/s
Oct  7 11:00:30 np0005473739 nova_compute[259550]: 2025-10-07 15:00:30.967 2 DEBUG nova.storage.rbd_utils [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] cloning vms/a6b18035-4aef-4825-90e6-799173979626_disk@f3fa92b8d0f948688249e80b56b0cd9a to images/a45275a6-57fe-4099-b442-be8d2cb87827 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 11:00:31 np0005473739 nova_compute[259550]: 2025-10-07 15:00:31.121 2 DEBUG nova.storage.rbd_utils [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] flattening images/a45275a6-57fe-4099-b442-be8d2cb87827 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 11:00:31 np0005473739 nova_compute[259550]: 2025-10-07 15:00:31.552 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:00:31 np0005473739 nova_compute[259550]: 2025-10-07 15:00:31.552 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:00:31 np0005473739 nova_compute[259550]: 2025-10-07 15:00:31.552 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:00:31 np0005473739 nova_compute[259550]: 2025-10-07 15:00:31.602 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 11:00:31 np0005473739 nova_compute[259550]: 2025-10-07 15:00:31.603 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 11:00:31 np0005473739 nova_compute[259550]: 2025-10-07 15:00:31.603 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 11:00:31 np0005473739 nova_compute[259550]: 2025-10-07 15:00:31.603 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6b18035-4aef-4825-90e6-799173979626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 11:00:31 np0005473739 nova_compute[259550]: 2025-10-07 15:00:31.621 2 DEBUG nova.storage.rbd_utils [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] removing snapshot(f3fa92b8d0f948688249e80b56b0cd9a) on rbd image(a6b18035-4aef-4825-90e6-799173979626_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 11:00:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e285 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e285 do_prune osdmap full prune enabled
Oct  7 11:00:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e286 e286: 3 total, 3 up, 3 in
Oct  7 11:00:32 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e286: 3 total, 3 up, 3 in
Oct  7 11:00:32 np0005473739 nova_compute[259550]: 2025-10-07 15:00:32.701 2 DEBUG nova.storage.rbd_utils [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] creating snapshot(snap) on rbd image(a45275a6-57fe-4099-b442-be8d2cb87827) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 11:00:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:00:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1919368522' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:00:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:00:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1919368522' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0007603540934101713 of space, bias 1.0, pg target 0.2281062280230514 quantized to 32 (current 32)
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:00:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2951: 305 pgs: 305 active+clean; 121 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 4.7 KiB/s rd, 28 KiB/s wr, 5 op/s
Oct  7 11:00:33 np0005473739 nova_compute[259550]: 2025-10-07 15:00:33.487 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 11:00:33 np0005473739 nova_compute[259550]: 2025-10-07 15:00:33.506 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 11:00:33 np0005473739 nova_compute[259550]: 2025-10-07 15:00:33.506 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 11:00:33 np0005473739 nova_compute[259550]: 2025-10-07 15:00:33.507 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:00:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e286 do_prune osdmap full prune enabled
Oct  7 11:00:33 np0005473739 nova_compute[259550]: 2025-10-07 15:00:33.735 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e287 e287: 3 total, 3 up, 3 in
Oct  7 11:00:33 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e287: 3 total, 3 up, 3 in
Oct  7 11:00:34 np0005473739 podman[429730]: 2025-10-07 15:00:34.094650808 +0000 UTC m=+0.056629549 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Oct  7 11:00:34 np0005473739 podman[429731]: 2025-10-07 15:00:34.094567956 +0000 UTC m=+0.056176518 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=iscsid, managed_by=edpm_ansible)
Oct  7 11:00:34 np0005473739 nova_compute[259550]: 2025-10-07 15:00:34.866 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2953: 305 pgs: 305 active+clean; 189 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 4.4 MiB/s rd, 6.6 MiB/s wr, 124 op/s
Oct  7 11:00:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:00:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 13K writes, 61K keys, 13K commit groups, 1.0 writes per commit group, ingest: 0.08 GB, 0.02 MB/s#012Cumulative WAL: 13K writes, 13K syncs, 1.00 writes per sync, written: 0.08 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1355 writes, 6386 keys, 1355 commit groups, 1.0 writes per commit group, ingest: 8.73 MB, 0.01 MB/s#012Interval WAL: 1355 writes, 1355 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     70.2      1.07              0.23        44    0.024       0      0       0.0       0.0#012  L6      1/0    9.66 MB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   4.7    138.0    116.6      3.05              1.02        43    0.071    274K    23K       0.0       0.0#012 Sum      1/0    9.66 MB   0.0      0.4     0.1      0.3       0.4      0.1       0.0   5.7    102.2    104.6      4.12              1.25        87    0.047    274K    23K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.1       0.1      0.0       0.0   7.2     82.5     84.8      0.76              0.19        12    0.063     49K   3011       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.4     0.1      0.3       0.3      0.0       0.0   0.0    138.0    116.6      3.05              1.02        43    0.071    274K    23K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     70.4      1.06              0.23        43    0.025       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 5400.0 total, 600.0 interval#012Flush(GB): cumulative 0.073, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.42 GB write, 0.08 MB/s write, 0.41 GB read, 0.08 MB/s read, 4.1 seconds#012Interval compaction: 0.06 GB write, 0.11 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.8 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 47.98 MB table_size: 0 occupancy: 18446744073709551615 collections: 10 last_copies: 0 last_secs: 0.000743 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3105,46.01 MB,15.1339%) FilterBlock(88,762.61 KB,0.244979%) IndexBlock(88,1.23 MB,0.405507%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  7 11:00:35 np0005473739 nova_compute[259550]: 2025-10-07 15:00:35.642 2 INFO nova.virt.libvirt.driver [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Snapshot image upload complete#033[00m
Oct  7 11:00:35 np0005473739 nova_compute[259550]: 2025-10-07 15:00:35.643 2 INFO nova.compute.manager [None req-b5066672-50e6-40db-90a8-90fa95770df1 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Took 6.37 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  7 11:00:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2954: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.3 MiB/s rd, 7.3 MiB/s wr, 160 op/s
Oct  7 11:00:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:38 np0005473739 nova_compute[259550]: 2025-10-07 15:00:38.753 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2955: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.9 MiB/s rd, 5.8 MiB/s wr, 131 op/s
Oct  7 11:00:39 np0005473739 nova_compute[259550]: 2025-10-07 15:00:39.611 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:00:39 np0005473739 nova_compute[259550]: 2025-10-07 15:00:39.611 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:00:39 np0005473739 nova_compute[259550]: 2025-10-07 15:00:39.629 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 11:00:39 np0005473739 nova_compute[259550]: 2025-10-07 15:00:39.694 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:00:39 np0005473739 nova_compute[259550]: 2025-10-07 15:00:39.694 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:00:39 np0005473739 nova_compute[259550]: 2025-10-07 15:00:39.702 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 11:00:39 np0005473739 nova_compute[259550]: 2025-10-07 15:00:39.702 2 INFO nova.compute.claims [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 11:00:39 np0005473739 nova_compute[259550]: 2025-10-07 15:00:39.852 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:00:39 np0005473739 nova_compute[259550]: 2025-10-07 15:00:39.890 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:00:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1994964867' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.341 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.489s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.349 2 DEBUG nova.compute.provider_tree [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.390 2 DEBUG nova.scheduler.client.report [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.432 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.738s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.433 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.486 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.486 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.506 2 INFO nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.531 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.626 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.627 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.628 2 INFO nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Creating image(s)#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.657 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.687 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.714 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.718 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "bbd3df744fb203e041adee356c0511b079943c80" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:00:40 np0005473739 nova_compute[259550]: 2025-10-07 15:00:40.719 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "bbd3df744fb203e041adee356c0511b079943c80" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:00:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2956: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 5.6 MiB/s rd, 5.6 MiB/s wr, 125 op/s
Oct  7 11:00:41 np0005473739 nova_compute[259550]: 2025-10-07 15:00:41.000 2 DEBUG nova.virt.libvirt.imagebackend [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Image locations are: [{'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/a45275a6-57fe-4099-b442-be8d2cb87827/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/a45275a6-57fe-4099-b442-be8d2cb87827/snap', 'metadata': {}}] clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1085#033[00m
Oct  7 11:00:41 np0005473739 nova_compute[259550]: 2025-10-07 15:00:41.046 2 DEBUG nova.virt.libvirt.imagebackend [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Selected location: {'url': 'rbd://82044f27-a8da-5b2a-a297-ff6afc620e1f/images/a45275a6-57fe-4099-b442-be8d2cb87827/snap', 'metadata': {'store': 'default_backend'}} clone /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagebackend.py:1094#033[00m
Oct  7 11:00:41 np0005473739 nova_compute[259550]: 2025-10-07 15:00:41.047 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] cloning images/a45275a6-57fe-4099-b442-be8d2cb87827@snap to None/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 11:00:41 np0005473739 nova_compute[259550]: 2025-10-07 15:00:41.128 2 DEBUG nova.policy [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1a7552cec1354175be418fba9a7588af', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Oct  7 11:00:41 np0005473739 nova_compute[259550]: 2025-10-07 15:00:41.209 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "bbd3df744fb203e041adee356c0511b079943c80" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.490s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:00:41 np0005473739 nova_compute[259550]: 2025-10-07 15:00:41.430 2 DEBUG nova.objects.instance [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'migration_context' on Instance uuid 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 11:00:41 np0005473739 nova_compute[259550]: 2025-10-07 15:00:41.542 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 11:00:41 np0005473739 nova_compute[259550]: 2025-10-07 15:00:41.543 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Ensure instance console log exists: /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 11:00:41 np0005473739 nova_compute[259550]: 2025-10-07 15:00:41.543 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:00:41 np0005473739 nova_compute[259550]: 2025-10-07 15:00:41.543 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:00:41 np0005473739 nova_compute[259550]: 2025-10-07 15:00:41.544 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:00:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e287 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e287 do_prune osdmap full prune enabled
Oct  7 11:00:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 e288: 3 total, 3 up, 3 in
Oct  7 11:00:42 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e288: 3 total, 3 up, 3 in
Oct  7 11:00:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2958: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.3 MiB/s rd, 829 KiB/s wr, 36 op/s
Oct  7 11:00:43 np0005473739 nova_compute[259550]: 2025-10-07 15:00:43.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:43.172 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=59, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=58) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 11:00:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:43.173 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 11:00:43 np0005473739 nova_compute[259550]: 2025-10-07 15:00:43.290 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Successfully created port: fc3eccef-04e2-406f-b8a2-3b5015d4e10a _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Oct  7 11:00:43 np0005473739 nova_compute[259550]: 2025-10-07 15:00:43.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:44.175 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '59'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:00:44 np0005473739 nova_compute[259550]: 2025-10-07 15:00:44.192 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Successfully updated port: fc3eccef-04e2-406f-b8a2-3b5015d4e10a _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Oct  7 11:00:44 np0005473739 nova_compute[259550]: 2025-10-07 15:00:44.312 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 11:00:44 np0005473739 nova_compute[259550]: 2025-10-07 15:00:44.313 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquired lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 11:00:44 np0005473739 nova_compute[259550]: 2025-10-07 15:00:44.313 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Oct  7 11:00:44 np0005473739 nova_compute[259550]: 2025-10-07 15:00:44.384 2 DEBUG nova.compute.manager [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:00:44 np0005473739 nova_compute[259550]: 2025-10-07 15:00:44.384 2 DEBUG nova.compute.manager [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing instance network info cache due to event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 11:00:44 np0005473739 nova_compute[259550]: 2025-10-07 15:00:44.384 2 DEBUG oslo_concurrency.lockutils [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 11:00:44 np0005473739 nova_compute[259550]: 2025-10-07 15:00:44.637 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Oct  7 11:00:44 np0005473739 nova_compute[259550]: 2025-10-07 15:00:44.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2959: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.1 MiB/s rd, 755 KiB/s wr, 47 op/s
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.512 2 DEBUG nova.network.neutron [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updating instance_info_cache with network_info: [{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.534 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Releasing lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.535 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance network_info: |[{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.535 2 DEBUG oslo_concurrency.lockutils [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.535 2 DEBUG nova.network.neutron [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.538 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Start _get_guest_xml network_info=[{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T15:00:29Z,direct_url=<?>,disk_format='raw',id=a45275a6-57fe-4099-b442-be8d2cb87827,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1465212491',owner='aa7bd91eb3b040c89929aa23c9775dc9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T15:00:36Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': 'a45275a6-57fe-4099-b442-be8d2cb87827'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.542 2 WARNING nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.547 2 DEBUG nova.virt.libvirt.host [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.547 2 DEBUG nova.virt.libvirt.host [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.552 2 DEBUG nova.virt.libvirt.host [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.553 2 DEBUG nova.virt.libvirt.host [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.554 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.554 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='',container_format='bare',created_at=2025-10-07T15:00:29Z,direct_url=<?>,disk_format='raw',id=a45275a6-57fe-4099-b442-be8d2cb87827,min_disk=1,min_ram=0,name='tempest-TestSnapshotPatternsnapshot-1465212491',owner='aa7bd91eb3b040c89929aa23c9775dc9',properties=ImageMetaProps,protected=<?>,size=1073741824,status='active',tags=<?>,updated_at=2025-10-07T15:00:36Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.554 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.554 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.555 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.555 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.555 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.556 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.556 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.556 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.556 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.557 2 DEBUG nova.virt.hardware [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Oct  7 11:00:46 np0005473739 nova_compute[259550]: 2025-10-07 15:00:46.559 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:00:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2960: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.9 KiB/s wr, 37 op/s
Oct  7 11:00:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 11:00:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204450414' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.002 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.443s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.026 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.030 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:00:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:47 np0005473739 podman[429987]: 2025-10-07 15:00:47.064515035 +0000 UTC m=+0.058822368 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 11:00:47 np0005473739 podman[429988]: 2025-10-07 15:00:47.093945664 +0000 UTC m=+0.084821656 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Oct  7 11:00:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 11:00:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/122746718' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.501 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.471s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.503 2 DEBUG nova.virt.libvirt.vif [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T15:00:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-257341137',display_name='tempest-TestSnapshotPattern-server-257341137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-257341137',id=152,image_ref='a45275a6-57fe-4099-b442-be8d2cb87827',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-tb8l5uf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='a6b18035-4aef-4825-90e6-799173979626',image_min_disk='1',image_min_ram='0',image_owner_id='aa7bd91eb3b040c89929aa23c9775dc9',image_owner_project_name='tempest-TestSnapshotPattern-1480624877',image_owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member',image_user_id='1a7552cec1354175be418fba9a7588af',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T15:00:40Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=98da620
5-c6cb-48d4-9502-fa1ca0f3e4ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.503 2 DEBUG nova.network.os_vif_util [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.504 2 DEBUG nova.network.os_vif_util [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.505 2 DEBUG nova.objects.instance [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.521 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] End _get_guest_xml xml=<domain type="kvm">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  <uuid>98da6205-c6cb-48d4-9502-fa1ca0f3e4ce</uuid>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  <name>instance-00000098</name>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <nova:name>tempest-TestSnapshotPattern-server-257341137</nova:name>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 15:00:46</nova:creationTime>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <nova:user uuid="1a7552cec1354175be418fba9a7588af">tempest-TestSnapshotPattern-1480624877-project-member</nova:user>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <nova:project uuid="aa7bd91eb3b040c89929aa23c9775dc9">tempest-TestSnapshotPattern-1480624877</nova:project>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="a45275a6-57fe-4099-b442-be8d2cb87827"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <nova:ports>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <nova:port uuid="fc3eccef-04e2-406f-b8a2-3b5015d4e10a">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        </nova:port>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      </nova:ports>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <system>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <entry name="serial">98da6205-c6cb-48d4-9502-fa1ca0f3e4ce</entry>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <entry name="uuid">98da6205-c6cb-48d4-9502-fa1ca0f3e4ce</entry>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    </system>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  <os>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  </os>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  <features>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  </features>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  </clock>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  <devices>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      </source>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      </auth>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    </disk>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      </source>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      </auth>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    </disk>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <interface type="ethernet">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <mac address="fa:16:3e:0b:41:f9"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <driver name="vhost" rx_queue_size="512"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <mtu size="1442"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <target dev="tapfc3eccef-04"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    </interface>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/console.log" append="off"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    </serial>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <video>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    </video>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <input type="keyboard" bus="usb"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    </rng>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 11:00:47 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 11:00:47 np0005473739 nova_compute[259550]:  </devices>
Oct  7 11:00:47 np0005473739 nova_compute[259550]: </domain>
Oct  7 11:00:47 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.523 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Preparing to wait for external event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.523 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.524 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.524 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.524 2 DEBUG nova.virt.libvirt.vif [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-10-07T15:00:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-257341137',display_name='tempest-TestSnapshotPattern-server-257341137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-257341137',id=152,image_ref='a45275a6-57fe-4099-b442-be8d2cb87827',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-tb8l5uf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='a6b18035-4aef-4825-90e6-799173979626',image_min_disk='1',image_min_ram='0',image_owner_id='aa7bd91eb3b040c89929aa23c9775dc9',image_owner_project_name='tempest-TestSnapshotPattern-1480624877',image_owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member',image_user_id='1a7552cec1354175be418fba9a7588af',image_version='8.0',network_allocated='True',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-10-07T15:00:40Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=98da6205-c6cb-48d4-9502-fa1ca0f3e4ce,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.525 2 DEBUG nova.network.os_vif_util [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.525 2 DEBUG nova.network.os_vif_util [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.525 2 DEBUG os_vif [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.526 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.527 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfc3eccef-04, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.530 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfc3eccef-04, col_values=(('external_ids', {'iface-id': 'fc3eccef-04e2-406f-b8a2-3b5015d4e10a', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0b:41:f9', 'vm-uuid': '98da6205-c6cb-48d4-9502-fa1ca0f3e4ce'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:47 np0005473739 NetworkManager[44949]: <info>  [1759849247.5332] manager: (tapfc3eccef-04): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/670)
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.534 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.538 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.539 2 INFO os_vif [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04')#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.600 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.601 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.601 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] No VIF found with MAC fa:16:3e:0b:41:f9, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.601 2 INFO nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Using config drive#033[00m
Oct  7 11:00:47 np0005473739 nova_compute[259550]: 2025-10-07 15:00:47.624 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.503 2 INFO nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Creating config drive at /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.508 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaag5z_gm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.648 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaag5z_gm" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.676 2 DEBUG nova.storage.rbd_utils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] rbd image 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.680 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.747 2 DEBUG nova.network.neutron [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updated VIF entry in instance network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.749 2 DEBUG nova.network.neutron [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updating instance_info_cache with network_info: [{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.756 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.816 2 DEBUG oslo_concurrency.lockutils [req-31488e1b-d89d-42ff-99da-a4e38ed3e3b4 req-c319425f-c986-4b6d-98b7-d6adfa34827e 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.886 2 DEBUG oslo_concurrency.processutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.205s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.886 2 INFO nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Deleting local config drive /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce/disk.config because it was imported into RBD.#033[00m
Oct  7 11:00:48 np0005473739 kernel: tapfc3eccef-04: entered promiscuous mode
Oct  7 11:00:48 np0005473739 ovn_controller[151684]: 2025-10-07T15:00:48Z|01652|binding|INFO|Claiming lport fc3eccef-04e2-406f-b8a2-3b5015d4e10a for this chassis.
Oct  7 11:00:48 np0005473739 ovn_controller[151684]: 2025-10-07T15:00:48Z|01653|binding|INFO|fc3eccef-04e2-406f-b8a2-3b5015d4e10a: Claiming fa:16:3e:0b:41:f9 10.100.0.4
Oct  7 11:00:48 np0005473739 NetworkManager[44949]: <info>  [1759849248.9534] manager: (tapfc3eccef-04): new Tun device (/org/freedesktop/NetworkManager/Devices/671)
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.953 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2961: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 1.1 KiB/s wr, 32 op/s
Oct  7 11:00:48 np0005473739 ovn_controller[151684]: 2025-10-07T15:00:48Z|01654|binding|INFO|Setting lport fc3eccef-04e2-406f-b8a2-3b5015d4e10a ovn-installed in OVS
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:48 np0005473739 ovn_controller[151684]: 2025-10-07T15:00:48Z|01655|binding|INFO|Setting lport fc3eccef-04e2-406f-b8a2-3b5015d4e10a up in Southbound
Oct  7 11:00:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:48.978 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:41:f9 10.100.0.4'], port_security=['fa:16:3e:0b:41:f9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '98da6205-c6cb-48d4-9502-fa1ca0f3e4ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5714012d-b182-4fef-9241-3afcb9c700d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'neutron:revision_number': '2', 'neutron:security_group_ids': '5a9b7c48-78a6-4a08-9a38-f3e228e68bdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f64d47ed-3333-494f-a3a9-07b1b0158b50, chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fc3eccef-04e2-406f-b8a2-3b5015d4e10a) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 11:00:48 np0005473739 nova_compute[259550]: 2025-10-07 15:00:48.977 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:48.979 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fc3eccef-04e2-406f-b8a2-3b5015d4e10a in datapath 5714012d-b182-4fef-9241-3afcb9c700d6 bound to our chassis#033[00m
Oct  7 11:00:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:48.981 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5714012d-b182-4fef-9241-3afcb9c700d6#033[00m
Oct  7 11:00:48 np0005473739 systemd-machined[214580]: New machine qemu-186-instance-00000098.
Oct  7 11:00:48 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:48.996 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7aca2c21-7ea4-4598-8af0-0ce30378d114]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:00:49 np0005473739 systemd-udevd[430144]: Network interface NamePolicy= disabled on kernel command line.
Oct  7 11:00:49 np0005473739 systemd[1]: Started Virtual Machine qemu-186-instance-00000098.
Oct  7 11:00:49 np0005473739 NetworkManager[44949]: <info>  [1759849249.0246] device (tapfc3eccef-04): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Oct  7 11:00:49 np0005473739 NetworkManager[44949]: <info>  [1759849249.0263] device (tapfc3eccef-04): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Oct  7 11:00:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.044 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[182f630a-8ced-4c28-a80e-c8100da39250]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:00:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.048 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[1105270d-ca2f-4999-ac5d-d374f2a6860b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:00:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.077 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9d6d8f-550b-4c2a-a8cd-4562d6b6be82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:00:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.099 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[beb1ee3e-216f-40d7-a8f6-87082a0fddae]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5714012d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:84:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 5, 'rx_bytes': 916, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977469, 'reachable_time': 27955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 430154, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:00:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.118 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[69ab2987-885c-429d-a0bf-0d8693f09ca8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5714012d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977482, 'tstamp': 977482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430157, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5714012d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977484, 'tstamp': 977484}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 430157, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:00:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.121 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5714012d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:00:49 np0005473739 nova_compute[259550]: 2025-10-07 15:00:49.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:49 np0005473739 nova_compute[259550]: 2025-10-07 15:00:49.124 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.125 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5714012d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:00:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.126 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 11:00:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.126 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5714012d-b0, col_values=(('external_ids', {'iface-id': '8ae40a35-baff-4538-b31d-4c05f61bc2b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:00:49 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:00:49.126 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 11:00:49 np0005473739 nova_compute[259550]: 2025-10-07 15:00:49.181 2 DEBUG nova.compute.manager [req-1f6f2a3e-b125-48ee-aba6-23c2efd11aac req-50ba8819-0d8c-4a63-b2ba-73551d817228 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:00:49 np0005473739 nova_compute[259550]: 2025-10-07 15:00:49.182 2 DEBUG oslo_concurrency.lockutils [req-1f6f2a3e-b125-48ee-aba6-23c2efd11aac req-50ba8819-0d8c-4a63-b2ba-73551d817228 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:00:49 np0005473739 nova_compute[259550]: 2025-10-07 15:00:49.182 2 DEBUG oslo_concurrency.lockutils [req-1f6f2a3e-b125-48ee-aba6-23c2efd11aac req-50ba8819-0d8c-4a63-b2ba-73551d817228 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:00:49 np0005473739 nova_compute[259550]: 2025-10-07 15:00:49.182 2 DEBUG oslo_concurrency.lockutils [req-1f6f2a3e-b125-48ee-aba6-23c2efd11aac req-50ba8819-0d8c-4a63-b2ba-73551d817228 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:00:49 np0005473739 nova_compute[259550]: 2025-10-07 15:00:49.182 2 DEBUG nova.compute.manager [req-1f6f2a3e-b125-48ee-aba6-23c2efd11aac req-50ba8819-0d8c-4a63-b2ba-73551d817228 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Processing event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.112 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849250.112383, 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.113 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] VM Started (Lifecycle Event)#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.115 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.117 2 DEBUG nova.virt.libvirt.driver [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.121 2 INFO nova.virt.libvirt.driver [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance spawned successfully.#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.121 2 INFO nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Took 9.49 seconds to spawn the instance on the hypervisor.#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.121 2 DEBUG nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.136 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.139 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.164 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.165 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849250.1126544, 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.165 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] VM Paused (Lifecycle Event)#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.189 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.192 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849250.1175108, 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.192 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] VM Resumed (Lifecycle Event)#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.201 2 INFO nova.compute.manager [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Took 10.53 seconds to build instance.#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.215 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.217 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Oct  7 11:00:50 np0005473739 nova_compute[259550]: 2025-10-07 15:00:50.227 2 DEBUG oslo_concurrency.lockutils [None req-9afc3a2c-f76d-4349-89d0-bf9877cfc3b8 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:00:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2962: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 62 KiB/s rd, 16 KiB/s wr, 37 op/s
Oct  7 11:00:51 np0005473739 nova_compute[259550]: 2025-10-07 15:00:51.334 2 DEBUG nova.compute.manager [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:00:51 np0005473739 nova_compute[259550]: 2025-10-07 15:00:51.335 2 DEBUG oslo_concurrency.lockutils [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:00:51 np0005473739 nova_compute[259550]: 2025-10-07 15:00:51.335 2 DEBUG oslo_concurrency.lockutils [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:00:51 np0005473739 nova_compute[259550]: 2025-10-07 15:00:51.335 2 DEBUG oslo_concurrency.lockutils [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:00:51 np0005473739 nova_compute[259550]: 2025-10-07 15:00:51.335 2 DEBUG nova.compute.manager [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] No waiting events found dispatching network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 11:00:51 np0005473739 nova_compute[259550]: 2025-10-07 15:00:51.336 2 WARNING nova.compute.manager [req-710b56a5-27ae-4e7c-b48d-01cef29a514e req-1a8d5518-4bdf-45a2-baa8-484c54a810a5 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received unexpected event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a for instance with vm_state active and task_state None.#033[00m
Oct  7 11:00:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:52 np0005473739 nova_compute[259550]: 2025-10-07 15:00:52.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:00:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:00:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2963: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 57 KiB/s rd, 14 KiB/s wr, 34 op/s
Oct  7 11:00:53 np0005473739 nova_compute[259550]: 2025-10-07 15:00:53.758 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2964: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 13 KiB/s wr, 90 op/s
Oct  7 11:00:55 np0005473739 nova_compute[259550]: 2025-10-07 15:00:55.624 2 DEBUG nova.compute.manager [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:00:55 np0005473739 nova_compute[259550]: 2025-10-07 15:00:55.625 2 DEBUG nova.compute.manager [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing instance network info cache due to event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 11:00:55 np0005473739 nova_compute[259550]: 2025-10-07 15:00:55.626 2 DEBUG oslo_concurrency.lockutils [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 11:00:55 np0005473739 nova_compute[259550]: 2025-10-07 15:00:55.626 2 DEBUG oslo_concurrency.lockutils [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 11:00:55 np0005473739 nova_compute[259550]: 2025-10-07 15:00:55.626 2 DEBUG nova.network.neutron [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 11:00:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2965: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 92 op/s
Oct  7 11:00:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:00:57 np0005473739 nova_compute[259550]: 2025-10-07 15:00:57.345 2 DEBUG nova.network.neutron [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updated VIF entry in instance network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 11:00:57 np0005473739 nova_compute[259550]: 2025-10-07 15:00:57.347 2 DEBUG nova.network.neutron [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updating instance_info_cache with network_info: [{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 11:00:57 np0005473739 nova_compute[259550]: 2025-10-07 15:00:57.372 2 DEBUG oslo_concurrency.lockutils [req-15e0f812-b599-42a9-82eb-f8c391697b3e req-9fbcd70b-6f7a-4615-a828-6fe80f6a6de8 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 11:00:57 np0005473739 nova_compute[259550]: 2025-10-07 15:00:57.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:00:58 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3b55c3b4-4536-49ae-9880-8bff6bae950c does not exist
Oct  7 11:00:58 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bab1311a-4b66-4dd6-9ce7-d140c9b6aa15 does not exist
Oct  7 11:00:58 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3cb9ec95-f4e9-4e44-87dd-d2eb82480044 does not exist
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:00:58 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:00:58 np0005473739 nova_compute[259550]: 2025-10-07 15:00:58.760 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:00:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2966: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 12 KiB/s wr, 76 op/s
Oct  7 11:00:59 np0005473739 podman[430472]: 2025-10-07 15:00:59.146257507 +0000 UTC m=+0.046167783 container create c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:00:59 np0005473739 systemd[1]: Started libpod-conmon-c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b.scope.
Oct  7 11:00:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:00:59 np0005473739 podman[430472]: 2025-10-07 15:00:59.122396136 +0000 UTC m=+0.022306442 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:00:59 np0005473739 podman[430472]: 2025-10-07 15:00:59.222744622 +0000 UTC m=+0.122654938 container init c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:00:59 np0005473739 podman[430472]: 2025-10-07 15:00:59.231268507 +0000 UTC m=+0.131178783 container start c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 11:00:59 np0005473739 podman[430472]: 2025-10-07 15:00:59.237077191 +0000 UTC m=+0.136987517 container attach c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:00:59 np0005473739 wonderful_blackburn[430489]: 167 167
Oct  7 11:00:59 np0005473739 systemd[1]: libpod-c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b.scope: Deactivated successfully.
Oct  7 11:00:59 np0005473739 podman[430472]: 2025-10-07 15:00:59.238720624 +0000 UTC m=+0.138630910 container died c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:00:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-460852c0455135d6704b58f5311c9ed0ca953c7db7a6d6ca6a7f90118426d610-merged.mount: Deactivated successfully.
Oct  7 11:00:59 np0005473739 podman[430472]: 2025-10-07 15:00:59.288002418 +0000 UTC m=+0.187912694 container remove c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackburn, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 11:00:59 np0005473739 systemd[1]: libpod-conmon-c275f6b97c04dd70129807b783fa87305030bb6ed89a6f5de427989bd109814b.scope: Deactivated successfully.
Oct  7 11:00:59 np0005473739 podman[430513]: 2025-10-07 15:00:59.480666558 +0000 UTC m=+0.045768443 container create d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2)
Oct  7 11:00:59 np0005473739 systemd[1]: Started libpod-conmon-d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc.scope.
Oct  7 11:00:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:00:59 np0005473739 podman[430513]: 2025-10-07 15:00:59.459792785 +0000 UTC m=+0.024894640 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:00:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:00:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:00:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:00:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:00:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:00:59 np0005473739 podman[430513]: 2025-10-07 15:00:59.58166075 +0000 UTC m=+0.146762615 container init d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:00:59 np0005473739 podman[430513]: 2025-10-07 15:00:59.589314423 +0000 UTC m=+0.154416268 container start d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 11:00:59 np0005473739 podman[430513]: 2025-10-07 15:00:59.595146928 +0000 UTC m=+0.160248803 container attach d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:01:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:00.097 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 11:01:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:00.100 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 11:01:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:00.101 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 11:01:00 np0005473739 compassionate_mayer[430529]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:01:00 np0005473739 compassionate_mayer[430529]: --> relative data size: 1.0
Oct  7 11:01:00 np0005473739 compassionate_mayer[430529]: --> All data devices are unavailable
Oct  7 11:01:00 np0005473739 systemd[1]: libpod-d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc.scope: Deactivated successfully.
Oct  7 11:01:00 np0005473739 systemd[1]: libpod-d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc.scope: Consumed 1.022s CPU time.
Oct  7 11:01:00 np0005473739 podman[430558]: 2025-10-07 15:01:00.725957274 +0000 UTC m=+0.030784995 container died d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3)
Oct  7 11:01:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-493c84fc704a82e2a226e95334ddf5c94e372d5f58ccfccc266bf9610ae0e5ce-merged.mount: Deactivated successfully.
Oct  7 11:01:00 np0005473739 podman[430558]: 2025-10-07 15:01:00.798375801 +0000 UTC m=+0.103203512 container remove d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_mayer, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True)
Oct  7 11:01:00 np0005473739 systemd[1]: libpod-conmon-d39b26a36618454d6438410f5e5803404fec79a3ca366d50353be3558bbfeffc.scope: Deactivated successfully.
Oct  7 11:01:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2967: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 13 KiB/s wr, 76 op/s
Oct  7 11:01:01 np0005473739 podman[430726]: 2025-10-07 15:01:01.440185467 +0000 UTC m=+0.051437173 container create 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507)
Oct  7 11:01:01 np0005473739 systemd[1]: Started libpod-conmon-0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8.scope.
Oct  7 11:01:01 np0005473739 podman[430726]: 2025-10-07 15:01:01.411322622 +0000 UTC m=+0.022574328 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:01:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:01:01 np0005473739 podman[430726]: 2025-10-07 15:01:01.534702158 +0000 UTC m=+0.145953884 container init 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:01:01 np0005473739 podman[430726]: 2025-10-07 15:01:01.545912614 +0000 UTC m=+0.157164320 container start 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:01:01 np0005473739 podman[430726]: 2025-10-07 15:01:01.54988666 +0000 UTC m=+0.161138396 container attach 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:01:01 np0005473739 systemd[1]: libpod-0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8.scope: Deactivated successfully.
Oct  7 11:01:01 np0005473739 eloquent_panini[430742]: 167 167
Oct  7 11:01:01 np0005473739 conmon[430742]: conmon 0be1bab65f6cdfe77b61 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8.scope/container/memory.events
Oct  7 11:01:01 np0005473739 podman[430726]: 2025-10-07 15:01:01.554996505 +0000 UTC m=+0.166248211 container died 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 11:01:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7dbe514f764c418a612ab73491261f5a0111ead3919308013295c60ac0f83ec1-merged.mount: Deactivated successfully.
Oct  7 11:01:01 np0005473739 podman[430726]: 2025-10-07 15:01:01.600902299 +0000 UTC m=+0.212154005 container remove 0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_panini, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507)
Oct  7 11:01:01 np0005473739 systemd[1]: libpod-conmon-0be1bab65f6cdfe77b61d433b247f8eb6eaba1712c071d6afd599f7be97b7fa8.scope: Deactivated successfully.
Oct  7 11:01:01 np0005473739 podman[430765]: 2025-10-07 15:01:01.796365383 +0000 UTC m=+0.042479585 container create 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:01:01 np0005473739 systemd[1]: Started libpod-conmon-4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813.scope.
Oct  7 11:01:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:01:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f265375ee869cd035a38d38ee9fcdfe0f7f8d38096fa87b5ea5a60d874959/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:01:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f265375ee869cd035a38d38ee9fcdfe0f7f8d38096fa87b5ea5a60d874959/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:01:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f265375ee869cd035a38d38ee9fcdfe0f7f8d38096fa87b5ea5a60d874959/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:01:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/821f265375ee869cd035a38d38ee9fcdfe0f7f8d38096fa87b5ea5a60d874959/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:01:01 np0005473739 podman[430765]: 2025-10-07 15:01:01.781388336 +0000 UTC m=+0.027502558 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:01:01 np0005473739 podman[430765]: 2025-10-07 15:01:01.879198915 +0000 UTC m=+0.125313137 container init 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 11:01:01 np0005473739 podman[430765]: 2025-10-07 15:01:01.884868305 +0000 UTC m=+0.130982507 container start 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:01:01 np0005473739 podman[430765]: 2025-10-07 15:01:01.893987956 +0000 UTC m=+0.140102158 container attach 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 11:01:01 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:01Z|00210|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.4
Oct  7 11:01:01 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:01Z|00211|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:0b:41:f9 10.100.0.4
Oct  7 11:01:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:02 np0005473739 nova_compute[259550]: 2025-10-07 15:01:02.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]: {
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:    "0": [
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:        {
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "devices": [
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "/dev/loop3"
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            ],
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_name": "ceph_lv0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_size": "21470642176",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "name": "ceph_lv0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "tags": {
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.cluster_name": "ceph",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.crush_device_class": "",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.encrypted": "0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.osd_id": "0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.type": "block",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.vdo": "0"
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            },
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "type": "block",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "vg_name": "ceph_vg0"
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:        }
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:    ],
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:    "1": [
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:        {
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "devices": [
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "/dev/loop4"
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            ],
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_name": "ceph_lv1",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_size": "21470642176",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "name": "ceph_lv1",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "tags": {
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.cluster_name": "ceph",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.crush_device_class": "",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.encrypted": "0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.osd_id": "1",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.type": "block",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.vdo": "0"
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            },
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "type": "block",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "vg_name": "ceph_vg1"
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:        }
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:    ],
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:    "2": [
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:        {
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "devices": [
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "/dev/loop5"
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            ],
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_name": "ceph_lv2",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_size": "21470642176",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "name": "ceph_lv2",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "tags": {
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.cluster_name": "ceph",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.crush_device_class": "",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.encrypted": "0",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.osd_id": "2",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.type": "block",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:                "ceph.vdo": "0"
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            },
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "type": "block",
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:            "vg_name": "ceph_vg2"
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:        }
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]:    ]
Oct  7 11:01:02 np0005473739 gifted_rosalind[430782]: }
Oct  7 11:01:02 np0005473739 systemd[1]: libpod-4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813.scope: Deactivated successfully.
Oct  7 11:01:02 np0005473739 podman[430765]: 2025-10-07 15:01:02.665573896 +0000 UTC m=+0.911688118 container died 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:01:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-821f265375ee869cd035a38d38ee9fcdfe0f7f8d38096fa87b5ea5a60d874959-merged.mount: Deactivated successfully.
Oct  7 11:01:02 np0005473739 podman[430765]: 2025-10-07 15:01:02.748295205 +0000 UTC m=+0.994409407 container remove 4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gifted_rosalind, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 11:01:02 np0005473739 systemd[1]: libpod-conmon-4853d8dcd787bcc3576f62489432edc70d3f48b93ddf328fa18803f8935a9813.scope: Deactivated successfully.
Oct  7 11:01:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2968: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.4 KiB/s wr, 72 op/s
Oct  7 11:01:03 np0005473739 podman[430945]: 2025-10-07 15:01:03.358493935 +0000 UTC m=+0.043835682 container create c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 11:01:03 np0005473739 systemd[1]: Started libpod-conmon-c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00.scope.
Oct  7 11:01:03 np0005473739 podman[430945]: 2025-10-07 15:01:03.338769583 +0000 UTC m=+0.024111350 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:01:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:01:03 np0005473739 podman[430945]: 2025-10-07 15:01:03.460105573 +0000 UTC m=+0.145447330 container init c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:01:03 np0005473739 podman[430945]: 2025-10-07 15:01:03.469209985 +0000 UTC m=+0.154551732 container start c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:01:03 np0005473739 great_mirzakhani[430962]: 167 167
Oct  7 11:01:03 np0005473739 podman[430945]: 2025-10-07 15:01:03.474290928 +0000 UTC m=+0.159632715 container attach c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct  7 11:01:03 np0005473739 systemd[1]: libpod-c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00.scope: Deactivated successfully.
Oct  7 11:01:03 np0005473739 podman[430945]: 2025-10-07 15:01:03.475123681 +0000 UTC m=+0.160465448 container died c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 11:01:03 np0005473739 systemd[1]: var-lib-containers-storage-overlay-144038a8cc0fcd0a34379eff5d8fb04a35af8e86d27b3a798bcf7bec5914614a-merged.mount: Deactivated successfully.
Oct  7 11:01:03 np0005473739 podman[430945]: 2025-10-07 15:01:03.524502687 +0000 UTC m=+0.209844444 container remove c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 11:01:03 np0005473739 systemd[1]: libpod-conmon-c68726003cd4c9864f32ac63cf8c50e7b51077e68a80225888460f82cfcfcc00.scope: Deactivated successfully.
Oct  7 11:01:03 np0005473739 nova_compute[259550]: 2025-10-07 15:01:03.761 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:03 np0005473739 podman[430987]: 2025-10-07 15:01:03.766993695 +0000 UTC m=+0.063206594 container create 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:01:03 np0005473739 systemd[1]: Started libpod-conmon-00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307.scope.
Oct  7 11:01:03 np0005473739 podman[430987]: 2025-10-07 15:01:03.739828006 +0000 UTC m=+0.036040995 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:01:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:01:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fadb46fc814549b13ea6593b2e96987499f82c10258f716d91206a4d1eb3a03/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:01:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fadb46fc814549b13ea6593b2e96987499f82c10258f716d91206a4d1eb3a03/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:01:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fadb46fc814549b13ea6593b2e96987499f82c10258f716d91206a4d1eb3a03/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:01:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8fadb46fc814549b13ea6593b2e96987499f82c10258f716d91206a4d1eb3a03/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:01:03 np0005473739 podman[430987]: 2025-10-07 15:01:03.865276017 +0000 UTC m=+0.161489006 container init 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 11:01:03 np0005473739 podman[430987]: 2025-10-07 15:01:03.878971888 +0000 UTC m=+0.175184837 container start 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:01:03 np0005473739 podman[430987]: 2025-10-07 15:01:03.883372796 +0000 UTC m=+0.179585735 container attach 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]: {
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "osd_id": 2,
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "type": "bluestore"
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:    },
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "osd_id": 1,
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "type": "bluestore"
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:    },
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "osd_id": 0,
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:        "type": "bluestore"
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]:    }
Oct  7 11:01:04 np0005473739 hardcore_shirley[431003]: }
Oct  7 11:01:04 np0005473739 systemd[1]: libpod-00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307.scope: Deactivated successfully.
Oct  7 11:01:04 np0005473739 podman[430987]: 2025-10-07 15:01:04.844355738 +0000 UTC m=+1.140568637 container died 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 11:01:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8fadb46fc814549b13ea6593b2e96987499f82c10258f716d91206a4d1eb3a03-merged.mount: Deactivated successfully.
Oct  7 11:01:04 np0005473739 podman[430987]: 2025-10-07 15:01:04.911857144 +0000 UTC m=+1.208070033 container remove 00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_shirley, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:01:04 np0005473739 systemd[1]: libpod-conmon-00939e25da0657790ad4ad9e3b61c745dc05b408f3a6e8b2892269d531301307.scope: Deactivated successfully.
Oct  7 11:01:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:01:04 np0005473739 podman[431047]: 2025-10-07 15:01:04.949303945 +0000 UTC m=+0.065946336 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:01:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:01:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:01:04 np0005473739 podman[431036]: 2025-10-07 15:01:04.954504053 +0000 UTC m=+0.073272321 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:01:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:01:04 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b83f5c80-4e69-48b5-b0a6-f3e20e5b2371 does not exist
Oct  7 11:01:04 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f4a3e2ce-6515-4a58-80d9-e9d5f7dad699 does not exist
Oct  7 11:01:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2969: 305 pgs: 305 active+clean; 205 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 2.7 MiB/s rd, 65 KiB/s wr, 100 op/s
Oct  7 11:01:04 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:04Z|00212|pinctrl(ovn_pinctrl0)|WARN|DHCPREQUEST requested IP 10.100.0.8 does not match offer 10.100.0.4
Oct  7 11:01:04 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:04Z|00213|pinctrl(ovn_pinctrl0)|INFO|DHCPNAK fa:16:3e:0b:41:f9 10.100.0.4
Oct  7 11:01:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:01:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:01:06 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:06Z|00214|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0b:41:f9 10.100.0.4
Oct  7 11:01:06 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:06Z|00215|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0b:41:f9 10.100.0.4
Oct  7 11:01:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2970: 305 pgs: 305 active+clean; 214 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.4 MiB/s rd, 500 KiB/s wr, 65 op/s
Oct  7 11:01:06 np0005473739 nova_compute[259550]: 2025-10-07 15:01:06.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:01:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:07 np0005473739 nova_compute[259550]: 2025-10-07 15:01:07.545 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:08 np0005473739 nova_compute[259550]: 2025-10-07 15:01:08.763 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2971: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 547 KiB/s wr, 54 op/s
Oct  7 11:01:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2972: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 547 KiB/s wr, 54 op/s
Oct  7 11:01:10 np0005473739 nova_compute[259550]: 2025-10-07 15:01:10.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:01:11 np0005473739 nova_compute[259550]: 2025-10-07 15:01:11.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:01:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:12 np0005473739 nova_compute[259550]: 2025-10-07 15:01:12.549 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2973: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Oct  7 11:01:12 np0005473739 nova_compute[259550]: 2025-10-07 15:01:12.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:01:13 np0005473739 nova_compute[259550]: 2025-10-07 15:01:13.765 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2974: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.0 MiB/s rd, 546 KiB/s wr, 54 op/s
Oct  7 11:01:15 np0005473739 nova_compute[259550]: 2025-10-07 15:01:15.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:01:15 np0005473739 nova_compute[259550]: 2025-10-07 15:01:15.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:01:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2975: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 260 KiB/s rd, 483 KiB/s wr, 26 op/s
Oct  7 11:01:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:17 np0005473739 nova_compute[259550]: 2025-10-07 15:01:17.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:18 np0005473739 podman[431142]: 2025-10-07 15:01:18.087720732 +0000 UTC m=+0.064460927 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 11:01:18 np0005473739 podman[431143]: 2025-10-07 15:01:18.121041064 +0000 UTC m=+0.094532432 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 11:01:18 np0005473739 nova_compute[259550]: 2025-10-07 15:01:18.767 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2976: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 33 KiB/s rd, 50 KiB/s wr, 2 op/s
Oct  7 11:01:18 np0005473739 nova_compute[259550]: 2025-10-07 15:01:18.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:01:20 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2977: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct  7 11:01:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:22 np0005473739 nova_compute[259550]: 2025-10-07 15:01:22.606 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:01:22
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', '.mgr', 'vms', 'default.rgw.control', 'cephfs.cephfs.data', 'images', '.rgw.root', 'backups', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.meta']
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:01:22 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2978: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct  7 11:01:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:01:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:01:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:01:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:01:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:01:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:01:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:01:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:01:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:01:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:01:23 np0005473739 ceph-mgr[74587]: client.0 ms_handle_reset on v2:192.168.122.100:6800/3626055412
Oct  7 11:01:23 np0005473739 nova_compute[259550]: 2025-10-07 15:01:23.769 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:24 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:24Z|01656|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Oct  7 11:01:24 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2979: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct  7 11:01:26 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2980: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 0 B/s rd, 3.3 KiB/s wr, 0 op/s
Oct  7 11:01:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:27 np0005473739 nova_compute[259550]: 2025-10-07 15:01:27.610 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:27 np0005473739 nova_compute[259550]: 2025-10-07 15:01:27.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.021 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.022 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.022 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:01:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:01:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3925714958' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.482 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.460s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.615 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.615 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000098 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.619 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.619 2 DEBUG nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] skipping disk for instance-00000097 as it does not have a path _get_instance_disk_info_from_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:11231#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.771 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.785 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.786 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3184MB free_disk=59.9363899230957GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.786 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.787 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.959 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance a6b18035-4aef-4825-90e6-799173979626 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.959 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.960 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:01:28 np0005473739 nova_compute[259550]: 2025-10-07 15:01:28.960 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=59GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:01:28 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2981: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 207 KiB/s rd, 3.3 KiB/s wr, 5 op/s
Oct  7 11:01:29 np0005473739 nova_compute[259550]: 2025-10-07 15:01:29.019 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:01:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:01:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4268660978' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:01:29 np0005473739 nova_compute[259550]: 2025-10-07 15:01:29.461 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:01:29 np0005473739 nova_compute[259550]: 2025-10-07 15:01:29.467 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:01:29 np0005473739 nova_compute[259550]: 2025-10-07 15:01:29.505 2 DEBUG nova.compute.manager [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 11:01:29 np0005473739 nova_compute[259550]: 2025-10-07 15:01:29.543 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:01:29 np0005473739 nova_compute[259550]: 2025-10-07 15:01:29.564 2 INFO nova.compute.manager [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] instance snapshotting#033[00m
Oct  7 11:01:29 np0005473739 nova_compute[259550]: 2025-10-07 15:01:29.664 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:01:29 np0005473739 nova_compute[259550]: 2025-10-07 15:01:29.664 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:29 np0005473739 nova_compute[259550]: 2025-10-07 15:01:29.784 2 INFO nova.virt.libvirt.driver [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Beginning live snapshot process#033[00m
Oct  7 11:01:30 np0005473739 nova_compute[259550]: 2025-10-07 15:01:30.164 2 DEBUG nova.storage.rbd_utils [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] creating snapshot(08c8dc04d88e46189a248fcc9143d1c9) on rbd image(98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 11:01:30 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2982: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 215 KiB/s rd, 11 op/s
Oct  7 11:01:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e288 do_prune osdmap full prune enabled
Oct  7 11:01:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e289 e289: 3 total, 3 up, 3 in
Oct  7 11:01:31 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e289: 3 total, 3 up, 3 in
Oct  7 11:01:31 np0005473739 nova_compute[259550]: 2025-10-07 15:01:31.254 2 DEBUG nova.storage.rbd_utils [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] cloning vms/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk@08c8dc04d88e46189a248fcc9143d1c9 to images/30696525-f806-4144-b1e6-ccc7d3798a90 clone /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:261#033[00m
Oct  7 11:01:31 np0005473739 nova_compute[259550]: 2025-10-07 15:01:31.369 2 DEBUG nova.storage.rbd_utils [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] flattening images/30696525-f806-4144-b1e6-ccc7d3798a90 flatten /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:314#033[00m
Oct  7 11:01:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e289 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:32 np0005473739 nova_compute[259550]: 2025-10-07 15:01:32.166 2 DEBUG nova.storage.rbd_utils [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] removing snapshot(08c8dc04d88e46189a248fcc9143d1c9) on rbd image(98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_disk) remove_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:489#033[00m
Oct  7 11:01:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e289 do_prune osdmap full prune enabled
Oct  7 11:01:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e290 e290: 3 total, 3 up, 3 in
Oct  7 11:01:32 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e290: 3 total, 3 up, 3 in
Oct  7 11:01:32 np0005473739 nova_compute[259550]: 2025-10-07 15:01:32.259 2 DEBUG nova.storage.rbd_utils [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] creating snapshot(snap) on rbd image(30696525-f806-4144-b1e6-ccc7d3798a90) create_snap /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:462#033[00m
Oct  7 11:01:32 np0005473739 nova_compute[259550]: 2025-10-07 15:01:32.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:32 np0005473739 nova_compute[259550]: 2025-10-07 15:01:32.666 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:01:32 np0005473739 nova_compute[259550]: 2025-10-07 15:01:32.667 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:01:32 np0005473739 nova_compute[259550]: 2025-10-07 15:01:32.667 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:01:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:01:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834333303' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:01:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:01:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2834333303' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.0008650243984827766 of space, bias 1.0, pg target 0.25950731954483297 quantized to 32 (current 32)
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0014241774923487661 of space, bias 1.0, pg target 0.42725324770462986 quantized to 32 (current 32)
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:01:32 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2985: 305 pgs: 305 active+clean; 217 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 323 KiB/s rd, 16 op/s
Oct  7 11:01:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e290 do_prune osdmap full prune enabled
Oct  7 11:01:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e291 e291: 3 total, 3 up, 3 in
Oct  7 11:01:33 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e291: 3 total, 3 up, 3 in
Oct  7 11:01:33 np0005473739 nova_compute[259550]: 2025-10-07 15:01:33.331 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 11:01:33 np0005473739 nova_compute[259550]: 2025-10-07 15:01:33.332 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 11:01:33 np0005473739 nova_compute[259550]: 2025-10-07 15:01:33.332 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Oct  7 11:01:33 np0005473739 nova_compute[259550]: 2025-10-07 15:01:33.332 2 DEBUG nova.objects.instance [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a6b18035-4aef-4825-90e6-799173979626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 11:01:33 np0005473739 nova_compute[259550]: 2025-10-07 15:01:33.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:34 np0005473739 nova_compute[259550]: 2025-10-07 15:01:34.737 2 DEBUG nova.network.neutron [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 11:01:34 np0005473739 nova_compute[259550]: 2025-10-07 15:01:34.762 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 11:01:34 np0005473739 nova_compute[259550]: 2025-10-07 15:01:34.762 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Oct  7 11:01:34 np0005473739 nova_compute[259550]: 2025-10-07 15:01:34.763 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:01:34 np0005473739 nova_compute[259550]: 2025-10-07 15:01:34.897 2 INFO nova.virt.libvirt.driver [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Snapshot image upload complete#033[00m
Oct  7 11:01:34 np0005473739 nova_compute[259550]: 2025-10-07 15:01:34.897 2 INFO nova.compute.manager [None req-8664ac4e-5961-4630-988a-af81d2719416 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Took 5.33 seconds to snapshot the instance on the hypervisor.#033[00m
Oct  7 11:01:34 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2987: 305 pgs: 305 active+clean; 303 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 160 op/s
Oct  7 11:01:35 np0005473739 podman[431374]: 2025-10-07 15:01:35.064541283 +0000 UTC m=+0.053917498 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 11:01:35 np0005473739 podman[431373]: 2025-10-07 15:01:35.064537263 +0000 UTC m=+0.055699385 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 11:01:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e291 do_prune osdmap full prune enabled
Oct  7 11:01:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e292 e292: 3 total, 3 up, 3 in
Oct  7 11:01:35 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e292: 3 total, 3 up, 3 in
Oct  7 11:01:36 np0005473739 nova_compute[259550]: 2025-10-07 15:01:36.899 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:36 np0005473739 nova_compute[259550]: 2025-10-07 15:01:36.900 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:36 np0005473739 nova_compute[259550]: 2025-10-07 15:01:36.901 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:36 np0005473739 nova_compute[259550]: 2025-10-07 15:01:36.901 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:36 np0005473739 nova_compute[259550]: 2025-10-07 15:01:36.901 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:36 np0005473739 nova_compute[259550]: 2025-10-07 15:01:36.902 2 INFO nova.compute.manager [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Terminating instance#033[00m
Oct  7 11:01:36 np0005473739 nova_compute[259550]: 2025-10-07 15:01:36.903 2 DEBUG nova.compute.manager [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 11:01:36 np0005473739 kernel: tapfc3eccef-04 (unregistering): left promiscuous mode
Oct  7 11:01:36 np0005473739 NetworkManager[44949]: <info>  [1759849296.9583] device (tapfc3eccef-04): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 11:01:36 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:36Z|01657|binding|INFO|Releasing lport fc3eccef-04e2-406f-b8a2-3b5015d4e10a from this chassis (sb_readonly=0)
Oct  7 11:01:36 np0005473739 nova_compute[259550]: 2025-10-07 15:01:36.970 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:36 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:36Z|01658|binding|INFO|Setting lport fc3eccef-04e2-406f-b8a2-3b5015d4e10a down in Southbound
Oct  7 11:01:36 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:36Z|01659|binding|INFO|Removing iface tapfc3eccef-04 ovn-installed in OVS
Oct  7 11:01:36 np0005473739 nova_compute[259550]: 2025-10-07 15:01:36.972 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:36 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2989: 305 pgs: 305 active+clean; 323 MiB data, 1.2 GiB used, 59 GiB / 60 GiB avail; 8.6 MiB/s rd, 16 MiB/s wr, 182 op/s
Oct  7 11:01:36 np0005473739 nova_compute[259550]: 2025-10-07 15:01:36.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:36.999 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0b:41:f9 10.100.0.4'], port_security=['fa:16:3e:0b:41:f9 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '98da6205-c6cb-48d4-9502-fa1ca0f3e4ce', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5714012d-b182-4fef-9241-3afcb9c700d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5a9b7c48-78a6-4a08-9a38-f3e228e68bdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f64d47ed-3333-494f-a3a9-07b1b0158b50, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=fc3eccef-04e2-406f-b8a2-3b5015d4e10a) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.000 161536 INFO neutron.agent.ovn.metadata.agent [-] Port fc3eccef-04e2-406f-b8a2-3b5015d4e10a in datapath 5714012d-b182-4fef-9241-3afcb9c700d6 unbound from our chassis#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.001 161536 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5714012d-b182-4fef-9241-3afcb9c700d6#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.014 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[78810df1-05e6-4f19-bb2e-efcf831f8909]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:37 np0005473739 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Deactivated successfully.
Oct  7 11:01:37 np0005473739 systemd[1]: machine-qemu\x2d186\x2dinstance\x2d00000098.scope: Consumed 14.884s CPU time.
Oct  7 11:01:37 np0005473739 systemd-machined[214580]: Machine qemu-186-instance-00000098 terminated.
Oct  7 11:01:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e292 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.051 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[f15ed4e7-a4eb-4115-b6ca-83e1f4186f93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.056 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[7a4dfc13-3be5-4d37-9674-1f0e88efb810]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.084 275676 DEBUG oslo.privsep.daemon [-] privsep: reply[174ae8ad-de97-49be-b407-96065d2ad264]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.101 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ba7ee6c3-7c5f-4624-af6f-34579a662189]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5714012d-b1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:98:84:e6'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 12, 'tx_packets': 7, 'rx_bytes': 1000, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 
'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 473], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977469, 'reachable_time': 27955, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 8, 'inoctets': 720, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 
'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 8, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 720, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 8, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431425, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.118 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[c9d7f476-8456-4e13-8359-31056b97d9d6]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap5714012d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977482, 'tstamp': 977482}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431426, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap5714012d-b1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 977484, 'tstamp': 977484}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 431426, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.119 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5714012d-b0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.128 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5714012d-b0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.129 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.129 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5714012d-b0, col_values=(('external_ids', {'iface-id': '8ae40a35-baff-4538-b31d-4c05f61bc2b8'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:01:37 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:37.129 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.145 2 INFO nova.virt.libvirt.driver [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance destroyed successfully.#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.146 2 DEBUG nova.objects.instance [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'resources' on Instance uuid 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.166 2 DEBUG nova.virt.libvirt.vif [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T15:00:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-257341137',display_name='tempest-TestSnapshotPattern-server-257341137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-257341137',id=152,image_ref='a45275a6-57fe-4099-b442-be8d2cb87827',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=<?>,launch_index=0,launched_at=2025-10-07T15:00:50Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-tb8l5uf4',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_boot_roles='member,reader',image_container_format='bare',image_disk_format='raw',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_image_location='snapshot',image_image_state='available',image_image_type='snapshot',image_instance_uuid='a6b18035-4aef-4825-90e6-799173979626',image_min_disk='1',image_min_ram='0',image_owner_id='aa7bd91eb3b040c89929aa23c9775dc9',image_owner_project_name='tempest-TestSnapshotPattern-1480624877',image_owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member',image_user_id='1a7552cec1354175be418fba9a7588af',image_version='8.0',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T15:01:35Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=98da6205-c6cb-48d4-9502-fa1ca0f3e4ce,vcpu_mode
l=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.167 2 DEBUG nova.network.os_vif_util [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.214", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.168 2 DEBUG nova.network.os_vif_util [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.169 2 DEBUG os_vif [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.170 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.171 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfc3eccef-04, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.172 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.174 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.177 2 DEBUG nova.compute.manager [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.177 2 DEBUG nova.compute.manager [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing instance network info cache due to event network-changed-fc3eccef-04e2-406f-b8a2-3b5015d4e10a. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.178 2 DEBUG oslo_concurrency.lockutils [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.178 2 DEBUG oslo_concurrency.lockutils [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.178 2 DEBUG nova.network.neutron [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Refreshing network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.181 2 INFO os_vif [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0b:41:f9,bridge_name='br-int',has_traffic_filtering=True,id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfc3eccef-04')#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.590 2 INFO nova.virt.libvirt.driver [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Deleting instance files /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_del#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.591 2 INFO nova.virt.libvirt.driver [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Deletion of /var/lib/nova/instances/98da6205-c6cb-48d4-9502-fa1ca0f3e4ce_del complete#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.687 2 INFO nova.compute.manager [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Took 0.78 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.688 2 DEBUG oslo.service.loopingcall [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.688 2 DEBUG nova.compute.manager [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 11:01:37 np0005473739 nova_compute[259550]: 2025-10-07 15:01:37.688 2 DEBUG nova.network.neutron [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 11:01:38 np0005473739 nova_compute[259550]: 2025-10-07 15:01:38.315 2 DEBUG nova.compute.manager [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-unplugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:01:38 np0005473739 nova_compute[259550]: 2025-10-07 15:01:38.315 2 DEBUG oslo_concurrency.lockutils [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:38 np0005473739 nova_compute[259550]: 2025-10-07 15:01:38.316 2 DEBUG oslo_concurrency.lockutils [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:38 np0005473739 nova_compute[259550]: 2025-10-07 15:01:38.316 2 DEBUG oslo_concurrency.lockutils [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:38 np0005473739 nova_compute[259550]: 2025-10-07 15:01:38.316 2 DEBUG nova.compute.manager [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] No waiting events found dispatching network-vif-unplugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 11:01:38 np0005473739 nova_compute[259550]: 2025-10-07 15:01:38.317 2 DEBUG nova.compute.manager [req-d057e432-7a56-4a1f-9351-0669dbbd69bf req-873541f5-d721-4382-a7b3-15038dd44e4b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-unplugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 11:01:38 np0005473739 nova_compute[259550]: 2025-10-07 15:01:38.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:38 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2990: 305 pgs: 305 active+clean; 229 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 7.4 MiB/s rd, 14 MiB/s wr, 253 op/s
Oct  7 11:01:39 np0005473739 nova_compute[259550]: 2025-10-07 15:01:39.239 2 DEBUG nova.network.neutron [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 11:01:39 np0005473739 nova_compute[259550]: 2025-10-07 15:01:39.262 2 INFO nova.compute.manager [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Took 1.57 seconds to deallocate network for instance.#033[00m
Oct  7 11:01:39 np0005473739 nova_compute[259550]: 2025-10-07 15:01:39.326 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:39 np0005473739 nova_compute[259550]: 2025-10-07 15:01:39.327 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:39 np0005473739 nova_compute[259550]: 2025-10-07 15:01:39.568 2 DEBUG oslo_concurrency.processutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:01:39 np0005473739 nova_compute[259550]: 2025-10-07 15:01:39.632 2 DEBUG nova.network.neutron [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updated VIF entry in instance network info cache for port fc3eccef-04e2-406f-b8a2-3b5015d4e10a. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 11:01:39 np0005473739 nova_compute[259550]: 2025-10-07 15:01:39.633 2 DEBUG nova.network.neutron [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Updating instance_info_cache with network_info: [{"id": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "address": "fa:16:3e:0b:41:f9", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfc3eccef-04", "ovs_interfaceid": "fc3eccef-04e2-406f-b8a2-3b5015d4e10a", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 11:01:39 np0005473739 nova_compute[259550]: 2025-10-07 15:01:39.653 2 DEBUG oslo_concurrency.lockutils [req-0d8174b7-7829-4caf-abb8-67949526e6da req-a8ab461b-0bfa-4173-8e66-a5d9739f9f5b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 11:01:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:01:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1176012184' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.010 2 DEBUG oslo_concurrency.processutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.442s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.016 2 DEBUG nova.compute.provider_tree [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.033 2 DEBUG nova.scheduler.client.report [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.059 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.084 2 INFO nova.scheduler.client.report [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Deleted allocations for instance 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.145 2 DEBUG oslo_concurrency.lockutils [None req-41471cff-1a50-4d73-8856-ef43a3cc3e47 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.245s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.430 2 DEBUG nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.431 2 DEBUG oslo_concurrency.lockutils [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.431 2 DEBUG oslo_concurrency.lockutils [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.431 2 DEBUG oslo_concurrency.lockutils [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "98da6205-c6cb-48d4-9502-fa1ca0f3e4ce-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.431 2 DEBUG nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] No waiting events found dispatching network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.431 2 WARNING nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received unexpected event network-vif-plugged-fc3eccef-04e2-406f-b8a2-3b5015d4e10a for instance with vm_state deleted and task_state None.#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.432 2 DEBUG nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Received event network-vif-deleted-fc3eccef-04e2-406f-b8a2-3b5015d4e10a external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.432 2 INFO nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Neutron deleted interface fc3eccef-04e2-406f-b8a2-3b5015d4e10a; detaching it from the instance and deleting it from the info cache#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.432 2 DEBUG nova.network.neutron [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Oct  7 11:01:40 np0005473739 nova_compute[259550]: 2025-10-07 15:01:40.435 2 DEBUG nova.compute.manager [req-8992abbf-23fc-4a87-afa0-70727c87e9e7 req-c53286c1-325c-4345-ba30-943eb2ff76b4 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Detach interface failed, port_id=fc3eccef-04e2-406f-b8a2-3b5015d4e10a, reason: Instance 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce could not be found. _process_instance_vif_deleted_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10882#033[00m
Oct  7 11:01:40 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2991: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 6.2 MiB/s rd, 12 MiB/s wr, 219 op/s
Oct  7 11:01:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e292 do_prune osdmap full prune enabled
Oct  7 11:01:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e293 e293: 3 total, 3 up, 3 in
Oct  7 11:01:41 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e293: 3 total, 3 up, 3 in
Oct  7 11:01:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e293 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e293 do_prune osdmap full prune enabled
Oct  7 11:01:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e294 e294: 3 total, 3 up, 3 in
Oct  7 11:01:42 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e294: 3 total, 3 up, 3 in
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.173 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.781 2 DEBUG nova.compute.manager [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.781 2 DEBUG nova.compute.manager [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing instance network info cache due to event network-changed-9e613de1-4d71-4293-836f-5f1e121f0bb5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.782 2 DEBUG oslo_concurrency.lockutils [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.782 2 DEBUG oslo_concurrency.lockutils [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquired lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.782 2 DEBUG nova.network.neutron [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Refreshing network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.895 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.895 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.896 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.896 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.897 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.898 2 INFO nova.compute.manager [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Terminating instance#033[00m
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.899 2 DEBUG nova.compute.manager [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Oct  7 11:01:42 np0005473739 kernel: tap9e613de1-4d (unregistering): left promiscuous mode
Oct  7 11:01:42 np0005473739 NetworkManager[44949]: <info>  [1759849302.9523] device (tap9e613de1-4d): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.960 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:42 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:42Z|01660|binding|INFO|Releasing lport 9e613de1-4d71-4293-836f-5f1e121f0bb5 from this chassis (sb_readonly=0)
Oct  7 11:01:42 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:42Z|01661|binding|INFO|Setting lport 9e613de1-4d71-4293-836f-5f1e121f0bb5 down in Southbound
Oct  7 11:01:42 np0005473739 ovn_controller[151684]: 2025-10-07T15:01:42Z|01662|binding|INFO|Removing iface tap9e613de1-4d ovn-installed in OVS
Oct  7 11:01:42 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2994: 305 pgs: 305 active+clean; 200 MiB data, 1.1 GiB used, 59 GiB / 60 GiB avail; 1.6 MiB/s rd, 2.4 MiB/s wr, 112 op/s
Oct  7 11:01:42 np0005473739 nova_compute[259550]: 2025-10-07 15:01:42.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:43 np0005473739 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Deactivated successfully.
Oct  7 11:01:43 np0005473739 systemd[1]: machine-qemu\x2d185\x2dinstance\x2d00000097.scope: Consumed 17.809s CPU time.
Oct  7 11:01:43 np0005473739 systemd-machined[214580]: Machine qemu-185-instance-00000097 terminated.
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.137 2 INFO nova.virt.libvirt.driver [-] [instance: a6b18035-4aef-4825-90e6-799173979626] Instance destroyed successfully.#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.139 2 DEBUG nova.objects.instance [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lazy-loading 'resources' on Instance uuid a6b18035-4aef-4825-90e6-799173979626 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.176 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:68:98 10.100.0.8'], port_security=['fa:16:3e:b8:68:98 10.100.0.8'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.8/28', 'neutron:device_id': 'a6b18035-4aef-4825-90e6-799173979626', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5714012d-b182-4fef-9241-3afcb9c700d6', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aa7bd91eb3b040c89929aa23c9775dc9', 'neutron:revision_number': '4', 'neutron:security_group_ids': '5a9b7c48-78a6-4a08-9a38-f3e228e68bdf', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f64d47ed-3333-494f-a3a9-07b1b0158b50, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>], logical_port=9e613de1-4d71-4293-836f-5f1e121f0bb5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7ff3e5d35fd0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.177 161536 INFO neutron.agent.ovn.metadata.agent [-] Port 9e613de1-4d71-4293-836f-5f1e121f0bb5 in datapath 5714012d-b182-4fef-9241-3afcb9c700d6 unbound from our chassis#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.178 161536 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5714012d-b182-4fef-9241-3afcb9c700d6, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.179 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ad252cd0-c246-4745-8967-65aaf5d2a6ce]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.179 161536 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6 namespace which is not needed anymore#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.195 2 DEBUG nova.virt.libvirt.vif [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-10-07T14:59:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestSnapshotPattern-server-821190433',display_name='tempest-TestSnapshotPattern-server-821190433',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(4),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testsnapshotpattern-server-821190433',id=151,image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',info_cache=InstanceInfoCache,instance_type_id=4,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLS8No0f8lI6ufkJmHXUc83InsAiFTNLFmJKfMLwW232En9QazJdzMvhlDyJwZrw5ZgxsPztgc0fXNZCFoxmRX/wK0ADddqAh1D7rvdzceS1mG7VJugN4Nxl5xOWKIFTbQ==',key_name='tempest-TestSnapshotPattern-1267535545',keypairs=<?>,launch_index=0,launched_at=2025-10-07T14:59:52Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aa7bd91eb3b040c89929aa23c9775dc9',ramdisk_id='',reservation_id='r-vnbjscds',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='1c7e024e-3dd7-433b-91ff-f363a3d5a581',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestSnapshotPattern-1480624877',owner_user_name='tempest-TestSnapshotPattern-1480624877-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-10-07T15:00:35Z,user_data=None,user_id='1a7552cec1354175be418fba9a7588af',uuid=a6b18035-4aef-4825-90e6-799173979626,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.195 2 DEBUG nova.network.os_vif_util [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converting VIF {"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.242", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.196 2 DEBUG nova.network.os_vif_util [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.197 2 DEBUG os_vif [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 25 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.199 2 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9e613de1-4d, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.209 2 INFO os_vif [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:68:98,bridge_name='br-int',has_traffic_filtering=True,id=9e613de1-4d71-4293-836f-5f1e121f0bb5,network=Network(5714012d-b182-4fef-9241-3afcb9c700d6),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9e613de1-4d')#033[00m
Oct  7 11:01:43 np0005473739 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [NOTICE]   (428907) : haproxy version is 2.8.14-c23fe91
Oct  7 11:01:43 np0005473739 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [NOTICE]   (428907) : path to executable is /usr/sbin/haproxy
Oct  7 11:01:43 np0005473739 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [WARNING]  (428907) : Exiting Master process...
Oct  7 11:01:43 np0005473739 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [WARNING]  (428907) : Exiting Master process...
Oct  7 11:01:43 np0005473739 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [ALERT]    (428907) : Current worker (428909) exited with code 143 (Terminated)
Oct  7 11:01:43 np0005473739 neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6[428891]: [WARNING]  (428907) : All workers exited. Exiting... (0)
Oct  7 11:01:43 np0005473739 systemd[1]: libpod-c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7.scope: Deactivated successfully.
Oct  7 11:01:43 np0005473739 podman[431532]: 2025-10-07 15:01:43.342061226 +0000 UTC m=+0.065489813 container died c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2)
Oct  7 11:01:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d0737f8a91a4236f462b27e601747872518f52d58de889974962d2146e39c7d2-merged.mount: Deactivated successfully.
Oct  7 11:01:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7-userdata-shm.mount: Deactivated successfully.
Oct  7 11:01:43 np0005473739 podman[431532]: 2025-10-07 15:01:43.466196782 +0000 UTC m=+0.189625399 container cleanup c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Oct  7 11:01:43 np0005473739 systemd[1]: libpod-conmon-c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7.scope: Deactivated successfully.
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.487 2 DEBUG nova.compute.manager [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-unplugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.488 2 DEBUG oslo_concurrency.lockutils [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.488 2 DEBUG oslo_concurrency.lockutils [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.488 2 DEBUG oslo_concurrency.lockutils [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.488 2 DEBUG nova.compute.manager [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] No waiting events found dispatching network-vif-unplugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.489 2 DEBUG nova.compute.manager [req-df800850-ba99-4ab6-b929-8643ad987026 req-b97a8469-af0b-419a-b8e6-bdca13c7b360 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-unplugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Oct  7 11:01:43 np0005473739 podman[431563]: 2025-10-07 15:01:43.565904841 +0000 UTC m=+0.073363803 container remove c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.571 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[17669dbd-078b-4abd-b234-0375f20ee0e1]: (4, ('Tue Oct  7 03:01:43 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6 (c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7)\nc5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7\nTue Oct  7 03:01:43 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6 (c5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7)\nc5e9f7a796a8a503e40052b14e92c0988bc400b29e645239a55d38df3cac3dc7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.573 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[ee782722-eb72-4827-8744-63d863b85046]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.574 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5714012d-b0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.575 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:43 np0005473739 kernel: tap5714012d-b0: left promiscuous mode
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.594 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[e32ab513-2cd3-47df-ac50-ad75892af359]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.629 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[8aa1e29b-c934-45e0-b44e-16a9d7dad226]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.630 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[5277a918-d0c2-459e-b863-bb221d2a3da9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.651 275502 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb8cf7d-415d-45d4-8d06-d36c4a2b487f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 977461, 'reachable_time': 33316, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 431579, 'error': None, 'target': 'ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:43 np0005473739 systemd[1]: run-netns-ovnmeta\x2d5714012d\x2db182\x2d4fef\x2d9241\x2d3afcb9c700d6.mount: Deactivated successfully.
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.658 161647 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5714012d-b182-4fef-9241-3afcb9c700d6 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Oct  7 11:01:43 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:43.658 161647 DEBUG oslo.privsep.daemon [-] privsep: reply[3454b640-263e-468e-b6ac-3faf3adb0e28]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.776 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.851 2 INFO nova.virt.libvirt.driver [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Deleting instance files /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626_del#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.852 2 INFO nova.virt.libvirt.driver [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Deletion of /var/lib/nova/instances/a6b18035-4aef-4825-90e6-799173979626_del complete#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.901 2 INFO nova.compute.manager [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Took 1.00 seconds to destroy the instance on the hypervisor.#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.902 2 DEBUG oslo.service.loopingcall [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.902 2 DEBUG nova.compute.manager [-] [instance: a6b18035-4aef-4825-90e6-799173979626] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Oct  7 11:01:43 np0005473739 nova_compute[259550]: 2025-10-07 15:01:43.903 2 DEBUG nova.network.neutron [-] [instance: a6b18035-4aef-4825-90e6-799173979626] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Oct  7 11:01:44 np0005473739 nova_compute[259550]: 2025-10-07 15:01:44.109 2 DEBUG nova.network.neutron [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updated VIF entry in instance network info cache for port 9e613de1-4d71-4293-836f-5f1e121f0bb5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Oct  7 11:01:44 np0005473739 nova_compute[259550]: 2025-10-07 15:01:44.110 2 DEBUG nova.network.neutron [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [{"id": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "address": "fa:16:3e:b8:68:98", "network": {"id": "5714012d-b182-4fef-9241-3afcb9c700d6", "bridge": "br-int", "label": "tempest-TestSnapshotPattern-583996475-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.8", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aa7bd91eb3b040c89929aa23c9775dc9", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9e613de1-4d", "ovs_interfaceid": "9e613de1-4d71-4293-836f-5f1e121f0bb5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 11:01:44 np0005473739 nova_compute[259550]: 2025-10-07 15:01:44.134 2 DEBUG oslo_concurrency.lockutils [req-3dec6506-910d-4f32-9111-d7e90b59e898 req-ec807541-ab77-46d2-b7d4-575a80e6458f 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Releasing lock "refresh_cache-a6b18035-4aef-4825-90e6-799173979626" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Oct  7 11:01:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:44.758 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=60, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=59) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 11:01:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:44.759 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 11:01:44 np0005473739 nova_compute[259550]: 2025-10-07 15:01:44.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:44 np0005473739 nova_compute[259550]: 2025-10-07 15:01:44.783 2 DEBUG nova.network.neutron [-] [instance: a6b18035-4aef-4825-90e6-799173979626] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Oct  7 11:01:44 np0005473739 nova_compute[259550]: 2025-10-07 15:01:44.796 2 INFO nova.compute.manager [-] [instance: a6b18035-4aef-4825-90e6-799173979626] Took 0.89 seconds to deallocate network for instance.#033[00m
Oct  7 11:01:44 np0005473739 nova_compute[259550]: 2025-10-07 15:01:44.838 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:44 np0005473739 nova_compute[259550]: 2025-10-07 15:01:44.839 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:44 np0005473739 nova_compute[259550]: 2025-10-07 15:01:44.854 2 DEBUG nova.compute.manager [req-cb3e5451-ac91-40b5-bb60-79e819226296 req-3541378b-a191-4a57-a130-76717128022c 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-deleted-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:01:44 np0005473739 nova_compute[259550]: 2025-10-07 15:01:44.883 2 DEBUG oslo_concurrency.processutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:01:44 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2995: 305 pgs: 305 active+clean; 89 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 85 KiB/s rd, 4.5 KiB/s wr, 121 op/s
Oct  7 11:01:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:01:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1603158260' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.383 2 DEBUG oslo_concurrency.processutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.500s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.391 2 DEBUG nova.compute.provider_tree [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.406 2 DEBUG nova.scheduler.client.report [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.424 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.585s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.448 2 INFO nova.scheduler.client.report [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Deleted allocations for instance a6b18035-4aef-4825-90e6-799173979626#033[00m
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.526 2 DEBUG oslo_concurrency.lockutils [None req-0bf0a574-67c5-46e8-9d59-1483c2757e88 1a7552cec1354175be418fba9a7588af aa7bd91eb3b040c89929aa23c9775dc9 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.630s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.566 2 DEBUG nova.compute.manager [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.566 2 DEBUG oslo_concurrency.lockutils [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Acquiring lock "a6b18035-4aef-4825-90e6-799173979626-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.567 2 DEBUG oslo_concurrency.lockutils [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.567 2 DEBUG oslo_concurrency.lockutils [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] Lock "a6b18035-4aef-4825-90e6-799173979626-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.567 2 DEBUG nova.compute.manager [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] No waiting events found dispatching network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Oct  7 11:01:45 np0005473739 nova_compute[259550]: 2025-10-07 15:01:45.567 2 WARNING nova.compute.manager [req-6f45ca23-f462-499d-a13c-eb6a1227cd1c req-05ca206a-d838-4afe-a187-8b492e03f36b 4a98f9f4358746c18e4cbaa863dba39e c92de13ec1174c74a3cc0878edbb1f81 - - default default] [instance: a6b18035-4aef-4825-90e6-799173979626] Received unexpected event network-vif-plugged-9e613de1-4d71-4293-836f-5f1e121f0bb5 for instance with vm_state deleted and task_state None.#033[00m
Oct  7 11:01:46 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2996: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 49 KiB/s rd, 3.5 KiB/s wr, 75 op/s
Oct  7 11:01:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:47 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:01:47.761 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '60'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:01:48 np0005473739 nova_compute[259550]: 2025-10-07 15:01:48.202 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:48 np0005473739 nova_compute[259550]: 2025-10-07 15:01:48.778 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:48 np0005473739 nova_compute[259550]: 2025-10-07 15:01:48.850 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:48 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2997: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 51 KiB/s rd, 3.5 KiB/s wr, 76 op/s
Oct  7 11:01:49 np0005473739 nova_compute[259550]: 2025-10-07 15:01:49.005 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:49 np0005473739 podman[431602]: 2025-10-07 15:01:49.090654343 +0000 UTC m=+0.062359751 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Oct  7 11:01:49 np0005473739 podman[431604]: 2025-10-07 15:01:49.113421705 +0000 UTC m=+0.090019513 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Oct  7 11:01:50 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v2998: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.9 KiB/s wr, 62 op/s
Oct  7 11:01:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e294 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e294 do_prune osdmap full prune enabled
Oct  7 11:01:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 e295: 3 total, 3 up, 3 in
Oct  7 11:01:52 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e295: 3 total, 3 up, 3 in
Oct  7 11:01:52 np0005473739 nova_compute[259550]: 2025-10-07 15:01:52.142 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849297.1406746, 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 11:01:52 np0005473739 nova_compute[259550]: 2025-10-07 15:01:52.143 2 INFO nova.compute.manager [-] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] VM Stopped (Lifecycle Event)#033[00m
Oct  7 11:01:52 np0005473739 nova_compute[259550]: 2025-10-07 15:01:52.161 2 DEBUG nova.compute.manager [None req-facf4e0b-9c32-48a4-8db3-3d26e187e541 - - - - - -] [instance: 98da6205-c6cb-48d4-9502-fa1ca0f3e4ce] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 11:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:01:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:01:52 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3000: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 41 KiB/s rd, 2.8 KiB/s wr, 60 op/s
Oct  7 11:01:53 np0005473739 nova_compute[259550]: 2025-10-07 15:01:53.206 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:53 np0005473739 nova_compute[259550]: 2025-10-07 15:01:53.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:54 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3001: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.0 KiB/s wr, 33 op/s
Oct  7 11:01:56 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3002: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 3.3 KiB/s rd, 307 B/s wr, 4 op/s
Oct  7 11:01:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:01:58 np0005473739 nova_compute[259550]: 2025-10-07 15:01:58.136 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849303.1347249, a6b18035-4aef-4825-90e6-799173979626 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 11:01:58 np0005473739 nova_compute[259550]: 2025-10-07 15:01:58.136 2 INFO nova.compute.manager [-] [instance: a6b18035-4aef-4825-90e6-799173979626] VM Stopped (Lifecycle Event)#033[00m
Oct  7 11:01:58 np0005473739 nova_compute[259550]: 2025-10-07 15:01:58.158 2 DEBUG nova.compute.manager [None req-702c23bd-510d-41b1-8c59-1872f9bfb0f8 - - - - - -] [instance: a6b18035-4aef-4825-90e6-799173979626] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 11:01:58 np0005473739 nova_compute[259550]: 2025-10-07 15:01:58.208 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:58 np0005473739 nova_compute[259550]: 2025-10-07 15:01:58.781 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:01:58 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3003: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:02:00.099 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:02:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:02:00.099 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:02:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:02:00.099 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:02:00 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3004: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:02 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3005: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:03 np0005473739 nova_compute[259550]: 2025-10-07 15:02:03.210 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:03 np0005473739 nova_compute[259550]: 2025-10-07 15:02:03.783 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:04 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3006: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:05 np0005473739 podman[431675]: 2025-10-07 15:02:05.260089447 +0000 UTC m=+0.059770773 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Oct  7 11:02:05 np0005473739 podman[431676]: 2025-10-07 15:02:05.28934344 +0000 UTC m=+0.086252943 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid)
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:02:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a33e09a8-bbe2-4bfa-8f64-c613452366db does not exist
Oct  7 11:02:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 63f67e98-eea7-415d-a8ca-cca60a341d70 does not exist
Oct  7 11:02:05 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 22f1b031-bc2c-41d1-ba99-e5abb0c4e027 does not exist
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:02:05 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:02:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:02:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:02:06 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:02:06 np0005473739 podman[431958]: 2025-10-07 15:02:06.47507539 +0000 UTC m=+0.073675140 container create 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 11:02:06 np0005473739 systemd[1]: Started libpod-conmon-994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe.scope.
Oct  7 11:02:06 np0005473739 podman[431958]: 2025-10-07 15:02:06.447127261 +0000 UTC m=+0.045727021 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:02:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:02:06 np0005473739 podman[431958]: 2025-10-07 15:02:06.557897782 +0000 UTC m=+0.156497512 container init 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 11:02:06 np0005473739 podman[431958]: 2025-10-07 15:02:06.567018664 +0000 UTC m=+0.165618374 container start 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 11:02:06 np0005473739 podman[431958]: 2025-10-07 15:02:06.570511526 +0000 UTC m=+0.169111266 container attach 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:02:06 np0005473739 gracious_pare[431975]: 167 167
Oct  7 11:02:06 np0005473739 systemd[1]: libpod-994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe.scope: Deactivated successfully.
Oct  7 11:02:06 np0005473739 conmon[431975]: conmon 994c94548b640c9e123d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe.scope/container/memory.events
Oct  7 11:02:06 np0005473739 podman[431958]: 2025-10-07 15:02:06.5763079 +0000 UTC m=+0.174907610 container died 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:02:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5c7ba3bde43503d335c6ef321c10f8734e53289cee42e8186c36dcae7c97616c-merged.mount: Deactivated successfully.
Oct  7 11:02:06 np0005473739 podman[431958]: 2025-10-07 15:02:06.616568245 +0000 UTC m=+0.215167955 container remove 994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_pare, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 11:02:06 np0005473739 systemd[1]: libpod-conmon-994c94548b640c9e123d3d4ff86ce4d08befb5fb5478a5cddf95b357fded8ffe.scope: Deactivated successfully.
Oct  7 11:02:06 np0005473739 podman[431999]: 2025-10-07 15:02:06.782425555 +0000 UTC m=+0.041908400 container create 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 11:02:06 np0005473739 systemd[1]: Started libpod-conmon-99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d.scope.
Oct  7 11:02:06 np0005473739 podman[431999]: 2025-10-07 15:02:06.763484533 +0000 UTC m=+0.022967398 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:02:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:02:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:06 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:06 np0005473739 podman[431999]: 2025-10-07 15:02:06.893002791 +0000 UTC m=+0.152485686 container init 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 11:02:06 np0005473739 podman[431999]: 2025-10-07 15:02:06.904076285 +0000 UTC m=+0.163559130 container start 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:02:06 np0005473739 podman[431999]: 2025-10-07 15:02:06.907801042 +0000 UTC m=+0.167283887 container attach 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 11:02:06 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3007: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:07 np0005473739 great_vaughan[432015]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:02:07 np0005473739 great_vaughan[432015]: --> relative data size: 1.0
Oct  7 11:02:07 np0005473739 great_vaughan[432015]: --> All data devices are unavailable
Oct  7 11:02:07 np0005473739 nova_compute[259550]: 2025-10-07 15:02:07.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:07 np0005473739 nova_compute[259550]: 2025-10-07 15:02:07.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:07 np0005473739 nova_compute[259550]: 2025-10-07 15:02:07.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 11:02:08 np0005473739 systemd[1]: libpod-99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d.scope: Deactivated successfully.
Oct  7 11:02:08 np0005473739 systemd[1]: libpod-99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d.scope: Consumed 1.047s CPU time.
Oct  7 11:02:08 np0005473739 podman[431999]: 2025-10-07 15:02:08.012654602 +0000 UTC m=+1.272137467 container died 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 11:02:08 np0005473739 nova_compute[259550]: 2025-10-07 15:02:08.018 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 11:02:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-aa56bc841a7d8c96c1f0e3eaa914378e19669d876ffb51c229d4c01459bf4715-merged.mount: Deactivated successfully.
Oct  7 11:02:08 np0005473739 podman[431999]: 2025-10-07 15:02:08.078519436 +0000 UTC m=+1.338002281 container remove 99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_vaughan, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:02:08 np0005473739 systemd[1]: libpod-conmon-99acbafa1b726e2a2c9125b2aa719af598d42a1c61b8a62f5a5adec4280dc85d.scope: Deactivated successfully.
Oct  7 11:02:08 np0005473739 nova_compute[259550]: 2025-10-07 15:02:08.212 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:08 np0005473739 podman[432193]: 2025-10-07 15:02:08.685599892 +0000 UTC m=+0.049538832 container create 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 11:02:08 np0005473739 systemd[1]: Started libpod-conmon-60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581.scope.
Oct  7 11:02:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:02:08 np0005473739 podman[432193]: 2025-10-07 15:02:08.661510095 +0000 UTC m=+0.025449125 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:02:08 np0005473739 podman[432193]: 2025-10-07 15:02:08.768201798 +0000 UTC m=+0.132140748 container init 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:02:08 np0005473739 podman[432193]: 2025-10-07 15:02:08.775373368 +0000 UTC m=+0.139312348 container start 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 11:02:08 np0005473739 fervent_lehmann[432210]: 167 167
Oct  7 11:02:08 np0005473739 systemd[1]: libpod-60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581.scope: Deactivated successfully.
Oct  7 11:02:08 np0005473739 nova_compute[259550]: 2025-10-07 15:02:08.786 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:08 np0005473739 podman[432193]: 2025-10-07 15:02:08.859159395 +0000 UTC m=+0.223098455 container attach 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True)
Oct  7 11:02:08 np0005473739 podman[432193]: 2025-10-07 15:02:08.860156242 +0000 UTC m=+0.224095182 container died 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 11:02:08 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3008: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9f38f4289801c27221813f818b1477a444af9ecc36f564fdccb045fd9e7d0ecb-merged.mount: Deactivated successfully.
Oct  7 11:02:09 np0005473739 podman[432193]: 2025-10-07 15:02:09.073533219 +0000 UTC m=+0.437472149 container remove 60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_lehmann, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:02:09 np0005473739 systemd[1]: libpod-conmon-60e5d5807e20088fbaa2fd493fcf751146d231386f3ef0079186fc4701dbd581.scope: Deactivated successfully.
Oct  7 11:02:09 np0005473739 podman[432235]: 2025-10-07 15:02:09.256088351 +0000 UTC m=+0.052301236 container create 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:02:09 np0005473739 systemd[1]: Started libpod-conmon-92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74.scope.
Oct  7 11:02:09 np0005473739 podman[432235]: 2025-10-07 15:02:09.231715365 +0000 UTC m=+0.027928270 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:02:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:02:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebf50abdec0fe2b98e3f573531d102839480d172c44db2ae4771cb663715a5c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebf50abdec0fe2b98e3f573531d102839480d172c44db2ae4771cb663715a5c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebf50abdec0fe2b98e3f573531d102839480d172c44db2ae4771cb663715a5c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ebf50abdec0fe2b98e3f573531d102839480d172c44db2ae4771cb663715a5c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:09 np0005473739 podman[432235]: 2025-10-07 15:02:09.362753103 +0000 UTC m=+0.158966018 container init 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:02:09 np0005473739 podman[432235]: 2025-10-07 15:02:09.370283362 +0000 UTC m=+0.166496247 container start 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 11:02:09 np0005473739 podman[432235]: 2025-10-07 15:02:09.387437656 +0000 UTC m=+0.183650571 container attach 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 11:02:10 np0005473739 sweet_galois[432252]: {
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:    "0": [
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:        {
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "devices": [
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "/dev/loop3"
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            ],
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_name": "ceph_lv0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_size": "21470642176",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "name": "ceph_lv0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "tags": {
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.cluster_name": "ceph",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.crush_device_class": "",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.encrypted": "0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.osd_id": "0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.type": "block",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.vdo": "0"
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            },
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "type": "block",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "vg_name": "ceph_vg0"
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:        }
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:    ],
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:    "1": [
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:        {
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "devices": [
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "/dev/loop4"
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            ],
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_name": "ceph_lv1",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_size": "21470642176",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "name": "ceph_lv1",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "tags": {
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.cluster_name": "ceph",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.crush_device_class": "",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.encrypted": "0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.osd_id": "1",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.type": "block",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.vdo": "0"
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            },
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "type": "block",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "vg_name": "ceph_vg1"
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:        }
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:    ],
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:    "2": [
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:        {
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "devices": [
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "/dev/loop5"
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            ],
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_name": "ceph_lv2",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_size": "21470642176",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "name": "ceph_lv2",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "tags": {
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.cluster_name": "ceph",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.crush_device_class": "",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.encrypted": "0",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.osd_id": "2",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.type": "block",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:                "ceph.vdo": "0"
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            },
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "type": "block",
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:            "vg_name": "ceph_vg2"
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:        }
Oct  7 11:02:10 np0005473739 sweet_galois[432252]:    ]
Oct  7 11:02:10 np0005473739 sweet_galois[432252]: }
Oct  7 11:02:10 np0005473739 systemd[1]: libpod-92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74.scope: Deactivated successfully.
Oct  7 11:02:10 np0005473739 podman[432235]: 2025-10-07 15:02:10.169886844 +0000 UTC m=+0.966099729 container died 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:02:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ebf50abdec0fe2b98e3f573531d102839480d172c44db2ae4771cb663715a5c9-merged.mount: Deactivated successfully.
Oct  7 11:02:10 np0005473739 podman[432235]: 2025-10-07 15:02:10.259319001 +0000 UTC m=+1.055531916 container remove 92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_galois, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:02:10 np0005473739 systemd[1]: libpod-conmon-92e68cff02571e56e39bf0bb9900e3ef41f471bec9a89689ed525655b024dd74.scope: Deactivated successfully.
Oct  7 11:02:10 np0005473739 podman[432414]: 2025-10-07 15:02:10.871888293 +0000 UTC m=+0.046993896 container create 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 11:02:10 np0005473739 systemd[1]: Started libpod-conmon-49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463.scope.
Oct  7 11:02:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:02:10 np0005473739 podman[432414]: 2025-10-07 15:02:10.852328935 +0000 UTC m=+0.027434598 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:02:10 np0005473739 podman[432414]: 2025-10-07 15:02:10.946165308 +0000 UTC m=+0.121270941 container init 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 11:02:10 np0005473739 podman[432414]: 2025-10-07 15:02:10.954406727 +0000 UTC m=+0.129512340 container start 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:02:10 np0005473739 podman[432414]: 2025-10-07 15:02:10.959987694 +0000 UTC m=+0.135093367 container attach 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 11:02:10 np0005473739 practical_cohen[432431]: 167 167
Oct  7 11:02:10 np0005473739 systemd[1]: libpod-49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463.scope: Deactivated successfully.
Oct  7 11:02:10 np0005473739 conmon[432431]: conmon 49a0dd8ca4640861b71e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463.scope/container/memory.events
Oct  7 11:02:10 np0005473739 podman[432414]: 2025-10-07 15:02:10.962716646 +0000 UTC m=+0.137822279 container died 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef)
Oct  7 11:02:10 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3009: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-88e3435f4ffe8ce2b216d9c1de293c00c749edd4a4eef6ea55b8e459a4497f75-merged.mount: Deactivated successfully.
Oct  7 11:02:11 np0005473739 podman[432414]: 2025-10-07 15:02:11.026085123 +0000 UTC m=+0.201190756 container remove 49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=practical_cohen, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 11:02:11 np0005473739 systemd[1]: libpod-conmon-49a0dd8ca4640861b71e7de9f6de25a8c804491ffba46a003b2b2bf5e0444463.scope: Deactivated successfully.
Oct  7 11:02:11 np0005473739 podman[432454]: 2025-10-07 15:02:11.20205878 +0000 UTC m=+0.039434065 container create 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:02:11 np0005473739 systemd[1]: Started libpod-conmon-70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece.scope.
Oct  7 11:02:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:02:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969ae5c6bd278792287c8709ad1c6c42bfeec90418b7315cbd094c60349ac656/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969ae5c6bd278792287c8709ad1c6c42bfeec90418b7315cbd094c60349ac656/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969ae5c6bd278792287c8709ad1c6c42bfeec90418b7315cbd094c60349ac656/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:11 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/969ae5c6bd278792287c8709ad1c6c42bfeec90418b7315cbd094c60349ac656/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:02:11 np0005473739 podman[432454]: 2025-10-07 15:02:11.184190208 +0000 UTC m=+0.021565513 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:02:11 np0005473739 podman[432454]: 2025-10-07 15:02:11.28630101 +0000 UTC m=+0.123676315 container init 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True)
Oct  7 11:02:11 np0005473739 podman[432454]: 2025-10-07 15:02:11.292986076 +0000 UTC m=+0.130361361 container start 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 11:02:11 np0005473739 podman[432454]: 2025-10-07 15:02:11.298247246 +0000 UTC m=+0.135622551 container attach 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:02:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]: {
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "osd_id": 2,
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "type": "bluestore"
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:    },
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "osd_id": 1,
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "type": "bluestore"
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:    },
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "osd_id": 0,
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:        "type": "bluestore"
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]:    }
Oct  7 11:02:12 np0005473739 bold_goldberg[432470]: }
Oct  7 11:02:12 np0005473739 systemd[1]: libpod-70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece.scope: Deactivated successfully.
Oct  7 11:02:12 np0005473739 conmon[432470]: conmon 70b7b676694bc6a76640 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece.scope/container/memory.events
Oct  7 11:02:12 np0005473739 podman[432454]: 2025-10-07 15:02:12.283162732 +0000 UTC m=+1.120538017 container died 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2)
Oct  7 11:02:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-969ae5c6bd278792287c8709ad1c6c42bfeec90418b7315cbd094c60349ac656-merged.mount: Deactivated successfully.
Oct  7 11:02:12 np0005473739 podman[432454]: 2025-10-07 15:02:12.341607608 +0000 UTC m=+1.178982933 container remove 70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_goldberg, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 11:02:12 np0005473739 systemd[1]: libpod-conmon-70b7b676694bc6a7664032280301bb0f575f5f69828080a48d17f9d2bb5e5ece.scope: Deactivated successfully.
Oct  7 11:02:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:02:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:02:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:02:12 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:02:12 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev af0312e4-c902-4695-a553-e0f1aa1634be does not exist
Oct  7 11:02:12 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bec471fd-f030-4c24-8bee-9494fea6a7e6 does not exist
Oct  7 11:02:12 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:02:12 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:02:12 np0005473739 nova_compute[259550]: 2025-10-07 15:02:12.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:12 np0005473739 nova_compute[259550]: 2025-10-07 15:02:12.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:12 np0005473739 nova_compute[259550]: 2025-10-07 15:02:12.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:12 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3010: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:13 np0005473739 nova_compute[259550]: 2025-10-07 15:02:13.217 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:13 np0005473739 nova_compute[259550]: 2025-10-07 15:02:13.788 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:02:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.0 total, 600.0 interval#012Cumulative writes: 44K writes, 180K keys, 44K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 44K writes, 15K syncs, 2.80 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2832 writes, 12K keys, 2832 commit groups, 1.0 writes per commit group, ingest: 13.48 MB, 0.02 MB/s#012Interval WAL: 2832 writes, 1026 syncs, 2.76 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:02:14 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3011: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:15 np0005473739 nova_compute[259550]: 2025-10-07 15:02:15.002 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:16 np0005473739 nova_compute[259550]: 2025-10-07 15:02:16.902 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:16 np0005473739 nova_compute[259550]: 2025-10-07 15:02:16.985 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:16 np0005473739 nova_compute[259550]: 2025-10-07 15:02:16.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:02:16 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3012: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:02:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 49K writes, 193K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s#012Cumulative WAL: 49K writes, 17K syncs, 2.76 writes per sync, written: 0.19 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2813 writes, 11K keys, 2813 commit groups, 1.0 writes per commit group, ingest: 11.66 MB, 0.02 MB/s#012Interval WAL: 2813 writes, 1097 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:02:18 np0005473739 nova_compute[259550]: 2025-10-07 15:02:18.219 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:18 np0005473739 nova_compute[259550]: 2025-10-07 15:02:18.792 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:18 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3013: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:19 np0005473739 nova_compute[259550]: 2025-10-07 15:02:19.065 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:20 np0005473739 podman[432567]: 2025-10-07 15:02:20.091869459 +0000 UTC m=+0.068675508 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:02:20 np0005473739 podman[432568]: 2025-10-07 15:02:20.125783886 +0000 UTC m=+0.102405621 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  7 11:02:20 np0005473739 nova_compute[259550]: 2025-10-07 15:02:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:20 np0005473739 nova_compute[259550]: 2025-10-07 15:02:20.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 11:02:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3014: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:02:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.80 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1971 writes, 8322 keys, 1971 commit groups, 1.0 writes per commit group, ingest: 9.10 MB, 0.02 MB/s#012Interval WAL: 1971 writes, 761 syncs, 2.59 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:02:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:02:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:02:22
Oct  7 11:02:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:02:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:02:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['vms', 'default.rgw.log', 'default.rgw.meta', 'backups', 'cephfs.cephfs.meta', '.mgr', '.rgw.root', 'default.rgw.control', 'volumes', 'cephfs.cephfs.data', 'images']
Oct  7 11:02:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:02:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3015: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:02:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:02:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:02:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:02:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:02:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:02:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:02:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:02:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:02:23 np0005473739 nova_compute[259550]: 2025-10-07 15:02:23.254 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:02:23 np0005473739 nova_compute[259550]: 2025-10-07 15:02:23.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:24 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 11:02:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3016: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:26 np0005473739 nova_compute[259550]: 2025-10-07 15:02:26.996 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3017: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:27 np0005473739 nova_compute[259550]: 2025-10-07 15:02:27.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.018 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.019 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.019 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.019 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.020 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.256 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.292 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "f218f03c-5cbd-49aa-bf01-f486d865d347" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.293 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.312 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.393 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.394 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.404 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.404 2 INFO nova.compute.claims [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Claim successful on node compute-0.ctlplane.example.com#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.516 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:02:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:02:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3100737693' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.554 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.535s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.739 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.740 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3549MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.741 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:02:28 np0005473739 nova_compute[259550]: 2025-10-07 15:02:28.795 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3018: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:02:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2774696288' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.058 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.542s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.064 2 DEBUG nova.compute.provider_tree [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.082 2 DEBUG nova.scheduler.client.report [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.110 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.716s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.111 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.114 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.373s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.226 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.226 2 DEBUG nova.network.neutron [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.247 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Instance f218f03c-5cbd-49aa-bf01-f486d865d347 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.247 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.248 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=59GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.253 2 INFO nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.281 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.302 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.484 2 DEBUG nova.network.neutron [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] No network configured allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1188#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.485 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance network_info: |[]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.570 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.572 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.573 2 INFO nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Creating image(s)#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.598 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.622 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.646 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.651 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.753 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.754 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "df58437405374f3849a6707234f9941cabaf3122" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.754 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.755 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "df58437405374f3849a6707234f9941cabaf3122" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:02:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:02:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1923467118' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.784 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.789 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f218f03c-5cbd-49aa-bf01-f486d865d347_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.831 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.529s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.837 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.854 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.877 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:02:29 np0005473739 nova_compute[259550]: 2025-10-07 15:02:29.877 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.764s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.622 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 f218f03c-5cbd-49aa-bf01-f486d865d347_disk --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.833s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.678 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] resizing rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk to 1073741824 resize /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:288#033[00m
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.767 2 DEBUG nova.objects.instance [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lazy-loading 'migration_context' on Instance uuid f218f03c-5cbd-49aa-bf01-f486d865d347 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.782 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.782 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Ensure instance console log exists: /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.783 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.783 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.784 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.785 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'boot_index': 0, 'device_type': 'disk', 'size': 0, 'disk_bus': 'virtio', 'device_name': '/dev/vda', 'encryption_format': None, 'encryption_options': None, 'encrypted': False, 'encryption_secret_uuid': None, 'image_id': '1c7e024e-3dd7-433b-91ff-f363a3d5a581'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.791 2 WARNING nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.796 2 DEBUG nova.virt.libvirt.host [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.797 2 DEBUG nova.virt.libvirt.host [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.800 2 DEBUG nova.virt.libvirt.host [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.800 2 DEBUG nova.virt.libvirt.host [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.801 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.801 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-10-07T14:02:29Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='46cdb24c-6466-4684-80c1-fe2aa86e3cea',id=4,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-10-07T14:02:30Z,direct_url=<?>,disk_format='qcow2',id=1c7e024e-3dd7-433b-91ff-f363a3d5a581,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='942c511a3bc940a3b6f39753738e8dad',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-10-07T14:02:31Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.801 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.802 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.802 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.802 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.802 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.803 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.803 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.803 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.804 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.804 2 DEBUG nova.virt.hardware [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Oct  7 11:02:30 np0005473739 nova_compute[259550]: 2025-10-07 15:02:30.806 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 11:02:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3019: 305 pgs: 305 active+clean; 47 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 268 KiB/s wr, 1 op/s
Oct  7 11:02:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 11:02:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/605935459' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 11:02:31 np0005473739 nova_compute[259550]: 2025-10-07 15:02:31.261 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.455s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 11:02:31 np0005473739 nova_compute[259550]: 2025-10-07 15:02:31.285 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 11:02:31 np0005473739 nova_compute[259550]: 2025-10-07 15:02:31.289 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 11:02:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon dump", "format": "json"} v 0) v1
Oct  7 11:02:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/698810980' entity='client.openstack' cmd=[{"prefix": "mon dump", "format": "json"}]: dispatch
Oct  7 11:02:31 np0005473739 nova_compute[259550]: 2025-10-07 15:02:31.719 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "ceph mon dump --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.430s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 11:02:31 np0005473739 nova_compute[259550]: 2025-10-07 15:02:31.723 2 DEBUG nova.objects.instance [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lazy-loading 'pci_devices' on Instance uuid f218f03c-5cbd-49aa-bf01-f486d865d347 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 11:02:31 np0005473739 nova_compute[259550]: 2025-10-07 15:02:31.743 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] End _get_guest_xml xml=<domain type="kvm">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  <uuid>f218f03c-5cbd-49aa-bf01-f486d865d347</uuid>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  <name>instance-00000099</name>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  <memory>131072</memory>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  <vcpu>1</vcpu>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  <metadata>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <nova:name>tempest-AggregatesAdminTestJSON-server-1262446659</nova:name>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <nova:creationTime>2025-10-07 15:02:30</nova:creationTime>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <nova:flavor name="m1.nano">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:        <nova:memory>128</nova:memory>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:        <nova:disk>1</nova:disk>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:        <nova:swap>0</nova:swap>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:        <nova:ephemeral>0</nova:ephemeral>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:        <nova:vcpus>1</nova:vcpus>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      </nova:flavor>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <nova:owner>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:        <nova:user uuid="d77c23d5cf1f4accb328083c0f0bf419">tempest-AggregatesAdminTestJSON-1571393624-project-member</nova:user>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:        <nova:project uuid="cb6361241083417fa69181a79e143485">tempest-AggregatesAdminTestJSON-1571393624</nova:project>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      </nova:owner>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <nova:root type="image" uuid="1c7e024e-3dd7-433b-91ff-f363a3d5a581"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <nova:ports/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    </nova:instance>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  </metadata>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  <sysinfo type="smbios">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <system>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <entry name="manufacturer">RDO</entry>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <entry name="product">OpenStack Compute</entry>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <entry name="serial">f218f03c-5cbd-49aa-bf01-f486d865d347</entry>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <entry name="uuid">f218f03c-5cbd-49aa-bf01-f486d865d347</entry>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <entry name="family">Virtual Machine</entry>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    </system>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  </sysinfo>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  <os>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <type arch="x86_64" machine="q35">hvm</type>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <boot dev="hd"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <smbios mode="sysinfo"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  </os>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  <features>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <acpi/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <apic/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <vmcoreinfo/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  </features>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  <clock offset="utc">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <timer name="pit" tickpolicy="delay"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <timer name="rtc" tickpolicy="catchup"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <timer name="hpet" present="no"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  </clock>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  <cpu mode="host-model" match="exact">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <topology sockets="1" cores="1" threads="1"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  </cpu>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  <devices>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <disk type="network" device="disk">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f218f03c-5cbd-49aa-bf01-f486d865d347_disk">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      </source>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      </auth>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <target dev="vda" bus="virtio"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    </disk>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <disk type="network" device="cdrom">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <driver type="raw" cache="none"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <source protocol="rbd" name="vms/f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:        <host name="192.168.122.100" port="6789"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      </source>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <auth username="openstack">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:        <secret type="ceph" uuid="82044f27-a8da-5b2a-a297-ff6afc620e1f"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      </auth>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <target dev="sda" bus="sata"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    </disk>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <serial type="pty">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <log file="/var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/console.log" append="off"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    </serial>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <video>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <model type="virtio"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    </video>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <input type="tablet" bus="usb"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <rng model="virtio">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <backend model="random">/dev/urandom</backend>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    </rng>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="pci" model="pcie-root-port"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <controller type="usb" index="0"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    <memballoon model="virtio">
Oct  7 11:02:31 np0005473739 nova_compute[259550]:      <stats period="10"/>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:    </memballoon>
Oct  7 11:02:31 np0005473739 nova_compute[259550]:  </devices>
Oct  7 11:02:31 np0005473739 nova_compute[259550]: </domain>
Oct  7 11:02:31 np0005473739 nova_compute[259550]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Oct  7 11:02:31 np0005473739 nova_compute[259550]: 2025-10-07 15:02:31.828 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 11:02:31 np0005473739 nova_compute[259550]: 2025-10-07 15:02:31.829 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Oct  7 11:02:31 np0005473739 nova_compute[259550]: 2025-10-07 15:02:31.830 2 INFO nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Using config drive
Oct  7 11:02:31 np0005473739 nova_compute[259550]: 2025-10-07 15:02:31.854 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 11:02:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:32 np0005473739 nova_compute[259550]: 2025-10-07 15:02:32.344 2 INFO nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Creating config drive at /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config
Oct  7 11:02:32 np0005473739 nova_compute[259550]: 2025-10-07 15:02:32.350 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar5cfv8n execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 11:02:32 np0005473739 nova_compute[259550]: 2025-10-07 15:02:32.496 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar5cfv8n" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 11:02:32 np0005473739 nova_compute[259550]: 2025-10-07 15:02:32.522 2 DEBUG nova.storage.rbd_utils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] rbd image f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config does not exist __init__ /usr/lib/python3.9/site-packages/nova/storage/rbd_utils.py:80
Oct  7 11:02:32 np0005473739 nova_compute[259550]: 2025-10-07 15:02:32.525 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): rbd import --pool vms /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 11:02:32 np0005473739 nova_compute[259550]: 2025-10-07 15:02:32.689 2 DEBUG oslo_concurrency.processutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "rbd import --pool vms /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config f218f03c-5cbd-49aa-bf01-f486d865d347_disk.config --image-format=2 --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.164s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 11:02:32 np0005473739 nova_compute[259550]: 2025-10-07 15:02:32.690 2 INFO nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Deleting local config drive /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347/disk.config because it was imported into RBD.
Oct  7 11:02:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:02:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4010401128' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:02:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:02:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/4010401128' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:02:32 np0005473739 systemd-machined[214580]: New machine qemu-187-instance-00000099.
Oct  7 11:02:32 np0005473739 systemd[1]: Started Virtual Machine qemu-187-instance-00000099.
Oct  7 11:02:32 np0005473739 nova_compute[259550]: 2025-10-07 15:02:32.878 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 11:02:32 np0005473739 nova_compute[259550]: 2025-10-07 15:02:32.879 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 0.00015611518769942045 of space, bias 1.0, pg target 0.046834556309826136 quantized to 32 (current 32)
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:02:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:02:32 np0005473739 nova_compute[259550]: 2025-10-07 15:02:32.903 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  7 11:02:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3020: 305 pgs: 305 active+clean; 47 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 268 KiB/s wr, 1 op/s
Oct  7 11:02:33 np0005473739 ovn_controller[151684]: 2025-10-07T15:02:33Z|01663|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.259 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.470 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849353.4694266, f218f03c-5cbd-49aa-bf01-f486d865d347 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.471 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] VM Resumed (Lifecycle Event)
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.473 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.473 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.477 2 INFO nova.virt.libvirt.driver [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance spawned successfully.
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.477 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.492 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.498 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.503 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.503 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.504 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.504 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.504 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.505 2 DEBUG nova.virt.libvirt.driver [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.514 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.515 2 DEBUG nova.virt.driver [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] Emitting event <LifecycleEvent: 1759849353.4709637, f218f03c-5cbd-49aa-bf01-f486d865d347 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.515 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] VM Started (Lifecycle Event)
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.535 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.541 2 DEBUG nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.570 2 INFO nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Took 4.00 seconds to spawn the instance on the hypervisor.
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.570 2 DEBUG nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.572 2 INFO nova.compute.manager [None req-389ee5cf-a293-406b-ba1a-4df1b64ca1e7 - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] During sync_power_state the instance has a pending task (spawning). Skip.
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.632 2 INFO nova.compute.manager [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Took 5.27 seconds to build instance.
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.645 2 DEBUG oslo_concurrency.lockutils [None req-026a0ea9-67fd-495a-9f6b-7212ee3bb095 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:02:33 np0005473739 nova_compute[259550]: 2025-10-07 15:02:33.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 11:02:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3021: 305 pgs: 305 active+clean; 71 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 1.2 MiB/s wr, 33 op/s
Oct  7 11:02:35 np0005473739 nova_compute[259550]: 2025-10-07 15:02:35.729 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "f218f03c-5cbd-49aa-bf01-f486d865d347" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 11:02:35 np0005473739 nova_compute[259550]: 2025-10-07 15:02:35.729 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 11:02:35 np0005473739 nova_compute[259550]: 2025-10-07 15:02:35.730 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "f218f03c-5cbd-49aa-bf01-f486d865d347-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 11:02:35 np0005473739 nova_compute[259550]: 2025-10-07 15:02:35.730 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 11:02:35 np0005473739 nova_compute[259550]: 2025-10-07 15:02:35.730 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 11:02:35 np0005473739 nova_compute[259550]: 2025-10-07 15:02:35.731 2 INFO nova.compute.manager [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Terminating instance
Oct  7 11:02:35 np0005473739 nova_compute[259550]: 2025-10-07 15:02:35.732 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "refresh_cache-f218f03c-5cbd-49aa-bf01-f486d865d347" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Oct  7 11:02:35 np0005473739 nova_compute[259550]: 2025-10-07 15:02:35.733 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquired lock "refresh_cache-f218f03c-5cbd-49aa-bf01-f486d865d347" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Oct  7 11:02:35 np0005473739 nova_compute[259550]: 2025-10-07 15:02:35.733 2 DEBUG nova.network.neutron [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Oct  7 11:02:35 np0005473739 nova_compute[259550]: 2025-10-07 15:02:35.886 2 DEBUG nova.network.neutron [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 11:02:36 np0005473739 podman[433023]: 2025-10-07 15:02:36.083534128 +0000 UTC m=+0.063965714 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible)
Oct  7 11:02:36 np0005473739 podman[433022]: 2025-10-07 15:02:36.083550408 +0000 UTC m=+0.068301919 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:02:36 np0005473739 nova_compute[259550]: 2025-10-07 15:02:36.409 2 DEBUG nova.network.neutron [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 11:02:36 np0005473739 nova_compute[259550]: 2025-10-07 15:02:36.425 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Releasing lock "refresh_cache-f218f03c-5cbd-49aa-bf01-f486d865d347" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Oct  7 11:02:36 np0005473739 nova_compute[259550]: 2025-10-07 15:02:36.426 2 DEBUG nova.compute.manager [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Oct  7 11:02:36 np0005473739 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Deactivated successfully.
Oct  7 11:02:36 np0005473739 systemd[1]: machine-qemu\x2d187\x2dinstance\x2d00000099.scope: Consumed 3.651s CPU time.
Oct  7 11:02:36 np0005473739 systemd-machined[214580]: Machine qemu-187-instance-00000099 terminated.
Oct  7 11:02:36 np0005473739 nova_compute[259550]: 2025-10-07 15:02:36.651 2 INFO nova.virt.libvirt.driver [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance destroyed successfully.
Oct  7 11:02:36 np0005473739 nova_compute[259550]: 2025-10-07 15:02:36.651 2 DEBUG nova.objects.instance [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lazy-loading 'resources' on Instance uuid f218f03c-5cbd-49aa-bf01-f486d865d347 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Oct  7 11:02:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3022: 305 pgs: 305 active+clean; 88 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 900 KiB/s rd, 1.8 MiB/s wr, 66 op/s
Oct  7 11:02:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.062 2 INFO nova.virt.libvirt.driver [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Deleting instance files /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347_del
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.063 2 INFO nova.virt.libvirt.driver [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Deletion of /var/lib/nova/instances/f218f03c-5cbd-49aa-bf01-f486d865d347_del complete
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.116 2 INFO nova.compute.manager [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Took 0.69 seconds to destroy the instance on the hypervisor.
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.116 2 DEBUG oslo.service.loopingcall [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.117 2 DEBUG nova.compute.manager [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.118 2 DEBUG nova.network.neutron [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.345 2 DEBUG nova.network.neutron [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.421 2 DEBUG nova.network.neutron [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.440 2 INFO nova.compute.manager [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Took 0.32 seconds to deallocate network for instance.
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.495 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.495 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.542 2 DEBUG oslo_concurrency.processutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 11:02:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:02:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1823466636' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:02:37 np0005473739 nova_compute[259550]: 2025-10-07 15:02:37.994 2 DEBUG oslo_concurrency.processutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.451s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 11:02:38 np0005473739 nova_compute[259550]: 2025-10-07 15:02:38.000 2 DEBUG nova.compute.provider_tree [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 11:02:38 np0005473739 nova_compute[259550]: 2025-10-07 15:02:38.021 2 DEBUG nova.scheduler.client.report [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 11:02:38 np0005473739 nova_compute[259550]: 2025-10-07 15:02:38.052 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.557s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 11:02:38 np0005473739 nova_compute[259550]: 2025-10-07 15:02:38.080 2 INFO nova.scheduler.client.report [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Deleted allocations for instance f218f03c-5cbd-49aa-bf01-f486d865d347
Oct  7 11:02:38 np0005473739 nova_compute[259550]: 2025-10-07 15:02:38.154 2 DEBUG oslo_concurrency.lockutils [None req-adf2c40b-c54f-4346-a37e-4959f458adb7 d77c23d5cf1f4accb328083c0f0bf419 cb6361241083417fa69181a79e143485 - - default default] Lock "f218f03c-5cbd-49aa-bf01-f486d865d347" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.425s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 11:02:38 np0005473739 nova_compute[259550]: 2025-10-07 15:02:38.310 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:02:38 np0005473739 nova_compute[259550]: 2025-10-07 15:02:38.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:02:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3023: 305 pgs: 305 active+clean; 63 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 125 op/s
Oct  7 11:02:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3024: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.8 MiB/s wr, 127 op/s
Oct  7 11:02:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3025: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 125 op/s
Oct  7 11:02:43 np0005473739 nova_compute[259550]: 2025-10-07 15:02:43.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:02:43 np0005473739 nova_compute[259550]: 2025-10-07 15:02:43.800 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:02:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:02:44.801 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=61, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=60) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Oct  7 11:02:44 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:02:44.802 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Oct  7 11:02:44 np0005473739 nova_compute[259550]: 2025-10-07 15:02:44.802 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:02:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3026: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 1.5 MiB/s wr, 125 op/s
Oct  7 11:02:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3027: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.9 MiB/s rd, 594 KiB/s wr, 93 op/s
Oct  7 11:02:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:48 np0005473739 nova_compute[259550]: 2025-10-07 15:02:48.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:02:48 np0005473739 nova_compute[259550]: 2025-10-07 15:02:48.804 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:02:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3028: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.1 MiB/s rd, 1.2 KiB/s wr, 60 op/s
Oct  7 11:02:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3029: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 937 B/s rd, 0 B/s wr, 1 op/s
Oct  7 11:02:51 np0005473739 podman[433104]: 2025-10-07 15:02:51.0865058 +0000 UTC m=+0.067016394 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:02:51 np0005473739 podman[433105]: 2025-10-07 15:02:51.103077749 +0000 UTC m=+0.087914418 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  7 11:02:51 np0005473739 nova_compute[259550]: 2025-10-07 15:02:51.650 2 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1759849356.6480515, f218f03c-5cbd-49aa-bf01-f486d865d347 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Oct  7 11:02:51 np0005473739 nova_compute[259550]: 2025-10-07 15:02:51.650 2 INFO nova.compute.manager [-] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] VM Stopped (Lifecycle Event)#033[00m
Oct  7 11:02:51 np0005473739 nova_compute[259550]: 2025-10-07 15:02:51.694 2 DEBUG nova.compute.manager [None req-60248d07-2a9d-49ee-9367-0e17060c4e6d - - - - - -] [instance: f218f03c-5cbd-49aa-bf01-f486d865d347] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Oct  7 11:02:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:02:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:02:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3030: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:53 np0005473739 nova_compute[259550]: 2025-10-07 15:02:53.316 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:53 np0005473739 nova_compute[259550]: 2025-10-07 15:02:53.806 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:54 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:02:54.804 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '61'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:02:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3031: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3032: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:02:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:02:58 np0005473739 nova_compute[259550]: 2025-10-07 15:02:58.318 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:58 np0005473739 nova_compute[259550]: 2025-10-07 15:02:58.807 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:02:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3033: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:03:00.100 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:03:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:03:00.100 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:03:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:03:00.100 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:03:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3034: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3035: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:03 np0005473739 nova_compute[259550]: 2025-10-07 15:03:03.321 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:03 np0005473739 nova_compute[259550]: 2025-10-07 15:03:03.808 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #147. Immutable memtables: 0.
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.683490) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 89] Flushing memtable with next log file: 147
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384683542, "job": 89, "event": "flush_started", "num_memtables": 1, "num_entries": 1865, "num_deletes": 255, "total_data_size": 2997142, "memory_usage": 3047120, "flush_reason": "Manual Compaction"}
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 89] Level-0 flush table #148: started
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384701118, "cf_name": "default", "job": 89, "event": "table_file_creation", "file_number": 148, "file_size": 2944251, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 61598, "largest_seqno": 63462, "table_properties": {"data_size": 2935548, "index_size": 5452, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2181, "raw_key_size": 17689, "raw_average_key_size": 20, "raw_value_size": 2918188, "raw_average_value_size": 3361, "num_data_blocks": 241, "num_entries": 868, "num_filter_entries": 868, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849197, "oldest_key_time": 1759849197, "file_creation_time": 1759849384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 148, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 89] Flush lasted 17681 microseconds, and 6882 cpu microseconds.
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.701166) [db/flush_job.cc:967] [default] [JOB 89] Level-0 flush table #148: 2944251 bytes OK
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.701188) [db/memtable_list.cc:519] [default] Level-0 commit table #148 started
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.703452) [db/memtable_list.cc:722] [default] Level-0 commit table #148: memtable #1 done
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.703470) EVENT_LOG_v1 {"time_micros": 1759849384703465, "job": 89, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.703499) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 89] Try to delete WAL files size 2989172, prev total WAL file size 2989172, number of live WAL files 2.
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000144.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.704597) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036303234' seq:72057594037927935, type:22 .. '7061786F730036323736' seq:0, type:0; will stop at (end)
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 90] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 89 Base level 0, inputs: [148(2875KB)], [146(9891KB)]
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384704633, "job": 90, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [148], "files_L6": [146], "score": -1, "input_data_size": 13073580, "oldest_snapshot_seqno": -1}
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 90] Generated table #149: 8250 keys, 11330515 bytes, temperature: kUnknown
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384760097, "cf_name": "default", "job": 90, "event": "table_file_creation", "file_number": 149, "file_size": 11330515, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11275555, "index_size": 33215, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20677, "raw_key_size": 215752, "raw_average_key_size": 26, "raw_value_size": 11128658, "raw_average_value_size": 1348, "num_data_blocks": 1295, "num_entries": 8250, "num_filter_entries": 8250, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849384, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 149, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.760318) [db/compaction/compaction_job.cc:1663] [default] [JOB 90] Compacted 1@0 + 1@6 files to L6 => 11330515 bytes
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.761834) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 235.4 rd, 204.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.8, 9.7 +0.0 blob) out(10.8 +0.0 blob), read-write-amplify(8.3) write-amplify(3.8) OK, records in: 8774, records dropped: 524 output_compression: NoCompression
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.762065) EVENT_LOG_v1 {"time_micros": 1759849384762056, "job": 90, "event": "compaction_finished", "compaction_time_micros": 55539, "compaction_time_cpu_micros": 26821, "output_level": 6, "num_output_files": 1, "total_output_size": 11330515, "num_input_records": 8774, "num_output_records": 8250, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000148.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384762580, "job": 90, "event": "table_file_deletion", "file_number": 148}
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000146.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849384764695, "job": 90, "event": "table_file_deletion", "file_number": 146}
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.704467) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.764845) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.764858) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.764859) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.764860) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:04 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:04.764862) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3036: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3037: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:07 np0005473739 podman[433146]: 2025-10-07 15:03:07.076463885 +0000 UTC m=+0.056094946 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Oct  7 11:03:07 np0005473739 podman[433147]: 2025-10-07 15:03:07.091942244 +0000 UTC m=+0.066067170 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 11:03:08 np0005473739 nova_compute[259550]: 2025-10-07 15:03:08.349 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:08 np0005473739 nova_compute[259550]: 2025-10-07 15:03:08.811 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3038: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:09 np0005473739 nova_compute[259550]: 2025-10-07 15:03:09.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:03:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3039: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:12 np0005473739 nova_compute[259550]: 2025-10-07 15:03:12.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:03:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3040: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:03:13 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a380af48-68b5-4f83-b746-7f721f235858 does not exist
Oct  7 11:03:13 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 7082204d-5285-4294-b615-590b5b9a44d9 does not exist
Oct  7 11:03:13 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c0903892-923a-4aef-b10c-e6cf431f1393 does not exist
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:03:13 np0005473739 nova_compute[259550]: 2025-10-07 15:03:13.351 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:03:13 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:03:13 np0005473739 nova_compute[259550]: 2025-10-07 15:03:13.814 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:13 np0005473739 podman[433455]: 2025-10-07 15:03:13.926340966 +0000 UTC m=+0.054857042 container create cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2)
Oct  7 11:03:13 np0005473739 systemd[1]: Started libpod-conmon-cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b.scope.
Oct  7 11:03:13 np0005473739 nova_compute[259550]: 2025-10-07 15:03:13.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:03:13 np0005473739 podman[433455]: 2025-10-07 15:03:13.894975916 +0000 UTC m=+0.023492012 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:03:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:03:14 np0005473739 podman[433455]: 2025-10-07 15:03:14.008505521 +0000 UTC m=+0.137021597 container init cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:03:14 np0005473739 podman[433455]: 2025-10-07 15:03:14.015096825 +0000 UTC m=+0.143612891 container start cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 11:03:14 np0005473739 ecstatic_yalow[433471]: 167 167
Oct  7 11:03:14 np0005473739 systemd[1]: libpod-cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b.scope: Deactivated successfully.
Oct  7 11:03:14 np0005473739 conmon[433471]: conmon cc07692f804ed904fc00 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b.scope/container/memory.events
Oct  7 11:03:14 np0005473739 podman[433455]: 2025-10-07 15:03:14.023794115 +0000 UTC m=+0.152310211 container attach cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:03:14 np0005473739 podman[433455]: 2025-10-07 15:03:14.024896424 +0000 UTC m=+0.153412500 container died cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 11:03:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1513757c0dc2b977b1fdd548db596ed46d5e374535daf9510ec63f4e2cafe524-merged.mount: Deactivated successfully.
Oct  7 11:03:14 np0005473739 podman[433455]: 2025-10-07 15:03:14.075554125 +0000 UTC m=+0.204070201 container remove cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_yalow, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 11:03:14 np0005473739 systemd[1]: libpod-conmon-cc07692f804ed904fc007e5556e8d442ae4b7cb57cee94687db195e2d7b0cb5b.scope: Deactivated successfully.
Oct  7 11:03:14 np0005473739 podman[433497]: 2025-10-07 15:03:14.232342844 +0000 UTC m=+0.039615989 container create 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:03:14 np0005473739 systemd[1]: Started libpod-conmon-42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae.scope.
Oct  7 11:03:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:03:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:14 np0005473739 podman[433497]: 2025-10-07 15:03:14.215870119 +0000 UTC m=+0.023143294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:03:14 np0005473739 podman[433497]: 2025-10-07 15:03:14.3156539 +0000 UTC m=+0.122927105 container init 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:03:14 np0005473739 podman[433497]: 2025-10-07 15:03:14.32741337 +0000 UTC m=+0.134686525 container start 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:03:14 np0005473739 podman[433497]: 2025-10-07 15:03:14.330613676 +0000 UTC m=+0.137886881 container attach 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 11:03:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3041: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:15 np0005473739 brave_kowalevski[433514]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:03:15 np0005473739 brave_kowalevski[433514]: --> relative data size: 1.0
Oct  7 11:03:15 np0005473739 brave_kowalevski[433514]: --> All data devices are unavailable
Oct  7 11:03:15 np0005473739 systemd[1]: libpod-42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae.scope: Deactivated successfully.
Oct  7 11:03:15 np0005473739 systemd[1]: libpod-42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae.scope: Consumed 1.041s CPU time.
Oct  7 11:03:15 np0005473739 podman[433497]: 2025-10-07 15:03:15.410167076 +0000 UTC m=+1.217440251 container died 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:03:15 np0005473739 systemd[1]: var-lib-containers-storage-overlay-626d624c33b26321a320d3b8b6c032025d80a937b74aa0915704cb8618d31869-merged.mount: Deactivated successfully.
Oct  7 11:03:15 np0005473739 podman[433497]: 2025-10-07 15:03:15.51800206 +0000 UTC m=+1.325275215 container remove 42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_kowalevski, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 11:03:15 np0005473739 systemd[1]: libpod-conmon-42baadb33cdcf2c10ade99cdaa595717b25e450724a02b7fe4554a1b51a499ae.scope: Deactivated successfully.
Oct  7 11:03:15 np0005473739 nova_compute[259550]: 2025-10-07 15:03:15.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:03:16 np0005473739 podman[433696]: 2025-10-07 15:03:16.14642648 +0000 UTC m=+0.044773056 container create 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 11:03:16 np0005473739 systemd[1]: Started libpod-conmon-486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e.scope.
Oct  7 11:03:16 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:03:16 np0005473739 podman[433696]: 2025-10-07 15:03:16.128810545 +0000 UTC m=+0.027157151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:03:16 np0005473739 podman[433696]: 2025-10-07 15:03:16.238816945 +0000 UTC m=+0.137163541 container init 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 11:03:16 np0005473739 podman[433696]: 2025-10-07 15:03:16.246054588 +0000 UTC m=+0.144401164 container start 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 11:03:16 np0005473739 condescending_burnell[433712]: 167 167
Oct  7 11:03:16 np0005473739 systemd[1]: libpod-486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e.scope: Deactivated successfully.
Oct  7 11:03:16 np0005473739 podman[433696]: 2025-10-07 15:03:16.252459877 +0000 UTC m=+0.150806473 container attach 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 11:03:16 np0005473739 podman[433696]: 2025-10-07 15:03:16.253477274 +0000 UTC m=+0.151823850 container died 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 11:03:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c049bdc8b50039b485382191ff3aa3f1bd3e4a26f99c08026959927102f3bb79-merged.mount: Deactivated successfully.
Oct  7 11:03:16 np0005473739 podman[433696]: 2025-10-07 15:03:16.29373347 +0000 UTC m=+0.192080046 container remove 486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_burnell, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:03:16 np0005473739 systemd[1]: libpod-conmon-486b86bc5626d404316c0be6f019468bc7fee267f564439e7c9e22bf37b7dd6e.scope: Deactivated successfully.
Oct  7 11:03:16 np0005473739 podman[433736]: 2025-10-07 15:03:16.470897398 +0000 UTC m=+0.046762159 container create aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 11:03:16 np0005473739 systemd[1]: Started libpod-conmon-aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa.scope.
Oct  7 11:03:16 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:03:16 np0005473739 podman[433736]: 2025-10-07 15:03:16.453288982 +0000 UTC m=+0.029153743 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:03:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c41aaa0a4438d8aca5e6e75780ce704d8591b3bea4859dc8961bb728817df1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c41aaa0a4438d8aca5e6e75780ce704d8591b3bea4859dc8961bb728817df1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c41aaa0a4438d8aca5e6e75780ce704d8591b3bea4859dc8961bb728817df1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/96c41aaa0a4438d8aca5e6e75780ce704d8591b3bea4859dc8961bb728817df1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:16 np0005473739 podman[433736]: 2025-10-07 15:03:16.560119679 +0000 UTC m=+0.135984470 container init aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True)
Oct  7 11:03:16 np0005473739 podman[433736]: 2025-10-07 15:03:16.567061533 +0000 UTC m=+0.142926294 container start aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 11:03:16 np0005473739 podman[433736]: 2025-10-07 15:03:16.571193992 +0000 UTC m=+0.147058773 container attach aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 11:03:16 np0005473739 nova_compute[259550]: 2025-10-07 15:03:16.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:03:16 np0005473739 nova_compute[259550]: 2025-10-07 15:03:16.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:03:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3042: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]: {
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:    "0": [
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:        {
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "devices": [
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "/dev/loop3"
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            ],
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_name": "ceph_lv0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_size": "21470642176",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "name": "ceph_lv0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "tags": {
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.cluster_name": "ceph",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.crush_device_class": "",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.encrypted": "0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.osd_id": "0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.type": "block",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.vdo": "0"
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            },
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "type": "block",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "vg_name": "ceph_vg0"
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:        }
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:    ],
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:    "1": [
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:        {
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "devices": [
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "/dev/loop4"
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            ],
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_name": "ceph_lv1",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_size": "21470642176",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "name": "ceph_lv1",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "tags": {
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.cluster_name": "ceph",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.crush_device_class": "",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.encrypted": "0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.osd_id": "1",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.type": "block",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.vdo": "0"
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            },
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "type": "block",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "vg_name": "ceph_vg1"
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:        }
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:    ],
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:    "2": [
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:        {
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "devices": [
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "/dev/loop5"
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            ],
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_name": "ceph_lv2",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_size": "21470642176",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "name": "ceph_lv2",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "tags": {
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.cluster_name": "ceph",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.crush_device_class": "",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.encrypted": "0",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.osd_id": "2",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.type": "block",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:                "ceph.vdo": "0"
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            },
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "type": "block",
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:            "vg_name": "ceph_vg2"
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:        }
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]:    ]
Oct  7 11:03:17 np0005473739 keen_dubinsky[433752]: }
Oct  7 11:03:17 np0005473739 systemd[1]: libpod-aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa.scope: Deactivated successfully.
Oct  7 11:03:17 np0005473739 podman[433736]: 2025-10-07 15:03:17.404017883 +0000 UTC m=+0.979882674 container died aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:03:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay-96c41aaa0a4438d8aca5e6e75780ce704d8591b3bea4859dc8961bb728817df1-merged.mount: Deactivated successfully.
Oct  7 11:03:17 np0005473739 podman[433736]: 2025-10-07 15:03:17.466732932 +0000 UTC m=+1.042597693 container remove aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_dubinsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:03:17 np0005473739 systemd[1]: libpod-conmon-aefbec233d3325bf6fad2119653bfb29576f9eb1a2b583722fc7b2fd63fe30fa.scope: Deactivated successfully.
Oct  7 11:03:18 np0005473739 podman[433916]: 2025-10-07 15:03:18.12033476 +0000 UTC m=+0.044490909 container create 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 11:03:18 np0005473739 systemd[1]: Started libpod-conmon-34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f.scope.
Oct  7 11:03:18 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:03:18 np0005473739 podman[433916]: 2025-10-07 15:03:18.104142092 +0000 UTC m=+0.028298251 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:03:18 np0005473739 podman[433916]: 2025-10-07 15:03:18.208097073 +0000 UTC m=+0.132253252 container init 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:03:18 np0005473739 podman[433916]: 2025-10-07 15:03:18.215143059 +0000 UTC m=+0.139299208 container start 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:03:18 np0005473739 podman[433916]: 2025-10-07 15:03:18.219923295 +0000 UTC m=+0.144079464 container attach 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 11:03:18 np0005473739 friendly_carver[433933]: 167 167
Oct  7 11:03:18 np0005473739 systemd[1]: libpod-34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f.scope: Deactivated successfully.
Oct  7 11:03:18 np0005473739 conmon[433933]: conmon 34d0a59c2f6899ed48ad <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f.scope/container/memory.events
Oct  7 11:03:18 np0005473739 podman[433916]: 2025-10-07 15:03:18.222187276 +0000 UTC m=+0.146343415 container died 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:03:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5968717b5562fa77fca4e62b79d1af4dc1a341e03e4d5396692707d27c1a5e7b-merged.mount: Deactivated successfully.
Oct  7 11:03:18 np0005473739 podman[433916]: 2025-10-07 15:03:18.25368434 +0000 UTC m=+0.177840489 container remove 34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_carver, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 11:03:18 np0005473739 systemd[1]: libpod-conmon-34d0a59c2f6899ed48ad86afd8f5f90ab48ae7a817531603f18661b602fa367f.scope: Deactivated successfully.
Oct  7 11:03:18 np0005473739 nova_compute[259550]: 2025-10-07 15:03:18.353 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:18 np0005473739 podman[433957]: 2025-10-07 15:03:18.421541941 +0000 UTC m=+0.048160535 container create fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3)
Oct  7 11:03:18 np0005473739 systemd[1]: Started libpod-conmon-fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690.scope.
Oct  7 11:03:18 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:03:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4dfe621136d7cfc5fced739bb2eb02f1423f90ac813e8fbc11478ffab5f2c9/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4dfe621136d7cfc5fced739bb2eb02f1423f90ac813e8fbc11478ffab5f2c9/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4dfe621136d7cfc5fced739bb2eb02f1423f90ac813e8fbc11478ffab5f2c9/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2f4dfe621136d7cfc5fced739bb2eb02f1423f90ac813e8fbc11478ffab5f2c9/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:03:18 np0005473739 podman[433957]: 2025-10-07 15:03:18.491247506 +0000 UTC m=+0.117866120 container init fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default)
Oct  7 11:03:18 np0005473739 podman[433957]: 2025-10-07 15:03:18.400020322 +0000 UTC m=+0.026638946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:03:18 np0005473739 podman[433957]: 2025-10-07 15:03:18.501720464 +0000 UTC m=+0.128339068 container start fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:03:18 np0005473739 podman[433957]: 2025-10-07 15:03:18.506444438 +0000 UTC m=+0.133063072 container attach fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:03:18 np0005473739 nova_compute[259550]: 2025-10-07 15:03:18.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3043: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]: {
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "osd_id": 2,
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "type": "bluestore"
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:    },
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "osd_id": 1,
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "type": "bluestore"
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:    },
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "osd_id": 0,
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:        "type": "bluestore"
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]:    }
Oct  7 11:03:19 np0005473739 nifty_cannon[433973]: }
Oct  7 11:03:19 np0005473739 systemd[1]: libpod-fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690.scope: Deactivated successfully.
Oct  7 11:03:19 np0005473739 systemd[1]: libpod-fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690.scope: Consumed 1.027s CPU time.
Oct  7 11:03:19 np0005473739 podman[433957]: 2025-10-07 15:03:19.525734444 +0000 UTC m=+1.152353078 container died fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:03:19 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2f4dfe621136d7cfc5fced739bb2eb02f1423f90ac813e8fbc11478ffab5f2c9-merged.mount: Deactivated successfully.
Oct  7 11:03:19 np0005473739 podman[433957]: 2025-10-07 15:03:19.591291479 +0000 UTC m=+1.217910073 container remove fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nifty_cannon, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 11:03:19 np0005473739 systemd[1]: libpod-conmon-fd546e8165f24d8f4448db6cbed0ca9114056e8a10f4350284b19b0e85f2a690.scope: Deactivated successfully.
Oct  7 11:03:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:03:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:03:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:03:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:03:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 20b46b08-a93b-42a2-81b4-85d304abd7e6 does not exist
Oct  7 11:03:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev aca6e8d7-169f-4641-a8a2-f08e92cbb8ad does not exist
Oct  7 11:03:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:03:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:03:20 np0005473739 nova_compute[259550]: 2025-10-07 15:03:20.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:03:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3044: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:22 np0005473739 podman[434068]: 2025-10-07 15:03:22.066201357 +0000 UTC m=+0.051836813 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Oct  7 11:03:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:22 np0005473739 podman[434069]: 2025-10-07 15:03:22.13581544 +0000 UTC m=+0.115619451 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Oct  7 11:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:03:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:03:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:03:22
Oct  7 11:03:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:03:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:03:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', 'images', 'cephfs.cephfs.meta', 'vms', 'cephfs.cephfs.data', 'default.rgw.control', 'default.rgw.meta', 'default.rgw.log', '.mgr', '.rgw.root', 'backups']
Oct  7 11:03:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:03:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3045: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:03:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:03:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:03:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:03:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:03:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:03:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:03:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:03:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:03:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:03:23 np0005473739 nova_compute[259550]: 2025-10-07 15:03:23.357 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:23 np0005473739 nova_compute[259550]: 2025-10-07 15:03:23.818 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3046: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3047: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:28 np0005473739 nova_compute[259550]: 2025-10-07 15:03:28.359 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:28 np0005473739 nova_compute[259550]: 2025-10-07 15:03:28.820 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:28 np0005473739 nova_compute[259550]: 2025-10-07 15:03:28.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:03:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3048: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:29 np0005473739 nova_compute[259550]: 2025-10-07 15:03:29.099 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:03:29 np0005473739 nova_compute[259550]: 2025-10-07 15:03:29.099 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:03:29 np0005473739 nova_compute[259550]: 2025-10-07 15:03:29.100 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:03:29 np0005473739 nova_compute[259550]: 2025-10-07 15:03:29.100 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:03:29 np0005473739 nova_compute[259550]: 2025-10-07 15:03:29.100 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:03:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:03:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2450312774' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:03:29 np0005473739 nova_compute[259550]: 2025-10-07 15:03:29.554 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.454s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:03:29 np0005473739 nova_compute[259550]: 2025-10-07 15:03:29.733 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:03:29 np0005473739 nova_compute[259550]: 2025-10-07 15:03:29.734 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3600MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:03:29 np0005473739 nova_compute[259550]: 2025-10-07 15:03:29.735 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:03:29 np0005473739 nova_compute[259550]: 2025-10-07 15:03:29.735 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:03:30 np0005473739 nova_compute[259550]: 2025-10-07 15:03:30.016 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:03:30 np0005473739 nova_compute[259550]: 2025-10-07 15:03:30.017 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:03:30 np0005473739 nova_compute[259550]: 2025-10-07 15:03:30.035 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:03:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:03:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2216449213' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:03:30 np0005473739 nova_compute[259550]: 2025-10-07 15:03:30.572 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.537s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:03:30 np0005473739 nova_compute[259550]: 2025-10-07 15:03:30.579 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:03:30 np0005473739 nova_compute[259550]: 2025-10-07 15:03:30.601 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:03:30 np0005473739 nova_compute[259550]: 2025-10-07 15:03:30.644 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:03:30 np0005473739 nova_compute[259550]: 2025-10-07 15:03:30.645 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.910s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:03:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3049: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct  7 11:03:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:32 np0005473739 nova_compute[259550]: 2025-10-07 15:03:32.646 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:03:32 np0005473739 nova_compute[259550]: 2025-10-07 15:03:32.646 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:03:32 np0005473739 nova_compute[259550]: 2025-10-07 15:03:32.646 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:03:32 np0005473739 nova_compute[259550]: 2025-10-07 15:03:32.673 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:03:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:03:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713885254' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:03:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:03:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3713885254' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006662398458357752 of space, bias 1.0, pg target 0.19987195375073258 quantized to 32 (current 32)
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:03:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:03:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3050: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct  7 11:03:33 np0005473739 nova_compute[259550]: 2025-10-07 15:03:33.363 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e295 do_prune osdmap full prune enabled
Oct  7 11:03:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e296 e296: 3 total, 3 up, 3 in
Oct  7 11:03:33 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e296: 3 total, 3 up, 3 in
Oct  7 11:03:33 np0005473739 nova_compute[259550]: 2025-10-07 15:03:33.822 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:34 np0005473739 nova_compute[259550]: 2025-10-07 15:03:34.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:03:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3052: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 35 KiB/s rd, 614 B/s wr, 56 op/s
Oct  7 11:03:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3053: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 50 KiB/s rd, 921 B/s wr, 81 op/s
Oct  7 11:03:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:38 np0005473739 podman[434159]: 2025-10-07 15:03:38.071866507 +0000 UTC m=+0.066068739 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 11:03:38 np0005473739 podman[434160]: 2025-10-07 15:03:38.093002496 +0000 UTC m=+0.083934672 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, managed_by=edpm_ansible)
Oct  7 11:03:38 np0005473739 nova_compute[259550]: 2025-10-07 15:03:38.365 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:38 np0005473739 nova_compute[259550]: 2025-10-07 15:03:38.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3054: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 61 KiB/s rd, 1.4 KiB/s wr, 96 op/s
Oct  7 11:03:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3055: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 59 KiB/s rd, 1.4 KiB/s wr, 92 op/s
Oct  7 11:03:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e296 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e296 do_prune osdmap full prune enabled
Oct  7 11:03:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 e297: 3 total, 3 up, 3 in
Oct  7 11:03:42 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e297: 3 total, 3 up, 3 in
Oct  7 11:03:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3057: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 879 B/s wr, 42 op/s
Oct  7 11:03:43 np0005473739 nova_compute[259550]: 2025-10-07 15:03:43.366 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:43 np0005473739 nova_compute[259550]: 2025-10-07 15:03:43.825 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3058: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 26 KiB/s rd, 818 B/s wr, 39 op/s
Oct  7 11:03:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3059: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 15 op/s
Oct  7 11:03:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:48 np0005473739 nova_compute[259550]: 2025-10-07 15:03:48.368 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:48 np0005473739 nova_compute[259550]: 2025-10-07 15:03:48.828 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3060: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3061: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #150. Immutable memtables: 0.
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.137113) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 91] Flushing memtable with next log file: 150
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432137358, "job": 91, "event": "flush_started", "num_memtables": 1, "num_entries": 658, "num_deletes": 252, "total_data_size": 778652, "memory_usage": 791376, "flush_reason": "Manual Compaction"}
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 91] Level-0 flush table #151: started
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432143782, "cf_name": "default", "job": 91, "event": "table_file_creation", "file_number": 151, "file_size": 531315, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 63463, "largest_seqno": 64120, "table_properties": {"data_size": 528210, "index_size": 1012, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1029, "raw_key_size": 8428, "raw_average_key_size": 21, "raw_value_size": 521618, "raw_average_value_size": 1300, "num_data_blocks": 45, "num_entries": 401, "num_filter_entries": 401, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849385, "oldest_key_time": 1759849385, "file_creation_time": 1759849432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 151, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 91] Flush lasted 7140 microseconds, and 3095 cpu microseconds.
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.144262) [db/flush_job.cc:967] [default] [JOB 91] Level-0 flush table #151: 531315 bytes OK
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.144385) [db/memtable_list.cc:519] [default] Level-0 commit table #151 started
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.145856) [db/memtable_list.cc:722] [default] Level-0 commit table #151: memtable #1 done
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.145877) EVENT_LOG_v1 {"time_micros": 1759849432145868, "job": 91, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.145906) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 91] Try to delete WAL files size 775138, prev total WAL file size 775138, number of live WAL files 2.
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000147.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.147162) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032353033' seq:72057594037927935, type:22 .. '6D6772737461740032373534' seq:0, type:0; will stop at (end)
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 92] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 91 Base level 0, inputs: [151(518KB)], [149(10MB)]
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432147258, "job": 92, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [151], "files_L6": [149], "score": -1, "input_data_size": 11861830, "oldest_snapshot_seqno": -1}
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 92] Generated table #152: 8150 keys, 8793005 bytes, temperature: kUnknown
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432239018, "cf_name": "default", "job": 92, "event": "table_file_creation", "file_number": 152, "file_size": 8793005, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 8742871, "index_size": 28645, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20421, "raw_key_size": 213875, "raw_average_key_size": 26, "raw_value_size": 8601838, "raw_average_value_size": 1055, "num_data_blocks": 1105, "num_entries": 8150, "num_filter_entries": 8150, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849432, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 152, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.239583) [db/compaction/compaction_job.cc:1663] [default] [JOB 92] Compacted 1@0 + 1@6 files to L6 => 8793005 bytes
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.241422) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 128.9 rd, 95.5 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.5, 10.8 +0.0 blob) out(8.4 +0.0 blob), read-write-amplify(38.9) write-amplify(16.5) OK, records in: 8651, records dropped: 501 output_compression: NoCompression
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.241448) EVENT_LOG_v1 {"time_micros": 1759849432241435, "job": 92, "event": "compaction_finished", "compaction_time_micros": 92040, "compaction_time_cpu_micros": 43291, "output_level": 6, "num_output_files": 1, "total_output_size": 8793005, "num_input_records": 8651, "num_output_records": 8150, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000151.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432242178, "job": 92, "event": "table_file_deletion", "file_number": 151}
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000149.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849432245355, "job": 92, "event": "table_file_deletion", "file_number": 149}
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.147005) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.245513) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.245520) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.245526) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.245539) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:52 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:03:52.245541) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:03:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:03:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3062: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:53 np0005473739 podman[434200]: 2025-10-07 15:03:53.067254629 +0000 UTC m=+0.052755577 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 11:03:53 np0005473739 podman[434201]: 2025-10-07 15:03:53.126767654 +0000 UTC m=+0.111683456 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Oct  7 11:03:53 np0005473739 nova_compute[259550]: 2025-10-07 15:03:53.370 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:53 np0005473739 nova_compute[259550]: 2025-10-07 15:03:53.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3063: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3064: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:03:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:03:58 np0005473739 nova_compute[259550]: 2025-10-07 15:03:58.374 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:58 np0005473739 nova_compute[259550]: 2025-10-07 15:03:58.832 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:03:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3065: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:04:00.102 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:04:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:04:00.102 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:04:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:04:00.103 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:04:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3066: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3067: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:03 np0005473739 nova_compute[259550]: 2025-10-07 15:04:03.376 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:03 np0005473739 nova_compute[259550]: 2025-10-07 15:04:03.834 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3068: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3069: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:08 np0005473739 nova_compute[259550]: 2025-10-07 15:04:08.378 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:08 np0005473739 nova_compute[259550]: 2025-10-07 15:04:08.837 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3070: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:09 np0005473739 podman[434245]: 2025-10-07 15:04:09.070169355 +0000 UTC m=+0.056249609 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Oct  7 11:04:09 np0005473739 podman[434246]: 2025-10-07 15:04:09.0816854 +0000 UTC m=+0.058999352 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 11:04:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3071: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:11 np0005473739 nova_compute[259550]: 2025-10-07 15:04:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:04:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3072: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:13 np0005473739 nova_compute[259550]: 2025-10-07 15:04:13.380 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:13 np0005473739 nova_compute[259550]: 2025-10-07 15:04:13.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:14 np0005473739 nova_compute[259550]: 2025-10-07 15:04:14.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:04:14 np0005473739 nova_compute[259550]: 2025-10-07 15:04:14.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:04:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3073: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:16 np0005473739 nova_compute[259550]: 2025-10-07 15:04:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:04:16 np0005473739 nova_compute[259550]: 2025-10-07 15:04:16.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:04:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3074: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:17 np0005473739 nova_compute[259550]: 2025-10-07 15:04:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:04:18 np0005473739 nova_compute[259550]: 2025-10-07 15:04:18.382 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:18 np0005473739 nova_compute[259550]: 2025-10-07 15:04:18.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3075: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:04:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3076: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:21 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:21 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:04:21 np0005473739 nova_compute[259550]: 2025-10-07 15:04:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:04:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:04:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:04:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:04:22
Oct  7 11:04:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:04:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:04:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['backups', '.rgw.root', 'default.rgw.control', 'cephfs.cephfs.data', '.mgr', 'images', 'cephfs.cephfs.meta', 'default.rgw.log', 'vms', 'default.rgw.meta', 'volumes']
Oct  7 11:04:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:04:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3077: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:04:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:04:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:04:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:04:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:04:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:04:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:04:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:04:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:04:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:04:23 np0005473739 nova_compute[259550]: 2025-10-07 15:04:23.385 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:23 np0005473739 nova_compute[259550]: 2025-10-07 15:04:23.845 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:24 np0005473739 podman[434535]: 2025-10-07 15:04:24.115992322 +0000 UTC m=+0.085673558 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible)
Oct  7 11:04:24 np0005473739 podman[434536]: 2025-10-07 15:04:24.154423879 +0000 UTC m=+0.121717922 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Oct  7 11:04:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3078: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:26 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3079: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:27 np0005473739 podman[434718]: 2025-10-07 15:04:27.29446248 +0000 UTC m=+0.027005666 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:04:27 np0005473739 podman[434718]: 2025-10-07 15:04:27.437436193 +0000 UTC m=+0.169979389 container create 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:04:27 np0005473739 systemd[1]: Started libpod-conmon-0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64.scope.
Oct  7 11:04:27 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:04:27 np0005473739 podman[434718]: 2025-10-07 15:04:27.64285452 +0000 UTC m=+0.375397966 container init 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 11:04:27 np0005473739 podman[434718]: 2025-10-07 15:04:27.65571452 +0000 UTC m=+0.388257676 container start 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:04:27 np0005473739 systemd[1]: libpod-0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64.scope: Deactivated successfully.
Oct  7 11:04:27 np0005473739 busy_curran[434734]: 167 167
Oct  7 11:04:27 np0005473739 conmon[434734]: conmon 0d84bd59d58d674abb07 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64.scope/container/memory.events
Oct  7 11:04:27 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:27 np0005473739 podman[434718]: 2025-10-07 15:04:27.72521477 +0000 UTC m=+0.457757976 container attach 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:04:27 np0005473739 podman[434718]: 2025-10-07 15:04:27.726383871 +0000 UTC m=+0.458927027 container died 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:04:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f9aab1a605212a214e59f9022b055ee525e778b28baa4251854a17b3b0374f7a-merged.mount: Deactivated successfully.
Oct  7 11:04:27 np0005473739 nova_compute[259550]: 2025-10-07 15:04:27.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:04:28 np0005473739 podman[434718]: 2025-10-07 15:04:28.059272721 +0000 UTC m=+0.791815877 container remove 0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=busy_curran, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:04:28 np0005473739 systemd[1]: libpod-conmon-0d84bd59d58d674abb07f235f6730d24c43cfdf8e6975089a38bd81a4cb70c64.scope: Deactivated successfully.
Oct  7 11:04:28 np0005473739 podman[434760]: 2025-10-07 15:04:28.192198008 +0000 UTC m=+0.022052924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:04:28 np0005473739 podman[434760]: 2025-10-07 15:04:28.290560512 +0000 UTC m=+0.120415408 container create b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 11:04:28 np0005473739 nova_compute[259550]: 2025-10-07 15:04:28.387 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:28 np0005473739 systemd[1]: Started libpod-conmon-b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5.scope.
Oct  7 11:04:28 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:04:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e0fd429d0f9424fb544d4a9fd177082e8b011eebaab1d183d8b7991ce70aeb5/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e0fd429d0f9424fb544d4a9fd177082e8b011eebaab1d183d8b7991ce70aeb5/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e0fd429d0f9424fb544d4a9fd177082e8b011eebaab1d183d8b7991ce70aeb5/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:28 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6e0fd429d0f9424fb544d4a9fd177082e8b011eebaab1d183d8b7991ce70aeb5/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:28 np0005473739 podman[434760]: 2025-10-07 15:04:28.465491481 +0000 UTC m=+0.295346397 container init b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 11:04:28 np0005473739 podman[434760]: 2025-10-07 15:04:28.477690333 +0000 UTC m=+0.307545219 container start b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS)
Oct  7 11:04:28 np0005473739 podman[434760]: 2025-10-07 15:04:28.528507239 +0000 UTC m=+0.358362195 container attach b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:04:28 np0005473739 nova_compute[259550]: 2025-10-07 15:04:28.846 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3080: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:29 np0005473739 awesome_borg[434776]: [
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:    {
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:        "available": false,
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:        "ceph_device": false,
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:        "device_id": "QEMU_DVD-ROM_QM00001",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:        "lsm_data": {},
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:        "lvs": [],
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:        "path": "/dev/sr0",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:        "rejected_reasons": [
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "Has a FileSystem",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "Insufficient space (<5GB)"
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:        ],
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:        "sys_api": {
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "actuators": null,
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "device_nodes": "sr0",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "devname": "sr0",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "human_readable_size": "482.00 KB",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "id_bus": "ata",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "model": "QEMU DVD-ROM",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "nr_requests": "2",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "parent": "/dev/sr0",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "partitions": {},
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "path": "/dev/sr0",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "removable": "1",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "rev": "2.5+",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "ro": "0",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "rotational": "0",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "sas_address": "",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "sas_device_handle": "",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "scheduler_mode": "mq-deadline",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "sectors": 0,
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "sectorsize": "2048",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "size": 493568.0,
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "support_discard": "2048",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "type": "disk",
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:            "vendor": "QEMU"
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:        }
Oct  7 11:04:29 np0005473739 awesome_borg[434776]:    }
Oct  7 11:04:29 np0005473739 awesome_borg[434776]: ]
Oct  7 11:04:29 np0005473739 nova_compute[259550]: 2025-10-07 15:04:29.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:04:30 np0005473739 systemd[1]: libpod-b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5.scope: Deactivated successfully.
Oct  7 11:04:30 np0005473739 systemd[1]: libpod-b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5.scope: Consumed 1.554s CPU time.
Oct  7 11:04:30 np0005473739 podman[436985]: 2025-10-07 15:04:30.042739443 +0000 UTC m=+0.024893150 container died b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:04:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6e0fd429d0f9424fb544d4a9fd177082e8b011eebaab1d183d8b7991ce70aeb5-merged.mount: Deactivated successfully.
Oct  7 11:04:30 np0005473739 nova_compute[259550]: 2025-10-07 15:04:30.145 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:04:30 np0005473739 nova_compute[259550]: 2025-10-07 15:04:30.145 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:04:30 np0005473739 nova_compute[259550]: 2025-10-07 15:04:30.146 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:04:30 np0005473739 nova_compute[259550]: 2025-10-07 15:04:30.146 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:04:30 np0005473739 nova_compute[259550]: 2025-10-07 15:04:30.146 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:04:30 np0005473739 podman[436985]: 2025-10-07 15:04:30.251742554 +0000 UTC m=+0.233896241 container remove b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=awesome_borg, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:04:30 np0005473739 systemd[1]: libpod-conmon-b763fbfe371d294ad49b39020ab7c47e5a3d9ef3610ae1b155326e56241017c5.scope: Deactivated successfully.
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:30 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a2b91a4c-6db3-4430-b7ce-4122ba644cff does not exist
Oct  7 11:04:30 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 28ca5ebc-c64a-46de-9ba8-9e78f31f1052 does not exist
Oct  7 11:04:30 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ea3b936b-ffe4-4693-ad20-689f52124aca does not exist
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:04:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2992853080' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:04:30 np0005473739 nova_compute[259550]: 2025-10-07 15:04:30.643 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.497s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:04:30 np0005473739 nova_compute[259550]: 2025-10-07 15:04:30.815 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:04:30 np0005473739 nova_compute[259550]: 2025-10-07 15:04:30.817 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3589MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:04:30 np0005473739 nova_compute[259550]: 2025-10-07 15:04:30.817 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:04:30 np0005473739 nova_compute[259550]: 2025-10-07 15:04:30.818 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:04:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3081: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.083 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.083 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:04:31 np0005473739 podman[437162]: 2025-10-07 15:04:31.143002891 +0000 UTC m=+0.118941419 container create 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:04:31 np0005473739 podman[437162]: 2025-10-07 15:04:31.052374133 +0000 UTC m=+0.028312711 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.148 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.225 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 11:04:31 np0005473739 systemd[1]: Started libpod-conmon-88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f.scope.
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.226 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.241 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 11:04:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.262 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.280 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:04:31 np0005473739 podman[437162]: 2025-10-07 15:04:31.319052521 +0000 UTC m=+0.294991099 container init 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 11:04:31 np0005473739 podman[437162]: 2025-10-07 15:04:31.330717669 +0000 UTC m=+0.306656197 container start 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:04:31 np0005473739 romantic_khorana[437178]: 167 167
Oct  7 11:04:31 np0005473739 systemd[1]: libpod-88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f.scope: Deactivated successfully.
Oct  7 11:04:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:04:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:31 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:04:31 np0005473739 podman[437162]: 2025-10-07 15:04:31.395270647 +0000 UTC m=+0.371209175 container attach 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 11:04:31 np0005473739 podman[437162]: 2025-10-07 15:04:31.395974626 +0000 UTC m=+0.371913164 container died 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:04:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2fcf70b9d0bf1ad0356037b6f0d1d5a9e9652cfea17e7b66e7d3562f7ef5bad4-merged.mount: Deactivated successfully.
Oct  7 11:04:31 np0005473739 podman[437162]: 2025-10-07 15:04:31.639065639 +0000 UTC m=+0.615004167 container remove 88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=romantic_khorana, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:04:31 np0005473739 systemd[1]: libpod-conmon-88d2c02f3b96888547796670571930f6da06b61936c4c67bc9263a43edad116f.scope: Deactivated successfully.
Oct  7 11:04:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:04:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1203092699' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.813 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.533s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.823 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:04:31 np0005473739 podman[437225]: 2025-10-07 15:04:31.838036215 +0000 UTC m=+0.081893148 container create 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.839 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.840 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:04:31 np0005473739 nova_compute[259550]: 2025-10-07 15:04:31.840 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.023s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:04:31 np0005473739 podman[437225]: 2025-10-07 15:04:31.777980675 +0000 UTC m=+0.021837628 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:04:31 np0005473739 systemd[1]: Started libpod-conmon-6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be.scope.
Oct  7 11:04:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:04:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:31 np0005473739 podman[437225]: 2025-10-07 15:04:31.962329184 +0000 UTC m=+0.206186147 container init 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:04:31 np0005473739 podman[437225]: 2025-10-07 15:04:31.970613084 +0000 UTC m=+0.214470017 container start 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:04:31 np0005473739 podman[437225]: 2025-10-07 15:04:31.974763744 +0000 UTC m=+0.218620677 container attach 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 11:04:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:04:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/453162746' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:04:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:04:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/453162746' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:04:32 np0005473739 systemd[1]: virtsecretd.service: Deactivated successfully.
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:04:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:04:33 np0005473739 focused_keldysh[437243]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:04:33 np0005473739 focused_keldysh[437243]: --> relative data size: 1.0
Oct  7 11:04:33 np0005473739 focused_keldysh[437243]: --> All data devices are unavailable
Oct  7 11:04:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3082: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:33 np0005473739 systemd[1]: libpod-6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be.scope: Deactivated successfully.
Oct  7 11:04:33 np0005473739 systemd[1]: libpod-6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be.scope: Consumed 1.063s CPU time.
Oct  7 11:04:33 np0005473739 podman[437225]: 2025-10-07 15:04:33.092975087 +0000 UTC m=+1.336832070 container died 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:04:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a617e74c30fdd2d7db612de5ab95898140bc0cd4d4ef503e5c5c88f891812185-merged.mount: Deactivated successfully.
Oct  7 11:04:33 np0005473739 podman[437225]: 2025-10-07 15:04:33.151076575 +0000 UTC m=+1.394933518 container remove 6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=focused_keldysh, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:04:33 np0005473739 systemd[1]: libpod-conmon-6d58faa2464d4c1274b9435870f220b0455d59c12e77ec35842d7952a834d4be.scope: Deactivated successfully.
Oct  7 11:04:33 np0005473739 nova_compute[259550]: 2025-10-07 15:04:33.389 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:33 np0005473739 podman[437429]: 2025-10-07 15:04:33.717030432 +0000 UTC m=+0.042679410 container create 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:04:33 np0005473739 systemd[1]: Started libpod-conmon-848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a.scope.
Oct  7 11:04:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:04:33 np0005473739 podman[437429]: 2025-10-07 15:04:33.699537 +0000 UTC m=+0.025186008 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:04:33 np0005473739 podman[437429]: 2025-10-07 15:04:33.796963168 +0000 UTC m=+0.122612166 container init 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 11:04:33 np0005473739 podman[437429]: 2025-10-07 15:04:33.805906095 +0000 UTC m=+0.131555073 container start 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef)
Oct  7 11:04:33 np0005473739 podman[437429]: 2025-10-07 15:04:33.809917321 +0000 UTC m=+0.135566319 container attach 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:04:33 np0005473739 musing_kowalevski[437445]: 167 167
Oct  7 11:04:33 np0005473739 systemd[1]: libpod-848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a.scope: Deactivated successfully.
Oct  7 11:04:33 np0005473739 podman[437429]: 2025-10-07 15:04:33.814167474 +0000 UTC m=+0.139816472 container died 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 11:04:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c073287ce4f5a45282265fdb8e3f044cb1ae588ba17596314e8ad2d1d20b81b6-merged.mount: Deactivated successfully.
Oct  7 11:04:33 np0005473739 nova_compute[259550]: 2025-10-07 15:04:33.841 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:04:33 np0005473739 nova_compute[259550]: 2025-10-07 15:04:33.842 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:04:33 np0005473739 nova_compute[259550]: 2025-10-07 15:04:33.842 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:04:33 np0005473739 nova_compute[259550]: 2025-10-07 15:04:33.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:33 np0005473739 podman[437429]: 2025-10-07 15:04:33.850710161 +0000 UTC m=+0.176359149 container remove 848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_kowalevski, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:04:33 np0005473739 nova_compute[259550]: 2025-10-07 15:04:33.860 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:04:33 np0005473739 systemd[1]: libpod-conmon-848755228aaa944d75a7d16ac10501ba2a0da49dc3d89114dd8b80d08d72af5a.scope: Deactivated successfully.
Oct  7 11:04:34 np0005473739 podman[437469]: 2025-10-07 15:04:34.028730502 +0000 UTC m=+0.043450361 container create b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True)
Oct  7 11:04:34 np0005473739 systemd[1]: Started libpod-conmon-b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825.scope.
Oct  7 11:04:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:04:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f8aabab54f4fc043a25334a6ee0bd8bc6438ee1890d48164f03a28e9843034/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f8aabab54f4fc043a25334a6ee0bd8bc6438ee1890d48164f03a28e9843034/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f8aabab54f4fc043a25334a6ee0bd8bc6438ee1890d48164f03a28e9843034/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:34 np0005473739 podman[437469]: 2025-10-07 15:04:34.011819004 +0000 UTC m=+0.026538883 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:04:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/94f8aabab54f4fc043a25334a6ee0bd8bc6438ee1890d48164f03a28e9843034/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:34 np0005473739 podman[437469]: 2025-10-07 15:04:34.121428315 +0000 UTC m=+0.136148224 container init b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:04:34 np0005473739 podman[437469]: 2025-10-07 15:04:34.129870028 +0000 UTC m=+0.144589927 container start b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:04:34 np0005473739 podman[437469]: 2025-10-07 15:04:34.133801862 +0000 UTC m=+0.148521721 container attach b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]: {
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:    "0": [
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:        {
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "devices": [
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "/dev/loop3"
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            ],
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_name": "ceph_lv0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_size": "21470642176",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "name": "ceph_lv0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "tags": {
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.cluster_name": "ceph",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.crush_device_class": "",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.encrypted": "0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.osd_id": "0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.type": "block",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.vdo": "0"
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            },
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "type": "block",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "vg_name": "ceph_vg0"
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:        }
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:    ],
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:    "1": [
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:        {
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "devices": [
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "/dev/loop4"
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            ],
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_name": "ceph_lv1",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_size": "21470642176",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "name": "ceph_lv1",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "tags": {
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.cluster_name": "ceph",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.crush_device_class": "",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.encrypted": "0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.osd_id": "1",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.type": "block",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.vdo": "0"
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            },
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "type": "block",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "vg_name": "ceph_vg1"
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:        }
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:    ],
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:    "2": [
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:        {
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "devices": [
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "/dev/loop5"
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            ],
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_name": "ceph_lv2",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_size": "21470642176",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "name": "ceph_lv2",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "tags": {
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.cluster_name": "ceph",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.crush_device_class": "",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.encrypted": "0",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.osd_id": "2",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.type": "block",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:                "ceph.vdo": "0"
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            },
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "type": "block",
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:            "vg_name": "ceph_vg2"
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:        }
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]:    ]
Oct  7 11:04:34 np0005473739 vigilant_lederberg[437485]: }
Oct  7 11:04:34 np0005473739 systemd[1]: libpod-b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825.scope: Deactivated successfully.
Oct  7 11:04:34 np0005473739 podman[437494]: 2025-10-07 15:04:34.93931131 +0000 UTC m=+0.023285427 container died b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:04:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-94f8aabab54f4fc043a25334a6ee0bd8bc6438ee1890d48164f03a28e9843034-merged.mount: Deactivated successfully.
Oct  7 11:04:34 np0005473739 podman[437494]: 2025-10-07 15:04:34.997029528 +0000 UTC m=+0.081003625 container remove b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lederberg, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 11:04:35 np0005473739 systemd[1]: libpod-conmon-b596ec93d774b8a463f5abe191c9eb36e8c4965fdba7f3b6800fbbfa55a76825.scope: Deactivated successfully.
Oct  7 11:04:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3083: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:35 np0005473739 podman[437648]: 2025-10-07 15:04:35.608412748 +0000 UTC m=+0.042582188 container create ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 11:04:35 np0005473739 systemd[1]: Started libpod-conmon-ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7.scope.
Oct  7 11:04:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:04:35 np0005473739 podman[437648]: 2025-10-07 15:04:35.590875564 +0000 UTC m=+0.025044994 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:04:35 np0005473739 podman[437648]: 2025-10-07 15:04:35.694405714 +0000 UTC m=+0.128575154 container init ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 11:04:35 np0005473739 podman[437648]: 2025-10-07 15:04:35.700760272 +0000 UTC m=+0.134929682 container start ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 11:04:35 np0005473739 podman[437648]: 2025-10-07 15:04:35.704132882 +0000 UTC m=+0.138302322 container attach ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:04:35 np0005473739 eloquent_edison[437664]: 167 167
Oct  7 11:04:35 np0005473739 systemd[1]: libpod-ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7.scope: Deactivated successfully.
Oct  7 11:04:35 np0005473739 podman[437648]: 2025-10-07 15:04:35.709372 +0000 UTC m=+0.143541430 container died ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 11:04:35 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a4e46ddaa8900e981bd2d2368b1d39a598bb5ca18d8deccc8be1d87674be943b-merged.mount: Deactivated successfully.
Oct  7 11:04:35 np0005473739 podman[437648]: 2025-10-07 15:04:35.748383372 +0000 UTC m=+0.182552802 container remove ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eloquent_edison, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 11:04:35 np0005473739 systemd[1]: libpod-conmon-ee14f72cc2cd822baf102b109677c9e877742481b07191529fc8092ac9a9cba7.scope: Deactivated successfully.
Oct  7 11:04:35 np0005473739 podman[437689]: 2025-10-07 15:04:35.928334115 +0000 UTC m=+0.038062748 container create 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3)
Oct  7 11:04:35 np0005473739 systemd[1]: Started libpod-conmon-0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd.scope.
Oct  7 11:04:35 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:04:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e06079bfde7c21fbc41518bdcded0240e37412b458c919f78d9aa448c0d8204/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e06079bfde7c21fbc41518bdcded0240e37412b458c919f78d9aa448c0d8204/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e06079bfde7c21fbc41518bdcded0240e37412b458c919f78d9aa448c0d8204/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:35 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5e06079bfde7c21fbc41518bdcded0240e37412b458c919f78d9aa448c0d8204/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:04:36 np0005473739 podman[437689]: 2025-10-07 15:04:35.912518656 +0000 UTC m=+0.022247279 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:04:36 np0005473739 podman[437689]: 2025-10-07 15:04:36.010119349 +0000 UTC m=+0.119848002 container init 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct  7 11:04:36 np0005473739 podman[437689]: 2025-10-07 15:04:36.016453717 +0000 UTC m=+0.126182330 container start 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:04:36 np0005473739 podman[437689]: 2025-10-07 15:04:36.02301095 +0000 UTC m=+0.132739593 container attach 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 11:04:36 np0005473739 nova_compute[259550]: 2025-10-07 15:04:36.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]: {
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "osd_id": 2,
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "type": "bluestore"
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:    },
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "osd_id": 1,
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "type": "bluestore"
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:    },
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "osd_id": 0,
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:        "type": "bluestore"
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]:    }
Oct  7 11:04:37 np0005473739 compassionate_nightingale[437705]: }
Oct  7 11:04:37 np0005473739 systemd[1]: libpod-0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd.scope: Deactivated successfully.
Oct  7 11:04:37 np0005473739 systemd[1]: libpod-0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd.scope: Consumed 1.020s CPU time.
Oct  7 11:04:37 np0005473739 podman[437689]: 2025-10-07 15:04:37.033668307 +0000 UTC m=+1.143396920 container died 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 11:04:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5e06079bfde7c21fbc41518bdcded0240e37412b458c919f78d9aa448c0d8204-merged.mount: Deactivated successfully.
Oct  7 11:04:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3084: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:37 np0005473739 podman[437689]: 2025-10-07 15:04:37.084876863 +0000 UTC m=+1.194605466 container remove 0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=compassionate_nightingale, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:04:37 np0005473739 systemd[1]: libpod-conmon-0bc91a41748f80a5c97dee0560373e0005d67382abcfda9a6cae451b1048bdcd.scope: Deactivated successfully.
Oct  7 11:04:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:04:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:04:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f0bc7d41-d21e-47fb-bede-63235bccdf20 does not exist
Oct  7 11:04:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bfb93889-712f-4253-8cd0-e19b7b15d7c2 does not exist
Oct  7 11:04:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:04:38 np0005473739 nova_compute[259550]: 2025-10-07 15:04:38.444 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:38 np0005473739 nova_compute[259550]: 2025-10-07 15:04:38.851 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3085: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:40 np0005473739 podman[437799]: 2025-10-07 15:04:40.092818828 +0000 UTC m=+0.068789132 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:04:40 np0005473739 podman[437800]: 2025-10-07 15:04:40.119030762 +0000 UTC m=+0.094662567 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=iscsid, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 11:04:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3086: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3087: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:43 np0005473739 nova_compute[259550]: 2025-10-07 15:04:43.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:43 np0005473739 nova_compute[259550]: 2025-10-07 15:04:43.856 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3088: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3089: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:48 np0005473739 nova_compute[259550]: 2025-10-07 15:04:48.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:48 np0005473739 nova_compute[259550]: 2025-10-07 15:04:48.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3090: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3091: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:04:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:04:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3092: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:53 np0005473739 nova_compute[259550]: 2025-10-07 15:04:53.451 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:53 np0005473739 nova_compute[259550]: 2025-10-07 15:04:53.860 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3093: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:55 np0005473739 podman[437841]: 2025-10-07 15:04:55.101228405 +0000 UTC m=+0.080131942 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Oct  7 11:04:55 np0005473739 podman[437842]: 2025-10-07 15:04:55.147955722 +0000 UTC m=+0.120695046 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:04:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3094: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:04:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:04:58 np0005473739 nova_compute[259550]: 2025-10-07 15:04:58.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:58 np0005473739 nova_compute[259550]: 2025-10-07 15:04:58.863 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:04:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3095: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:05:00.104 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:05:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:05:00.104 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:05:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:05:00.104 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:05:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3096: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3097: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:03 np0005473739 nova_compute[259550]: 2025-10-07 15:05:03.455 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:03 np0005473739 nova_compute[259550]: 2025-10-07 15:05:03.865 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3098: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3099: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:08 np0005473739 nova_compute[259550]: 2025-10-07 15:05:08.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:08 np0005473739 nova_compute[259550]: 2025-10-07 15:05:08.867 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3100: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:11 np0005473739 podman[437884]: 2025-10-07 15:05:11.088701023 +0000 UTC m=+0.072429768 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  7 11:05:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3101: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:11 np0005473739 podman[437885]: 2025-10-07 15:05:11.108913297 +0000 UTC m=+0.089719175 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, container_name=iscsid)
Oct  7 11:05:11 np0005473739 nova_compute[259550]: 2025-10-07 15:05:11.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:05:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3102: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:13 np0005473739 nova_compute[259550]: 2025-10-07 15:05:13.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:13 np0005473739 nova_compute[259550]: 2025-10-07 15:05:13.869 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3103: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:15 np0005473739 nova_compute[259550]: 2025-10-07 15:05:15.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:05:16 np0005473739 nova_compute[259550]: 2025-10-07 15:05:16.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:05:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3104: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:17 np0005473739 nova_compute[259550]: 2025-10-07 15:05:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:05:18 np0005473739 nova_compute[259550]: 2025-10-07 15:05:18.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:18 np0005473739 nova_compute[259550]: 2025-10-07 15:05:18.873 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:18 np0005473739 nova_compute[259550]: 2025-10-07 15:05:18.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:05:18 np0005473739 nova_compute[259550]: 2025-10-07 15:05:18.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:05:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3105: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3106: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:05:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:05:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:05:22
Oct  7 11:05:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:05:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:05:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['backups', '.mgr', '.rgw.root', 'vms', 'cephfs.cephfs.data', 'images', 'volumes', 'cephfs.cephfs.meta', 'default.rgw.log', 'default.rgw.control', 'default.rgw.meta']
Oct  7 11:05:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:05:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3107: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:05:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:05:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:05:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:05:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:05:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:05:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:05:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:05:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:05:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:05:23 np0005473739 nova_compute[259550]: 2025-10-07 15:05:23.464 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:23 np0005473739 nova_compute[259550]: 2025-10-07 15:05:23.874 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:23 np0005473739 nova_compute[259550]: 2025-10-07 15:05:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:05:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3108: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:26 np0005473739 podman[437925]: 2025-10-07 15:05:26.060802799 +0000 UTC m=+0.051559526 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 11:05:26 np0005473739 podman[437926]: 2025-10-07 15:05:26.147096133 +0000 UTC m=+0.123188191 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 11:05:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3109: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:05:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:28 np0005473739 nova_compute[259550]: 2025-10-07 15:05:28.466 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:28 np0005473739 nova_compute[259550]: 2025-10-07 15:05:28.876 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3110: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.3 MiB/s rd, 85 B/s wr, 6 op/s
Oct  7 11:05:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3111: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  7 11:05:31 np0005473739 nova_compute[259550]: 2025-10-07 15:05:31.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.013 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:05:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:05:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/673196036' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.485 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.472s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.653 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.654 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3612MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.655 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.655 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.735 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.735 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:05:32 np0005473739 nova_compute[259550]: 2025-10-07 15:05:32.758 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:05:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:05:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346489447' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:05:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:05:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/346489447' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:05:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:05:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3112: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  7 11:05:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:05:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2114457313' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:05:33 np0005473739 nova_compute[259550]: 2025-10-07 15:05:33.222 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.464s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:05:33 np0005473739 nova_compute[259550]: 2025-10-07 15:05:33.229 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:05:33 np0005473739 nova_compute[259550]: 2025-10-07 15:05:33.246 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:05:33 np0005473739 nova_compute[259550]: 2025-10-07 15:05:33.248 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:05:33 np0005473739 nova_compute[259550]: 2025-10-07 15:05:33.249 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:05:33 np0005473739 nova_compute[259550]: 2025-10-07 15:05:33.468 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:33 np0005473739 nova_compute[259550]: 2025-10-07 15:05:33.877 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:34 np0005473739 nova_compute[259550]: 2025-10-07 15:05:34.250 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:05:34 np0005473739 nova_compute[259550]: 2025-10-07 15:05:34.251 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:05:34 np0005473739 nova_compute[259550]: 2025-10-07 15:05:34.251 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:05:34 np0005473739 nova_compute[259550]: 2025-10-07 15:05:34.295 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:05:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3113: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 MiB/s rd, 85 B/s wr, 6 op/s
Oct  7 11:05:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e297 do_prune osdmap full prune enabled
Oct  7 11:05:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e298 e298: 3 total, 3 up, 3 in
Oct  7 11:05:36 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e298: 3 total, 3 up, 3 in
Oct  7 11:05:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3115: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 2.0 MiB/s rd, 102 B/s wr, 8 op/s
Oct  7 11:05:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e298 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:05:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:05:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:05:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:05:38 np0005473739 nova_compute[259550]: 2025-10-07 15:05:38.472 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:05:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 46d7502f-71d3-411a-a9b0-b689569df79d does not exist
Oct  7 11:05:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a11df85f-6749-4c0c-a51c-d45b7d1922f8 does not exist
Oct  7 11:05:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bb97a492-c32f-4672-aaf4-b9488938bdeb does not exist
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:05:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:05:38 np0005473739 nova_compute[259550]: 2025-10-07 15:05:38.879 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:38 np0005473739 nova_compute[259550]: 2025-10-07 15:05:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:05:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3116: 305 pgs: 305 active+clean; 25 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 472 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Oct  7 11:05:39 np0005473739 podman[438405]: 2025-10-07 15:05:39.285608412 +0000 UTC m=+0.046139042 container create aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 11:05:39 np0005473739 systemd[1]: Started libpod-conmon-aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e.scope.
Oct  7 11:05:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:05:39 np0005473739 podman[438405]: 2025-10-07 15:05:39.265666005 +0000 UTC m=+0.026196655 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:05:39 np0005473739 podman[438405]: 2025-10-07 15:05:39.37921752 +0000 UTC m=+0.139748230 container init aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:05:39 np0005473739 podman[438405]: 2025-10-07 15:05:39.386148943 +0000 UTC m=+0.146679573 container start aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 11:05:39 np0005473739 podman[438405]: 2025-10-07 15:05:39.388884826 +0000 UTC m=+0.149415466 container attach aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:05:39 np0005473739 funny_boyd[438421]: 167 167
Oct  7 11:05:39 np0005473739 systemd[1]: libpod-aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e.scope: Deactivated successfully.
Oct  7 11:05:39 np0005473739 conmon[438421]: conmon aa25eafe6fe64df5d33e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e.scope/container/memory.events
Oct  7 11:05:39 np0005473739 podman[438405]: 2025-10-07 15:05:39.39509923 +0000 UTC m=+0.155629860 container died aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 11:05:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b25a29ff388276b1fc6bf0ceca75683ff340a89dbc3a8eb4ec1677d4fc2d1a5f-merged.mount: Deactivated successfully.
Oct  7 11:05:39 np0005473739 podman[438405]: 2025-10-07 15:05:39.439260829 +0000 UTC m=+0.199791499 container remove aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=funny_boyd, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 11:05:39 np0005473739 systemd[1]: libpod-conmon-aa25eafe6fe64df5d33eed1c215b9186f837b381ac8f1c55b9a1c00d8509069e.scope: Deactivated successfully.
Oct  7 11:05:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 11:05:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:05:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:05:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:05:39 np0005473739 podman[438444]: 2025-10-07 15:05:39.595625198 +0000 UTC m=+0.045928307 container create 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:05:39 np0005473739 systemd[1]: Started libpod-conmon-26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a.scope.
Oct  7 11:05:39 np0005473739 podman[438444]: 2025-10-07 15:05:39.577951249 +0000 UTC m=+0.028254368 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:05:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:05:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:39 np0005473739 podman[438444]: 2025-10-07 15:05:39.706745788 +0000 UTC m=+0.157048967 container init 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:05:39 np0005473739 podman[438444]: 2025-10-07 15:05:39.719365082 +0000 UTC m=+0.169668181 container start 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 11:05:39 np0005473739 podman[438444]: 2025-10-07 15:05:39.723649795 +0000 UTC m=+0.173952934 container attach 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:05:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e298 do_prune osdmap full prune enabled
Oct  7 11:05:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e299 e299: 3 total, 3 up, 3 in
Oct  7 11:05:40 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e299: 3 total, 3 up, 3 in
Oct  7 11:05:40 np0005473739 keen_lehmann[438460]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:05:40 np0005473739 keen_lehmann[438460]: --> relative data size: 1.0
Oct  7 11:05:40 np0005473739 keen_lehmann[438460]: --> All data devices are unavailable
Oct  7 11:05:40 np0005473739 systemd[1]: libpod-26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a.scope: Deactivated successfully.
Oct  7 11:05:40 np0005473739 systemd[1]: libpod-26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a.scope: Consumed 1.017s CPU time.
Oct  7 11:05:40 np0005473739 podman[438489]: 2025-10-07 15:05:40.842532716 +0000 UTC m=+0.029744568 container died 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:05:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-262fb01adafa5f6223da0e379b570a5ec154dfd844a2e72e988a0edf69782401-merged.mount: Deactivated successfully.
Oct  7 11:05:40 np0005473739 podman[438489]: 2025-10-07 15:05:40.908553203 +0000 UTC m=+0.095765025 container remove 26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=keen_lehmann, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:05:40 np0005473739 systemd[1]: libpod-conmon-26459fd1527ae955f8dc0998e4eeb245139dde0ab1da8bbc15af969e69e5455a.scope: Deactivated successfully.
Oct  7 11:05:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3118: 305 pgs: 305 active+clean; 21 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 32 op/s
Oct  7 11:05:41 np0005473739 podman[438554]: 2025-10-07 15:05:41.2201306 +0000 UTC m=+0.093125986 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 11:05:41 np0005473739 podman[438589]: 2025-10-07 15:05:41.254848049 +0000 UTC m=+0.076839235 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=iscsid)
Oct  7 11:05:41 np0005473739 podman[438683]: 2025-10-07 15:05:41.552481205 +0000 UTC m=+0.045445433 container create dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True)
Oct  7 11:05:41 np0005473739 systemd[1]: Started libpod-conmon-dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777.scope.
Oct  7 11:05:41 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:05:41 np0005473739 podman[438683]: 2025-10-07 15:05:41.62523231 +0000 UTC m=+0.118196568 container init dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 11:05:41 np0005473739 podman[438683]: 2025-10-07 15:05:41.532815135 +0000 UTC m=+0.025779353 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:05:41 np0005473739 podman[438683]: 2025-10-07 15:05:41.63654164 +0000 UTC m=+0.129505868 container start dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:05:41 np0005473739 podman[438683]: 2025-10-07 15:05:41.639695424 +0000 UTC m=+0.132659682 container attach dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:05:41 np0005473739 friendly_leavitt[438699]: 167 167
Oct  7 11:05:41 np0005473739 systemd[1]: libpod-dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777.scope: Deactivated successfully.
Oct  7 11:05:41 np0005473739 conmon[438699]: conmon dbb7dd7ce9ca2f56ca19 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777.scope/container/memory.events
Oct  7 11:05:41 np0005473739 podman[438683]: 2025-10-07 15:05:41.643448633 +0000 UTC m=+0.136412851 container died dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 11:05:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7a6d7cfff79f72f339f0532b8e13800d47afe7bf70cd7eb26530e76340f5da0f-merged.mount: Deactivated successfully.
Oct  7 11:05:41 np0005473739 podman[438683]: 2025-10-07 15:05:41.692675955 +0000 UTC m=+0.185640173 container remove dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_leavitt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:05:41 np0005473739 systemd[1]: libpod-conmon-dbb7dd7ce9ca2f56ca191da0a019e754e7625110a94d9ba39c1effed203c4777.scope: Deactivated successfully.
Oct  7 11:05:41 np0005473739 podman[438722]: 2025-10-07 15:05:41.890862601 +0000 UTC m=+0.064645572 container create ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0)
Oct  7 11:05:41 np0005473739 systemd[1]: Started libpod-conmon-ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c.scope.
Oct  7 11:05:41 np0005473739 podman[438722]: 2025-10-07 15:05:41.869357852 +0000 UTC m=+0.043140823 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:05:41 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:05:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e79a274ab3549be8be2397fe0c2398bf7da14d0671348118e10c25de59aaa/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e79a274ab3549be8be2397fe0c2398bf7da14d0671348118e10c25de59aaa/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e79a274ab3549be8be2397fe0c2398bf7da14d0671348118e10c25de59aaa/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:41 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e59e79a274ab3549be8be2397fe0c2398bf7da14d0671348118e10c25de59aaa/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:42 np0005473739 podman[438722]: 2025-10-07 15:05:42.01210701 +0000 UTC m=+0.185889981 container init ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:05:42 np0005473739 podman[438722]: 2025-10-07 15:05:42.025532035 +0000 UTC m=+0.199315006 container start ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:05:42 np0005473739 podman[438722]: 2025-10-07 15:05:42.029958492 +0000 UTC m=+0.203741463 container attach ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:05:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e299 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e299 do_prune osdmap full prune enabled
Oct  7 11:05:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e300 e300: 3 total, 3 up, 3 in
Oct  7 11:05:42 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e300: 3 total, 3 up, 3 in
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]: {
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:    "0": [
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:        {
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "devices": [
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "/dev/loop3"
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            ],
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_name": "ceph_lv0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_size": "21470642176",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "name": "ceph_lv0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "tags": {
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.cluster_name": "ceph",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.crush_device_class": "",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.encrypted": "0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.osd_id": "0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.type": "block",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.vdo": "0"
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            },
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "type": "block",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "vg_name": "ceph_vg0"
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:        }
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:    ],
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:    "1": [
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:        {
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "devices": [
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "/dev/loop4"
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            ],
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_name": "ceph_lv1",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_size": "21470642176",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "name": "ceph_lv1",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "tags": {
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.cluster_name": "ceph",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.crush_device_class": "",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.encrypted": "0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.osd_id": "1",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.type": "block",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.vdo": "0"
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            },
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "type": "block",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "vg_name": "ceph_vg1"
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:        }
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:    ],
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:    "2": [
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:        {
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "devices": [
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "/dev/loop5"
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            ],
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_name": "ceph_lv2",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_size": "21470642176",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "name": "ceph_lv2",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "tags": {
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.cluster_name": "ceph",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.crush_device_class": "",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.encrypted": "0",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.osd_id": "2",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.type": "block",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:                "ceph.vdo": "0"
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            },
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "type": "block",
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:            "vg_name": "ceph_vg2"
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:        }
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]:    ]
Oct  7 11:05:42 np0005473739 inspiring_wu[438738]: }
Oct  7 11:05:42 np0005473739 systemd[1]: libpod-ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c.scope: Deactivated successfully.
Oct  7 11:05:42 np0005473739 podman[438747]: 2025-10-07 15:05:42.912372985 +0000 UTC m=+0.040194345 container died ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 11:05:42 np0005473739 systemd[1]: var-lib-containers-storage-overlay-e59e79a274ab3549be8be2397fe0c2398bf7da14d0671348118e10c25de59aaa-merged.mount: Deactivated successfully.
Oct  7 11:05:42 np0005473739 podman[438747]: 2025-10-07 15:05:42.970150814 +0000 UTC m=+0.097972144 container remove ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_wu, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 11:05:42 np0005473739 systemd[1]: libpod-conmon-ed518bb65f177fdb65828bdf4b0c18bf4e7db55a8d68713b286bd511abb1730c.scope: Deactivated successfully.
Oct  7 11:05:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3120: 305 pgs: 305 active+clean; 21 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 29 KiB/s rd, 2.3 KiB/s wr, 40 op/s
Oct  7 11:05:43 np0005473739 nova_compute[259550]: 2025-10-07 15:05:43.475 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:43 np0005473739 podman[438902]: 2025-10-07 15:05:43.674875624 +0000 UTC m=+0.040037550 container create 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:05:43 np0005473739 systemd[1]: Started libpod-conmon-1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb.scope.
Oct  7 11:05:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:05:43 np0005473739 podman[438902]: 2025-10-07 15:05:43.660041042 +0000 UTC m=+0.025202878 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:05:43 np0005473739 podman[438902]: 2025-10-07 15:05:43.757726457 +0000 UTC m=+0.122888323 container init 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:05:43 np0005473739 podman[438902]: 2025-10-07 15:05:43.767323731 +0000 UTC m=+0.132485547 container start 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:05:43 np0005473739 podman[438902]: 2025-10-07 15:05:43.771005849 +0000 UTC m=+0.136167705 container attach 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:05:43 np0005473739 vigilant_lichterman[438918]: 167 167
Oct  7 11:05:43 np0005473739 systemd[1]: libpod-1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb.scope: Deactivated successfully.
Oct  7 11:05:43 np0005473739 podman[438902]: 2025-10-07 15:05:43.775376284 +0000 UTC m=+0.140538140 container died 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:05:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ee94a2b601bc250eba7f82ed24b76de0f15eaacc08b6036d2890259885d8f03b-merged.mount: Deactivated successfully.
Oct  7 11:05:43 np0005473739 podman[438902]: 2025-10-07 15:05:43.824123564 +0000 UTC m=+0.189285410 container remove 1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_lichterman, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:05:43 np0005473739 systemd[1]: libpod-conmon-1617375290b0cad581dd7b6310b29c3283ca36defea9724b3043def77aef8cdb.scope: Deactivated successfully.
Oct  7 11:05:43 np0005473739 nova_compute[259550]: 2025-10-07 15:05:43.881 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:44 np0005473739 podman[438944]: 2025-10-07 15:05:44.002531176 +0000 UTC m=+0.053020005 container create d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:05:44 np0005473739 systemd[1]: Started libpod-conmon-d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3.scope.
Oct  7 11:05:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:05:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae59aafbde3c01cec472b46319226d2ae495e720c174201b3f7620d5a752d7a/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae59aafbde3c01cec472b46319226d2ae495e720c174201b3f7620d5a752d7a/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae59aafbde3c01cec472b46319226d2ae495e720c174201b3f7620d5a752d7a/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3ae59aafbde3c01cec472b46319226d2ae495e720c174201b3f7620d5a752d7a/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:05:44 np0005473739 podman[438944]: 2025-10-07 15:05:44.076874123 +0000 UTC m=+0.127362982 container init d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 11:05:44 np0005473739 podman[438944]: 2025-10-07 15:05:43.987097577 +0000 UTC m=+0.037586426 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:05:44 np0005473739 podman[438944]: 2025-10-07 15:05:44.084369612 +0000 UTC m=+0.134858441 container start d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 11:05:44 np0005473739 podman[438944]: 2025-10-07 15:05:44.08882485 +0000 UTC m=+0.139313699 container attach d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 11:05:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3121: 305 pgs: 305 active+clean; 4.9 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 42 KiB/s rd, 2.6 KiB/s wr, 56 op/s
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]: {
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "osd_id": 2,
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "type": "bluestore"
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:    },
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "osd_id": 1,
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "type": "bluestore"
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:    },
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "osd_id": 0,
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:        "type": "bluestore"
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]:    }
Oct  7 11:05:45 np0005473739 pedantic_kare[438960]: }
Oct  7 11:05:45 np0005473739 systemd[1]: libpod-d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3.scope: Deactivated successfully.
Oct  7 11:05:45 np0005473739 podman[438944]: 2025-10-07 15:05:45.147370544 +0000 UTC m=+1.197859383 container died d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:05:45 np0005473739 systemd[1]: libpod-d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3.scope: Consumed 1.070s CPU time.
Oct  7 11:05:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3ae59aafbde3c01cec472b46319226d2ae495e720c174201b3f7620d5a752d7a-merged.mount: Deactivated successfully.
Oct  7 11:05:45 np0005473739 podman[438944]: 2025-10-07 15:05:45.201317181 +0000 UTC m=+1.251806010 container remove d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_kare, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:05:45 np0005473739 systemd[1]: libpod-conmon-d53d3cd20a5c43d4f0268739722ae777d42bbb46cd8501bc7425c10d8730d1c3.scope: Deactivated successfully.
Oct  7 11:05:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:05:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:05:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:05:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:05:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bc613f6a-b458-4b41-b0ca-01fef27065de does not exist
Oct  7 11:05:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0f395372-2b25-46e7-ae63-13881ad4ddc4 does not exist
Oct  7 11:05:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:05:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:05:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3122: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 2.1 KiB/s wr, 32 op/s
Oct  7 11:05:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:48 np0005473739 nova_compute[259550]: 2025-10-07 15:05:48.478 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:48 np0005473739 nova_compute[259550]: 2025-10-07 15:05:48.884 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3123: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 21 KiB/s rd, 1.5 KiB/s wr, 28 op/s
Oct  7 11:05:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3124: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct  7 11:05:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e300 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e300 do_prune osdmap full prune enabled
Oct  7 11:05:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e301 e301: 3 total, 3 up, 3 in
Oct  7 11:05:52 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e301: 3 total, 3 up, 3 in
Oct  7 11:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:05:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:05:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3126: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.3 KiB/s wr, 24 op/s
Oct  7 11:05:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e301 do_prune osdmap full prune enabled
Oct  7 11:05:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 e302: 3 total, 3 up, 3 in
Oct  7 11:05:53 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e302: 3 total, 3 up, 3 in
Oct  7 11:05:53 np0005473739 nova_compute[259550]: 2025-10-07 15:05:53.480 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:53 np0005473739 nova_compute[259550]: 2025-10-07 15:05:53.885 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3128: 305 pgs: 305 active+clean; 457 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 1.9 KiB/s wr, 18 op/s
Oct  7 11:05:57 np0005473739 podman[439057]: 2025-10-07 15:05:57.06926554 +0000 UTC m=+0.053783966 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true)
Oct  7 11:05:57 np0005473739 podman[439058]: 2025-10-07 15:05:57.095826159 +0000 UTC m=+0.080389446 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Oct  7 11:05:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3129: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct  7 11:05:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:05:58 np0005473739 nova_compute[259550]: 2025-10-07 15:05:58.482 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:58 np0005473739 nova_compute[259550]: 2025-10-07 15:05:58.886 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:05:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3130: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.0 KiB/s wr, 18 op/s
Oct  7 11:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:06:00.106 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:06:00.106 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:06:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:06:00.106 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:06:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3131: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 1.8 KiB/s wr, 16 op/s
Oct  7 11:06:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3132: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 10 KiB/s rd, 1.6 KiB/s wr, 14 op/s
Oct  7 11:06:03 np0005473739 nova_compute[259550]: 2025-10-07 15:06:03.486 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:03 np0005473739 nova_compute[259550]: 2025-10-07 15:06:03.887 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3133: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 8.8 KiB/s rd, 1.4 KiB/s wr, 12 op/s
Oct  7 11:06:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3134: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 85 B/s wr, 0 op/s
Oct  7 11:06:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:08 np0005473739 nova_compute[259550]: 2025-10-07 15:06:08.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:08 np0005473739 nova_compute[259550]: 2025-10-07 15:06:08.889 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3135: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3136: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:12 np0005473739 podman[439103]: 2025-10-07 15:06:12.080097939 +0000 UTC m=+0.060309947 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 11:06:12 np0005473739 podman[439104]: 2025-10-07 15:06:12.110507339 +0000 UTC m=+0.088296323 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  7 11:06:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:12 np0005473739 nova_compute[259550]: 2025-10-07 15:06:12.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:06:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3137: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:13 np0005473739 nova_compute[259550]: 2025-10-07 15:06:13.490 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:13 np0005473739 nova_compute[259550]: 2025-10-07 15:06:13.891 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3138: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3139: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:17 np0005473739 nova_compute[259550]: 2025-10-07 15:06:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:06:18 np0005473739 nova_compute[259550]: 2025-10-07 15:06:18.492 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:18 np0005473739 nova_compute[259550]: 2025-10-07 15:06:18.893 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:18 np0005473739 nova_compute[259550]: 2025-10-07 15:06:18.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:06:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3140: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:19 np0005473739 nova_compute[259550]: 2025-10-07 15:06:19.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:06:20 np0005473739 nova_compute[259550]: 2025-10-07 15:06:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:06:20 np0005473739 nova_compute[259550]: 2025-10-07 15:06:20.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:06:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3141: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:06:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:06:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:06:22
Oct  7 11:06:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:06:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:06:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'backups', 'cephfs.cephfs.data', 'images', 'default.rgw.meta', 'volumes', '.rgw.root', 'default.rgw.log', '.mgr', 'default.rgw.control', 'vms']
Oct  7 11:06:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:06:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3142: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:06:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:06:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:06:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:06:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:06:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:06:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:06:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:06:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:06:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:06:23 np0005473739 nova_compute[259550]: 2025-10-07 15:06:23.495 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:23 np0005473739 nova_compute[259550]: 2025-10-07 15:06:23.896 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:24 np0005473739 nova_compute[259550]: 2025-10-07 15:06:24.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:06:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3143: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3144: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:27 np0005473739 nova_compute[259550]: 2025-10-07 15:06:27.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:06:28 np0005473739 podman[439142]: 2025-10-07 15:06:28.088081649 +0000 UTC m=+0.077077398 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 11:06:28 np0005473739 podman[439141]: 2025-10-07 15:06:28.094096157 +0000 UTC m=+0.086786344 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  7 11:06:28 np0005473739 nova_compute[259550]: 2025-10-07 15:06:28.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:28 np0005473739 nova_compute[259550]: 2025-10-07 15:06:28.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3145: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3146: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:31 np0005473739 nova_compute[259550]: 2025-10-07 15:06:31.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.019 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.021 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.022 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:06:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:06:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3515539142' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.477 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.456s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.648 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.650 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3631MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.650 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.650 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.737 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.738 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:06:32 np0005473739 nova_compute[259550]: 2025-10-07 15:06:32.760 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:06:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:06:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2481322' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:06:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:06:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2481322' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:06:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:06:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3147: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:06:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4145620044' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:06:33 np0005473739 nova_compute[259550]: 2025-10-07 15:06:33.245 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.485s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:06:33 np0005473739 nova_compute[259550]: 2025-10-07 15:06:33.255 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:06:33 np0005473739 nova_compute[259550]: 2025-10-07 15:06:33.275 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:06:33 np0005473739 nova_compute[259550]: 2025-10-07 15:06:33.278 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:06:33 np0005473739 nova_compute[259550]: 2025-10-07 15:06:33.279 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.628s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:06:33 np0005473739 nova_compute[259550]: 2025-10-07 15:06:33.499 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:33 np0005473739 nova_compute[259550]: 2025-10-07 15:06:33.898 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3148: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:35 np0005473739 nova_compute[259550]: 2025-10-07 15:06:35.281 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:06:35 np0005473739 nova_compute[259550]: 2025-10-07 15:06:35.282 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:06:35 np0005473739 nova_compute[259550]: 2025-10-07 15:06:35.282 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:06:35 np0005473739 nova_compute[259550]: 2025-10-07 15:06:35.305 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:06:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3149: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:38 np0005473739 nova_compute[259550]: 2025-10-07 15:06:38.502 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:38 np0005473739 nova_compute[259550]: 2025-10-07 15:06:38.900 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3150: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:39 np0005473739 nova_compute[259550]: 2025-10-07 15:06:39.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:06:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3151: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:43 np0005473739 podman[439231]: 2025-10-07 15:06:43.103832871 +0000 UTC m=+0.090152742 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid)
Oct  7 11:06:43 np0005473739 podman[439230]: 2025-10-07 15:06:43.108044652 +0000 UTC m=+0.095827291 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 11:06:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3152: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:43 np0005473739 nova_compute[259550]: 2025-10-07 15:06:43.504 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:43 np0005473739 nova_compute[259550]: 2025-10-07 15:06:43.901 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3153: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:46 np0005473739 podman[439445]: 2025-10-07 15:06:46.226442933 +0000 UTC m=+0.069127809 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 11:06:46 np0005473739 podman[439445]: 2025-10-07 15:06:46.322585262 +0000 UTC m=+0.165270118 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #153. Immutable memtables: 0.
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.337133) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 93] Flushing memtable with next log file: 153
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606337207, "job": 93, "event": "flush_started", "num_memtables": 1, "num_entries": 1687, "num_deletes": 255, "total_data_size": 2725226, "memory_usage": 2765216, "flush_reason": "Manual Compaction"}
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 93] Level-0 flush table #154: started
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606356180, "cf_name": "default", "job": 93, "event": "table_file_creation", "file_number": 154, "file_size": 2654879, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 64121, "largest_seqno": 65807, "table_properties": {"data_size": 2646982, "index_size": 4775, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 2053, "raw_key_size": 16260, "raw_average_key_size": 20, "raw_value_size": 2631154, "raw_average_value_size": 3280, "num_data_blocks": 213, "num_entries": 802, "num_filter_entries": 802, "num_deletions": 255, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849432, "oldest_key_time": 1759849432, "file_creation_time": 1759849606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 154, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 93] Flush lasted 19104 microseconds, and 5741 cpu microseconds.
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.356247) [db/flush_job.cc:967] [default] [JOB 93] Level-0 flush table #154: 2654879 bytes OK
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.356266) [db/memtable_list.cc:519] [default] Level-0 commit table #154 started
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.357798) [db/memtable_list.cc:722] [default] Level-0 commit table #154: memtable #1 done
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.357812) EVENT_LOG_v1 {"time_micros": 1759849606357808, "job": 93, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.357829) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 93] Try to delete WAL files size 2717926, prev total WAL file size 2717926, number of live WAL files 2.
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000150.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.358552) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036323735' seq:72057594037927935, type:22 .. '7061786F730036353237' seq:0, type:0; will stop at (end)
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 94] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 93 Base level 0, inputs: [154(2592KB)], [152(8586KB)]
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606358580, "job": 94, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [154], "files_L6": [152], "score": -1, "input_data_size": 11447884, "oldest_snapshot_seqno": -1}
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 94] Generated table #155: 8428 keys, 9734643 bytes, temperature: kUnknown
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606425760, "cf_name": "default", "job": 94, "event": "table_file_creation", "file_number": 155, "file_size": 9734643, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9681578, "index_size": 30872, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21125, "raw_key_size": 220318, "raw_average_key_size": 26, "raw_value_size": 9534524, "raw_average_value_size": 1131, "num_data_blocks": 1195, "num_entries": 8428, "num_filter_entries": 8428, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849606, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 155, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.426060) [db/compaction/compaction_job.cc:1663] [default] [JOB 94] Compacted 1@0 + 1@6 files to L6 => 9734643 bytes
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.427371) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 170.2 rd, 144.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.5, 8.4 +0.0 blob) out(9.3 +0.0 blob), read-write-amplify(8.0) write-amplify(3.7) OK, records in: 8952, records dropped: 524 output_compression: NoCompression
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.427386) EVENT_LOG_v1 {"time_micros": 1759849606427379, "job": 94, "event": "compaction_finished", "compaction_time_micros": 67262, "compaction_time_cpu_micros": 23106, "output_level": 6, "num_output_files": 1, "total_output_size": 9734643, "num_input_records": 8952, "num_output_records": 8428, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000154.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606427827, "job": 94, "event": "table_file_deletion", "file_number": 154}
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000152.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849606429039, "job": 94, "event": "table_file_deletion", "file_number": 152}
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.358464) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.429084) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.429089) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.429091) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.429092) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:06:46 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:06:46.429094) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:06:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3154: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:06:47 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 5d8439fd-e66a-4346-8771-ffbdb6ed0ee4 does not exist
Oct  7 11:06:47 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 479c0b00-47cd-4d0a-b482-c197a85d20dc does not exist
Oct  7 11:06:47 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 8e7cf4b3-b131-42ac-bca8-cbca57cfba35 does not exist
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:06:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:06:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:06:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:06:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:06:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:06:48 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:06:48 np0005473739 nova_compute[259550]: 2025-10-07 15:06:48.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:48 np0005473739 podman[439874]: 2025-10-07 15:06:48.571843863 +0000 UTC m=+0.036690016 container create 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 11:06:48 np0005473739 systemd[1]: Started libpod-conmon-66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893.scope.
Oct  7 11:06:48 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:06:48 np0005473739 podman[439874]: 2025-10-07 15:06:48.630888757 +0000 UTC m=+0.095734930 container init 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 11:06:48 np0005473739 podman[439874]: 2025-10-07 15:06:48.637306305 +0000 UTC m=+0.102152448 container start 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:06:48 np0005473739 podman[439874]: 2025-10-07 15:06:48.640655053 +0000 UTC m=+0.105501216 container attach 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:06:48 np0005473739 jolly_bell[439890]: 167 167
Oct  7 11:06:48 np0005473739 systemd[1]: libpod-66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893.scope: Deactivated successfully.
Oct  7 11:06:48 np0005473739 conmon[439890]: conmon 66199bc5a8ad67828d5f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893.scope/container/memory.events
Oct  7 11:06:48 np0005473739 podman[439874]: 2025-10-07 15:06:48.644642368 +0000 UTC m=+0.109488521 container died 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 11:06:48 np0005473739 podman[439874]: 2025-10-07 15:06:48.557862345 +0000 UTC m=+0.022708508 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:06:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-92bae8950ca45e8ce285dfc85f83bc88a042971fed35475200f4673044ab7407-merged.mount: Deactivated successfully.
Oct  7 11:06:48 np0005473739 podman[439874]: 2025-10-07 15:06:48.687256379 +0000 UTC m=+0.152102542 container remove 66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_bell, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:06:48 np0005473739 systemd[1]: libpod-conmon-66199bc5a8ad67828d5f6f2c98cfede56f115e525d4fa867c09b586ffa035893.scope: Deactivated successfully.
Oct  7 11:06:48 np0005473739 podman[439915]: 2025-10-07 15:06:48.879426943 +0000 UTC m=+0.059935607 container create 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:06:48 np0005473739 nova_compute[259550]: 2025-10-07 15:06:48.904 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:48 np0005473739 systemd[1]: Started libpod-conmon-84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29.scope.
Oct  7 11:06:48 np0005473739 podman[439915]: 2025-10-07 15:06:48.849222919 +0000 UTC m=+0.029731633 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:06:48 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:06:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:48 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:48 np0005473739 podman[439915]: 2025-10-07 15:06:48.995220949 +0000 UTC m=+0.175729613 container init 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef)
Oct  7 11:06:49 np0005473739 podman[439915]: 2025-10-07 15:06:49.004605666 +0000 UTC m=+0.185114300 container start 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 11:06:49 np0005473739 podman[439915]: 2025-10-07 15:06:49.007966165 +0000 UTC m=+0.188474839 container attach 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 11:06:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3155: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:50 np0005473739 nostalgic_moore[439931]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:06:50 np0005473739 nostalgic_moore[439931]: --> relative data size: 1.0
Oct  7 11:06:50 np0005473739 nostalgic_moore[439931]: --> All data devices are unavailable
Oct  7 11:06:50 np0005473739 systemd[1]: libpod-84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29.scope: Deactivated successfully.
Oct  7 11:06:50 np0005473739 podman[439915]: 2025-10-07 15:06:50.051486451 +0000 UTC m=+1.231995075 container died 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:06:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a76ab1ac2e3ac15ab275bdc3bf63f6121d16782edbdc3daf05fd8a272d607ec2-merged.mount: Deactivated successfully.
Oct  7 11:06:50 np0005473739 podman[439915]: 2025-10-07 15:06:50.115821524 +0000 UTC m=+1.296330168 container remove 84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_moore, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 11:06:50 np0005473739 systemd[1]: libpod-conmon-84b4e5d458cd5dafe29d478d0448abdb1597826ec1e2e503430c3e0950a5ff29.scope: Deactivated successfully.
Oct  7 11:06:50 np0005473739 podman[440114]: 2025-10-07 15:06:50.80113774 +0000 UTC m=+0.083140449 container create 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:06:50 np0005473739 podman[440114]: 2025-10-07 15:06:50.757667615 +0000 UTC m=+0.039670404 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:06:50 np0005473739 systemd[1]: Started libpod-conmon-166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556.scope.
Oct  7 11:06:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:06:50 np0005473739 podman[440114]: 2025-10-07 15:06:50.896247301 +0000 UTC m=+0.178250020 container init 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 11:06:50 np0005473739 podman[440114]: 2025-10-07 15:06:50.906769378 +0000 UTC m=+0.188772127 container start 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 11:06:50 np0005473739 musing_swanson[440130]: 167 167
Oct  7 11:06:50 np0005473739 podman[440114]: 2025-10-07 15:06:50.910600629 +0000 UTC m=+0.192603328 container attach 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:06:50 np0005473739 systemd[1]: libpod-166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556.scope: Deactivated successfully.
Oct  7 11:06:50 np0005473739 podman[440114]: 2025-10-07 15:06:50.911319687 +0000 UTC m=+0.193322386 container died 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:06:50 np0005473739 systemd[1]: var-lib-containers-storage-overlay-38afbf8c424e37f1d5e6ad3ab313d36b133aaddd657bd22f58499952d49bb8f4-merged.mount: Deactivated successfully.
Oct  7 11:06:50 np0005473739 podman[440114]: 2025-10-07 15:06:50.95551571 +0000 UTC m=+0.237518399 container remove 166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_swanson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 11:06:50 np0005473739 systemd[1]: libpod-conmon-166ce60cce877edda75923ea1a37b3910d282f80366d3ea5aea11abfc45b0556.scope: Deactivated successfully.
Oct  7 11:06:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3156: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:51 np0005473739 podman[440152]: 2025-10-07 15:06:51.163812909 +0000 UTC m=+0.071173973 container create 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:06:51 np0005473739 systemd[1]: Started libpod-conmon-828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4.scope.
Oct  7 11:06:51 np0005473739 podman[440152]: 2025-10-07 15:06:51.134884318 +0000 UTC m=+0.042245482 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:06:51 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:06:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8578c3ffd08b49ae6ad4d22842fda5966d29e71a47080ea7e0e6165f3d4213d1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8578c3ffd08b49ae6ad4d22842fda5966d29e71a47080ea7e0e6165f3d4213d1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8578c3ffd08b49ae6ad4d22842fda5966d29e71a47080ea7e0e6165f3d4213d1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:51 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8578c3ffd08b49ae6ad4d22842fda5966d29e71a47080ea7e0e6165f3d4213d1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:51 np0005473739 podman[440152]: 2025-10-07 15:06:51.289182336 +0000 UTC m=+0.196543500 container init 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 11:06:51 np0005473739 podman[440152]: 2025-10-07 15:06:51.309161671 +0000 UTC m=+0.216522755 container start 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 11:06:51 np0005473739 podman[440152]: 2025-10-07 15:06:51.313123776 +0000 UTC m=+0.220484960 container attach 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]: {
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:    "0": [
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:        {
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "devices": [
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "/dev/loop3"
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            ],
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_name": "ceph_lv0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_size": "21470642176",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "name": "ceph_lv0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "tags": {
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.cluster_name": "ceph",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.crush_device_class": "",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.encrypted": "0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.osd_id": "0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.type": "block",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.vdo": "0"
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            },
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "type": "block",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "vg_name": "ceph_vg0"
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:        }
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:    ],
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:    "1": [
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:        {
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "devices": [
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "/dev/loop4"
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            ],
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_name": "ceph_lv1",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_size": "21470642176",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "name": "ceph_lv1",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "tags": {
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.cluster_name": "ceph",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.crush_device_class": "",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.encrypted": "0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.osd_id": "1",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.type": "block",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.vdo": "0"
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            },
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "type": "block",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "vg_name": "ceph_vg1"
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:        }
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:    ],
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:    "2": [
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:        {
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "devices": [
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "/dev/loop5"
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            ],
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_name": "ceph_lv2",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_size": "21470642176",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "name": "ceph_lv2",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "tags": {
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.cluster_name": "ceph",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.crush_device_class": "",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.encrypted": "0",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.osd_id": "2",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.type": "block",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:                "ceph.vdo": "0"
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            },
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "type": "block",
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:            "vg_name": "ceph_vg2"
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:        }
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]:    ]
Oct  7 11:06:52 np0005473739 gracious_rosalind[440169]: }
Oct  7 11:06:52 np0005473739 systemd[1]: libpod-828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4.scope: Deactivated successfully.
Oct  7 11:06:52 np0005473739 podman[440152]: 2025-10-07 15:06:52.123971473 +0000 UTC m=+1.031332587 container died 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 11:06:52 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8578c3ffd08b49ae6ad4d22842fda5966d29e71a47080ea7e0e6165f3d4213d1-merged.mount: Deactivated successfully.
Oct  7 11:06:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:52 np0005473739 podman[440152]: 2025-10-07 15:06:52.242285005 +0000 UTC m=+1.149646079 container remove 828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=gracious_rosalind, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 11:06:52 np0005473739 systemd[1]: libpod-conmon-828af43303450e5ce0c34c1d87855dcbd003481107a68c858df9c59bd3b85ae4.scope: Deactivated successfully.
Oct  7 11:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:06:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:06:52 np0005473739 podman[440333]: 2025-10-07 15:06:52.89714788 +0000 UTC m=+0.049962675 container create aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:06:52 np0005473739 systemd[1]: Started libpod-conmon-aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e.scope.
Oct  7 11:06:52 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:06:52 np0005473739 podman[440333]: 2025-10-07 15:06:52.877200665 +0000 UTC m=+0.030015560 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:06:52 np0005473739 podman[440333]: 2025-10-07 15:06:52.988443471 +0000 UTC m=+0.141258276 container init aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 11:06:53 np0005473739 podman[440333]: 2025-10-07 15:06:53.002063759 +0000 UTC m=+0.154878554 container start aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 11:06:53 np0005473739 podman[440333]: 2025-10-07 15:06:53.006513277 +0000 UTC m=+0.159328082 container attach aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default)
Oct  7 11:06:53 np0005473739 jolly_fermat[440349]: 167 167
Oct  7 11:06:53 np0005473739 systemd[1]: libpod-aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e.scope: Deactivated successfully.
Oct  7 11:06:53 np0005473739 podman[440333]: 2025-10-07 15:06:53.011091897 +0000 UTC m=+0.163906732 container died aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 11:06:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7b5fd7e72bb59963314ae7fcb74cc703511a2bbaa8eea063e8cf59e11df049e3-merged.mount: Deactivated successfully.
Oct  7 11:06:53 np0005473739 podman[440333]: 2025-10-07 15:06:53.058218957 +0000 UTC m=+0.211033752 container remove aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_fermat, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 11:06:53 np0005473739 systemd[1]: libpod-conmon-aa2400a56924d67799835181c1a39e3852e134fe0222101bce26bd043cc6684e.scope: Deactivated successfully.
Oct  7 11:06:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3157: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:53 np0005473739 podman[440374]: 2025-10-07 15:06:53.265079187 +0000 UTC m=+0.045113777 container create 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:06:53 np0005473739 systemd[1]: Started libpod-conmon-4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018.scope.
Oct  7 11:06:53 np0005473739 podman[440374]: 2025-10-07 15:06:53.245988455 +0000 UTC m=+0.026023035 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:06:53 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:06:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f48f54d84ba80bb0f102e8ba367396004022978a87ece73bea31af496f4cd6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f48f54d84ba80bb0f102e8ba367396004022978a87ece73bea31af496f4cd6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f48f54d84ba80bb0f102e8ba367396004022978a87ece73bea31af496f4cd6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:53 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11f48f54d84ba80bb0f102e8ba367396004022978a87ece73bea31af496f4cd6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:06:53 np0005473739 podman[440374]: 2025-10-07 15:06:53.368145078 +0000 UTC m=+0.148179668 container init 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:06:53 np0005473739 podman[440374]: 2025-10-07 15:06:53.375951163 +0000 UTC m=+0.155985733 container start 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 11:06:53 np0005473739 podman[440374]: 2025-10-07 15:06:53.379406064 +0000 UTC m=+0.159440624 container attach 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 11:06:53 np0005473739 nova_compute[259550]: 2025-10-07 15:06:53.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:53 np0005473739 nova_compute[259550]: 2025-10-07 15:06:53.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]: {
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "osd_id": 2,
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "type": "bluestore"
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:    },
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "osd_id": 1,
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "type": "bluestore"
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:    },
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "osd_id": 0,
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:        "type": "bluestore"
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]:    }
Oct  7 11:06:54 np0005473739 eager_rhodes[440390]: }
Oct  7 11:06:54 np0005473739 systemd[1]: libpod-4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018.scope: Deactivated successfully.
Oct  7 11:06:54 np0005473739 systemd[1]: libpod-4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018.scope: Consumed 1.235s CPU time.
Oct  7 11:06:54 np0005473739 podman[440374]: 2025-10-07 15:06:54.603571613 +0000 UTC m=+1.383606213 container died 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 11:06:54 np0005473739 systemd[1]: var-lib-containers-storage-overlay-11f48f54d84ba80bb0f102e8ba367396004022978a87ece73bea31af496f4cd6-merged.mount: Deactivated successfully.
Oct  7 11:06:54 np0005473739 podman[440374]: 2025-10-07 15:06:54.671988942 +0000 UTC m=+1.452023492 container remove 4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_rhodes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:06:54 np0005473739 systemd[1]: libpod-conmon-4fff5fd64c0236d11d0cc2087140e7146db401931a93f6e836c5504495d76018.scope: Deactivated successfully.
Oct  7 11:06:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:06:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:06:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:06:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:06:54 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ec4ae349-8bd1-4886-9ac8-924a49c1e991 does not exist
Oct  7 11:06:54 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev e122a2a6-a0c6-46ae-abaf-4b057918bf6e does not exist
Oct  7 11:06:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3158: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:55 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:06:55 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:06:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3159: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:06:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:06:58 np0005473739 nova_compute[259550]: 2025-10-07 15:06:58.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:58 np0005473739 nova_compute[259550]: 2025-10-07 15:06:58.909 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:06:59 np0005473739 podman[440488]: 2025-10-07 15:06:59.099110638 +0000 UTC m=+0.074939562 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 11:06:59 np0005473739 podman[440489]: 2025-10-07 15:06:59.145788545 +0000 UTC m=+0.125699286 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Oct  7 11:06:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3160: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:07:00.108 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:07:00.108 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:07:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:07:00.108 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:07:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3161: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3162: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:03 np0005473739 nova_compute[259550]: 2025-10-07 15:07:03.517 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:03 np0005473739 nova_compute[259550]: 2025-10-07 15:07:03.924 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3163: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3164: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:08 np0005473739 nova_compute[259550]: 2025-10-07 15:07:08.520 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:08 np0005473739 nova_compute[259550]: 2025-10-07 15:07:08.927 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3165: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3166: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:11 np0005473739 nova_compute[259550]: 2025-10-07 15:07:11.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:07:11 np0005473739 nova_compute[259550]: 2025-10-07 15:07:11.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 11:07:11 np0005473739 nova_compute[259550]: 2025-10-07 15:07:11.999 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 11:07:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:12 np0005473739 nova_compute[259550]: 2025-10-07 15:07:12.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:07:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3167: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:13 np0005473739 nova_compute[259550]: 2025-10-07 15:07:13.523 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:13 np0005473739 nova_compute[259550]: 2025-10-07 15:07:13.929 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:14 np0005473739 podman[440534]: 2025-10-07 15:07:14.085030242 +0000 UTC m=+0.071665986 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 11:07:14 np0005473739 podman[440535]: 2025-10-07 15:07:14.104987987 +0000 UTC m=+0.072856168 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.license=GPLv2, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:07:14 np0005473739 nova_compute[259550]: 2025-10-07 15:07:14.996 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:07:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3168: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3169: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:18 np0005473739 nova_compute[259550]: 2025-10-07 15:07:18.525 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:18 np0005473739 nova_compute[259550]: 2025-10-07 15:07:18.933 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:18 np0005473739 nova_compute[259550]: 2025-10-07 15:07:18.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:07:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3170: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:20 np0005473739 nova_compute[259550]: 2025-10-07 15:07:20.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:07:20 np0005473739 nova_compute[259550]: 2025-10-07 15:07:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:07:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3171: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:21 np0005473739 nova_compute[259550]: 2025-10-07 15:07:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:07:21 np0005473739 nova_compute[259550]: 2025-10-07 15:07:21.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:07:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:07:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:07:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:07:22
Oct  7 11:07:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:07:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:07:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', '.mgr', 'default.rgw.meta', 'vms', 'volumes', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'backups', '.rgw.root', 'images', 'default.rgw.log']
Oct  7 11:07:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:07:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3172: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:07:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:07:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:07:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:07:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:07:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:07:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:07:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:07:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:07:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:07:23 np0005473739 nova_compute[259550]: 2025-10-07 15:07:23.527 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:23 np0005473739 nova_compute[259550]: 2025-10-07 15:07:23.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:24 np0005473739 nova_compute[259550]: 2025-10-07 15:07:24.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:07:24 np0005473739 nova_compute[259550]: 2025-10-07 15:07:24.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:07:24 np0005473739 nova_compute[259550]: 2025-10-07 15:07:24.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 11:07:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3173: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3174: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:28 np0005473739 nova_compute[259550]: 2025-10-07 15:07:28.528 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:28 np0005473739 nova_compute[259550]: 2025-10-07 15:07:28.934 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3175: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:30 np0005473739 podman[440574]: 2025-10-07 15:07:30.083174914 +0000 UTC m=+0.077316154 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 11:07:30 np0005473739 podman[440575]: 2025-10-07 15:07:30.104812324 +0000 UTC m=+0.097862725 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  7 11:07:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3176: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:07:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/488099912' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:07:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:07:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/488099912' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 4.4513495474376506e-07 of space, bias 1.0, pg target 0.00013354048642312953 quantized to 32 (current 32)
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:07:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:07:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3177: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:33 np0005473739 nova_compute[259550]: 2025-10-07 15:07:33.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:33 np0005473739 nova_compute[259550]: 2025-10-07 15:07:33.936 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.017 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.079 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.079 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.080 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.080 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.080 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:07:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:07:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/941561486' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.507 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.427s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.702 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.703 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3608MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.704 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.704 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.767 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.767 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:07:34 np0005473739 nova_compute[259550]: 2025-10-07 15:07:34.806 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:07:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3178: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:07:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/464643896' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:07:35 np0005473739 nova_compute[259550]: 2025-10-07 15:07:35.276 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.469s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 11:07:35 np0005473739 nova_compute[259550]: 2025-10-07 15:07:35.285 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 11:07:35 np0005473739 nova_compute[259550]: 2025-10-07 15:07:35.313 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 11:07:35 np0005473739 nova_compute[259550]: 2025-10-07 15:07:35.316 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  7 11:07:35 np0005473739 nova_compute[259550]: 2025-10-07 15:07:35.316 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.612s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 11:07:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3179: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #156. Immutable memtables: 0.
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.186857) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 95] Flushing memtable with next log file: 156
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657187186, "job": 95, "event": "flush_started", "num_memtables": 1, "num_entries": 682, "num_deletes": 258, "total_data_size": 814183, "memory_usage": 828024, "flush_reason": "Manual Compaction"}
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 95] Level-0 flush table #157: started
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657200492, "cf_name": "default", "job": 95, "event": "table_file_creation", "file_number": 157, "file_size": 795974, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 65808, "largest_seqno": 66489, "table_properties": {"data_size": 792393, "index_size": 1424, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1093, "raw_key_size": 8034, "raw_average_key_size": 18, "raw_value_size": 785147, "raw_average_value_size": 1838, "num_data_blocks": 63, "num_entries": 427, "num_filter_entries": 427, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849607, "oldest_key_time": 1759849607, "file_creation_time": 1759849657, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 157, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 95] Flush lasted 13713 microseconds, and 4373 cpu microseconds.
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.200562) [db/flush_job.cc:967] [default] [JOB 95] Level-0 flush table #157: 795974 bytes OK
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.200597) [db/memtable_list.cc:519] [default] Level-0 commit table #157 started
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.202613) [db/memtable_list.cc:722] [default] Level-0 commit table #157: memtable #1 done
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.202634) EVENT_LOG_v1 {"time_micros": 1759849657202626, "job": 95, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.202661) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 95] Try to delete WAL files size 810567, prev total WAL file size 810567, number of live WAL files 2.
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000153.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.203340) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0032373637' seq:72057594037927935, type:22 .. '6C6F676D0033303231' seq:0, type:0; will stop at (end)
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 96] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 95 Base level 0, inputs: [157(777KB)], [155(9506KB)]
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657203386, "job": 96, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [157], "files_L6": [155], "score": -1, "input_data_size": 10530617, "oldest_snapshot_seqno": -1}
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 96] Generated table #158: 8327 keys, 10414740 bytes, temperature: kUnknown
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657274058, "cf_name": "default", "job": 96, "event": "table_file_creation", "file_number": 158, "file_size": 10414740, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10361033, "index_size": 31770, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 20869, "raw_key_size": 219140, "raw_average_key_size": 26, "raw_value_size": 10214364, "raw_average_value_size": 1226, "num_data_blocks": 1232, "num_entries": 8327, "num_filter_entries": 8327, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849657, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 158, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.276156) [db/compaction/compaction_job.cc:1663] [default] [JOB 96] Compacted 1@0 + 1@6 files to L6 => 10414740 bytes
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.278368) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 145.7 rd, 144.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.3 +0.0 blob) out(9.9 +0.0 blob), read-write-amplify(26.3) write-amplify(13.1) OK, records in: 8855, records dropped: 528 output_compression: NoCompression
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.278399) EVENT_LOG_v1 {"time_micros": 1759849657278388, "job": 96, "event": "compaction_finished", "compaction_time_micros": 72253, "compaction_time_cpu_micros": 34834, "output_level": 6, "num_output_files": 1, "total_output_size": 10414740, "num_input_records": 8855, "num_output_records": 8327, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000157.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657279217, "job": 96, "event": "table_file_deletion", "file_number": 157}
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000155.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849657281388, "job": 96, "event": "table_file_deletion", "file_number": 155}
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.203244) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.281485) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.281491) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.281493) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.281495) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:07:37 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:07:37.281496) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:07:37 np0005473739 nova_compute[259550]: 2025-10-07 15:07:37.283 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 11:07:37 np0005473739 nova_compute[259550]: 2025-10-07 15:07:37.283 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  7 11:07:37 np0005473739 nova_compute[259550]: 2025-10-07 15:07:37.284 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  7 11:07:37 np0005473739 nova_compute[259550]: 2025-10-07 15:07:37.303 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  7 11:07:38 np0005473739 nova_compute[259550]: 2025-10-07 15:07:38.532 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:07:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e302 do_prune osdmap full prune enabled
Oct  7 11:07:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e303 e303: 3 total, 3 up, 3 in
Oct  7 11:07:38 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e303: 3 total, 3 up, 3 in
Oct  7 11:07:39 np0005473739 nova_compute[259550]: 2025-10-07 15:07:39.102 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:07:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3181: 305 pgs: 305 active+clean; 458 KiB data, 1000 MiB used, 59 GiB / 60 GiB avail; 2.8 KiB/s rd, 1.1 KiB/s wr, 5 op/s
Oct  7 11:07:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e303 do_prune osdmap full prune enabled
Oct  7 11:07:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 e304: 3 total, 3 up, 3 in
Oct  7 11:07:40 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e304: 3 total, 3 up, 3 in
Oct  7 11:07:40 np0005473739 nova_compute[259550]: 2025-10-07 15:07:40.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 11:07:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3183: 305 pgs: 305 active+clean; 21 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 MiB/s wr, 28 op/s
Oct  7 11:07:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3184: 305 pgs: 305 active+clean; 21 MiB data, 1016 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 2.6 MiB/s wr, 28 op/s
Oct  7 11:07:43 np0005473739 nova_compute[259550]: 2025-10-07 15:07:43.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:07:44 np0005473739 nova_compute[259550]: 2025-10-07 15:07:44.103 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:07:45 np0005473739 podman[440665]: 2025-10-07 15:07:45.070721775 +0000 UTC m=+0.060777630 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, config_id=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  7 11:07:45 np0005473739 podman[440666]: 2025-10-07 15:07:45.075628333 +0000 UTC m=+0.059979788 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=iscsid)
Oct  7 11:07:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3185: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 5.1 MiB/s wr, 46 op/s
Oct  7 11:07:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3186: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 4.9 MiB/s wr, 39 op/s
Oct  7 11:07:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:48 np0005473739 nova_compute[259550]: 2025-10-07 15:07:48.537 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:07:49 np0005473739 nova_compute[259550]: 2025-10-07 15:07:49.105 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:07:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3187: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 4.1 MiB/s wr, 32 op/s
Oct  7 11:07:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3188: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 2.0 MiB/s wr, 15 op/s
Oct  7 11:07:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:07:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:07:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3189: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.7 MiB/s wr, 13 op/s
Oct  7 11:07:53 np0005473739 nova_compute[259550]: 2025-10-07 15:07:53.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:07:54 np0005473739 nova_compute[259550]: 2025-10-07 15:07:54.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:07:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3190: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 9.7 KiB/s rd, 1.7 MiB/s wr, 13 op/s
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:07:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0eaf2512-4bef-4194-8df1-7f3ae61dcfb2 does not exist
Oct  7 11:07:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3a9917bf-047e-4804-8616-0a9262cf8f86 does not exist
Oct  7 11:07:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 52b49856-9431-4dd1-b9e9-f23aa6eb28f2 does not exist
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:07:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:07:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:07:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:07:56 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:07:56 np0005473739 podman[440975]: 2025-10-07 15:07:56.54126996 +0000 UTC m=+0.056243230 container create 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:07:56 np0005473739 systemd[1]: Started libpod-conmon-0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852.scope.
Oct  7 11:07:56 np0005473739 podman[440975]: 2025-10-07 15:07:56.516376395 +0000 UTC m=+0.031349665 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:07:56 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:07:56 np0005473739 podman[440975]: 2025-10-07 15:07:56.652612278 +0000 UTC m=+0.167585538 container init 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 11:07:56 np0005473739 podman[440975]: 2025-10-07 15:07:56.664967353 +0000 UTC m=+0.179940593 container start 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 11:07:56 np0005473739 podman[440975]: 2025-10-07 15:07:56.668685431 +0000 UTC m=+0.183658691 container attach 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2)
Oct  7 11:07:56 np0005473739 jovial_rubin[440992]: 167 167
Oct  7 11:07:56 np0005473739 systemd[1]: libpod-0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852.scope: Deactivated successfully.
Oct  7 11:07:56 np0005473739 podman[440975]: 2025-10-07 15:07:56.674081413 +0000 UTC m=+0.189054683 container died 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:07:56 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4b603a179f5bc4bb342d7c20c74f5a3a3d3941d6cbb4b2b875c7fb372037a6af-merged.mount: Deactivated successfully.
Oct  7 11:07:56 np0005473739 podman[440975]: 2025-10-07 15:07:56.73820267 +0000 UTC m=+0.253175910 container remove 0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_rubin, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 11:07:56 np0005473739 systemd[1]: libpod-conmon-0d71de39b09bfda6a0205061dfa897a0129ff075d06134d9d35cce18d250b852.scope: Deactivated successfully.
Oct  7 11:07:56 np0005473739 podman[441014]: 2025-10-07 15:07:56.945706408 +0000 UTC m=+0.051296511 container create 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:07:56 np0005473739 systemd[1]: Started libpod-conmon-73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491.scope.
Oct  7 11:07:57 np0005473739 podman[441014]: 2025-10-07 15:07:56.925491906 +0000 UTC m=+0.031081999 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:07:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:07:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:07:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:07:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:07:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:07:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:07:57 np0005473739 podman[441014]: 2025-10-07 15:07:57.048822689 +0000 UTC m=+0.154412782 container init 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:07:57 np0005473739 podman[441014]: 2025-10-07 15:07:57.057069926 +0000 UTC m=+0.162659999 container start 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:07:57 np0005473739 podman[441014]: 2025-10-07 15:07:57.064177023 +0000 UTC m=+0.169767116 container attach 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3)
Oct  7 11:07:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3191: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 341 B/s rd, 170 B/s wr, 0 op/s
Oct  7 11:07:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:07:58 np0005473739 brave_ishizaka[441031]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:07:58 np0005473739 brave_ishizaka[441031]: --> relative data size: 1.0
Oct  7 11:07:58 np0005473739 brave_ishizaka[441031]: --> All data devices are unavailable
Oct  7 11:07:58 np0005473739 systemd[1]: libpod-73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491.scope: Deactivated successfully.
Oct  7 11:07:58 np0005473739 systemd[1]: libpod-73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491.scope: Consumed 1.017s CPU time.
Oct  7 11:07:58 np0005473739 conmon[441031]: conmon 73b024d4b058c52ab577 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491.scope/container/memory.events
Oct  7 11:07:58 np0005473739 podman[441014]: 2025-10-07 15:07:58.121815752 +0000 UTC m=+1.227405845 container died 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True)
Oct  7 11:07:58 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3e36d90b695de25aa4ff967039ebfed2b25fd2e2c1a1a46597babb1cff3cde6f-merged.mount: Deactivated successfully.
Oct  7 11:07:58 np0005473739 podman[441014]: 2025-10-07 15:07:58.225418668 +0000 UTC m=+1.331008731 container remove 73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_ishizaka, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:07:58 np0005473739 systemd[1]: libpod-conmon-73b024d4b058c52ab577efa61f930e9279f6fe2f2346fde673c65793ac9a3491.scope: Deactivated successfully.
Oct  7 11:07:58 np0005473739 nova_compute[259550]: 2025-10-07 15:07:58.541 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:58 np0005473739 podman[441213]: 2025-10-07 15:07:58.946170825 +0000 UTC m=+0.048052345 container create 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3)
Oct  7 11:07:58 np0005473739 systemd[1]: Started libpod-conmon-52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09.scope.
Oct  7 11:07:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:07:59 np0005473739 podman[441213]: 2025-10-07 15:07:58.926100637 +0000 UTC m=+0.027982067 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:07:59 np0005473739 podman[441213]: 2025-10-07 15:07:59.027993877 +0000 UTC m=+0.129875297 container init 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 11:07:59 np0005473739 podman[441213]: 2025-10-07 15:07:59.034688083 +0000 UTC m=+0.136569503 container start 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 11:07:59 np0005473739 podman[441213]: 2025-10-07 15:07:59.038513454 +0000 UTC m=+0.140394894 container attach 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:07:59 np0005473739 wizardly_heyrovsky[441230]: 167 167
Oct  7 11:07:59 np0005473739 systemd[1]: libpod-52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09.scope: Deactivated successfully.
Oct  7 11:07:59 np0005473739 conmon[441230]: conmon 52e6da90c0cb65cb7ae9 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09.scope/container/memory.events
Oct  7 11:07:59 np0005473739 podman[441213]: 2025-10-07 15:07:59.043989277 +0000 UTC m=+0.145870687 container died 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:07:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-53f89e85f8aad3364de0da29d9917f699bce77577f3fb6c12554790decdcee5b-merged.mount: Deactivated successfully.
Oct  7 11:07:59 np0005473739 podman[441213]: 2025-10-07 15:07:59.078995108 +0000 UTC m=+0.180876518 container remove 52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_heyrovsky, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:07:59 np0005473739 systemd[1]: libpod-conmon-52e6da90c0cb65cb7ae93658cda41b177a607ff6c230832315c5492ea600fa09.scope: Deactivated successfully.
Oct  7 11:07:59 np0005473739 nova_compute[259550]: 2025-10-07 15:07:59.108 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:07:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3192: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:07:59 np0005473739 podman[441254]: 2025-10-07 15:07:59.248422305 +0000 UTC m=+0.045598720 container create 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 11:07:59 np0005473739 systemd[1]: Started libpod-conmon-8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490.scope.
Oct  7 11:07:59 np0005473739 podman[441254]: 2025-10-07 15:07:59.228799078 +0000 UTC m=+0.025975503 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:07:59 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:07:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed7266e51ffbfcc438828762f1f6bc2af71b98803f041ae694e1b4f41643f39/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:07:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed7266e51ffbfcc438828762f1f6bc2af71b98803f041ae694e1b4f41643f39/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:07:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed7266e51ffbfcc438828762f1f6bc2af71b98803f041ae694e1b4f41643f39/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:07:59 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1ed7266e51ffbfcc438828762f1f6bc2af71b98803f041ae694e1b4f41643f39/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:07:59 np0005473739 podman[441254]: 2025-10-07 15:07:59.350481009 +0000 UTC m=+0.147657434 container init 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 11:07:59 np0005473739 podman[441254]: 2025-10-07 15:07:59.359061655 +0000 UTC m=+0.156238070 container start 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True)
Oct  7 11:07:59 np0005473739 podman[441254]: 2025-10-07 15:07:59.363492122 +0000 UTC m=+0.160668537 container attach 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:08:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:08:00.109 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:08:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:08:00.111 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:08:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:08:00.111 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]: {
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:    "0": [
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:        {
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "devices": [
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "/dev/loop3"
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            ],
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_name": "ceph_lv0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_size": "21470642176",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "name": "ceph_lv0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "tags": {
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.cluster_name": "ceph",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.crush_device_class": "",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.encrypted": "0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.osd_id": "0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.type": "block",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.vdo": "0"
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            },
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "type": "block",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "vg_name": "ceph_vg0"
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:        }
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:    ],
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:    "1": [
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:        {
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "devices": [
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "/dev/loop4"
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            ],
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_name": "ceph_lv1",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_size": "21470642176",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "name": "ceph_lv1",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "tags": {
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.cluster_name": "ceph",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.crush_device_class": "",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.encrypted": "0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.osd_id": "1",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.type": "block",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.vdo": "0"
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            },
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "type": "block",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "vg_name": "ceph_vg1"
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:        }
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:    ],
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:    "2": [
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:        {
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "devices": [
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "/dev/loop5"
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            ],
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_name": "ceph_lv2",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_size": "21470642176",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "name": "ceph_lv2",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "tags": {
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.cluster_name": "ceph",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.crush_device_class": "",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.encrypted": "0",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.osd_id": "2",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.type": "block",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:                "ceph.vdo": "0"
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            },
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "type": "block",
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:            "vg_name": "ceph_vg2"
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:        }
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]:    ]
Oct  7 11:08:00 np0005473739 relaxed_haibt[441271]: }
Oct  7 11:08:00 np0005473739 systemd[1]: libpod-8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490.scope: Deactivated successfully.
Oct  7 11:08:00 np0005473739 podman[441254]: 2025-10-07 15:08:00.201406941 +0000 UTC m=+0.998583356 container died 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, OSD_FLAVOR=default, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:08:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1ed7266e51ffbfcc438828762f1f6bc2af71b98803f041ae694e1b4f41643f39-merged.mount: Deactivated successfully.
Oct  7 11:08:00 np0005473739 podman[441254]: 2025-10-07 15:08:00.264269584 +0000 UTC m=+1.061445999 container remove 8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=relaxed_haibt, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:08:00 np0005473739 systemd[1]: libpod-conmon-8d30b25af7ba3f25255abfa067d9bebab9e67eeb921c998e953dadee7016c490.scope: Deactivated successfully.
Oct  7 11:08:00 np0005473739 podman[441280]: 2025-10-07 15:08:00.322794994 +0000 UTC m=+0.083514578 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 11:08:00 np0005473739 podman[441288]: 2025-10-07 15:08:00.360772123 +0000 UTC m=+0.115809478 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 11:08:01 np0005473739 podman[441476]: 2025-10-07 15:08:01.048711367 +0000 UTC m=+0.049725439 container create a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 11:08:01 np0005473739 systemd[1]: Started libpod-conmon-a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749.scope.
Oct  7 11:08:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:08:01 np0005473739 podman[441476]: 2025-10-07 15:08:01.021620985 +0000 UTC m=+0.022635087 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:08:01 np0005473739 podman[441476]: 2025-10-07 15:08:01.131225757 +0000 UTC m=+0.132239849 container init a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 11:08:01 np0005473739 podman[441476]: 2025-10-07 15:08:01.141841187 +0000 UTC m=+0.142855259 container start a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 11:08:01 np0005473739 podman[441476]: 2025-10-07 15:08:01.146423147 +0000 UTC m=+0.147437249 container attach a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 11:08:01 np0005473739 systemd[1]: libpod-a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749.scope: Deactivated successfully.
Oct  7 11:08:01 np0005473739 xenodochial_bhaskara[441492]: 167 167
Oct  7 11:08:01 np0005473739 podman[441476]: 2025-10-07 15:08:01.151776348 +0000 UTC m=+0.152790430 container died a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 11:08:01 np0005473739 conmon[441492]: conmon a8fa7905b5c58222d891 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749.scope/container/memory.events
Oct  7 11:08:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3193: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:01 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2fc6fa78a8eb060bb2cc1625789a631abbdeed43c33431f64c519a268d90dc8d-merged.mount: Deactivated successfully.
Oct  7 11:08:01 np0005473739 podman[441476]: 2025-10-07 15:08:01.199913824 +0000 UTC m=+0.200927896 container remove a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=xenodochial_bhaskara, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 11:08:01 np0005473739 systemd[1]: libpod-conmon-a8fa7905b5c58222d891e2109782f2248a5926b27f4cc387c068684ee2442749.scope: Deactivated successfully.
Oct  7 11:08:01 np0005473739 nova_compute[259550]: 2025-10-07 15:08:01.344 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:08:01.347 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=62, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=61) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 11:08:01 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:08:01.348 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 11:08:01 np0005473739 podman[441516]: 2025-10-07 15:08:01.427662815 +0000 UTC m=+0.063156273 container create b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 11:08:01 np0005473739 systemd[1]: Started libpod-conmon-b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334.scope.
Oct  7 11:08:01 np0005473739 podman[441516]: 2025-10-07 15:08:01.402899213 +0000 UTC m=+0.038392751 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:08:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:08:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7f0755967d88704fcc4ffc2a28bd96981114351fcbc34ffc4356986dace944/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:08:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7f0755967d88704fcc4ffc2a28bd96981114351fcbc34ffc4356986dace944/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:08:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7f0755967d88704fcc4ffc2a28bd96981114351fcbc34ffc4356986dace944/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:08:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9d7f0755967d88704fcc4ffc2a28bd96981114351fcbc34ffc4356986dace944/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:08:01 np0005473739 podman[441516]: 2025-10-07 15:08:01.530295334 +0000 UTC m=+0.165788812 container init b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:08:01 np0005473739 podman[441516]: 2025-10-07 15:08:01.544427545 +0000 UTC m=+0.179921003 container start b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:08:01 np0005473739 podman[441516]: 2025-10-07 15:08:01.548319388 +0000 UTC m=+0.183812956 container attach b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:08:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]: {
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "osd_id": 2,
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "type": "bluestore"
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:    },
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "osd_id": 1,
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "type": "bluestore"
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:    },
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "osd_id": 0,
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:        "type": "bluestore"
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]:    }
Oct  7 11:08:02 np0005473739 sharp_bohr[441533]: }
Oct  7 11:08:02 np0005473739 systemd[1]: libpod-b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334.scope: Deactivated successfully.
Oct  7 11:08:02 np0005473739 podman[441516]: 2025-10-07 15:08:02.524807882 +0000 UTC m=+1.160301350 container died b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:08:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9d7f0755967d88704fcc4ffc2a28bd96981114351fcbc34ffc4356986dace944-merged.mount: Deactivated successfully.
Oct  7 11:08:02 np0005473739 podman[441516]: 2025-10-07 15:08:02.624176305 +0000 UTC m=+1.259669793 container remove b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sharp_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2)
Oct  7 11:08:02 np0005473739 systemd[1]: libpod-conmon-b064db35199cf57ec6ec25128e93a999e0d7430d42f9c638e70e26276acb5334.scope: Deactivated successfully.
Oct  7 11:08:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:08:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:08:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:08:02 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:08:02 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3eca058d-050f-44a1-ae4f-f84c32f57365 does not exist
Oct  7 11:08:02 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b54f8a1d-7a3f-4f1b-85b0-5c7c35bf32ef does not exist
Oct  7 11:08:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3194: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:03 np0005473739 nova_compute[259550]: 2025-10-07 15:08:03.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:03 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:08:03 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:08:04 np0005473739 nova_compute[259550]: 2025-10-07 15:08:04.110 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3195: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3196: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:07 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:08:07.350 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '62'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:08:08 np0005473739 nova_compute[259550]: 2025-10-07 15:08:08.544 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e304 do_prune osdmap full prune enabled
Oct  7 11:08:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e305 e305: 3 total, 3 up, 3 in
Oct  7 11:08:08 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e305: 3 total, 3 up, 3 in
Oct  7 11:08:09 np0005473739 nova_compute[259550]: 2025-10-07 15:08:09.112 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3198: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct  7 11:08:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3199: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 921 B/s wr, 9 op/s
Oct  7 11:08:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3200: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 7.0 KiB/s rd, 921 B/s wr, 9 op/s
Oct  7 11:08:13 np0005473739 nova_compute[259550]: 2025-10-07 15:08:13.548 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:14 np0005473739 nova_compute[259550]: 2025-10-07 15:08:14.114 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:14 np0005473739 nova_compute[259550]: 2025-10-07 15:08:14.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:08:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3201: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  7 11:08:16 np0005473739 podman[441633]: 2025-10-07 15:08:16.088092999 +0000 UTC m=+0.069489019 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Oct  7 11:08:16 np0005473739 podman[441632]: 2025-10-07 15:08:16.08849745 +0000 UTC m=+0.070032053 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd)
Oct  7 11:08:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3202: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 24 op/s
Oct  7 11:08:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e305 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e305 do_prune osdmap full prune enabled
Oct  7 11:08:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 e306: 3 total, 3 up, 3 in
Oct  7 11:08:17 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e306: 3 total, 3 up, 3 in
Oct  7 11:08:18 np0005473739 nova_compute[259550]: 2025-10-07 15:08:18.550 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:19 np0005473739 nova_compute[259550]: 2025-10-07 15:08:19.117 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3204: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct  7 11:08:20 np0005473739 nova_compute[259550]: 2025-10-07 15:08:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:08:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3205: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 15 op/s
Oct  7 11:08:21 np0005473739 nova_compute[259550]: 2025-10-07 15:08:21.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:08:21 np0005473739 nova_compute[259550]: 2025-10-07 15:08:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:08:21 np0005473739 nova_compute[259550]: 2025-10-07 15:08:21.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:08:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:08:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:08:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:08:22
Oct  7 11:08:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:08:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:08:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', 'images', 'vms', 'volumes', 'default.rgw.control', '.mgr', 'backups', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data']
Oct  7 11:08:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:08:22 np0005473739 nova_compute[259550]: 2025-10-07 15:08:22.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:08:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3206: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 511 B/s wr, 15 op/s
Oct  7 11:08:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:08:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:08:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:08:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:08:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:08:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:08:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:08:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:08:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:08:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:08:23 np0005473739 nova_compute[259550]: 2025-10-07 15:08:23.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:24 np0005473739 nova_compute[259550]: 2025-10-07 15:08:24.118 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3207: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:25 np0005473739 nova_compute[259550]: 2025-10-07 15:08:25.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:08:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3208: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:28 np0005473739 nova_compute[259550]: 2025-10-07 15:08:28.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:28 np0005473739 nova_compute[259550]: 2025-10-07 15:08:28.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:08:29 np0005473739 nova_compute[259550]: 2025-10-07 15:08:29.121 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3209: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:31 np0005473739 podman[441671]: 2025-10-07 15:08:31.098401567 +0000 UTC m=+0.081928655 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible)
Oct  7 11:08:31 np0005473739 podman[441672]: 2025-10-07 15:08:31.13118276 +0000 UTC m=+0.115191012 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 11:08:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3210: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:08:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1335946709' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:08:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:08:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1335946709' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:08:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:08:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3211: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:33 np0005473739 nova_compute[259550]: 2025-10-07 15:08:33.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:34 np0005473739 nova_compute[259550]: 2025-10-07 15:08:34.122 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3212: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:35 np0005473739 nova_compute[259550]: 2025-10-07 15:08:35.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.014 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.015 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.016 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.016 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.017 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:08:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:08:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4124713347' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.534 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.518s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.714 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.716 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3603MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.717 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.717 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.788 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.789 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:08:36 np0005473739 nova_compute[259550]: 2025-10-07 15:08:36.810 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:08:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3213: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:08:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1847847925' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:08:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:37 np0005473739 nova_compute[259550]: 2025-10-07 15:08:37.297 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:08:37 np0005473739 nova_compute[259550]: 2025-10-07 15:08:37.306 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:08:37 np0005473739 nova_compute[259550]: 2025-10-07 15:08:37.329 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:08:37 np0005473739 nova_compute[259550]: 2025-10-07 15:08:37.333 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:08:37 np0005473739 nova_compute[259550]: 2025-10-07 15:08:37.333 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.616s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:08:38 np0005473739 nova_compute[259550]: 2025-10-07 15:08:38.560 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:39 np0005473739 nova_compute[259550]: 2025-10-07 15:08:39.123 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3214: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:39 np0005473739 nova_compute[259550]: 2025-10-07 15:08:39.334 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:08:39 np0005473739 nova_compute[259550]: 2025-10-07 15:08:39.335 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:08:39 np0005473739 nova_compute[259550]: 2025-10-07 15:08:39.335 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:08:39 np0005473739 nova_compute[259550]: 2025-10-07 15:08:39.371 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:08:40 np0005473739 nova_compute[259550]: 2025-10-07 15:08:40.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:08:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3215: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3216: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:43 np0005473739 nova_compute[259550]: 2025-10-07 15:08:43.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:44 np0005473739 nova_compute[259550]: 2025-10-07 15:08:44.126 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3217: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:47 np0005473739 podman[441760]: 2025-10-07 15:08:47.083772431 +0000 UTC m=+0.064248281 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:08:47 np0005473739 podman[441759]: 2025-10-07 15:08:47.11108543 +0000 UTC m=+0.096893881 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  7 11:08:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3218: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:48 np0005473739 nova_compute[259550]: 2025-10-07 15:08:48.564 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:49 np0005473739 nova_compute[259550]: 2025-10-07 15:08:49.128 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3219: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3220: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:08:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:08:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3221: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:53 np0005473739 nova_compute[259550]: 2025-10-07 15:08:53.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:54 np0005473739 nova_compute[259550]: 2025-10-07 15:08:54.131 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3222: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3223: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:08:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:08:58 np0005473739 nova_compute[259550]: 2025-10-07 15:08:58.568 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:59 np0005473739 nova_compute[259550]: 2025-10-07 15:08:59.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:08:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3224: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:09:00.111 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:09:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:09:00.111 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:09:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:09:00.111 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:09:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3225: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:02 np0005473739 podman[441794]: 2025-10-07 15:09:02.0917039 +0000 UTC m=+0.076143013 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 11:09:02 np0005473739 podman[441795]: 2025-10-07 15:09:02.117183 +0000 UTC m=+0.102154388 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3)
Oct  7 11:09:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3226: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:03 np0005473739 nova_compute[259550]: 2025-10-07 15:09:03.569 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:09:03 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bcff4464-3bfa-459f-8a2b-1cb673ec9cfa does not exist
Oct  7 11:09:03 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 712199a8-2d06-4c15-8eee-7ec328337708 does not exist
Oct  7 11:09:03 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a7603c14-68d6-4d4a-9389-9c132ccdb1f4 does not exist
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:09:03 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:09:04 np0005473739 nova_compute[259550]: 2025-10-07 15:09:04.133 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:04 np0005473739 podman[442111]: 2025-10-07 15:09:04.376981239 +0000 UTC m=+0.050833818 container create ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0)
Oct  7 11:09:04 np0005473739 systemd[1]: Started libpod-conmon-ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a.scope.
Oct  7 11:09:04 np0005473739 podman[442111]: 2025-10-07 15:09:04.352120855 +0000 UTC m=+0.025973464 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:09:04 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:09:04 np0005473739 podman[442111]: 2025-10-07 15:09:04.479189777 +0000 UTC m=+0.153042376 container init ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:09:04 np0005473739 podman[442111]: 2025-10-07 15:09:04.489266122 +0000 UTC m=+0.163118701 container start ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:09:04 np0005473739 podman[442111]: 2025-10-07 15:09:04.493548085 +0000 UTC m=+0.167400704 container attach ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 11:09:04 np0005473739 musing_heisenberg[442128]: 167 167
Oct  7 11:09:04 np0005473739 systemd[1]: libpod-ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a.scope: Deactivated successfully.
Oct  7 11:09:04 np0005473739 podman[442111]: 2025-10-07 15:09:04.499119151 +0000 UTC m=+0.172971720 container died ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:09:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-079cfeaeaeff89cd63598c1a8441fafe1892064f940727e5ecbef92b70c66745-merged.mount: Deactivated successfully.
Oct  7 11:09:04 np0005473739 podman[442111]: 2025-10-07 15:09:04.553901172 +0000 UTC m=+0.227753741 container remove ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=musing_heisenberg, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 11:09:04 np0005473739 systemd[1]: libpod-conmon-ce940b6a8dd52fa398e8077bfb7ab4d7b365d196b2cc4057298296669c89023a.scope: Deactivated successfully.
Oct  7 11:09:04 np0005473739 podman[442152]: 2025-10-07 15:09:04.777253377 +0000 UTC m=+0.063236584 container create e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 11:09:04 np0005473739 systemd[1]: Started libpod-conmon-e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc.scope.
Oct  7 11:09:04 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:09:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:04 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:04 np0005473739 podman[442152]: 2025-10-07 15:09:04.757687022 +0000 UTC m=+0.043670239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:09:04 np0005473739 podman[442152]: 2025-10-07 15:09:04.866360331 +0000 UTC m=+0.152343518 container init e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:09:04 np0005473739 podman[442152]: 2025-10-07 15:09:04.873408836 +0000 UTC m=+0.159392023 container start e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:09:04 np0005473739 podman[442152]: 2025-10-07 15:09:04.877355829 +0000 UTC m=+0.163339016 container attach e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507)
Oct  7 11:09:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3227: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:05 np0005473739 elastic_mirzakhani[442169]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:09:05 np0005473739 elastic_mirzakhani[442169]: --> relative data size: 1.0
Oct  7 11:09:05 np0005473739 elastic_mirzakhani[442169]: --> All data devices are unavailable
Oct  7 11:09:05 np0005473739 systemd[1]: libpod-e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc.scope: Deactivated successfully.
Oct  7 11:09:05 np0005473739 podman[442152]: 2025-10-07 15:09:05.904775613 +0000 UTC m=+1.190758800 container died e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 11:09:05 np0005473739 systemd[1]: var-lib-containers-storage-overlay-8e18383c2e56c74256dd071fc40accd2db4a50b74abe3d18b87f2fb97f7f9a96-merged.mount: Deactivated successfully.
Oct  7 11:09:05 np0005473739 podman[442152]: 2025-10-07 15:09:05.9654524 +0000 UTC m=+1.251435587 container remove e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elastic_mirzakhani, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 11:09:05 np0005473739 systemd[1]: libpod-conmon-e26ab054974b2a28f772729c9cbb3a58c7f86508d582fb105c8ead446d56dadc.scope: Deactivated successfully.
Oct  7 11:09:06 np0005473739 podman[442353]: 2025-10-07 15:09:06.646198904 +0000 UTC m=+0.046331570 container create 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 11:09:06 np0005473739 systemd[1]: Started libpod-conmon-7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca.scope.
Oct  7 11:09:06 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:09:06 np0005473739 podman[442353]: 2025-10-07 15:09:06.627070111 +0000 UTC m=+0.027202787 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:09:06 np0005473739 podman[442353]: 2025-10-07 15:09:06.737919366 +0000 UTC m=+0.138052052 container init 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:09:06 np0005473739 podman[442353]: 2025-10-07 15:09:06.747880848 +0000 UTC m=+0.148013514 container start 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:09:06 np0005473739 epic_sutherland[442370]: 167 167
Oct  7 11:09:06 np0005473739 podman[442353]: 2025-10-07 15:09:06.752189371 +0000 UTC m=+0.152322037 container attach 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:09:06 np0005473739 systemd[1]: libpod-7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca.scope: Deactivated successfully.
Oct  7 11:09:06 np0005473739 podman[442353]: 2025-10-07 15:09:06.754289747 +0000 UTC m=+0.154422393 container died 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:09:06 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cc70d0e61f79c7dde5d0d1fb9e985b5a56eb3a35891552194cd39534638d9fea-merged.mount: Deactivated successfully.
Oct  7 11:09:06 np0005473739 podman[442353]: 2025-10-07 15:09:06.805122943 +0000 UTC m=+0.205255609 container remove 7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_sutherland, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 11:09:06 np0005473739 systemd[1]: libpod-conmon-7aeae4bd77869cb85e992854724822ee625ff6252cd38379e64bd7aed3e336ca.scope: Deactivated successfully.
Oct  7 11:09:07 np0005473739 podman[442393]: 2025-10-07 15:09:07.00874054 +0000 UTC m=+0.037623521 container create 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 11:09:07 np0005473739 systemd[1]: Started libpod-conmon-237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164.scope.
Oct  7 11:09:07 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:09:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617ef24771d05ed9148c0911dfd7afb5c5fcb6758d5a9bcf06c14350458c2100/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617ef24771d05ed9148c0911dfd7afb5c5fcb6758d5a9bcf06c14350458c2100/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:07 np0005473739 podman[442393]: 2025-10-07 15:09:06.99282332 +0000 UTC m=+0.021706321 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:09:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617ef24771d05ed9148c0911dfd7afb5c5fcb6758d5a9bcf06c14350458c2100/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:07 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/617ef24771d05ed9148c0911dfd7afb5c5fcb6758d5a9bcf06c14350458c2100/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:07 np0005473739 podman[442393]: 2025-10-07 15:09:07.103939244 +0000 UTC m=+0.132822255 container init 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0)
Oct  7 11:09:07 np0005473739 podman[442393]: 2025-10-07 15:09:07.117316225 +0000 UTC m=+0.146199236 container start 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 11:09:07 np0005473739 podman[442393]: 2025-10-07 15:09:07.120992642 +0000 UTC m=+0.149875633 container attach 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 11:09:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3228: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]: {
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:    "0": [
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:        {
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "devices": [
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "/dev/loop3"
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            ],
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_name": "ceph_lv0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_size": "21470642176",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "name": "ceph_lv0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "tags": {
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.cluster_name": "ceph",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.crush_device_class": "",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.encrypted": "0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.osd_id": "0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.type": "block",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.vdo": "0"
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            },
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "type": "block",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "vg_name": "ceph_vg0"
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:        }
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:    ],
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:    "1": [
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:        {
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "devices": [
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "/dev/loop4"
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            ],
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_name": "ceph_lv1",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_size": "21470642176",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "name": "ceph_lv1",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "tags": {
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.cluster_name": "ceph",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.crush_device_class": "",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.encrypted": "0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.osd_id": "1",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.type": "block",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.vdo": "0"
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            },
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "type": "block",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "vg_name": "ceph_vg1"
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:        }
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:    ],
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:    "2": [
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:        {
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "devices": [
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "/dev/loop5"
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            ],
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_name": "ceph_lv2",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_size": "21470642176",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "name": "ceph_lv2",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "tags": {
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.cluster_name": "ceph",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.crush_device_class": "",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.encrypted": "0",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.osd_id": "2",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.type": "block",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:                "ceph.vdo": "0"
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            },
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "type": "block",
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:            "vg_name": "ceph_vg2"
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:        }
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]:    ]
Oct  7 11:09:07 np0005473739 vigorous_wing[442410]: }
Oct  7 11:09:07 np0005473739 systemd[1]: libpod-237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164.scope: Deactivated successfully.
Oct  7 11:09:07 np0005473739 podman[442393]: 2025-10-07 15:09:07.925954725 +0000 UTC m=+0.954837716 container died 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:09:07 np0005473739 systemd[1]: var-lib-containers-storage-overlay-617ef24771d05ed9148c0911dfd7afb5c5fcb6758d5a9bcf06c14350458c2100-merged.mount: Deactivated successfully.
Oct  7 11:09:07 np0005473739 podman[442393]: 2025-10-07 15:09:07.9976331 +0000 UTC m=+1.026516081 container remove 237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_wing, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:09:08 np0005473739 systemd[1]: libpod-conmon-237ea5a80c4274e1453b562e2292f67800f2bf41a8df4c1b0f91c036ca237164.scope: Deactivated successfully.
Oct  7 11:09:08 np0005473739 nova_compute[259550]: 2025-10-07 15:09:08.572 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:08 np0005473739 podman[442577]: 2025-10-07 15:09:08.743942569 +0000 UTC m=+0.041318248 container create 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507)
Oct  7 11:09:08 np0005473739 systemd[1]: Started libpod-conmon-559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed.scope.
Oct  7 11:09:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:09:08 np0005473739 podman[442577]: 2025-10-07 15:09:08.726705626 +0000 UTC m=+0.024081395 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:09:08 np0005473739 podman[442577]: 2025-10-07 15:09:08.838717722 +0000 UTC m=+0.136093431 container init 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:09:08 np0005473739 podman[442577]: 2025-10-07 15:09:08.846465756 +0000 UTC m=+0.143841445 container start 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 11:09:08 np0005473739 podman[442577]: 2025-10-07 15:09:08.850658576 +0000 UTC m=+0.148034345 container attach 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:09:08 np0005473739 vigilant_sanderson[442593]: 167 167
Oct  7 11:09:08 np0005473739 systemd[1]: libpod-559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed.scope: Deactivated successfully.
Oct  7 11:09:08 np0005473739 podman[442577]: 2025-10-07 15:09:08.851881448 +0000 UTC m=+0.149257127 container died 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:09:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0dce03e32ed244b7f6b3e9a50cb15cef4b5a547771eb9d869b4514b682e6adbd-merged.mount: Deactivated successfully.
Oct  7 11:09:08 np0005473739 podman[442577]: 2025-10-07 15:09:08.892892097 +0000 UTC m=+0.190267796 container remove 559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigilant_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:09:08 np0005473739 systemd[1]: libpod-conmon-559d3db92a022bba71b2ced0f8a2a899b6a3bb3920e1f73352f4ef276d8ddaed.scope: Deactivated successfully.
Oct  7 11:09:09 np0005473739 podman[442617]: 2025-10-07 15:09:09.085164535 +0000 UTC m=+0.047654425 container create 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:09:09 np0005473739 systemd[1]: Started libpod-conmon-8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838.scope.
Oct  7 11:09:09 np0005473739 nova_compute[259550]: 2025-10-07 15:09:09.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:09:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e918bfe283fad05848c1cc0c5ce2519c481c71e52e831191dcac58062e67c3d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e918bfe283fad05848c1cc0c5ce2519c481c71e52e831191dcac58062e67c3d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:09 np0005473739 podman[442617]: 2025-10-07 15:09:09.064671465 +0000 UTC m=+0.027161335 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:09:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e918bfe283fad05848c1cc0c5ce2519c481c71e52e831191dcac58062e67c3d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e918bfe283fad05848c1cc0c5ce2519c481c71e52e831191dcac58062e67c3d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:09:09 np0005473739 podman[442617]: 2025-10-07 15:09:09.171411102 +0000 UTC m=+0.133900972 container init 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 11:09:09 np0005473739 podman[442617]: 2025-10-07 15:09:09.183119701 +0000 UTC m=+0.145609551 container start 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 11:09:09 np0005473739 podman[442617]: 2025-10-07 15:09:09.185868463 +0000 UTC m=+0.148358313 container attach 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:09:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3229: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]: {
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "osd_id": 2,
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "type": "bluestore"
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:    },
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "osd_id": 1,
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "type": "bluestore"
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:    },
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "osd_id": 0,
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:        "type": "bluestore"
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]:    }
Oct  7 11:09:10 np0005473739 modest_khayyam[442632]: }
Oct  7 11:09:10 np0005473739 systemd[1]: libpod-8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838.scope: Deactivated successfully.
Oct  7 11:09:10 np0005473739 podman[442617]: 2025-10-07 15:09:10.179691223 +0000 UTC m=+1.142181083 container died 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 11:09:10 np0005473739 systemd[1]: libpod-8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838.scope: Consumed 1.004s CPU time.
Oct  7 11:09:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7e918bfe283fad05848c1cc0c5ce2519c481c71e52e831191dcac58062e67c3d-merged.mount: Deactivated successfully.
Oct  7 11:09:10 np0005473739 podman[442617]: 2025-10-07 15:09:10.265963712 +0000 UTC m=+1.228453562 container remove 8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_khayyam, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:09:10 np0005473739 systemd[1]: libpod-conmon-8740306cbaa06a02ccc20e0112f0c2569567c8dd3f7db5c346d2bdcabeb49838.scope: Deactivated successfully.
Oct  7 11:09:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:09:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:09:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:09:10 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:09:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev eaf22875-8f08-424e-a7fd-216145d63d4a does not exist
Oct  7 11:09:10 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev febe4991-3f83-4f46-9d8e-8d98b2446073 does not exist
Oct  7 11:09:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3230: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:09:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:09:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3231: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:13 np0005473739 nova_compute[259550]: 2025-10-07 15:09:13.576 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:14 np0005473739 nova_compute[259550]: 2025-10-07 15:09:14.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:14 np0005473739 nova_compute[259550]: 2025-10-07 15:09:14.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:09:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3232: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3233: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:18 np0005473739 podman[442729]: 2025-10-07 15:09:18.09551628 +0000 UTC m=+0.077118759 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, container_name=multipathd)
Oct  7 11:09:18 np0005473739 podman[442730]: 2025-10-07 15:09:18.10011202 +0000 UTC m=+0.071799009 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, container_name=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 11:09:18 np0005473739 nova_compute[259550]: 2025-10-07 15:09:18.579 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:19 np0005473739 nova_compute[259550]: 2025-10-07 15:09:19.140 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3234: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3235: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:21 np0005473739 nova_compute[259550]: 2025-10-07 15:09:21.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:09:21 np0005473739 nova_compute[259550]: 2025-10-07 15:09:21.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:09:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:09:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:09:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:09:22
Oct  7 11:09:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:09:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:09:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['volumes', 'backups', 'default.rgw.control', '.rgw.root', 'default.rgw.log', 'images', 'vms', 'default.rgw.meta', '.mgr', 'cephfs.cephfs.meta', 'cephfs.cephfs.data']
Oct  7 11:09:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:09:22 np0005473739 nova_compute[259550]: 2025-10-07 15:09:22.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:09:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3236: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:09:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:09:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:09:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:09:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:09:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:09:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:09:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:09:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:09:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:09:23 np0005473739 nova_compute[259550]: 2025-10-07 15:09:23.581 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:23 np0005473739 nova_compute[259550]: 2025-10-07 15:09:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:09:23 np0005473739 nova_compute[259550]: 2025-10-07 15:09:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:09:24 np0005473739 nova_compute[259550]: 2025-10-07 15:09:24.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3237: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3238: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:27 np0005473739 nova_compute[259550]: 2025-10-07 15:09:27.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:09:28 np0005473739 nova_compute[259550]: 2025-10-07 15:09:28.583 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:29 np0005473739 nova_compute[259550]: 2025-10-07 15:09:29.145 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3239: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3240: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:09:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1273617194' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:09:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:09:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1273617194' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:09:32 np0005473739 podman[442765]: 2025-10-07 15:09:32.977860912 +0000 UTC m=+0.091439976 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:09:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:09:32 np0005473739 podman[442766]: 2025-10-07 15:09:32.996387599 +0000 UTC m=+0.118946909 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:09:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3241: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:33 np0005473739 nova_compute[259550]: 2025-10-07 15:09:33.586 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:34 np0005473739 nova_compute[259550]: 2025-10-07 15:09:34.146 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3242: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:35 np0005473739 nova_compute[259550]: 2025-10-07 15:09:35.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:09:36 np0005473739 nova_compute[259550]: 2025-10-07 15:09:36.103 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:09:36 np0005473739 nova_compute[259550]: 2025-10-07 15:09:36.103 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:09:36 np0005473739 nova_compute[259550]: 2025-10-07 15:09:36.103 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:09:36 np0005473739 nova_compute[259550]: 2025-10-07 15:09:36.104 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:09:36 np0005473739 nova_compute[259550]: 2025-10-07 15:09:36.104 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:09:36 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:09:36 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3536468881' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:09:36 np0005473739 nova_compute[259550]: 2025-10-07 15:09:36.578 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.474s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:09:36 np0005473739 nova_compute[259550]: 2025-10-07 15:09:36.785 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:09:36 np0005473739 nova_compute[259550]: 2025-10-07 15:09:36.787 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3618MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:09:36 np0005473739 nova_compute[259550]: 2025-10-07 15:09:36.787 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:09:36 np0005473739 nova_compute[259550]: 2025-10-07 15:09:36.787 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.028 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.029 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.124 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 11:09:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3243: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.261 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.261 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.279 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 11:09:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.328 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.348 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:09:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:09:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3173276964' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.793 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.445s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.799 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.819 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.821 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:09:37 np0005473739 nova_compute[259550]: 2025-10-07 15:09:37.821 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:09:38 np0005473739 nova_compute[259550]: 2025-10-07 15:09:38.588 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:39 np0005473739 nova_compute[259550]: 2025-10-07 15:09:39.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3244: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3245: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:41 np0005473739 nova_compute[259550]: 2025-10-07 15:09:41.822 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:09:41 np0005473739 nova_compute[259550]: 2025-10-07 15:09:41.823 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:09:41 np0005473739 nova_compute[259550]: 2025-10-07 15:09:41.823 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:09:41 np0005473739 nova_compute[259550]: 2025-10-07 15:09:41.847 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:09:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:42 np0005473739 nova_compute[259550]: 2025-10-07 15:09:42.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:09:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3246: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:43 np0005473739 nova_compute[259550]: 2025-10-07 15:09:43.590 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:44 np0005473739 nova_compute[259550]: 2025-10-07 15:09:44.199 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3247: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3248: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:48 np0005473739 nova_compute[259550]: 2025-10-07 15:09:48.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:49 np0005473739 podman[442853]: 2025-10-07 15:09:49.101643347 +0000 UTC m=+0.084232687 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible)
Oct  7 11:09:49 np0005473739 podman[442854]: 2025-10-07 15:09:49.121988792 +0000 UTC m=+0.101194293 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, container_name=iscsid, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:09:49 np0005473739 nova_compute[259550]: 2025-10-07 15:09:49.201 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3249: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3250: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:09:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:09:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3251: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:53 np0005473739 nova_compute[259550]: 2025-10-07 15:09:53.595 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:54 np0005473739 nova_compute[259550]: 2025-10-07 15:09:54.204 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3252: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3253: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:09:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:09:58 np0005473739 nova_compute[259550]: 2025-10-07 15:09:58.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:59 np0005473739 nova_compute[259550]: 2025-10-07 15:09:59.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:09:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3254: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:10:00.112 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:10:00.112 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:10:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:10:00.113 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:10:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3255: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3256: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:03 np0005473739 nova_compute[259550]: 2025-10-07 15:10:03.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:04 np0005473739 podman[442895]: 2025-10-07 15:10:04.059603617 +0000 UTC m=+0.052125522 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  7 11:10:04 np0005473739 podman[442896]: 2025-10-07 15:10:04.1038283 +0000 UTC m=+0.091223840 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 11:10:04 np0005473739 nova_compute[259550]: 2025-10-07 15:10:04.205 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3257: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3258: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:08 np0005473739 nova_compute[259550]: 2025-10-07 15:10:08.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:09 np0005473739 nova_compute[259550]: 2025-10-07 15:10:09.207 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3259: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3260: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:10:11 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev af0deae0-6c88-4754-a4c3-0ff08117dac6 does not exist
Oct  7 11:10:11 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 91f9fdd7-9068-4ec1-801e-aabaeac65a50 does not exist
Oct  7 11:10:11 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 93f1c869-f13e-40a1-a805-24a2cc2fee31 does not exist
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:10:11 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:10:12 np0005473739 podman[443212]: 2025-10-07 15:10:12.10647629 +0000 UTC m=+0.065403241 container create 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  7 11:10:12 np0005473739 systemd[1]: Started libpod-conmon-6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f.scope.
Oct  7 11:10:12 np0005473739 podman[443212]: 2025-10-07 15:10:12.07986716 +0000 UTC m=+0.038794151 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:10:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:10:12 np0005473739 podman[443212]: 2025-10-07 15:10:12.230172764 +0000 UTC m=+0.189099765 container init 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:10:12 np0005473739 podman[443212]: 2025-10-07 15:10:12.237722972 +0000 UTC m=+0.196649923 container start 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 11:10:12 np0005473739 brave_cartwright[443229]: 167 167
Oct  7 11:10:12 np0005473739 systemd[1]: libpod-6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f.scope: Deactivated successfully.
Oct  7 11:10:12 np0005473739 podman[443212]: 2025-10-07 15:10:12.244402828 +0000 UTC m=+0.203329879 container attach 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 11:10:12 np0005473739 podman[443212]: 2025-10-07 15:10:12.245083445 +0000 UTC m=+0.204010436 container died 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 11:10:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-383adae7a6607bdab438cbb1fbac14962d085844ab7c3e92960cce5ad87865ae-merged.mount: Deactivated successfully.
Oct  7 11:10:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:12 np0005473739 podman[443212]: 2025-10-07 15:10:12.325899101 +0000 UTC m=+0.284826052 container remove 6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=brave_cartwright, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 11:10:12 np0005473739 systemd[1]: libpod-conmon-6b195f54a0de99b5c1ece1f2ee33d840393fd69b780bb2e3bfd59537a385131f.scope: Deactivated successfully.
Oct  7 11:10:12 np0005473739 podman[443256]: 2025-10-07 15:10:12.579180153 +0000 UTC m=+0.058356916 container create ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 11:10:12 np0005473739 systemd[1]: Started libpod-conmon-ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657.scope.
Oct  7 11:10:12 np0005473739 podman[443256]: 2025-10-07 15:10:12.560374959 +0000 UTC m=+0.039551692 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:10:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:10:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:12 np0005473739 podman[443256]: 2025-10-07 15:10:12.690619355 +0000 UTC m=+0.169796138 container init ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 11:10:12 np0005473739 podman[443256]: 2025-10-07 15:10:12.69730566 +0000 UTC m=+0.176482383 container start ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:10:12 np0005473739 podman[443256]: 2025-10-07 15:10:12.700866634 +0000 UTC m=+0.180043397 container attach ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 11:10:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3261: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:13 np0005473739 nova_compute[259550]: 2025-10-07 15:10:13.603 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:13 np0005473739 confident_golick[443272]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:10:13 np0005473739 confident_golick[443272]: --> relative data size: 1.0
Oct  7 11:10:13 np0005473739 confident_golick[443272]: --> All data devices are unavailable
Oct  7 11:10:13 np0005473739 systemd[1]: libpod-ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657.scope: Deactivated successfully.
Oct  7 11:10:13 np0005473739 podman[443256]: 2025-10-07 15:10:13.856321615 +0000 UTC m=+1.335498358 container died ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 11:10:13 np0005473739 systemd[1]: libpod-ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657.scope: Consumed 1.095s CPU time.
Oct  7 11:10:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2541b25e74ef25aaad72a976feb3b79c9ef788395b27c6965d7f34a5c1c06712-merged.mount: Deactivated successfully.
Oct  7 11:10:13 np0005473739 podman[443256]: 2025-10-07 15:10:13.9473672 +0000 UTC m=+1.426543923 container remove ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=confident_golick, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 11:10:13 np0005473739 systemd[1]: libpod-conmon-ba0d63e1bdf20828a3870303cd9facb16ec4b885b83f7ef5a0f4e23d3303d657.scope: Deactivated successfully.
Oct  7 11:10:14 np0005473739 nova_compute[259550]: 2025-10-07 15:10:14.209 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:14 np0005473739 podman[443455]: 2025-10-07 15:10:14.605981173 +0000 UTC m=+0.030012570 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:10:14 np0005473739 podman[443455]: 2025-10-07 15:10:14.704534935 +0000 UTC m=+0.128566282 container create 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 11:10:14 np0005473739 systemd[1]: Started libpod-conmon-4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c.scope.
Oct  7 11:10:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:10:14 np0005473739 podman[443455]: 2025-10-07 15:10:14.911416697 +0000 UTC m=+0.335448144 container init 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 11:10:14 np0005473739 podman[443455]: 2025-10-07 15:10:14.923925096 +0000 UTC m=+0.347956483 container start 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:10:14 np0005473739 agitated_benz[443471]: 167 167
Oct  7 11:10:14 np0005473739 systemd[1]: libpod-4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c.scope: Deactivated successfully.
Oct  7 11:10:14 np0005473739 podman[443455]: 2025-10-07 15:10:14.985590038 +0000 UTC m=+0.409621405 container attach 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:10:14 np0005473739 podman[443455]: 2025-10-07 15:10:14.987533229 +0000 UTC m=+0.411564586 container died 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 11:10:15 np0005473739 systemd[1]: var-lib-containers-storage-overlay-bcc43757abf960d6cf39aa8f4b42e958f70cb8e5c0bd1cb5b724f802aa847839-merged.mount: Deactivated successfully.
Oct  7 11:10:15 np0005473739 podman[443455]: 2025-10-07 15:10:15.239187978 +0000 UTC m=+0.663219365 container remove 4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=agitated_benz, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 11:10:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3262: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:15 np0005473739 systemd[1]: libpod-conmon-4516f5d350c0a631ad0b116d8e4adf11bb8773a0ffe2f5a0eea0734abf3f0b6c.scope: Deactivated successfully.
Oct  7 11:10:15 np0005473739 podman[443497]: 2025-10-07 15:10:15.52173589 +0000 UTC m=+0.106622906 container create 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:10:15 np0005473739 podman[443497]: 2025-10-07 15:10:15.471597231 +0000 UTC m=+0.056484247 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:10:15 np0005473739 systemd[1]: Started libpod-conmon-15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977.scope.
Oct  7 11:10:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:10:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5e84771f4af906cc20eb87bc99ba6799bd5ba3c63364cd5f35c6c2b63accc1/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5e84771f4af906cc20eb87bc99ba6799bd5ba3c63364cd5f35c6c2b63accc1/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5e84771f4af906cc20eb87bc99ba6799bd5ba3c63364cd5f35c6c2b63accc1/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/1c5e84771f4af906cc20eb87bc99ba6799bd5ba3c63364cd5f35c6c2b63accc1/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:15 np0005473739 podman[443497]: 2025-10-07 15:10:15.654918253 +0000 UTC m=+0.239805239 container init 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:10:15 np0005473739 podman[443497]: 2025-10-07 15:10:15.663129729 +0000 UTC m=+0.248016705 container start 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:10:15 np0005473739 podman[443497]: 2025-10-07 15:10:15.669309081 +0000 UTC m=+0.254196067 container attach 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]: {
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:    "0": [
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:        {
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "devices": [
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "/dev/loop3"
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            ],
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_name": "ceph_lv0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_size": "21470642176",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "name": "ceph_lv0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "tags": {
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.cluster_name": "ceph",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.crush_device_class": "",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.encrypted": "0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.osd_id": "0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.type": "block",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.vdo": "0"
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            },
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "type": "block",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "vg_name": "ceph_vg0"
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:        }
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:    ],
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:    "1": [
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:        {
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "devices": [
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "/dev/loop4"
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            ],
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_name": "ceph_lv1",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_size": "21470642176",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "name": "ceph_lv1",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "tags": {
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.cluster_name": "ceph",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.crush_device_class": "",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.encrypted": "0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.osd_id": "1",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.type": "block",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.vdo": "0"
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            },
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "type": "block",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "vg_name": "ceph_vg1"
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:        }
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:    ],
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:    "2": [
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:        {
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "devices": [
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "/dev/loop5"
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            ],
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_name": "ceph_lv2",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_size": "21470642176",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "name": "ceph_lv2",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "tags": {
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.cluster_name": "ceph",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.crush_device_class": "",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.encrypted": "0",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.osd_id": "2",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.type": "block",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:                "ceph.vdo": "0"
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            },
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "type": "block",
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:            "vg_name": "ceph_vg2"
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:        }
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]:    ]
Oct  7 11:10:16 np0005473739 ecstatic_joliot[443514]: }
Oct  7 11:10:16 np0005473739 systemd[1]: libpod-15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977.scope: Deactivated successfully.
Oct  7 11:10:16 np0005473739 podman[443497]: 2025-10-07 15:10:16.454185686 +0000 UTC m=+1.039072772 container died 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 11:10:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-1c5e84771f4af906cc20eb87bc99ba6799bd5ba3c63364cd5f35c6c2b63accc1-merged.mount: Deactivated successfully.
Oct  7 11:10:16 np0005473739 podman[443497]: 2025-10-07 15:10:16.567234328 +0000 UTC m=+1.152121314 container remove 15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=ecstatic_joliot, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True)
Oct  7 11:10:16 np0005473739 systemd[1]: libpod-conmon-15dc1948d934cc85721242be1938a0f8c0eca2d18e8bb305d142317da68be977.scope: Deactivated successfully.
Oct  7 11:10:16 np0005473739 nova_compute[259550]: 2025-10-07 15:10:16.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:10:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3263: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:17 np0005473739 podman[443677]: 2025-10-07 15:10:17.308836775 +0000 UTC m=+0.054107845 container create e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:10:17 np0005473739 podman[443677]: 2025-10-07 15:10:17.279034101 +0000 UTC m=+0.024305201 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:10:17 np0005473739 systemd[1]: Started libpod-conmon-e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c.scope.
Oct  7 11:10:17 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:10:17 np0005473739 podman[443677]: 2025-10-07 15:10:17.484916176 +0000 UTC m=+0.230187346 container init e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2)
Oct  7 11:10:17 np0005473739 podman[443677]: 2025-10-07 15:10:17.492394483 +0000 UTC m=+0.237665563 container start e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 11:10:17 np0005473739 zealous_haibt[443693]: 167 167
Oct  7 11:10:17 np0005473739 systemd[1]: libpod-e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c.scope: Deactivated successfully.
Oct  7 11:10:17 np0005473739 podman[443677]: 2025-10-07 15:10:17.517741119 +0000 UTC m=+0.263012209 container attach e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, io.buildah.version=1.39.3)
Oct  7 11:10:17 np0005473739 podman[443677]: 2025-10-07 15:10:17.518239122 +0000 UTC m=+0.263510192 container died e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:10:17 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ce2aa59bacdaa94bbda2ebec2eeed915a302383420c7644425e74ce5276b3c96-merged.mount: Deactivated successfully.
Oct  7 11:10:17 np0005473739 podman[443677]: 2025-10-07 15:10:17.685009489 +0000 UTC m=+0.430280559 container remove e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zealous_haibt, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 11:10:17 np0005473739 systemd[1]: libpod-conmon-e44c9569b77ed454c5b0e7afa4039ac7c0349dba2efb1712b2865e037708873c.scope: Deactivated successfully.
Oct  7 11:10:17 np0005473739 podman[443721]: 2025-10-07 15:10:17.899319716 +0000 UTC m=+0.077261774 container create 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:10:17 np0005473739 podman[443721]: 2025-10-07 15:10:17.849973667 +0000 UTC m=+0.027915775 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:10:17 np0005473739 systemd[1]: Started libpod-conmon-02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c.scope.
Oct  7 11:10:18 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:10:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f14e0d4822cc0c0ca17d1be1d8a6d3b903affd4ce152b4c8327a53d1029fd4/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f14e0d4822cc0c0ca17d1be1d8a6d3b903affd4ce152b4c8327a53d1029fd4/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f14e0d4822cc0c0ca17d1be1d8a6d3b903affd4ce152b4c8327a53d1029fd4/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:18 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/85f14e0d4822cc0c0ca17d1be1d8a6d3b903affd4ce152b4c8327a53d1029fd4/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:10:18 np0005473739 podman[443721]: 2025-10-07 15:10:18.050510243 +0000 UTC m=+0.228452321 container init 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 11:10:18 np0005473739 podman[443721]: 2025-10-07 15:10:18.060830454 +0000 UTC m=+0.238772522 container start 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:10:18 np0005473739 podman[443721]: 2025-10-07 15:10:18.063924675 +0000 UTC m=+0.241866743 container attach 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 11:10:18 np0005473739 nova_compute[259550]: 2025-10-07 15:10:18.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:19 np0005473739 goofy_euler[443738]: {
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "osd_id": 2,
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "type": "bluestore"
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:    },
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "osd_id": 1,
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "type": "bluestore"
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:    },
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "osd_id": 0,
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:        "type": "bluestore"
Oct  7 11:10:19 np0005473739 goofy_euler[443738]:    }
Oct  7 11:10:19 np0005473739 goofy_euler[443738]: }
Oct  7 11:10:19 np0005473739 systemd[1]: libpod-02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c.scope: Deactivated successfully.
Oct  7 11:10:19 np0005473739 systemd[1]: libpod-02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c.scope: Consumed 1.027s CPU time.
Oct  7 11:10:19 np0005473739 podman[443721]: 2025-10-07 15:10:19.120344302 +0000 UTC m=+1.298286370 container died 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef)
Oct  7 11:10:19 np0005473739 nova_compute[259550]: 2025-10-07 15:10:19.211 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:19 np0005473739 podman[443771]: 2025-10-07 15:10:19.23547728 +0000 UTC m=+0.084992536 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2)
Oct  7 11:10:19 np0005473739 systemd[1]: var-lib-containers-storage-overlay-85f14e0d4822cc0c0ca17d1be1d8a6d3b903affd4ce152b4c8327a53d1029fd4-merged.mount: Deactivated successfully.
Oct  7 11:10:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3264: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:19 np0005473739 podman[443721]: 2025-10-07 15:10:19.463123129 +0000 UTC m=+1.641065227 container remove 02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=goofy_euler, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 11:10:19 np0005473739 systemd[1]: libpod-conmon-02c5f0f0ee48c89618e08d03c4050d8a39cc56f1843805455acdcbb11029047c.scope: Deactivated successfully.
Oct  7 11:10:19 np0005473739 podman[443778]: 2025-10-07 15:10:19.487819828 +0000 UTC m=+0.333289608 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=iscsid, org.label-schema.build-date=20251001, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 11:10:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:10:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:10:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:10:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:10:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ad90d8f7-32b5-42e4-b9ab-9f71fd78de87 does not exist
Oct  7 11:10:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a0b24251-ae28-473b-8e49-a80d49cbe9ab does not exist
Oct  7 11:10:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:10:20 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:10:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3265: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:10:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:10:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:10:22
Oct  7 11:10:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:10:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:10:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'default.rgw.meta', 'volumes', '.rgw.root', '.mgr', 'default.rgw.control', 'backups', 'vms', 'cephfs.cephfs.meta', 'default.rgw.log', 'images']
Oct  7 11:10:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:10:22 np0005473739 nova_compute[259550]: 2025-10-07 15:10:22.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:10:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3266: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:10:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:10:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:10:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:10:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:10:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:10:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:10:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:10:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:10:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:10:23 np0005473739 nova_compute[259550]: 2025-10-07 15:10:23.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:23 np0005473739 nova_compute[259550]: 2025-10-07 15:10:23.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:10:23 np0005473739 nova_compute[259550]: 2025-10-07 15:10:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:10:23 np0005473739 nova_compute[259550]: 2025-10-07 15:10:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:10:23 np0005473739 nova_compute[259550]: 2025-10-07 15:10:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #159. Immutable memtables: 0.
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.074265) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 97] Flushing memtable with next log file: 159
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824074332, "job": 97, "event": "flush_started", "num_memtables": 1, "num_entries": 1636, "num_deletes": 252, "total_data_size": 2580163, "memory_usage": 2617152, "flush_reason": "Manual Compaction"}
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 97] Level-0 flush table #160: started
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824151074, "cf_name": "default", "job": 97, "event": "table_file_creation", "file_number": 160, "file_size": 2532535, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 66490, "largest_seqno": 68125, "table_properties": {"data_size": 2524947, "index_size": 4530, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1989, "raw_key_size": 15629, "raw_average_key_size": 20, "raw_value_size": 2509723, "raw_average_value_size": 3225, "num_data_blocks": 202, "num_entries": 778, "num_filter_entries": 778, "num_deletions": 252, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849657, "oldest_key_time": 1759849657, "file_creation_time": 1759849824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 160, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 97] Flush lasted 76867 microseconds, and 10435 cpu microseconds.
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.151129) [db/flush_job.cc:967] [default] [JOB 97] Level-0 flush table #160: 2532535 bytes OK
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.151155) [db/memtable_list.cc:519] [default] Level-0 commit table #160 started
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.242605) [db/memtable_list.cc:722] [default] Level-0 commit table #160: memtable #1 done
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.242652) EVENT_LOG_v1 {"time_micros": 1759849824242641, "job": 97, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.242681) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 97] Try to delete WAL files size 2573115, prev total WAL file size 2574272, number of live WAL files 2.
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000156.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.246434) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036353236' seq:72057594037927935, type:22 .. '7061786F730036373738' seq:0, type:0; will stop at (end)
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 98] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 97 Base level 0, inputs: [160(2473KB)], [158(10170KB)]
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824246569, "job": 98, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [160], "files_L6": [158], "score": -1, "input_data_size": 12947275, "oldest_snapshot_seqno": -1}
Oct  7 11:10:24 np0005473739 nova_compute[259550]: 2025-10-07 15:10:24.247 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 98] Generated table #161: 8585 keys, 11217433 bytes, temperature: kUnknown
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824499333, "cf_name": "default", "job": 98, "event": "table_file_creation", "file_number": 161, "file_size": 11217433, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 11161214, "index_size": 33642, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21509, "raw_key_size": 225140, "raw_average_key_size": 26, "raw_value_size": 11009122, "raw_average_value_size": 1282, "num_data_blocks": 1306, "num_entries": 8585, "num_filter_entries": 8585, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849824, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 161, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.499995) [db/compaction/compaction_job.cc:1663] [default] [JOB 98] Compacted 1@0 + 1@6 files to L6 => 11217433 bytes
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.541509) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 51.2 rd, 44.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.9 +0.0 blob) out(10.7 +0.0 blob), read-write-amplify(9.5) write-amplify(4.4) OK, records in: 9105, records dropped: 520 output_compression: NoCompression
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.541542) EVENT_LOG_v1 {"time_micros": 1759849824541529, "job": 98, "event": "compaction_finished", "compaction_time_micros": 253036, "compaction_time_cpu_micros": 56264, "output_level": 6, "num_output_files": 1, "total_output_size": 11217433, "num_input_records": 9105, "num_output_records": 8585, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000160.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824542288, "job": 98, "event": "table_file_deletion", "file_number": 160}
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000158.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849824544394, "job": 98, "event": "table_file_deletion", "file_number": 158}
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.246249) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.544492) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.544497) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.544498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.544500) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:10:24 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:10:24.544501) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:10:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3267: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3268: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:27 np0005473739 nova_compute[259550]: 2025-10-07 15:10:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:10:28 np0005473739 nova_compute[259550]: 2025-10-07 15:10:28.611 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3269: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:29 np0005473739 nova_compute[259550]: 2025-10-07 15:10:29.300 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:29 np0005473739 systemd-logind[801]: New session 54 of user zuul.
Oct  7 11:10:29 np0005473739 systemd[1]: Started Session 54 of User zuul.
Oct  7 11:10:29 np0005473739 nova_compute[259550]: 2025-10-07 15:10:29.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:10:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3270: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:10:31.524 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=63, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=62) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 11:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:10:31.526 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 11:10:31 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:10:31.526 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '63'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:10:31 np0005473739 nova_compute[259550]: 2025-10-07 15:10:31.567 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:10:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1032906449' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:10:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:10:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1032906449' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:10:32 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:10:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3271: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:33 np0005473739 nova_compute[259550]: 2025-10-07 15:10:33.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:34 np0005473739 nova_compute[259550]: 2025-10-07 15:10:34.303 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:35 np0005473739 podman[444107]: 2025-10-07 15:10:35.089919243 +0000 UTC m=+0.072872857 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Oct  7 11:10:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:10:35 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 14K writes, 68K keys, 14K commit groups, 1.0 writes per commit group, ingest: 0.09 GB, 0.02 MB/s#012Cumulative WAL: 14K writes, 14K syncs, 1.00 writes per sync, written: 0.09 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1391 writes, 6281 keys, 1391 commit groups, 1.0 writes per commit group, ingest: 8.95 MB, 0.01 MB/s#012Interval WAL: 1391 writes, 1391 syncs, 1.00 writes per sync, written: 0.01 GB, 0.01 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   1.0      0.0     69.9      1.20              0.26        49    0.025       0      0       0.0       0.0#012  L6      1/0   10.70 MB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   4.8    133.2    112.7      3.59              1.20        48    0.075    318K    25K       0.0       0.0#012 Sum      1/0   10.70 MB   0.0      0.5     0.1      0.4       0.5      0.1       0.0   5.8     99.7    102.0      4.79              1.47        97    0.049    318K    25K       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.1     0.0      0.0       0.1      0.0       0.0   6.4     84.6     86.2      0.67              0.21        10    0.067     44K   2597       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) 
Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Low      0/0    0.00 KB   0.0      0.5     0.1      0.4       0.4      0.0       0.0   0.0    133.2    112.7      3.59              1.20        48    0.075    318K    25K       0.0       0.0#012High      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.1      0.1       0.0   0.0      0.0     70.0      1.20              0.26        48    0.025       0      0       0.0       0.0#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0     14.2      0.00              0.00         1    0.004       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 600.0 interval#012Flush(GB): cumulative 0.082, interval 0.009#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.48 GB write, 0.08 MB/s write, 0.47 GB read, 0.08 MB/s read, 4.8 seconds#012Interval compaction: 0.06 GB write, 0.10 MB/s write, 0.06 GB read, 0.10 MB/s read, 0.7 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x5619101451f0#2 capacity: 304.00 MB usage: 55.08 MB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 0 last_secs: 0.000446 secs_since: 0#012Block cache entry stats(count,size,portion): 
DataBlock(3552,52.79 MB,17.3649%) FilterBlock(98,887.67 KB,0.285154%) IndexBlock(98,1.42 MB,0.468078%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **
Oct  7 11:10:35 np0005473739 podman[444108]: 2025-10-07 15:10:35.142581489 +0000 UTC m=+0.115104249 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 11:10:35 np0005473739 systemd[1]: session-54.scope: Deactivated successfully.
Oct  7 11:10:35 np0005473739 systemd-logind[801]: Session 54 logged out. Waiting for processes to exit.
Oct  7 11:10:35 np0005473739 systemd-logind[801]: Removed session 54.
Oct  7 11:10:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3272: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3273: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:37 np0005473739 nova_compute[259550]: 2025-10-07 15:10:37.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.113 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.114 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.114 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.115 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.115 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:10:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:10:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2706709220' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.590 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.475s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.616 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.748 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.750 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3621MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.750 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.750 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.829 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.829 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:10:38 np0005473739 nova_compute[259550]: 2025-10-07 15:10:38.857 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:10:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3274: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:39 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:10:39 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/619261694' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:10:39 np0005473739 nova_compute[259550]: 2025-10-07 15:10:39.282 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.425s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:10:39 np0005473739 nova_compute[259550]: 2025-10-07 15:10:39.289 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:10:39 np0005473739 nova_compute[259550]: 2025-10-07 15:10:39.305 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:39 np0005473739 nova_compute[259550]: 2025-10-07 15:10:39.378 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:10:39 np0005473739 nova_compute[259550]: 2025-10-07 15:10:39.381 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:10:39 np0005473739 nova_compute[259550]: 2025-10-07 15:10:39.381 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:10:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3275: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3276: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:43 np0005473739 nova_compute[259550]: 2025-10-07 15:10:43.382 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:10:43 np0005473739 nova_compute[259550]: 2025-10-07 15:10:43.382 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:10:43 np0005473739 nova_compute[259550]: 2025-10-07 15:10:43.383 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:10:43 np0005473739 nova_compute[259550]: 2025-10-07 15:10:43.404 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:10:43 np0005473739 nova_compute[259550]: 2025-10-07 15:10:43.618 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:43 np0005473739 nova_compute[259550]: 2025-10-07 15:10:43.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:10:44 np0005473739 nova_compute[259550]: 2025-10-07 15:10:44.306 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3277: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3278: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:48 np0005473739 nova_compute[259550]: 2025-10-07 15:10:48.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3279: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:49 np0005473739 nova_compute[259550]: 2025-10-07 15:10:49.308 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:50 np0005473739 podman[444218]: 2025-10-07 15:10:50.08303917 +0000 UTC m=+0.066993204 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 11:10:50 np0005473739 podman[444219]: 2025-10-07 15:10:50.094491241 +0000 UTC m=+0.080033706 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=iscsid, io.buildah.version=1.41.3)
Oct  7 11:10:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3280: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:10:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:10:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3281: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:53 np0005473739 nova_compute[259550]: 2025-10-07 15:10:53.623 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:54 np0005473739 nova_compute[259550]: 2025-10-07 15:10:54.312 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3282: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3283: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:10:58 np0005473739 nova_compute[259550]: 2025-10-07 15:10:58.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:10:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3284: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:10:59 np0005473739 nova_compute[259550]: 2025-10-07 15:10:59.314 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:11:00.114 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:11:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:11:00.114 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:11:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:11:00.114 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:11:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3285: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3286: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:03 np0005473739 nova_compute[259550]: 2025-10-07 15:11:03.628 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:04 np0005473739 nova_compute[259550]: 2025-10-07 15:11:04.317 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3287: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:06 np0005473739 podman[444258]: 2025-10-07 15:11:06.106579663 +0000 UTC m=+0.086919967 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 11:11:06 np0005473739 podman[444259]: 2025-10-07 15:11:06.167716681 +0000 UTC m=+0.143421884 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 11:11:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3288: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:11:08 np0005473739 nova_compute[259550]: 2025-10-07 15:11:08.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3289: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:09 np0005473739 nova_compute[259550]: 2025-10-07 15:11:09.443 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3290: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:11:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3291: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:13 np0005473739 nova_compute[259550]: 2025-10-07 15:11:13.630 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:14 np0005473739 nova_compute[259550]: 2025-10-07 15:11:14.445 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3292: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:16 np0005473739 nova_compute[259550]: 2025-10-07 15:11:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:11:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3293: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:11:18 np0005473739 nova_compute[259550]: 2025-10-07 15:11:18.631 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3294: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:19 np0005473739 nova_compute[259550]: 2025-10-07 15:11:19.447 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:11:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 96fc50dc-9cc2-4f98-9126-df10c0a7de94 does not exist
Oct  7 11:11:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 0eeb8113-bf92-4b09-89cb-f85ff60210ee does not exist
Oct  7 11:11:20 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 228daef2-7cc3-4856-ae5e-f0aee3a6c8df does not exist
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:11:20 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:11:20 np0005473739 podman[444456]: 2025-10-07 15:11:20.885292079 +0000 UTC m=+0.087418740 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 11:11:20 np0005473739 podman[444457]: 2025-10-07 15:11:20.886257265 +0000 UTC m=+0.089341451 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=iscsid, container_name=iscsid, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct  7 11:11:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3295: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:21 np0005473739 podman[444611]: 2025-10-07 15:11:21.386483592 +0000 UTC m=+0.035025452 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:11:21 np0005473739 podman[444611]: 2025-10-07 15:11:21.560425607 +0000 UTC m=+0.208967447 container create 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS)
Oct  7 11:11:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:11:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:11:21 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:11:21 np0005473739 systemd[1]: Started libpod-conmon-125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5.scope.
Oct  7 11:11:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:11:21 np0005473739 podman[444611]: 2025-10-07 15:11:21.929463864 +0000 UTC m=+0.578005724 container init 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:11:21 np0005473739 podman[444611]: 2025-10-07 15:11:21.942565018 +0000 UTC m=+0.591106888 container start 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:11:21 np0005473739 mystifying_heyrovsky[444628]: 167 167
Oct  7 11:11:21 np0005473739 systemd[1]: libpod-125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5.scope: Deactivated successfully.
Oct  7 11:11:21 np0005473739 podman[444611]: 2025-10-07 15:11:21.992870932 +0000 UTC m=+0.641412802 container attach 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:11:21 np0005473739 podman[444611]: 2025-10-07 15:11:21.994912316 +0000 UTC m=+0.643454146 container died 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 11:11:22 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2b6090f2b4af7a8e38e43fc63d0af7d11b87a1d5b66ea5bee993725b846d312b-merged.mount: Deactivated successfully.
Oct  7 11:11:22 np0005473739 podman[444611]: 2025-10-07 15:11:22.243987616 +0000 UTC m=+0.892529506 container remove 125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=mystifying_heyrovsky, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:11:22 np0005473739 systemd[1]: libpod-conmon-125ddaa9bbbc95ed86072669cffe5f804c810e91d7bb52044a11301a74c4aed5.scope: Deactivated successfully.
Oct  7 11:11:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:11:22 np0005473739 podman[444652]: 2025-10-07 15:11:22.416158505 +0000 UTC m=+0.045649301 container create c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:11:22 np0005473739 systemd[1]: Started libpod-conmon-c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2.scope.
Oct  7 11:11:22 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:11:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:22 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:22 np0005473739 podman[444652]: 2025-10-07 15:11:22.395736118 +0000 UTC m=+0.025226944 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:11:22 np0005473739 podman[444652]: 2025-10-07 15:11:22.498508071 +0000 UTC m=+0.127998887 container init c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:11:22 np0005473739 podman[444652]: 2025-10-07 15:11:22.507735614 +0000 UTC m=+0.137226410 container start c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:11:22 np0005473739 podman[444652]: 2025-10-07 15:11:22.511150944 +0000 UTC m=+0.140641730 container attach c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:11:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:11:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:11:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:11:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:11:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:11:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:11:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:11:22
Oct  7 11:11:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:11:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:11:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.control', 'backups', 'default.rgw.meta', 'cephfs.cephfs.data', 'volumes', 'images', 'vms', '.mgr', 'default.rgw.log', '.rgw.root', 'cephfs.cephfs.meta']
Oct  7 11:11:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:11:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3296: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:11:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:11:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:11:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:11:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:11:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:11:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:11:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:11:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:11:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:11:23 np0005473739 nova_compute[259550]: 2025-10-07 15:11:23.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:23 np0005473739 modest_dubinsky[444668]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:11:23 np0005473739 modest_dubinsky[444668]: --> relative data size: 1.0
Oct  7 11:11:23 np0005473739 modest_dubinsky[444668]: --> All data devices are unavailable
Oct  7 11:11:23 np0005473739 systemd[1]: libpod-c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2.scope: Deactivated successfully.
Oct  7 11:11:23 np0005473739 systemd[1]: libpod-c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2.scope: Consumed 1.139s CPU time.
Oct  7 11:11:23 np0005473739 podman[444652]: 2025-10-07 15:11:23.704647806 +0000 UTC m=+1.334138612 container died c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:11:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b9d0a834cabf0b64bb4e14c66089ff28aefc7c668f3f1f87cac9c9ecf7a87898-merged.mount: Deactivated successfully.
Oct  7 11:11:23 np0005473739 podman[444652]: 2025-10-07 15:11:23.777319678 +0000 UTC m=+1.406810474 container remove c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_dubinsky, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:11:23 np0005473739 systemd[1]: libpod-conmon-c8ec58774ae433c9505b8db9599307998e34a711eb2e8e9acc2dfa61d753d3c2.scope: Deactivated successfully.
Oct  7 11:11:23 np0005473739 nova_compute[259550]: 2025-10-07 15:11:23.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:11:23 np0005473739 nova_compute[259550]: 2025-10-07 15:11:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:11:23 np0005473739 nova_compute[259550]: 2025-10-07 15:11:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:11:23 np0005473739 nova_compute[259550]: 2025-10-07 15:11:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:11:24 np0005473739 podman[444853]: 2025-10-07 15:11:24.403302903 +0000 UTC m=+0.037509878 container create d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 11:11:24 np0005473739 systemd[1]: Started libpod-conmon-d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9.scope.
Oct  7 11:11:24 np0005473739 nova_compute[259550]: 2025-10-07 15:11:24.449 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:11:24 np0005473739 podman[444853]: 2025-10-07 15:11:24.386336816 +0000 UTC m=+0.020543811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:11:24 np0005473739 podman[444853]: 2025-10-07 15:11:24.482046984 +0000 UTC m=+0.116253979 container init d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:11:24 np0005473739 podman[444853]: 2025-10-07 15:11:24.488432121 +0000 UTC m=+0.122639096 container start d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2)
Oct  7 11:11:24 np0005473739 podman[444853]: 2025-10-07 15:11:24.4921708 +0000 UTC m=+0.126377795 container attach d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 11:11:24 np0005473739 jovial_kilby[444869]: 167 167
Oct  7 11:11:24 np0005473739 podman[444853]: 2025-10-07 15:11:24.493621168 +0000 UTC m=+0.127828143 container died d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:11:24 np0005473739 systemd[1]: libpod-d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9.scope: Deactivated successfully.
Oct  7 11:11:24 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5c2d185594dd18b91d7d49c9483425f3ab5a77409f93ef3c5338f52265c849d5-merged.mount: Deactivated successfully.
Oct  7 11:11:24 np0005473739 podman[444853]: 2025-10-07 15:11:24.537292596 +0000 UTC m=+0.171499561 container remove d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_kilby, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 11:11:24 np0005473739 systemd[1]: libpod-conmon-d39d76a168123db8aeceb595394fdc17ae48680098bdbc96febc60d0a39b6eb9.scope: Deactivated successfully.
Oct  7 11:11:24 np0005473739 podman[444892]: 2025-10-07 15:11:24.686384068 +0000 UTC m=+0.042496549 container create 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 11:11:24 np0005473739 systemd[1]: Started libpod-conmon-79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6.scope.
Oct  7 11:11:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:11:24 np0005473739 podman[444892]: 2025-10-07 15:11:24.666696 +0000 UTC m=+0.022808501 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:11:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b766a7db4082f6f8da47e9a551d4dd456c308bfd17f25950a3712d9eb358da77/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b766a7db4082f6f8da47e9a551d4dd456c308bfd17f25950a3712d9eb358da77/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b766a7db4082f6f8da47e9a551d4dd456c308bfd17f25950a3712d9eb358da77/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b766a7db4082f6f8da47e9a551d4dd456c308bfd17f25950a3712d9eb358da77/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:24 np0005473739 podman[444892]: 2025-10-07 15:11:24.784063078 +0000 UTC m=+0.140175579 container init 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:11:24 np0005473739 podman[444892]: 2025-10-07 15:11:24.792018626 +0000 UTC m=+0.148131107 container start 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:11:24 np0005473739 podman[444892]: 2025-10-07 15:11:24.796961867 +0000 UTC m=+0.153074368 container attach 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:11:24 np0005473739 nova_compute[259550]: 2025-10-07 15:11:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:11:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3297: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:25 np0005473739 elated_allen[444908]: {
Oct  7 11:11:25 np0005473739 elated_allen[444908]:    "0": [
Oct  7 11:11:25 np0005473739 elated_allen[444908]:        {
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "devices": [
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "/dev/loop3"
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            ],
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_name": "ceph_lv0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_size": "21470642176",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "name": "ceph_lv0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "tags": {
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.cluster_name": "ceph",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.crush_device_class": "",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.encrypted": "0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.osd_id": "0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.type": "block",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.vdo": "0"
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            },
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "type": "block",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "vg_name": "ceph_vg0"
Oct  7 11:11:25 np0005473739 elated_allen[444908]:        }
Oct  7 11:11:25 np0005473739 elated_allen[444908]:    ],
Oct  7 11:11:25 np0005473739 elated_allen[444908]:    "1": [
Oct  7 11:11:25 np0005473739 elated_allen[444908]:        {
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "devices": [
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "/dev/loop4"
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            ],
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_name": "ceph_lv1",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_size": "21470642176",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "name": "ceph_lv1",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "tags": {
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.cluster_name": "ceph",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.crush_device_class": "",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.encrypted": "0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.osd_id": "1",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.type": "block",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.vdo": "0"
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            },
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "type": "block",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "vg_name": "ceph_vg1"
Oct  7 11:11:25 np0005473739 elated_allen[444908]:        }
Oct  7 11:11:25 np0005473739 elated_allen[444908]:    ],
Oct  7 11:11:25 np0005473739 elated_allen[444908]:    "2": [
Oct  7 11:11:25 np0005473739 elated_allen[444908]:        {
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "devices": [
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "/dev/loop5"
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            ],
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_name": "ceph_lv2",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_size": "21470642176",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "name": "ceph_lv2",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "tags": {
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.cluster_name": "ceph",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.crush_device_class": "",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.encrypted": "0",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.osd_id": "2",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.type": "block",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:                "ceph.vdo": "0"
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            },
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "type": "block",
Oct  7 11:11:25 np0005473739 elated_allen[444908]:            "vg_name": "ceph_vg2"
Oct  7 11:11:25 np0005473739 elated_allen[444908]:        }
Oct  7 11:11:25 np0005473739 elated_allen[444908]:    ]
Oct  7 11:11:25 np0005473739 elated_allen[444908]: }
Oct  7 11:11:25 np0005473739 systemd[1]: libpod-79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6.scope: Deactivated successfully.
Oct  7 11:11:25 np0005473739 podman[444892]: 2025-10-07 15:11:25.667528305 +0000 UTC m=+1.023640806 container died 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:11:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b766a7db4082f6f8da47e9a551d4dd456c308bfd17f25950a3712d9eb358da77-merged.mount: Deactivated successfully.
Oct  7 11:11:25 np0005473739 podman[444892]: 2025-10-07 15:11:25.734771534 +0000 UTC m=+1.090884015 container remove 79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elated_allen, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 11:11:25 np0005473739 systemd[1]: libpod-conmon-79c113f0a59a3550d07cd24f3db0ae384a7fc944a8dbe9e8ffe06a1ff00b46e6.scope: Deactivated successfully.
Oct  7 11:11:26 np0005473739 podman[445070]: 2025-10-07 15:11:26.368328887 +0000 UTC m=+0.047598283 container create 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 11:11:26 np0005473739 systemd[1]: Started libpod-conmon-2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e.scope.
Oct  7 11:11:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:11:26 np0005473739 podman[445070]: 2025-10-07 15:11:26.347788637 +0000 UTC m=+0.027058123 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:11:26 np0005473739 podman[445070]: 2025-10-07 15:11:26.456216799 +0000 UTC m=+0.135486265 container init 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:11:26 np0005473739 podman[445070]: 2025-10-07 15:11:26.463902161 +0000 UTC m=+0.143171557 container start 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:11:26 np0005473739 podman[445070]: 2025-10-07 15:11:26.467994409 +0000 UTC m=+0.147263805 container attach 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 11:11:26 np0005473739 condescending_sammet[445087]: 167 167
Oct  7 11:11:26 np0005473739 systemd[1]: libpod-2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e.scope: Deactivated successfully.
Oct  7 11:11:26 np0005473739 podman[445070]: 2025-10-07 15:11:26.471969533 +0000 UTC m=+0.151239019 container died 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 11:11:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-da17b1272f1a88c292bbb6d0561d27cfbffb535a923e2aa04bcd5520f0a882a4-merged.mount: Deactivated successfully.
Oct  7 11:11:26 np0005473739 podman[445070]: 2025-10-07 15:11:26.518006284 +0000 UTC m=+0.197275680 container remove 2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=condescending_sammet, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:11:26 np0005473739 systemd[1]: libpod-conmon-2a8719fbd95e31672aeecba38658af70b839142feaed646d3d9b230bc5d7f35e.scope: Deactivated successfully.
Oct  7 11:11:26 np0005473739 podman[445111]: 2025-10-07 15:11:26.677223272 +0000 UTC m=+0.039799288 container create 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:11:26 np0005473739 systemd[1]: Started libpod-conmon-90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25.scope.
Oct  7 11:11:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:11:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb7d2705957bf233ece967742f4d534c6aecf4d28c9f2d4e4701fff93a91086/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb7d2705957bf233ece967742f4d534c6aecf4d28c9f2d4e4701fff93a91086/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb7d2705957bf233ece967742f4d534c6aecf4d28c9f2d4e4701fff93a91086/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6eb7d2705957bf233ece967742f4d534c6aecf4d28c9f2d4e4701fff93a91086/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:11:26 np0005473739 podman[445111]: 2025-10-07 15:11:26.756325363 +0000 UTC m=+0.118901319 container init 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:11:26 np0005473739 podman[445111]: 2025-10-07 15:11:26.660967705 +0000 UTC m=+0.023543681 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:11:26 np0005473739 podman[445111]: 2025-10-07 15:11:26.762749922 +0000 UTC m=+0.125325868 container start 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:11:26 np0005473739 podman[445111]: 2025-10-07 15:11:26.769744926 +0000 UTC m=+0.132320892 container attach 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 11:11:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3298: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]: {
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "osd_id": 2,
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "type": "bluestore"
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:    },
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "osd_id": 1,
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "type": "bluestore"
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:    },
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "osd_id": 0,
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:        "type": "bluestore"
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]:    }
Oct  7 11:11:27 np0005473739 interesting_swanson[445127]: }
Oct  7 11:11:27 np0005473739 systemd[1]: libpod-90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25.scope: Deactivated successfully.
Oct  7 11:11:27 np0005473739 podman[445111]: 2025-10-07 15:11:27.698670439 +0000 UTC m=+1.061246385 container died 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, ceph=True, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:11:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6eb7d2705957bf233ece967742f4d534c6aecf4d28c9f2d4e4701fff93a91086-merged.mount: Deactivated successfully.
Oct  7 11:11:27 np0005473739 podman[445111]: 2025-10-07 15:11:27.837074059 +0000 UTC m=+1.199650005 container remove 90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=interesting_swanson, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 11:11:27 np0005473739 systemd[1]: libpod-conmon-90ccda29c653156bd1f5a8c961ebdcb7ee83ca19617698ee4e33457311be6a25.scope: Deactivated successfully.
Oct  7 11:11:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:11:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:11:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:11:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:11:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev fd0a614e-7087-410d-abb3-446994592cfa does not exist
Oct  7 11:11:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 1559e2ad-7f8d-4a95-acb2-f3b784b7c680 does not exist
Oct  7 11:11:28 np0005473739 nova_compute[259550]: 2025-10-07 15:11:28.635 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:28 np0005473739 nova_compute[259550]: 2025-10-07 15:11:28.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:11:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:11:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:11:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3299: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:29 np0005473739 nova_compute[259550]: 2025-10-07 15:11:29.453 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3300: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:11:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:11:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1528135723' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:11:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:11:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1528135723' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:11:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3301: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:33 np0005473739 nova_compute[259550]: 2025-10-07 15:11:33.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:34 np0005473739 nova_compute[259550]: 2025-10-07 15:11:34.454 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3302: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:37 np0005473739 podman[445221]: 2025-10-07 15:11:37.09925518 +0000 UTC m=+0.078481066 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Oct  7 11:11:37 np0005473739 podman[445222]: 2025-10-07 15:11:37.134036005 +0000 UTC m=+0.105974119 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 11:11:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3303: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:11:38 np0005473739 nova_compute[259550]: 2025-10-07 15:11:38.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3304: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:39 np0005473739 nova_compute[259550]: 2025-10-07 15:11:39.457 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:39 np0005473739 nova_compute[259550]: 2025-10-07 15:11:39.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.010 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.011 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:11:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:11:40 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3028571804' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.487 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.477s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.683 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.684 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3579MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.685 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.685 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.792 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.793 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:11:40 np0005473739 nova_compute[259550]: 2025-10-07 15:11:40.812 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:11:41 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:11:41 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1145876012' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:11:41 np0005473739 nova_compute[259550]: 2025-10-07 15:11:41.257 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.444s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:11:41 np0005473739 nova_compute[259550]: 2025-10-07 15:11:41.265 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:11:41 np0005473739 nova_compute[259550]: 2025-10-07 15:11:41.280 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:11:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3305: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:41 np0005473739 nova_compute[259550]: 2025-10-07 15:11:41.282 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:11:41 np0005473739 nova_compute[259550]: 2025-10-07 15:11:41.283 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.598s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:11:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:11:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3306: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:43 np0005473739 nova_compute[259550]: 2025-10-07 15:11:43.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:44 np0005473739 nova_compute[259550]: 2025-10-07 15:11:44.284 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:11:44 np0005473739 nova_compute[259550]: 2025-10-07 15:11:44.284 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:11:44 np0005473739 nova_compute[259550]: 2025-10-07 15:11:44.284 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:11:44 np0005473739 nova_compute[259550]: 2025-10-07 15:11:44.312 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:11:44 np0005473739 nova_compute[259550]: 2025-10-07 15:11:44.459 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:44 np0005473739 nova_compute[259550]: 2025-10-07 15:11:44.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:11:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3307: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3308: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:11:48 np0005473739 nova_compute[259550]: 2025-10-07 15:11:48.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3309: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:49 np0005473739 nova_compute[259550]: 2025-10-07 15:11:49.460 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:51 np0005473739 podman[445310]: 2025-10-07 15:11:51.069313519 +0000 UTC m=+0.061544730 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251001, config_id=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Oct  7 11:11:51 np0005473739 podman[445311]: 2025-10-07 15:11:51.069273768 +0000 UTC m=+0.059068944 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Oct  7 11:11:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3310: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:11:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:11:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:11:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:11:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:11:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:11:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:11:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3311: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:53 np0005473739 nova_compute[259550]: 2025-10-07 15:11:53.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:53 np0005473739 systemd-logind[801]: New session 55 of user zuul.
Oct  7 11:11:53 np0005473739 systemd[1]: Started Session 55 of User zuul.
Oct  7 11:11:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:11:53.978 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=64, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=63) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 11:11:53 np0005473739 nova_compute[259550]: 2025-10-07 15:11:53.979 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:53 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:11:53.980 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 11:11:54 np0005473739 nova_compute[259550]: 2025-10-07 15:11:54.462 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:54 np0005473739 systemd[1]: Reloading.
Oct  7 11:11:54 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 11:11:54 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 11:11:55 np0005473739 systemd[1]: Starting dnf makecache...
Oct  7 11:11:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3312: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:55 np0005473739 systemd[1]: Reloading.
Oct  7 11:11:55 np0005473739 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Oct  7 11:11:55 np0005473739 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Oct  7 11:11:55 np0005473739 dnf[445518]: Metadata cache refreshed recently.
Oct  7 11:11:55 np0005473739 systemd[1]: dnf-makecache.service: Deactivated successfully.
Oct  7 11:11:55 np0005473739 systemd[1]: Finished dnf makecache.
Oct  7 11:11:55 np0005473739 systemd[1]: Starting Podman API Socket...
Oct  7 11:11:55 np0005473739 systemd[1]: Listening on Podman API Socket.
Oct  7 11:11:56 np0005473739 dbus-broker-launch[772]: avc:  op=setenforce lsm=selinux enforcing=0 res=1
Oct  7 11:11:56 np0005473739 systemd[1]: podman.socket: Deactivated successfully.
Oct  7 11:11:56 np0005473739 systemd[1]: Closed Podman API Socket.
Oct  7 11:11:56 np0005473739 systemd[1]: Stopping Podman API Socket...
Oct  7 11:11:56 np0005473739 systemd[1]: Starting Podman API Socket...
Oct  7 11:11:56 np0005473739 systemd[1]: Listening on Podman API Socket.
Oct  7 11:11:56 np0005473739 systemd-logind[801]: New session 56 of user zuul.
Oct  7 11:11:56 np0005473739 systemd[1]: Started Session 56 of User zuul.
Oct  7 11:11:56 np0005473739 systemd[1]: Starting Podman API Service...
Oct  7 11:11:56 np0005473739 systemd[1]: Started Podman API Service.
Oct  7 11:11:56 np0005473739 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="/usr/bin/podman filtering at log level info"
Oct  7 11:11:56 np0005473739 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="Setting parallel job count to 25"
Oct  7 11:11:56 np0005473739 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="Using sqlite as database backend"
Oct  7 11:11:56 np0005473739 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Oct  7 11:11:56 np0005473739 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="Using systemd socket activation to determine API endpoint"
Oct  7 11:11:56 np0005473739 podman[445584]: time="2025-10-07T15:11:56Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Oct  7 11:11:56 np0005473739 podman[445584]: @ - - [07/Oct/2025:15:11:56 +0000] "HEAD /v4.7.0/libpod/_ping HTTP/1.1" 200 0 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct  7 11:11:56 np0005473739 podman[445584]: @ - - [07/Oct/2025:15:11:56 +0000] "GET /v4.7.0/libpod/containers/json HTTP/1.1" 200 27465 "" "PodmanPy/4.7.0 (API v4.7.0; Compatible v1.40)"
Oct  7 11:11:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3313: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:11:58 np0005473739 nova_compute[259550]: 2025-10-07 15:11:58.647 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:11:58 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:11:58.982 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '64'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:11:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3314: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:11:59 np0005473739 nova_compute[259550]: 2025-10-07 15:11:59.465 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:12:00.114 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:12:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:12:00.115 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:12:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:12:00.115 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:12:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3315: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3316: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:03 np0005473739 nova_compute[259550]: 2025-10-07 15:12:03.651 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:04 np0005473739 nova_compute[259550]: 2025-10-07 15:12:04.467 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3317: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3318: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:08 np0005473739 podman[445621]: 2025-10-07 15:12:08.101362597 +0000 UTC m=+0.086609979 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 11:12:08 np0005473739 podman[445622]: 2025-10-07 15:12:08.123340515 +0000 UTC m=+0.094125736 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 11:12:08 np0005473739 nova_compute[259550]: 2025-10-07 15:12:08.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3319: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:09 np0005473739 nova_compute[259550]: 2025-10-07 15:12:09.470 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3320: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:11 np0005473739 podman[445584]: time="2025-10-07T15:12:11Z" level=info msg="Received shutdown.Stop(), terminating!" PID=445584
Oct  7 11:12:11 np0005473739 systemd[1]: podman.service: Deactivated successfully.
Oct  7 11:12:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3321: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:13 np0005473739 nova_compute[259550]: 2025-10-07 15:12:13.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:12:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.0 total, 600.0 interval#012Cumulative writes: 45K writes, 181K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.79 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 515 writes, 1459 keys, 515 commit groups, 1.0 writes per commit group, ingest: 0.82 MB, 0.00 MB/s#012Interval WAL: 515 writes, 229 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.3      0.00              0.00         1    0.005       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55daf62f31f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 4.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.0 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Oct  7 11:12:13 np0005473739 nova_compute[259550]: 2025-10-07 15:12:13.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:13 np0005473739 nova_compute[259550]: 2025-10-07 15:12:13.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 11:12:14 np0005473739 nova_compute[259550]: 2025-10-07 15:12:14.001 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 11:12:14 np0005473739 nova_compute[259550]: 2025-10-07 15:12:14.506 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3322: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:16 np0005473739 nova_compute[259550]: 2025-10-07 15:12:16.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3323: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:17 np0005473739 nova_compute[259550]: 2025-10-07 15:12:17.995 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:12:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s#012Cumulative WAL: 49K writes, 18K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 645 writes, 1636 keys, 645 commit groups, 1.0 writes per commit group, ingest: 0.83 MB, 0.00 MB/s#012Interval WAL: 645 writes, 286 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Oct  7 11:12:18 np0005473739 nova_compute[259550]: 2025-10-07 15:12:18.656 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3324: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:19 np0005473739 nova_compute[259550]: 2025-10-07 15:12:19.510 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3325: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:22 np0005473739 podman[445716]: 2025-10-07 15:12:22.097076521 +0000 UTC m=+0.078730472 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 11:12:22 np0005473739 podman[445717]: 2025-10-07 15:12:22.101703523 +0000 UTC m=+0.081399543 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:12:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6000.1 total, 600.0 interval#012Cumulative writes: 36K writes, 144K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 36K writes, 13K syncs, 2.79 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 677 writes, 1677 keys, 677 commit groups, 1.0 writes per commit group, ingest: 0.81 MB, 0.00 MB/s#012Interval WAL: 677 writes, 305 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent#012#012** Compaction Stats [default] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [default] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [default] **#012#012** Compaction Stats [m-0] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) 
Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-0] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count#012Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 
collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0#012Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)#012#012** File Read Latency Histogram By Level [m-0] **#012#012** Compaction Stats [m-1] **#012Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0#012#012** Compaction Stats [m-1] **#012Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)#012---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------#012#012Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0#012#012Uptime(secs): 6000.1 total, 4800.0 interval#012Flush(GB): cumulative 0.000, interval 0.000#012AddFile(GB): cumulative 0.000, interval 0.000#012AddFile(Total Files): cumulative 0, interval 0#012AddFile(L0 Files): cumulative 0, interval 0#012AddFile(Keys): cumulative 0, interval 0#012Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 
0.0 seconds#012Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds#012Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #162. Immutable memtables: 0.
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.448477) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 99] Flushing memtable with next log file: 162
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942448572, "job": 99, "event": "flush_started", "num_memtables": 1, "num_entries": 1134, "num_deletes": 251, "total_data_size": 1665999, "memory_usage": 1686872, "flush_reason": "Manual Compaction"}
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 99] Level-0 flush table #163: started
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942464564, "cf_name": "default", "job": 99, "event": "table_file_creation", "file_number": 163, "file_size": 979743, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 68126, "largest_seqno": 69259, "table_properties": {"data_size": 975627, "index_size": 1703, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1349, "raw_key_size": 11094, "raw_average_key_size": 20, "raw_value_size": 966559, "raw_average_value_size": 1799, "num_data_blocks": 78, "num_entries": 537, "num_filter_entries": 537, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849824, "oldest_key_time": 1759849824, "file_creation_time": 1759849942, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 163, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 99] Flush lasted 16128 microseconds, and 5519 cpu microseconds.
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.464617) [db/flush_job.cc:967] [default] [JOB 99] Level-0 flush table #163: 979743 bytes OK
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.464639) [db/memtable_list.cc:519] [default] Level-0 commit table #163 started
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.472489) [db/memtable_list.cc:722] [default] Level-0 commit table #163: memtable #1 done
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.472526) EVENT_LOG_v1 {"time_micros": 1759849942472518, "job": 99, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.472551) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 99] Try to delete WAL files size 1660786, prev total WAL file size 1660786, number of live WAL files 2.
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000159.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.473450) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6D6772737461740032373533' seq:72057594037927935, type:22 .. '6D6772737461740033303035' seq:0, type:0; will stop at (end)
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 100] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 99 Base level 0, inputs: [163(956KB)], [161(10MB)]
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942473524, "job": 100, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [163], "files_L6": [161], "score": -1, "input_data_size": 12197176, "oldest_snapshot_seqno": -1}
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 100] Generated table #164: 8659 keys, 9552040 bytes, temperature: kUnknown
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942644791, "cf_name": "default", "job": 100, "event": "table_file_creation", "file_number": 164, "file_size": 9552040, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9498690, "index_size": 30581, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 226821, "raw_average_key_size": 26, "raw_value_size": 9348726, "raw_average_value_size": 1079, "num_data_blocks": 1182, "num_entries": 8659, "num_filter_entries": 8659, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759849942, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 164, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.645256) [db/compaction/compaction_job.cc:1663] [default] [JOB 100] Compacted 1@0 + 1@6 files to L6 => 9552040 bytes
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.677553) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 71.2 rd, 55.7 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 10.7 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(22.2) write-amplify(9.7) OK, records in: 9122, records dropped: 463 output_compression: NoCompression
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.677592) EVENT_LOG_v1 {"time_micros": 1759849942677579, "job": 100, "event": "compaction_finished", "compaction_time_micros": 171425, "compaction_time_cpu_micros": 28623, "output_level": 6, "num_output_files": 1, "total_output_size": 9552040, "num_input_records": 9122, "num_output_records": 8659, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000163.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942678016, "job": 100, "event": "table_file_deletion", "file_number": 163}
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000161.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759849942680223, "job": 100, "event": "table_file_deletion", "file_number": 161}
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.473330) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.680320) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.680325) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.680327) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.680328) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:12:22 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:12:22.680329) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:12:22 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 11:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:12:22 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 11:12:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:12:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:12:22
Oct  7 11:12:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:12:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:12:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'volumes', '.mgr', 'default.rgw.control', 'default.rgw.meta', 'vms', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images']
Oct  7 11:12:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:12:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3326: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:12:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:12:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:12:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:12:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:12:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:12:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:12:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:12:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:12:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:12:23 np0005473739 nova_compute[259550]: 2025-10-07 15:12:23.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:23 np0005473739 nova_compute[259550]: 2025-10-07 15:12:23.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:23 np0005473739 nova_compute[259550]: 2025-10-07 15:12:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:12:24 np0005473739 systemd[1]: session-55.scope: Deactivated successfully.
Oct  7 11:12:24 np0005473739 systemd[1]: session-55.scope: Consumed 1.410s CPU time.
Oct  7 11:12:24 np0005473739 systemd-logind[801]: Session 55 logged out. Waiting for processes to exit.
Oct  7 11:12:24 np0005473739 systemd-logind[801]: Removed session 55.
Oct  7 11:12:24 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 11:12:24 np0005473739 nova_compute[259550]: 2025-10-07 15:12:24.552 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:24 np0005473739 nova_compute[259550]: 2025-10-07 15:12:24.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:25 np0005473739 systemd[1]: session-56.scope: Deactivated successfully.
Oct  7 11:12:25 np0005473739 systemd-logind[801]: Session 56 logged out. Waiting for processes to exit.
Oct  7 11:12:25 np0005473739 systemd-logind[801]: Removed session 56.
Oct  7 11:12:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3327: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:25 np0005473739 nova_compute[259550]: 2025-10-07 15:12:25.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:25 np0005473739 nova_compute[259550]: 2025-10-07 15:12:25.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3328: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:28 np0005473739 nova_compute[259550]: 2025-10-07 15:12:28.663 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:12:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b6e4585f-257e-4a83-a68c-23fe4045db25 does not exist
Oct  7 11:12:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cdd11784-248d-4c23-aaeb-63e76fe08c04 does not exist
Oct  7 11:12:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 4602800b-1692-497d-96c0-1261e7b064fe does not exist
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:12:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:12:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3329: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:29 np0005473739 nova_compute[259550]: 2025-10-07 15:12:29.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:29 np0005473739 podman[446030]: 2025-10-07 15:12:29.566180368 +0000 UTC m=+0.024854055 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:12:29 np0005473739 podman[446030]: 2025-10-07 15:12:29.68985753 +0000 UTC m=+0.148531187 container create 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 11:12:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:12:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:12:29 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:12:29 np0005473739 systemd[1]: Started libpod-conmon-9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae.scope.
Oct  7 11:12:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:12:29 np0005473739 nova_compute[259550]: 2025-10-07 15:12:29.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:30 np0005473739 podman[446030]: 2025-10-07 15:12:30.013739659 +0000 UTC m=+0.472413346 container init 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True)
Oct  7 11:12:30 np0005473739 podman[446030]: 2025-10-07 15:12:30.027869311 +0000 UTC m=+0.486542958 container start 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:12:30 np0005473739 cranky_babbage[446044]: 167 167
Oct  7 11:12:30 np0005473739 systemd[1]: libpod-9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae.scope: Deactivated successfully.
Oct  7 11:12:30 np0005473739 conmon[446044]: conmon 9d96c20b2d23e406d173 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae.scope/container/memory.events
Oct  7 11:12:30 np0005473739 podman[446030]: 2025-10-07 15:12:30.0939877 +0000 UTC m=+0.552661387 container attach 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:12:30 np0005473739 podman[446030]: 2025-10-07 15:12:30.094522674 +0000 UTC m=+0.553196341 container died 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:12:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0d4a1ff6f064a60bf568b9cf077049ebea338fe6bed9f01b521b4d6f9c319a3d-merged.mount: Deactivated successfully.
Oct  7 11:12:30 np0005473739 podman[446030]: 2025-10-07 15:12:30.219369038 +0000 UTC m=+0.678042695 container remove 9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=cranky_babbage, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:12:30 np0005473739 systemd[1]: libpod-conmon-9d96c20b2d23e406d173636e45d39802077f01394c27cf8a350c9342e5b2dcae.scope: Deactivated successfully.
Oct  7 11:12:30 np0005473739 podman[446070]: 2025-10-07 15:12:30.383714351 +0000 UTC m=+0.041509233 container create f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 11:12:30 np0005473739 systemd[1]: Started libpod-conmon-f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899.scope.
Oct  7 11:12:30 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:12:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:30 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:30 np0005473739 podman[446070]: 2025-10-07 15:12:30.365243384 +0000 UTC m=+0.023038286 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:12:30 np0005473739 podman[446070]: 2025-10-07 15:12:30.478512474 +0000 UTC m=+0.136307356 container init f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 11:12:30 np0005473739 podman[446070]: 2025-10-07 15:12:30.488023254 +0000 UTC m=+0.145818126 container start f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 11:12:30 np0005473739 podman[446070]: 2025-10-07 15:12:30.492815431 +0000 UTC m=+0.150610333 container attach f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:12:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3330: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:31 np0005473739 sweet_sanderson[446086]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:12:31 np0005473739 sweet_sanderson[446086]: --> relative data size: 1.0
Oct  7 11:12:31 np0005473739 sweet_sanderson[446086]: --> All data devices are unavailable
Oct  7 11:12:31 np0005473739 systemd[1]: libpod-f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899.scope: Deactivated successfully.
Oct  7 11:12:31 np0005473739 systemd[1]: libpod-f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899.scope: Consumed 1.085s CPU time.
Oct  7 11:12:31 np0005473739 podman[446070]: 2025-10-07 15:12:31.622310109 +0000 UTC m=+1.280105011 container died f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:12:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-2dbc9fb39eaa8da925d35cef447043ff171a818301af34e3489f97e5ce84b5b8-merged.mount: Deactivated successfully.
Oct  7 11:12:31 np0005473739 podman[446070]: 2025-10-07 15:12:31.704546822 +0000 UTC m=+1.362341714 container remove f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_sanderson, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:12:31 np0005473739 systemd[1]: libpod-conmon-f335c77fb73898fefdf4ce54e36d0f13be2dd8c09b001b278a7d90f5008a7899.scope: Deactivated successfully.
Oct  7 11:12:31 np0005473739 nova_compute[259550]: 2025-10-07 15:12:31.901 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:32 np0005473739 nova_compute[259550]: 2025-10-07 15:12:32.001 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:32 np0005473739 podman[446270]: 2025-10-07 15:12:32.47281187 +0000 UTC m=+0.037083037 container create 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:12:32 np0005473739 systemd[1]: Started libpod-conmon-725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3.scope.
Oct  7 11:12:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:12:32 np0005473739 podman[446270]: 2025-10-07 15:12:32.540976022 +0000 UTC m=+0.105247229 container init 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 11:12:32 np0005473739 podman[446270]: 2025-10-07 15:12:32.551167681 +0000 UTC m=+0.115438868 container start 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 11:12:32 np0005473739 podman[446270]: 2025-10-07 15:12:32.456833109 +0000 UTC m=+0.021104306 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:12:32 np0005473739 vigorous_mirzakhani[446286]: 167 167
Oct  7 11:12:32 np0005473739 systemd[1]: libpod-725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3.scope: Deactivated successfully.
Oct  7 11:12:32 np0005473739 podman[446270]: 2025-10-07 15:12:32.557311712 +0000 UTC m=+0.121582909 container attach 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 11:12:32 np0005473739 podman[446270]: 2025-10-07 15:12:32.558227926 +0000 UTC m=+0.122499143 container died 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True)
Oct  7 11:12:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-75047ea33df9ce3d036c246b2669889fc0fbbcadbf498dd6c9390ad07ac2d277-merged.mount: Deactivated successfully.
Oct  7 11:12:32 np0005473739 podman[446270]: 2025-10-07 15:12:32.598910736 +0000 UTC m=+0.163181913 container remove 725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=vigorous_mirzakhani, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 11:12:32 np0005473739 systemd[1]: libpod-conmon-725e2b396215263814d9c38a43b090754b8b6c8965cbdaf4bd039629279125d3.scope: Deactivated successfully.
Oct  7 11:12:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:12:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624438936' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:12:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:12:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2624438936' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:12:32 np0005473739 podman[446309]: 2025-10-07 15:12:32.791523592 +0000 UTC m=+0.056741823 container create 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:12:32 np0005473739 systemd[1]: Started libpod-conmon-5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02.scope.
Oct  7 11:12:32 np0005473739 podman[446309]: 2025-10-07 15:12:32.763754302 +0000 UTC m=+0.028972573 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:12:32 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:12:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70e2d48d4b0d6b9855d104843dbe382582a50af344a95c4baa63c1643c0945/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70e2d48d4b0d6b9855d104843dbe382582a50af344a95c4baa63c1643c0945/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70e2d48d4b0d6b9855d104843dbe382582a50af344a95c4baa63c1643c0945/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:32 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5c70e2d48d4b0d6b9855d104843dbe382582a50af344a95c4baa63c1643c0945/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:32 np0005473739 podman[446309]: 2025-10-07 15:12:32.889375466 +0000 UTC m=+0.154593707 container init 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 11:12:32 np0005473739 podman[446309]: 2025-10-07 15:12:32.897707415 +0000 UTC m=+0.162925606 container start 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:12:32 np0005473739 podman[446309]: 2025-10-07 15:12:32.90170732 +0000 UTC m=+0.166925561 container attach 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, io.buildah.version=1.39.3)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:12:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3331: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]: {
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:    "0": [
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:        {
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "devices": [
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "/dev/loop3"
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            ],
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_name": "ceph_lv0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_size": "21470642176",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "name": "ceph_lv0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "tags": {
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.cluster_name": "ceph",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.crush_device_class": "",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.encrypted": "0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.osd_id": "0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.type": "block",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.vdo": "0"
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            },
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "type": "block",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "vg_name": "ceph_vg0"
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:        }
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:    ],
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:    "1": [
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:        {
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "devices": [
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "/dev/loop4"
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            ],
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_name": "ceph_lv1",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_size": "21470642176",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "name": "ceph_lv1",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "tags": {
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.cluster_name": "ceph",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.crush_device_class": "",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.encrypted": "0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.osd_id": "1",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.type": "block",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.vdo": "0"
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            },
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "type": "block",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "vg_name": "ceph_vg1"
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:        }
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:    ],
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:    "2": [
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:        {
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "devices": [
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "/dev/loop5"
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            ],
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_name": "ceph_lv2",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_size": "21470642176",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "name": "ceph_lv2",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "tags": {
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.cluster_name": "ceph",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.crush_device_class": "",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.encrypted": "0",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.osd_id": "2",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.type": "block",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:                "ceph.vdo": "0"
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            },
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "type": "block",
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:            "vg_name": "ceph_vg2"
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:        }
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]:    ]
Oct  7 11:12:33 np0005473739 fervent_dhawan[446325]: }
Oct  7 11:12:33 np0005473739 nova_compute[259550]: 2025-10-07 15:12:33.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:33 np0005473739 podman[446309]: 2025-10-07 15:12:33.706763915 +0000 UTC m=+0.971982126 container died 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True)
Oct  7 11:12:33 np0005473739 systemd[1]: libpod-5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02.scope: Deactivated successfully.
Oct  7 11:12:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5c70e2d48d4b0d6b9855d104843dbe382582a50af344a95c4baa63c1643c0945-merged.mount: Deactivated successfully.
Oct  7 11:12:33 np0005473739 podman[446309]: 2025-10-07 15:12:33.773136871 +0000 UTC m=+1.038355062 container remove 5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=fervent_dhawan, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 11:12:33 np0005473739 systemd[1]: libpod-conmon-5c68cf6259710fd00f06d2034c6fca76a65c759733346102663106abc49dcd02.scope: Deactivated successfully.
Oct  7 11:12:34 np0005473739 podman[446489]: 2025-10-07 15:12:34.460695385 +0000 UTC m=+0.039217192 container create 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:12:34 np0005473739 systemd[1]: Started libpod-conmon-2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba.scope.
Oct  7 11:12:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:12:34 np0005473739 podman[446489]: 2025-10-07 15:12:34.444581392 +0000 UTC m=+0.023103219 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:12:34 np0005473739 podman[446489]: 2025-10-07 15:12:34.552662045 +0000 UTC m=+0.131183932 container init 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 11:12:34 np0005473739 nova_compute[259550]: 2025-10-07 15:12:34.556 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:34 np0005473739 podman[446489]: 2025-10-07 15:12:34.562078523 +0000 UTC m=+0.140600330 container start 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 11:12:34 np0005473739 podman[446489]: 2025-10-07 15:12:34.566190661 +0000 UTC m=+0.144712488 container attach 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:12:34 np0005473739 reverent_newton[446505]: 167 167
Oct  7 11:12:34 np0005473739 systemd[1]: libpod-2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba.scope: Deactivated successfully.
Oct  7 11:12:34 np0005473739 podman[446489]: 2025-10-07 15:12:34.570031822 +0000 UTC m=+0.148553629 container died 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 11:12:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-67e343dce7938a745ec27efa8d67e94a11ddfd2f501cde5034dcf5cb67ded4b9-merged.mount: Deactivated successfully.
Oct  7 11:12:34 np0005473739 podman[446489]: 2025-10-07 15:12:34.609785567 +0000 UTC m=+0.188307374 container remove 2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=reverent_newton, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 11:12:34 np0005473739 systemd[1]: libpod-conmon-2cdb93597eeeb03096b545d7de2985ec67ecdd2cd69c13e1c90ca436f85eacba.scope: Deactivated successfully.
Oct  7 11:12:34 np0005473739 podman[446529]: 2025-10-07 15:12:34.808211217 +0000 UTC m=+0.051500266 container create a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 11:12:34 np0005473739 systemd[1]: Started libpod-conmon-a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3.scope.
Oct  7 11:12:34 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:12:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444341dcf4d0088643e764f543db1e8d64e7a421834e9c327e56fcd90c7e87cd/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444341dcf4d0088643e764f543db1e8d64e7a421834e9c327e56fcd90c7e87cd/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444341dcf4d0088643e764f543db1e8d64e7a421834e9c327e56fcd90c7e87cd/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:34 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/444341dcf4d0088643e764f543db1e8d64e7a421834e9c327e56fcd90c7e87cd/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:12:34 np0005473739 podman[446529]: 2025-10-07 15:12:34.787804339 +0000 UTC m=+0.031093388 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:12:34 np0005473739 podman[446529]: 2025-10-07 15:12:34.914143582 +0000 UTC m=+0.157432631 container init a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 11:12:34 np0005473739 podman[446529]: 2025-10-07 15:12:34.920634053 +0000 UTC m=+0.163923082 container start a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 11:12:34 np0005473739 podman[446529]: 2025-10-07 15:12:34.923795396 +0000 UTC m=+0.167084435 container attach a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 11:12:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3332: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:36 np0005473739 loving_einstein[446545]: {
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "osd_id": 2,
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "type": "bluestore"
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:    },
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "osd_id": 1,
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "type": "bluestore"
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:    },
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "osd_id": 0,
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:        "type": "bluestore"
Oct  7 11:12:36 np0005473739 loving_einstein[446545]:    }
Oct  7 11:12:36 np0005473739 loving_einstein[446545]: }
Oct  7 11:12:36 np0005473739 systemd[1]: libpod-a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3.scope: Deactivated successfully.
Oct  7 11:12:36 np0005473739 systemd[1]: libpod-a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3.scope: Consumed 1.133s CPU time.
Oct  7 11:12:36 np0005473739 conmon[446545]: conmon a3c3653222408e514d9e <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3.scope/container/memory.events
Oct  7 11:12:36 np0005473739 podman[446578]: 2025-10-07 15:12:36.09757023 +0000 UTC m=+0.035749211 container died a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:12:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-444341dcf4d0088643e764f543db1e8d64e7a421834e9c327e56fcd90c7e87cd-merged.mount: Deactivated successfully.
Oct  7 11:12:36 np0005473739 podman[446578]: 2025-10-07 15:12:36.979475806 +0000 UTC m=+0.917654717 container remove a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=loving_einstein, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True)
Oct  7 11:12:36 np0005473739 systemd[1]: libpod-conmon-a3c3653222408e514d9e7d82b300dbd703387589f6c96cfc54770152e4f4f0f3.scope: Deactivated successfully.
Oct  7 11:12:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:12:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:12:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:12:37 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:12:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 275c5d69-2f59-4030-810a-39fe2617190f does not exist
Oct  7 11:12:37 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d5cfb0ce-2bcb-4c9a-a50d-81ecf9ec5870 does not exist
Oct  7 11:12:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3333: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:37 np0005473739 nova_compute[259550]: 2025-10-07 15:12:37.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:37 np0005473739 nova_compute[259550]: 2025-10-07 15:12:37.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 11:12:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:12:38 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:12:38 np0005473739 nova_compute[259550]: 2025-10-07 15:12:38.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:39 np0005473739 podman[446643]: 2025-10-07 15:12:39.121155748 +0000 UTC m=+0.105548337 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  7 11:12:39 np0005473739 podman[446644]: 2025-10-07 15:12:39.136673096 +0000 UTC m=+0.118610851 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251001, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 11:12:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3334: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:39 np0005473739 nova_compute[259550]: 2025-10-07 15:12:39.585 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3335: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.025 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.057 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.058 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.058 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.059 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.059 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:12:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:12:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3266480678' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.516 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.457s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.704 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.707 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3606MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.707 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.708 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.787 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.787 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:12:42 np0005473739 nova_compute[259550]: 2025-10-07 15:12:42.810 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:12:43 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:12:43 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/337520255' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:12:43 np0005473739 nova_compute[259550]: 2025-10-07 15:12:43.286 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.476s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:12:43 np0005473739 nova_compute[259550]: 2025-10-07 15:12:43.293 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:12:43 np0005473739 nova_compute[259550]: 2025-10-07 15:12:43.307 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:12:43 np0005473739 nova_compute[259550]: 2025-10-07 15:12:43.309 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:12:43 np0005473739 nova_compute[259550]: 2025-10-07 15:12:43.309 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:12:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3336: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:43 np0005473739 nova_compute[259550]: 2025-10-07 15:12:43.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:44 np0005473739 nova_compute[259550]: 2025-10-07 15:12:44.587 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3337: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:45 np0005473739 nova_compute[259550]: 2025-10-07 15:12:45.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:45 np0005473739 nova_compute[259550]: 2025-10-07 15:12:45.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:12:45 np0005473739 nova_compute[259550]: 2025-10-07 15:12:45.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.019 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.020 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.021 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.034 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.042 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.042 2 WARNING nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.042 2 WARNING nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.043 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Removable base files: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122 /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.043 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/df58437405374f3849a6707234f9941cabaf3122#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.043 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c0abcd1b206cdc7a9d122dd8d1b88f4a5695e2f2#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.043 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.043 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.043 2 DEBUG nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284#033[00m
Oct  7 11:12:46 np0005473739 nova_compute[259550]: 2025-10-07 15:12:46.044 2 INFO nova.virt.libvirt.imagecache [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66#033[00m
Oct  7 11:12:47 np0005473739 nova_compute[259550]: 2025-10-07 15:12:47.006 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:12:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3338: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:48 np0005473739 nova_compute[259550]: 2025-10-07 15:12:48.680 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3339: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:49 np0005473739 nova_compute[259550]: 2025-10-07 15:12:49.589 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3340: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:12:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:12:53 np0005473739 podman[446733]: 2025-10-07 15:12:53.079486737 +0000 UTC m=+0.067624839 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:12:53 np0005473739 podman[446734]: 2025-10-07 15:12:53.099480274 +0000 UTC m=+0.074924833 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, container_name=iscsid, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:12:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3341: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:53 np0005473739 nova_compute[259550]: 2025-10-07 15:12:53.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:54 np0005473739 nova_compute[259550]: 2025-10-07 15:12:54.591 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3342: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3343: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:12:58 np0005473739 nova_compute[259550]: 2025-10-07 15:12:58.690 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:12:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3344: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:12:59 np0005473739 nova_compute[259550]: 2025-10-07 15:12:59.593 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:13:00.115 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:13:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:13:00.115 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:13:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:13:00.115 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:13:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3345: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3346: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:03 np0005473739 nova_compute[259550]: 2025-10-07 15:13:03.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:04 np0005473739 nova_compute[259550]: 2025-10-07 15:13:04.597 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3347: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3348: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:08 np0005473739 nova_compute[259550]: 2025-10-07 15:13:08.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3349: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:09 np0005473739 nova_compute[259550]: 2025-10-07 15:13:09.599 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:10 np0005473739 podman[446776]: 2025-10-07 15:13:10.075138616 +0000 UTC m=+0.058669495 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 11:13:10 np0005473739 podman[446777]: 2025-10-07 15:13:10.148248048 +0000 UTC m=+0.119970266 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001)
Oct  7 11:13:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3350: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3351: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:13 np0005473739 nova_compute[259550]: 2025-10-07 15:13:13.701 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:14 np0005473739 nova_compute[259550]: 2025-10-07 15:13:14.602 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3352: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3353: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:17 np0005473739 nova_compute[259550]: 2025-10-07 15:13:17.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:13:18 np0005473739 nova_compute[259550]: 2025-10-07 15:13:18.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3354: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:19 np0005473739 nova_compute[259550]: 2025-10-07 15:13:19.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3355: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:13:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:13:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:13:22
Oct  7 11:13:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:13:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:13:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.meta', '.rgw.root', 'default.rgw.log', 'default.rgw.meta', 'cephfs.cephfs.data', 'images', 'volumes', 'vms', '.mgr', 'default.rgw.control']
Oct  7 11:13:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:13:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3356: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:13:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:13:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:13:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:13:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:13:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:13:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:13:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:13:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:13:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:13:23 np0005473739 nova_compute[259550]: 2025-10-07 15:13:23.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:23 np0005473739 nova_compute[259550]: 2025-10-07 15:13:23.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:13:23 np0005473739 nova_compute[259550]: 2025-10-07 15:13:23.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:13:24 np0005473739 podman[446824]: 2025-10-07 15:13:24.076804395 +0000 UTC m=+0.065997627 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Oct  7 11:13:24 np0005473739 podman[446825]: 2025-10-07 15:13:24.098343281 +0000 UTC m=+0.085068608 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:13:24 np0005473739 nova_compute[259550]: 2025-10-07 15:13:24.607 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:24 np0005473739 nova_compute[259550]: 2025-10-07 15:13:24.979 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:13:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3357: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:25 np0005473739 nova_compute[259550]: 2025-10-07 15:13:25.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:13:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3358: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:27 np0005473739 nova_compute[259550]: 2025-10-07 15:13:27.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:13:28 np0005473739 nova_compute[259550]: 2025-10-07 15:13:28.713 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3359: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:29 np0005473739 nova_compute[259550]: 2025-10-07 15:13:29.608 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:29 np0005473739 nova_compute[259550]: 2025-10-07 15:13:29.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:13:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3360: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct  7 11:13:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:13:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/965044073' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:13:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:13:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/965044073' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:13:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3361: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct  7 11:13:33 np0005473739 nova_compute[259550]: 2025-10-07 15:13:33.718 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:34 np0005473739 nova_compute[259550]: 2025-10-07 15:13:34.612 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3362: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Oct  7 11:13:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3363: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  7 11:13:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:13:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 27a4af63-00fe-4fa6-8b8e-b5ccbcc9eda0 does not exist
Oct  7 11:13:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev bae9f2e6-18fc-4994-82fd-96b1dd2a15ed does not exist
Oct  7 11:13:38 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 47900388-6140-401a-a348-aa847bfba54b does not exist
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:13:38 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:13:38 np0005473739 nova_compute[259550]: 2025-10-07 15:13:38.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:39 np0005473739 podman[447137]: 2025-10-07 15:13:39.110375095 +0000 UTC m=+0.051674030 container create d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:13:39 np0005473739 systemd[1]: Started libpod-conmon-d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f.scope.
Oct  7 11:13:39 np0005473739 podman[447137]: 2025-10-07 15:13:39.083637342 +0000 UTC m=+0.024936357 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:13:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:13:39 np0005473739 podman[447137]: 2025-10-07 15:13:39.210759275 +0000 UTC m=+0.152058230 container init d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:13:39 np0005473739 podman[447137]: 2025-10-07 15:13:39.222893894 +0000 UTC m=+0.164192859 container start d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:13:39 np0005473739 podman[447137]: 2025-10-07 15:13:39.228114171 +0000 UTC m=+0.169413146 container attach d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 11:13:39 np0005473739 nostalgic_lalande[447153]: 167 167
Oct  7 11:13:39 np0005473739 systemd[1]: libpod-d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f.scope: Deactivated successfully.
Oct  7 11:13:39 np0005473739 podman[447137]: 2025-10-07 15:13:39.232113487 +0000 UTC m=+0.173412412 container died d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3)
Oct  7 11:13:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5827ba1752a225ba69277638b5d59b7b9336f48514004fe1e6e4a4698c0d916a-merged.mount: Deactivated successfully.
Oct  7 11:13:39 np0005473739 podman[447137]: 2025-10-07 15:13:39.299365275 +0000 UTC m=+0.240664200 container remove d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nostalgic_lalande, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:13:39 np0005473739 systemd[1]: libpod-conmon-d9a57ccb2f90323890fcd16555697375be11b0c9349f278f5fd690cf52d67c9f.scope: Deactivated successfully.
Oct  7 11:13:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3364: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  7 11:13:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:13:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:13:39 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:13:39 np0005473739 podman[447179]: 2025-10-07 15:13:39.505153408 +0000 UTC m=+0.055214853 container create e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:13:39 np0005473739 systemd[1]: Started libpod-conmon-e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e.scope.
Oct  7 11:13:39 np0005473739 podman[447179]: 2025-10-07 15:13:39.479495733 +0000 UTC m=+0.029557168 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:13:39 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:13:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:39 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:39 np0005473739 nova_compute[259550]: 2025-10-07 15:13:39.614 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:39 np0005473739 podman[447179]: 2025-10-07 15:13:39.624351342 +0000 UTC m=+0.174412797 container init e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 11:13:39 np0005473739 podman[447179]: 2025-10-07 15:13:39.636843411 +0000 UTC m=+0.186904866 container start e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 11:13:39 np0005473739 podman[447179]: 2025-10-07 15:13:39.64326052 +0000 UTC m=+0.193322015 container attach e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 11:13:40 np0005473739 magical_archimedes[447195]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:13:40 np0005473739 magical_archimedes[447195]: --> relative data size: 1.0
Oct  7 11:13:40 np0005473739 magical_archimedes[447195]: --> All data devices are unavailable
Oct  7 11:13:40 np0005473739 systemd[1]: libpod-e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e.scope: Deactivated successfully.
Oct  7 11:13:40 np0005473739 systemd[1]: libpod-e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e.scope: Consumed 1.103s CPU time.
Oct  7 11:13:40 np0005473739 podman[447179]: 2025-10-07 15:13:40.781980461 +0000 UTC m=+1.332041886 container died e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 11:13:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-86598f04b766773d6d82e2ce28a01176d3fec428f86dcefd7073ed57769d28c2-merged.mount: Deactivated successfully.
Oct  7 11:13:40 np0005473739 podman[447225]: 2025-10-07 15:13:40.896957575 +0000 UTC m=+0.081313040 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251001)
Oct  7 11:13:40 np0005473739 podman[447179]: 2025-10-07 15:13:40.918194633 +0000 UTC m=+1.468256058 container remove e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=magical_archimedes, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 11:13:40 np0005473739 systemd[1]: libpod-conmon-e61c7096de7fa8a65418f48c6bd34c689c0a788ec7f687831e78ea7a9cdab11e.scope: Deactivated successfully.
Oct  7 11:13:40 np0005473739 podman[447227]: 2025-10-07 15:13:40.947973837 +0000 UTC m=+0.131131130 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team)
Oct  7 11:13:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3365: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  7 11:13:41 np0005473739 podman[447425]: 2025-10-07 15:13:41.688640299 +0000 UTC m=+0.095765561 container create af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:13:41 np0005473739 podman[447425]: 2025-10-07 15:13:41.622056317 +0000 UTC m=+0.029181569 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:13:41 np0005473739 systemd[1]: Started libpod-conmon-af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e.scope.
Oct  7 11:13:41 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:13:41 np0005473739 podman[447425]: 2025-10-07 15:13:41.817329373 +0000 UTC m=+0.224454685 container init af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 11:13:41 np0005473739 podman[447425]: 2025-10-07 15:13:41.82480761 +0000 UTC m=+0.231932832 container start af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 11:13:41 np0005473739 podman[447425]: 2025-10-07 15:13:41.829259097 +0000 UTC m=+0.236384429 container attach af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 11:13:41 np0005473739 upbeat_feynman[447441]: 167 167
Oct  7 11:13:41 np0005473739 systemd[1]: libpod-af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e.scope: Deactivated successfully.
Oct  7 11:13:41 np0005473739 podman[447425]: 2025-10-07 15:13:41.835923423 +0000 UTC m=+0.243048645 container died af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, ceph=True, OSD_FLAVOR=default)
Oct  7 11:13:41 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7ff6a829b4cc89b6ea19893e3b071e1c25fef49f5aa471aed1983f171d460e32-merged.mount: Deactivated successfully.
Oct  7 11:13:41 np0005473739 podman[447425]: 2025-10-07 15:13:41.882696312 +0000 UTC m=+0.289821544 container remove af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=upbeat_feynman, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  7 11:13:41 np0005473739 systemd[1]: libpod-conmon-af6ba029085d59400ffb8601d37e29b1a67c0ed8f55d7afd1c71c3f2d170321e.scope: Deactivated successfully.
Oct  7 11:13:42 np0005473739 podman[447464]: 2025-10-07 15:13:42.123441414 +0000 UTC m=+0.066948862 container create 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True)
Oct  7 11:13:42 np0005473739 systemd[1]: Started libpod-conmon-0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369.scope.
Oct  7 11:13:42 np0005473739 podman[447464]: 2025-10-07 15:13:42.095613292 +0000 UTC m=+0.039120790 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:13:42 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:13:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424fa7c3d40d8a4f0fa425124cfa0bbd7dedc4feceb008f36b7a1249f0c3d81d/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424fa7c3d40d8a4f0fa425124cfa0bbd7dedc4feceb008f36b7a1249f0c3d81d/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424fa7c3d40d8a4f0fa425124cfa0bbd7dedc4feceb008f36b7a1249f0c3d81d/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:42 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/424fa7c3d40d8a4f0fa425124cfa0bbd7dedc4feceb008f36b7a1249f0c3d81d/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:42 np0005473739 podman[447464]: 2025-10-07 15:13:42.221177375 +0000 UTC m=+0.164684883 container init 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 11:13:42 np0005473739 podman[447464]: 2025-10-07 15:13:42.22820845 +0000 UTC m=+0.171715888 container start 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:13:42 np0005473739 podman[447464]: 2025-10-07 15:13:42.233037057 +0000 UTC m=+0.176544465 container attach 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507)
Oct  7 11:13:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:42 np0005473739 boring_easley[447481]: {
Oct  7 11:13:42 np0005473739 boring_easley[447481]:    "0": [
Oct  7 11:13:42 np0005473739 boring_easley[447481]:        {
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "devices": [
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "/dev/loop3"
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            ],
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_name": "ceph_lv0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_size": "21470642176",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "name": "ceph_lv0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "tags": {
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.cluster_name": "ceph",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.crush_device_class": "",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.encrypted": "0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.osd_id": "0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.type": "block",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.vdo": "0"
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            },
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "type": "block",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "vg_name": "ceph_vg0"
Oct  7 11:13:42 np0005473739 boring_easley[447481]:        }
Oct  7 11:13:42 np0005473739 boring_easley[447481]:    ],
Oct  7 11:13:42 np0005473739 boring_easley[447481]:    "1": [
Oct  7 11:13:42 np0005473739 boring_easley[447481]:        {
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "devices": [
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "/dev/loop4"
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            ],
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_name": "ceph_lv1",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_size": "21470642176",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "name": "ceph_lv1",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "tags": {
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.cluster_name": "ceph",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.crush_device_class": "",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.encrypted": "0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.osd_id": "1",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.type": "block",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.vdo": "0"
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            },
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "type": "block",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "vg_name": "ceph_vg1"
Oct  7 11:13:42 np0005473739 boring_easley[447481]:        }
Oct  7 11:13:42 np0005473739 boring_easley[447481]:    ],
Oct  7 11:13:42 np0005473739 boring_easley[447481]:    "2": [
Oct  7 11:13:42 np0005473739 boring_easley[447481]:        {
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "devices": [
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "/dev/loop5"
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            ],
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_name": "ceph_lv2",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_size": "21470642176",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "name": "ceph_lv2",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "tags": {
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.cluster_name": "ceph",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.crush_device_class": "",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.encrypted": "0",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.osd_id": "2",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.type": "block",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:                "ceph.vdo": "0"
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            },
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "type": "block",
Oct  7 11:13:42 np0005473739 boring_easley[447481]:            "vg_name": "ceph_vg2"
Oct  7 11:13:42 np0005473739 boring_easley[447481]:        }
Oct  7 11:13:42 np0005473739 boring_easley[447481]:    ]
Oct  7 11:13:42 np0005473739 boring_easley[447481]: }
Oct  7 11:13:43 np0005473739 systemd[1]: libpod-0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369.scope: Deactivated successfully.
Oct  7 11:13:43 np0005473739 podman[447464]: 2025-10-07 15:13:43.015455066 +0000 UTC m=+0.958962474 container died 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:13:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-424fa7c3d40d8a4f0fa425124cfa0bbd7dedc4feceb008f36b7a1249f0c3d81d-merged.mount: Deactivated successfully.
Oct  7 11:13:43 np0005473739 podman[447464]: 2025-10-07 15:13:43.071657784 +0000 UTC m=+1.015165182 container remove 0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=boring_easley, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:13:43 np0005473739 systemd[1]: libpod-conmon-0d9b6671f6f632cc5704da45d3813590c6585fb83a3d963a2db6094fa1536369.scope: Deactivated successfully.
Oct  7 11:13:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3366: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct  7 11:13:43 np0005473739 nova_compute[259550]: 2025-10-07 15:13:43.725 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:43 np0005473739 podman[447642]: 2025-10-07 15:13:43.742259463 +0000 UTC m=+0.040939678 container create 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507)
Oct  7 11:13:43 np0005473739 systemd[1]: Started libpod-conmon-0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26.scope.
Oct  7 11:13:43 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:13:43 np0005473739 podman[447642]: 2025-10-07 15:13:43.81627565 +0000 UTC m=+0.114955855 container init 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef)
Oct  7 11:13:43 np0005473739 podman[447642]: 2025-10-07 15:13:43.7227604 +0000 UTC m=+0.021440595 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:13:43 np0005473739 podman[447642]: 2025-10-07 15:13:43.823992453 +0000 UTC m=+0.122672628 container start 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:13:43 np0005473739 podman[447642]: 2025-10-07 15:13:43.827663249 +0000 UTC m=+0.126343444 container attach 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:13:43 np0005473739 angry_noether[447658]: 167 167
Oct  7 11:13:43 np0005473739 systemd[1]: libpod-0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26.scope: Deactivated successfully.
Oct  7 11:13:43 np0005473739 podman[447642]: 2025-10-07 15:13:43.833351669 +0000 UTC m=+0.132031874 container died 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, ceph=True, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:13:43 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b1b1952cc0103757621f89e77937f113737cd21fc52bc1845e70c49aa0b01052-merged.mount: Deactivated successfully.
Oct  7 11:13:43 np0005473739 podman[447642]: 2025-10-07 15:13:43.873342281 +0000 UTC m=+0.172022456 container remove 0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=angry_noether, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 11:13:43 np0005473739 systemd[1]: libpod-conmon-0668a6167e9e5ca7199ea9515527abd482219726ec9773bf9044ab2ea7abab26.scope: Deactivated successfully.
Oct  7 11:13:43 np0005473739 nova_compute[259550]: 2025-10-07 15:13:43.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.003 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.004 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.005 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.005 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.005 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:13:44 np0005473739 podman[447683]: 2025-10-07 15:13:44.048014696 +0000 UTC m=+0.050340765 container create c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 11:13:44 np0005473739 systemd[1]: Started libpod-conmon-c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c.scope.
Oct  7 11:13:44 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:13:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99259b08f96986b2c2090542e984f8bddca1c89083edf97aba55bbcfcc0bc86/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99259b08f96986b2c2090542e984f8bddca1c89083edf97aba55bbcfcc0bc86/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99259b08f96986b2c2090542e984f8bddca1c89083edf97aba55bbcfcc0bc86/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:44 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c99259b08f96986b2c2090542e984f8bddca1c89083edf97aba55bbcfcc0bc86/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:13:44 np0005473739 podman[447683]: 2025-10-07 15:13:44.031719377 +0000 UTC m=+0.034045456 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:13:44 np0005473739 podman[447683]: 2025-10-07 15:13:44.14360276 +0000 UTC m=+0.145928829 container init c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:13:44 np0005473739 podman[447683]: 2025-10-07 15:13:44.151717633 +0000 UTC m=+0.154043702 container start c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:13:44 np0005473739 podman[447683]: 2025-10-07 15:13:44.155875043 +0000 UTC m=+0.158201102 container attach c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 11:13:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:13:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1751947822' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.470 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.465s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.615 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.683 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.684 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3570MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.685 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.685 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.753 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.753 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:13:44 np0005473739 nova_compute[259550]: 2025-10-07 15:13:44.772 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]: {
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "osd_id": 2,
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "type": "bluestore"
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:    },
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "osd_id": 1,
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "type": "bluestore"
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:    },
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "osd_id": 0,
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:        "type": "bluestore"
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]:    }
Oct  7 11:13:45 np0005473739 pedantic_wiles[447701]: }
Oct  7 11:13:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:13:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2691601116' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:13:45 np0005473739 nova_compute[259550]: 2025-10-07 15:13:45.233 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:13:45 np0005473739 nova_compute[259550]: 2025-10-07 15:13:45.240 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:13:45 np0005473739 systemd[1]: libpod-c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c.scope: Deactivated successfully.
Oct  7 11:13:45 np0005473739 systemd[1]: libpod-c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c.scope: Consumed 1.078s CPU time.
Oct  7 11:13:45 np0005473739 nova_compute[259550]: 2025-10-07 15:13:45.261 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:13:45 np0005473739 nova_compute[259550]: 2025-10-07 15:13:45.263 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:13:45 np0005473739 nova_compute[259550]: 2025-10-07 15:13:45.263 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.578s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:13:45 np0005473739 podman[447777]: 2025-10-07 15:13:45.299327418 +0000 UTC m=+0.036507511 container died c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 11:13:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3367: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct  7 11:13:45 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c99259b08f96986b2c2090542e984f8bddca1c89083edf97aba55bbcfcc0bc86-merged.mount: Deactivated successfully.
Oct  7 11:13:45 np0005473739 podman[447777]: 2025-10-07 15:13:45.4244649 +0000 UTC m=+0.161644943 container remove c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_wiles, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:13:45 np0005473739 systemd[1]: libpod-conmon-c31e488098a5079bd9c363a9849f5e3b2f54c0a06745b6ea7780dbd1ac2c411c.scope: Deactivated successfully.
Oct  7 11:13:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:13:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:13:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:13:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:13:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 9ea6d768-0976-4a38-8f17-be34405c2e8b does not exist
Oct  7 11:13:45 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 04f05371-d51d-4f3a-9ef8-8c8d43ef1f82 does not exist
Oct  7 11:13:45 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:13:45 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:13:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3368: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Oct  7 11:13:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:48 np0005473739 nova_compute[259550]: 2025-10-07 15:13:48.264 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:13:48 np0005473739 nova_compute[259550]: 2025-10-07 15:13:48.265 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:13:48 np0005473739 nova_compute[259550]: 2025-10-07 15:13:48.265 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:13:48 np0005473739 nova_compute[259550]: 2025-10-07 15:13:48.286 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:13:48 np0005473739 nova_compute[259550]: 2025-10-07 15:13:48.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:48 np0005473739 nova_compute[259550]: 2025-10-07 15:13:48.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:13:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3369: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:49 np0005473739 nova_compute[259550]: 2025-10-07 15:13:49.619 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3370: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:13:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:13:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3371: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:53 np0005473739 nova_compute[259550]: 2025-10-07 15:13:53.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:54 np0005473739 nova_compute[259550]: 2025-10-07 15:13:54.621 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:55 np0005473739 podman[447842]: 2025-10-07 15:13:55.106022127 +0000 UTC m=+0.090148112 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 11:13:55 np0005473739 podman[447843]: 2025-10-07 15:13:55.112279262 +0000 UTC m=+0.088584381 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']})
Oct  7 11:13:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3372: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3373: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:13:58 np0005473739 nova_compute[259550]: 2025-10-07 15:13:58.737 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:13:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3374: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:13:59 np0005473739 nova_compute[259550]: 2025-10-07 15:13:59.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:14:00.116 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:14:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:14:00.116 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:14:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:14:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:14:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3375: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3376: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:03 np0005473739 nova_compute[259550]: 2025-10-07 15:14:03.741 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:04 np0005473739 nova_compute[259550]: 2025-10-07 15:14:04.626 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3377: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3378: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:08 np0005473739 nova_compute[259550]: 2025-10-07 15:14:08.745 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3379: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:09 np0005473739 nova_compute[259550]: 2025-10-07 15:14:09.627 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:11 np0005473739 podman[447884]: 2025-10-07 15:14:11.079771487 +0000 UTC m=+0.067211339 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:14:11 np0005473739 podman[447885]: 2025-10-07 15:14:11.12208036 +0000 UTC m=+0.108164716 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible)
Oct  7 11:14:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3380: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3381: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:13 np0005473739 nova_compute[259550]: 2025-10-07 15:14:13.749 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:14 np0005473739 nova_compute[259550]: 2025-10-07 15:14:14.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3382: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3383: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #165. Immutable memtables: 0.
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.712630) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 101] Flushing memtable with next log file: 165
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057712675, "job": 101, "event": "flush_started", "num_memtables": 1, "num_entries": 1159, "num_deletes": 251, "total_data_size": 1740474, "memory_usage": 1770592, "flush_reason": "Manual Compaction"}
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 101] Level-0 flush table #166: started
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057729296, "cf_name": "default", "job": 101, "event": "table_file_creation", "file_number": 166, "file_size": 1713048, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 69260, "largest_seqno": 70418, "table_properties": {"data_size": 1707407, "index_size": 3035, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1541, "raw_key_size": 11816, "raw_average_key_size": 19, "raw_value_size": 1696209, "raw_average_value_size": 2836, "num_data_blocks": 136, "num_entries": 598, "num_filter_entries": 598, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759849943, "oldest_key_time": 1759849943, "file_creation_time": 1759850057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 166, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 101] Flush lasted 16715 microseconds, and 4844 cpu microseconds.
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.729340) [db/flush_job.cc:967] [default] [JOB 101] Level-0 flush table #166: 1713048 bytes OK
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.729364) [db/memtable_list.cc:519] [default] Level-0 commit table #166 started
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.736345) [db/memtable_list.cc:722] [default] Level-0 commit table #166: memtable #1 done
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.736363) EVENT_LOG_v1 {"time_micros": 1759850057736357, "job": 101, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.736380) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 101] Try to delete WAL files size 1735158, prev total WAL file size 1735158, number of live WAL files 2.
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000162.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.737231) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730036373737' seq:72057594037927935, type:22 .. '7061786F730037303239' seq:0, type:0; will stop at (end)
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 102] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 101 Base level 0, inputs: [166(1672KB)], [164(9328KB)]
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057737271, "job": 102, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [166], "files_L6": [164], "score": -1, "input_data_size": 11265088, "oldest_snapshot_seqno": -1}
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 102] Generated table #167: 8743 keys, 9492209 bytes, temperature: kUnknown
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057810171, "cf_name": "default", "job": 102, "event": "table_file_creation", "file_number": 167, "file_size": 9492209, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 9438392, "index_size": 30856, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21893, "raw_key_size": 229210, "raw_average_key_size": 26, "raw_value_size": 9286884, "raw_average_value_size": 1062, "num_data_blocks": 1187, "num_entries": 8743, "num_filter_entries": 8743, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759850057, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 167, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.810574) [db/compaction/compaction_job.cc:1663] [default] [JOB 102] Compacted 1@0 + 1@6 files to L6 => 9492209 bytes
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.816332) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 154.3 rd, 130.0 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(1.6, 9.1 +0.0 blob) out(9.1 +0.0 blob), read-write-amplify(12.1) write-amplify(5.5) OK, records in: 9257, records dropped: 514 output_compression: NoCompression
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.816376) EVENT_LOG_v1 {"time_micros": 1759850057816362, "job": 102, "event": "compaction_finished", "compaction_time_micros": 73031, "compaction_time_cpu_micros": 25310, "output_level": 6, "num_output_files": 1, "total_output_size": 9492209, "num_input_records": 9257, "num_output_records": 8743, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000166.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057817154, "job": 102, "event": "table_file_deletion", "file_number": 166}
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000164.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850057820606, "job": 102, "event": "table_file_deletion", "file_number": 164}
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.737141) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.820710) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.820717) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.820720) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.820723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:14:17 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:14:17.820726) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:14:17 np0005473739 nova_compute[259550]: 2025-10-07 15:14:17.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:14:18 np0005473739 nova_compute[259550]: 2025-10-07 15:14:18.754 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3384: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:19 np0005473739 nova_compute[259550]: 2025-10-07 15:14:19.632 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3385: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:14:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:14:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:14:22
Oct  7 11:14:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:14:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:14:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', 'backups', 'vms', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', 'default.rgw.meta', 'volumes', 'images', '.mgr', '.rgw.root']
Oct  7 11:14:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:14:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:14:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:14:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:14:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:14:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:14:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:14:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:14:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:14:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:14:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:14:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3386: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:23 np0005473739 nova_compute[259550]: 2025-10-07 15:14:23.759 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:24 np0005473739 nova_compute[259550]: 2025-10-07 15:14:24.634 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:24 np0005473739 nova_compute[259550]: 2025-10-07 15:14:24.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:14:24 np0005473739 nova_compute[259550]: 2025-10-07 15:14:24.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:14:24 np0005473739 nova_compute[259550]: 2025-10-07 15:14:24.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:14:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3387: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:26 np0005473739 podman[447928]: 2025-10-07 15:14:26.067886694 +0000 UTC m=+0.053277872 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, config_id=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3)
Oct  7 11:14:26 np0005473739 podman[447927]: 2025-10-07 15:14:26.068302185 +0000 UTC m=+0.057118064 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Oct  7 11:14:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3388: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:27 np0005473739 nova_compute[259550]: 2025-10-07 15:14:27.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:14:28 np0005473739 nova_compute[259550]: 2025-10-07 15:14:28.762 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:28 np0005473739 nova_compute[259550]: 2025-10-07 15:14:28.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:14:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3389: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:29 np0005473739 nova_compute[259550]: 2025-10-07 15:14:29.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:29 np0005473739 nova_compute[259550]: 2025-10-07 15:14:29.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:14:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3390: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:14:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3995275996' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:14:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:14:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/3995275996' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:14:32 np0005473739 nova_compute[259550]: 2025-10-07 15:14:32.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.0006659854830044931 of space, bias 1.0, pg target 0.19979564490134794 quantized to 32 (current 32)
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:14:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3391: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:33 np0005473739 nova_compute[259550]: 2025-10-07 15:14:33.766 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:34 np0005473739 nova_compute[259550]: 2025-10-07 15:14:34.637 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3392: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3393: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:38 np0005473739 nova_compute[259550]: 2025-10-07 15:14:38.770 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3394: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:39 np0005473739 nova_compute[259550]: 2025-10-07 15:14:39.640 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3395: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:42 np0005473739 podman[447969]: 2025-10-07 15:14:42.105056432 +0000 UTC m=+0.076571685 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Oct  7 11:14:42 np0005473739 podman[447970]: 2025-10-07 15:14:42.171631343 +0000 UTC m=+0.136747757 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller)
Oct  7 11:14:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3396: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:43 np0005473739 nova_compute[259550]: 2025-10-07 15:14:43.773 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:44 np0005473739 nova_compute[259550]: 2025-10-07 15:14:44.642 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:44 np0005473739 nova_compute[259550]: 2025-10-07 15:14:44.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:14:45 np0005473739 nova_compute[259550]: 2025-10-07 15:14:45.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:14:45 np0005473739 nova_compute[259550]: 2025-10-07 15:14:45.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:14:45 np0005473739 nova_compute[259550]: 2025-10-07 15:14:45.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:14:45 np0005473739 nova_compute[259550]: 2025-10-07 15:14:45.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:14:45 np0005473739 nova_compute[259550]: 2025-10-07 15:14:45.014 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:14:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3397: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:14:45 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3797503227' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:14:45 np0005473739 nova_compute[259550]: 2025-10-07 15:14:45.447 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.433s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:14:45 np0005473739 nova_compute[259550]: 2025-10-07 15:14:45.639 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:14:45 np0005473739 nova_compute[259550]: 2025-10-07 15:14:45.641 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3634MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:14:45 np0005473739 nova_compute[259550]: 2025-10-07 15:14:45.642 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:14:45 np0005473739 nova_compute[259550]: 2025-10-07 15:14:45.642 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.051 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.051 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.158 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.292 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.292 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.312 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.334 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.352 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:14:46 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 428c4d31-a3ee-48bc-9655-86899c7a158a does not exist
Oct  7 11:14:46 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c7cb0c7b-078b-47ab-af4a-17413a045f22 does not exist
Oct  7 11:14:46 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev b1021ced-d779-4f7c-b346-948dcbf39a0d does not exist
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:14:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2330554311' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.955 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.603s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.961 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.981 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.983 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Oct  7 11:14:46 np0005473739 nova_compute[259550]: 2025-10-07 15:14:46.983 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 11:14:47 np0005473739 podman[448330]: 2025-10-07 15:14:47.172310293 +0000 UTC m=+0.051626779 container create f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 11:14:47 np0005473739 systemd[1]: Started libpod-conmon-f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97.scope.
Oct  7 11:14:47 np0005473739 podman[448330]: 2025-10-07 15:14:47.146597046 +0000 UTC m=+0.025913522 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:14:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:14:47 np0005473739 podman[448330]: 2025-10-07 15:14:47.275463896 +0000 UTC m=+0.154780422 container init f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:14:47 np0005473739 podman[448330]: 2025-10-07 15:14:47.284286919 +0000 UTC m=+0.163603405 container start f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:14:47 np0005473739 podman[448330]: 2025-10-07 15:14:47.289611699 +0000 UTC m=+0.168928205 container attach f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:14:47 np0005473739 serene_napier[448346]: 167 167
Oct  7 11:14:47 np0005473739 systemd[1]: libpod-f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97.scope: Deactivated successfully.
Oct  7 11:14:47 np0005473739 conmon[448346]: conmon f2248a036b7c241101ed <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97.scope/container/memory.events
Oct  7 11:14:47 np0005473739 podman[448330]: 2025-10-07 15:14:47.294190309 +0000 UTC m=+0.173506795 container died f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 11:14:47 np0005473739 systemd[1]: var-lib-containers-storage-overlay-9d3c79e2cbeb5af053de20f6e2e38b1d33c76fdd7a41b39c8df870a9d98cb14f-merged.mount: Deactivated successfully.
Oct  7 11:14:47 np0005473739 podman[448330]: 2025-10-07 15:14:47.343177058 +0000 UTC m=+0.222493514 container remove f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_napier, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 11:14:47 np0005473739 systemd[1]: libpod-conmon-f2248a036b7c241101ed5e647f111bec6ad42e1824405ec1b1a5de7f18e81f97.scope: Deactivated successfully.
Oct  7 11:14:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3398: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:47 np0005473739 podman[448370]: 2025-10-07 15:14:47.563200095 +0000 UTC m=+0.061287694 container create 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507)
Oct  7 11:14:47 np0005473739 systemd[1]: Started libpod-conmon-320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7.scope.
Oct  7 11:14:47 np0005473739 podman[448370]: 2025-10-07 15:14:47.545076598 +0000 UTC m=+0.043164177 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:14:47 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:14:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:47 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:47 np0005473739 podman[448370]: 2025-10-07 15:14:47.665371972 +0000 UTC m=+0.163459541 container init 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 11:14:47 np0005473739 podman[448370]: 2025-10-07 15:14:47.673091135 +0000 UTC m=+0.171178694 container start 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:14:47 np0005473739 podman[448370]: 2025-10-07 15:14:47.676598667 +0000 UTC m=+0.174686256 container attach 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:14:48 np0005473739 priceless_panini[448387]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:14:48 np0005473739 priceless_panini[448387]: --> relative data size: 1.0
Oct  7 11:14:48 np0005473739 priceless_panini[448387]: --> All data devices are unavailable
Oct  7 11:14:48 np0005473739 systemd[1]: libpod-320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7.scope: Deactivated successfully.
Oct  7 11:14:48 np0005473739 podman[448370]: 2025-10-07 15:14:48.741119207 +0000 UTC m=+1.239206776 container died 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 11:14:48 np0005473739 systemd[1]: libpod-320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7.scope: Consumed 1.021s CPU time.
Oct  7 11:14:48 np0005473739 nova_compute[259550]: 2025-10-07 15:14:48.775 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:14:48 np0005473739 systemd[1]: var-lib-containers-storage-overlay-73bcbf8b902919bf75d38b575b4855d198d3467fa5d186ed6ca3380451eafa6e-merged.mount: Deactivated successfully.
Oct  7 11:14:48 np0005473739 podman[448370]: 2025-10-07 15:14:48.897173922 +0000 UTC m=+1.395261491 container remove 320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=priceless_panini, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 11:14:48 np0005473739 systemd[1]: libpod-conmon-320db93007f349a40ce6c8bfe5450697a2b78aa1e2523e556cfa9eeba4c8bfd7.scope: Deactivated successfully.
Oct  7 11:14:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3399: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:49 np0005473739 podman[448572]: 2025-10-07 15:14:49.574148297 +0000 UTC m=+0.058145759 container create 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, io.buildah.version=1.39.3)
Oct  7 11:14:49 np0005473739 systemd[1]: Started libpod-conmon-461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd.scope.
Oct  7 11:14:49 np0005473739 podman[448572]: 2025-10-07 15:14:49.53928813 +0000 UTC m=+0.023285612 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:14:49 np0005473739 nova_compute[259550]: 2025-10-07 15:14:49.643 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:14:49 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:14:49 np0005473739 podman[448572]: 2025-10-07 15:14:49.686960935 +0000 UTC m=+0.170958407 container init 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:14:49 np0005473739 podman[448572]: 2025-10-07 15:14:49.696140546 +0000 UTC m=+0.180138008 container start 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 11:14:49 np0005473739 pensive_sinoussi[448587]: 167 167
Oct  7 11:14:49 np0005473739 systemd[1]: libpod-461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd.scope: Deactivated successfully.
Oct  7 11:14:49 np0005473739 conmon[448587]: conmon 461e2d787392734d625d <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd.scope/container/memory.events
Oct  7 11:14:49 np0005473739 podman[448572]: 2025-10-07 15:14:49.711646274 +0000 UTC m=+0.195643756 container attach 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:14:49 np0005473739 podman[448572]: 2025-10-07 15:14:49.712291591 +0000 UTC m=+0.196289083 container died 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:14:49 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5e0210df84fed5dbb3a0902703b33bd828ab54fd2651d3379d79ef456bc98a59-merged.mount: Deactivated successfully.
Oct  7 11:14:49 np0005473739 podman[448572]: 2025-10-07 15:14:49.778104722 +0000 UTC m=+0.262102174 container remove 461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pensive_sinoussi, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:14:49 np0005473739 systemd[1]: libpod-conmon-461e2d787392734d625dec90aa9a6036604a512b2aa11aa9060a869e7df71acd.scope: Deactivated successfully.
Oct  7 11:14:50 np0005473739 podman[448612]: 2025-10-07 15:14:49.953465775 +0000 UTC m=+0.027618228 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:14:50 np0005473739 podman[448612]: 2025-10-07 15:14:50.271466458 +0000 UTC m=+0.345618881 container create 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507)
Oct  7 11:14:50 np0005473739 systemd[1]: Started libpod-conmon-5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac.scope.
Oct  7 11:14:50 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:14:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d56517ca73afdfb34a385b6739292fa2c965059cba89462273a9b21915a3550/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d56517ca73afdfb34a385b6739292fa2c965059cba89462273a9b21915a3550/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d56517ca73afdfb34a385b6739292fa2c965059cba89462273a9b21915a3550/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:50 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0d56517ca73afdfb34a385b6739292fa2c965059cba89462273a9b21915a3550/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:50 np0005473739 podman[448612]: 2025-10-07 15:14:50.539161049 +0000 UTC m=+0.613313542 container init 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:14:50 np0005473739 podman[448612]: 2025-10-07 15:14:50.551749621 +0000 UTC m=+0.625902034 container start 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:14:50 np0005473739 podman[448612]: 2025-10-07 15:14:50.569342094 +0000 UTC m=+0.643494607 container attach 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default)
Oct  7 11:14:50 np0005473739 nova_compute[259550]: 2025-10-07 15:14:50.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:14:50 np0005473739 nova_compute[259550]: 2025-10-07 15:14:50.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:14:50 np0005473739 nova_compute[259550]: 2025-10-07 15:14:50.986 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:14:51 np0005473739 nova_compute[259550]: 2025-10-07 15:14:51.005 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:14:51 np0005473739 nova_compute[259550]: 2025-10-07 15:14:51.005 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:14:51 np0005473739 zen_shamir[448629]: {
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:    "0": [
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:        {
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "devices": [
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "/dev/loop3"
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            ],
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_name": "ceph_lv0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_size": "21470642176",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "name": "ceph_lv0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "tags": {
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.cluster_name": "ceph",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.crush_device_class": "",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.encrypted": "0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.osd_id": "0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.type": "block",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.vdo": "0"
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            },
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "type": "block",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "vg_name": "ceph_vg0"
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:        }
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:    ],
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:    "1": [
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:        {
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "devices": [
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "/dev/loop4"
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            ],
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_name": "ceph_lv1",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_size": "21470642176",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "name": "ceph_lv1",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "tags": {
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.cluster_name": "ceph",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.crush_device_class": "",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.encrypted": "0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.osd_id": "1",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.type": "block",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.vdo": "0"
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            },
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "type": "block",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "vg_name": "ceph_vg1"
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:        }
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:    ],
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:    "2": [
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:        {
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "devices": [
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "/dev/loop5"
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            ],
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_name": "ceph_lv2",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_size": "21470642176",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "name": "ceph_lv2",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "tags": {
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.cluster_name": "ceph",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.crush_device_class": "",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.encrypted": "0",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.osd_id": "2",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.type": "block",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:                "ceph.vdo": "0"
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            },
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "type": "block",
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:            "vg_name": "ceph_vg2"
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:        }
Oct  7 11:14:51 np0005473739 zen_shamir[448629]:    ]
Oct  7 11:14:51 np0005473739 zen_shamir[448629]: }
Oct  7 11:14:51 np0005473739 systemd[1]: libpod-5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac.scope: Deactivated successfully.
Oct  7 11:14:51 np0005473739 podman[448612]: 2025-10-07 15:14:51.322066783 +0000 UTC m=+1.396219206 container died 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:14:51 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0d56517ca73afdfb34a385b6739292fa2c965059cba89462273a9b21915a3550-merged.mount: Deactivated successfully.
Oct  7 11:14:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3400: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:51 np0005473739 podman[448612]: 2025-10-07 15:14:51.385773578 +0000 UTC m=+1.459926001 container remove 5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_shamir, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Oct  7 11:14:51 np0005473739 systemd[1]: libpod-conmon-5c0b1bfa6eb6d8d6e6d61bfe1b75c41863ef081cfb2d2cd29b830fa713ba8aac.scope: Deactivated successfully.
Oct  7 11:14:52 np0005473739 podman[448793]: 2025-10-07 15:14:52.115310507 +0000 UTC m=+0.057129464 container create 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:14:52 np0005473739 systemd[1]: Started libpod-conmon-3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43.scope.
Oct  7 11:14:52 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:14:52 np0005473739 podman[448793]: 2025-10-07 15:14:52.095374192 +0000 UTC m=+0.037193199 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:14:52 np0005473739 podman[448793]: 2025-10-07 15:14:52.201917245 +0000 UTC m=+0.143736282 container init 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 11:14:52 np0005473739 podman[448793]: 2025-10-07 15:14:52.211046115 +0000 UTC m=+0.152865062 container start 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:14:52 np0005473739 podman[448793]: 2025-10-07 15:14:52.214569887 +0000 UTC m=+0.156388884 container attach 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 11:14:52 np0005473739 hopeful_banach[448810]: 167 167
Oct  7 11:14:52 np0005473739 systemd[1]: libpod-3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43.scope: Deactivated successfully.
Oct  7 11:14:52 np0005473739 podman[448793]: 2025-10-07 15:14:52.217177906 +0000 UTC m=+0.158996893 container died 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:14:52 np0005473739 systemd[1]: var-lib-containers-storage-overlay-ac228111f78e1185cd881ba3f8ad7adc98b6b3d676d11ee522e5fb755b201bd4-merged.mount: Deactivated successfully.
Oct  7 11:14:52 np0005473739 podman[448793]: 2025-10-07 15:14:52.294389707 +0000 UTC m=+0.236208674 container remove 3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_banach, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:14:52 np0005473739 systemd[1]: libpod-conmon-3222cd3736a93366224ecbc87033b99d2eb4443620c1407bd048540fdd684f43.scope: Deactivated successfully.
Oct  7 11:14:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:52 np0005473739 podman[448833]: 2025-10-07 15:14:52.506466165 +0000 UTC m=+0.056114037 container create 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:14:52 np0005473739 systemd[1]: Started libpod-conmon-8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911.scope.
Oct  7 11:14:52 np0005473739 podman[448833]: 2025-10-07 15:14:52.479411983 +0000 UTC m=+0.029059915 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:14:52 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:14:52 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f7032e65233f4bbffa93f8f5df1cac882b776b43188a3a7beb0efb13b6cd19/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:52 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f7032e65233f4bbffa93f8f5df1cac882b776b43188a3a7beb0efb13b6cd19/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:52 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f7032e65233f4bbffa93f8f5df1cac882b776b43188a3a7beb0efb13b6cd19/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:52 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/70f7032e65233f4bbffa93f8f5df1cac882b776b43188a3a7beb0efb13b6cd19/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:14:52 np0005473739 podman[448833]: 2025-10-07 15:14:52.623341539 +0000 UTC m=+0.172989401 container init 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:14:52 np0005473739 podman[448833]: 2025-10-07 15:14:52.634340369 +0000 UTC m=+0.183988211 container start 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:14:52 np0005473739 podman[448833]: 2025-10-07 15:14:52.638332154 +0000 UTC m=+0.187980056 container attach 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS)
Oct  7 11:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:14:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:14:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3401: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:53 np0005473739 stoic_booth[448850]: {
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "osd_id": 2,
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "type": "bluestore"
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:    },
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "osd_id": 1,
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "type": "bluestore"
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:    },
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "osd_id": 0,
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:        "type": "bluestore"
Oct  7 11:14:53 np0005473739 stoic_booth[448850]:    }
Oct  7 11:14:53 np0005473739 stoic_booth[448850]: }
Oct  7 11:14:53 np0005473739 systemd[1]: libpod-8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911.scope: Deactivated successfully.
Oct  7 11:14:53 np0005473739 systemd[1]: libpod-8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911.scope: Consumed 1.100s CPU time.
Oct  7 11:14:53 np0005473739 conmon[448850]: conmon 8160dcd933f9b02de8ca <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911.scope/container/memory.events
Oct  7 11:14:53 np0005473739 podman[448833]: 2025-10-07 15:14:53.727382708 +0000 UTC m=+1.277030550 container died 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, ceph=True)
Oct  7 11:14:53 np0005473739 systemd[1]: var-lib-containers-storage-overlay-70f7032e65233f4bbffa93f8f5df1cac882b776b43188a3a7beb0efb13b6cd19-merged.mount: Deactivated successfully.
Oct  7 11:14:53 np0005473739 nova_compute[259550]: 2025-10-07 15:14:53.780 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:53 np0005473739 podman[448833]: 2025-10-07 15:14:53.818400452 +0000 UTC m=+1.368048324 container remove 8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stoic_booth, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 11:14:53 np0005473739 systemd[1]: libpod-conmon-8160dcd933f9b02de8caa9eb18dada8ae0f540699a3e2b757a90bec634363911.scope: Deactivated successfully.
Oct  7 11:14:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:14:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:14:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:14:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:14:53 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 66d93530-0350-4d01-b885-93c78c8957c2 does not exist
Oct  7 11:14:53 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev a6084d7b-42d8-4d49-98a6-108295f2b42b does not exist
Oct  7 11:14:54 np0005473739 nova_compute[259550]: 2025-10-07 15:14:54.645 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:14:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:14:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3402: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:57 np0005473739 podman[448947]: 2025-10-07 15:14:57.101271949 +0000 UTC m=+0.072676463 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, config_id=iscsid, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Oct  7 11:14:57 np0005473739 podman[448946]: 2025-10-07 15:14:57.110708827 +0000 UTC m=+0.093993613 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd)
Oct  7 11:14:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3403: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:14:58 np0005473739 nova_compute[259550]: 2025-10-07 15:14:58.785 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:14:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3404: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:14:59 np0005473739 nova_compute[259550]: 2025-10-07 15:14:59.648 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:15:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:15:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:15:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:15:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:15:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3405: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3406: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:03 np0005473739 nova_compute[259550]: 2025-10-07 15:15:03.789 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:04 np0005473739 nova_compute[259550]: 2025-10-07 15:15:04.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3407: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3408: 305 pgs: 305 active+clean; 41 MiB data, 1.0 GiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e306 do_prune osdmap full prune enabled
Oct  7 11:15:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e307 e307: 3 total, 3 up, 3 in
Oct  7 11:15:07 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e307: 3 total, 3 up, 3 in
Oct  7 11:15:08 np0005473739 nova_compute[259550]: 2025-10-07 15:15:08.794 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3410: 305 pgs: 305 active+clean; 37 MiB data, 1020 MiB used, 59 GiB / 60 GiB avail; 3.7 KiB/s rd, 511 B/s wr, 6 op/s
Oct  7 11:15:09 np0005473739 nova_compute[259550]: 2025-10-07 15:15:09.650 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e307 do_prune osdmap full prune enabled
Oct  7 11:15:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e308 e308: 3 total, 3 up, 3 in
Oct  7 11:15:10 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e308: 3 total, 3 up, 3 in
Oct  7 11:15:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3412: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 23 KiB/s rd, 1.9 KiB/s wr, 33 op/s
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e308 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e308 do_prune osdmap full prune enabled
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e309 e309: 3 total, 3 up, 3 in
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e309: 3 total, 3 up, 3 in
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #168. Immutable memtables: 0.
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.476664) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 103] Flushing memtable with next log file: 168
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112476732, "job": 103, "event": "flush_started", "num_memtables": 1, "num_entries": 734, "num_deletes": 258, "total_data_size": 867344, "memory_usage": 880520, "flush_reason": "Manual Compaction"}
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 103] Level-0 flush table #169: started
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112486772, "cf_name": "default", "job": 103, "event": "table_file_creation", "file_number": 169, "file_size": 858833, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 70419, "largest_seqno": 71152, "table_properties": {"data_size": 855002, "index_size": 1610, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8674, "raw_average_key_size": 19, "raw_value_size": 847200, "raw_average_value_size": 1886, "num_data_blocks": 71, "num_entries": 449, "num_filter_entries": 449, "num_deletions": 258, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759850058, "oldest_key_time": 1759850058, "file_creation_time": 1759850112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 169, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 103] Flush lasted 10149 microseconds, and 4436 cpu microseconds.
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.486822) [db/flush_job.cc:967] [default] [JOB 103] Level-0 flush table #169: 858833 bytes OK
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.486845) [db/memtable_list.cc:519] [default] Level-0 commit table #169 started
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.490041) [db/memtable_list.cc:722] [default] Level-0 commit table #169: memtable #1 done
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.490057) EVENT_LOG_v1 {"time_micros": 1759850112490051, "job": 103, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.490075) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 103] Try to delete WAL files size 863544, prev total WAL file size 863544, number of live WAL files 2.
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000165.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.490568) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033303230' seq:72057594037927935, type:22 .. '6C6F676D0033323731' seq:0, type:0; will stop at (end)
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 104] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 103 Base level 0, inputs: [169(838KB)], [167(9269KB)]
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112490616, "job": 104, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [169], "files_L6": [167], "score": -1, "input_data_size": 10351042, "oldest_snapshot_seqno": -1}
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 104] Generated table #170: 8662 keys, 10238067 bytes, temperature: kUnknown
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112586286, "cf_name": "default", "job": 104, "event": "table_file_creation", "file_number": 170, "file_size": 10238067, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10183311, "index_size": 31987, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 21701, "raw_key_size": 228455, "raw_average_key_size": 26, "raw_value_size": 10031790, "raw_average_value_size": 1158, "num_data_blocks": 1234, "num_entries": 8662, "num_filter_entries": 8662, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759850112, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 170, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.586620) [db/compaction/compaction_job.cc:1663] [default] [JOB 104] Compacted 1@0 + 1@6 files to L6 => 10238067 bytes
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.588053) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 108.1 rd, 106.9 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.8, 9.1 +0.0 blob) out(9.8 +0.0 blob), read-write-amplify(24.0) write-amplify(11.9) OK, records in: 9192, records dropped: 530 output_compression: NoCompression
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.588072) EVENT_LOG_v1 {"time_micros": 1759850112588063, "job": 104, "event": "compaction_finished", "compaction_time_micros": 95782, "compaction_time_cpu_micros": 43279, "output_level": 6, "num_output_files": 1, "total_output_size": 10238067, "num_input_records": 9192, "num_output_records": 8662, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000169.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112588391, "job": 104, "event": "table_file_deletion", "file_number": 169}
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000167.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850112590205, "job": 104, "event": "table_file_deletion", "file_number": 167}
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.490498) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.590272) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.590279) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.590281) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.590283) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:15:12 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:15:12.590285) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:15:13 np0005473739 podman[448986]: 2025-10-07 15:15:13.142448639 +0000 UTC m=+0.113751623 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:15:13 np0005473739 podman[448987]: 2025-10-07 15:15:13.147790279 +0000 UTC m=+0.114161124 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:15:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3414: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 31 KiB/s rd, 2.5 KiB/s wr, 44 op/s
Oct  7 11:15:13 np0005473739 nova_compute[259550]: 2025-10-07 15:15:13.797 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:14 np0005473739 nova_compute[259550]: 2025-10-07 15:15:14.652 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3415: 305 pgs: 305 active+clean; 4.9 MiB data, 988 MiB used, 59 GiB / 60 GiB avail; 43 KiB/s rd, 2.7 KiB/s wr, 58 op/s
Oct  7 11:15:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e309 do_prune osdmap full prune enabled
Oct  7 11:15:16 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e310 e310: 3 total, 3 up, 3 in
Oct  7 11:15:16 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e310: 3 total, 3 up, 3 in
Oct  7 11:15:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3417: 305 pgs: 305 active+clean; 8.4 MiB data, 984 MiB used, 59 GiB / 60 GiB avail; 48 KiB/s rd, 1.2 MiB/s wr, 64 op/s
Oct  7 11:15:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:18 np0005473739 nova_compute[259550]: 2025-10-07 15:15:18.801 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:18 np0005473739 nova_compute[259550]: 2025-10-07 15:15:18.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:15:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3418: 305 pgs: 305 active+clean; 21 MiB data, 996 MiB used, 59 GiB / 60 GiB avail; 32 KiB/s rd, 2.6 MiB/s wr, 44 op/s
Oct  7 11:15:19 np0005473739 nova_compute[259550]: 2025-10-07 15:15:19.653 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3419: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 28 KiB/s rd, 2.3 MiB/s wr, 40 op/s
Oct  7 11:15:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e310 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e310 do_prune osdmap full prune enabled
Oct  7 11:15:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 e311: 3 total, 3 up, 3 in
Oct  7 11:15:22 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e311: 3 total, 3 up, 3 in
Oct  7 11:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:15:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:15:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:15:22
Oct  7 11:15:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:15:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:15:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.data', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.meta', 'default.rgw.control', '.mgr', 'volumes', 'images', 'backups', 'vms', 'default.rgw.meta']
Oct  7 11:15:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:15:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:15:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:15:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:15:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:15:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:15:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:15:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:15:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:15:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3421: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 13 KiB/s rd, 2.6 MiB/s wr, 21 op/s
Oct  7 11:15:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:15:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:15:23 np0005473739 nova_compute[259550]: 2025-10-07 15:15:23.805 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:24 np0005473739 nova_compute[259550]: 2025-10-07 15:15:24.655 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3422: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 9.0 KiB/s rd, 1.5 MiB/s wr, 13 op/s
Oct  7 11:15:26 np0005473739 nova_compute[259550]: 2025-10-07 15:15:26.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:15:26 np0005473739 nova_compute[259550]: 2025-10-07 15:15:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:15:26 np0005473739 nova_compute[259550]: 2025-10-07 15:15:26.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:15:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3423: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 7.7 KiB/s rd, 1.2 MiB/s wr, 11 op/s
Oct  7 11:15:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:28 np0005473739 podman[449033]: 2025-10-07 15:15:28.07568542 +0000 UTC m=+0.054175975 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 11:15:28 np0005473739 podman[449032]: 2025-10-07 15:15:28.101975582 +0000 UTC m=+0.086320441 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Oct  7 11:15:28 np0005473739 nova_compute[259550]: 2025-10-07 15:15:28.809 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:28 np0005473739 nova_compute[259550]: 2025-10-07 15:15:28.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:15:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3424: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:29 np0005473739 nova_compute[259550]: 2025-10-07 15:15:29.657 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:29 np0005473739 nova_compute[259550]: 2025-10-07 15:15:29.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:15:29 np0005473739 nova_compute[259550]: 2025-10-07 15:15:29.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:15:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3425: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:15:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1490900485' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:15:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:15:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1490900485' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 0.00033308812756397733 of space, bias 1.0, pg target 0.0999264382691932 quantized to 32 (current 32)
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:15:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3426: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:33 np0005473739 nova_compute[259550]: 2025-10-07 15:15:33.813 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:34 np0005473739 nova_compute[259550]: 2025-10-07 15:15:34.659 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3427: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3428: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:38 np0005473739 nova_compute[259550]: 2025-10-07 15:15:38.817 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3429: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:39 np0005473739 nova_compute[259550]: 2025-10-07 15:15:39.662 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3430: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3431: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:43 np0005473739 nova_compute[259550]: 2025-10-07 15:15:43.821 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:44 np0005473739 podman[449073]: 2025-10-07 15:15:44.062588496 +0000 UTC m=+0.053203331 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true)
Oct  7 11:15:44 np0005473739 podman[449074]: 2025-10-07 15:15:44.115487436 +0000 UTC m=+0.099089616 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0)
Oct  7 11:15:44 np0005473739 nova_compute[259550]: 2025-10-07 15:15:44.664 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:15:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3432: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:45 np0005473739 nova_compute[259550]: 2025-10-07 15:15:45.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.010 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.011 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.011 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.011 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.012 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:15:46 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:15:46 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2035377137' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.499 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.487s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.663 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.665 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3619MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.665 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.666 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.722 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.723 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:15:46 np0005473739 nova_compute[259550]: 2025-10-07 15:15:46.742 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:15:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:15:47 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4744337' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:15:47 np0005473739 nova_compute[259550]: 2025-10-07 15:15:47.256 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.514s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:15:47 np0005473739 nova_compute[259550]: 2025-10-07 15:15:47.265 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:15:47 np0005473739 nova_compute[259550]: 2025-10-07 15:15:47.285 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:15:47 np0005473739 nova_compute[259550]: 2025-10-07 15:15:47.287 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:15:47 np0005473739 nova_compute[259550]: 2025-10-07 15:15:47.287 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:15:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3433: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:48 np0005473739 nova_compute[259550]: 2025-10-07 15:15:48.824 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:15:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3434: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:49 np0005473739 nova_compute[259550]: 2025-10-07 15:15:49.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:15:51 np0005473739 nova_compute[259550]: 2025-10-07 15:15:51.288 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 11:15:51 np0005473739 nova_compute[259550]: 2025-10-07 15:15:51.289 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Oct  7 11:15:51 np0005473739 nova_compute[259550]: 2025-10-07 15:15:51.289 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Oct  7 11:15:51 np0005473739 nova_compute[259550]: 2025-10-07 15:15:51.304 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Oct  7 11:15:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3435: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:15:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:15:52 np0005473739 nova_compute[259550]: 2025-10-07 15:15:52.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 11:15:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3436: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:53 np0005473739 nova_compute[259550]: 2025-10-07 15:15:53.830 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:15:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:15:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:15:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:15:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:15:54 np0005473739 nova_compute[259550]: 2025-10-07 15:15:54.667 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:15:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:15:54 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:15:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3437: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"} v 0) v1
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:15:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev da8639ee-af7b-4093-8c20-a62e7ee28f55 does not exist
Oct  7 11:15:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 5226074b-cbdc-42b4-9cbf-d5c848717b41 does not exist
Oct  7 11:15:55 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 701eb824-98fe-4803-8beb-f42f1a1f50c7 does not exist
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config rm", "who": "osd/host:compute-0", "name": "osd_memory_target"}]: dispatch
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:15:55 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:15:56 np0005473739 podman[449557]: 2025-10-07 15:15:56.133444069 +0000 UTC m=+0.040100946 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:15:56 np0005473739 podman[449557]: 2025-10-07 15:15:56.255222702 +0000 UTC m=+0.161879559 container create 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 11:15:56 np0005473739 systemd[1]: Started libpod-conmon-9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a.scope.
Oct  7 11:15:56 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:15:56 np0005473739 podman[449557]: 2025-10-07 15:15:56.555402517 +0000 UTC m=+0.462059394 container init 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 11:15:56 np0005473739 podman[449557]: 2025-10-07 15:15:56.56537808 +0000 UTC m=+0.472034977 container start 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 11:15:56 np0005473739 epic_poincare[449574]: 167 167
Oct  7 11:15:56 np0005473739 systemd[1]: libpod-9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a.scope: Deactivated successfully.
Oct  7 11:15:56 np0005473739 podman[449557]: 2025-10-07 15:15:56.714749359 +0000 UTC m=+0.621406316 container attach 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2)
Oct  7 11:15:56 np0005473739 podman[449557]: 2025-10-07 15:15:56.717045028 +0000 UTC m=+0.623701985 container died 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:15:56 np0005473739 systemd[1]: var-lib-containers-storage-overlay-3f4cb06a5ccca84580d644f67e4184fd4924c28d28c22eb2575e6252a567c579-merged.mount: Deactivated successfully.
Oct  7 11:15:57 np0005473739 podman[449557]: 2025-10-07 15:15:57.235869435 +0000 UTC m=+1.142526312 container remove 9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=epic_poincare, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:15:57 np0005473739 systemd[1]: libpod-conmon-9daff3193fbb75816c4cf9812593ba8201cfef8266efb8a5bedcb05038373d0a.scope: Deactivated successfully.
Oct  7 11:15:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3438: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:57 np0005473739 podman[449598]: 2025-10-07 15:15:57.39572856 +0000 UTC m=+0.036187773 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:15:57 np0005473739 podman[449598]: 2025-10-07 15:15:57.490023731 +0000 UTC m=+0.130482894 container create b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True)
Oct  7 11:15:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:15:57 np0005473739 systemd[1]: Started libpod-conmon-b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910.scope.
Oct  7 11:15:57 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:15:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:15:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:15:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:15:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:15:57 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:15:57 np0005473739 podman[449598]: 2025-10-07 15:15:57.839068991 +0000 UTC m=+0.479528224 container init b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:15:57 np0005473739 podman[449598]: 2025-10-07 15:15:57.854300422 +0000 UTC m=+0.494759545 container start b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:15:57 np0005473739 podman[449598]: 2025-10-07 15:15:57.974456592 +0000 UTC m=+0.614915745 container attach b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0)
Oct  7 11:15:58 np0005473739 nova_compute[259550]: 2025-10-07 15:15:58.833 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:15:58 np0005473739 competent_bohr[449614]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:15:58 np0005473739 competent_bohr[449614]: --> relative data size: 1.0
Oct  7 11:15:58 np0005473739 competent_bohr[449614]: --> All data devices are unavailable
Oct  7 11:15:58 np0005473739 systemd[1]: libpod-b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910.scope: Deactivated successfully.
Oct  7 11:15:58 np0005473739 podman[449598]: 2025-10-07 15:15:58.980250687 +0000 UTC m=+1.620709810 container died b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0)
Oct  7 11:15:58 np0005473739 systemd[1]: libpod-b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910.scope: Consumed 1.080s CPU time.
Oct  7 11:15:59 np0005473739 systemd[1]: var-lib-containers-storage-overlay-6c08f1462b629f502ad6cfd8332cd22e7f196871c31418c8fe3783da8bd6160b-merged.mount: Deactivated successfully.
Oct  7 11:15:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3439: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:15:59 np0005473739 podman[449598]: 2025-10-07 15:15:59.43320942 +0000 UTC m=+2.073668553 container remove b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=competent_bohr, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 11:15:59 np0005473739 systemd[1]: libpod-conmon-b175a29ecc6c0d8beae895d22a9b4db9bf6eb9a4bed9ed8061e66c8425268910.scope: Deactivated successfully.
Oct  7 11:15:59 np0005473739 podman[449644]: 2025-10-07 15:15:59.519433819 +0000 UTC m=+0.505689722 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001)
Oct  7 11:15:59 np0005473739 podman[449653]: 2025-10-07 15:15:59.519689595 +0000 UTC m=+0.503412501 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Oct  7 11:15:59 np0005473739 nova_compute[259550]: 2025-10-07 15:15:59.669 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:16:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:16:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 11:16:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:16:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 11:16:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:16:00.117 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 11:16:00 np0005473739 podman[449831]: 2025-10-07 15:16:00.092201754 +0000 UTC m=+0.025448190 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:16:00 np0005473739 podman[449831]: 2025-10-07 15:16:00.253202099 +0000 UTC m=+0.186448485 container create b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 11:16:00 np0005473739 systemd[1]: Started libpod-conmon-b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680.scope.
Oct  7 11:16:00 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:16:00 np0005473739 podman[449831]: 2025-10-07 15:16:00.721538757 +0000 UTC m=+0.654785213 container init b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:16:00 np0005473739 podman[449831]: 2025-10-07 15:16:00.730733559 +0000 UTC m=+0.663979905 container start b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:16:00 np0005473739 stupefied_kalam[449848]: 167 167
Oct  7 11:16:00 np0005473739 systemd[1]: libpod-b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680.scope: Deactivated successfully.
Oct  7 11:16:00 np0005473739 podman[449831]: 2025-10-07 15:16:00.794079905 +0000 UTC m=+0.727326281 container attach b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True)
Oct  7 11:16:00 np0005473739 podman[449831]: 2025-10-07 15:16:00.794616239 +0000 UTC m=+0.727862615 container died b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:16:00 np0005473739 systemd[1]: var-lib-containers-storage-overlay-40d0d3847fda64ec7383e4040f3fbff0ed3433f1cb4804a97b2f3951f9cc7466-merged.mount: Deactivated successfully.
Oct  7 11:16:00 np0005473739 podman[449831]: 2025-10-07 15:16:00.963113851 +0000 UTC m=+0.896360197 container remove b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=stupefied_kalam, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 11:16:00 np0005473739 systemd[1]: libpod-conmon-b1049979d03638e95c4a4b81d886121a59efc1ed090725bbee3b7768ab86e680.scope: Deactivated successfully.
Oct  7 11:16:01 np0005473739 podman[449872]: 2025-10-07 15:16:01.183789516 +0000 UTC m=+0.026408886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:16:01 np0005473739 podman[449872]: 2025-10-07 15:16:01.314478742 +0000 UTC m=+0.157098122 container create 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:16:01 np0005473739 systemd[1]: Started libpod-conmon-3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e.scope.
Oct  7 11:16:01 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:16:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3440: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7921156b790b07f43a8edd1fc21f45c7ce7c2c8035ef9512c1d84be3452947/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:16:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7921156b790b07f43a8edd1fc21f45c7ce7c2c8035ef9512c1d84be3452947/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:16:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7921156b790b07f43a8edd1fc21f45c7ce7c2c8035ef9512c1d84be3452947/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:16:01 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/5f7921156b790b07f43a8edd1fc21f45c7ce7c2c8035ef9512c1d84be3452947/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:16:01 np0005473739 podman[449872]: 2025-10-07 15:16:01.435579338 +0000 UTC m=+0.278198708 container init 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:16:01 np0005473739 podman[449872]: 2025-10-07 15:16:01.446761463 +0000 UTC m=+0.289380813 container start 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:16:01 np0005473739 podman[449872]: 2025-10-07 15:16:01.455391449 +0000 UTC m=+0.298010799 container attach 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 11:16:02 np0005473739 great_murdock[449889]: {
Oct  7 11:16:02 np0005473739 great_murdock[449889]:    "0": [
Oct  7 11:16:02 np0005473739 great_murdock[449889]:        {
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "devices": [
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "/dev/loop3"
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            ],
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_name": "ceph_lv0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_size": "21470642176",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "name": "ceph_lv0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "tags": {
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.cluster_name": "ceph",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.crush_device_class": "",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.encrypted": "0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.osd_id": "0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.type": "block",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.vdo": "0"
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            },
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "type": "block",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "vg_name": "ceph_vg0"
Oct  7 11:16:02 np0005473739 great_murdock[449889]:        }
Oct  7 11:16:02 np0005473739 great_murdock[449889]:    ],
Oct  7 11:16:02 np0005473739 great_murdock[449889]:    "1": [
Oct  7 11:16:02 np0005473739 great_murdock[449889]:        {
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "devices": [
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "/dev/loop4"
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            ],
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_name": "ceph_lv1",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_size": "21470642176",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "name": "ceph_lv1",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "tags": {
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.cluster_name": "ceph",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.crush_device_class": "",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.encrypted": "0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.osd_id": "1",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.type": "block",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.vdo": "0"
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            },
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "type": "block",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "vg_name": "ceph_vg1"
Oct  7 11:16:02 np0005473739 great_murdock[449889]:        }
Oct  7 11:16:02 np0005473739 great_murdock[449889]:    ],
Oct  7 11:16:02 np0005473739 great_murdock[449889]:    "2": [
Oct  7 11:16:02 np0005473739 great_murdock[449889]:        {
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "devices": [
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "/dev/loop5"
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            ],
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_name": "ceph_lv2",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_size": "21470642176",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "name": "ceph_lv2",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "tags": {
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.cluster_name": "ceph",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.crush_device_class": "",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.encrypted": "0",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.osd_id": "2",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.type": "block",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:                "ceph.vdo": "0"
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            },
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "type": "block",
Oct  7 11:16:02 np0005473739 great_murdock[449889]:            "vg_name": "ceph_vg2"
Oct  7 11:16:02 np0005473739 great_murdock[449889]:        }
Oct  7 11:16:02 np0005473739 great_murdock[449889]:    ]
Oct  7 11:16:02 np0005473739 great_murdock[449889]: }
Oct  7 11:16:02 np0005473739 systemd[1]: libpod-3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e.scope: Deactivated successfully.
Oct  7 11:16:02 np0005473739 podman[449872]: 2025-10-07 15:16:02.304824781 +0000 UTC m=+1.147444131 container died 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:16:02 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5f7921156b790b07f43a8edd1fc21f45c7ce7c2c8035ef9512c1d84be3452947-merged.mount: Deactivated successfully.
Oct  7 11:16:02 np0005473739 podman[449872]: 2025-10-07 15:16:02.375449809 +0000 UTC m=+1.218069159 container remove 3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=great_murdock, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, ceph=True, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:16:02 np0005473739 systemd[1]: libpod-conmon-3b23a4b058d661681947d93082939c1ace6c3db817d1ec9acfbd38180881ef7e.scope: Deactivated successfully.
Oct  7 11:16:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:03 np0005473739 podman[450051]: 2025-10-07 15:16:03.041602681 +0000 UTC m=+0.038051783 container create cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, ceph=True, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 11:16:03 np0005473739 systemd[1]: Started libpod-conmon-cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee.scope.
Oct  7 11:16:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:16:03 np0005473739 podman[450051]: 2025-10-07 15:16:03.120251599 +0000 UTC m=+0.116700711 container init cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:16:03 np0005473739 podman[450051]: 2025-10-07 15:16:03.023809013 +0000 UTC m=+0.020258115 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:16:03 np0005473739 podman[450051]: 2025-10-07 15:16:03.127628783 +0000 UTC m=+0.124077875 container start cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:16:03 np0005473739 zen_feynman[450067]: 167 167
Oct  7 11:16:03 np0005473739 systemd[1]: libpod-cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee.scope: Deactivated successfully.
Oct  7 11:16:03 np0005473739 podman[450051]: 2025-10-07 15:16:03.133439256 +0000 UTC m=+0.129888358 container attach cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 11:16:03 np0005473739 podman[450051]: 2025-10-07 15:16:03.134093054 +0000 UTC m=+0.130542146 container died cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 11:16:03 np0005473739 systemd[1]: var-lib-containers-storage-overlay-28a11f444fcdcf5045486b8752ed82a835886fb001feea17619c80f59beb2f0f-merged.mount: Deactivated successfully.
Oct  7 11:16:03 np0005473739 podman[450051]: 2025-10-07 15:16:03.179688182 +0000 UTC m=+0.176137274 container remove cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=zen_feynman, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:16:03 np0005473739 systemd[1]: libpod-conmon-cea40ac9c8ff3dba54d6ce70f83170d26256e30ed8e6410ba886ad7721e3ecee.scope: Deactivated successfully.
Oct  7 11:16:03 np0005473739 podman[450090]: 2025-10-07 15:16:03.340066631 +0000 UTC m=+0.044088911 container create 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507)
Oct  7 11:16:03 np0005473739 systemd[1]: Started libpod-conmon-40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b.scope.
Oct  7 11:16:03 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:16:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d84a48ded24c6f69ec57f9dacf31724f654d038386bfd9db1ccdd048e04563/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:16:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d84a48ded24c6f69ec57f9dacf31724f654d038386bfd9db1ccdd048e04563/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:16:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d84a48ded24c6f69ec57f9dacf31724f654d038386bfd9db1ccdd048e04563/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:16:03 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f6d84a48ded24c6f69ec57f9dacf31724f654d038386bfd9db1ccdd048e04563/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:16:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3441: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:03 np0005473739 podman[450090]: 2025-10-07 15:16:03.405604015 +0000 UTC m=+0.109626315 container init 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 11:16:03 np0005473739 podman[450090]: 2025-10-07 15:16:03.413253196 +0000 UTC m=+0.117275486 container start 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:16:03 np0005473739 podman[450090]: 2025-10-07 15:16:03.320001623 +0000 UTC m=+0.024023953 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:16:03 np0005473739 podman[450090]: 2025-10-07 15:16:03.417912968 +0000 UTC m=+0.121935268 container attach 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Oct  7 11:16:03 np0005473739 nova_compute[259550]: 2025-10-07 15:16:03.841 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]: {
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "osd_id": 2,
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "type": "bluestore"
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:    },
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "osd_id": 1,
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "type": "bluestore"
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:    },
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "osd_id": 0,
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:        "type": "bluestore"
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]:    }
Oct  7 11:16:04 np0005473739 wizardly_albattani[450107]: }
Oct  7 11:16:04 np0005473739 systemd[1]: libpod-40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b.scope: Deactivated successfully.
Oct  7 11:16:04 np0005473739 podman[450090]: 2025-10-07 15:16:04.468486121 +0000 UTC m=+1.172508401 container died 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:16:04 np0005473739 systemd[1]: libpod-40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b.scope: Consumed 1.062s CPU time.
Oct  7 11:16:04 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f6d84a48ded24c6f69ec57f9dacf31724f654d038386bfd9db1ccdd048e04563-merged.mount: Deactivated successfully.
Oct  7 11:16:04 np0005473739 podman[450090]: 2025-10-07 15:16:04.56650212 +0000 UTC m=+1.270524410 container remove 40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wizardly_albattani, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.license=GPLv2)
Oct  7 11:16:04 np0005473739 systemd[1]: libpod-conmon-40dfd93c3db510563313fd7fbc13c9192d3286e1324d6f49310b54501e1ea96b.scope: Deactivated successfully.
Oct  7 11:16:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:16:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:16:04 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:16:04 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:16:04 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 27b8b4a8-9bbb-424d-a626-a1d237c023e5 does not exist
Oct  7 11:16:04 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 49946aa6-69b7-42c3-b6a5-96ef96126bc4 does not exist
Oct  7 11:16:04 np0005473739 nova_compute[259550]: 2025-10-07 15:16:04.671 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3442: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:16:05 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:16:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3443: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:08 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e311 do_prune osdmap full prune enabled
Oct  7 11:16:08 np0005473739 nova_compute[259550]: 2025-10-07 15:16:08.849 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:09 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e312 e312: 3 total, 3 up, 3 in
Oct  7 11:16:09 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e312: 3 total, 3 up, 3 in
Oct  7 11:16:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3445: 305 pgs: 305 active+clean; 21 MiB data, 1004 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct  7 11:16:09 np0005473739 nova_compute[259550]: 2025-10-07 15:16:09.674 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:09 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #52. Immutable memtables: 0.
Oct  7 11:16:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3446: 305 pgs: 305 active+clean; 21 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct  7 11:16:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3447: 305 pgs: 305 active+clean; 21 MiB data, 1022 MiB used, 59 GiB / 60 GiB avail; 6.4 KiB/s rd, 614 B/s wr, 8 op/s
Oct  7 11:16:13 np0005473739 nova_compute[259550]: 2025-10-07 15:16:13.853 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:14 np0005473739 nova_compute[259550]: 2025-10-07 15:16:14.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:15 np0005473739 podman[450205]: 2025-10-07 15:16:15.08984574 +0000 UTC m=+0.071986515 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.license=GPLv2)
Oct  7 11:16:15 np0005473739 podman[450206]: 2025-10-07 15:16:15.164252037 +0000 UTC m=+0.143918796 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 11:16:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3448: 305 pgs: 305 active+clean; 8.4 MiB data, 1009 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.1 KiB/s wr, 24 op/s
Oct  7 11:16:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3449: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 18 KiB/s rd, 1.4 KiB/s wr, 25 op/s
Oct  7 11:16:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e312 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e312 do_prune osdmap full prune enabled
Oct  7 11:16:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 e313: 3 total, 3 up, 3 in
Oct  7 11:16:17 np0005473739 ceph-mon[74295]: log_channel(cluster) log [DBG] : osdmap e313: 3 total, 3 up, 3 in
Oct  7 11:16:18 np0005473739 nova_compute[259550]: 2025-10-07 15:16:18.858 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3451: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct  7 11:16:19 np0005473739 nova_compute[259550]: 2025-10-07 15:16:19.678 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:20 np0005473739 nova_compute[259550]: 2025-10-07 15:16:20.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:16:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3452: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct  7 11:16:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:16:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:16:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:16:22
Oct  7 11:16:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:16:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:16:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['backups', 'cephfs.cephfs.data', 'default.rgw.log', 'default.rgw.control', 'images', 'default.rgw.meta', '.mgr', '.rgw.root', 'volumes', 'vms', 'cephfs.cephfs.meta']
Oct  7 11:16:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:16:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3453: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 12 KiB/s rd, 818 B/s wr, 16 op/s
Oct  7 11:16:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:16:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:16:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:16:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:16:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:16:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:16:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:16:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:16:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:16:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:16:23 np0005473739 nova_compute[259550]: 2025-10-07 15:16:23.862 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:24 np0005473739 nova_compute[259550]: 2025-10-07 15:16:24.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3454: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 614 B/s rd, 307 B/s wr, 1 op/s
Oct  7 11:16:26 np0005473739 nova_compute[259550]: 2025-10-07 15:16:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:16:26 np0005473739 nova_compute[259550]: 2025-10-07 15:16:26.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:16:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3455: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:27 np0005473739 nova_compute[259550]: 2025-10-07 15:16:27.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:16:28 np0005473739 nova_compute[259550]: 2025-10-07 15:16:28.906 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3456: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:29 np0005473739 nova_compute[259550]: 2025-10-07 15:16:29.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:29 np0005473739 nova_compute[259550]: 2025-10-07 15:16:29.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:16:30 np0005473739 podman[450249]: 2025-10-07 15:16:30.073453887 +0000 UTC m=+0.060874681 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 11:16:30 np0005473739 podman[450250]: 2025-10-07 15:16:30.074445124 +0000 UTC m=+0.059401044 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Oct  7 11:16:30 np0005473739 nova_compute[259550]: 2025-10-07 15:16:30.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:16:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3457: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:31 np0005473739 nova_compute[259550]: 2025-10-07 15:16:31.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:16:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:16:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2385556054' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:16:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:16:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2385556054' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:16:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3458: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:33 np0005473739 nova_compute[259550]: 2025-10-07 15:16:33.910 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:33 np0005473739 nova_compute[259550]: 2025-10-07 15:16:33.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:16:34 np0005473739 nova_compute[259550]: 2025-10-07 15:16:34.683 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3459: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3460: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:37 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:38 np0005473739 nova_compute[259550]: 2025-10-07 15:16:38.915 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3461: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:39 np0005473739 nova_compute[259550]: 2025-10-07 15:16:39.714 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3462: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3463: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:43 np0005473739 nova_compute[259550]: 2025-10-07 15:16:43.967 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:44 np0005473739 nova_compute[259550]: 2025-10-07 15:16:44.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3464: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:46 np0005473739 podman[450291]: 2025-10-07 15:16:46.094312236 +0000 UTC m=+0.086813744 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Oct  7 11:16:46 np0005473739 podman[450292]: 2025-10-07 15:16:46.100852659 +0000 UTC m=+0.091734354 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:16:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3465: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:47 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:47 np0005473739 nova_compute[259550]: 2025-10-07 15:16:47.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.014 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:16:48 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:16:48 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1277041791' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.546 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.692 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.693 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3601MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.693 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.693 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.751 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.751 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.767 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:16:48 np0005473739 nova_compute[259550]: 2025-10-07 15:16:48.969 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:49 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:16:49 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3081836187' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:16:49 np0005473739 nova_compute[259550]: 2025-10-07 15:16:49.232 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.466s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:16:49 np0005473739 nova_compute[259550]: 2025-10-07 15:16:49.239 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:16:49 np0005473739 nova_compute[259550]: 2025-10-07 15:16:49.258 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:16:49 np0005473739 nova_compute[259550]: 2025-10-07 15:16:49.260 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:16:49 np0005473739 nova_compute[259550]: 2025-10-07 15:16:49.260 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:16:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3466: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:49 np0005473739 nova_compute[259550]: 2025-10-07 15:16:49.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3467: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:52 np0005473739 nova_compute[259550]: 2025-10-07 15:16:52.261 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:16:52 np0005473739 nova_compute[259550]: 2025-10-07 15:16:52.262 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:16:52 np0005473739 nova_compute[259550]: 2025-10-07 15:16:52.262 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:16:52 np0005473739 nova_compute[259550]: 2025-10-07 15:16:52.297 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:16:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:16:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:16:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:16:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:16:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:16:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:16:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3468: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:53 np0005473739 nova_compute[259550]: 2025-10-07 15:16:53.973 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:53 np0005473739 nova_compute[259550]: 2025-10-07 15:16:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:16:54 np0005473739 nova_compute[259550]: 2025-10-07 15:16:54.719 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3469: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3470: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:16:58 np0005473739 nova_compute[259550]: 2025-10-07 15:16:58.978 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:16:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3471: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:16:59 np0005473739 nova_compute[259550]: 2025-10-07 15:16:59.722 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:17:00.118 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:17:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:17:00.119 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:17:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:17:00.119 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:17:01 np0005473739 podman[450376]: 2025-10-07 15:17:01.06715326 +0000 UTC m=+0.052172443 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.license=GPLv2)
Oct  7 11:17:01 np0005473739 podman[450377]: 2025-10-07 15:17:01.095776634 +0000 UTC m=+0.081358162 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=iscsid, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=iscsid)
Oct  7 11:17:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3472: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:02 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:17:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3473: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:03 np0005473739 nova_compute[259550]: 2025-10-07 15:17:03.982 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:04 np0005473739 nova_compute[259550]: 2025-10-07 15:17:04.723 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3474: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:05 np0005473739 podman[450589]: 2025-10-07 15:17:05.743654875 +0000 UTC m=+0.130755480 container exec f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:17:05 np0005473739 podman[450589]: 2025-10-07 15:17:05.870617845 +0000 UTC m=+0.257718460 container exec_died f803401b563e7daa4638d591e1a62b8c30e5f510f6be54cff1c5cb4f81d20b63 (image=quay.io/ceph/ceph:v18, name=ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mon-compute-0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef)
Oct  7 11:17:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:17:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:17:06 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:17:06 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:17:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3475: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:17:07 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d22b4abc-bf75-420a-ada5-30d408deab50 does not exist
Oct  7 11:17:07 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 76976241-f4d4-4fd6-b83a-1f932378c2f3 does not exist
Oct  7 11:17:07 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3c1abc65-b67a-43de-a5ea-54c9c5f42538 does not exist
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:17:07 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:17:08 np0005473739 podman[451019]: 2025-10-07 15:17:08.57210003 +0000 UTC m=+0.060620795 container create 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:17:08 np0005473739 systemd[1]: Started libpod-conmon-18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d.scope.
Oct  7 11:17:08 np0005473739 podman[451019]: 2025-10-07 15:17:08.539827231 +0000 UTC m=+0.028348046 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:17:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:17:08 np0005473739 podman[451019]: 2025-10-07 15:17:08.670309983 +0000 UTC m=+0.158830858 container init 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_REF=reef)
Oct  7 11:17:08 np0005473739 podman[451019]: 2025-10-07 15:17:08.67969627 +0000 UTC m=+0.168217035 container start 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:17:08 np0005473739 exciting_chaplygin[451035]: 167 167
Oct  7 11:17:08 np0005473739 systemd[1]: libpod-18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d.scope: Deactivated successfully.
Oct  7 11:17:08 np0005473739 conmon[451035]: conmon 18ba261746bddfc86906 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d.scope/container/memory.events
Oct  7 11:17:08 np0005473739 podman[451019]: 2025-10-07 15:17:08.696168214 +0000 UTC m=+0.184689079 container attach 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:17:08 np0005473739 podman[451019]: 2025-10-07 15:17:08.696901493 +0000 UTC m=+0.185422298 container died 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3)
Oct  7 11:17:08 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7019f26ad2b515eafc39b1aa8a45c252083dd84bf7dfcd5b581e1201c331a796-merged.mount: Deactivated successfully.
Oct  7 11:17:08 np0005473739 podman[451019]: 2025-10-07 15:17:08.848178041 +0000 UTC m=+0.336698836 container remove 18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_chaplygin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 11:17:08 np0005473739 systemd[1]: libpod-conmon-18ba261746bddfc869065071cff10c0c44d77928b9e869b13ef92af530ca829d.scope: Deactivated successfully.
Oct  7 11:17:08 np0005473739 nova_compute[259550]: 2025-10-07 15:17:08.987 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:09 np0005473739 podman[451062]: 2025-10-07 15:17:09.082561236 +0000 UTC m=+0.074254483 container create 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True)
Oct  7 11:17:09 np0005473739 podman[451062]: 2025-10-07 15:17:09.036503205 +0000 UTC m=+0.028196532 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:17:09 np0005473739 systemd[1]: Started libpod-conmon-878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723.scope.
Oct  7 11:17:09 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:17:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:09 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:09 np0005473739 podman[451062]: 2025-10-07 15:17:09.195341013 +0000 UTC m=+0.187034340 container init 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 11:17:09 np0005473739 podman[451062]: 2025-10-07 15:17:09.209482174 +0000 UTC m=+0.201175411 container start 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507)
Oct  7 11:17:09 np0005473739 podman[451062]: 2025-10-07 15:17:09.21767956 +0000 UTC m=+0.209372897 container attach 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 11:17:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3476: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:09 np0005473739 nova_compute[259550]: 2025-10-07 15:17:09.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:10 np0005473739 heuristic_greider[451079]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:17:10 np0005473739 heuristic_greider[451079]: --> relative data size: 1.0
Oct  7 11:17:10 np0005473739 heuristic_greider[451079]: --> All data devices are unavailable
Oct  7 11:17:10 np0005473739 systemd[1]: libpod-878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723.scope: Deactivated successfully.
Oct  7 11:17:10 np0005473739 systemd[1]: libpod-878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723.scope: Consumed 1.035s CPU time.
Oct  7 11:17:10 np0005473739 podman[451062]: 2025-10-07 15:17:10.291374591 +0000 UTC m=+1.283067828 container died 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 11:17:10 np0005473739 systemd[1]: var-lib-containers-storage-overlay-72827cc906f0dc57a7dfd8b48be15bc5d5e5e6f96975d5e968a7c7a745ac9bbe-merged.mount: Deactivated successfully.
Oct  7 11:17:10 np0005473739 podman[451062]: 2025-10-07 15:17:10.643296238 +0000 UTC m=+1.634989515 container remove 878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_greider, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 11:17:10 np0005473739 systemd[1]: libpod-conmon-878218513e55b55123dd8e9baf300fdc88dae83253426a6fccd045ba3c689723.scope: Deactivated successfully.
Oct  7 11:17:11 np0005473739 podman[451259]: 2025-10-07 15:17:11.350405197 +0000 UTC m=+0.083309233 container create a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True)
Oct  7 11:17:11 np0005473739 podman[451259]: 2025-10-07 15:17:11.292581996 +0000 UTC m=+0.025486042 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:17:11 np0005473739 systemd[1]: Started libpod-conmon-a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715.scope.
Oct  7 11:17:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3477: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:11 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:17:11 np0005473739 podman[451259]: 2025-10-07 15:17:11.488683143 +0000 UTC m=+0.221587189 container init a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:17:11 np0005473739 podman[451259]: 2025-10-07 15:17:11.497027663 +0000 UTC m=+0.229931659 container start a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2)
Oct  7 11:17:11 np0005473739 podman[451259]: 2025-10-07 15:17:11.504266273 +0000 UTC m=+0.237170349 container attach a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True)
Oct  7 11:17:11 np0005473739 nervous_allen[451276]: 167 167
Oct  7 11:17:11 np0005473739 systemd[1]: libpod-a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715.scope: Deactivated successfully.
Oct  7 11:17:11 np0005473739 conmon[451276]: conmon a9997cf5bd7170a1bd81 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715.scope/container/memory.events
Oct  7 11:17:11 np0005473739 podman[451259]: 2025-10-07 15:17:11.509315566 +0000 UTC m=+0.242219672 container died a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:17:11 np0005473739 systemd[1]: var-lib-containers-storage-overlay-f6b97085cca2ebeb335a05cd450b5aa457b2bdd7f0e9ff24a2d4d1dcd55907eb-merged.mount: Deactivated successfully.
Oct  7 11:17:11 np0005473739 podman[451259]: 2025-10-07 15:17:11.688860379 +0000 UTC m=+0.421764415 container remove a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=nervous_allen, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:17:11 np0005473739 systemd[1]: libpod-conmon-a9997cf5bd7170a1bd814e31c1da5265b61dd8804290670f5686404d44834715.scope: Deactivated successfully.
Oct  7 11:17:11 np0005473739 podman[451302]: 2025-10-07 15:17:11.89725952 +0000 UTC m=+0.074670415 container create bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0)
Oct  7 11:17:11 np0005473739 podman[451302]: 2025-10-07 15:17:11.852298788 +0000 UTC m=+0.029709723 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:17:11 np0005473739 systemd[1]: Started libpod-conmon-bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a.scope.
Oct  7 11:17:12 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:17:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee79ca10faee88d1492d305198eff217eff0812223dc3eb776e574ad822e2ae/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee79ca10faee88d1492d305198eff217eff0812223dc3eb776e574ad822e2ae/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee79ca10faee88d1492d305198eff217eff0812223dc3eb776e574ad822e2ae/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:12 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cee79ca10faee88d1492d305198eff217eff0812223dc3eb776e574ad822e2ae/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:12 np0005473739 podman[451302]: 2025-10-07 15:17:12.066021179 +0000 UTC m=+0.243432134 container init bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507)
Oct  7 11:17:12 np0005473739 podman[451302]: 2025-10-07 15:17:12.077710806 +0000 UTC m=+0.255121721 container start bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 11:17:12 np0005473739 podman[451302]: 2025-10-07 15:17:12.113807276 +0000 UTC m=+0.291218191 container attach bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3)
Oct  7 11:17:12 np0005473739 systemd-logind[801]: New session 57 of user zuul.
Oct  7 11:17:12 np0005473739 systemd[1]: Started Session 57 of User zuul.
Oct  7 11:17:12 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]: {
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:    "0": [
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:        {
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "devices": [
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "/dev/loop3"
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            ],
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_name": "ceph_lv0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_size": "21470642176",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "name": "ceph_lv0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "tags": {
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.cluster_name": "ceph",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.crush_device_class": "",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.encrypted": "0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.osd_id": "0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.type": "block",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.vdo": "0"
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            },
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "type": "block",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "vg_name": "ceph_vg0"
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:        }
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:    ],
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:    "1": [
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:        {
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "devices": [
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "/dev/loop4"
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            ],
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_name": "ceph_lv1",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_size": "21470642176",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "name": "ceph_lv1",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "tags": {
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.cluster_name": "ceph",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.crush_device_class": "",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.encrypted": "0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.osd_id": "1",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.type": "block",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.vdo": "0"
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            },
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "type": "block",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "vg_name": "ceph_vg1"
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:        }
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:    ],
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:    "2": [
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:        {
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "devices": [
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "/dev/loop5"
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            ],
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_name": "ceph_lv2",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_size": "21470642176",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "name": "ceph_lv2",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "tags": {
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.cluster_name": "ceph",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.crush_device_class": "",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.encrypted": "0",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.osd_id": "2",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.type": "block",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:                "ceph.vdo": "0"
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            },
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "type": "block",
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:            "vg_name": "ceph_vg2"
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:        }
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]:    ]
Oct  7 11:17:12 np0005473739 bold_varahamihira[451318]: }
Oct  7 11:17:13 np0005473739 podman[451365]: 2025-10-07 15:17:13.062291443 +0000 UTC m=+0.033729198 container died bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.license=GPLv2)
Oct  7 11:17:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3478: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:13 np0005473739 systemd[1]: libpod-bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a.scope: Deactivated successfully.
Oct  7 11:17:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-cee79ca10faee88d1492d305198eff217eff0812223dc3eb776e574ad822e2ae-merged.mount: Deactivated successfully.
Oct  7 11:17:13 np0005473739 nova_compute[259550]: 2025-10-07 15:17:13.991 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:14 np0005473739 podman[451365]: 2025-10-07 15:17:14.009167168 +0000 UTC m=+0.980604943 container remove bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=bold_varahamihira, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:17:14 np0005473739 systemd[1]: libpod-conmon-bf17ea06ca2df7aa1d95c0553d5896b6d59fde4d174ba7320068c2307072769a.scope: Deactivated successfully.
Oct  7 11:17:14 np0005473739 podman[451567]: 2025-10-07 15:17:14.67759901 +0000 UTC m=+0.045400966 container create 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:17:14 np0005473739 systemd[1]: Started libpod-conmon-5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d.scope.
Oct  7 11:17:14 np0005473739 nova_compute[259550]: 2025-10-07 15:17:14.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:17:14 np0005473739 podman[451567]: 2025-10-07 15:17:14.656237919 +0000 UTC m=+0.024039885 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:17:14 np0005473739 podman[451567]: 2025-10-07 15:17:14.766261262 +0000 UTC m=+0.134063238 container init 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:17:14 np0005473739 podman[451567]: 2025-10-07 15:17:14.774385016 +0000 UTC m=+0.142186992 container start 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:17:14 np0005473739 podman[451567]: 2025-10-07 15:17:14.778139425 +0000 UTC m=+0.145941381 container attach 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:17:14 np0005473739 jolly_payne[451599]: 167 167
Oct  7 11:17:14 np0005473739 systemd[1]: libpod-5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d.scope: Deactivated successfully.
Oct  7 11:17:14 np0005473739 podman[451567]: 2025-10-07 15:17:14.780883737 +0000 UTC m=+0.148685713 container died 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:17:14 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c483ce2195fbc27755e079a85699c838055a8a7873078da3df7c8cde3478edcc-merged.mount: Deactivated successfully.
Oct  7 11:17:14 np0005473739 podman[451567]: 2025-10-07 15:17:14.832202956 +0000 UTC m=+0.200004912 container remove 5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_payne, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 11:17:14 np0005473739 systemd[1]: libpod-conmon-5e22cd42adb4c15256aa2e793494c8e9d4d38d68306b25e0a1b27ec6aee3da8d.scope: Deactivated successfully.
Oct  7 11:17:14 np0005473739 nova_compute[259550]: 2025-10-07 15:17:14.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:17:14 np0005473739 nova_compute[259550]: 2025-10-07 15:17:14.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 11:17:15 np0005473739 nova_compute[259550]: 2025-10-07 15:17:15.015 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 11:17:15 np0005473739 podman[451630]: 2025-10-07 15:17:15.068360088 +0000 UTC m=+0.087168013 container create 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507)
Oct  7 11:17:15 np0005473739 podman[451630]: 2025-10-07 15:17:15.0246885 +0000 UTC m=+0.043496505 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:17:15 np0005473739 systemd[1]: Started libpod-conmon-58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723.scope.
Oct  7 11:17:15 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:17:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a5c4d46e9e65800afcdc681ff21b8d8315b26572dcd0273d8bad1501260f8b/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a5c4d46e9e65800afcdc681ff21b8d8315b26572dcd0273d8bad1501260f8b/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a5c4d46e9e65800afcdc681ff21b8d8315b26572dcd0273d8bad1501260f8b/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:15 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/23a5c4d46e9e65800afcdc681ff21b8d8315b26572dcd0273d8bad1501260f8b/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:17:15 np0005473739 podman[451630]: 2025-10-07 15:17:15.188587311 +0000 UTC m=+0.207395246 container init 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, ceph=True, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 11:17:15 np0005473739 podman[451630]: 2025-10-07 15:17:15.196859568 +0000 UTC m=+0.215667483 container start 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef)
Oct  7 11:17:15 np0005473739 podman[451630]: 2025-10-07 15:17:15.205734621 +0000 UTC m=+0.224542546 container attach 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 11:17:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3479: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:15 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23073 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:16 np0005473739 exciting_galois[451650]: {
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "osd_id": 2,
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "type": "bluestore"
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:    },
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "osd_id": 1,
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "type": "bluestore"
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:    },
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "osd_id": 0,
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:        "type": "bluestore"
Oct  7 11:17:16 np0005473739 exciting_galois[451650]:    }
Oct  7 11:17:16 np0005473739 exciting_galois[451650]: }
Oct  7 11:17:16 np0005473739 systemd[1]: libpod-58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723.scope: Deactivated successfully.
Oct  7 11:17:16 np0005473739 systemd[1]: libpod-58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723.scope: Consumed 1.138s CPU time.
Oct  7 11:17:16 np0005473739 podman[451630]: 2025-10-07 15:17:16.329755606 +0000 UTC m=+1.348563561 container died 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Oct  7 11:17:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-23a5c4d46e9e65800afcdc681ff21b8d8315b26572dcd0273d8bad1501260f8b-merged.mount: Deactivated successfully.
Oct  7 11:17:16 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23075 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:17 np0005473739 podman[451630]: 2025-10-07 15:17:17.114547377 +0000 UTC m=+2.133355292 container remove 58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=exciting_galois, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:17:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  7 11:17:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1867773775' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  7 11:17:17 np0005473739 systemd[1]: libpod-conmon-58a3135f82b89b16c7f17db3c2abcdcc0e2d4c139530136d6fc88afd389c5723.scope: Deactivated successfully.
Oct  7 11:17:17 np0005473739 podman[451776]: 2025-10-07 15:17:17.18797377 +0000 UTC m=+0.822789293 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Oct  7 11:17:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:17:17 np0005473739 podman[451783]: 2025-10-07 15:17:17.219402526 +0000 UTC m=+0.851992050 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller)
Oct  7 11:17:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:17:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:17:17 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:17:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 5f465ae1-7121-4790-ada9-b10e40466223 does not exist
Oct  7 11:17:17 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f1c1ffcb-eaee-431e-8cb3-2efc4b67a11d does not exist
Oct  7 11:17:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3480: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:17 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:17:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:17:17 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:17:18 np0005473739 nova_compute[259550]: 2025-10-07 15:17:18.995 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3481: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:19 np0005473739 nova_compute[259550]: 2025-10-07 15:17:19.729 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:20 np0005473739 ovs-vsctl[451960]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  7 11:17:21 np0005473739 nova_compute[259550]: 2025-10-07 15:17:21.014 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:17:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3482: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:21 np0005473739 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  7 11:17:21 np0005473739 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  7 11:17:21 np0005473739 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  7 11:17:22 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: cache status {prefix=cache status} (starting...)
Oct  7 11:17:22 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:17:22 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: client ls {prefix=client ls} (starting...)
Oct  7 11:17:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:17:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:17:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:17:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:17:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:17:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:17:22 np0005473739 lvm[452306]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  7 11:17:22 np0005473739 lvm[452305]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  7 11:17:22 np0005473739 lvm[452305]: VG ceph_vg1 finished
Oct  7 11:17:22 np0005473739 lvm[452306]: VG ceph_vg2 finished
Oct  7 11:17:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:17:22
Oct  7 11:17:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:17:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:17:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['default.rgw.log', 'cephfs.cephfs.meta', '.mgr', 'images', 'volumes', 'backups', '.rgw.root', 'cephfs.cephfs.data', 'vms', 'default.rgw.meta', 'default.rgw.control']
Oct  7 11:17:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:17:22 np0005473739 lvm[452340]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  7 11:17:22 np0005473739 lvm[452340]: VG ceph_vg0 finished
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23079 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:23 np0005473739 kernel: block loop4: the capability attribute has been deprecated.
Oct  7 11:17:23 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: damage ls {prefix=damage ls} (starting...)
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3483: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:23 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump loads {prefix=dump loads} (starting...)
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23081 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:17:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:17:23 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  7 11:17:23 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  7 11:17:23 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  7 11:17:24 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  7 11:17:24 np0005473739 nova_compute[259550]: 2025-10-07 15:17:24.034 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  7 11:17:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/183215034' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  7 11:17:24 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  7 11:17:24 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  7 11:17:24 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23087 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:24 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:17:24.478+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  7 11:17:24 np0005473739 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  7 11:17:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:17:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/207903652' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:17:24 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: ops {prefix=ops} (starting...)
Oct  7 11:17:24 np0005473739 nova_compute[259550]: 2025-10-07 15:17:24.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  7 11:17:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2192595890' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  7 11:17:24 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  7 11:17:24 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3957854229' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  7 11:17:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  7 11:17:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4109361635' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  7 11:17:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  7 11:17:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2688670234' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  7 11:17:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3484: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:25 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: session ls {prefix=session ls} (starting...)
Oct  7 11:17:25 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: status {prefix=status} (starting...)
Oct  7 11:17:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  7 11:17:25 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4204556699' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  7 11:17:25 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23101 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:26 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23105 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  7 11:17:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/976810913' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  7 11:17:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  7 11:17:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1545421245' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  7 11:17:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  7 11:17:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/901755309' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  7 11:17:26 np0005473739 nova_compute[259550]: 2025-10-07 15:17:26.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:17:26 np0005473739 nova_compute[259550]: 2025-10-07 15:17:26.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:17:26 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  7 11:17:26 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1334750667' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  7 11:17:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  7 11:17:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1911627210' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  7 11:17:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3485: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:27 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23117 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:27 np0005473739 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  7 11:17:27 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:17:27.452+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  7 11:17:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  7 11:17:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1790706680' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  7 11:17:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:17:27 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23121 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  7 11:17:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2601673510' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  7 11:17:28 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23123 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  7 11:17:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2818234284' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  7 11:17:28 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23127 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  7 11:17:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2590059880' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  7 11:17:28 np0005473739 nova_compute[259550]: 2025-10-07 15:17:28.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:17:28 np0005473739 nova_compute[259550]: 2025-10-07 15:17:28.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:17:29 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23131 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:29 np0005473739 nova_compute[259550]: 2025-10-07 15:17:29.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  7 11:17:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2873225065' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ece55000/0x0/0x4ffc00000, data 0x321cda1/0x33a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280879104 unmapped: 47890432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ece55000/0x0/0x4ffc00000, data 0x321cda1/0x33a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3267813 data_alloc: 234881024 data_used: 32980992
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280887296 unmapped: 47882240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280887296 unmapped: 47882240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ece55000/0x0/0x4ffc00000, data 0x321cda1/0x33a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6e000 session 0x55f4fed55860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280887296 unmapped: 47882240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280887296 unmapped: 47882240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281911296 unmapped: 46858240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3301733 data_alloc: 234881024 data_used: 37003264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283025408 unmapped: 45744128 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ece55000/0x0/0x4ffc00000, data 0x321cda1/0x33a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283025408 unmapped: 45744128 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283025408 unmapped: 45744128 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.247110367s of 14.595113754s, submitted: 73
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283033600 unmapped: 45735936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283033600 unmapped: 45735936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ece59000/0x0/0x4ffc00000, data 0x321dda1/0x33a4000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3297529 data_alloc: 234881024 data_used: 37003264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283033600 unmapped: 45735936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283033600 unmapped: 45735936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287014912 unmapped: 41754624 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3370667 data_alloc: 234881024 data_used: 37146624
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec4fb000/0x0/0x4ffc00000, data 0x3b7cda1/0x3d03000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.609622955s of 10.946825981s, submitted: 59
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287113216 unmapped: 41656320 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 40304640 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec0f0000/0x0/0x4ffc00000, data 0x3f7fda1/0x4106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3406277 data_alloc: 234881024 data_used: 37318656
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288727040 unmapped: 40042496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288972800 unmapped: 39796736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288989184 unmapped: 39780352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288989184 unmapped: 39780352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec072000/0x0/0x4ffc00000, data 0x3ffdda1/0x4184000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288989184 unmapped: 39780352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3415403 data_alloc: 234881024 data_used: 37376000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec072000/0x0/0x4ffc00000, data 0x3ffdda1/0x4184000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288989184 unmapped: 39780352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288989184 unmapped: 39780352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288317440 unmapped: 40452096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec059000/0x0/0x4ffc00000, data 0x401eda1/0x41a5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288317440 unmapped: 40452096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288317440 unmapped: 40452096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3410287 data_alloc: 234881024 data_used: 37376000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec059000/0x0/0x4ffc00000, data 0x401eda1/0x41a5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec059000/0x0/0x4ffc00000, data 0x401eda1/0x41a5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3410447 data_alloc: 234881024 data_used: 37380096
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288325632 unmapped: 40443904 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 40435712 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf400 session 0x55f4fdfa8b40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fed665a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 40435712 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.526844025s of 19.174123764s, submitted: 54
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fe016d20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3332683 data_alloc: 234881024 data_used: 36712448
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eca3a000/0x0/0x4ffc00000, data 0x363dda1/0x37c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 40427520 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fed66b40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94c00 session 0x55f4fecc2960
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280305664 unmapped: 48463872 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48c00 session 0x55f4fd321860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3129665 data_alloc: 234881024 data_used: 24416256
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280305664 unmapped: 48463872 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280305664 unmapped: 48463872 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed6af000/0x0/0x4ffc00000, data 0x25c6d3f/0x274c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 280305664 unmapped: 48463872 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fd065800 session 0x55f4fcf212c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fd2e4d20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.698882103s of 10.823541641s, submitted: 39
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 279519232 unmapped: 49250304 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fdfa7860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275791872 unmapped: 52977664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee442000/0x0/0x4ffc00000, data 0x1c37d2f/0x1dbc000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3020159 data_alloc: 218103808 data_used: 19435520
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275791872 unmapped: 52977664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275791872 unmapped: 52977664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275791872 unmapped: 52977664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdf8b0e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fef4a5a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48c00 session 0x55f4ff0eaf00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272793600 unmapped: 55975936 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272801792 unmapped: 55967744 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4feefa780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4feefb2c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fcf4cd20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fd2e4960
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94c00 session 0x55f4fcf210e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272809984 unmapped: 55959552 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272818176 unmapped: 55951360 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2886229 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272818176 unmapped: 55951360 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272818176 unmapped: 55951360 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272826368 unmapped: 55943168 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.326816559s of 39.382202148s, submitted: 15
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ffa01680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272842752 unmapped: 55926784 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272842752 unmapped: 55926784 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffe29c20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fe023a40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fd29d4a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2887858 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6e000 session 0x55f4fe016000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272842752 unmapped: 55926784 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6e000 session 0x55f4fed66d20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdd10d20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ff9e85a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4ff0ea960
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fd402960
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea17000/0x0/0x4ffc00000, data 0x1661d3f/0x17e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2928261 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea17000/0x0/0x4ffc00000, data 0x1661d3f/0x17e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea17000/0x0/0x4ffc00000, data 0x1661d3f/0x17e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 272957440 unmapped: 55812096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.332861900s of 12.470142365s, submitted: 24
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944057 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274358272 unmapped: 54411264 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee80e000/0x0/0x4ffc00000, data 0x186ad3f/0x19f0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4ff9e8960
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274382848 unmapped: 54386688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee7e2000/0x0/0x4ffc00000, data 0x1896d3f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274382848 unmapped: 54386688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274382848 unmapped: 54386688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: mgrc ms_handle_reset ms_handle_reset con 0x55f4fdfd5c00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: mgrc handle_mgr_configure stats_period=5
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2979497 data_alloc: 218103808 data_used: 17051648
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee7e2000/0x0/0x4ffc00000, data 0x1896d3f/0x1a1c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2977945 data_alloc: 218103808 data_used: 17051648
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee7df000/0x0/0x4ffc00000, data 0x1899d3f/0x1a1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee7df000/0x0/0x4ffc00000, data 0x1899d3f/0x1a1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee7df000/0x0/0x4ffc00000, data 0x1899d3f/0x1a1f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 274464768 unmapped: 54304768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.412960052s of 13.534274101s, submitted: 32
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275062784 unmapped: 53706752 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3032607 data_alloc: 218103808 data_used: 17084416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fd2e43c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffe29a40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276766720 unmapped: 52002816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fed54000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276783104 unmapped: 51986432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276783104 unmapped: 51986432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2ad000/0x0/0x4ffc00000, data 0x1dbdd3f/0x1f43000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276783104 unmapped: 51986432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2ad000/0x0/0x4ffc00000, data 0x1dbdd3f/0x1f43000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014203 data_alloc: 218103808 data_used: 17084416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276783104 unmapped: 51986432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2ad000/0x0/0x4ffc00000, data 0x1dbdd3f/0x1f43000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276799488 unmapped: 51970048 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276742144 unmapped: 52027392 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276742144 unmapped: 52027392 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276742144 unmapped: 52027392 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2b8000/0x0/0x4ffc00000, data 0x1dc0d3f/0x1f46000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3012987 data_alloc: 218103808 data_used: 17084416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276750336 unmapped: 52019200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276750336 unmapped: 52019200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276750336 unmapped: 52019200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6e000 session 0x55f4ffdba1e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fedade00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2b8000/0x0/0x4ffc00000, data 0x1dc0d3f/0x1f46000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff0ea780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276750336 unmapped: 52019200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fe024000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.059223175s of 15.161356926s, submitted: 70
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fe023e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf400 session 0x55f4ff98ad20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fcf4be00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277807104 unmapped: 50962432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fed66f00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fe0ea5a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3044288 data_alloc: 218103808 data_used: 17088512
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277807104 unmapped: 50962432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277807104 unmapped: 50962432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb6000/0x0/0x4ffc00000, data 0x21c0db1/0x2348000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277807104 unmapped: 50962432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277807104 unmapped: 50962432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4ff98ad20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277815296 unmapped: 50954240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65c400 session 0x55f4fe023e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3044288 data_alloc: 218103808 data_used: 17088512
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fe024000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277815296 unmapped: 50954240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff0ea780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277815296 unmapped: 50954240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277831680 unmapped: 50937856 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb6000/0x0/0x4ffc00000, data 0x21c0db1/0x2348000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074048 data_alloc: 218103808 data_used: 21282816
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb6000/0x0/0x4ffc00000, data 0x21c0db1/0x2348000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb6000/0x0/0x4ffc00000, data 0x21c0db1/0x2348000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb6000/0x0/0x4ffc00000, data 0x21c0db1/0x2348000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074048 data_alloc: 218103808 data_used: 21282816
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 17.183795929s of 17.558851242s, submitted: 22
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277979136 unmapped: 50790400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3097804 data_alloc: 218103808 data_used: 21282816
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edc5e000/0x0/0x4ffc00000, data 0x2418db1/0x25a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277979136 unmapped: 50790400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277979136 unmapped: 50790400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277979136 unmapped: 50790400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edc5e000/0x0/0x4ffc00000, data 0x2418db1/0x25a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277979136 unmapped: 50790400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edc5e000/0x0/0x4ffc00000, data 0x2418db1/0x25a0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277987328 unmapped: 50782208 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3097804 data_alloc: 218103808 data_used: 21282816
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277987328 unmapped: 50782208 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277987328 unmapped: 50782208 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277995520 unmapped: 50774016 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.474517822s of 12.056572914s, submitted: 25
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65c400 session 0x55f4ffdba1e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fcf4ba40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 279052288 unmapped: 49717248 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffacf800 session 0x55f4fdd5af00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019490 data_alloc: 218103808 data_used: 17084416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee2b5000/0x0/0x4ffc00000, data 0x1dc0d3f/0x1f46000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277839872 unmapped: 50929664 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277848064 unmapped: 50921472 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf21e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fd29d680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277848064 unmapped: 50921472 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fef4a780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275644416 unmapped: 53125120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275652608 unmapped: 53116928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2904568 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeac000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275660800 unmapped: 53108736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275668992 unmapped: 53100544 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 39.611000061s of 40.106571198s, submitted: 41
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 279863296 unmapped: 48906240 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdd5b0e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65c400 session 0x55f4fdfa8f00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf503c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275668992 unmapped: 53100544 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4ffa0da40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fedada40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2932616 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275668992 unmapped: 53100544 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffe28b40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeaac000/0x0/0x4ffc00000, data 0x15cdd2f/0x1752000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff65cc00 session 0x55f4fecc30e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275668992 unmapped: 53100544 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff0eb0e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275677184 unmapped: 53092352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fdd112c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275677184 unmapped: 53092352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeaaa000/0x0/0x4ffc00000, data 0x15cdd62/0x1754000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeaaa000/0x0/0x4ffc00000, data 0x15cdd62/0x1754000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275677184 unmapped: 53092352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2966043 data_alloc: 218103808 data_used: 17567744
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeaaa000/0x0/0x4ffc00000, data 0x15cdd62/0x1754000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2966043 data_alloc: 218103808 data_used: 17567744
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275693568 unmapped: 53075968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeaaa000/0x0/0x4ffc00000, data 0x15cdd62/0x1754000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275701760 unmapped: 53067776 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275701760 unmapped: 53067776 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2966043 data_alloc: 218103808 data_used: 17567744
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.276357651s of 17.347126007s, submitted: 6
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 275800064 unmapped: 52969472 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee505000/0x0/0x4ffc00000, data 0x1b72d62/0x1cf9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3018643 data_alloc: 218103808 data_used: 17600512
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee505000/0x0/0x4ffc00000, data 0x1b72d62/0x1cf9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee505000/0x0/0x4ffc00000, data 0x1b72d62/0x1cf9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3018643 data_alloc: 218103808 data_used: 17600512
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee505000/0x0/0x4ffc00000, data 0x1b72d62/0x1cf9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.575452805s of 13.935386658s, submitted: 29
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276660224 unmapped: 52109312 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fface000 session 0x55f4fecc3680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9d800 session 0x55f4fcf4c780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95000 session 0x55f4fdd110e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95000 session 0x55f4fed66b40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdfa90e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026528 data_alloc: 218103808 data_used: 17600512
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b6000/0x0/0x4ffc00000, data 0x1bc0dc4/0x1d48000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276668416 unmapped: 52101120 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b6000/0x0/0x4ffc00000, data 0x1bc0dc4/0x1d48000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026528 data_alloc: 218103808 data_used: 17600512
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b6000/0x0/0x4ffc00000, data 0x1bc0dc4/0x1d48000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b6000/0x0/0x4ffc00000, data 0x1bc0dc4/0x1d48000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fdd11c20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fface000 session 0x55f4fef4b860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9d800 session 0x55f4fdd101e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.281874657s of 10.393685341s, submitted: 26
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028747 data_alloc: 218103808 data_used: 17600512
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276676608 unmapped: 52092928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9d800 session 0x55f4fd2e5e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276684800 unmapped: 52084736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276807680 unmapped: 51961856 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b4000/0x0/0x4ffc00000, data 0x1bc0df7/0x1d4a000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029955 data_alloc: 218103808 data_used: 17719296
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x1bc1df7/0x1d4b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x1bc1df7/0x1d4b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3030263 data_alloc: 218103808 data_used: 17719296
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x1bc1df7/0x1d4b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 276733952 unmapped: 52035584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee4b2000/0x0/0x4ffc00000, data 0x1bc1df7/0x1d4b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.995096207s of 13.440644264s, submitted: 6
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282075136 unmapped: 46694400 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3099583 data_alloc: 218103808 data_used: 17780736
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edd3b000/0x0/0x4ffc00000, data 0x232bdf7/0x24b5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092099 data_alloc: 218103808 data_used: 17784832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282476544 unmapped: 46292992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282484736 unmapped: 46284800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282484736 unmapped: 46284800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edd25000/0x0/0x4ffc00000, data 0x234fdf7/0x24d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edd25000/0x0/0x4ffc00000, data 0x234fdf7/0x24d9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282484736 unmapped: 46284800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282484736 unmapped: 46284800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.874958992s of 12.207747459s, submitted: 98
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4feca8000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fcf4d2c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092027 data_alloc: 218103808 data_used: 17784832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282484736 unmapped: 46284800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fe0245a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282509312 unmapped: 46260224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282509312 unmapped: 46260224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282509312 unmapped: 46260224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f5019785a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fd4034a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee502000/0x0/0x4ffc00000, data 0x1b73d62/0x1cfa000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fd402000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277651456 unmapped: 51118080 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277659648 unmapped: 51109888 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277659648 unmapped: 51109888 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277659648 unmapped: 51109888 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277659648 unmapped: 51109888 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277659648 unmapped: 51109888 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277667840 unmapped: 51101696 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eeeab000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2923753 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277676032 unmapped: 51093504 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 43.395324707s of 43.585483551s, submitted: 45
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4ff997680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fd321860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9d800 session 0x55f4fcf514a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf21860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fd320000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0c000/0x0/0x4ffc00000, data 0x126dd2f/0x13f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2933211 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0c000/0x0/0x4ffc00000, data 0x126dd2f/0x13f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff98ab40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa012c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2933211 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fef4af00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0c000/0x0/0x4ffc00000, data 0x126dd2f/0x13f2000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [1])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff98bc20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0b000/0x0/0x4ffc00000, data 0x126dd3f/0x13f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0b000/0x0/0x4ffc00000, data 0x126dd3f/0x13f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277725184 unmapped: 51044352 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0b000/0x0/0x4ffc00000, data 0x126dd3f/0x13f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939369 data_alloc: 218103808 data_used: 13934592
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277733376 unmapped: 51036160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277733376 unmapped: 51036160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277733376 unmapped: 51036160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eee0b000/0x0/0x4ffc00000, data 0x126dd3f/0x13f3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277733376 unmapped: 51036160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277733376 unmapped: 51036160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2939369 data_alloc: 218103808 data_used: 13934592
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 51027968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 51027968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 277741568 unmapped: 51027968 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.298503876s of 19.327520370s, submitted: 4
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282157056 unmapped: 46612480 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee5ba000/0x0/0x4ffc00000, data 0x1ab6d3f/0x1c3c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3015491 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee58e000/0x0/0x4ffc00000, data 0x1ae2d3f/0x1c68000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282173440 unmapped: 46596096 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee58e000/0x0/0x4ffc00000, data 0x1ae2d3f/0x1c68000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010451 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 47316992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 47316992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 47316992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 47316992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281452544 unmapped: 47316992 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010451 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010451 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281460736 unmapped: 47308800 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 47300608 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 47300608 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281468928 unmapped: 47300608 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010451 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010451 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281477120 unmapped: 47292416 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee593000/0x0/0x4ffc00000, data 0x1ae5d3f/0x1c6b000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 31.915740967s of 32.219837189s, submitted: 68
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3010407 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee591000/0x0/0x4ffc00000, data 0x1ae6d3f/0x1c6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xf9ff9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281485312 unmapped: 47284224 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fdd11c20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fed66b40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fcf4c780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fecc3680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fdd112c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3024651 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee099000/0x0/0x4ffc00000, data 0x1bcfd3f/0x1d55000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3024651 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee099000/0x0/0x4ffc00000, data 0x1bcfd3f/0x1d55000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.549773216s of 11.732599258s, submitted: 10
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff0eb0e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281591808 unmapped: 47177728 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281600000 unmapped: 47169536 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee074000/0x0/0x4ffc00000, data 0x1bf3d62/0x1d7a000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033316 data_alloc: 218103808 data_used: 14778368
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee074000/0x0/0x4ffc00000, data 0x1bf3d62/0x1d7a000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3033316 data_alloc: 218103808 data_used: 14778368
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281608192 unmapped: 47161344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 281616384 unmapped: 47153152 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.436887741s of 12.460129738s, submitted: 5
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 46022656 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3085982 data_alloc: 218103808 data_used: 14929920
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282746880 unmapped: 46022656 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edab2000/0x0/0x4ffc00000, data 0x21add62/0x2334000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282517504 unmapped: 46252032 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda35000/0x0/0x4ffc00000, data 0x2232d62/0x23b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3094090 data_alloc: 218103808 data_used: 14921728
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda35000/0x0/0x4ffc00000, data 0x2232d62/0x23b9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092590 data_alloc: 218103808 data_used: 14921728
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282525696 unmapped: 46243840 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 33K writes, 134K keys, 33K commit groups, 1.0 writes per commit group, ingest: 0.13 GB, 0.03 MB/s#012Cumulative WAL: 33K writes, 11K syncs, 2.82 writes per sync, written: 0.13 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2785 writes, 10K keys, 2785 commit groups, 1.0 writes per commit group, ingest: 11.72 MB, 0.02 MB/s#012Interval WAL: 2785 writes, 1097 syncs, 2.54 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda14000/0x0/0x4ffc00000, data 0x2253d62/0x23da000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282533888 unmapped: 46235648 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda14000/0x0/0x4ffc00000, data 0x2253d62/0x23da000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282533888 unmapped: 46235648 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda14000/0x0/0x4ffc00000, data 0x2253d62/0x23da000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda14000/0x0/0x4ffc00000, data 0x2253d62/0x23da000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3092590 data_alloc: 218103808 data_used: 14921728
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eda14000/0x0/0x4ffc00000, data 0x2253d62/0x23da000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fedada40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fcf4b680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.073291779s of 19.583322525s, submitted: 55
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282542080 unmapped: 46227456 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3026900 data_alloc: 218103808 data_used: 14237696
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282566656 unmapped: 46202880 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4ffdbb2c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019712 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282574848 unmapped: 46194688 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019712 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282583040 unmapped: 46186496 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3019712 data_alloc: 218103808 data_used: 14123008
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282591232 unmapped: 46178304 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282591232 unmapped: 46178304 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4feefa1e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fd3210e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282591232 unmapped: 46178304 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282607616 unmapped: 46161920 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 21.432577133s of 21.670606613s, submitted: 19
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee180000/0x0/0x4ffc00000, data 0x1ae7d3f/0x1c6d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,2])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988114 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282615808 unmapped: 46153728 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282615808 unmapped: 46153728 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4ffe28d20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 46145536 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 46145536 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 46145536 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282624000 unmapped: 46145536 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282632192 unmapped: 46137344 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282648576 unmapped: 46120960 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf4b4a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fed55c20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdd5ba40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4ffdbab40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fd403860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282656768 unmapped: 46112768 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2941655 data_alloc: 218103808 data_used: 13373440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff98be00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea9c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fedade00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282664960 unmapped: 46104576 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fc6caf00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.682163239s of 33.369308472s, submitted: 9
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fe0ea5a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 45948928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 45948928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 45916160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 45899776 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 45891584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 45891584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.930875778s of 10.032606125s, submitted: 31
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283410432 unmapped: 45359104 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283951104 unmapped: 44818432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee52b000/0x0/0x4ffc00000, data 0x173ed2f/0x18c3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 44769280 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee50e000/0x0/0x4ffc00000, data 0x175bd2f/0x18e0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [0,0,0,0,0,1])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2991023 data_alloc: 218103808 data_used: 13512704
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 43827200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284950528 unmapped: 43819008 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285999104 unmapped: 42770432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee0e8000/0x0/0x4ffc00000, data 0x1761d2f/0x18e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1022f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2992473 data_alloc: 218103808 data_used: 13512704
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.205342293s of 13.499714851s, submitted: 106
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef545000/0x0/0x4ffc00000, data 0x1764d2f/0x18e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef545000/0x0/0x4ffc00000, data 0x1764d2f/0x18e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993209 data_alloc: 218103808 data_used: 13512704
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 45596672 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fd2e45a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffdbb4a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fd29dc20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fdf8a000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ffe29860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecf1000/0x0/0x4ffc00000, data 0x1fb8d2f/0x213d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053185 data_alloc: 218103808 data_used: 13512704
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fdf8bc20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0c960
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fdd5b2c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053185 data_alloc: 218103808 data_used: 13512704
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.148612022s of 13.547485352s, submitted: 7
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4ffdbba40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecf1000/0x0/0x4ffc00000, data 0x1fb8d2f/0x213d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecef000/0x0/0x4ffc00000, data 0x1fb8d62/0x213f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109156 data_alloc: 218103808 data_used: 21655552
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecef000/0x0/0x4ffc00000, data 0x1fb8d62/0x213f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109156 data_alloc: 218103808 data_used: 21655552
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.794086456s of 11.875811577s, submitted: 5
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 48177152 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289021952 unmapped: 47095808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39e000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187768 data_alloc: 218103808 data_used: 22167552
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289079296 unmapped: 47038464 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39d000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188904 data_alloc: 218103808 data_used: 22589440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39d000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf21e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4ff9972c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.230875969s of 11.849612236s, submitted: 48
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fed86f00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef543000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990971 data_alloc: 218103808 data_used: 13225984
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef543000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef542000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef542000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2991147 data_alloc: 218103808 data_used: 13225984
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fdd5b0e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f501979c20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efab8000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf214a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283795456 unmapped: 52322304 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283803648 unmapped: 52314112 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283803648 unmapped: 52314112 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.062763214s of 38.424762726s, submitted: 20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 49577984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fd29c960
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4feca9860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f501978780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff0eb4a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fd2e4d20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef57d000/0x0/0x4ffc00000, data 0x172cd2f/0x18b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2987362 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffa0d0e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef57d000/0x0/0x4ffc00000, data 0x172cd2f/0x18b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0dc20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4ff9e8780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdfa9e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 51937280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2992039 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 51937280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284327936 unmapped: 51789824 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031879 data_alloc: 218103808 data_used: 18636800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031879 data_alloc: 218103808 data_used: 18636800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.940246582s of 19.882999420s, submitted: 27
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 49446912 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 48250880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3111081 data_alloc: 218103808 data_used: 20406272
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.229282379s of 29.198295593s, submitted: 69
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fed67e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc3860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286769152 unmapped: 49348608 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ff9e8000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9f800 session 0x55f4ff9e9680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fe026780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4feca81e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff9974a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.315746307s of 25.564805984s, submitted: 37
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4feca85a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdb38c00 session 0x55f4ffe29e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fed55860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4feca9680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fedac1e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982457 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fe026780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982457 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993657 data_alloc: 218103808 data_used: 14663680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993657 data_alloc: 218103808 data_used: 14663680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 49307648 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.177154541s of 20.592260361s, submitted: 12
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee22c000/0x0/0x4ffc00000, data 0x18dcd3f/0x1a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028855 data_alloc: 218103808 data_used: 14909440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 51036160 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 51036160 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee227000/0x0/0x4ffc00000, data 0x18e0d3f/0x1a66000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029661 data_alloc: 218103808 data_used: 14909440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029661 data_alloc: 218103808 data_used: 14909440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6f000 session 0x55f4ff98b680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdfa85a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fed663c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc3e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.534084320s of 16.210994720s, submitted: 47
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 49561600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0cf00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe29860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fdd5b0e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff996d20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fcf4ba40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063882 data_alloc: 218103808 data_used: 14909440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063882 data_alloc: 218103808 data_used: 14909440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdd112c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083327 data_alloc: 218103808 data_used: 17149952
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083327 data_alloc: 218103808 data_used: 17149952
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285548544 unmapped: 50569216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.206413269s of 21.587003708s, submitted: 31
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130179 data_alloc: 218103808 data_used: 17604608
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288866304 unmapped: 47251456 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec7db000/0x0/0x4ffc00000, data 0x218dd3f/0x2313000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec758000/0x0/0x4ffc00000, data 0x220fd3f/0x2395000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec758000/0x0/0x4ffc00000, data 0x220fd3f/0x2395000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3148909 data_alloc: 218103808 data_used: 18034688
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289267712 unmapped: 46850048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fdd5be00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe017c20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289275904 unmapped: 46841856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.451373100s of 10.043084145s, submitted: 88
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038639 data_alloc: 218103808 data_used: 14909440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fc6ca3c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed05e000/0x0/0x4ffc00000, data 0x190ad3f/0x1a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff9e9680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fecd3e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fcf51e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.709793091s of 29.150884628s, submitted: 12
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 38690816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028468 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289628160 unmapped: 46489600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fef4a000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdd5a3c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fef4b2c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe0161e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff997a40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028484 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fd403860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f5019781e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4ffa0c3c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.679104328s of 10.049762726s, submitted: 34
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fecd34a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 46473216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031495 data_alloc: 218103808 data_used: 13090816
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 46473216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 290816000 unmapped: 45301760 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3089575 data_alloc: 218103808 data_used: 21217280
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3089575 data_alloc: 218103808 data_used: 21217280
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.181274414s of 13.181275368s, submitted: 0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 41181184 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3176615 data_alloc: 218103808 data_used: 21319680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3bf000/0x0/0x4ffc00000, data 0x25a7dc4/0x272f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b9000/0x0/0x4ffc00000, data 0x25addc4/0x2735000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182271 data_alloc: 218103808 data_used: 21987328
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182271 data_alloc: 218103808 data_used: 21987328
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877c00 session 0x55f4fdf8a1e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fd3210e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fe024960
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4ffa01860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.366380692s of 16.474279404s, submitted: 92
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 40181760 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fcf201e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4ff9e8d20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4ff9970e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fef4a5a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc32c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200253 data_alloc: 218103808 data_used: 21991424
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21d000/0x0/0x4ffc00000, data 0x2748dd4/0x28d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21d000/0x0/0x4ffc00000, data 0x2748dd4/0x28d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200253 data_alloc: 218103808 data_used: 21991424
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdd11e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3205953 data_alloc: 218103808 data_used: 22528000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3205953 data_alloc: 218103808 data_used: 22528000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.447574615s of 20.827314377s, submitted: 10
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 39960576 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284443 data_alloc: 218103808 data_used: 22757376
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 39960576 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb869000/0x0/0x4ffc00000, data 0x30fbdd4/0x3284000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb869000/0x0/0x4ffc00000, data 0x30fbdd4/0x3284000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291167 data_alloc: 218103808 data_used: 22753280
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff98b2c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ffdba000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fecc2960
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191033 data_alloc: 218103808 data_used: 21991424
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.200098038s of 12.547958374s, submitted: 61
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ffa0d4a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fecdb4a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ffa0c780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.669296265s of 35.925739288s, submitted: 28
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,1,2])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297992192 unmapped: 42328064 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fcf4a000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff9965a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fed66f00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ffe28960
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fe0241e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053060 data_alloc: 218103808 data_used: 13086720
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffe28780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe294a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fe016b40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fed87a40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058801 data_alloc: 218103808 data_used: 13697024
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115441 data_alloc: 218103808 data_used: 21676032
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115441 data_alloc: 218103808 data_used: 21676032
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.079471588s of 18.551073074s, submitted: 14
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 45481984 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 45449216 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec27d000/0x0/0x4ffc00000, data 0x26ecd2f/0x2871000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3220233 data_alloc: 218103808 data_used: 22417408
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227157 data_alloc: 218103808 data_used: 22556672
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.110246658s of 11.604579926s, submitted: 73
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226953 data_alloc: 218103808 data_used: 22556672
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226953 data_alloc: 218103808 data_used: 22556672
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec218000/0x0/0x4ffc00000, data 0x2751d2f/0x28d6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.151399612s of 12.149919510s, submitted: 4
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3229065 data_alloc: 218103808 data_used: 22589440
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4fcf51c20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec207000/0x0/0x4ffc00000, data 0x2762d2f/0x28e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdd5ab40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdfa74a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe285a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fcf4be00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 39550976 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 285 heartbeat osd_stat(store_statfs(0x4eba29000/0x0/0x4ffc00000, data 0x2f3d90e/0x30c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 39550976 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 286 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4ffe28b40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 286 heartbeat osd_stat(store_statfs(0x4eb1c3000/0x0/0x4ffc00000, data 0x37a490e/0x392b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,1])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3421390 data_alloc: 234881024 data_used: 35491840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 39534592 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4fef4af00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b7000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 39510016 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3425380 data_alloc: 234881024 data_used: 35491840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b7000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.261311531s of 13.822600365s, submitted: 64
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffdbb680
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffdbbe00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdd10d20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ffdbb0e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ff0eb2c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305127424 unmapped: 42328064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b9000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305135616 unmapped: 42319872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3420546 data_alloc: 234881024 data_used: 35500032
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305135616 unmapped: 42319872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adadb/0x3938000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305143808 unmapped: 42311680 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305143808 unmapped: 42311680 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdfa90e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffdbaf00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3420546 data_alloc: 234881024 data_used: 35500032
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdd5b4a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4feefa1e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b4000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305176576 unmapped: 42278912 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467236 data_alloc: 251658240 data_used: 41988096
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467236 data_alloc: 251658240 data_used: 41988096
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 38346752 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 38346752 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.982900620s of 20.121829987s, submitted: 17
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 37863424 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3489675 data_alloc: 251658240 data_used: 43794432
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb0fa000/0x0/0x4ffc00000, data 0x3868aeb/0x39f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3500175 data_alloc: 251658240 data_used: 44195840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3500175 data_alloc: 251658240 data_used: 44195840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310722560 unmapped: 36732928 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.973714828s of 16.078048706s, submitted: 14
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497747 data_alloc: 251658240 data_used: 44195840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3498067 data_alloc: 251658240 data_used: 44204032
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3505747 data_alloc: 251658240 data_used: 45961216
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fdfa85a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.843559265s of 12.874654770s, submitted: 3
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311279616 unmapped: 36175872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 289 handle_osd_map epochs [289,289], i have 289, src has [1,289]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fedac1e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffb9a000 session 0x55f4ff9e9c20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe026f00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 326180864 unmapped: 21274624 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4fec93000 session 0x55f4ffa012c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 290 ms_handle_reset con 0x55f4ffb9a000 session 0x55f4ffa0c780
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319987712 unmapped: 31670272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4feefa5a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 291 heartbeat osd_stat(store_statfs(0x4e9172000/0x0/0x4ffc00000, data 0x57f0239/0x597a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3768224 data_alloc: 251658240 data_used: 52424704
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 31563776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbae000 session 0x55f4feca94a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 291 heartbeat osd_stat(store_statfs(0x4e916e000/0x0/0x4ffc00000, data 0x57ecdd2/0x597c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4feca9a40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4feca8960
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320143360 unmapped: 31514624 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fd320d20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320233472 unmapped: 31424512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320233472 unmapped: 31424512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4ff9961e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffdbab40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4ffbba400 session 0x55f4fcf4b860
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3534457 data_alloc: 251658240 data_used: 51228672
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb1a9000/0x0/0x4ffc00000, data 0x37b49af/0x3944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.539574623s of 10.580464363s, submitted: 128
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb1a9000/0x0/0x4ffc00000, data 0x37b49af/0x3944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ff358c00 session 0x55f501979e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 312303616 unmapped: 39354368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 312303616 unmapped: 39354368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ff997e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fe0161e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328352 data_alloc: 234881024 data_used: 34619392
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 294 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ffe29a40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed77b000/0x0/0x4ffc00000, data 0x11deff1/0x1370000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060980 data_alloc: 218103808 data_used: 12230656
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed77b000/0x0/0x4ffc00000, data 0x11deff1/0x1370000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.101095200s of 11.366854668s, submitted: 87
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.80 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1971 writes, 8322 keys, 1971 commit groups, 1.0 writes per commit group, ingest: 9.10 MB, 0.02 MB/s#012Interval WAL: 1971 writes, 761 syncs, 2.59 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063266 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063266 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: mgrc ms_handle_reset ms_handle_reset con 0x55f4fdd48c00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: mgrc handle_mgr_configure stats_period=5
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 54960128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 37.798728943s of 37.813915253s, submitted: 13
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 54960128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fcf4af00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4feca81e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fdfa65a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc32c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffa3a400 session 0x55f501979c20
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed055000/0x0/0x4ffc00000, data 0x1906a54/0x1a99000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4fed870e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125660 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 54722560 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 54272000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed054000/0x0/0x4ffc00000, data 0x1906a77/0x1a9a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 54272000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbba400 session 0x55f501979a40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f5049ac800 session 0x55f4ffa0d0e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed054000/0x0/0x4ffc00000, data 0x1906a77/0x1a9a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,1])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ff997e00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.547794342s of 59.170566559s, submitted: 55
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 54435840 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 54427648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed36b000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 54427648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074277 data_alloc: 218103808 data_used: 12242944
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 296 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffa0de00
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3072989 data_alloc: 218103808 data_used: 12242944
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.538488388s of 13.065173149s, submitted: 139
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23135 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdfa83c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 55164928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12378112
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12378112
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 55148544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 55148544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 113.802856445s of 113.915954590s, submitted: 17
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 298 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4fe0254a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004649 data_alloc: 218103808 data_used: 5570560
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edb62000/0x0/0x4ffc00000, data 0x9e5c13/0xb7a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 299 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ffa01a40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee361000/0x0/0x4ffc00000, data 0x1e77c1/0x37c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee361000/0x0/0x4ffc00000, data 0x1e77c1/0x37c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935475 data_alloc: 218103808 data_used: 126976
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee35e000/0x0/0x4ffc00000, data 0x1e9240/0x37f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee35e000/0x0/0x4ffc00000, data 0x1e9240/0x37f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935475 data_alloc: 218103808 data_used: 126976
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.293596268s of 16.493015289s, submitted: 58
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ee35a000/0x0/0x4ffc00000, data 0x1eacc6/0x383000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdf8ba40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288022528 unmapped: 63635456 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288063488 unmapped: 63594496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288063488 unmapped: 63594496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 105.418647766s of 105.483337402s, submitted: 31
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 303 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdfa8b40
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288104448 unmapped: 63553536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3005561 data_alloc: 218103808 data_used: 147456
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 304 ms_handle_reset con 0x55f5049ac800 session 0x55f4ff996000
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.854793549s of 29.929613113s, submitted: 9
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 305 ms_handle_reset con 0x55f5039db400 session 0x55f4fcf4a3c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b41/0xb91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011917 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b1e/0xb90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b1e/0xb90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011917 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288210944 unmapped: 63447040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288260096 unmapped: 63397888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 63356928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288317440 unmapped: 63340544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288350208 unmapped: 63307776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288382976 unmapped: 63275008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288382976 unmapped: 63275008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288391168 unmapped: 63266816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288391168 unmapped: 63266816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288407552 unmapped: 63250432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288423936 unmapped: 63234048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288456704 unmapped: 63201280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288497664 unmapped: 63160320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288530432 unmapped: 63127552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288546816 unmapped: 63111168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288546816 unmapped: 63111168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288555008 unmapped: 63102976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288555008 unmapped: 63102976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288579584 unmapped: 63078400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288579584 unmapped: 63078400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288587776 unmapped: 63070208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288587776 unmapped: 63070208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288604160 unmapped: 63053824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288604160 unmapped: 63053824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288620544 unmapped: 63037440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288620544 unmapped: 63037440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288653312 unmapped: 63004672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288653312 unmapped: 63004672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288661504 unmapped: 62996480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288669696 unmapped: 62988288 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288686080 unmapped: 62971904 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288702464 unmapped: 62955520 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 36K writes, 144K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
Cumulative WAL: 36K writes, 13K syncs, 2.79 writes per sync, written: 0.14 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 677 writes, 1677 keys, 677 commit groups, 1.0 writes per commit group, ingest: 0.81 MB, 0.00 MB/s
Interval WAL: 677 writes, 305 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288833536 unmapped: 62824448 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288849920 unmapped: 62808064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288849920 unmapped: 62808064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288866304 unmapped: 62791680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288915456 unmapped: 62742528 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 320.509613037s of 321.359313965s, submitted: 63
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288980992 unmapped: 62676992 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [0,0,0,1])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 65208320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3486: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 65093632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 65052672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 65052672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 97.341293335s of 97.832099915s, submitted: 90
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 307 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fcf210e0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edb49000/0x0/0x4ffc00000, data 0x9f511f/0xb94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edb49000/0x0/0x4ffc00000, data 0x9f511f/0xb94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 308 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fed552c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965964 data_alloc: 218103808 data_used: 172032
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284082176 unmapped: 67575808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284082176 unmapped: 67575808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee346000/0x0/0x4ffc00000, data 0x1f6cf0/0x397000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285147136 unmapped: 66510848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x1f876f/0x39a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x1f876f/0x39a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285147136 unmapped: 66510848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 66502656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3055603 data_alloc: 218103808 data_used: 172032
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.332621574s of 10.034673691s, submitted: 57
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285171712 unmapped: 66486272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ed6d4000/0x0/0x4ffc00000, data 0xe6876f/0x100a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [0,0,0,2])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 310 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fed545a0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285171712 unmapped: 66486272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3059713 data_alloc: 218103808 data_used: 184320
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ed6d0000/0x0/0x4ffc00000, data 0xe6a308/0x100d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285212672 unmapped: 66445312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 50.395416260s of 50.752197266s, submitted: 15
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed6ca000/0x0/0x4ffc00000, data 0xe6d93c/0x1013000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 312 ms_handle_reset con 0x55f5049ac800 session 0x55f4fd3212c0
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981229 data_alloc: 218103808 data_used: 196608
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ee33b000/0x0/0x4ffc00000, data 0x1fd93c/0x3a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981229 data_alloc: 218103808 data_used: 196608
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ee33b000/0x0/0x4ffc00000, data 0x1fd93c/0x3a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.898818970s of 11.465106010s, submitted: 39
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 66330624 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285368320 unmapped: 66289664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285425664 unmapped: 66232320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285442048 unmapped: 66215936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285442048 unmapped: 66215936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}'
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285515776 unmapped: 66142208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: do_command 'config show' '{prefix=config show}'
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}'
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}'
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284499968 unmapped: 67158016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:29 np0005473739 ceph-osd[90092]: do_command 'log dump' '{prefix=log dump}'
Oct  7 11:17:29 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  7 11:17:29 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1547585651' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  7 11:17:29 np0005473739 nova_compute[259550]: 2025-10-07 15:17:29.733 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:29 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23139 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  7 11:17:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1355788894' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  7 11:17:30 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23143 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:17:30 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23147 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  7 11:17:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  7 11:17:30 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4002352828' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  7 11:17:30 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23149 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  7 11:17:31 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  7 11:17:31 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2910642032' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  7 11:17:31 np0005473739 nova_compute[259550]: 2025-10-07 15:17:31.108 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:17:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3487: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:31 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23157 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  7 11:17:31 np0005473739 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  7 11:17:31 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:17:31.704+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  7 11:17:31 np0005473739 nova_compute[259550]: 2025-10-07 15:17:31.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:17:32 np0005473739 podman[453703]: 2025-10-07 15:17:32.102232204 +0000 UTC m=+0.078820645 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, org.label-schema.schema-version=1.0, config_id=iscsid, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001)
Oct  7 11:17:32 np0005473739 podman[453702]: 2025-10-07 15:17:32.130199239 +0000 UTC m=+0.107000866 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/466961050' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1724404335' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2059885865' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2800186474' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/997777716' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/997777716' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:17:32 np0005473739 nova_compute[259550]: 2025-10-07 15:17:32.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  7 11:17:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2378386877' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  7 11:17:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  7 11:17:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/467493459' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:17:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  7 11:17:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4236161573' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  7 11:17:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3488: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:17:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct  7 11:17:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3621770134' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  7 11:17:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  7 11:17:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3600964625' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3970781 data_alloc: 251658240 data_used: 42672128
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352067584 unmapped: 47415296 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6c5d000/0x0/0x4ffc00000, data 0x4d90846/0x4f21000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [1])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 353951744 unmapped: 45531136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 45375488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 45375488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 45375488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4032595 data_alloc: 251658240 data_used: 43528192
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6c48000/0x0/0x4ffc00000, data 0x4da5846/0x4f36000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1407f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 45375488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354107392 unmapped: 45375488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.074107170s of 12.390025139s, submitted: 73
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355885056 unmapped: 43597824 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356007936 unmapped: 43474944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 43671552 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4155015 data_alloc: 251658240 data_used: 45051904
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 43671552 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 43671552 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 43671552 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355811328 unmapped: 43671552 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355844096 unmapped: 43638784 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4154407 data_alloc: 251658240 data_used: 45056000
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355844096 unmapped: 43638784 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4154407 data_alloc: 251658240 data_used: 45056000
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355852288 unmapped: 43630592 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 43622400 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 43622400 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5a8e000/0x0/0x4ffc00000, data 0x5b4f846/0x5ce0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1448f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 43622400 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4154407 data_alloc: 251658240 data_used: 45056000
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 43622400 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556933455680
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.250329971s of 18.907407761s, submitted: 97
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932ebb000 session 0x556932877e00
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [P] New memtable created with log file: #51. Immutable memtables: 0.
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569341c21e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 48685056 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 48685056 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5c4c000/0x0/0x4ffc00000, data 0x47f2813/0x4981000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 48685056 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350797824 unmapped: 48685056 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3919666 data_alloc: 234881024 data_used: 31633408
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 48676864 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 48676864 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556934899a40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x5569341e4780
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350806016 unmapped: 48676864 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x55693345ef00
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6693000/0x0/0x4ffc00000, data 0x3dac813/0x3f3b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 48660480 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 48660480 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3803339 data_alloc: 234881024 data_used: 27447296
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 48660480 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.559225082s of 10.806392670s, submitted: 68
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932eba800 session 0x5569344d1860
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x5569361ac960
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350822400 unmapped: 48660480 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e66b2000/0x0/0x4ffc00000, data 0x3d8e803/0x3f1c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 341852160 unmapped: 57630720 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932877860
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 341852160 unmapped: 57630720 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 341852160 unmapped: 57630720 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488754 data_alloc: 218103808 data_used: 11800576
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 341852160 unmapped: 57630720 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x55693442e5a0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933564000 session 0x556934899c20
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333586432 unmapped: 65896448 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932eba800 session 0x556932eff4a0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308374 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308374 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308374 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308374 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3308374 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9205000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556935562780
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569344cc3c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x55693501a3c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333602816 unmapped: 65880064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932ebb000 session 0x55693442e3c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 30.720289230s of 31.468746185s, submitted: 79
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932eba800 session 0x556932effa40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x5569361ac000
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569344ccf00
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x5569361adc20
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569344cc780
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 65781760 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8306000/0x0/0x4ffc00000, data 0x213c77e/0x22c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 65781760 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424970 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 65781760 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 65781760 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8306000/0x0/0x4ffc00000, data 0x213c77e/0x22c8000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333701120 unmapped: 65781760 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556932eba800 session 0x556934124780
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333709312 unmapped: 65773568 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932876f00
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333709312 unmapped: 65773568 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3424970 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333709312 unmapped: 65773568 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x55693616b860
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x55693616a780
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333717504 unmapped: 65765376 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333717504 unmapped: 65765376 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8305000/0x0/0x4ffc00000, data 0x213c78e/0x22c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1562f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 333922304 unmapped: 65560576 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.126608849s of 11.262543678s, submitted: 19
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x556932efc3c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x556934125a40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3634183 data_alloc: 234881024 data_used: 19550208
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7282000/0x0/0x4ffc00000, data 0x2d9e7f0/0x2f2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15a4f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7282000/0x0/0x4ffc00000, data 0x2d9e7f0/0x2f2c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15a4f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3634183 data_alloc: 234881024 data_used: 19550208
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693560b400 session 0x556933dcb0e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: mgrc ms_handle_reset ms_handle_reset con 0x55693560d400
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: mgrc handle_mgr_configure stats_period=5
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f000 session 0x5569362ec3c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360b400 session 0x5569334554a0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 334487552 unmapped: 64995328 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932ea8960
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556932eff0e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 339206144 unmapped: 60276736 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8014000/0x0/0x4ffc00000, data 0x34667f0/0x35f4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.866021156s of 10.196639061s, submitted: 89
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 60243968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569328772c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3701641 data_alloc: 234881024 data_used: 20480000
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c4d800 session 0x5569341e5a40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 339238912 unmapped: 60243968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 339304448 unmapped: 60178432 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ffa000/0x0/0x4ffc00000, data 0x347c800/0x360b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2784232819' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3795319 data_alloc: 251658240 data_used: 32694272
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ffa000/0x0/0x4ffc00000, data 0x347c800/0x360b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7ffa000/0x0/0x4ffc00000, data 0x347c800/0x360b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3795639 data_alloc: 251658240 data_used: 32702464
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 342974464 unmapped: 56508416 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.916577339s of 12.971271515s, submitted: 22
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 349634560 unmapped: 49848320 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351363072 unmapped: 48119808 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7300000/0x0/0x4ffc00000, data 0x4179800/0x4308000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351420416 unmapped: 48062464 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3908103 data_alloc: 251658240 data_used: 33550336
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x556935cb25a0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x5569341e4000
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 51576832 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569362ed0e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 51576832 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 51576832 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7df1000/0x0/0x4ffc00000, data 0x2bbd7e0/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 51576832 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347906048 unmapped: 51576832 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635134 data_alloc: 234881024 data_used: 18665472
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 50528256 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7df1000/0x0/0x4ffc00000, data 0x2bbd7e0/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 50528256 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 50528256 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348954624 unmapped: 50528256 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 50520064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3635150 data_alloc: 234881024 data_used: 18665472
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 50520064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348962816 unmapped: 50520064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7df1000/0x0/0x4ffc00000, data 0x2bbd7e0/0x2d4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.648083687s of 15.073346138s, submitted: 187
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 51503104 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x556932efe5a0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad0800 session 0x556935562780
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x55693616b860
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569353d3a40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x556933585860
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826c000/0x0/0x4ffc00000, data 0x32157e0/0x33a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 51503104 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 51503104 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3690379 data_alloc: 234881024 data_used: 18669568
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826c000/0x0/0x4ffc00000, data 0x32157e0/0x33a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 51503104 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347979776 unmapped: 51503104 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826c000/0x0/0x4ffc00000, data 0x32157e0/0x33a2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x5569333570e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 51494912 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693f800800 session 0x5569335832c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 51494912 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x5569355641e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x556933461e00
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 51494912 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3692378 data_alloc: 234881024 data_used: 18673664
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826b000/0x0/0x4ffc00000, data 0x32157f0/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347987968 unmapped: 51494912 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3729498 data_alloc: 234881024 data_used: 23805952
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826b000/0x0/0x4ffc00000, data 0x32157f0/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826b000/0x0/0x4ffc00000, data 0x32157f0/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e826b000/0x0/0x4ffc00000, data 0x32157f0/0x33a3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.954650879s of 17.537221909s, submitted: 42
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3729982 data_alloc: 234881024 data_used: 23805952
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8269000/0x0/0x4ffc00000, data 0x32167f0/0x33a4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347881472 unmapped: 51601408 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348897280 unmapped: 50585600 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352223232 unmapped: 47259648 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3818156 data_alloc: 234881024 data_used: 24391680
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e774c000/0x0/0x4ffc00000, data 0x3d347f0/0x3ec2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354598912 unmapped: 44883968 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 44875776 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3817408 data_alloc: 234881024 data_used: 24391680
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 44875776 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e772b000/0x0/0x4ffc00000, data 0x3d557f0/0x3ee3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354607104 unmapped: 44875776 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.053106308s of 12.065863609s, submitted: 123
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x55693442f4a0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x55693442fa40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693f800800 session 0x556933357a40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 44859392 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 44859392 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354623488 unmapped: 44859392 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3638949 data_alloc: 234881024 data_used: 18124800
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 44851200 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88c3000/0x0/0x4ffc00000, data 0x2bbe7e0/0x2d4b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354631680 unmapped: 44851200 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556933c4fa40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556933582000
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x55693616b680
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343687168 unmapped: 55795712 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343695360 unmapped: 55787520 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343711744 unmapped: 55771136 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343719936 unmapped: 55762944 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 55754752 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 55754752 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3351274 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569354fda40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556938d23000 session 0x5569354fda40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x55693616b680
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556933582000
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9ea5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343728128 unmapped: 55754752 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 38.474052429s of 39.106876373s, submitted: 103
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352321536 unmapped: 47161344 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343826432 unmapped: 55656448 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556933357a40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569355641e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bad3400 session 0x5569335832c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x5569333570e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569353d3a40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343826432 unmapped: 55656448 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343826432 unmapped: 55656448 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3420480 data_alloc: 218103808 data_used: 5025792
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e99f6000/0x0/0x4ffc00000, data 0x1a8c77e/0x1c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343834624 unmapped: 55648256 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e99f6000/0x0/0x4ffc00000, data 0x1a8c77e/0x1c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343842816 unmapped: 55640064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x55693616b860
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343842816 unmapped: 55640064 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e99f6000/0x0/0x4ffc00000, data 0x1a8c77e/0x1c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475021 data_alloc: 218103808 data_used: 12517376
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e99f6000/0x0/0x4ffc00000, data 0x1a8c77e/0x1c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3475021 data_alloc: 218103808 data_used: 12517376
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e99f6000/0x0/0x4ffc00000, data 0x1a8c77e/0x1c18000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 343867392 unmapped: 55615488 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 16.824821472s of 18.289262772s, submitted: 18
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347586560 unmapped: 51896320 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563925 data_alloc: 218103808 data_used: 13352960
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8fe4000/0x0/0x4ffc00000, data 0x249d77e/0x2629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563777 data_alloc: 218103808 data_used: 13570048
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8fe4000/0x0/0x4ffc00000, data 0x249d77e/0x2629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3564113 data_alloc: 218103808 data_used: 13578240
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8fe4000/0x0/0x4ffc00000, data 0x249d77e/0x2629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569341e5e00
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935604000 session 0x5569353d2f00
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569353d2000
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346931200 unmapped: 52551680 heap: 399482880 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932b5ed20
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.420858383s of 13.965919495s, submitted: 73
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569334552c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556935563a40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933d8a800 session 0x5569344ce3c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933d8a800 session 0x556932ea94a0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932876960
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717224 data_alloc: 218103808 data_used: 13578240
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c15000/0x0/0x4ffc00000, data 0x386c78e/0x39f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348274688 unmapped: 62758912 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717224 data_alloc: 218103808 data_used: 13578240
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 62750720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932ea92c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c15000/0x0/0x4ffc00000, data 0x386c78e/0x39f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 62750720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569341250e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 62750720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556932876f00
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348282880 unmapped: 62750720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7c15000/0x0/0x4ffc00000, data 0x386c78e/0x39f9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.793764114s of 11.047670364s, submitted: 33
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556933dcb0e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347389952 unmapped: 63643648 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3719536 data_alloc: 218103808 data_used: 13586432
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7bf1000/0x0/0x4ffc00000, data 0x389078e/0x3a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347389952 unmapped: 63643648 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 350879744 unmapped: 60153856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3859056 data_alloc: 251658240 data_used: 33144832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7bf1000/0x0/0x4ffc00000, data 0x389078e/0x3a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7bf1000/0x0/0x4ffc00000, data 0x389078e/0x3a1d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x145ef9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3859056 data_alloc: 251658240 data_used: 33144832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.438287735s of 12.799725533s, submitted: 2
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354582528 unmapped: 56451072 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #52. Immutable memtables: 8.
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e62b7000/0x0/0x4ffc00000, data 0x402a78e/0x41b7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357195776 unmapped: 53837824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357318656 unmapped: 53714944 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357318656 unmapped: 53714944 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3972574 data_alloc: 251658240 data_used: 35344384
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357318656 unmapped: 53714944 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357318656 unmapped: 53714944 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5f05000/0x0/0x4ffc00000, data 0x43db78e/0x4568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3972574 data_alloc: 251658240 data_used: 35344384
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357326848 unmapped: 53706752 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e5f05000/0x0/0x4ffc00000, data 0x43db78e/0x4568000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357335040 unmapped: 53698560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.915142059s of 12.218699455s, submitted: 96
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569362ec3c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932876b40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x55693488d4a0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 61382656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3578692 data_alloc: 218103808 data_used: 13639680
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 61382656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 349650944 unmapped: 61382656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7e45000/0x0/0x4ffc00000, data 0x249d77e/0x2629000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935afc400 session 0x5569341e4000
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x556934124960
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346841088 unmapped: 64192512 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693501b2c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346849280 unmapped: 64184320 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346857472 unmapped: 64176128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346865664 unmapped: 64167936 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346873856 unmapped: 64159744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3376200 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 64151552 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 346882048 unmapped: 64151552 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 43.306056976s of 43.576793671s, submitted: 74
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x5569341e54a0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347209728 unmapped: 63823872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347209728 unmapped: 63823872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347209728 unmapped: 63823872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3483670 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347209728 unmapped: 63823872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e82b2000/0x0/0x4ffc00000, data 0x203176e/0x21bc000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 63815680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 63815680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569361ac5a0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 63815680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x5569341c3a40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693501ab40
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556932eded20
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347217920 unmapped: 63815680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3487555 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e828e000/0x0/0x4ffc00000, data 0x205576e/0x21e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347226112 unmapped: 63807488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e828e000/0x0/0x4ffc00000, data 0x205576e/0x21e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e828e000/0x0/0x4ffc00000, data 0x205576e/0x21e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583367 data_alloc: 234881024 data_used: 18440192
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e828e000/0x0/0x4ffc00000, data 0x205576e/0x21e0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3583367 data_alloc: 234881024 data_used: 18440192
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347242496 unmapped: 63791104 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.140216827s of 19.328117371s, submitted: 24
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351715328 unmapped: 59318272 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e75b3000/0x0/0x4ffc00000, data 0x2d3076e/0x2ebb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351977472 unmapped: 59056128 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3711805 data_alloc: 234881024 data_used: 20373504
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712125 data_alloc: 234881024 data_used: 20381696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712125 data_alloc: 234881024 data_used: 20381696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712125 data_alloc: 234881024 data_used: 20381696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712125 data_alloc: 234881024 data_used: 20381696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3712125 data_alloc: 234881024 data_used: 20381696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 352100352 unmapped: 58933248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 31.933893204s of 32.221965790s, submitted: 98
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351813632 unmapped: 59219968 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 59211776 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3703549 data_alloc: 234881024 data_used: 20381696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7529000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351821824 unmapped: 59211776 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569353d3e00
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933d8a800 session 0x55693416d0e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935ac6c00 session 0x5569344cd0e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556933356000
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x5569344cd860
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933d8a800 session 0x55693416c3c0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569354605a0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556937126000 session 0x556934894780
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556937126000 session 0x5569344cef00
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755983 data_alloc: 234881024 data_used: 20381696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8b000/0x0/0x4ffc00000, data 0x33567e0/0x34e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8b000/0x0/0x4ffc00000, data 0x33567e0/0x34e3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693501be00
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693360f800 session 0x556934898780
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351830016 unmapped: 59203584 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755983 data_alloc: 234881024 data_used: 20381696
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933d8a800 session 0x556933c4f0e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.253780365s of 11.719895363s, submitted: 43
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x55693345e1e0
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351838208 unmapped: 59195392 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351846400 unmapped: 59187200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8a000/0x0/0x4ffc00000, data 0x33567f0/0x34e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8a000/0x0/0x4ffc00000, data 0x33567f0/0x34e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3792094 data_alloc: 234881024 data_used: 25141248
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8a000/0x0/0x4ffc00000, data 0x33567f0/0x34e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3792094 data_alloc: 234881024 data_used: 25141248
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8a000/0x0/0x4ffc00000, data 0x33567f0/0x34e4000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 351797248 unmapped: 59236352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.376653671s of 12.407247543s, submitted: 8
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 353624064 unmapped: 57409536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 57597952 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e67ba000/0x0/0x4ffc00000, data 0x3b1f7f0/0x3cad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 353435648 unmapped: 57597952 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3856984 data_alloc: 234881024 data_used: 25473024
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3864480 data_alloc: 234881024 data_used: 25468928
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e679e000/0x0/0x4ffc00000, data 0x3b337f0/0x3cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 4800.1 total, 600.0 interval#012Cumulative writes: 46K writes, 182K keys, 46K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.04 MB/s#012Cumulative WAL: 46K writes, 16K syncs, 2.77 writes per sync, written: 0.18 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 4220 writes, 19K keys, 4220 commit groups, 1.0 writes per commit group, ingest: 23.74 MB, 0.04 MB/s#012Interval WAL: 4220 writes, 1544 syncs, 2.73 writes per sync, written: 0.02 GB, 0.04 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3864480 data_alloc: 234881024 data_used: 25468928
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e679e000/0x0/0x4ffc00000, data 0x3b337f0/0x3cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e679e000/0x0/0x4ffc00000, data 0x3b337f0/0x3cc1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3865120 data_alloc: 234881024 data_used: 25530368
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354746368 unmapped: 56287232 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354754560 unmapped: 56279040 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:33 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.104679108s of 19.538419724s, submitted: 86
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569353d2f00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569362ec000
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354770944 unmapped: 56262656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354787328 unmapped: 56246272 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x5569335854a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717343 data_alloc: 234881024 data_used: 20443136
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717343 data_alloc: 234881024 data_used: 20443136
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717343 data_alloc: 234881024 data_used: 20443136
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354795520 unmapped: 56238080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 56229888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3717343 data_alloc: 234881024 data_used: 20443136
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7528000/0x0/0x4ffc00000, data 0x2dba76e/0x2f45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569362ed4a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.052337646s of 18.575210571s, submitted: 62
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af8400 session 0x556935cb34a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 56229888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 56229888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354803712 unmapped: 56229888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355860480 unmapped: 55173120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347463680 unmapped: 63569920 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3410491 data_alloc: 218103808 data_used: 5132288
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e9081000/0x0/0x4ffc00000, data 0x126276e/0x13ed000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [0,0,0,0,1])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569334545a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403451 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403451 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403451 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3403451 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569362ede00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x5569354f50e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x556934125860
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347480064 unmapped: 63553536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935af6000 session 0x5569354f4b40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 27.082035065s of 28.449712753s, submitted: 38
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556934898960
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556933357a40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x556932edef00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e90a5000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x55693616b680
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 63078400 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693ff4a000 session 0x55693442f4a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544909 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 63078400 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 63078400 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 63078400 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347955200 unmapped: 63078400 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fb0000/0x0/0x4ffc00000, data 0x23317e0/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 63070208 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3544909 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556933454f00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 63070208 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fb0000/0x0/0x4ffc00000, data 0x23317e0/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569344cd4a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fb0000/0x0/0x4ffc00000, data 0x23317e0/0x24be000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 63070208 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x5569333e1c20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x556932ea90e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 63070208 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 347963392 unmapped: 63070208 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3664740 data_alloc: 234881024 data_used: 21495808
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.941271782s of 12.659450531s, submitted: 59
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fae000/0x0/0x4ffc00000, data 0x2331813/0x24c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [0,1])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fae000/0x0/0x4ffc00000, data 0x2331813/0x24c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3664516 data_alloc: 234881024 data_used: 21499904
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348774400 unmapped: 62259200 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fae000/0x0/0x4ffc00000, data 0x2331813/0x24c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666280 data_alloc: 234881024 data_used: 21540864
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 56147968 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.728628635s of 10.006806374s, submitted: 121
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354697216 unmapped: 56336384 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 55255040 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7493000/0x0/0x4ffc00000, data 0x2e4b813/0x2fda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355958784 unmapped: 55074816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768784 data_alloc: 234881024 data_used: 23756800
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769104 data_alloc: 234881024 data_used: 23764992
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.462882042s of 13.117553711s, submitted: 114
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833585 data_alloc: 234881024 data_used: 23764992
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x5569354fdc20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x55693489f4a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569344cdc20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569354fc780
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x556933585680
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3821841 data_alloc: 234881024 data_used: 23764992
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569354f52c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569334545a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556933583a40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569334543c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.283301353s of 10.441823006s, submitted: 21
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826034 data_alloc: 234881024 data_used: 23764992
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 51830784 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3875606 data_alloc: 234881024 data_used: 29491200
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3876746 data_alloc: 234881024 data_used: 29483008
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b0000/0x0/0x4ffc00000, data 0x351e813/0x36ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.968862534s of 12.149744034s, submitted: 5
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 51118080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 50880512 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3958876 data_alloc: 234881024 data_used: 30003200
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3954340 data_alloc: 234881024 data_used: 30003200
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x55693489e960
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x5569355652c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361299968 unmapped: 49733632 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.944513321s of 10.002432823s, submitted: 77
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60f6000/0x0/0x4ffc00000, data 0x3dd9813/0x3f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361308160 unmapped: 49725440 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556935565c20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3783798 data_alloc: 234881024 data_used: 22560768
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706b000/0x0/0x4ffc00000, data 0x2e64813/0x2ff3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348954a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x55693345f860
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3781038 data_alloc: 234881024 data_used: 22560768
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 48627712 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 55648256 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.576909542s of 10.082017899s, submitted: 108
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934894780
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 55607296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x55693416c3c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693488da40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569333e12c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556933461c20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 55607296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.773714066s of 28.863956451s, submitted: 9
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,1,0,1,7,6])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556933454f00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569341243c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932b5ed20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934898780
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569333e1c20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556932ede960
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464974 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bacec00 session 0x55693345e5a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a74000/0x0/0x4ffc00000, data 0x145d7e0/0x15ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556934894d20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569341c2960
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 55566336 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.313346863s of 19.983018875s, submitted: 34
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 53174272 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e805b000/0x0/0x4ffc00000, data 0x1e6f7f0/0x1ffd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 53084160 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8018000/0x0/0x4ffc00000, data 0x1ea97f0/0x2037000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565560 data_alloc: 218103808 data_used: 6414336
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8018000/0x0/0x4ffc00000, data 0x1ea97f0/0x2037000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558532 data_alloc: 218103808 data_used: 6418432
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8025000/0x0/0x4ffc00000, data 0x1eab7f0/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8025000/0x0/0x4ffc00000, data 0x1eab7f0/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558532 data_alloc: 218103808 data_used: 6418432
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.445440292s of 14.635492325s, submitted: 114
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558760 data_alloc: 218103808 data_used: 6418432
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558760 data_alloc: 218103808 data_used: 6418432
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.461366653s of 14.488227844s, submitted: 1
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556932eff2c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348985a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558336 data_alloc: 218103808 data_used: 6418432
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c53c00 session 0x556935cb34a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 52068352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 52068352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.447141647s of 25.612014771s, submitted: 32
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 370622464 unmapped: 40411136 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556934125a40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569344d05a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556933455c20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556933c4f0e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935e52000 session 0x5569333e0d20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815c000/0x0/0x4ffc00000, data 0x1d7776e/0x1f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932ea92c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556935cb34a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533416 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815c000/0x0/0x4ffc00000, data 0x1d7776e/0x1f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569348985a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556932eff2c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616138 data_alloc: 234881024 data_used: 16084992
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620138 data_alloc: 234881024 data_used: 16707584
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620138 data_alloc: 234881024 data_used: 16707584
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.948329926s of 20.243396759s, submitted: 34
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 46022656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368492544 unmapped: 42541056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7583000/0x0/0x4ffc00000, data 0x294f791/0x2adb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721858 data_alloc: 234881024 data_used: 18522112
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e757d000/0x0/0x4ffc00000, data 0x2955791/0x2ae1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730482 data_alloc: 234881024 data_used: 18755584
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730482 data_alloc: 234881024 data_used: 18755584
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369147904 unmapped: 41885696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.108821869s of 16.882175446s, submitted: 108
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556933c4e000
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774326 data_alloc: 234881024 data_used: 18755584
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569341c30e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693616a1e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556932b5fa40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774326 data_alloc: 234881024 data_used: 18755584
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569341e4000
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369942528 unmapped: 41091072 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3818646 data_alloc: 234881024 data_used: 24911872
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3818646 data_alloc: 234881024 data_used: 24911872
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.362392426s of 19.756223679s, submitted: 9
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 40165376 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372105216 unmapped: 38928384 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888978 data_alloc: 234881024 data_used: 26075136
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68c1000/0x0/0x4ffc00000, data 0x3611791/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68c1000/0x0/0x4ffc00000, data 0x3611791/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372129792 unmapped: 38903808 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888978 data_alloc: 234881024 data_used: 26075136
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372129792 unmapped: 38903808 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693345ef00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68be000/0x0/0x4ffc00000, data 0x3614791/0x37a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371220480 unmapped: 39813120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.185196877s of 10.101745605s, submitted: 92
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693616be00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3737914 data_alloc: 234881024 data_used: 18817024
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569333e1c20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b762800 session 0x556934808960
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569354f4b40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569334550e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932efd4a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569348990e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693488d0e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.007173538s of 30.416368484s, submitted: 66
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,4])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569354f5a40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b762800 session 0x556933454f00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569353d3860
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510209 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556935cb25a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348954a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x55693488cb40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509809 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569361adc20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693488c3c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.333779335s of 10.194737434s, submitted: 17
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569356eb4a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511570 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551250 data_alloc: 218103808 data_used: 10608640
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551250 data_alloc: 218103808 data_used: 10608640
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.512318611s of 13.137044907s, submitted: 4
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572334 data_alloc: 218103808 data_used: 10981376
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8525000/0x0/0x4ffc00000, data 0x19ae76e/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580412 data_alloc: 218103808 data_used: 11112448
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580412 data_alloc: 218103808 data_used: 11112448
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x55693345eb40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569335854a0
Oct  7 11:17:34 np0005473739 nova_compute[259550]: 2025-10-07 15:17:34.039 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x556933583680
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569344cc3c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.535245895s of 16.586713791s, submitted: 25
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,1,0,6])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569362ec000
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556935cb32c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556932ea9860
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 58228736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x556934895860
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569354f4780
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58220544 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3685997 data_alloc: 218103808 data_used: 11112448
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932efe000
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556933583e00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569353d23c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569341c25a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686790 data_alloc: 218103808 data_used: 11112448
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 58204160 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788494 data_alloc: 234881024 data_used: 25300992
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788494 data_alloc: 234881024 data_used: 25300992
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.612783432s of 20.685228348s, submitted: 31
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 52690944 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 53075968 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,2,0,1])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3846534 data_alloc: 234881024 data_used: 25468928
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70d5000/0x0/0x4ffc00000, data 0x2df676e/0x2f81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365658112 unmapped: 53248000 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3841770 data_alloc: 234881024 data_used: 25468928
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556934125860
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693489f680
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365658112 unmapped: 53248000 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x55693442ed20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3591774 data_alloc: 218103808 data_used: 11173888
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.757778168s of 12.639021873s, submitted: 116
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556932876f00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556934898f00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 59170816 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693489fa40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.288772583s of 35.765110016s, submitted: 48
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569354fc1e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934125a40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556935cb34a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569348990e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569354f4b40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521521 data_alloc: 218103808 data_used: 5021696
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549973 data_alloc: 218103808 data_used: 8990720
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549973 data_alloc: 218103808 data_used: 8990720
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.138957977s of 18.666332245s, submitted: 15
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557359 data_alloc: 218103808 data_used: 8982528
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880d000/0x0/0x4ffc00000, data 0x16be76e/0x1849000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 57204736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 57204736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559541 data_alloc: 218103808 data_used: 8982528
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559541 data_alloc: 218103808 data_used: 8982528
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.330938339s of 10.883395195s, submitted: 16
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559557 data_alloc: 218103808 data_used: 8982528
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559557 data_alloc: 218103808 data_used: 8982528
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.153526306s of 12.898469925s, submitted: 1
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361062400 unmapped: 57843712 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880d000/0x0/0x4ffc00000, data 0x16c576e/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 57835520 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559113 data_alloc: 218103808 data_used: 8982528
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569362ed860
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 57835520 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 284 handle_osd_map epochs [285,285], i have 285, src has [1,285]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x5569343fc800 session 0x556932efeb40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c56000 session 0x5569361acd20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c4f000 session 0x5569335852c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 57819136 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c4f000 session 0x5569344ce3c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364568576 unmapped: 65880064 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 286 ms_handle_reset con 0x556933526400 session 0x5569362ecf00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364593152 unmapped: 65855488 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 286 heartbeat osd_stat(store_statfs(0x4e7113000/0x0/0x4ffc00000, data 0x2dbbecb/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 286 heartbeat osd_stat(store_statfs(0x4e7113000/0x0/0x4ffc00000, data 0x2dbbecb/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x5569341c3a40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762301 data_alloc: 218103808 data_used: 9789440
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x55693489ef00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c56000 session 0x556932edf680
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933526400 session 0x55693345ef00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762301 data_alloc: 218103808 data_used: 9789440
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x5569333574a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x556934894d20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c4f000 session 0x556932edf0e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x55693f800800 session 0x55693489e5a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.815621376s of 13.853458405s, submitted: 48
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x556932b5f860
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933526400 session 0x556933583860
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x556935564d20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c4f000 session 0x5569344d0960
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71400 session 0x5569341c3c20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710d000/0x0/0x4ffc00000, data 0x2dbdb28/0x2f4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 287 handle_osd_map epochs [288,288], i have 288, src has [1,288]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755695 data_alloc: 218103808 data_used: 9789440
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf58b/0x2f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 69664768 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 69664768 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 ms_handle_reset con 0x556933526400 session 0x5569354f5a40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 69656576 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758141 data_alloc: 218103808 data_used: 9789440
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 69640192 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 67837952 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833473 data_alloc: 234881024 data_used: 20254720
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.241428375s of 17.537368774s, submitted: 23
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833473 data_alloc: 234881024 data_used: 20254720
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866312 data_alloc: 234881024 data_used: 26378240
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872264 data_alloc: 234881024 data_used: 27168768
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 ms_handle_reset con 0x556935c4f000 session 0x5569353d25a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.446823120s of 32.526866913s, submitted: 8
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556934891400 session 0x5569348945a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367566848 unmapped: 62881792 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556937127400 session 0x556932876780
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x55693f800000 session 0x55693416c3c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556933526400 session 0x5569353d3a40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556934891400 session 0x556933c4fa40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374775808 unmapped: 59875328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556935c4f000 session 0x556932efd2c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374390784 unmapped: 60260352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e45c9000/0x0/0x4ffc00000, data 0x58fb8f7/0x5a94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556937127400 session 0x556935cb34a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374398976 unmapped: 60252160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556933526800 session 0x556934125860
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4223401 data_alloc: 251658240 data_used: 33873920
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556933526400 session 0x55693489f0e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374439936 unmapped: 60211200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556934891400 session 0x5569354f4780
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e70fe000/0x0/0x4ffc00000, data 0x2dc6482/0x2f5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374439936 unmapped: 60211200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556933b71c00 session 0x556934899c20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x5569343fc800 session 0x556933583c20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374448128 unmapped: 60203008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556935c4f000 session 0x55693501bc20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3922026 data_alloc: 251658240 data_used: 33869824
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.127110481s of 10.177739143s, submitted: 163
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e70ff000/0x0/0x4ffc00000, data 0x2dc639b/0x2f5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 293 ms_handle_reset con 0x556933526400 session 0x5569362ec5a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368902144 unmapped: 65748992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 294 ms_handle_reset con 0x556933b6f400 session 0x5569354f4960
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368918528 unmapped: 65732608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 294 ms_handle_reset con 0x556933b71c00 session 0x5569333e0d20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560957 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560957 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.429761887s of 10.748415947s, submitted: 82
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 49K writes, 193K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s#012Cumulative WAL: 49K writes, 17K syncs, 2.76 writes per sync, written: 0.19 GB, 0.04 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 2813 writes, 11K keys, 2813 commit groups, 1.0 writes per commit group, ingest: 11.66 MB, 0.02 MB/s#012Interval WAL: 2813 writes, 1097 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933bdbc00 session 0x5569361ad2c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: mgrc ms_handle_reset ms_handle_reset con 0x556935c56c00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: mgrc handle_mgr_configure stats_period=5
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556935c59000 session 0x5569341e52c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x55693360f000 session 0x55693442e3c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.841426849s of 37.856082916s, submitted: 15
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556934891400 session 0x5569333e12c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556937127400 session 0x5569362ed2c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8873000/0x0/0x4ffc00000, data 0x16514bc/0x17eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933526400 session 0x5569353d2000
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8873000/0x0/0x4ffc00000, data 0x16514f5/0x17eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933b6f400 session 0x556933585e00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933b71c00 session 0x556933460f00
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556934891400 session 0x5569333e1c20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362356736 unmapped: 72294400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629019 data_alloc: 218103808 data_used: 9269248
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556935605800 session 0x5569334550e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x5569335b2000 session 0x556932ea85a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e884e000/0x0/0x4ffc00000, data 0x1675504/0x1810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933526400 session 0x5569335830e0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 72253440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 72253440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362414080 unmapped: 72237056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.524005890s of 59.169479370s, submitted: 66
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 72228864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567816 data_alloc: 218103808 data_used: 5070848
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c74000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 72220672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 72204288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 72204288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c74000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362463232 unmapped: 72187904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 296 ms_handle_reset con 0x556933b6f400 session 0x5569354fcb40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570568 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c72000/0x0/0x4ffc00000, data 0x1252f30/0x13eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570568 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362479616 unmapped: 72171520 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c72000/0x0/0x4ffc00000, data 0x1252f30/0x13eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.192964554s of 14.603412628s, submitted: 105
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362561536 unmapped: 72089600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362569728 unmapped: 72081408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362635264 unmapped: 72015872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362659840 unmapped: 71991296 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 104.046302795s of 104.136131287s, submitted: 16
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 ms_handle_reset con 0x556933b71c00 session 0x556933c4f4a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 ms_handle_reset con 0x556934891400 session 0x55693416cb40
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580822 data_alloc: 218103808 data_used: 10911744
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 67788800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580822 data_alloc: 218103808 data_used: 10911744
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [1])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 67772416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,1,1])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 298 ms_handle_reset con 0x556933526400 session 0x556932b5e000
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.556733131s of 11.714488029s, submitted: 44
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508745 data_alloc: 218103808 data_used: 4096000
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 299 ms_handle_reset con 0x5569335b2000 session 0x556932efc960
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e905b000/0x0/0x4ffc00000, data 0xa58102/0xbf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445661 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445661 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.843618393s of 12.988986969s, submitted: 53
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 ms_handle_reset con 0x556933b6f400 session 0x556935cb34a0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 77185024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 77185024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2344058974' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 77119488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 77119488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 77078528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 77078528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 105.469673157s of 105.490242004s, submitted: 20
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: osd.1 303 ms_handle_reset con 0x556933b71c00 session 0x556935cb32c0
Oct  7 11:17:34 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:22:08 np0005473739 podman[462167]: 2025-10-07 15:22:08.841256608 +0000 UTC m=+0.089008801 container create ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:22:08 np0005473739 podman[462167]: 2025-10-07 15:22:08.775382656 +0000 UTC m=+0.023134849 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:22:08 np0005473739 systemd[1]: Started libpod-conmon-ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997.scope.
Oct  7 11:22:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:22:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:22:08 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:22:08 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:22:09 np0005473739 podman[462167]: 2025-10-07 15:22:09.078133079 +0000 UTC m=+0.325885272 container init ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:22:09 np0005473739 podman[462167]: 2025-10-07 15:22:09.091522311 +0000 UTC m=+0.339274504 container start ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:22:09 np0005473739 distracted_ishizaka[462183]: 167 167
Oct  7 11:22:09 np0005473739 systemd[1]: libpod-ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997.scope: Deactivated successfully.
Oct  7 11:22:09 np0005473739 rsyslogd[1004]: imjournal: 15164 messages lost due to rate-limiting (20000 allowed within 600 seconds)
Oct  7 11:22:09 np0005473739 podman[462167]: 2025-10-07 15:22:09.206430774 +0000 UTC m=+0.454182997 container attach ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS)
Oct  7 11:22:09 np0005473739 podman[462167]: 2025-10-07 15:22:09.207657896 +0000 UTC m=+0.455410079 container died ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:22:09 np0005473739 nova_compute[259550]: 2025-10-07 15:22:09.477 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3626: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:09 np0005473739 systemd[1]: var-lib-containers-storage-overlay-70dfa0a3487a4e2b74d3bfb2e05d2d9f624de23cebcf573b20f391cf3ba1ef80-merged.mount: Deactivated successfully.
Oct  7 11:22:10 np0005473739 nova_compute[259550]: 2025-10-07 15:22:10.021 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:10 np0005473739 podman[462167]: 2025-10-07 15:22:10.558211729 +0000 UTC m=+1.805963922 container remove ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=distracted_ishizaka, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:22:10 np0005473739 systemd[1]: libpod-conmon-ab0e1a8b1024d608acc6de9b00db402e243a1c660997e0ea2d41ccb2f9204997.scope: Deactivated successfully.
Oct  7 11:22:10 np0005473739 podman[462207]: 2025-10-07 15:22:10.734210867 +0000 UTC m=+0.044955023 container create 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS)
Oct  7 11:22:10 np0005473739 systemd[1]: Started libpod-conmon-6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932.scope.
Oct  7 11:22:10 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:22:10 np0005473739 podman[462207]: 2025-10-07 15:22:10.71568762 +0000 UTC m=+0.026431816 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:22:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:10 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:10 np0005473739 podman[462207]: 2025-10-07 15:22:10.827383748 +0000 UTC m=+0.138127954 container init 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 11:22:10 np0005473739 podman[462207]: 2025-10-07 15:22:10.846390918 +0000 UTC m=+0.157135074 container start 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:22:10 np0005473739 podman[462207]: 2025-10-07 15:22:10.850730092 +0000 UTC m=+0.161474338 container attach 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:22:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:22:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3627: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:11 np0005473739 sweet_lichterman[462224]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:22:11 np0005473739 sweet_lichterman[462224]: --> relative data size: 1.0
Oct  7 11:22:11 np0005473739 sweet_lichterman[462224]: --> All data devices are unavailable
Oct  7 11:22:11 np0005473739 systemd[1]: libpod-6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932.scope: Deactivated successfully.
Oct  7 11:22:11 np0005473739 systemd[1]: libpod-6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932.scope: Consumed 1.087s CPU time.
Oct  7 11:22:11 np0005473739 conmon[462224]: conmon 6356166d39afc820b794 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932.scope/container/memory.events
Oct  7 11:22:11 np0005473739 podman[462207]: 2025-10-07 15:22:11.986361403 +0000 UTC m=+1.297105559 container died 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef)
Oct  7 11:22:12 np0005473739 systemd[1]: var-lib-containers-storage-overlay-487674ef27ef0b8cba58197b55c0f72fd17c06754e4935d335f963e02a9a9af7-merged.mount: Deactivated successfully.
Oct  7 11:22:12 np0005473739 podman[462207]: 2025-10-07 15:22:12.352056711 +0000 UTC m=+1.662800867 container remove 6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=sweet_lichterman, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:22:12 np0005473739 systemd[1]: libpod-conmon-6356166d39afc820b79417a620ee081f06663d7505b046898ac7aac40c0c5932.scope: Deactivated successfully.
Oct  7 11:22:12 np0005473739 podman[462253]: 2025-10-07 15:22:12.370219869 +0000 UTC m=+0.342472199 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 11:22:12 np0005473739 podman[462260]: 2025-10-07 15:22:12.387148204 +0000 UTC m=+0.359184639 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_id=iscsid, container_name=iscsid)
Oct  7 11:22:13 np0005473739 podman[462447]: 2025-10-07 15:22:13.079108594 +0000 UTC m=+0.035820433 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:22:13 np0005473739 podman[462447]: 2025-10-07 15:22:13.205396685 +0000 UTC m=+0.162108494 container create fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2)
Oct  7 11:22:13 np0005473739 systemd[1]: Started libpod-conmon-fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432.scope.
Oct  7 11:22:13 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:22:13 np0005473739 podman[462447]: 2025-10-07 15:22:13.537817549 +0000 UTC m=+0.494529388 container init fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, ceph=True)
Oct  7 11:22:13 np0005473739 podman[462447]: 2025-10-07 15:22:13.54814138 +0000 UTC m=+0.504853189 container start fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_REF=reef)
Oct  7 11:22:13 np0005473739 hopeful_darwin[462463]: 167 167
Oct  7 11:22:13 np0005473739 systemd[1]: libpod-fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432.scope: Deactivated successfully.
Oct  7 11:22:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3628: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:13 np0005473739 podman[462447]: 2025-10-07 15:22:13.788339479 +0000 UTC m=+0.745051278 container attach fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:22:13 np0005473739 podman[462447]: 2025-10-07 15:22:13.78917662 +0000 UTC m=+0.745888429 container died fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:22:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:22:13 np0005473739 ceph-osd[88039]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.0 total, 600.0 interval#012Cumulative writes: 45K writes, 182K keys, 45K commit groups, 1.0 writes per commit group, ingest: 0.18 GB, 0.03 MB/s#012Cumulative WAL: 45K writes, 16K syncs, 2.78 writes per sync, written: 0.18 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 400 writes, 997 keys, 400 commit groups, 1.0 writes per commit group, ingest: 0.41 MB, 0.00 MB/s#012Interval WAL: 400 writes, 182 syncs, 2.20 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:22:13 np0005473739 systemd[1]: var-lib-containers-storage-overlay-05ac88f63d15c761ed155a6297a8f0ef15a1d7aa1f754e181ee6c351bf82796c-merged.mount: Deactivated successfully.
Oct  7 11:22:13 np0005473739 podman[462447]: 2025-10-07 15:22:13.963035133 +0000 UTC m=+0.919746982 container remove fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hopeful_darwin, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 11:22:13 np0005473739 systemd[1]: libpod-conmon-fe5486250db9c58d2121141929d56eeee0f81f94b21335cb66f80b896e8a8432.scope: Deactivated successfully.
Oct  7 11:22:14 np0005473739 podman[462487]: 2025-10-07 15:22:14.1754432 +0000 UTC m=+0.076462003 container create e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default)
Oct  7 11:22:14 np0005473739 podman[462487]: 2025-10-07 15:22:14.130318573 +0000 UTC m=+0.031337386 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:22:14 np0005473739 systemd[1]: Started libpod-conmon-e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06.scope.
Oct  7 11:22:14 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:22:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01846c62deb9396396089de18a1823533cadd86f0d3ff3e87bd555b9294ed0ee/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01846c62deb9396396089de18a1823533cadd86f0d3ff3e87bd555b9294ed0ee/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01846c62deb9396396089de18a1823533cadd86f0d3ff3e87bd555b9294ed0ee/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:14 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/01846c62deb9396396089de18a1823533cadd86f0d3ff3e87bd555b9294ed0ee/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:14 np0005473739 nova_compute[259550]: 2025-10-07 15:22:14.483 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:14 np0005473739 podman[462487]: 2025-10-07 15:22:14.530690194 +0000 UTC m=+0.431708977 container init e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:22:14 np0005473739 podman[462487]: 2025-10-07 15:22:14.540776759 +0000 UTC m=+0.441795522 container start e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 11:22:14 np0005473739 podman[462487]: 2025-10-07 15:22:14.546074659 +0000 UTC m=+0.447093422 container attach e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507)
Oct  7 11:22:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:22:14.725 161536 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=65, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '82:03:61', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': '3e:f8:7f:4b:21:42'}, ipsec=False) old=SB_Global(nb_cfg=64) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Oct  7 11:22:14 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:22:14.726 161536 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Oct  7 11:22:14 np0005473739 nova_compute[259550]: 2025-10-07 15:22:14.727 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:15 np0005473739 nova_compute[259550]: 2025-10-07 15:22:15.023 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]: {
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:    "0": [
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:        {
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "devices": [
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "/dev/loop3"
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            ],
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_name": "ceph_lv0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_size": "21470642176",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "name": "ceph_lv0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "tags": {
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.cluster_name": "ceph",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.crush_device_class": "",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.encrypted": "0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.osd_id": "0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.type": "block",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.vdo": "0"
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            },
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "type": "block",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "vg_name": "ceph_vg0"
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:        }
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:    ],
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:    "1": [
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:        {
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "devices": [
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "/dev/loop4"
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            ],
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_name": "ceph_lv1",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_size": "21470642176",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "name": "ceph_lv1",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "tags": {
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.cluster_name": "ceph",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.crush_device_class": "",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.encrypted": "0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.osd_id": "1",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.type": "block",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.vdo": "0"
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            },
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "type": "block",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "vg_name": "ceph_vg1"
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:        }
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:    ],
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:    "2": [
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:        {
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "devices": [
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "/dev/loop5"
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            ],
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_name": "ceph_lv2",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_size": "21470642176",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "name": "ceph_lv2",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "tags": {
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.cluster_name": "ceph",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.crush_device_class": "",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.encrypted": "0",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.osd_id": "2",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.type": "block",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:                "ceph.vdo": "0"
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            },
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "type": "block",
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:            "vg_name": "ceph_vg2"
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:        }
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]:    ]
Oct  7 11:22:15 np0005473739 friendly_wozniak[462504]: }
Oct  7 11:22:15 np0005473739 systemd[1]: libpod-e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06.scope: Deactivated successfully.
Oct  7 11:22:15 np0005473739 podman[462513]: 2025-10-07 15:22:15.541362568 +0000 UTC m=+0.033631626 container died e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:22:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3629: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:15 np0005473739 systemd[1]: var-lib-containers-storage-overlay-01846c62deb9396396089de18a1823533cadd86f0d3ff3e87bd555b9294ed0ee-merged.mount: Deactivated successfully.
Oct  7 11:22:15 np0005473739 podman[462513]: 2025-10-07 15:22:15.62549504 +0000 UTC m=+0.117764038 container remove e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=friendly_wozniak, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS)
Oct  7 11:22:15 np0005473739 systemd[1]: libpod-conmon-e82a13442aa147f0205b920ff2f34ddcb3cfc23861dc16aca6189dc50f218d06.scope: Deactivated successfully.
Oct  7 11:22:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:22:16 np0005473739 podman[462667]: 2025-10-07 15:22:16.347348576 +0000 UTC m=+0.044086410 container create 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:22:16 np0005473739 systemd[1]: Started libpod-conmon-2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5.scope.
Oct  7 11:22:16 np0005473739 podman[462667]: 2025-10-07 15:22:16.329299281 +0000 UTC m=+0.026037135 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:22:16 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:22:16 np0005473739 podman[462667]: 2025-10-07 15:22:16.455295405 +0000 UTC m=+0.152033239 container init 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:22:16 np0005473739 podman[462667]: 2025-10-07 15:22:16.464512488 +0000 UTC m=+0.161250302 container start 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:22:16 np0005473739 podman[462667]: 2025-10-07 15:22:16.46987704 +0000 UTC m=+0.166614894 container attach 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef)
Oct  7 11:22:16 np0005473739 wonderful_blackwell[462683]: 167 167
Oct  7 11:22:16 np0005473739 systemd[1]: libpod-2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5.scope: Deactivated successfully.
Oct  7 11:22:16 np0005473739 podman[462667]: 2025-10-07 15:22:16.472471798 +0000 UTC m=+0.169209622 container died 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.label-schema.build-date=20250507)
Oct  7 11:22:16 np0005473739 systemd[1]: var-lib-containers-storage-overlay-44dcee9af340b6f5abe8cbd2bcd945ee2e41a7edfda1c5e098cd6deac035a0f0-merged.mount: Deactivated successfully.
Oct  7 11:22:16 np0005473739 podman[462667]: 2025-10-07 15:22:16.524506006 +0000 UTC m=+0.221243830 container remove 2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=wonderful_blackwell, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 11:22:16 np0005473739 systemd[1]: libpod-conmon-2a0583dae4873ae998984796683693f1330f394005c10b868b5a71e1da259be5.scope: Deactivated successfully.
Oct  7 11:22:16 np0005473739 podman[462705]: 2025-10-07 15:22:16.721408605 +0000 UTC m=+0.073012921 container create c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True)
Oct  7 11:22:16 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:22:16.727 161536 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=52903340-c961-4e65-8ffc-97dd01d2b2e2, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '65'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Oct  7 11:22:16 np0005473739 systemd[1]: Started libpod-conmon-c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d.scope.
Oct  7 11:22:16 np0005473739 podman[462705]: 2025-10-07 15:22:16.674694326 +0000 UTC m=+0.026298672 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:22:16 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:22:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f024f90aabe59d724aa624880f9924ac2ecf084e2d8d57bf82ab756131a36/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f024f90aabe59d724aa624880f9924ac2ecf084e2d8d57bf82ab756131a36/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f024f90aabe59d724aa624880f9924ac2ecf084e2d8d57bf82ab756131a36/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:16 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7e3f024f90aabe59d724aa624880f9924ac2ecf084e2d8d57bf82ab756131a36/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:22:16 np0005473739 podman[462705]: 2025-10-07 15:22:16.823711265 +0000 UTC m=+0.175315581 container init c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:22:16 np0005473739 podman[462705]: 2025-10-07 15:22:16.836666377 +0000 UTC m=+0.188270693 container start c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 11:22:16 np0005473739 podman[462705]: 2025-10-07 15:22:16.841447652 +0000 UTC m=+0.193051968 container attach c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  7 11:22:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3630: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]: {
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "osd_id": 2,
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "type": "bluestore"
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:    },
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "osd_id": 1,
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "type": "bluestore"
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:    },
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "osd_id": 0,
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:        "type": "bluestore"
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]:    }
Oct  7 11:22:17 np0005473739 elegant_brattain[462722]: }
Oct  7 11:22:17 np0005473739 systemd[1]: libpod-c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d.scope: Deactivated successfully.
Oct  7 11:22:17 np0005473739 systemd[1]: libpod-c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d.scope: Consumed 1.144s CPU time.
Oct  7 11:22:17 np0005473739 podman[462705]: 2025-10-07 15:22:17.974853364 +0000 UTC m=+1.326457710 container died c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:22:18 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7e3f024f90aabe59d724aa624880f9924ac2ecf084e2d8d57bf82ab756131a36-merged.mount: Deactivated successfully.
Oct  7 11:22:18 np0005473739 podman[462705]: 2025-10-07 15:22:18.195436815 +0000 UTC m=+1.547041151 container remove c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=elegant_brattain, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:22:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:22:18 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 50K writes, 196K keys, 50K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s#012Cumulative WAL: 50K writes, 18K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 413 writes, 956 keys, 413 commit groups, 1.0 writes per commit group, ingest: 0.47 MB, 0.00 MB/s#012Interval WAL: 413 writes, 182 syncs, 2.27 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:22:18 np0005473739 systemd[1]: libpod-conmon-c4b020de5d067237a7087a637a0b5dc0a8f2cddf549a0f32bde378453c70e06d.scope: Deactivated successfully.
Oct  7 11:22:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:22:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:22:18 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:22:18 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:22:18 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c6e3dedb-5925-479e-9651-ad50235254f5 does not exist
Oct  7 11:22:18 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 844f3ccc-f102-4781-b9c7-de8a28c2d79f does not exist
Oct  7 11:22:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:22:18 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:22:19 np0005473739 nova_compute[259550]: 2025-10-07 15:22:19.488 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3631: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:20 np0005473739 nova_compute[259550]: 2025-10-07 15:22:20.029 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:22:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3632: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:21 np0005473739 nova_compute[259550]: 2025-10-07 15:22:21.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:21 np0005473739 nova_compute[259550]: 2025-10-07 15:22:21.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Oct  7 11:22:22 np0005473739 nova_compute[259550]: 2025-10-07 15:22:22.003 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Oct  7 11:22:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:22:22 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 6600.1 total, 600.0 interval#012Cumulative writes: 36K writes, 145K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s#012Cumulative WAL: 36K writes, 13K syncs, 2.78 writes per sync, written: 0.14 GB, 0.02 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 385 writes, 982 keys, 385 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s#012Interval WAL: 385 writes, 171 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:22:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:22:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:22:22
Oct  7 11:22:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:22:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:22:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.mgr', 'images', 'backups', '.rgw.root', 'default.rgw.log', 'cephfs.cephfs.data', 'volumes', 'default.rgw.meta', 'default.rgw.control', 'vms']
Oct  7 11:22:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:22:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3633: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:22:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:22:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:22:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:22:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:22:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:22:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:22:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:22:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:22:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:22:24 np0005473739 ceph-mgr[74587]: [devicehealth INFO root] Check health
Oct  7 11:22:24 np0005473739 nova_compute[259550]: 2025-10-07 15:22:24.493 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:25 np0005473739 nova_compute[259550]: 2025-10-07 15:22:25.030 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3634: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:22:27 np0005473739 nova_compute[259550]: 2025-10-07 15:22:27.005 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:27 np0005473739 podman[462819]: 2025-10-07 15:22:27.076220633 +0000 UTC m=+0.062622329 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Oct  7 11:22:27 np0005473739 podman[462820]: 2025-10-07 15:22:27.112830165 +0000 UTC m=+0.094827665 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Oct  7 11:22:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3635: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:29 np0005473739 nova_compute[259550]: 2025-10-07 15:22:29.497 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3636: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:30 np0005473739 nova_compute[259550]: 2025-10-07 15:22:30.032 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:22:30 np0005473739 nova_compute[259550]: 2025-10-07 15:22:30.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:30 np0005473739 nova_compute[259550]: 2025-10-07 15:22:30.983 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:22:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3637: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:22:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/868484289' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:22:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:22:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/868484289' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:22:32 np0005473739 nova_compute[259550]: 2025-10-07 15:22:32.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:22:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3638: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:34 np0005473739 nova_compute[259550]: 2025-10-07 15:22:34.503 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:34 np0005473739 nova_compute[259550]: 2025-10-07 15:22:34.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:35 np0005473739 nova_compute[259550]: 2025-10-07 15:22:35.033 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3639: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:22:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3640: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:37 np0005473739 nova_compute[259550]: 2025-10-07 15:22:37.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:38 np0005473739 nova_compute[259550]: 2025-10-07 15:22:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:39 np0005473739 nova_compute[259550]: 2025-10-07 15:22:39.507 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3641: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:39 np0005473739 nova_compute[259550]: 2025-10-07 15:22:39.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:40 np0005473739 nova_compute[259550]: 2025-10-07 15:22:40.035 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:22:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3642: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:43 np0005473739 podman[462864]: 2025-10-07 15:22:43.072871292 +0000 UTC m=+0.057001670 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 11:22:43 np0005473739 podman[462865]: 2025-10-07 15:22:43.076007945 +0000 UTC m=+0.057258187 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 11:22:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3643: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:43 np0005473739 nova_compute[259550]: 2025-10-07 15:22:43.917 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:44 np0005473739 nova_compute[259550]: 2025-10-07 15:22:44.511 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:45 np0005473739 nova_compute[259550]: 2025-10-07 15:22:45.037 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3644: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:22:46 np0005473739 nova_compute[259550]: 2025-10-07 15:22:46.479 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3645: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:49 np0005473739 nova_compute[259550]: 2025-10-07 15:22:49.515 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3646: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:50 np0005473739 nova_compute[259550]: 2025-10-07 15:22:50.038 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:22:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3647: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:51 np0005473739 nova_compute[259550]: 2025-10-07 15:22:51.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.059 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.060 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.060 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.060 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.060 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:22:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:22:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4063851977' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.507 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.446s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.678 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.679 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3586MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.680 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.680 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:22:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.761 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.761 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:22:52 np0005473739 nova_compute[259550]: 2025-10-07 15:22:52.780 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:22:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:22:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1582143350' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:22:53 np0005473739 nova_compute[259550]: 2025-10-07 15:22:53.242 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.461s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:22:53 np0005473739 nova_compute[259550]: 2025-10-07 15:22:53.249 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:22:53 np0005473739 nova_compute[259550]: 2025-10-07 15:22:53.292 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:22:53 np0005473739 nova_compute[259550]: 2025-10-07 15:22:53.293 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:22:53 np0005473739 nova_compute[259550]: 2025-10-07 15:22:53.294 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.614s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:22:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3648: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:54 np0005473739 nova_compute[259550]: 2025-10-07 15:22:54.519 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:54 np0005473739 nova_compute[259550]: 2025-10-07 15:22:54.983 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:54 np0005473739 nova_compute[259550]: 2025-10-07 15:22:54.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Oct  7 11:22:55 np0005473739 nova_compute[259550]: 2025-10-07 15:22:55.040 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3649: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #180. Immutable memtables: 0.
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.059062) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 111] Flushing memtable with next log file: 180
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576059096, "job": 111, "event": "flush_started", "num_memtables": 1, "num_entries": 741, "num_deletes": 257, "total_data_size": 930850, "memory_usage": 944288, "flush_reason": "Manual Compaction"}
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 111] Level-0 flush table #181: started
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576130872, "cf_name": "default", "job": 111, "event": "table_file_creation", "file_number": 181, "file_size": 922399, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75173, "largest_seqno": 75913, "table_properties": {"data_size": 918545, "index_size": 1633, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1157, "raw_key_size": 8430, "raw_average_key_size": 18, "raw_value_size": 910839, "raw_average_value_size": 2042, "num_data_blocks": 73, "num_entries": 446, "num_filter_entries": 446, "num_deletions": 257, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759850514, "oldest_key_time": 1759850514, "file_creation_time": 1759850576, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 181, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 111] Flush lasted 71972 microseconds, and 3517 cpu microseconds.
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.131021) [db/flush_job.cc:967] [default] [JOB 111] Level-0 flush table #181: 922399 bytes OK
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.131055) [db/memtable_list.cc:519] [default] Level-0 commit table #181 started
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.173284) [db/memtable_list.cc:722] [default] Level-0 commit table #181: memtable #1 done
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.173336) EVENT_LOG_v1 {"time_micros": 1759850576173324, "job": 111, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.173369) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 111] Try to delete WAL files size 927038, prev total WAL file size 954167, number of live WAL files 2.
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000177.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.174335) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '6C6F676D0033323730' seq:72057594037927935, type:22 .. '6C6F676D0033353233' seq:0, type:0; will stop at (end)
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 112] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 111 Base level 0, inputs: [181(900KB)], [179(9163KB)]
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576174368, "job": 112, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [181], "files_L6": [179], "score": -1, "input_data_size": 10305862, "oldest_snapshot_seqno": -1}
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 112] Generated table #182: 9107 keys, 10197588 bytes, temperature: kUnknown
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576681841, "cf_name": "default", "job": 112, "event": "table_file_creation", "file_number": 182, "file_size": 10197588, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10140857, "index_size": 32874, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 22789, "raw_key_size": 240777, "raw_average_key_size": 26, "raw_value_size": 9982371, "raw_average_value_size": 1096, "num_data_blocks": 1265, "num_entries": 9107, "num_filter_entries": 9107, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759850576, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 182, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.682111) [db/compaction/compaction_job.cc:1663] [default] [JOB 112] Compacted 1@0 + 1@6 files to L6 => 10197588 bytes
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.705812) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 20.3 rd, 20.1 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(0.9, 8.9 +0.0 blob) out(9.7 +0.0 blob), read-write-amplify(22.2) write-amplify(11.1) OK, records in: 9632, records dropped: 525 output_compression: NoCompression
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.705859) EVENT_LOG_v1 {"time_micros": 1759850576705843, "job": 112, "event": "compaction_finished", "compaction_time_micros": 507574, "compaction_time_cpu_micros": 30596, "output_level": 6, "num_output_files": 1, "total_output_size": 10197588, "num_input_records": 9632, "num_output_records": 9107, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000181.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576706379, "job": 112, "event": "table_file_deletion", "file_number": 181}
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000179.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850576708607, "job": 112, "event": "table_file_deletion", "file_number": 179}
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.174239) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.708668) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.708674) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.708675) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.708677) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:22:56 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:22:56.708679) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:22:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3650: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:22:58 np0005473739 podman[462946]: 2025-10-07 15:22:58.134198346 +0000 UTC m=+0.120453489 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251001)
Oct  7 11:22:58 np0005473739 podman[462947]: 2025-10-07 15:22:58.149760885 +0000 UTC m=+0.128489921 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_controller, org.label-schema.build-date=20251001, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Oct  7 11:22:59 np0005473739 nova_compute[259550]: 2025-10-07 15:22:59.003 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:22:59 np0005473739 nova_compute[259550]: 2025-10-07 15:22:59.003 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:22:59 np0005473739 nova_compute[259550]: 2025-10-07 15:22:59.003 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:22:59 np0005473739 nova_compute[259550]: 2025-10-07 15:22:59.025 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:22:59 np0005473739 nova_compute[259550]: 2025-10-07 15:22:59.522 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:22:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3651: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:00 np0005473739 nova_compute[259550]: 2025-10-07 15:23:00.043 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:23:00.124 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:23:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:23:00.125 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:23:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:23:00.125 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:23:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3652: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3653: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:03 np0005473739 nova_compute[259550]: 2025-10-07 15:23:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:23:04 np0005473739 nova_compute[259550]: 2025-10-07 15:23:04.526 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:05 np0005473739 nova_compute[259550]: 2025-10-07 15:23:05.045 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3654: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3655: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:09 np0005473739 nova_compute[259550]: 2025-10-07 15:23:09.530 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3656: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:10 np0005473739 nova_compute[259550]: 2025-10-07 15:23:10.047 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3657: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3658: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:14 np0005473739 podman[462991]: 2025-10-07 15:23:14.067608557 +0000 UTC m=+0.057268197 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Oct  7 11:23:14 np0005473739 podman[462992]: 2025-10-07 15:23:14.080812824 +0000 UTC m=+0.063177243 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001, container_name=iscsid)
Oct  7 11:23:14 np0005473739 nova_compute[259550]: 2025-10-07 15:23:14.533 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:15 np0005473739 nova_compute[259550]: 2025-10-07 15:23:15.049 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3659: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3660: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:23:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev dc4db16c-618c-4a8d-b9d1-757e1e193aed does not exist
Oct  7 11:23:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 54a80a1b-8a07-474f-85b4-3be1bf36c2af does not exist
Oct  7 11:23:19 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 07e19790-83ca-455c-8e19-edcc4d38f7b6 does not exist
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:23:19 np0005473739 nova_compute[259550]: 2025-10-07 15:23:19.535 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3661: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:23:19 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:23:20 np0005473739 nova_compute[259550]: 2025-10-07 15:23:20.052 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:20 np0005473739 podman[463302]: 2025-10-07 15:23:19.974629518 +0000 UTC m=+0.024907355 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:23:20 np0005473739 podman[463302]: 2025-10-07 15:23:20.330779503 +0000 UTC m=+0.381057290 container create e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:23:20 np0005473739 systemd[1]: Started libpod-conmon-e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e.scope.
Oct  7 11:23:20 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:23:20 np0005473739 podman[463302]: 2025-10-07 15:23:20.56785484 +0000 UTC m=+0.618132637 container init e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, OSD_FLAVOR=default, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:23:20 np0005473739 podman[463302]: 2025-10-07 15:23:20.576075426 +0000 UTC m=+0.626353183 container start e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 11:23:20 np0005473739 modest_keller[463319]: 167 167
Oct  7 11:23:20 np0005473739 systemd[1]: libpod-e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e.scope: Deactivated successfully.
Oct  7 11:23:20 np0005473739 podman[463302]: 2025-10-07 15:23:20.658879271 +0000 UTC m=+0.709157048 container attach e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:23:20 np0005473739 podman[463302]: 2025-10-07 15:23:20.661475009 +0000 UTC m=+0.711752786 container died e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:23:20 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4586fefa4422814d6da0f615cf6d02bf5656212496e806438be2bc5210bae9b1-merged.mount: Deactivated successfully.
Oct  7 11:23:20 np0005473739 podman[463302]: 2025-10-07 15:23:20.902895231 +0000 UTC m=+0.953172988 container remove e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=modest_keller, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Oct  7 11:23:20 np0005473739 systemd[1]: libpod-conmon-e92d955fb2b52badc564ec38daaf0aa440807bff7376000cc2503f69e20b8c0e.scope: Deactivated successfully.
Oct  7 11:23:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:21 np0005473739 podman[463341]: 2025-10-07 15:23:21.153021391 +0000 UTC m=+0.045612769 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:23:21 np0005473739 podman[463341]: 2025-10-07 15:23:21.510473171 +0000 UTC m=+0.403064449 container create 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0)
Oct  7 11:23:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3662: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:21 np0005473739 systemd[1]: Started libpod-conmon-5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e.scope.
Oct  7 11:23:21 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:23:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:21 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:21 np0005473739 podman[463341]: 2025-10-07 15:23:21.80665241 +0000 UTC m=+0.699243708 container init 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, ceph=True, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, CEPH_REF=reef)
Oct  7 11:23:21 np0005473739 podman[463341]: 2025-10-07 15:23:21.816504629 +0000 UTC m=+0.709095937 container start 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:23:21 np0005473739 podman[463341]: 2025-10-07 15:23:21.971184723 +0000 UTC m=+0.863776001 container attach 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:23:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:23:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:23:22
Oct  7 11:23:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:23:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:23:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['cephfs.cephfs.meta', '.rgw.root', 'cephfs.cephfs.data', 'default.rgw.meta', 'backups', 'images', '.mgr', 'volumes', 'default.rgw.log', 'default.rgw.control', 'vms']
Oct  7 11:23:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:23:22 np0005473739 hardcore_lamport[463357]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:23:22 np0005473739 hardcore_lamport[463357]: --> relative data size: 1.0
Oct  7 11:23:22 np0005473739 hardcore_lamport[463357]: --> All data devices are unavailable
Oct  7 11:23:22 np0005473739 systemd[1]: libpod-5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e.scope: Deactivated successfully.
Oct  7 11:23:22 np0005473739 systemd[1]: libpod-5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e.scope: Consumed 1.041s CPU time.
Oct  7 11:23:22 np0005473739 podman[463386]: 2025-10-07 15:23:22.964538444 +0000 UTC m=+0.031647392 container died 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:23:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-25590288f9bc1f874a2304fd827b9729ba3bae68093e5992533bce41f718abcb-merged.mount: Deactivated successfully.
Oct  7 11:23:23 np0005473739 podman[463386]: 2025-10-07 15:23:23.112526302 +0000 UTC m=+0.179635260 container remove 5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=hardcore_lamport, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 11:23:23 np0005473739 systemd[1]: libpod-conmon-5e34319f1b07d4ef34b4f9dfed48e78f8fe8bc2f24c712aa6584bdc428d31b8e.scope: Deactivated successfully.
Oct  7 11:23:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3663: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:23:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:23:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:23:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:23:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:23:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:23:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:23:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:23:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:23:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:23:23 np0005473739 podman[463543]: 2025-10-07 15:23:23.768405009 +0000 UTC m=+0.060306415 container create 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:23:23 np0005473739 systemd[1]: Started libpod-conmon-117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b.scope.
Oct  7 11:23:23 np0005473739 podman[463543]: 2025-10-07 15:23:23.738527395 +0000 UTC m=+0.030428811 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:23:23 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:23:23 np0005473739 podman[463543]: 2025-10-07 15:23:23.871843296 +0000 UTC m=+0.163744692 container init 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, OSD_FLAVOR=default, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Oct  7 11:23:23 np0005473739 podman[463543]: 2025-10-07 15:23:23.879749954 +0000 UTC m=+0.171651320 container start 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, org.label-schema.license=GPLv2, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:23:23 np0005473739 pedantic_chatterjee[463559]: 167 167
Oct  7 11:23:23 np0005473739 systemd[1]: libpod-117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b.scope: Deactivated successfully.
Oct  7 11:23:23 np0005473739 podman[463543]: 2025-10-07 15:23:23.885869445 +0000 UTC m=+0.177770811 container attach 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:23:23 np0005473739 podman[463543]: 2025-10-07 15:23:23.887417035 +0000 UTC m=+0.179318401 container died 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:23:23 np0005473739 systemd[1]: var-lib-containers-storage-overlay-254a87a56049e2ecea16e77c25636492b5ed3806eed179c1d01ff334259674cd-merged.mount: Deactivated successfully.
Oct  7 11:23:23 np0005473739 podman[463543]: 2025-10-07 15:23:23.952152046 +0000 UTC m=+0.244053412 container remove 117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=pedantic_chatterjee, org.label-schema.vendor=CentOS, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:23:23 np0005473739 systemd[1]: libpod-conmon-117fd8ccba92121cb164d90f68b0ef92fa1704a53bdeadbe9b852c4317ccb76b.scope: Deactivated successfully.
Oct  7 11:23:24 np0005473739 podman[463582]: 2025-10-07 15:23:24.141973742 +0000 UTC m=+0.045658361 container create 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 11:23:24 np0005473739 systemd[1]: Started libpod-conmon-2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65.scope.
Oct  7 11:23:24 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:23:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c04c40d798398ff920fb664118f9482f8b3433a5db2a2f64881d38e05e5b82/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c04c40d798398ff920fb664118f9482f8b3433a5db2a2f64881d38e05e5b82/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c04c40d798398ff920fb664118f9482f8b3433a5db2a2f64881d38e05e5b82/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:24 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/68c04c40d798398ff920fb664118f9482f8b3433a5db2a2f64881d38e05e5b82/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:24 np0005473739 podman[463582]: 2025-10-07 15:23:24.124991646 +0000 UTC m=+0.028676285 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:23:24 np0005473739 podman[463582]: 2025-10-07 15:23:24.238138238 +0000 UTC m=+0.141822897 container init 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef)
Oct  7 11:23:24 np0005473739 podman[463582]: 2025-10-07 15:23:24.251402686 +0000 UTC m=+0.155087315 container start 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, io.buildah.version=1.39.3)
Oct  7 11:23:24 np0005473739 podman[463582]: 2025-10-07 15:23:24.255235977 +0000 UTC m=+0.158920636 container attach 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, ceph=True)
Oct  7 11:23:24 np0005473739 nova_compute[259550]: 2025-10-07 15:23:24.539 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:25 np0005473739 nova_compute[259550]: 2025-10-07 15:23:25.054 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]: {
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:    "0": [
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:        {
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "devices": [
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "/dev/loop3"
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            ],
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_name": "ceph_lv0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_size": "21470642176",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "name": "ceph_lv0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "tags": {
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.cluster_name": "ceph",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.crush_device_class": "",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.encrypted": "0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.osd_id": "0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.type": "block",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.vdo": "0"
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            },
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "type": "block",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "vg_name": "ceph_vg0"
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:        }
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:    ],
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:    "1": [
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:        {
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "devices": [
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "/dev/loop4"
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            ],
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_name": "ceph_lv1",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_size": "21470642176",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "name": "ceph_lv1",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "tags": {
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.cluster_name": "ceph",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.crush_device_class": "",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.encrypted": "0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.osd_id": "1",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.type": "block",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.vdo": "0"
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            },
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "type": "block",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "vg_name": "ceph_vg1"
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:        }
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:    ],
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:    "2": [
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:        {
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "devices": [
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "/dev/loop5"
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            ],
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_name": "ceph_lv2",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_size": "21470642176",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "name": "ceph_lv2",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "tags": {
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.cluster_name": "ceph",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.crush_device_class": "",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.encrypted": "0",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.osd_id": "2",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.type": "block",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:                "ceph.vdo": "0"
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            },
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "type": "block",
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:            "vg_name": "ceph_vg2"
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:        }
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]:    ]
Oct  7 11:23:25 np0005473739 suspicious_turing[463598]: }
Oct  7 11:23:25 np0005473739 systemd[1]: libpod-2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65.scope: Deactivated successfully.
Oct  7 11:23:25 np0005473739 podman[463582]: 2025-10-07 15:23:25.139534464 +0000 UTC m=+1.043219133 container died 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:23:25 np0005473739 systemd[1]: var-lib-containers-storage-overlay-68c04c40d798398ff920fb664118f9482f8b3433a5db2a2f64881d38e05e5b82-merged.mount: Deactivated successfully.
Oct  7 11:23:25 np0005473739 podman[463582]: 2025-10-07 15:23:25.226324264 +0000 UTC m=+1.130008893 container remove 2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=suspicious_turing, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, OSD_FLAVOR=default, ceph=True, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 11:23:25 np0005473739 systemd[1]: libpod-conmon-2569ac5de6ca557309ade89572a7ce5274b3f4baa33fa978796b24dbc7836a65.scope: Deactivated successfully.
Oct  7 11:23:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3664: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:25 np0005473739 podman[463760]: 2025-10-07 15:23:25.885341324 +0000 UTC m=+0.047899239 container create 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, ceph=True, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:23:25 np0005473739 systemd[1]: Started libpod-conmon-6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0.scope.
Oct  7 11:23:25 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:23:25 np0005473739 podman[463760]: 2025-10-07 15:23:25.867849805 +0000 UTC m=+0.030407730 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:23:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:25 np0005473739 podman[463760]: 2025-10-07 15:23:25.98147809 +0000 UTC m=+0.144036025 container init 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:23:25 np0005473739 podman[463760]: 2025-10-07 15:23:25.989467279 +0000 UTC m=+0.152025174 container start 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default)
Oct  7 11:23:25 np0005473739 heuristic_almeida[463776]: 167 167
Oct  7 11:23:25 np0005473739 podman[463760]: 2025-10-07 15:23:25.994136862 +0000 UTC m=+0.156694787 container attach 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default)
Oct  7 11:23:25 np0005473739 systemd[1]: libpod-6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0.scope: Deactivated successfully.
Oct  7 11:23:25 np0005473739 podman[463760]: 2025-10-07 15:23:25.995230191 +0000 UTC m=+0.157788136 container died 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:23:26 np0005473739 systemd[1]: var-lib-containers-storage-overlay-5a83e31d7f140dc3a03b03ada7f12904e5b4d52cd6f8ec3d1ddd90c997292b7f-merged.mount: Deactivated successfully.
Oct  7 11:23:26 np0005473739 podman[463760]: 2025-10-07 15:23:26.056009907 +0000 UTC m=+0.218567802 container remove 6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=heuristic_almeida, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef)
Oct  7 11:23:26 np0005473739 systemd[1]: libpod-conmon-6157c4fb3c41d3db7e3cd6294f9b0de4daf012ed5acb58fefd05ac679a2fd3e0.scope: Deactivated successfully.
Oct  7 11:23:26 np0005473739 podman[463803]: 2025-10-07 15:23:26.265721906 +0000 UTC m=+0.049896482 container create 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 11:23:26 np0005473739 systemd[1]: Started libpod-conmon-159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804.scope.
Oct  7 11:23:26 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:23:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a1cb66e7ad778694fe995bd6560125af0ccb97eb3fa9b8865de7a07305731f/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a1cb66e7ad778694fe995bd6560125af0ccb97eb3fa9b8865de7a07305731f/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:26 np0005473739 podman[463803]: 2025-10-07 15:23:26.245394712 +0000 UTC m=+0.029569118 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:23:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a1cb66e7ad778694fe995bd6560125af0ccb97eb3fa9b8865de7a07305731f/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:26 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/34a1cb66e7ad778694fe995bd6560125af0ccb97eb3fa9b8865de7a07305731f/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:23:26 np0005473739 podman[463803]: 2025-10-07 15:23:26.35533552 +0000 UTC m=+0.139509886 container init 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, ceph=True, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 11:23:26 np0005473739 podman[463803]: 2025-10-07 15:23:26.363248327 +0000 UTC m=+0.147422693 container start 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, org.label-schema.license=GPLv2, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:23:26 np0005473739 podman[463803]: 2025-10-07 15:23:26.367699455 +0000 UTC m=+0.151873871 container attach 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]: {
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "osd_id": 2,
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "type": "bluestore"
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:    },
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "osd_id": 1,
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "type": "bluestore"
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:    },
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "osd_id": 0,
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:        "type": "bluestore"
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]:    }
Oct  7 11:23:27 np0005473739 flamboyant_wright[463819]: }
Oct  7 11:23:27 np0005473739 systemd[1]: libpod-159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804.scope: Deactivated successfully.
Oct  7 11:23:27 np0005473739 systemd[1]: libpod-159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804.scope: Consumed 1.089s CPU time.
Oct  7 11:23:27 np0005473739 podman[463803]: 2025-10-07 15:23:27.447208271 +0000 UTC m=+1.231382637 container died 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef)
Oct  7 11:23:27 np0005473739 systemd[1]: var-lib-containers-storage-overlay-34a1cb66e7ad778694fe995bd6560125af0ccb97eb3fa9b8865de7a07305731f-merged.mount: Deactivated successfully.
Oct  7 11:23:27 np0005473739 podman[463803]: 2025-10-07 15:23:27.509529857 +0000 UTC m=+1.293704263 container remove 159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=flamboyant_wright, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:23:27 np0005473739 systemd[1]: libpod-conmon-159f9b384c478750e7d8fe23c967cf2301a45137ace63fe3b11fe0e2d5e49804.scope: Deactivated successfully.
Oct  7 11:23:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:23:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:23:27 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:23:27 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:23:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ce13f93a-ae27-4335-b0fd-6850ded24d3b does not exist
Oct  7 11:23:27 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cca867dd-b469-4dd5-b529-2581c21fca51 does not exist
Oct  7 11:23:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3665: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:27 np0005473739 nova_compute[259550]: 2025-10-07 15:23:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:23:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:23:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:23:29 np0005473739 podman[463914]: 2025-10-07 15:23:29.1036738 +0000 UTC m=+0.080788773 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Oct  7 11:23:29 np0005473739 podman[463915]: 2025-10-07 15:23:29.112465411 +0000 UTC m=+0.089765329 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251001)
Oct  7 11:23:29 np0005473739 nova_compute[259550]: 2025-10-07 15:23:29.543 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3666: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:23:30 np0005473739 nova_compute[259550]: 2025-10-07 15:23:30.056 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:30 np0005473739 nova_compute[259550]: 2025-10-07 15:23:30.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:23:30 np0005473739 nova_compute[259550]: 2025-10-07 15:23:30.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:23:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3667: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct  7 11:23:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:23:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2582887299' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:23:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:23:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2582887299' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:23:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3668: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 1.7 KiB/s rd, 0 B/s wr, 2 op/s
Oct  7 11:23:33 np0005473739 nova_compute[259550]: 2025-10-07 15:23:33.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:23:34 np0005473739 nova_compute[259550]: 2025-10-07 15:23:34.547 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:35 np0005473739 nova_compute[259550]: 2025-10-07 15:23:35.058 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3669: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 24 KiB/s rd, 0 B/s wr, 40 op/s
Oct  7 11:23:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:35 np0005473739 nova_compute[259550]: 2025-10-07 15:23:35.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:23:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3670: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  7 11:23:38 np0005473739 nova_compute[259550]: 2025-10-07 15:23:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:23:39 np0005473739 nova_compute[259550]: 2025-10-07 15:23:39.551 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3671: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  7 11:23:39 np0005473739 nova_compute[259550]: 2025-10-07 15:23:39.936 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:23:39 np0005473739 nova_compute[259550]: 2025-10-07 15:23:39.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:23:40 np0005473739 nova_compute[259550]: 2025-10-07 15:23:40.059 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3672: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 36 KiB/s rd, 0 B/s wr, 59 op/s
Oct  7 11:23:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3673: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct  7 11:23:44 np0005473739 nova_compute[259550]: 2025-10-07 15:23:44.554 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:45 np0005473739 nova_compute[259550]: 2025-10-07 15:23:45.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:45 np0005473739 podman[463961]: 2025-10-07 15:23:45.090244571 +0000 UTC m=+0.063872178 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, config_id=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Oct  7 11:23:45 np0005473739 podman[463960]: 2025-10-07 15:23:45.115015612 +0000 UTC m=+0.087325504 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible)
Oct  7 11:23:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3674: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 34 KiB/s rd, 0 B/s wr, 56 op/s
Oct  7 11:23:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3675: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 11 KiB/s rd, 0 B/s wr, 19 op/s
Oct  7 11:23:49 np0005473739 nova_compute[259550]: 2025-10-07 15:23:49.558 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3676: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  7 11:23:50 np0005473739 nova_compute[259550]: 2025-10-07 15:23:50.063 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3677: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  7 11:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:23:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:23:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3678: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  7 11:23:53 np0005473739 nova_compute[259550]: 2025-10-07 15:23:53.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.012 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.013 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.013 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.013 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.562 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:23:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2788824219' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.638 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.624s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.782 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.783 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3618MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.784 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.784 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.888 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.889 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:23:54 np0005473739 nova_compute[259550]: 2025-10-07 15:23:54.923 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:23:55 np0005473739 nova_compute[259550]: 2025-10-07 15:23:55.064 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:23:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4115273436' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:23:55 np0005473739 nova_compute[259550]: 2025-10-07 15:23:55.375 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.453s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:23:55 np0005473739 nova_compute[259550]: 2025-10-07 15:23:55.382 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:23:55 np0005473739 nova_compute[259550]: 2025-10-07 15:23:55.410 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:23:55 np0005473739 nova_compute[259550]: 2025-10-07 15:23:55.412 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:23:55 np0005473739 nova_compute[259550]: 2025-10-07 15:23:55.413 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:23:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3679: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  7 11:23:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:23:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3680: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  7 11:23:59 np0005473739 nova_compute[259550]: 2025-10-07 15:23:59.566 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:23:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3681: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail; 255 B/s rd, 0 op/s
Oct  7 11:24:00 np0005473739 podman[464042]: 2025-10-07 15:24:00.064148911 +0000 UTC m=+0.052560472 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:24:00 np0005473739 nova_compute[259550]: 2025-10-07 15:24:00.065 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:00 np0005473739 podman[464043]: 2025-10-07 15:24:00.106121884 +0000 UTC m=+0.090398496 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Oct  7 11:24:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:24:00.125 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:24:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:24:00.126 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:24:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:24:00.126 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:24:00 np0005473739 nova_compute[259550]: 2025-10-07 15:24:00.414 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:24:00 np0005473739 nova_compute[259550]: 2025-10-07 15:24:00.414 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:24:00 np0005473739 nova_compute[259550]: 2025-10-07 15:24:00.415 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:24:00 np0005473739 nova_compute[259550]: 2025-10-07 15:24:00.428 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:24:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3682: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3683: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:03 np0005473739 nova_compute[259550]: 2025-10-07 15:24:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:24:04 np0005473739 nova_compute[259550]: 2025-10-07 15:24:04.570 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:05 np0005473739 nova_compute[259550]: 2025-10-07 15:24:05.066 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3684: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3685: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:09 np0005473739 nova_compute[259550]: 2025-10-07 15:24:09.601 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3686: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:10 np0005473739 nova_compute[259550]: 2025-10-07 15:24:10.068 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3687: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3688: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:14 np0005473739 nova_compute[259550]: 2025-10-07 15:24:14.605 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:15 np0005473739 nova_compute[259550]: 2025-10-07 15:24:15.069 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3689: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:16 np0005473739 podman[464086]: 2025-10-07 15:24:16.057776095 +0000 UTC m=+0.049106891 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=iscsid, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team)
Oct  7 11:24:16 np0005473739 podman[464085]: 2025-10-07 15:24:16.059640513 +0000 UTC m=+0.053269400 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251001, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Oct  7 11:24:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3690: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:19 np0005473739 nova_compute[259550]: 2025-10-07 15:24:19.609 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3691: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:20 np0005473739 nova_compute[259550]: 2025-10-07 15:24:20.071 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3692: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:24:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:24:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:24:22
Oct  7 11:24:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:24:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:24:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['images', 'default.rgw.meta', 'cephfs.cephfs.meta', 'default.rgw.control', '.rgw.root', 'volumes', 'backups', 'cephfs.cephfs.data', 'default.rgw.log', '.mgr', 'vms']
Oct  7 11:24:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:24:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:24:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:24:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:24:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:24:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:24:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:24:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:24:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:24:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:24:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:24:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3693: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:24 np0005473739 nova_compute[259550]: 2025-10-07 15:24:24.613 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:25 np0005473739 nova_compute[259550]: 2025-10-07 15:24:25.073 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3694: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3695: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:27 np0005473739 nova_compute[259550]: 2025-10-07 15:24:27.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:24:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev ccf56b99-df22-4ff0-97d0-b80a31dac81e does not exist
Oct  7 11:24:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev c941abd9-cdae-4ae6-a556-f0a2a4b2a3d6 does not exist
Oct  7 11:24:28 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 55953369-4cba-4b27-970d-de366e8481cf does not exist
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:24:28 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:24:29 np0005473739 podman[464394]: 2025-10-07 15:24:29.069827712 +0000 UTC m=+0.043767511 container create 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, OSD_FLAVOR=default, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:24:29 np0005473739 systemd[1]: Started libpod-conmon-52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76.scope.
Oct  7 11:24:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:24:29 np0005473739 podman[464394]: 2025-10-07 15:24:29.143044225 +0000 UTC m=+0.116984044 container init 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default)
Oct  7 11:24:29 np0005473739 podman[464394]: 2025-10-07 15:24:29.051916181 +0000 UTC m=+0.025856000 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:24:29 np0005473739 podman[464394]: 2025-10-07 15:24:29.149579837 +0000 UTC m=+0.123519636 container start 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:24:29 np0005473739 podman[464394]: 2025-10-07 15:24:29.154010853 +0000 UTC m=+0.127950652 container attach 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, ceph=True, org.label-schema.schema-version=1.0, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3)
Oct  7 11:24:29 np0005473739 kind_bose[464412]: 167 167
Oct  7 11:24:29 np0005473739 systemd[1]: libpod-52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76.scope: Deactivated successfully.
Oct  7 11:24:29 np0005473739 conmon[464412]: conmon 52c056d33fd48f3d0d92 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76.scope/container/memory.events
Oct  7 11:24:29 np0005473739 podman[464394]: 2025-10-07 15:24:29.156341974 +0000 UTC m=+0.130281773 container died 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True)
Oct  7 11:24:29 np0005473739 systemd[1]: var-lib-containers-storage-overlay-7acf668f3441202d574ee8e77b9ca8990a0f3c93ce78ecc1027a791a22cda63c-merged.mount: Deactivated successfully.
Oct  7 11:24:29 np0005473739 podman[464394]: 2025-10-07 15:24:29.192492694 +0000 UTC m=+0.166432493 container remove 52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=kind_bose, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Oct  7 11:24:29 np0005473739 systemd[1]: libpod-conmon-52c056d33fd48f3d0d92f5bc706b7327b5c4ff5fa0413db70a57f76315ba4f76.scope: Deactivated successfully.
Oct  7 11:24:29 np0005473739 podman[464437]: 2025-10-07 15:24:29.355872916 +0000 UTC m=+0.050599041 container create b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:24:29 np0005473739 systemd[1]: Started libpod-conmon-b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2.scope.
Oct  7 11:24:29 np0005473739 podman[464437]: 2025-10-07 15:24:29.331132706 +0000 UTC m=+0.025858911 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:24:29 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:24:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:29 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:29 np0005473739 podman[464437]: 2025-10-07 15:24:29.451063656 +0000 UTC m=+0.145789831 container init b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:24:29 np0005473739 podman[464437]: 2025-10-07 15:24:29.460873394 +0000 UTC m=+0.155599509 container start b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:24:29 np0005473739 podman[464437]: 2025-10-07 15:24:29.464384006 +0000 UTC m=+0.159110131 container attach b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0)
Oct  7 11:24:29 np0005473739 nova_compute[259550]: 2025-10-07 15:24:29.617 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3696: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:30 np0005473739 nova_compute[259550]: 2025-10-07 15:24:30.074 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:30 np0005473739 serene_hertz[464454]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:24:30 np0005473739 serene_hertz[464454]: --> relative data size: 1.0
Oct  7 11:24:30 np0005473739 serene_hertz[464454]: --> All data devices are unavailable
Oct  7 11:24:30 np0005473739 systemd[1]: libpod-b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2.scope: Deactivated successfully.
Oct  7 11:24:30 np0005473739 systemd[1]: libpod-b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2.scope: Consumed 1.078s CPU time.
Oct  7 11:24:30 np0005473739 podman[464437]: 2025-10-07 15:24:30.586529362 +0000 UTC m=+1.281255477 container died b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default)
Oct  7 11:24:30 np0005473739 systemd[1]: var-lib-containers-storage-overlay-4f1266a78569277661218ffac7a7049f2cb41871274078a85190c4b2541853e6-merged.mount: Deactivated successfully.
Oct  7 11:24:30 np0005473739 podman[464437]: 2025-10-07 15:24:30.667359304 +0000 UTC m=+1.362085419 container remove b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_hertz, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.build-date=20250507, OSD_FLAVOR=default)
Oct  7 11:24:30 np0005473739 systemd[1]: libpod-conmon-b4f3ffe05ad1289eb5d8249026b680e6caf9956473d8dcdfb58a04b7073e13a2.scope: Deactivated successfully.
Oct  7 11:24:30 np0005473739 podman[464484]: 2025-10-07 15:24:30.704706685 +0000 UTC m=+0.081806210 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Oct  7 11:24:30 np0005473739 podman[464492]: 2025-10-07 15:24:30.760790408 +0000 UTC m=+0.135584722 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251001, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:24:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:31 np0005473739 podman[464676]: 2025-10-07 15:24:31.295225757 +0000 UTC m=+0.050493518 container create d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:24:31 np0005473739 systemd[1]: Started libpod-conmon-d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539.scope.
Oct  7 11:24:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:24:31 np0005473739 podman[464676]: 2025-10-07 15:24:31.274608115 +0000 UTC m=+0.029875886 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:24:31 np0005473739 podman[464676]: 2025-10-07 15:24:31.37114069 +0000 UTC m=+0.126408511 container init d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, ceph=True, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:24:31 np0005473739 podman[464676]: 2025-10-07 15:24:31.385124698 +0000 UTC m=+0.140392469 container start d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 11:24:31 np0005473739 podman[464676]: 2025-10-07 15:24:31.389372629 +0000 UTC m=+0.144640390 container attach d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, CEPH_REF=reef, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS)
Oct  7 11:24:31 np0005473739 inspiring_mclaren[464692]: 167 167
Oct  7 11:24:31 np0005473739 systemd[1]: libpod-d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539.scope: Deactivated successfully.
Oct  7 11:24:31 np0005473739 conmon[464692]: conmon d3dcc85c47adfe3bd4ea <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539.scope/container/memory.events
Oct  7 11:24:31 np0005473739 podman[464676]: 2025-10-07 15:24:31.393791115 +0000 UTC m=+0.149058856 container died d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, OSD_FLAVOR=default)
Oct  7 11:24:31 np0005473739 systemd[1]: var-lib-containers-storage-overlay-a4a774c59ae68d8bbc8f7c10ba65dc602715f95e60aaa7c66cd4489bf992c275-merged.mount: Deactivated successfully.
Oct  7 11:24:31 np0005473739 podman[464676]: 2025-10-07 15:24:31.430840389 +0000 UTC m=+0.186108120 container remove d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=inspiring_mclaren, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:24:31 np0005473739 systemd[1]: libpod-conmon-d3dcc85c47adfe3bd4ea9d0ebd2da9e45ed5f4a607d17dab6efbcb229473a539.scope: Deactivated successfully.
Oct  7 11:24:31 np0005473739 podman[464718]: 2025-10-07 15:24:31.584965997 +0000 UTC m=+0.042090706 container create a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef)
Oct  7 11:24:31 np0005473739 systemd[1]: Started libpod-conmon-a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350.scope.
Oct  7 11:24:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3697: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:31 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:24:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df661b1d25246eeb8e2744119779b107e48073bdd0151d8b05223c3474f4d9db/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df661b1d25246eeb8e2744119779b107e48073bdd0151d8b05223c3474f4d9db/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df661b1d25246eeb8e2744119779b107e48073bdd0151d8b05223c3474f4d9db/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:31 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/df661b1d25246eeb8e2744119779b107e48073bdd0151d8b05223c3474f4d9db/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:31 np0005473739 podman[464718]: 2025-10-07 15:24:31.659489205 +0000 UTC m=+0.116613954 container init a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_REF=reef, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 11:24:31 np0005473739 podman[464718]: 2025-10-07 15:24:31.56528091 +0000 UTC m=+0.022405639 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:24:31 np0005473739 podman[464718]: 2025-10-07 15:24:31.667014132 +0000 UTC m=+0.124138851 container start a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, OSD_FLAVOR=default)
Oct  7 11:24:31 np0005473739 podman[464718]: 2025-10-07 15:24:31.670991456 +0000 UTC m=+0.128116255 container attach a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:24:31 np0005473739 nova_compute[259550]: 2025-10-07 15:24:31.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:24:31 np0005473739 nova_compute[259550]: 2025-10-07 15:24:31.984 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]: {
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:    "0": [
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:        {
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "devices": [
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "/dev/loop3"
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            ],
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_name": "ceph_lv0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_size": "21470642176",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "name": "ceph_lv0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "tags": {
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.cluster_name": "ceph",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.crush_device_class": "",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.encrypted": "0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.osd_id": "0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.type": "block",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.vdo": "0"
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            },
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "type": "block",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "vg_name": "ceph_vg0"
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:        }
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:    ],
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:    "1": [
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:        {
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "devices": [
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "/dev/loop4"
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            ],
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_name": "ceph_lv1",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_size": "21470642176",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "name": "ceph_lv1",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "tags": {
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.cluster_name": "ceph",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.crush_device_class": "",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.encrypted": "0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.osd_id": "1",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.type": "block",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.vdo": "0"
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            },
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "type": "block",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "vg_name": "ceph_vg1"
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:        }
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:    ],
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:    "2": [
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:        {
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "devices": [
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "/dev/loop5"
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            ],
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_name": "ceph_lv2",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_size": "21470642176",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "name": "ceph_lv2",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "tags": {
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.cluster_name": "ceph",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.crush_device_class": "",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.encrypted": "0",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.osd_id": "2",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.type": "block",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:                "ceph.vdo": "0"
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            },
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "type": "block",
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:            "vg_name": "ceph_vg2"
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:        }
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]:    ]
Oct  7 11:24:32 np0005473739 quizzical_gould[464734]: }
Oct  7 11:24:32 np0005473739 systemd[1]: libpod-a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350.scope: Deactivated successfully.
Oct  7 11:24:32 np0005473739 podman[464718]: 2025-10-07 15:24:32.396076032 +0000 UTC m=+0.853200771 container died a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 11:24:32 np0005473739 systemd[1]: var-lib-containers-storage-overlay-df661b1d25246eeb8e2744119779b107e48073bdd0151d8b05223c3474f4d9db-merged.mount: Deactivated successfully.
Oct  7 11:24:32 np0005473739 podman[464718]: 2025-10-07 15:24:32.458946044 +0000 UTC m=+0.916070763 container remove a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=quizzical_gould, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:24:32 np0005473739 systemd[1]: libpod-conmon-a9c65a21c7e58ecb83359b742decc07981f78d03331e12ce156c8a228b23b350.scope: Deactivated successfully.
Oct  7 11:24:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:24:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2786856965' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:24:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:24:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/2786856965' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:24:33 np0005473739 podman[464897]: 2025-10-07 15:24:33.07441501 +0000 UTC m=+0.038998305 container create 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_REF=reef, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:24:33 np0005473739 systemd[1]: Started libpod-conmon-5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef.scope.
Oct  7 11:24:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:24:33 np0005473739 podman[464897]: 2025-10-07 15:24:33.143997648 +0000 UTC m=+0.108580963 container init 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0)
Oct  7 11:24:33 np0005473739 podman[464897]: 2025-10-07 15:24:33.150752625 +0000 UTC m=+0.115335940 container start 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, CEPH_REF=reef, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:24:33 np0005473739 podman[464897]: 2025-10-07 15:24:33.153864757 +0000 UTC m=+0.118448062 container attach 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3)
Oct  7 11:24:33 np0005473739 podman[464897]: 2025-10-07 15:24:33.058864031 +0000 UTC m=+0.023447356 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:24:33 np0005473739 jovial_maxwell[464913]: 167 167
Oct  7 11:24:33 np0005473739 systemd[1]: libpod-5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef.scope: Deactivated successfully.
Oct  7 11:24:33 np0005473739 conmon[464913]: conmon 5eae5440465fdd29c647 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef.scope/container/memory.events
Oct  7 11:24:33 np0005473739 podman[464897]: 2025-10-07 15:24:33.160094871 +0000 UTC m=+0.124678166 container died 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507)
Oct  7 11:24:33 np0005473739 systemd[1]: var-lib-containers-storage-overlay-0ed45e94e6f5b0f72e7fcf1fa157ab6078c82978452eb17f4ff2a07b39300745-merged.mount: Deactivated successfully.
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:24:33 np0005473739 podman[464897]: 2025-10-07 15:24:33.204385194 +0000 UTC m=+0.168968499 container remove 5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jovial_maxwell, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Oct  7 11:24:33 np0005473739 systemd[1]: libpod-conmon-5eae5440465fdd29c647da820b8aef2e9588e56e8c942327f1f3b246a3d5c0ef.scope: Deactivated successfully.
Oct  7 11:24:33 np0005473739 podman[464937]: 2025-10-07 15:24:33.405597049 +0000 UTC m=+0.042781005 container create b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, ceph=True, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:24:33 np0005473739 systemd[1]: Started libpod-conmon-b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125.scope.
Oct  7 11:24:33 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:24:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250ccc3bad133ab53126f3ff45dcf25037a38a94889a833f4e9bf3a82aaf055/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250ccc3bad133ab53126f3ff45dcf25037a38a94889a833f4e9bf3a82aaf055/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250ccc3bad133ab53126f3ff45dcf25037a38a94889a833f4e9bf3a82aaf055/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:33 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/c250ccc3bad133ab53126f3ff45dcf25037a38a94889a833f4e9bf3a82aaf055/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:24:33 np0005473739 podman[464937]: 2025-10-07 15:24:33.387277938 +0000 UTC m=+0.024461924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:24:33 np0005473739 podman[464937]: 2025-10-07 15:24:33.493152969 +0000 UTC m=+0.130336945 container init b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, OSD_FLAVOR=default)
Oct  7 11:24:33 np0005473739 podman[464937]: 2025-10-07 15:24:33.499062245 +0000 UTC m=+0.136246221 container start b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, org.label-schema.build-date=20250507, io.buildah.version=1.39.3, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:24:33 np0005473739 podman[464937]: 2025-10-07 15:24:33.501911009 +0000 UTC m=+0.139094985 container attach b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:24:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3698: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]: {
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "osd_id": 2,
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "type": "bluestore"
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:    },
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "osd_id": 1,
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "type": "bluestore"
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:    },
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "osd_id": 0,
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:        "type": "bluestore"
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]:    }
Oct  7 11:24:34 np0005473739 naughty_lewin[464954]: }
Oct  7 11:24:34 np0005473739 systemd[1]: libpod-b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125.scope: Deactivated successfully.
Oct  7 11:24:34 np0005473739 systemd[1]: libpod-b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125.scope: Consumed 1.013s CPU time.
Oct  7 11:24:34 np0005473739 conmon[464954]: conmon b0e12d97464a8acd32b8 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125.scope/container/memory.events
Oct  7 11:24:34 np0005473739 podman[464937]: 2025-10-07 15:24:34.506342202 +0000 UTC m=+1.143526198 container died b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 11:24:34 np0005473739 systemd[1]: var-lib-containers-storage-overlay-c250ccc3bad133ab53126f3ff45dcf25037a38a94889a833f4e9bf3a82aaf055-merged.mount: Deactivated successfully.
Oct  7 11:24:34 np0005473739 podman[464937]: 2025-10-07 15:24:34.567984191 +0000 UTC m=+1.205168137 container remove b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=naughty_lewin, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, ceph=True, OSD_FLAVOR=default)
Oct  7 11:24:34 np0005473739 systemd[1]: libpod-conmon-b0e12d97464a8acd32b80f09d35c3f8526219f882c28b70dde55848d25670125.scope: Deactivated successfully.
Oct  7 11:24:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:24:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:24:34 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:24:34 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:24:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 3344b6f1-4a67-44d6-ada0-30f82f899fee does not exist
Oct  7 11:24:34 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev cd5370f6-b2d1-46ee-bca4-4ffa53ce6249 does not exist
Oct  7 11:24:34 np0005473739 nova_compute[259550]: 2025-10-07 15:24:34.624 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:34 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:24:34 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:24:35 np0005473739 nova_compute[259550]: 2025-10-07 15:24:35.077 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3699: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:35 np0005473739 nova_compute[259550]: 2025-10-07 15:24:35.980 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:24:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3700: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:37 np0005473739 nova_compute[259550]: 2025-10-07 15:24:37.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:24:38 np0005473739 nova_compute[259550]: 2025-10-07 15:24:38.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:24:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3701: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:39 np0005473739 nova_compute[259550]: 2025-10-07 15:24:39.629 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:39 np0005473739 nova_compute[259550]: 2025-10-07 15:24:39.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:24:40 np0005473739 nova_compute[259550]: 2025-10-07 15:24:40.079 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3702: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3703: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:44 np0005473739 nova_compute[259550]: 2025-10-07 15:24:44.633 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:45 np0005473739 nova_compute[259550]: 2025-10-07 15:24:45.080 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3704: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:47 np0005473739 podman[465050]: 2025-10-07 15:24:47.083158149 +0000 UTC m=+0.063587262 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=iscsid, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:24:47 np0005473739 podman[465049]: 2025-10-07 15:24:47.086229179 +0000 UTC m=+0.071321594 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251001)
Oct  7 11:24:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3705: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3706: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:49 np0005473739 nova_compute[259550]: 2025-10-07 15:24:49.676 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:49 np0005473739 nova_compute[259550]: 2025-10-07 15:24:49.977 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:24:50 np0005473739 nova_compute[259550]: 2025-10-07 15:24:50.083 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3707: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:24:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:24:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3708: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:54 np0005473739 nova_compute[259550]: 2025-10-07 15:24:54.681 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:54 np0005473739 nova_compute[259550]: 2025-10-07 15:24:54.984 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.022 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.023 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.023 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.023 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.135 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:24:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:24:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2153158543' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.555 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.532s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:24:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3709: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.700 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.701 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3565MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.701 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.702 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.861 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:24:55 np0005473739 nova_compute[259550]: 2025-10-07 15:24:55.861 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:24:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:24:56 np0005473739 nova_compute[259550]: 2025-10-07 15:24:56.139 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing inventories for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Oct  7 11:24:56 np0005473739 nova_compute[259550]: 2025-10-07 15:24:56.228 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating ProviderTree inventory for provider cc5ee907-7908-4ad9-99df-64935eda6bff from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Oct  7 11:24:56 np0005473739 nova_compute[259550]: 2025-10-07 15:24:56.228 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Updating inventory in ProviderTree for provider cc5ee907-7908-4ad9-99df-64935eda6bff with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Oct  7 11:24:56 np0005473739 nova_compute[259550]: 2025-10-07 15:24:56.242 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing aggregate associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Oct  7 11:24:56 np0005473739 nova_compute[259550]: 2025-10-07 15:24:56.261 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Refreshing trait associations for resource provider cc5ee907-7908-4ad9-99df-64935eda6bff, traits: COMPUTE_SECURITY_TPM_1_2,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_SSE2,COMPUTE_TRUSTED_CERTS,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_ACCELERATORS,COMPUTE_STORAGE_BUS_FDC,COMPUTE_NODE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_MMX,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSE4A,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_SATA,COMPUTE_STORAGE_BUS_IDE,HW_CPU_X86_CLMUL,HW_CPU_X86_SSE42,COMPUTE_SECURITY_UEFI_SECURE_BOOT,HW_CPU_X86_BMI2,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,HW_CPU_X86_SSSE3,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_F16C,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_DEVICE_TAGGING,HW_CPU_X86_SSE,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_BMI,COMPUTE_RESCUE_BFV,HW_CPU_X86_SSE41,HW_CPU_X86_SVM,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_IMAGE_TYPE_QCOW2,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AESNI,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_VIOMMU_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Oct  7 11:24:56 np0005473739 nova_compute[259550]: 2025-10-07 15:24:56.276 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:24:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:24:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1352555989' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:24:56 np0005473739 nova_compute[259550]: 2025-10-07 15:24:56.726 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.450s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:24:56 np0005473739 nova_compute[259550]: 2025-10-07 15:24:56.731 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:24:56 np0005473739 nova_compute[259550]: 2025-10-07 15:24:56.747 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:24:56 np0005473739 nova_compute[259550]: 2025-10-07 15:24:56.749 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:24:56 np0005473739 nova_compute[259550]: 2025-10-07 15:24:56.750 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:24:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3710: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3711: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:24:59 np0005473739 nova_compute[259550]: 2025-10-07 15:24:59.685 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:25:00.125 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:25:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:25:00.126 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:25:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:25:00.126 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:25:00 np0005473739 nova_compute[259550]: 2025-10-07 15:25:00.137 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:01 np0005473739 podman[465131]: 2025-10-07 15:25:01.064627479 +0000 UTC m=+0.052636874 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251001, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, tcib_managed=true)
Oct  7 11:25:01 np0005473739 podman[465132]: 2025-10-07 15:25:01.094415742 +0000 UTC m=+0.080211999 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251001, managed_by=edpm_ansible)
Oct  7 11:25:01 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3712: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:01 np0005473739 nova_compute[259550]: 2025-10-07 15:25:01.748 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:25:01 np0005473739 nova_compute[259550]: 2025-10-07 15:25:01.749 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Oct  7 11:25:01 np0005473739 nova_compute[259550]: 2025-10-07 15:25:01.749 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Oct  7 11:25:01 np0005473739 nova_compute[259550]: 2025-10-07 15:25:01.765 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Oct  7 11:25:03 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3713: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:03 np0005473739 nova_compute[259550]: 2025-10-07 15:25:03.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:25:04 np0005473739 nova_compute[259550]: 2025-10-07 15:25:04.689 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:05 np0005473739 nova_compute[259550]: 2025-10-07 15:25:05.139 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:05 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3714: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:05 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:07 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3715: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:09 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3716: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:09 np0005473739 nova_compute[259550]: 2025-10-07 15:25:09.694 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:10 np0005473739 nova_compute[259550]: 2025-10-07 15:25:10.142 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:10 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:11 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3717: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:13 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3718: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:14 np0005473739 nova_compute[259550]: 2025-10-07 15:25:14.696 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:15 np0005473739 nova_compute[259550]: 2025-10-07 15:25:15.144 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:15 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3719: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:15 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:17 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3720: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:18 np0005473739 podman[465174]: 2025-10-07 15:25:18.059741657 +0000 UTC m=+0.052097509 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
Oct  7 11:25:18 np0005473739 podman[465175]: 2025-10-07 15:25:18.066162996 +0000 UTC m=+0.055150539 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, container_name=iscsid, org.label-schema.build-date=20251001, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, config_id=iscsid, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Oct  7 11:25:19 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3721: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:19 np0005473739 nova_compute[259550]: 2025-10-07 15:25:19.700 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:20 np0005473739 nova_compute[259550]: 2025-10-07 15:25:20.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:20 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:21 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3722: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:25:22 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:25:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Optimize plan auto_2025-10-07_15:25:22
Oct  7 11:25:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] Mode upmap, max misplaced 0.050000
Oct  7 11:25:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] do_upmap
Oct  7 11:25:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] pools ['.mgr', 'volumes', 'default.rgw.log', 'cephfs.cephfs.data', 'cephfs.cephfs.meta', 'images', 'vms', '.rgw.root', 'default.rgw.control', 'backups', 'default.rgw.meta']
Oct  7 11:25:22 np0005473739 ceph-mgr[74587]: [balancer INFO root] prepared 0/10 changes
Oct  7 11:25:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] TrashPurgeScheduleHandler: load_schedules
Oct  7 11:25:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:25:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] MirrorSnapshotScheduleHandler: load_schedules
Oct  7 11:25:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: vms, start_after=
Oct  7 11:25:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:25:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: volumes, start_after=
Oct  7 11:25:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:25:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: backups, start_after=
Oct  7 11:25:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:25:23 np0005473739 ceph-mgr[74587]: [rbd_support INFO root] load_schedules: images, start_after=
Oct  7 11:25:23 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3723: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:24 np0005473739 nova_compute[259550]: 2025-10-07 15:25:24.704 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:25 np0005473739 nova_compute[259550]: 2025-10-07 15:25:25.148 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:25 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3724: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:25 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:27 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3725: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:29 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3726: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:29 np0005473739 nova_compute[259550]: 2025-10-07 15:25:29.708 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:29 np0005473739 nova_compute[259550]: 2025-10-07 15:25:29.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:25:30 np0005473739 nova_compute[259550]: 2025-10-07 15:25:30.150 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:30 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:31 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3727: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:32 np0005473739 podman[465216]: 2025-10-07 15:25:32.089067738 +0000 UTC m=+0.067576826 container health_status 2a760750860921a5b129db621f3abb23149dc5747a6508b28b66781bd96aa9c6 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn@sha256:c3e651f35b930bcf1a3084be8910c2f3f34d22a976c5379cf518a68d9994bfa7', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251001, tcib_managed=true, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Oct  7 11:25:32 np0005473739 podman[465217]: 2025-10-07 15:25:32.136677168 +0000 UTC m=+0.097802229 container health_status 43c278da05130a17ce3331737cf2a50010d34306f3633535167179d39ce568bf (image=quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller@sha256:d76f7d6620930cc2e9ac070492bbeb525f83ce5ff4947463e3784bf1ce04a857', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0)
Oct  7 11:25:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"df", "format":"json"} v 0) v1
Oct  7 11:25:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1889608050' entity='client.openstack' cmd=[{"prefix":"df", "format":"json"}]: dispatch
Oct  7 11:25:32 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"} v 0) v1
Oct  7 11:25:32 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.10:0/1889608050' entity='client.openstack' cmd=[{"prefix":"osd pool get-quota", "pool": "volumes", "format":"json"}]: dispatch
Oct  7 11:25:32 np0005473739 nova_compute[259550]: 2025-10-07 15:25:32.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:25:32 np0005473739 nova_compute[259550]: 2025-10-07 15:25:32.982 2 DEBUG nova.compute.manager [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] _maybe_adjust
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.mgr' root_id -1 using 7.185749983720779e-06 of space, bias 1.0, pg target 0.0021557249951162337 quantized to 1 (current 1)
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'vms' root_id -1 using 6.359070782053786e-08 of space, bias 1.0, pg target 1.907721234616136e-05 quantized to 32 (current 32)
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'volumes' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'backups' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'images' root_id -1 using 1.9077212346161359e-07 of space, bias 1.0, pg target 5.723163703848408e-05 quantized to 32 (current 32)
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.meta' root_id -1 using 5.087256625643029e-07 of space, bias 4.0, pg target 0.0006104707950771635 quantized to 16 (current 16)
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'cephfs.cephfs.data' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool '.rgw.root' root_id -1 using 2.5436283128215145e-07 of space, bias 1.0, pg target 7.630884938464544e-05 quantized to 32 (current 32)
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.log' root_id -1 using 2.1620840658982875e-06 of space, bias 1.0, pg target 0.0006486252197694863 quantized to 32 (current 32)
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.control' root_id -1 using 0.0 of space, bias 1.0, pg target 0.0 quantized to 32 (current 32)
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] effective_target_ratio 0.0 0.0 0 64411926528
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: [pg_autoscaler INFO root] Pool 'default.rgw.meta' root_id -1 using 1.2718141564107572e-07 of space, bias 4.0, pg target 0.00015261769876929088 quantized to 32 (current 32)
Oct  7 11:25:33 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3728: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:34 np0005473739 nova_compute[259550]: 2025-10-07 15:25:34.712 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:35 np0005473739 nova_compute[259550]: 2025-10-07 15:25:35.152 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.admin"} v 0) v1
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/osd_remove_queue}] v 0) v1
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:25:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev f819cce3-f64a-4461-acb7-06c88141a954 does not exist
Oct  7 11:25:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 925ecaec-31b9-4ea0-a726-3ed3a3b3905a does not exist
Oct  7 11:25:35 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev d8bf0ff8-9060-47fa-882f-75b2f266d250 does not exist
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd tree", "states": ["destroyed"], "format": "json"} v 0) v1
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "osd tree", "states": ["destroyed"], "format": "json"}]: dispatch
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "auth get", "entity": "client.bootstrap-osd"} v 0) v1
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:25:35 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3729: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.admin"}]: dispatch
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' cmd=[{"prefix": "auth get", "entity": "client.bootstrap-osd"}]: dispatch
Oct  7 11:25:35 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:36 np0005473739 podman[465533]: 2025-10-07 15:25:36.064247435 +0000 UTC m=+0.041251235 container create 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 11:25:36 np0005473739 systemd[1]: Started libpod-conmon-0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2.scope.
Oct  7 11:25:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:25:36 np0005473739 podman[465533]: 2025-10-07 15:25:36.131320696 +0000 UTC m=+0.108324516 container init 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:25:36 np0005473739 podman[465533]: 2025-10-07 15:25:36.044460464 +0000 UTC m=+0.021464294 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:25:36 np0005473739 podman[465533]: 2025-10-07 15:25:36.142178211 +0000 UTC m=+0.119182051 container start 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3)
Oct  7 11:25:36 np0005473739 podman[465533]: 2025-10-07 15:25:36.146571186 +0000 UTC m=+0.123575016 container attach 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.label-schema.build-date=20250507, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:25:36 np0005473739 infallible_stonebraker[465549]: 167 167
Oct  7 11:25:36 np0005473739 systemd[1]: libpod-0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2.scope: Deactivated successfully.
Oct  7 11:25:36 np0005473739 podman[465533]: 2025-10-07 15:25:36.148906198 +0000 UTC m=+0.125910028 container died 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2)
Oct  7 11:25:36 np0005473739 systemd[1]: var-lib-containers-storage-overlay-d27d28479245ef47f8a02c208c552ff700b3df8acf2074bd4ca2b447790d79aa-merged.mount: Deactivated successfully.
Oct  7 11:25:36 np0005473739 podman[465533]: 2025-10-07 15:25:36.199968269 +0000 UTC m=+0.176972069 container remove 0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=infallible_stonebraker, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, ceph=True, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:25:36 np0005473739 systemd[1]: libpod-conmon-0676877f08d537b0037fa040376043f331c6b80a9e26c2081c21299c860367e2.scope: Deactivated successfully.
Oct  7 11:25:36 np0005473739 podman[465573]: 2025-10-07 15:25:36.351702765 +0000 UTC m=+0.040075734 container create 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507)
Oct  7 11:25:36 np0005473739 systemd[1]: Started libpod-conmon-3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11.scope.
Oct  7 11:25:36 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:25:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:36 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7/merged/var/lib/ceph/bootstrap-osd/ceph.keyring supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:36 np0005473739 podman[465573]: 2025-10-07 15:25:36.33513525 +0000 UTC m=+0.023508239 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:25:36 np0005473739 podman[465573]: 2025-10-07 15:25:36.515519488 +0000 UTC m=+0.203892537 container init 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, CEPH_REF=reef, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:25:36 np0005473739 podman[465573]: 2025-10-07 15:25:36.525617643 +0000 UTC m=+0.213990612 container start 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, CEPH_REF=reef, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, io.buildah.version=1.39.3, org.label-schema.build-date=20250507)
Oct  7 11:25:36 np0005473739 podman[465573]: 2025-10-07 15:25:36.5864301 +0000 UTC m=+0.274803119 container attach 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, ceph=True, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS)
Oct  7 11:25:36 np0005473739 nova_compute[259550]: 2025-10-07 15:25:36.978 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:25:37 np0005473739 serene_swanson[465589]: --> passed data devices: 0 physical, 3 LVM
Oct  7 11:25:37 np0005473739 serene_swanson[465589]: --> relative data size: 1.0
Oct  7 11:25:37 np0005473739 serene_swanson[465589]: --> All data devices are unavailable
Oct  7 11:25:37 np0005473739 systemd[1]: libpod-3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11.scope: Deactivated successfully.
Oct  7 11:25:37 np0005473739 systemd[1]: libpod-3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11.scope: Consumed 1.029s CPU time.
Oct  7 11:25:37 np0005473739 podman[465573]: 2025-10-07 15:25:37.604702177 +0000 UTC m=+1.293075156 container died 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, OSD_FLAVOR=default, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, io.buildah.version=1.39.3, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2)
Oct  7 11:25:37 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b589c48bbf3eeea69cd376427a6aef4f2e3ea4a8f85f4e6a7540a43939acd9d7-merged.mount: Deactivated successfully.
Oct  7 11:25:37 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3730: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:37 np0005473739 podman[465573]: 2025-10-07 15:25:37.657777652 +0000 UTC m=+1.346150621 container remove 3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=serene_swanson, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.label-schema.vendor=CentOS, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default)
Oct  7 11:25:37 np0005473739 systemd[1]: libpod-conmon-3cdc68da9b6d2700702f88387387cf8c6b45c802620cf3395a5b70c4218abc11.scope: Deactivated successfully.
Oct  7 11:25:38 np0005473739 podman[465772]: 2025-10-07 15:25:38.312116219 +0000 UTC m=+0.051969136 container create 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:25:38 np0005473739 systemd[1]: Started libpod-conmon-86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109.scope.
Oct  7 11:25:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:25:38 np0005473739 podman[465772]: 2025-10-07 15:25:38.283472927 +0000 UTC m=+0.023325924 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:25:38 np0005473739 podman[465772]: 2025-10-07 15:25:38.390543619 +0000 UTC m=+0.130396546 container init 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:25:38 np0005473739 podman[465772]: 2025-10-07 15:25:38.403123939 +0000 UTC m=+0.142976846 container start 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20250507, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git)
Oct  7 11:25:38 np0005473739 podman[465772]: 2025-10-07 15:25:38.407897484 +0000 UTC m=+0.147750401 container attach 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.name=CentOS Stream 9 Base Image, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, ceph=True, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.vendor=CentOS)
Oct  7 11:25:38 np0005473739 intelligent_mestorf[465788]: 167 167
Oct  7 11:25:38 np0005473739 systemd[1]: libpod-86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109.scope: Deactivated successfully.
Oct  7 11:25:38 np0005473739 conmon[465788]: conmon 86a37bbb3a3a88d058b1 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109.scope/container/memory.events
Oct  7 11:25:38 np0005473739 podman[465772]: 2025-10-07 15:25:38.410070972 +0000 UTC m=+0.149923889 container died 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, OSD_FLAVOR=default, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.build-date=20250507)
Oct  7 11:25:38 np0005473739 systemd[1]: var-lib-containers-storage-overlay-59d2d22624289291d52d339ce546d3d97f99d365241b64755bfbf5411ce6cfd3-merged.mount: Deactivated successfully.
Oct  7 11:25:38 np0005473739 podman[465772]: 2025-10-07 15:25:38.451759717 +0000 UTC m=+0.191612624 container remove 86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=intelligent_mestorf, org.label-schema.license=GPLv2, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.schema-version=1.0, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>)
Oct  7 11:25:38 np0005473739 systemd[1]: libpod-conmon-86a37bbb3a3a88d058b154bc0b95c969e1dbf66dde1ee3b1f8dea28db3505109.scope: Deactivated successfully.
Oct  7 11:25:38 np0005473739 podman[465813]: 2025-10-07 15:25:38.614970624 +0000 UTC m=+0.043641638 container create 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, io.buildah.version=1.39.3, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.build-date=20250507, org.label-schema.vendor=CentOS)
Oct  7 11:25:38 np0005473739 systemd[1]: Started libpod-conmon-9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e.scope.
Oct  7 11:25:38 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:25:38 np0005473739 podman[465813]: 2025-10-07 15:25:38.595444581 +0000 UTC m=+0.024115625 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:25:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d61215ea1499f47565b4e9d472dd8cfedee8ceef8297756a32ee1fdf380c46/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d61215ea1499f47565b4e9d472dd8cfedee8ceef8297756a32ee1fdf380c46/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d61215ea1499f47565b4e9d472dd8cfedee8ceef8297756a32ee1fdf380c46/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:38 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b9d61215ea1499f47565b4e9d472dd8cfedee8ceef8297756a32ee1fdf380c46/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:38 np0005473739 podman[465813]: 2025-10-07 15:25:38.709196699 +0000 UTC m=+0.137867743 container init 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad)
Oct  7 11:25:38 np0005473739 podman[465813]: 2025-10-07 15:25:38.724320466 +0000 UTC m=+0.152991480 container start 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, OSD_FLAVOR=default, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.vendor=CentOS)
Oct  7 11:25:38 np0005473739 podman[465813]: 2025-10-07 15:25:38.728100955 +0000 UTC m=+0.156772009 container attach 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.39.3, ceph=True, org.label-schema.build-date=20250507, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.license=GPLv2, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]: {
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:    "0": [
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:        {
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "devices": [
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "/dev/loop3"
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            ],
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_name": "ceph_lv0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_size": "21470642176",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_tags": "ceph.block_device=/dev/ceph_vg0/ceph_lv0,ceph.block_uuid=C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=d73aa1fa-846f-4eff-9f39-a8604df9c070,ceph.osd_id=0,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "name": "ceph_lv0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "path": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "tags": {
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.block_device": "/dev/ceph_vg0/ceph_lv0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.block_uuid": "C8H4nn-Ilfg-YMHp-yWwn-oW1E-RfZ4-Cel4Xc",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.cluster_name": "ceph",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.crush_device_class": "",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.encrypted": "0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.osd_fsid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.osd_id": "0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.type": "block",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.vdo": "0"
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            },
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "type": "block",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "vg_name": "ceph_vg0"
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:        }
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:    ],
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:    "1": [
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:        {
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "devices": [
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "/dev/loop4"
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            ],
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_name": "ceph_lv1",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_size": "21470642176",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_tags": "ceph.block_device=/dev/ceph_vg1/ceph_lv1,ceph.block_uuid=uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50,ceph.osd_id=1,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "name": "ceph_lv1",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "path": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "tags": {
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.block_device": "/dev/ceph_vg1/ceph_lv1",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.block_uuid": "uQzu4S-hoie-lpXC-LNvj-ZyNq-JjMR-bILA8b",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.cluster_name": "ceph",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.crush_device_class": "",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.encrypted": "0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.osd_fsid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.osd_id": "1",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.type": "block",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.vdo": "0"
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            },
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "type": "block",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "vg_name": "ceph_vg1"
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:        }
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:    ],
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:    "2": [
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:        {
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "devices": [
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "/dev/loop5"
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            ],
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_name": "ceph_lv2",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_size": "21470642176",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_tags": "ceph.block_device=/dev/ceph_vg2/ceph_lv2,ceph.block_uuid=wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x,ceph.cephx_lockbox_secret=,ceph.cluster_fsid=82044f27-a8da-5b2a-a297-ff6afc620e1f,ceph.cluster_name=ceph,ceph.crush_device_class=,ceph.encrypted=0,ceph.osd_fsid=4fdcd5b3-36e5-4c62-927d-1d87ba9aa204,ceph.osd_id=2,ceph.osdspec_affinity=default_drive_group,ceph.type=block,ceph.vdo=0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "lv_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "name": "ceph_lv2",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "path": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "tags": {
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.block_device": "/dev/ceph_vg2/ceph_lv2",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.block_uuid": "wXGpZB-1fGj-l8yH-67gA-E4LO-alba-hleC2x",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.cephx_lockbox_secret": "",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.cluster_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.cluster_name": "ceph",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.crush_device_class": "",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.encrypted": "0",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.osd_fsid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.osd_id": "2",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.osdspec_affinity": "default_drive_group",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.type": "block",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:                "ceph.vdo": "0"
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            },
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "type": "block",
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:            "vg_name": "ceph_vg2"
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:        }
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]:    ]
Oct  7 11:25:39 np0005473739 jolly_leakey[465831]: }
Oct  7 11:25:39 np0005473739 systemd[1]: libpod-9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e.scope: Deactivated successfully.
Oct  7 11:25:39 np0005473739 podman[465813]: 2025-10-07 15:25:39.513028893 +0000 UTC m=+0.941699897 container died 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.39.3, OSD_FLAVOR=default, FROM_IMAGE=quay.io/centos/centos:stream9, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:25:39 np0005473739 systemd[1]: var-lib-containers-storage-overlay-b9d61215ea1499f47565b4e9d472dd8cfedee8ceef8297756a32ee1fdf380c46-merged.mount: Deactivated successfully.
Oct  7 11:25:39 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3731: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:39 np0005473739 nova_compute[259550]: 2025-10-07 15:25:39.716 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:25:39 np0005473739 podman[465813]: 2025-10-07 15:25:39.867963196 +0000 UTC m=+1.296634200 container remove 9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=jolly_leakey, ceph=True, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_REF=reef, io.buildah.version=1.39.3, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:25:39 np0005473739 systemd[1]: libpod-conmon-9d228703016e29123e511eaab8eb063c5ab7c42e9424a8ce6afc04606127c86e.scope: Deactivated successfully.
Oct  7 11:25:39 np0005473739 nova_compute[259550]: 2025-10-07 15:25:39.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 11:25:39 np0005473739 nova_compute[259550]: 2025-10-07 15:25:39.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 11:25:40 np0005473739 nova_compute[259550]: 2025-10-07 15:25:40.154 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Oct  7 11:25:40 np0005473739 podman[465994]: 2025-10-07 15:25:40.485695991 +0000 UTC m=+0.041475629 container create 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, io.buildah.version=1.39.3, OSD_FLAVOR=default, org.label-schema.schema-version=1.0)
Oct  7 11:25:40 np0005473739 systemd[1]: Started libpod-conmon-94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41.scope.
Oct  7 11:25:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:25:40 np0005473739 podman[465994]: 2025-10-07 15:25:40.467148214 +0000 UTC m=+0.022927922 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:25:40 np0005473739 podman[465994]: 2025-10-07 15:25:40.574883334 +0000 UTC m=+0.130662982 container init 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.build-date=20250507, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:25:40 np0005473739 podman[465994]: 2025-10-07 15:25:40.58575458 +0000 UTC m=+0.141534218 container start 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.build-date=20250507, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, CEPH_REF=reef, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/)
Oct  7 11:25:40 np0005473739 podman[465994]: 2025-10-07 15:25:40.589499299 +0000 UTC m=+0.145278987 container attach 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, org.label-schema.build-date=20250507, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/)
Oct  7 11:25:40 np0005473739 eager_moore[466010]: 167 167
Oct  7 11:25:40 np0005473739 systemd[1]: libpod-94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41.scope: Deactivated successfully.
Oct  7 11:25:40 np0005473739 conmon[466010]: conmon 94cf04ee10f9d2da420f <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41.scope/container/memory.events
Oct  7 11:25:40 np0005473739 podman[465994]: 2025-10-07 15:25:40.592296791 +0000 UTC m=+0.148076429 container died 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, org.label-schema.build-date=20250507, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.name=CentOS Stream 9 Base Image, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, OSD_FLAVOR=default, CEPH_REF=reef, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, io.buildah.version=1.39.3, ceph=True, org.label-schema.vendor=CentOS, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:25:40 np0005473739 systemd[1]: var-lib-containers-storage-overlay-83da07ce144da91d7ed92b32bd5d41f0e076fa4a32b4ac36996bfb94d4ac7117-merged.mount: Deactivated successfully.
Oct  7 11:25:40 np0005473739 podman[465994]: 2025-10-07 15:25:40.630749722 +0000 UTC m=+0.186529370 container remove 94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41 (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=eager_moore, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, io.buildah.version=1.39.3, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, org.label-schema.build-date=20250507, org.label-schema.license=GPLv2, CEPH_REF=reef, OSD_FLAVOR=default, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:25:40 np0005473739 systemd[1]: libpod-conmon-94cf04ee10f9d2da420f9733f9d31c94996c0731a22f53fd91b711da51eadc41.scope: Deactivated successfully.
Oct  7 11:25:40 np0005473739 systemd-logind[801]: New session 60 of user zuul.
Oct  7 11:25:40 np0005473739 systemd[1]: Started Session 60 of User zuul.
Oct  7 11:25:40 np0005473739 podman[466036]: 2025-10-07 15:25:40.810898824 +0000 UTC m=+0.045311642 container create a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.vendor=CentOS, CEPH_REF=reef, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, ceph=True, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3, org.label-schema.license=GPLv2, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:25:40 np0005473739 systemd[1]: Started libpod-conmon-a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc.scope.
Oct  7 11:25:40 np0005473739 systemd[1]: Started libcrun container.
Oct  7 11:25:40 np0005473739 podman[466036]: 2025-10-07 15:25:40.79438284 +0000 UTC m=+0.028795688 image pull 0f5473a1e726b0feaff0f41f8de8341c0a94f60365d4584f4c10bd6b40d44bc1 quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0
Oct  7 11:25:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2066c2779f11476578dab282f417828a63e16ad202b88d243c7b1343af6cc/merged/rootfs supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2066c2779f11476578dab282f417828a63e16ad202b88d243c7b1343af6cc/merged/etc/ceph/ceph.conf supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2066c2779f11476578dab282f417828a63e16ad202b88d243c7b1343af6cc/merged/var/log/ceph supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:40 np0005473739 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/67a2066c2779f11476578dab282f417828a63e16ad202b88d243c7b1343af6cc/merged/var/lib/ceph/crash supports timestamps until 2038 (0x7fffffff)
Oct  7 11:25:40 np0005473739 podman[466036]: 2025-10-07 15:25:40.901388511 +0000 UTC m=+0.135801349 container init a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, CEPH_REF=reef, org.label-schema.build-date=20250507, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.documentation=https://docs.ceph.com/, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.39.3)
Oct  7 11:25:40 np0005473739 podman[466036]: 2025-10-07 15:25:40.911052584 +0000 UTC m=+0.145465422 container start a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.vendor=CentOS, org.opencontainers.image.documentation=https://docs.ceph.com/, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, io.buildah.version=1.39.3, org.label-schema.schema-version=1.0, ceph=True, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, OSD_FLAVOR=default, org.label-schema.license=GPLv2, CEPH_REF=reef, org.label-schema.build-date=20250507, org.label-schema.name=CentOS Stream 9 Base Image, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9)
Oct  7 11:25:40 np0005473739 podman[466036]: 2025-10-07 15:25:40.914595408 +0000 UTC m=+0.149008246 container attach a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, io.buildah.version=1.39.3, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, CEPH_REF=reef, org.label-schema.vendor=CentOS, OSD_FLAVOR=default, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, FROM_IMAGE=quay.io/centos/centos:stream9, org.label-schema.build-date=20250507, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.label-schema.name=CentOS Stream 9 Base Image, org.opencontainers.image.documentation=https://docs.ceph.com/, ceph=True)
Oct  7 11:25:40 np0005473739 nova_compute[259550]: 2025-10-07 15:25:40.981 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Oct  7 11:25:40 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:41 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3732: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:42 np0005473739 trusting_allen[466074]: {
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:    "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204": {
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "device": "/dev/mapper/ceph_vg2-ceph_lv2",
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "osd_id": 2,
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "osd_uuid": "4fdcd5b3-36e5-4c62-927d-1d87ba9aa204",
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "type": "bluestore"
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:    },
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:    "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50": {
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "device": "/dev/mapper/ceph_vg1-ceph_lv1",
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "osd_id": 1,
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "osd_uuid": "953d6d4f-ef53-4bf6-b1d9-c5f8186dbf50",
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "type": "bluestore"
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:    },
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:    "d73aa1fa-846f-4eff-9f39-a8604df9c070": {
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "ceph_fsid": "82044f27-a8da-5b2a-a297-ff6afc620e1f",
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "device": "/dev/mapper/ceph_vg0-ceph_lv0",
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "osd_id": 0,
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "osd_uuid": "d73aa1fa-846f-4eff-9f39-a8604df9c070",
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:        "type": "bluestore"
Oct  7 11:25:42 np0005473739 trusting_allen[466074]:    }
Oct  7 11:25:42 np0005473739 trusting_allen[466074]: }
Oct  7 11:25:42 np0005473739 systemd[1]: libpod-a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc.scope: Deactivated successfully.
Oct  7 11:25:42 np0005473739 podman[466036]: 2025-10-07 15:25:42.065255922 +0000 UTC m=+1.299668770 container died a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, OSD_FLAVOR=default, org.label-schema.schema-version=1.0, FROM_IMAGE=quay.io/centos/centos:stream9, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_REF=reef, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, org.label-schema.license=GPLv2, org.label-schema.build-date=20250507, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, ceph=True, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, io.buildah.version=1.39.3)
Oct  7 11:25:42 np0005473739 systemd[1]: libpod-a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc.scope: Consumed 1.146s CPU time.
Oct  7 11:25:42 np0005473739 systemd[1]: var-lib-containers-storage-overlay-67a2066c2779f11476578dab282f417828a63e16ad202b88d243c7b1343af6cc-merged.mount: Deactivated successfully.
Oct  7 11:25:42 np0005473739 podman[466036]: 2025-10-07 15:25:42.131337298 +0000 UTC m=+1.365750126 container remove a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc (image=quay.io/ceph/ceph@sha256:1b9158ce28975f95def6a0ad459fa19f1336506074267a4b47c1bd914a00fec0, name=trusting_allen, org.label-schema.build-date=20250507, org.opencontainers.image.documentation=https://docs.ceph.com/, CEPH_GIT_REPO=https://github.com/ceph/ceph.git, org.opencontainers.image.authors=Ceph Release Team <ceph-maintainers@ceph.io>, GANESHA_REPO_BASEURL=https://buildlogs.centos.org/centos/$releasever-stream/storage/$basearch/nfsganesha-5/, org.label-schema.vendor=CentOS, FROM_IMAGE=quay.io/centos/centos:stream9, CEPH_SHA1=6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad, OSD_FLAVOR=default, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, CEPH_REF=reef, io.buildah.version=1.39.3, ceph=True, org.label-schema.name=CentOS Stream 9 Base Image)
Oct  7 11:25:42 np0005473739 systemd[1]: libpod-conmon-a7c3f659b0eb3c9527a208470f0d1e9d56f8aaf3a7902f0b338342333a30dacc.scope: Deactivated successfully.
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0.devices.0}] v 0) v1
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command([{prefix=config-key set, key=mgr/cephadm/host.compute-0}] v 0) v1
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: log_channel(audit) log [INF] : from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:25:42 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 75640a93-fd78-439b-9b41-800433bf52f5 does not exist
Oct  7 11:25:42 np0005473739 ceph-mgr[74587]: [progress WARNING root] complete: ev 67949c2e-c40d-4779-9ed9-3db38a7ef880 does not exist
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [default] New memtable created with log file: #183. Immutable memtables: 0.
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.184408) [db/db_impl/db_impl_compaction_flush.cc:2832] Calling FlushMemTableToOutputFile with column family [default], flush slots available 1, compaction slots available 1, flush slots scheduled 1, compaction slots scheduled 0
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:856] [default] [JOB 113] Flushing memtable with next log file: 183
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742184444, "job": 113, "event": "flush_started", "num_memtables": 1, "num_entries": 1560, "num_deletes": 251, "total_data_size": 2519049, "memory_usage": 2561728, "flush_reason": "Manual Compaction"}
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:885] [default] [JOB 113] Level-0 flush table #184: started
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742197698, "cf_name": "default", "job": 113, "event": "table_file_creation", "file_number": 184, "file_size": 2473228, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 75914, "largest_seqno": 77473, "table_properties": {"data_size": 2465875, "index_size": 4359, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 1925, "raw_key_size": 14837, "raw_average_key_size": 19, "raw_value_size": 2451344, "raw_average_value_size": 3290, "num_data_blocks": 195, "num_entries": 745, "num_filter_entries": 745, "num_deletions": 251, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759850576, "oldest_key_time": 1759850576, "file_creation_time": 1759850742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 184, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [db/flush_job.cc:1019] [default] [JOB 113] Flush lasted 13326 microseconds, and 5856 cpu microseconds.
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.197732) [db/flush_job.cc:967] [default] [JOB 113] Level-0 flush table #184: 2473228 bytes OK
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.197748) [db/memtable_list.cc:519] [default] Level-0 commit table #184 started
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.198943) [db/memtable_list.cc:722] [default] Level-0 commit table #184: memtable #1 done
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.198957) EVENT_LOG_v1 {"time_micros": 1759850742198952, "job": 113, "event": "flush_finished", "output_compression": "NoCompression", "lsm_state": [1, 0, 0, 0, 0, 0, 1], "immutable_memtables": 0}
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.198972) [db/db_impl/db_impl_compaction_flush.cc:299] [default] Level summary: base level 6 level multiplier 10.00 max bytes base 268435456 files[1 0 0 0 0 0 1] max score 0.25
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [db/db_impl/db_impl_files.cc:463] [JOB 113] Try to delete WAL files size 2512270, prev total WAL file size 2512270, number of live WAL files 2.
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000180.log immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.199650) [db/db_impl/db_impl_compaction_flush.cc:3165] [default] Manual compaction from level-0 to level-6 from '7061786F730037353330' seq:72057594037927935, type:22 .. '7061786F730037373832' seq:0, type:0; will stop at (end)
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1995] [default] [JOB 114] Compacting 1@0 + 1@6 files to L6, score -1.00
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:2001] [default]: Compaction start summary: Base version 113 Base level 0, inputs: [184(2415KB)], [182(9958KB)]
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742199713, "job": 114, "event": "compaction_started", "compaction_reason": "ManualCompaction", "files_L0": [184], "files_L6": [182], "score": -1, "input_data_size": 12670816, "oldest_snapshot_seqno": -1}
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [db/compaction/compaction_job.cc:1588] [default] [JOB 114] Generated table #185: 9338 keys, 10949633 bytes, temperature: kUnknown
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742278223, "cf_name": "default", "job": 114, "event": "table_file_creation", "file_number": 185, "file_size": 10949633, "file_checksum": "", "file_checksum_func_name": "Unknown", "smallest_seqno": 0, "largest_seqno": 0, "table_properties": {"data_size": 10890732, "index_size": 34468, "index_partitions": 0, "top_level_index_size": 0, "index_key_is_user_key": 1, "index_value_is_delta_encoded": 1, "filter_size": 23365, "raw_key_size": 246187, "raw_average_key_size": 26, "raw_value_size": 10727547, "raw_average_value_size": 1148, "num_data_blocks": 1327, "num_entries": 9338, "num_filter_entries": 9338, "num_deletions": 0, "num_merge_operands": 0, "num_range_deletions": 0, "format_version": 0, "fixed_key_len": 0, "filter_policy": "bloomfilter", "column_family_name": "default", "column_family_id": 0, "comparator": "leveldb.BytewiseComparator", "merge_operator": "", "prefix_extractor_name": "nullptr", "property_collectors": "[CompactOnDeletionCollector]", "compression": "NoCompression", "compression_options": "window_bits=-14; level=32767; strategy=0; max_dict_bytes=0; zstd_max_train_bytes=0; enabled=0; max_dict_buffer_bytes=0; use_zstd_dict_trainer=1; ", "creation_time": 1759843832, "oldest_key_time": 0, "file_creation_time": 1759850742, "slow_compression_estimated_data_size": 0, "fast_compression_estimated_data_size": 0, "db_id": "e373f884-c326-49b5-81e6-2b809f3b2d39", "db_session_id": "T0TL9PKSKWA4P62ZNSKG", "orig_file_number": 185, "seqno_to_time_mapping": "N/A"}}
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [db/version_set.cc:4390] More existing levels in DB than needed. max_bytes_for_level_multiplier may not be guaranteed.
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.278480) [db/compaction/compaction_job.cc:1663] [default] [JOB 114] Compacted 1@0 + 1@6 files to L6 => 10949633 bytes
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.280843) [db/compaction/compaction_job.cc:865] [default] compacted to: base level 6 level multiplier 10.00 max bytes base 268435456 files[0 0 0 0 0 0 1] max score 0.00, MB/sec: 161.2 rd, 139.3 wr, level 6, files in(1, 1) out(1 +0 blob) MB in(2.4, 9.7 +0.0 blob) out(10.4 +0.0 blob), read-write-amplify(9.6) write-amplify(4.4) OK, records in: 9852, records dropped: 514 output_compression: NoCompression
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.280867) EVENT_LOG_v1 {"time_micros": 1759850742280856, "job": 114, "event": "compaction_finished", "compaction_time_micros": 78587, "compaction_time_cpu_micros": 27117, "output_level": 6, "num_output_files": 1, "total_output_size": 10949633, "num_input_records": 9852, "num_output_records": 9338, "num_subcompactions": 1, "output_compression": "NoCompression", "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [0, 0, 0, 0, 0, 0, 1]}
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000184.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742281495, "job": 114, "event": "table_file_deletion", "file_number": 184}
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: [file/delete_scheduler.cc:74] Deleted file /var/lib/ceph/mon/ceph-compute-0/store.db/000182.sst immediately, rate_bytes_per_sec 0, total_trash_size 0 max_trash_db_ratio 0.250000
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: EVENT_LOG_v1 {"time_micros": 1759850742283654, "job": 114, "event": "table_file_deletion", "file_number": 182}
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.199568) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.283723) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.283729) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.283730) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.283732) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: rocksdb: (Original Log Time 2025/10/07-15:25:42.283733) [db/db_impl/db_impl_compaction_flush.cc:1903] [default] Manual compaction starting
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:25:42 np0005473739 ceph-mon[74295]: from='mgr.14132 192.168.122.100:0/1826331155' entity='mgr.compute-0.kdyrcd' 
Oct  7 11:25:43 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23383 -' entity='client.admin' cmd=[{"prefix": "orch status", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:43 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3733: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:44 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23385 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:44 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "status"} v 0) v1
Oct  7 11:25:44 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1068047090' entity='client.admin' cmd=[{"prefix": "status"}]: dispatch
Oct  7 11:25:44 np0005473739 nova_compute[259550]: 2025-10-07 15:25:44.721 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:45 np0005473739 nova_compute[259550]: 2025-10-07 15:25:45.156 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:45 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3734: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:45 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:47 np0005473739 ovs-vsctl[466435]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Oct  7 11:25:47 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3735: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:48 np0005473739 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Oct  7 11:25:48 np0005473739 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Oct  7 11:25:48 np0005473739 virtqemud[259430]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Oct  7 11:25:49 np0005473739 podman[466678]: 2025-10-07 15:25:49.10673579 +0000 UTC m=+0.104578778 container health_status fd5877220a96deca0c9b3a7f7110a2c5a451f3369cbed0af963e679d92525a9c (image=quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f, name=iscsid, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=iscsid, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251001, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/iscsid', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-iscsid@sha256:261e76f60c6bc6b172dc3608504552c63e83358a4fa3c0952a671544d83aa83f', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/iscsid.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run:/run', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:z', '/etc/target:/etc/target:z', '/var/lib/iscsi:/var/lib/iscsi:z', '/var/lib/openstack/healthchecks/iscsid:/openstack:ro,z']}, container_name=iscsid, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297)
Oct  7 11:25:49 np0005473739 podman[466675]: 2025-10-07 15:25:49.108453005 +0000 UTC m=+0.110240127 container health_status 819750d417417c666c8f7de3b589dd40f430b9fe596e8fd197d0ec4bb4afda4f (image=quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=b78cfc68a577b1553523c8a70a34e297, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd@sha256:02d33f59749441cd5751c319e9d7cff97ab1004844c0e992650d340c6e8fbf43', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi:z', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251001)
Oct  7 11:25:49 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: cache status {prefix=cache status} (starting...)
Oct  7 11:25:49 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: client ls {prefix=client ls} (starting...)
Oct  7 11:25:49 np0005473739 lvm[466803]: PV /dev/loop5 online, VG ceph_vg2 is complete.
Oct  7 11:25:49 np0005473739 lvm[466803]: VG ceph_vg2 finished
Oct  7 11:25:49 np0005473739 lvm[466819]: PV /dev/loop3 online, VG ceph_vg0 is complete.
Oct  7 11:25:49 np0005473739 lvm[466819]: VG ceph_vg0 finished
Oct  7 11:25:49 np0005473739 lvm[466827]: PV /dev/loop4 online, VG ceph_vg1 is complete.
Oct  7 11:25:49 np0005473739 lvm[466827]: VG ceph_vg1 finished
Oct  7 11:25:49 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3736: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:49 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23389 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:49 np0005473739 nova_compute[259550]: 2025-10-07 15:25:49.724 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:49 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: damage ls {prefix=damage ls} (starting...)
Oct  7 11:25:50 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump loads {prefix=dump loads} (starting...)
Oct  7 11:25:50 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23391 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:50 np0005473739 nova_compute[259550]: 2025-10-07 15:25:50.158 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:50 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump tree {prefix=dump tree,root=/} (starting...)
Oct  7 11:25:50 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_blocked_ops {prefix=dump_blocked_ops} (starting...)
Oct  7 11:25:50 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_historic_ops {prefix=dump_historic_ops} (starting...)
Oct  7 11:25:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "report"} v 0) v1
Oct  7 11:25:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3638627149' entity='client.admin' cmd=[{"prefix": "report"}]: dispatch
Oct  7 11:25:50 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_historic_ops_by_duration {prefix=dump_historic_ops_by_duration} (starting...)
Oct  7 11:25:50 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23397 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:50 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:25:50.817+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  7 11:25:50 np0005473739 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  7 11:25:50 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: dump_ops_in_flight {prefix=dump_ops_in_flight} (starting...)
Oct  7 11:25:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config generate-minimal-conf"} v 0) v1
Oct  7 11:25:50 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/948022299' entity='client.admin' cmd=[{"prefix": "config generate-minimal-conf"}]: dispatch
Oct  7 11:25:50 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: get subtrees {prefix=get subtrees} (starting...)
Oct  7 11:25:50 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:51 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: ops {prefix=ops} (starting...)
Oct  7 11:25:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm"} v 0) v1
Oct  7 11:25:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1421570317' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm"}]: dispatch
Oct  7 11:25:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config log"} v 0) v1
Oct  7 11:25:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/776950298' entity='client.admin' cmd=[{"prefix": "config log"}]: dispatch
Oct  7 11:25:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  7 11:25:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4081423507' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  7 11:25:51 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3737: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:51 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "config-key dump"} v 0) v1
Oct  7 11:25:51 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1735205685' entity='client.admin' cmd=[{"prefix": "config-key dump"}]: dispatch
Oct  7 11:25:51 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: session ls {prefix=session ls} (starting...)
Oct  7 11:25:52 np0005473739 ceph-mds[100686]: mds.cephfs.compute-0.xpofvx asok_command: status {prefix=status} (starting...)
Oct  7 11:25:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  7 11:25:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3353477631' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  7 11:25:52 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23411 -' entity='client.admin' cmd=[{"prefix": "crash ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  7 11:25:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/21338404' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  7 11:25:52 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23415 -' entity='client.admin' cmd=[{"prefix": "crash stat", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] scanning for idle connections..
Oct  7 11:25:52 np0005473739 ceph-mgr[74587]: [volumes INFO mgr_util] cleaning up connections: []
Oct  7 11:25:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  7 11:25:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3274021769' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  7 11:25:52 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "features"} v 0) v1
Oct  7 11:25:52 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2011630334' entity='client.admin' cmd=[{"prefix": "features"}]: dispatch
Oct  7 11:25:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "health", "detail": "detail"} v 0) v1
Oct  7 11:25:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4044100278' entity='client.admin' cmd=[{"prefix": "health", "detail": "detail"}]: dispatch
Oct  7 11:25:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr stat"} v 0) v1
Oct  7 11:25:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3455579351' entity='client.admin' cmd=[{"prefix": "mgr stat"}]: dispatch
Oct  7 11:25:53 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3738: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:53 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23425 -' entity='client.admin' cmd=[{"prefix": "insights", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:53 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:25:53.717+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  7 11:25:53 np0005473739 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'insights' is not enabled/loaded (required by command 'insights'): use `ceph mgr module enable insights` to enable it
Oct  7 11:25:53 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  7 11:25:53 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1842496646' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  7 11:25:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"} v 0) v1
Oct  7 11:25:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1260388552' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "audit"}]: dispatch
Oct  7 11:25:54 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23431 -' entity='client.admin' cmd=[{"prefix": "orch host ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:54 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23433 -' entity='client.admin' cmd=[{"prefix": "orch device ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:54 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"} v 0) v1
Oct  7 11:25:54 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/625669770' entity='client.admin' cmd=[{"prefix": "log last", "num": 10000, "level": "debug", "channel": "cluster"}]: dispatch
Oct  7 11:25:54 np0005473739 nova_compute[259550]: 2025-10-07 15:25:54.728 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:54 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23437 -' entity='client.admin' cmd=[{"prefix": "orch ls", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:54 np0005473739 nova_compute[259550]: 2025-10-07 15:25:54.982 2 DEBUG oslo_service.periodic_task [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.020 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.021 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.021 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:25:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump"} v 0) v1
Oct  7 11:25:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1249979617' entity='client.admin' cmd=[{"prefix": "mgr dump"}]: dispatch
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.200 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:55 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23441 -' entity='client.admin' cmd=[{"prefix": "orch ls", "export": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fc6caf00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 32.682163239s of 33.369308472s, submitted: 9
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fe0ea5a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 45948928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282820608 unmapped: 45948928 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282828800 unmapped: 45940736 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282853376 unmapped: 45916160 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282869760 unmapped: 45899776 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 45891584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282877952 unmapped: 45891584 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.930875778s of 10.032606125s, submitted: 31
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2944731 data_alloc: 218103808 data_used: 13377536
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 282894336 unmapped: 45875200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eea78000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283410432 unmapped: 45359104 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283951104 unmapped: 44818432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee52b000/0x0/0x4ffc00000, data 0x173ed2f/0x18c3000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 44769280 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee50e000/0x0/0x4ffc00000, data 0x175bd2f/0x18e0000, compress 0x0/0x0/0x0, omap 0x639, meta 0xfe0f9c7), peers [0,1] op hist [0,0,0,0,0,1])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2991023 data_alloc: 218103808 data_used: 13512704
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 43827200 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284950528 unmapped: 43819008 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285999104 unmapped: 42770432 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee0e8000/0x0/0x4ffc00000, data 0x1761d2f/0x18e6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1022f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2992473 data_alloc: 218103808 data_used: 13512704
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.205342293s of 13.499714851s, submitted: 106
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef545000/0x0/0x4ffc00000, data 0x1764d2f/0x18e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef545000/0x0/0x4ffc00000, data 0x1764d2f/0x18e9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993209 data_alloc: 218103808 data_used: 13512704
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 43810816 heap: 328769536 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 290521088 unmapped: 45596672 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fd2e45a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffdbb4a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fd29dc20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4fdf8a000
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ffe29860
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecf1000/0x0/0x4ffc00000, data 0x1fb8d2f/0x213d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053185 data_alloc: 218103808 data_used: 13512704
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fdf8bc20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0c960
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 51142656 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4fdd5b2c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053185 data_alloc: 218103808 data_used: 13512704
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.148612022s of 13.547485352s, submitted: 7
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f4ffdbba40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecf1000/0x0/0x4ffc00000, data 0x1fb8d2f/0x213d000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286031872 unmapped: 50085888 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecef000/0x0/0x4ffc00000, data 0x1fb8d62/0x213f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109156 data_alloc: 218103808 data_used: 21655552
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eecef000/0x0/0x4ffc00000, data 0x1fb8d62/0x213f000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3109156 data_alloc: 218103808 data_used: 21655552
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.794086456s of 11.875811577s, submitted: 5
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287318016 unmapped: 48799744 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287940608 unmapped: 48177152 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289021952 unmapped: 47095808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39e000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3187768 data_alloc: 218103808 data_used: 22167552
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289079296 unmapped: 47038464 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39d000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3188904 data_alloc: 218103808 data_used: 22589440
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee39d000/0x0/0x4ffc00000, data 0x2909d62/0x2a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf21e00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4ff9972c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289087488 unmapped: 47030272 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.230875969s of 11.849612236s, submitted: 48
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fed86f00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef543000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2990971 data_alloc: 218103808 data_used: 13225984
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef543000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef542000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef542000/0x0/0x4ffc00000, data 0x1765d2f/0x18ea000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2991147 data_alloc: 218103808 data_used: 13225984
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdd48000 session 0x55f4fdd5b0e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f501979c20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efab8000/0x0/0x4ffc00000, data 0x11f1d2f/0x1376000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fcf214a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283762688 unmapped: 52355072 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283770880 unmapped: 52346880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283787264 unmapped: 52330496 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4efadc000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283795456 unmapped: 52322304 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2943493 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283803648 unmapped: 52314112 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 283803648 unmapped: 52314112 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 38.062763214s of 38.424762726s, submitted: 20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 49577984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fd29c960
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4feca9860
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877400 session 0x55f501978780
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff0eb4a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fd2e4d20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef57d000/0x0/0x4ffc00000, data 0x172cd2f/0x18b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2987362 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffa0d0e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef57d000/0x0/0x4ffc00000, data 0x172cd2f/0x18b1000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0dc20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 52117504 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ff35b800 session 0x55f4ff9e8780
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdfa9e00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 51937280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2992039 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 51937280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284327936 unmapped: 51789824 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031879 data_alloc: 218103808 data_used: 18636800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031879 data_alloc: 218103808 data_used: 18636800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ef559000/0x0/0x4ffc00000, data 0x1750d2f/0x18d5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xedcf9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 50937856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.940246582s of 19.882999420s, submitted: 27
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286670848 unmapped: 49446912 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287866880 unmapped: 48250880 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3110761 data_alloc: 218103808 data_used: 20398080
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287907840 unmapped: 48209920 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3111081 data_alloc: 218103808 data_used: 20406272
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edb88000/0x0/0x4ffc00000, data 0x1f81d2f/0x2106000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 287916032 unmapped: 48201728 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.229282379s of 29.198295593s, submitted: 69
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fed67e00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc3860
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286769152 unmapped: 49348608 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ff9e8000
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286777344 unmapped: 49340416 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286785536 unmapped: 49332224 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2951223 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffb9f800 session 0x55f4ff9e9680
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fe026780
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4feca81e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee93c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff9974a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 25.315746307s of 25.564805984s, submitted: 37
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4feca85a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdb38c00 session 0x55f4ffe29e00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fed55860
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4feca9680
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fedac1e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982457 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fe026780
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2982457 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286793728 unmapped: 49324032 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993657 data_alloc: 218103808 data_used: 14663680
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2993657 data_alloc: 218103808 data_used: 14663680
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286801920 unmapped: 49315840 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee6a2000/0x0/0x4ffc00000, data 0x1466d3f/0x15ec000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286810112 unmapped: 49307648 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.177154541s of 20.592260361s, submitted: 12
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee22c000/0x0/0x4ffc00000, data 0x18dcd3f/0x1a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285138944 unmapped: 50978816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028855 data_alloc: 218103808 data_used: 14909440
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 51036160 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 51036160 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee227000/0x0/0x4ffc00000, data 0x18e0d3f/0x1a66000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029661 data_alloc: 218103808 data_used: 14909440
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3029661 data_alloc: 218103808 data_used: 14909440
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ee222000/0x0/0x4ffc00000, data 0x18e6d3f/0x1a6c000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 50962432 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa6f000 session 0x55f4ff98b680
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fdfa85a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fed663c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc3e00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.534084320s of 16.210994720s, submitted: 47
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 49561600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4ffa0cf00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe29860
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fdd5b0e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ff996d20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fcf4ba40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063882 data_alloc: 218103808 data_used: 14909440
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285523968 unmapped: 50593792 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063882 data_alloc: 218103808 data_used: 14909440
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4edeb9000/0x0/0x4ffc00000, data 0x1c4fd3f/0x1dd5000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdd112c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285532160 unmapped: 50585600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083327 data_alloc: 218103808 data_used: 17149952
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3083327 data_alloc: 218103808 data_used: 17149952
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285540352 unmapped: 50577408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285548544 unmapped: 50569216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.206413269s of 21.587003708s, submitted: 31
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl_write.cc:2098] [L] New memtable created with log file: #49. Immutable memtables: 6.
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ede95000/0x0/0x4ffc00000, data 0x1c73d3f/0x1df9000, compress 0x0/0x0/0x0, omap 0x639, meta 0xff6f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3130179 data_alloc: 218103808 data_used: 17604608
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288866304 unmapped: 47251456 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec7db000/0x0/0x4ffc00000, data 0x218dd3f/0x2313000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec758000/0x0/0x4ffc00000, data 0x220fd3f/0x2395000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec758000/0x0/0x4ffc00000, data 0x220fd3f/0x2395000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3148909 data_alloc: 218103808 data_used: 18034688
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289136640 unmapped: 46981120 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289267712 unmapped: 46850048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf95400 session 0x55f4fdd5be00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe017c20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289275904 unmapped: 46841856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 8.451373100s of 10.043084145s, submitted: 88
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3038639 data_alloc: 218103808 data_used: 14909440
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fc6ca3c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed05e000/0x0/0x4ffc00000, data 0x190ad3f/0x1a90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff9e9680
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fecd3e00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289284096 unmapped: 46833664 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fccd1000 session 0x55f4fcf51e00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289292288 unmapped: 46825472 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289300480 unmapped: 46817280 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2967467 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed79c000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289308672 unmapped: 46809088 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 28.709793091s of 29.150884628s, submitted: 12
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297426944 unmapped: 38690816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028468 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289628160 unmapped: 46489600 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fef4a000
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdd5a3c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fef4b2c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe0161e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ff997a40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc6000/0x0/0x4ffc00000, data 0x19a2d91/0x1b28000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3028484 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fd403860
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f5019781e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4ffa0c3c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,0,1])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289636352 unmapped: 46481408 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 7.679104328s of 10.049762726s, submitted: 34
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fecd34a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 46473216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3031495 data_alloc: 218103808 data_used: 13090816
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 289644544 unmapped: 46473216 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 290816000 unmapped: 45301760 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3089575 data_alloc: 218103808 data_used: 21217280
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3089575 data_alloc: 218103808 data_used: 21217280
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecfc4000/0x0/0x4ffc00000, data 0x19a2dc4/0x1b2a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291831808 unmapped: 44285952 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 13.181274414s of 13.181275368s, submitted: 0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294936576 unmapped: 41181184 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3176615 data_alloc: 218103808 data_used: 21319680
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294281216 unmapped: 41836544 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3bf000/0x0/0x4ffc00000, data 0x25a7dc4/0x272f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b9000/0x0/0x4ffc00000, data 0x25addc4/0x2735000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182271 data_alloc: 218103808 data_used: 21987328
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294346752 unmapped: 41771008 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3182271 data_alloc: 218103808 data_used: 21987328
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294354944 unmapped: 41762816 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f502877c00 session 0x55f4fdf8a1e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fd3210e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fe024960
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4ffa01860
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 14.366380692s of 16.474279404s, submitted: 92
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 295936000 unmapped: 40181760 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fcf201e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4ff9e8d20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4ff9970e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4fef4a5a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc32c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200253 data_alloc: 218103808 data_used: 21991424
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21d000/0x0/0x4ffc00000, data 0x2748dd4/0x28d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294379520 unmapped: 41738240 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21d000/0x0/0x4ffc00000, data 0x2748dd4/0x28d1000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3200253 data_alloc: 218103808 data_used: 21991424
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdd11e00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3205953 data_alloc: 218103808 data_used: 22528000
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3205953 data_alloc: 218103808 data_used: 22528000
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294387712 unmapped: 41730048 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec1f9000/0x0/0x4ffc00000, data 0x276cdd4/0x28f5000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294395904 unmapped: 41721856 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 20.447574615s of 20.827314377s, submitted: 10
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 39960576 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3284443 data_alloc: 218103808 data_used: 22757376
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296157184 unmapped: 39960576 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb869000/0x0/0x4ffc00000, data 0x30fbdd4/0x3284000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4eb869000/0x0/0x4ffc00000, data 0x30fbdd4/0x3284000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3291167 data_alloc: 218103808 data_used: 22753280
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296189952 unmapped: 39927808 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff98b2c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ffdba000
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fecc2960
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec3b5000/0x0/0x4ffc00000, data 0x25b1dc4/0x2739000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3191033 data_alloc: 218103808 data_used: 21991424
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.200098038s of 12.547958374s, submitted: 61
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ffa0d4a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fecdb4a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296198144 unmapped: 39919616 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fcccf400 session 0x55f4ffa0c780
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291651584 unmapped: 44466176 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ed456000/0x0/0x4ffc00000, data 0x11cdd2f/0x1352000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2988088 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 44457984 heap: 336117760 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 35.669296265s of 35.925739288s, submitted: 28
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,0,1,2])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297992192 unmapped: 42328064 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fcf4a000
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4ff9965a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fed66f00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ffe28960
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fe0241e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3053060 data_alloc: 218103808 data_used: 13086720
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffe28780
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe294a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fe016b40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fed87a40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3058801 data_alloc: 218103808 data_used: 13697024
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 291659776 unmapped: 48660480 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292102144 unmapped: 48218112 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115441 data_alloc: 218103808 data_used: 21676032
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ecf3a000/0x0/0x4ffc00000, data 0x1a2fd2f/0x1bb4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3115441 data_alloc: 218103808 data_used: 21676032
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292110336 unmapped: 48209920 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 18.079471588s of 18.551073074s, submitted: 14
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294838272 unmapped: 45481984 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294871040 unmapped: 45449216 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec27d000/0x0/0x4ffc00000, data 0x26ecd2f/0x2871000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,0,1])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3220233 data_alloc: 218103808 data_used: 22417408
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294879232 unmapped: 45441024 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec23c000/0x0/0x4ffc00000, data 0x272dd2f/0x28b2000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3227157 data_alloc: 218103808 data_used: 22556672
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 10.110246658s of 11.604579926s, submitted: 73
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226953 data_alloc: 218103808 data_used: 22556672
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec21b000/0x0/0x4ffc00000, data 0x274ed2f/0x28d3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3226953 data_alloc: 218103808 data_used: 22556672
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec218000/0x0/0x4ffc00000, data 0x2751d2f/0x28d6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.151399612s of 12.149919510s, submitted: 4
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3229065 data_alloc: 218103808 data_used: 22589440
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 ms_handle_reset con 0x55f500c4ac00 session 0x55f4fcf51c20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 294928384 unmapped: 45391872 heap: 340320256 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 heartbeat osd_stat(store_statfs(0x4ec207000/0x0/0x4ffc00000, data 0x2762d2f/0x28e7000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 284 handle_osd_map epochs [285,285], i have 284, src has [1,285]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdd5ab40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdfa74a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffe285a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 285 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fcf4be00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 39550976 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 285 heartbeat osd_stat(store_statfs(0x4eba29000/0x0/0x4ffc00000, data 0x2f3d90e/0x30c4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307904512 unmapped: 39550976 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 285 handle_osd_map epochs [285,286], i have 285, src has [1,286]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 286 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4ffe28b40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 286 heartbeat osd_stat(store_statfs(0x4eb1c3000/0x0/0x4ffc00000, data 0x37a490e/0x392b000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,0,0,0,1])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3421390 data_alloc: 234881024 data_used: 35491840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307920896 unmapped: 39534592 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 286 handle_osd_map epochs [286,287], i have 286, src has [1,287]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4fef4af00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307937280 unmapped: 39518208 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b7000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307945472 unmapped: 39510016 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3425380 data_alloc: 234881024 data_used: 35491840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b7000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 307953664 unmapped: 39501824 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.261311531s of 13.822600365s, submitted: 64
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffdbb680
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffdbbe00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdd10d20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ffdbb0e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 287 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4ff0eb2c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305127424 unmapped: 42328064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 287 heartbeat osd_stat(store_statfs(0x4eb1b9000/0x0/0x4ffc00000, data 0x37ac078/0x3935000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305135616 unmapped: 42319872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3420546 data_alloc: 234881024 data_used: 35500032
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305135616 unmapped: 42319872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adadb/0x3938000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305143808 unmapped: 42311680 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305143808 unmapped: 42311680 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdfa90e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4ffdbaf00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3420546 data_alloc: 234881024 data_used: 35500032
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fdd5b4a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4feefa1e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305160192 unmapped: 42295296 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b4000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 305176576 unmapped: 42278912 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467236 data_alloc: 251658240 data_used: 41988096
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3467236 data_alloc: 251658240 data_used: 41988096
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309100544 unmapped: 38354944 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 38346752 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb1b5000/0x0/0x4ffc00000, data 0x37adaeb/0x3939000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309108736 unmapped: 38346752 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 19.982900620s of 20.121829987s, submitted: 17
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309592064 unmapped: 37863424 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3489675 data_alloc: 251658240 data_used: 43794432
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb0fa000/0x0/0x4ffc00000, data 0x3868aeb/0x39f4000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 309788672 unmapped: 37666816 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3500175 data_alloc: 251658240 data_used: 44195840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3500175 data_alloc: 251658240 data_used: 44195840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310706176 unmapped: 36749312 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08c000/0x0/0x4ffc00000, data 0x38d6aeb/0x3a62000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310722560 unmapped: 36732928 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 15.973714828s of 16.078048706s, submitted: 14
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3497747 data_alloc: 251658240 data_used: 44195840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310386688 unmapped: 37068800 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3498067 data_alloc: 251658240 data_used: 44204032
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 310394880 unmapped: 37060608 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 heartbeat osd_stat(store_statfs(0x4eb08b000/0x0/0x4ffc00000, data 0x38d7aeb/0x3a63000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3505747 data_alloc: 251658240 data_used: 45961216
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311271424 unmapped: 36184064 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4fdfa85a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.843559265s of 12.874654770s, submitted: 3
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 311279616 unmapped: 36175872 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 288 handle_osd_map epochs [288,289], i have 288, src has [1,289]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 289 handle_osd_map epochs [289,289], i have 289, src has [1,289]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffbae000 session 0x55f4fedac1e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffb9a000 session 0x55f4ff9e9c20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4fe026f00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 326180864 unmapped: 21274624 heap: 347455488 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 289 ms_handle_reset con 0x55f4fec93000 session 0x55f4ffa012c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 290 ms_handle_reset con 0x55f4ffb9a000 session 0x55f4ffa0c780
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319987712 unmapped: 31670272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 290 handle_osd_map epochs [291,291], i have 290, src has [1,291]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbacc00 session 0x55f4feefa5a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 291 heartbeat osd_stat(store_statfs(0x4e9172000/0x0/0x4ffc00000, data 0x57f0239/0x597a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3768224 data_alloc: 251658240 data_used: 52424704
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320094208 unmapped: 31563776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbae000 session 0x55f4feca94a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 291 heartbeat osd_stat(store_statfs(0x4e916e000/0x0/0x4ffc00000, data 0x57ecdd2/0x597c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4ffbb1000 session 0x55f4feca9a40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 291 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4feca8960
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320143360 unmapped: 31514624 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 291 handle_osd_map epochs [292,292], i have 291, src has [1,292]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fd320d20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320233472 unmapped: 31424512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 320233472 unmapped: 31424512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4ff9961e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffdbab40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 292 ms_handle_reset con 0x55f4ffbba400 session 0x55f4fcf4b860
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3534457 data_alloc: 251658240 data_used: 51228672
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb1a9000/0x0/0x4ffc00000, data 0x37b49af/0x3944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.539574623s of 10.580464363s, submitted: 128
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 292 heartbeat osd_stat(store_statfs(0x4eb1a9000/0x0/0x4ffc00000, data 0x37b49af/0x3944000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 319537152 unmapped: 32120832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 292 handle_osd_map epochs [292,293], i have 292, src has [1,293]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ff358c00 session 0x55f501979e00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 312303616 unmapped: 39354368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 312303616 unmapped: 39354368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4ff997e00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 293 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fe0161e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 293 handle_osd_map epochs [294,294], i have 293, src has [1,294]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3328352 data_alloc: 234881024 data_used: 34619392
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 294 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ffe29a40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed77b000/0x0/0x4ffc00000, data 0x11deff1/0x1370000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3060980 data_alloc: 218103808 data_used: 12230656
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 294 heartbeat osd_stat(store_statfs(0x4ed77b000/0x0/0x4ffc00000, data 0x11deff1/0x1370000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 11.101095200s of 11.366854668s, submitted: 87
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063106 data_alloc: 218103808 data_used: 12230656
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111] #012** DB Stats **#012Uptime(secs): 5400.1 total, 600.0 interval#012Cumulative writes: 35K writes, 142K keys, 35K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.03 MB/s#012Cumulative WAL: 35K writes, 12K syncs, 2.80 writes per sync, written: 0.14 GB, 0.03 MB/s#012Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent#012Interval writes: 1971 writes, 8322 keys, 1971 commit groups, 1.0 writes per commit group, ingest: 9.10 MB, 0.02 MB/s#012Interval WAL: 1971 writes, 761 syncs, 2.59 writes per sync, written: 0.01 GB, 0.02 MB/s#012Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063266 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296615936 unmapped: 55042048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3063266 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: mgrc ms_handle_reset ms_handle_reset con 0x55f4fdd48c00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: mgrc handle_mgr_configure stats_period=5
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 54960128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 37.798728943s of 37.813915253s, submitted: 13
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296697856 unmapped: 54960128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fcf4af00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbb8000 session 0x55f4feca81e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fdfa65a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fecc32c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffa3a400 session 0x55f501979c20
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed055000/0x0/0x4ffc00000, data 0x1906a54/0x1a99000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4fed870e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296771584 unmapped: 54886400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3125660 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296935424 unmapped: 54722560 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 54272000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed054000/0x0/0x4ffc00000, data 0x1906a77/0x1a9a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297385984 unmapped: 54272000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4ffbba400 session 0x55f501979a40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f5049ac800 session 0x55f4ffa0d0e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed054000/0x0/0x4ffc00000, data 0x1906a77/0x1a9a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [0,0,1])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ff997e00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata"} v 0) v1
Oct  7 11:25:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/718127271' entity='client.admin' cmd=[{"prefix": "mgr metadata"}]: dispatch
Oct  7 11:25:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:25:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/676956947' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed77a000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1110f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3070103 data_alloc: 218103808 data_used: 12234752
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 58.547794342s of 59.170566559s, submitted: 55
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296116224 unmapped: 55541760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297222144 unmapped: 54435840 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 54427648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 heartbeat osd_stat(store_statfs(0x4ed36b000/0x0/0x4ffc00000, data 0x11e0a54/0x1373000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297230336 unmapped: 54427648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 295 handle_osd_map epochs [296,296], i have 295, src has [1,296]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3074277 data_alloc: 218103808 data_used: 12242944
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 296 ms_handle_reset con 0x55f4fdf94000 session 0x55f4ffa0de00
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3072989 data_alloc: 218103808 data_used: 12242944
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 296 heartbeat osd_stat(store_statfs(0x4ed367000/0x0/0x4ffc00000, data 0x11e2602/0x1375000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 296 handle_osd_map epochs [296,297], i have 296, src has [1,297]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 12.538488388s of 13.065173149s, submitted: 139
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075771 data_alloc: 218103808 data_used: 12242944
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297271296 unmapped: 54386688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297279488 unmapped: 54378496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297287680 unmapped: 54370304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297295872 unmapped: 54362112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297304064 unmapped: 54353920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297312256 unmapped: 54345728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297320448 unmapped: 54337536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297328640 unmapped: 54329344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12247040
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 297336832 unmapped: 54321152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdfa83c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296493056 unmapped: 55164928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12378112
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296501248 unmapped: 55156736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 heartbeat osd_stat(store_statfs(0x4ed365000/0x0/0x4ffc00000, data 0x11e4065/0x1378000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3075931 data_alloc: 218103808 data_used: 12378112
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 55148544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 296509440 unmapped: 55148544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 113.802856445s of 113.915954590s, submitted: 17
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 297 handle_osd_map epochs [297,298], i have 297, src has [1,298]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 298 ms_handle_reset con 0x55f4ffbbb000 session 0x55f4fe0254a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3004649 data_alloc: 218103808 data_used: 5570560
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 298 heartbeat osd_stat(store_statfs(0x4edb62000/0x0/0x4ffc00000, data 0x9e5c13/0xb7a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 298 handle_osd_map epochs [299,299], i have 298, src has [1,299]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 292626432 unmapped: 59031552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 299 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4ffa01a40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee361000/0x0/0x4ffc00000, data 0x1e77c1/0x37c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee361000/0x0/0x4ffc00000, data 0x1e77c1/0x37c000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935475 data_alloc: 218103808 data_used: 126976
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee35e000/0x0/0x4ffc00000, data 0x1e9240/0x37f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.533 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.512s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 300 heartbeat osd_stat(store_statfs(0x4ee35e000/0x0/0x4ffc00000, data 0x1e9240/0x37f000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2935475 data_alloc: 218103808 data_used: 126976
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 16.293596268s of 16.493015289s, submitted: 58
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 301 heartbeat osd_stat(store_statfs(0x4ee35a000/0x0/0x4ffc00000, data 0x1eacc6/0x383000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 301 handle_osd_map epochs [302,302], i have 301, src has [1,302]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fdf8ba40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945469 data_alloc: 218103808 data_used: 126976
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288169984 unmapped: 63488000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288186368 unmapped: 63471616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288022528 unmapped: 63635456 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288030720 unmapped: 63627264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288038912 unmapped: 63619072 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288047104 unmapped: 63610880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288055296 unmapped: 63602688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288063488 unmapped: 63594496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288063488 unmapped: 63594496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288071680 unmapped: 63586304 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288079872 unmapped: 63578112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2945949 data_alloc: 218103808 data_used: 139264
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 105.418647766s of 105.483337402s, submitted: 31
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 heartbeat osd_stat(store_statfs(0x4ee356000/0x0/0x4ffc00000, data 0x1ec843/0x386000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288088064 unmapped: 63569920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 303 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fdfa8b40
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288104448 unmapped: 63553536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3005561 data_alloc: 218103808 data_used: 147456
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288112640 unmapped: 63545344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 303 handle_osd_map epochs [303,304], i have 303, src has [1,304]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 304 ms_handle_reset con 0x55f5049ac800 session 0x55f4ff996000
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288120832 unmapped: 63537152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288129024 unmapped: 63528960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288137216 unmapped: 63520768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288145408 unmapped: 63512576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3009735 data_alloc: 218103808 data_used: 155648
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 304 heartbeat osd_stat(store_statfs(0x4edb4f000/0x0/0x4ffc00000, data 0x9eff70/0xb8e000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288153600 unmapped: 63504384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 29.854793549s of 29.929613113s, submitted: 9
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288161792 unmapped: 63496192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 304 handle_osd_map epochs [305,305], i have 304, src has [1,305]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 305 ms_handle_reset con 0x55f5039db400 session 0x55f4fcf4a3c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b41/0xb91000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011917 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b1e/0xb90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 305 heartbeat osd_stat(store_statfs(0x4edb4c000/0x0/0x4ffc00000, data 0x9f1b1e/0xb90000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3011917 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 305 handle_osd_map epochs [305,306], i have 305, src has [1,306]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288194560 unmapped: 63463424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288202752 unmapped: 63455232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288210944 unmapped: 63447040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288219136 unmapped: 63438848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288227328 unmapped: 63430656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288243712 unmapped: 63414272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288251904 unmapped: 63406080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288260096 unmapped: 63397888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288284672 unmapped: 63373312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288292864 unmapped: 63365120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288301056 unmapped: 63356928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288309248 unmapped: 63348736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288317440 unmapped: 63340544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288333824 unmapped: 63324160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288342016 unmapped: 63315968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288350208 unmapped: 63307776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288358400 unmapped: 63299584 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288366592 unmapped: 63291392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288374784 unmapped: 63283200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288382976 unmapped: 63275008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288382976 unmapped: 63275008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288391168 unmapped: 63266816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288391168 unmapped: 63266816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288407552 unmapped: 63250432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288415744 unmapped: 63242240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288423936 unmapped: 63234048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288432128 unmapped: 63225856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288440320 unmapped: 63217664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288456704 unmapped: 63201280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288464896 unmapped: 63193088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288473088 unmapped: 63184896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288481280 unmapped: 63176704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288489472 unmapped: 63168512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288497664 unmapped: 63160320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288505856 unmapped: 63152128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288522240 unmapped: 63135744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288530432 unmapped: 63127552 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288538624 unmapped: 63119360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288546816 unmapped: 63111168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288546816 unmapped: 63111168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288555008 unmapped: 63102976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288555008 unmapped: 63102976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288563200 unmapped: 63094784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288571392 unmapped: 63086592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288579584 unmapped: 63078400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288579584 unmapped: 63078400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288587776 unmapped: 63070208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288587776 unmapped: 63070208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288595968 unmapped: 63062016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288604160 unmapped: 63053824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288604160 unmapped: 63053824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288612352 unmapped: 63045632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288620544 unmapped: 63037440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288620544 unmapped: 63037440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288636928 unmapped: 63021056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288645120 unmapped: 63012864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288653312 unmapped: 63004672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288653312 unmapped: 63004672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288661504 unmapped: 62996480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288669696 unmapped: 62988288 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288677888 unmapped: 62980096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288686080 unmapped: 62971904 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288694272 unmapped: 62963712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288702464 unmapped: 62955520 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288710656 unmapped: 62947328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288735232 unmapped: 62922752 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288751616 unmapped: 62906368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288768000 unmapped: 62889984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288776192 unmapped: 62881792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 36K writes, 144K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
Cumulative WAL: 36K writes, 13K syncs, 2.79 writes per sync, written: 0.14 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 677 writes, 1677 keys, 677 commit groups, 1.0 writes per commit group, ingest: 0.81 MB, 0.00 MB/s
Interval WAL: 677 writes, 305 syncs, 2.22 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.2      0.01              0.00         1    0.007       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x55f4fb8e71f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 3.9e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288784384 unmapped: 62873600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288792576 unmapped: 62865408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288800768 unmapped: 62857216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288808960 unmapped: 62849024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288817152 unmapped: 62840832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288825344 unmapped: 62832640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288833536 unmapped: 62824448 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288841728 unmapped: 62816256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288849920 unmapped: 62808064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288849920 unmapped: 62808064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288858112 unmapped: 62799872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288866304 unmapped: 62791680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288874496 unmapped: 62783488 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288882688 unmapped: 62775296 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288890880 unmapped: 62767104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288907264 unmapped: 62750720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4a000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288915456 unmapped: 62742528 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3014699 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 320.509613037s of 321.359313965s, submitted: 63
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288931840 unmapped: 62726144 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 288980992 unmapped: 62676992 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [0,0,0,1])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286449664 unmapped: 65208320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286457856 unmapped: 65200128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286466048 unmapped: 65191936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286474240 unmapped: 65183744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286490624 unmapped: 65167360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286498816 unmapped: 65159168 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286515200 unmapped: 65142784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286523392 unmapped: 65134592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286531584 unmapped: 65126400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286539776 unmapped: 65118208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286556160 unmapped: 65101824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286564352 unmapped: 65093632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286572544 unmapped: 65085440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286588928 unmapped: 65069056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 65052672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286605312 unmapped: 65052672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 heartbeat osd_stat(store_statfs(0x4edb4b000/0x0/0x4ffc00000, data 0x9f3581/0xb93000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3013819 data_alloc: 218103808 data_used: 163840
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 97.341293335s of 97.832099915s, submitted: 90
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 286613504 unmapped: 65044480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 306 handle_osd_map epochs [306,307], i have 306, src has [1,307]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 307 ms_handle_reset con 0x55f4fd23fc00 session 0x55f4fcf210e0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edb49000/0x0/0x4ffc00000, data 0x9f511f/0xb94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 307 heartbeat osd_stat(store_statfs(0x4edb49000/0x0/0x4ffc00000, data 0x9f511f/0xb94000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 307 handle_osd_map epochs [307,308], i have 307, src has [1,308]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 308 ms_handle_reset con 0x55f4fdf94000 session 0x55f4fed552c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2965964 data_alloc: 218103808 data_used: 172032
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284082176 unmapped: 67575808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284082176 unmapped: 67575808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee346000/0x0/0x4ffc00000, data 0x1f6cf0/0x397000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285147136 unmapped: 66510848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x1f876f/0x39a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 309 heartbeat osd_stat(store_statfs(0x4ee343000/0x0/0x4ffc00000, data 0x1f876f/0x39a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285147136 unmapped: 66510848 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285155328 unmapped: 66502656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3055603 data_alloc: 218103808 data_used: 172032
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 309 handle_osd_map epochs [310,310], i have 309, src has [1,310]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.332621574s of 10.034673691s, submitted: 57
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285171712 unmapped: 66486272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ed6d4000/0x0/0x4ffc00000, data 0xe6876f/0x100a000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [0,0,0,2])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 310 ms_handle_reset con 0x55f4ffa3a400 session 0x55f4fed545a0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285171712 unmapped: 66486272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3059713 data_alloc: 218103808 data_used: 184320
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285179904 unmapped: 66478080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 310 handle_osd_map epochs [310,311], i have 310, src has [1,311]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 310 heartbeat osd_stat(store_statfs(0x4ed6d0000/0x0/0x4ffc00000, data 0xe6a308/0x100d000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285188096 unmapped: 66469888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285196288 unmapped: 66461696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285204480 unmapped: 66453504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285212672 unmapped: 66445312 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285229056 unmapped: 66428928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285237248 unmapped: 66420736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285245440 unmapped: 66412544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285261824 unmapped: 66396160 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3062847 data_alloc: 218103808 data_used: 188416
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 50.395416260s of 50.752197266s, submitted: 15
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 heartbeat osd_stat(store_statfs(0x4ed6cd000/0x0/0x4ffc00000, data 0xe6bd6b/0x1010000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285270016 unmapped: 66387968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ed6ca000/0x0/0x4ffc00000, data 0xe6d93c/0x1013000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 312 ms_handle_reset con 0x55f5049ac800 session 0x55f4fd3212c0
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981229 data_alloc: 218103808 data_used: 196608
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ee33b000/0x0/0x4ffc00000, data 0x1fd93c/0x3a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285294592 unmapped: 66363392 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2981229 data_alloc: 218103808 data_used: 196608
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285302784 unmapped: 66355200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 312 heartbeat osd_stat(store_statfs(0x4ee33b000/0x0/0x4ffc00000, data 0x1fd93c/0x3a3000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 9.898818970s of 11.465106010s, submitted: 39
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285310976 unmapped: 66347008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285319168 unmapped: 66338816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285327360 unmapped: 66330624 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285335552 unmapped: 66322432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285343744 unmapped: 66314240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285351936 unmapped: 66306048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285368320 unmapped: 66289664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285376512 unmapped: 66281472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285384704 unmapped: 66273280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285392896 unmapped: 66265088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285417472 unmapped: 66240512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285425664 unmapped: 66232320 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285433856 unmapped: 66224128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285442048 unmapped: 66215936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285442048 unmapped: 66215936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285515776 unmapped: 66142208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'config show' '{prefix=config show}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284499968 unmapped: 67158016 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'log dump' '{prefix=log dump}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'perf dump' '{prefix=perf dump}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'perf schema' '{prefix=perf schema}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 67657728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 67657728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 67657728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284000256 unmapped: 67657728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284008448 unmapped: 67649536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284008448 unmapped: 67649536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284008448 unmapped: 67649536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284016640 unmapped: 67641344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284024832 unmapped: 67633152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284041216 unmapped: 67616768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284049408 unmapped: 67608576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284049408 unmapped: 67608576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284057600 unmapped: 67600384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284057600 unmapped: 67600384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284057600 unmapped: 67600384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284057600 unmapped: 67600384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284065792 unmapped: 67592192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284073984 unmapped: 67584000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284090368 unmapped: 67567616 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284098560 unmapped: 67559424 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284106752 unmapped: 67551232 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284114944 unmapped: 67543040 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284131328 unmapped: 67526656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284131328 unmapped: 67526656 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284139520 unmapped: 67518464 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284147712 unmapped: 67510272 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284155904 unmapped: 67502080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284155904 unmapped: 67502080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284155904 unmapped: 67502080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284155904 unmapped: 67502080 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284164096 unmapped: 67493888 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284172288 unmapped: 67485696 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 67477504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 67477504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 67477504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 67477504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284180480 unmapped: 67477504 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284196864 unmapped: 67461120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284196864 unmapped: 67461120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284196864 unmapped: 67461120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284196864 unmapped: 67461120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284196864 unmapped: 67461120 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 67452928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 67452928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 67452928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 67452928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284205056 unmapped: 67452928 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284213248 unmapped: 67444736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284213248 unmapped: 67444736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284213248 unmapped: 67444736 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284221440 unmapped: 67436544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284221440 unmapped: 67436544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284221440 unmapped: 67436544 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284229632 unmapped: 67428352 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284229632 unmapped: 67428352 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284229632 unmapped: 67428352 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284229632 unmapped: 67428352 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284229632 unmapped: 67428352 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284246016 unmapped: 67411968 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284254208 unmapped: 67403776 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 67379200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 67379200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 67379200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284278784 unmapped: 67379200 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284286976 unmapped: 67371008 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284295168 unmapped: 67362816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284295168 unmapped: 67362816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284295168 unmapped: 67362816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284295168 unmapped: 67362816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284295168 unmapped: 67362816 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284303360 unmapped: 67354624 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284311552 unmapped: 67346432 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284319744 unmapped: 67338240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284319744 unmapped: 67338240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284319744 unmapped: 67338240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284319744 unmapped: 67338240 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284327936 unmapped: 67330048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284327936 unmapped: 67330048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284327936 unmapped: 67330048 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284336128 unmapped: 67321856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284336128 unmapped: 67321856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284344320 unmapped: 67313664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284344320 unmapped: 67313664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284344320 unmapped: 67313664 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284352512 unmapped: 67305472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284352512 unmapped: 67305472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284352512 unmapped: 67305472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284352512 unmapped: 67305472 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284360704 unmapped: 67297280 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284368896 unmapped: 67289088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284368896 unmapped: 67289088 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284377088 unmapped: 67280896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284385280 unmapped: 67272704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284385280 unmapped: 67272704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284385280 unmapped: 67272704 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284393472 unmapped: 67264512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284393472 unmapped: 67264512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284393472 unmapped: 67264512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284393472 unmapped: 67264512 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284409856 unmapped: 67248128 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284418048 unmapped: 67239936 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284426240 unmapped: 67231744 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284442624 unmapped: 67215360 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 67198976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 67198976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 67198976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284459008 unmapped: 67198976 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284467200 unmapped: 67190784 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 67182592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 67182592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 67182592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 67182592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284475392 unmapped: 67182592 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284483584 unmapped: 67174400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284483584 unmapped: 67174400 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284491776 unmapped: 67166208 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284508160 unmapped: 67149824 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284516352 unmapped: 67141632 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284524544 unmapped: 67133440 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284540928 unmapped: 67117056 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284549120 unmapped: 67108864 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284557312 unmapped: 67100672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284557312 unmapped: 67100672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284557312 unmapped: 67100672 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 67092480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 67092480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 67092480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 67092480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284565504 unmapped: 67092480 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284573696 unmapped: 67084288 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284573696 unmapped: 67084288 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284573696 unmapped: 67084288 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 67076096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 67076096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284581888 unmapped: 67076096 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284590080 unmapped: 67067904 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284590080 unmapped: 67067904 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284598272 unmapped: 67059712 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284606464 unmapped: 67051520 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284606464 unmapped: 67051520 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284606464 unmapped: 67051520 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284614656 unmapped: 67043328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284614656 unmapped: 67043328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284614656 unmapped: 67043328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284614656 unmapped: 67043328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284614656 unmapped: 67043328 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284622848 unmapped: 67035136 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284622848 unmapped: 67035136 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284622848 unmapped: 67035136 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284622848 unmapped: 67035136 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284622848 unmapped: 67035136 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284631040 unmapped: 67026944 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284631040 unmapped: 67026944 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284631040 unmapped: 67026944 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284647424 unmapped: 67010560 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284647424 unmapped: 67010560 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6600.1 total, 600.0 interval
Cumulative writes: 36K writes, 145K keys, 36K commit groups, 1.0 writes per commit group, ingest: 0.14 GB, 0.02 MB/s
Cumulative WAL: 36K writes, 13K syncs, 2.78 writes per sync, written: 0.14 GB, 0.02 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 385 writes, 982 keys, 385 commit groups, 1.0 writes per commit group, ingest: 0.40 MB, 0.00 MB/s
Interval WAL: 385 writes, 171 syncs, 2.25 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284655616 unmapped: 67002368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284655616 unmapped: 67002368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284655616 unmapped: 67002368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284655616 unmapped: 67002368 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284663808 unmapped: 66994176 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284663808 unmapped: 66994176 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284663808 unmapped: 66994176 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284663808 unmapped: 66994176 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284672000 unmapped: 66985984 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284680192 unmapped: 66977792 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284688384 unmapped: 66969600 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284696576 unmapped: 66961408 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284704768 unmapped: 66953216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284704768 unmapped: 66953216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284704768 unmapped: 66953216 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284712960 unmapped: 66945024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284712960 unmapped: 66945024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284712960 unmapped: 66945024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284712960 unmapped: 66945024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284712960 unmapped: 66945024 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284721152 unmapped: 66936832 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284729344 unmapped: 66928640 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284737536 unmapped: 66920448 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284737536 unmapped: 66920448 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284745728 unmapped: 66912256 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284753920 unmapped: 66904064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284753920 unmapped: 66904064 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284762112 unmapped: 66895872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284762112 unmapped: 66895872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284762112 unmapped: 66895872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284762112 unmapped: 66895872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284762112 unmapped: 66895872 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284770304 unmapped: 66887680 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee337000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284794880 unmapped: 66863104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284794880 unmapped: 66863104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284794880 unmapped: 66863104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284794880 unmapped: 66863104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2985403 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284794880 unmapped: 66863104 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore(/var/lib/ceph/osd/ceph-2) _kv_sync_thread utilization: idle 429.513305664s of 430.105316162s, submitted: 13
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284811264 unmapped: 66846720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284811264 unmapped: 66846720 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284827648 unmapped: 66830336 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284852224 unmapped: 66805760 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284860416 unmapped: 66797568 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284868608 unmapped: 66789376 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284876800 unmapped: 66781184 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284884992 unmapped: 66772992 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284893184 unmapped: 66764800 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284901376 unmapped: 66756608 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284901376 unmapped: 66756608 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284901376 unmapped: 66756608 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284917760 unmapped: 66740224 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284925952 unmapped: 66732032 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284925952 unmapped: 66732032 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284925952 unmapped: 66732032 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284925952 unmapped: 66732032 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284934144 unmapped: 66723840 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284934144 unmapped: 66723840 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284942336 unmapped: 66715648 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284950528 unmapped: 66707456 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3739: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284958720 unmapped: 66699264 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284975104 unmapped: 66682880 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 66674688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 66674688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284983296 unmapped: 66674688 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 66666496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 66666496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 66666496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 66666496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 284991488 unmapped: 66666496 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 66650112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 66650112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 66650112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285007872 unmapped: 66650112 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285016064 unmapped: 66641920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285016064 unmapped: 66641920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285016064 unmapped: 66641920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285016064 unmapped: 66641920 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285024256 unmapped: 66633728 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285032448 unmapped: 66625536 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285040640 unmapped: 66617344 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285048832 unmapped: 66609152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285048832 unmapped: 66609152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285048832 unmapped: 66609152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285048832 unmapped: 66609152 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 66600960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 66600960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285057024 unmapped: 66600960 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285065216 unmapped: 66592768 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285073408 unmapped: 66584576 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 66576384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 66576384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285081600 unmapped: 66576384 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285089792 unmapped: 66568192 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285097984 unmapped: 66560000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285097984 unmapped: 66560000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285097984 unmapped: 66560000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285097984 unmapped: 66560000 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285106176 unmapped: 66551808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285106176 unmapped: 66551808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285106176 unmapped: 66551808 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: osd.2 313 heartbeat osd_stat(store_statfs(0x4ee338000/0x0/0x4ffc00000, data 0x1ff39f/0x3a6000, compress 0x0/0x0/0x0, omap 0x639, meta 0x1151f9c7), peers [0,1] op hist [])
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'config show' '{prefix=config show}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}'
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285360128 unmapped: 66297856 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: bluestore.MempoolThread(0x55f4fb9c5b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 2984523 data_alloc: 218103808 data_used: 204800
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: prioritycache tune_memory target: 4294967296 mapped: 285401088 unmapped: 66256896 heap: 351657984 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:55 np0005473739 ceph-osd[90092]: do_command 'log dump' '{prefix=log dump}'
Oct  7 11:25:55 np0005473739 rsyslogd[1004]: imjournal: journal files changed, reloading...  [v8.2506.0-2.el9 try https://www.rsyslog.com/e/0 ]
Oct  7 11:25:55 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23447 -' entity='client.admin' cmd=[{"prefix": "orch ps", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.862 2 WARNING nova.virt.libvirt.driver [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.864 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=3486MB free_disk=59.988277435302734GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.864 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.864 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.932 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.932 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=59GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Oct  7 11:25:55 np0005473739 nova_compute[259550]: 2025-10-07 15:25:55.952 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Running cmd (subprocess): ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Oct  7 11:25:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls"} v 0) v1
Oct  7 11:25:55 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2518768742' entity='client.admin' cmd=[{"prefix": "mgr module ls"}]: dispatch
Oct  7 11:25:55 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader).osd e313 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 343932928 full_alloc: 348127232 kv_alloc: 318767104
Oct  7 11:25:56 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23451 -' entity='client.admin' cmd=[{"prefix": "orch status", "detail": true, "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services"} v 0) v1
Oct  7 11:25:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/667570897' entity='client.admin' cmd=[{"prefix": "mgr services"}]: dispatch
Oct  7 11:25:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "df", "format": "json"} v 0) v1
Oct  7 11:25:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/4137672803' entity='client.openstack' cmd=[{"prefix": "df", "format": "json"}]: dispatch
Oct  7 11:25:56 np0005473739 nova_compute[259550]: 2025-10-07 15:25:56.425 2 DEBUG oslo_concurrency.processutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] CMD "ceph df --format=json --id openstack --conf /etc/ceph/ceph.conf" returned: 0 in 0.473s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Oct  7 11:25:56 np0005473739 nova_compute[259550]: 2025-10-07 15:25:56.429 2 DEBUG nova.compute.provider_tree [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed in ProviderTree for provider: cc5ee907-7908-4ad9-99df-64935eda6bff update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Oct  7 11:25:56 np0005473739 nova_compute[259550]: 2025-10-07 15:25:56.456 2 DEBUG nova.scheduler.client.report [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Inventory has not changed for provider cc5ee907-7908-4ad9-99df-64935eda6bff based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 59, 'reserved': 1, 'min_unit': 1, 'max_unit': 59, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Oct  7 11:25:56 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23457 -' entity='client.admin' cmd=[{"prefix": "orch upgrade status", "target": ["mon-mgr", ""]}]: dispatch
Oct  7 11:25:56 np0005473739 nova_compute[259550]: 2025-10-07 15:25:56.458 2 DEBUG nova.compute.resource_tracker [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Oct  7 11:25:56 np0005473739 nova_compute[259550]: 2025-10-07 15:25:56.458 2 DEBUG oslo_concurrency.lockutils [None req-96b881e6-9e83-4b8b-9dd3-c4b0f62ccd59 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.594s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Oct  7 11:25:56 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr versions"} v 0) v1
Oct  7 11:25:56 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/504522829' entity='client.admin' cmd=[{"prefix": "mgr versions"}]: dispatch
Oct  7 11:25:56 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23461 -' entity='client.admin' cmd=[{"prefix": "balancer eval", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  7 11:25:57 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23465 -' entity='client.admin' cmd=[{"prefix": "balancer status", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  7 11:25:57 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mon stat"} v 0) v1
Oct  7 11:25:57 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2003225363' entity='client.admin' cmd=[{"prefix": "mon stat"}]: dispatch
Oct  7 11:25:57 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3740: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:57 np0005473739 ceph-mgr[74587]: log_channel(audit) log [DBG] : from='client.23471 -' entity='client.admin' cmd=[{"prefix": "healthcheck history ls", "target": ["mon-mgr", ""], "format": "json-pretty"}]: dispatch
Oct  7 11:25:57 np0005473739 ceph-mgr[74587]: mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  7 11:25:57 np0005473739 ceph-82044f27-a8da-5b2a-a297-ff6afc620e1f-mgr-compute-0-kdyrcd[74583]: 2025-10-07T15:25:57.949+0000 7fbd06de6640 -1 mgr.server reply reply (95) Operation not supported Module 'prometheus' is not enabled/loaded (required by command 'healthcheck history ls'): use `ceph mgr module enable prometheus` to enable it
Oct  7 11:25:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "node ls"} v 0) v1
Oct  7 11:25:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1446855042' entity='client.admin' cmd=[{"prefix": "node ls"}]: dispatch
Oct  7 11:25:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "log last", "channel": "cephadm", "format": "json-pretty"} v 0) v1
Oct  7 11:25:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1534847316' entity='client.admin' cmd=[{"prefix": "log last", "channel": "cephadm", "format": "json-pretty"}]: dispatch
Oct  7 11:25:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush class ls"} v 0) v1
Oct  7 11:25:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3059605516' entity='client.admin' cmd=[{"prefix": "osd crush class ls"}]: dispatch
Oct  7 11:25:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr dump", "format": "json-pretty"} v 0) v1
Oct  7 11:25:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1660064485' entity='client.admin' cmd=[{"prefix": "mgr dump", "format": "json-pretty"}]: dispatch
Oct  7 11:25:58 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush dump"} v 0) v1
Oct  7 11:25:58 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1438499408' entity='client.admin' cmd=[{"prefix": "osd crush dump"}]: dispatch
Oct  7 11:25:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush rule ls"} v 0) v1
Oct  7 11:25:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2656692931' entity='client.admin' cmd=[{"prefix": "osd crush rule ls"}]: dispatch
Oct  7 11:25:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr metadata", "format": "json-pretty"} v 0) v1
Oct  7 11:25:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/3977948967' entity='client.admin' cmd=[{"prefix": "mgr metadata", "format": "json-pretty"}]: dispatch
Oct  7 11:25:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush show-tunables"} v 0) v1
Oct  7 11:25:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2953768568' entity='client.admin' cmd=[{"prefix": "osd crush show-tunables"}]: dispatch
Oct  7 11:25:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr module ls", "format": "json-pretty"} v 0) v1
Oct  7 11:25:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1266719636' entity='client.admin' cmd=[{"prefix": "mgr module ls", "format": "json-pretty"}]: dispatch
Oct  7 11:25:59 np0005473739 ceph-mgr[74587]: log_channel(cluster) log [DBG] : pgmap v3741: 305 pgs: 305 active+clean; 457 KiB data, 1001 MiB used, 59 GiB / 60 GiB avail
Oct  7 11:25:59 np0005473739 nova_compute[259550]: 2025-10-07 15:25:59.731 2 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 27 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7fae000/0x0/0x4ffc00000, data 0x2331813/0x24c0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 348790784 unmapped: 62242816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3666280 data_alloc: 234881024 data_used: 21540864
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354885632 unmapped: 56147968 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 5.728628635s of 10.006806374s, submitted: 121
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 354697216 unmapped: 56336384 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355778560 unmapped: 55255040 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7493000/0x0/0x4ffc00000, data 0x2e4b813/0x2fda000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x1578f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355958784 unmapped: 55074816 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3768784 data_alloc: 234881024 data_used: 23756800
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3769104 data_alloc: 234881024 data_used: 23764992
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706c000/0x0/0x4ffc00000, data 0x2e63813/0x2ff2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.462882042s of 13.117553711s, submitted: 114
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356089856 unmapped: 54943744 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833585 data_alloc: 234881024 data_used: 23764992
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x5569354fdc20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x55693489f4a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569344cdc20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569354fc780
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x556933585680
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356106240 unmapped: 54927360 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3821841 data_alloc: 234881024 data_used: 23764992
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569354f52c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569334545a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556933583a40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569334543c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69d6000/0x0/0x4ffc00000, data 0x34f9813/0x3688000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.283301353s of 10.441823006s, submitted: 21
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3826034 data_alloc: 234881024 data_used: 23764992
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 356114432 unmapped: 54919168 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359202816 unmapped: 51830784 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3875606 data_alloc: 234881024 data_used: 29491200
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b2000/0x0/0x4ffc00000, data 0x351d813/0x36ac000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3876746 data_alloc: 234881024 data_used: 29483008
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e69b0000/0x0/0x4ffc00000, data 0x351e813/0x36ad000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359268352 unmapped: 51765248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.968862534s of 12.149744034s, submitted: 5
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359915520 unmapped: 51118080 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360153088 unmapped: 50880512 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3958876 data_alloc: 234881024 data_used: 30003200
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60ea000/0x0/0x4ffc00000, data 0x3dd7813/0x3f66000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3954340 data_alloc: 234881024 data_used: 30003200
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361291776 unmapped: 49741824 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x55693489e960
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569354aa000 session 0x5569355652c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361299968 unmapped: 49733632 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.944513321s of 10.002432823s, submitted: 77
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e60f6000/0x0/0x4ffc00000, data 0x3dd9813/0x3f68000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361308160 unmapped: 49725440 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556935565c20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3783798 data_alloc: 234881024 data_used: 22560768
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361316352 unmapped: 49717248 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e706b000/0x0/0x4ffc00000, data 0x2e64813/0x2ff3000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361324544 unmapped: 49709056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348954a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x55693345f860
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3781038 data_alloc: 234881024 data_used: 22560768
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 48627712 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355385344 unmapped: 55648256 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 7.576909542s of 10.082017899s, submitted: 108
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934894780
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355401728 unmapped: 55631872 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355409920 unmapped: 55623680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355418112 unmapped: 55615488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 55607296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3435131 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933c71c00 session 0x55693416c3c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693488da40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569333e12c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556933461c20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355426304 unmapped: 55607296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 28.773714066s of 28.863956451s, submitted: 9
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8918000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,1,0,1,7,6])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556933454f00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b5ea400 session 0x5569341243c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932b5ed20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934898780
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569333e1c20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556932ede960
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3464974 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693bacec00 session 0x55693345e5a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a74000/0x0/0x4ffc00000, data 0x145d7e0/0x15ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355450880 unmapped: 55582720 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556934894d20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569341c2960
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355459072 unmapped: 55574528 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355467264 unmapped: 55566336 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a73000/0x0/0x4ffc00000, data 0x145d7f0/0x15eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3474172 data_alloc: 218103808 data_used: 6074368
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.313346863s of 19.983018875s, submitted: 34
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 355475456 unmapped: 55558144 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 53174272 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e805b000/0x0/0x4ffc00000, data 0x1e6f7f0/0x1ffd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 53084160 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8018000/0x0/0x4ffc00000, data 0x1ea97f0/0x2037000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3565560 data_alloc: 218103808 data_used: 6414336
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8018000/0x0/0x4ffc00000, data 0x1ea97f0/0x2037000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359071744 unmapped: 51961856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558532 data_alloc: 218103808 data_used: 6418432
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8025000/0x0/0x4ffc00000, data 0x1eab7f0/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8025000/0x0/0x4ffc00000, data 0x1eab7f0/0x2039000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558532 data_alloc: 218103808 data_used: 6418432
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 13.445440292s of 14.635492325s, submitted: 114
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358916096 unmapped: 52117504 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558760 data_alloc: 218103808 data_used: 6418432
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558760 data_alloc: 218103808 data_used: 6418432
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8024000/0x0/0x4ffc00000, data 0x1eac7f0/0x203a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.461366653s of 14.488227844s, submitted: 1
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556932eff2c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348985a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358907904 unmapped: 52125696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558336 data_alloc: 218103808 data_used: 6418432
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c53c00 session 0x556935cb34a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358924288 unmapped: 52109312 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358932480 unmapped: 52101120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358940672 unmapped: 52092928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358948864 unmapped: 52084736 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358957056 unmapped: 52076544 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 52068352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358965248 unmapped: 52068352 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3447347 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8a63000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 25.447141647s of 25.612014771s, submitted: 32
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 370622464 unmapped: 40411136 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556934125a40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569344d05a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556933455c20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556933c4f0e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935e52000 session 0x5569333e0d20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815c000/0x0/0x4ffc00000, data 0x1d7776e/0x1f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932ea92c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556935cb34a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3533416 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815c000/0x0/0x4ffc00000, data 0x1d7776e/0x1f02000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569348985a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556932eff2c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359301120 unmapped: 51732480 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3616138 data_alloc: 234881024 data_used: 16084992
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620138 data_alloc: 234881024 data_used: 16707584
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 49897472 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3620138 data_alloc: 234881024 data_used: 16707584
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.948329926s of 20.243396759s, submitted: 34
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e815b000/0x0/0x4ffc00000, data 0x1d77791/0x1f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,1])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365010944 unmapped: 46022656 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368492544 unmapped: 42541056 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7583000/0x0/0x4ffc00000, data 0x294f791/0x2adb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3721858 data_alloc: 234881024 data_used: 18522112
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e757d000/0x0/0x4ffc00000, data 0x2955791/0x2ae1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730482 data_alloc: 234881024 data_used: 18755584
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3730482 data_alloc: 234881024 data_used: 18755584
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369139712 unmapped: 41893888 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369147904 unmapped: 41885696 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.108821869s of 16.882175446s, submitted: 108
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556933c4e000
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774326 data_alloc: 234881024 data_used: 18755584
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569341c30e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369311744 unmapped: 41721856 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693616a1e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556932b5fa40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3774326 data_alloc: 234881024 data_used: 18755584
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569341e4000
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369319936 unmapped: 41713664 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 369942528 unmapped: 41091072 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3818646 data_alloc: 234881024 data_used: 24911872
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3818646 data_alloc: 234881024 data_used: 24911872
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e6f8e000/0x0/0x4ffc00000, data 0x2f44791/0x30d0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.362392426s of 19.756223679s, submitted: 9
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371032064 unmapped: 40001536 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 370868224 unmapped: 40165376 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372105216 unmapped: 38928384 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888978 data_alloc: 234881024 data_used: 26075136
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68c1000/0x0/0x4ffc00000, data 0x3611791/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68c1000/0x0/0x4ffc00000, data 0x3611791/0x379d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372121600 unmapped: 38912000 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372129792 unmapped: 38903808 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3888978 data_alloc: 234881024 data_used: 26075136
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 372129792 unmapped: 38903808 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693345ef00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e68be000/0x0/0x4ffc00000, data 0x3614791/0x37a0000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371220480 unmapped: 39813120 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.185196877s of 10.101745605s, submitted: 92
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693616be00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3737914 data_alloc: 234881024 data_used: 18817024
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7573000/0x0/0x4ffc00000, data 0x295f791/0x2aeb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569333e1c20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b762800 session 0x556934808960
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 371228672 unmapped: 39804928 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569354f4b40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3469696 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569334550e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932efd4a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569348990e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x55693488d0e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.007173538s of 30.416368484s, submitted: 66
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359505920 unmapped: 51527680 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c94000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,4])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569354f5a40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x55693b762800 session 0x556933454f00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569353d3860
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3510209 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556935cb25a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x5569348954a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x55693488cb40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3509809 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569361adc20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693488c3c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,1])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 8.333779335s of 10.194737434s, submitted: 17
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569356eb4a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359514112 unmapped: 51519488 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3511570 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551250 data_alloc: 218103808 data_used: 10608640
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359522304 unmapped: 51511296 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551250 data_alloc: 218103808 data_used: 10608640
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.512318611s of 13.137044907s, submitted: 4
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8741000/0x0/0x4ffc00000, data 0x179276e/0x191d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360407040 unmapped: 50626560 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3572334 data_alloc: 218103808 data_used: 10981376
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8525000/0x0/0x4ffc00000, data 0x19ae76e/0x1b39000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580412 data_alloc: 218103808 data_used: 11112448
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360415232 unmapped: 50618368 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580412 data_alloc: 218103808 data_used: 11112448
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x55693345eb40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569335854a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x556933583680
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569344cc3c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360423424 unmapped: 50610176 heap: 411033600 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 15.535245895s of 16.586713791s, submitted: 25
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,1,0,6])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569362ec000
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556935cb32c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556932ea9860
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360677376 unmapped: 58228736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x556934895860
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935c56000 session 0x5569354f4780
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360685568 unmapped: 58220544 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3685997 data_alloc: 218103808 data_used: 11112448
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x556932efe000
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x556933583e00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569353d23c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x5569341c25a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3686790 data_alloc: 218103808 data_used: 11112448
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360693760 unmapped: 58212352 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360701952 unmapped: 58204160 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788494 data_alloc: 234881024 data_used: 25300992
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3788494 data_alloc: 234881024 data_used: 25300992
Oct  7 11:25:59 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "osd crush tree", "show_shadow": true} v 0) v1
Oct  7 11:25:59 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/1980792972' entity='client.admin' cmd=[{"prefix": "osd crush tree", "show_shadow": true}]: dispatch
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7759000/0x0/0x4ffc00000, data 0x277a76e/0x2905000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 363364352 unmapped: 55541760 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 19.612783432s of 20.685228348s, submitted: 31
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366215168 unmapped: 52690944 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365830144 unmapped: 53075968 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,2,0,1])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3846534 data_alloc: 234881024 data_used: 25468928
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e7153000/0x0/0x4ffc00000, data 0x2d7876e/0x2f03000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e70d5000/0x0/0x4ffc00000, data 0x2df676e/0x2f81000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366321664 unmapped: 52584448 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365658112 unmapped: 53248000 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3841770 data_alloc: 234881024 data_used: 25468928
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556934125860
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693489f680
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 365658112 unmapped: 53248000 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x55693442ed20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8519000/0x0/0x4ffc00000, data 0x19ba76e/0x1b45000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362307584 unmapped: 56598528 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3591774 data_alloc: 218103808 data_used: 11173888
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.757778168s of 12.639021873s, submitted: 116
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556932876f00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556935b01400 session 0x556934898f00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 59170816 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x55693489fa40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e8c95000/0x0/0x4ffc00000, data 0x123e76e/0x13c9000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3489852 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 35.288772583s of 35.765110016s, submitted: 48
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 59162624 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b6f400 session 0x5569354fc1e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x556934125a40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x5569343fc800 session 0x556935cb34a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933bdb000 session 0x5569348990e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933526400 session 0x5569354f4b40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3521521 data_alloc: 218103808 data_used: 5021696
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 57901056 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549973 data_alloc: 218103808 data_used: 8990720
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3549973 data_alloc: 218103808 data_used: 8990720
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e88ad000/0x0/0x4ffc00000, data 0x162676e/0x17b1000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 57892864 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 18.138957977s of 18.666332245s, submitted: 15
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3557359 data_alloc: 218103808 data_used: 8982528
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361693184 unmapped: 57212928 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880d000/0x0/0x4ffc00000, data 0x16be76e/0x1849000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 57204736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361701376 unmapped: 57204736 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559541 data_alloc: 218103808 data_used: 8982528
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 57876480 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559541 data_alloc: 218103808 data_used: 8982528
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.330938339s of 10.883395195s, submitted: 16
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,0,0,0,0,0,0,0,0,1])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 57868288 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559557 data_alloc: 218103808 data_used: 8982528
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361046016 unmapped: 57860096 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559557 data_alloc: 218103808 data_used: 8982528
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880f000/0x0/0x4ffc00000, data 0x16c476e/0x184f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361054208 unmapped: 57851904 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.153526306s of 12.898469925s, submitted: 1
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361062400 unmapped: 57843712 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 heartbeat osd_stat(store_statfs(0x4e880d000/0x0/0x4ffc00000, data 0x16c576e/0x1850000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 57835520 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3559113 data_alloc: 218103808 data_used: 8982528
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 ms_handle_reset con 0x556933b71c00 session 0x5569362ed860
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 57835520 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 handle_osd_map epochs [284,285], i have 284, src has [1,285]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 284 handle_osd_map epochs [285,285], i have 285, src has [1,285]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x5569343fc800 session 0x556932efeb40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c56000 session 0x5569361acd20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c4f000 session 0x5569335852c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 57819136 heap: 418906112 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 285 ms_handle_reset con 0x556935c4f000 session 0x5569344ce3c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364568576 unmapped: 65880064 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 285 handle_osd_map epochs [286,286], i have 285, src has [1,286]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 286 ms_handle_reset con 0x556933526400 session 0x5569362ecf00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364593152 unmapped: 65855488 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 286 heartbeat osd_stat(store_statfs(0x4e7113000/0x0/0x4ffc00000, data 0x2dbbecb/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 286 heartbeat osd_stat(store_statfs(0x4e7113000/0x0/0x4ffc00000, data 0x2dbbecb/0x2f4a000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 286 handle_osd_map epochs [287,287], i have 286, src has [1,287]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x5569341c3a40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762301 data_alloc: 218103808 data_used: 9789440
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x55693489ef00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c56000 session 0x556932edf680
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933526400 session 0x55693345ef00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364601344 unmapped: 65847296 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710e000/0x0/0x4ffc00000, data 0x2dbdac6/0x2f4e000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3762301 data_alloc: 218103808 data_used: 9789440
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x5569333574a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x556934894d20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c4f000 session 0x556932edf0e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x55693f800800 session 0x55693489e5a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.815621376s of 13.853458405s, submitted: 48
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71c00 session 0x556932b5f860
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933526400 session 0x556933583860
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x5569343fc800 session 0x556935564d20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556935c4f000 session 0x5569344d0960
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 ms_handle_reset con 0x556933b71400 session 0x5569341c3c20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 364609536 unmapped: 65839104 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 heartbeat osd_stat(store_statfs(0x4e710d000/0x0/0x4ffc00000, data 0x2dbdb28/0x2f4f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 handle_osd_map epochs [288,288], i have 287, src has [1,288]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 287 handle_osd_map epochs [288,288], i have 288, src has [1,288]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3755695 data_alloc: 218103808 data_used: 9789440
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf58b/0x2f52000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360767488 unmapped: 69681152 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 69664768 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360783872 unmapped: 69664768 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 ms_handle_reset con 0x556933526400 session 0x5569354f5a40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360792064 unmapped: 69656576 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3758141 data_alloc: 218103808 data_used: 9789440
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360808448 unmapped: 69640192 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 67837952 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833473 data_alloc: 234881024 data_used: 20254720
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 17.241428375s of 17.537368774s, submitted: 23
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3833473 data_alloc: 234881024 data_used: 20254720
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362995712 unmapped: 67452928 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866312 data_alloc: 234881024 data_used: 26378240
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367239168 unmapped: 63209472 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367247360 unmapped: 63201280 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3866344 data_alloc: 234881024 data_used: 26378240
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367255552 unmapped: 63193088 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 heartbeat osd_stat(store_statfs(0x4e710b000/0x0/0x4ffc00000, data 0x2dbf5ae/0x2f53000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3872264 data_alloc: 234881024 data_used: 27168768
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 ms_handle_reset con 0x556935c4f000 session 0x5569353d25a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367550464 unmapped: 62898176 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 288 handle_osd_map epochs [289,289], i have 288, src has [1,289]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 32.446823120s of 32.526866913s, submitted: 8
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556934891400 session 0x5569348945a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 367566848 unmapped: 62881792 heap: 430448640 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556937127400 session 0x556932876780
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x55693f800000 session 0x55693416c3c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556933526400 session 0x5569353d3a40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 289 ms_handle_reset con 0x556934891400 session 0x556933c4fa40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 289 handle_osd_map epochs [290,290], i have 289, src has [1,290]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374775808 unmapped: 59875328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 290 handle_osd_map epochs [290,291], i have 290, src has [1,291]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556935c4f000 session 0x556932efd2c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374390784 unmapped: 60260352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 291 heartbeat osd_stat(store_statfs(0x4e45c9000/0x0/0x4ffc00000, data 0x58fb8f7/0x5a94000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556937127400 session 0x556935cb34a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374398976 unmapped: 60252160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556933526800 session 0x556934125860
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 4223401 data_alloc: 251658240 data_used: 33873920
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 291 ms_handle_reset con 0x556933526400 session 0x55693489f0e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 291 handle_osd_map epochs [291,292], i have 291, src has [1,292]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374439936 unmapped: 60211200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556934891400 session 0x5569354f4780
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 292 heartbeat osd_stat(store_statfs(0x4e70fe000/0x0/0x4ffc00000, data 0x2dc6482/0x2f5f000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374439936 unmapped: 60211200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556933b71c00 session 0x556934899c20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x5569343fc800 session 0x556933583c20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374448128 unmapped: 60203008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 292 ms_handle_reset con 0x556935c4f000 session 0x55693501bc20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.056338
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1191182336 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1124073472 meta_used: 3922026 data_alloc: 251658240 data_used: 33869824
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 292 handle_osd_map epochs [293,293], i have 292, src has [1,293]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.127110481s of 10.177739143s, submitted: 163
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 374464512 unmapped: 60186624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 293 heartbeat osd_stat(store_statfs(0x4e70ff000/0x0/0x4ffc00000, data 0x2dc639b/0x2f5c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 293 ms_handle_reset con 0x556933526400 session 0x5569362ec5a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 293 handle_osd_map epochs [293,294], i have 293, src has [1,294]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368902144 unmapped: 65748992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 294 ms_handle_reset con 0x556933b6f400 session 0x5569354f4960
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 368918528 unmapped: 65732608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 294 ms_handle_reset con 0x556933b71c00 session 0x5569333e0d20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560957 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3560957 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 294 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 294 handle_osd_map epochs [295,295], i have 294, src has [1,295]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.429761887s of 10.748415947s, submitted: 82
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c76000/0x0/0x4ffc00000, data 0x124fa30/0x13e7000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111] 
** DB Stats **
Uptime(secs): 5400.1 total, 600.0 interval
Cumulative writes: 49K writes, 193K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.04 MB/s
Cumulative WAL: 49K writes, 17K syncs, 2.76 writes per sync, written: 0.19 GB, 0.04 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 2813 writes, 11K keys, 2813 commit groups, 1.0 writes per commit group, ingest: 11.66 MB, 0.02 MB/s
Interval WAL: 2813 writes, 1097 syncs, 2.56 writes per sync, written: 0.01 GB, 0.02 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933bdbc00 session 0x5569361ad2c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: mgrc ms_handle_reset ms_handle_reset con 0x556935c56c00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: mgrc reconnect Terminating session with v2:192.168.122.100:6800/3626055412
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: mgrc reconnect Starting new session with [v2:192.168.122.100:6800/3626055412,v1:192.168.122.100:6801/3626055412]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: mgrc handle_mgr_configure stats_period=5
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556935c59000 session 0x5569341e52c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x55693360f000 session 0x55693442e3c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362037248 unmapped: 72613888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3563931 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c73000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 37.841426849s of 37.856082916s, submitted: 15
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556934891400 session 0x5569333e12c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556937127400 session 0x5569362ed2c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8873000/0x0/0x4ffc00000, data 0x16514bc/0x17eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933526400 session 0x5569353d2000
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8873000/0x0/0x4ffc00000, data 0x16514f5/0x17eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933b6f400 session 0x556933585e00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933b71c00 session 0x556933460f00
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362053632 unmapped: 72597504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556934891400 session 0x5569333e1c20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362356736 unmapped: 72294400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3629019 data_alloc: 218103808 data_used: 9269248
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556935605800 session 0x5569334550e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x5569335b2000 session 0x556932ea85a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e884e000/0x0/0x4ffc00000, data 0x1675504/0x1810000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 ms_handle_reset con 0x556933526400 session 0x5569335830e0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362373120 unmapped: 72278016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362381312 unmapped: 72269824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362389504 unmapped: 72261632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 72253440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362397696 unmapped: 72253440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567992 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362405888 unmapped: 72245248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8872000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362414080 unmapped: 72237056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 58.524005890s of 59.169479370s, submitted: 66
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362422272 unmapped: 72228864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3567816 data_alloc: 218103808 data_used: 5070848
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 heartbeat osd_stat(store_statfs(0x4e8c74000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362430464 unmapped: 72220672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 72204288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362446848 unmapped: 72204288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 295 handle_osd_map epochs [295,296], i have 295, src has [1,296]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c74000/0x0/0x4ffc00000, data 0x1251493/0x13ea000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362463232 unmapped: 72187904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 296 ms_handle_reset con 0x556933b6f400 session 0x5569354fcb40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570568 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c72000/0x0/0x4ffc00000, data 0x1252f30/0x13eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3570568 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362471424 unmapped: 72179712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362479616 unmapped: 72171520 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 296 heartbeat osd_stat(store_statfs(0x4e8c72000/0x0/0x4ffc00000, data 0x1252f30/0x13eb000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 296 handle_osd_map epochs [297,297], i have 296, src has [1,297]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 14.192964554s of 14.603412628s, submitted: 105
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362496000 unmapped: 72155136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362504192 unmapped: 72146944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362512384 unmapped: 72138752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362520576 unmapped: 72130560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362528768 unmapped: 72122368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362536960 unmapped: 72114176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362545152 unmapped: 72105984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362553344 unmapped: 72097792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362561536 unmapped: 72089600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362569728 unmapped: 72081408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362577920 unmapped: 72073216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362586112 unmapped: 72065024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362594304 unmapped: 72056832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362602496 unmapped: 72048640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362610688 unmapped: 72040448 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362618880 unmapped: 72032256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362627072 unmapped: 72024064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362635264 unmapped: 72015872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3573542 data_alloc: 218103808 data_used: 5079040
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362651648 unmapped: 71999488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c6f000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362659840 unmapped: 71991296 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 104.046302795s of 104.136131287s, submitted: 16
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 ms_handle_reset con 0x556933b71c00 session 0x556933c4f4a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 ms_handle_reset con 0x556934891400 session 0x55693416cb40
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580822 data_alloc: 218103808 data_used: 10911744
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366854144 unmapped: 67796992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366862336 unmapped: 67788800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3580822 data_alloc: 218103808 data_used: 10911744
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [1])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 366878720 unmapped: 67772416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 297 handle_osd_map epochs [298,298], i have 297, src has [1,298]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 298 heartbeat osd_stat(store_statfs(0x4e8c70000/0x0/0x4ffc00000, data 0x1254993/0x13ee000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15b9f9c6), peers [0,2] op hist [0,1,1])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 298 ms_handle_reset con 0x556933526400 session 0x556932b5e000
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 11.556733131s of 11.714488029s, submitted: 44
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 362176512 unmapped: 72474624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3508745 data_alloc: 218103808 data_used: 4096000
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 298 handle_osd_map epochs [298,299], i have 298, src has [1,299]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 299 ms_handle_reset con 0x5569335b2000 session 0x556932efc960
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 299 heartbeat osd_stat(store_statfs(0x4e905b000/0x0/0x4ffc00000, data 0xa58102/0xbf2000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 299 handle_osd_map epochs [300,300], i have 299, src has [1,300]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445661 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3445661 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 300 heartbeat osd_stat(store_statfs(0x4e9858000/0x0/0x4ffc00000, data 0x259b81/0x3f5000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 300 handle_osd_map epochs [301,301], i have 300, src has [1,301]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 12.843618393s of 12.988986969s, submitted: 53
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 301 handle_osd_map epochs [301,302], i have 301, src has [1,302]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 ms_handle_reset con 0x556933b6f400 session 0x556935cb34a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357433344 unmapped: 77217792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357441536 unmapped: 77209600 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357449728 unmapped: 77201408 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357457920 unmapped: 77193216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 77185024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357466112 unmapped: 77185024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357474304 unmapped: 77176832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357482496 unmapped: 77168640 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357498880 unmapped: 77152256 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357507072 unmapped: 77144064 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357515264 unmapped: 77135872 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357523456 unmapped: 77127680 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 77119488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357531648 unmapped: 77119488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357548032 unmapped: 77103104 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357556224 unmapped: 77094912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 77078528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357572608 unmapped: 77078528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3456151 data_alloc: 218103808 data_used: 1089536
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357580800 unmapped: 77070336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 heartbeat osd_stat(store_statfs(0x4e984f000/0x0/0x4ffc00000, data 0x25d194/0x3fd000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 105.469673157s of 105.490242004s, submitted: 20
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 302 handle_osd_map epochs [303,303], i have 302, src has [1,303]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 303 ms_handle_reset con 0x556933b71c00 session 0x556935cb32c0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357564416 unmapped: 77086720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 303 handle_osd_map epochs [304,304], i have 303, src has [1,304]
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 304 ms_handle_reset con 0x556935605800 session 0x5569344ce5a0
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357588992 unmapped: 77062144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357597184 unmapped: 77053952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357605376 unmapped: 77045760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357613568 unmapped: 77037568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: osd.1 304 heartbeat osd_stat(store_statfs(0x4e8bd9000/0x0/0x4ffc00000, data 0xed08b1/0x1074000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 77029376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357621760 unmapped: 77029376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3550337 data_alloc: 218103808 data_used: 1097728
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 77012992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357638144 unmapped: 77012992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:25:59 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 29.815532684s of 29.996992111s, submitted: 26
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 304 handle_osd_map epochs [304,305], i have 304, src has [1,305]
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 305 ms_handle_reset con 0x556933526400 session 0x5569355654a0
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551884 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357654528 unmapped: 76996608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 76988416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357662720 unmapped: 76988416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3551884 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357670912 unmapped: 76980224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 305 heartbeat osd_stat(store_statfs(0x4e8bd8000/0x0/0x4ffc00000, data 0xed244f/0x1075000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 305 handle_osd_map epochs [306,306], i have 305, src has [1,306]
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.439725876s of 10.520721436s, submitted: 15
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 76955648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357695488 unmapped: 76955648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357703680 unmapped: 76947456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357711872 unmapped: 76939264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357720064 unmapped: 76931072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 76914688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 76914688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357736448 unmapped: 76914688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357744640 unmapped: 76906496 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357752832 unmapped: 76898304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357761024 unmapped: 76890112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357769216 unmapped: 76881920 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357777408 unmapped: 76873728 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357793792 unmapped: 76857344 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 76849152 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357801984 unmapped: 76849152 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357810176 unmapped: 76840960 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 76832768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 76832768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357818368 unmapped: 76832768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357826560 unmapped: 76824576 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357834752 unmapped: 76816384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 76808192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357842944 unmapped: 76808192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357851136 unmapped: 76800000 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357859328 unmapped: 76791808 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 76783616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357867520 unmapped: 76783616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357875712 unmapped: 76775424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357883904 unmapped: 76767232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357892096 unmapped: 76759040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357908480 unmapped: 76742656 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357924864 unmapped: 76726272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 76718080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357933056 unmapped: 76718080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 76709888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357941248 unmapped: 76709888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357949440 unmapped: 76701696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357957632 unmapped: 76693504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357957632 unmapped: 76693504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357965824 unmapped: 76685312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357974016 unmapped: 76677120 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357982208 unmapped: 76668928 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357982208 unmapped: 76668928 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 76660736 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 357990400 unmapped: 76660736 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358006784 unmapped: 76644352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 76636160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358014976 unmapped: 76636160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 76627968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358023168 unmapped: 76627968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358031360 unmapped: 76619776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358039552 unmapped: 76611584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 76603392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358047744 unmapped: 76603392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358055936 unmapped: 76595200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 76587008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358064128 unmapped: 76587008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358080512 unmapped: 76570624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358088704 unmapped: 76562432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358096896 unmapped: 76554240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358105088 unmapped: 76546048 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358129664 unmapped: 76521472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358137856 unmapped: 76513280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358146048 unmapped: 76505088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358162432 unmapped: 76488704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 76480512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358170624 unmapped: 76480512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358178816 unmapped: 76472320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358187008 unmapped: 76464128 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358195200 unmapped: 76455936 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358211584 unmapped: 76439552 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 76431360 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358219776 unmapped: 76431360 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358227968 unmapped: 76423168 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 76406784 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 76406784 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358244352 unmapped: 76406784 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358252544 unmapped: 76398592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358252544 unmapped: 76398592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358252544 unmapped: 76398592 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358260736 unmapped: 76390400 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 76382208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 76382208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358268928 unmapped: 76382208 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358277120 unmapped: 76374016 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358285312 unmapped: 76365824 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 76357632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358293504 unmapped: 76357632 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358301696 unmapped: 76349440 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1109] ------- DUMPING STATS -------
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: [db/db_impl/db_impl.cc:1111]
** DB Stats **
Uptime(secs): 6000.1 total, 600.0 interval
Cumulative writes: 49K writes, 195K keys, 49K commit groups, 1.0 writes per commit group, ingest: 0.19 GB, 0.03 MB/s
Cumulative WAL: 49K writes, 18K syncs, 2.75 writes per sync, written: 0.19 GB, 0.03 MB/s
Cumulative stall: 00:00:0.000 H:M:S, 0.0 percent
Interval writes: 645 writes, 1636 keys, 645 commit groups, 1.0 writes per commit group, ingest: 0.83 MB, 0.00 MB/s
Interval WAL: 645 writes, 286 syncs, 2.26 writes per sync, written: 0.00 GB, 0.00 MB/s
Interval stall: 00:00:0.000 H:M:S, 0.0 percent

** Compaction Stats [default] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
  L0      2/0    2.63 KB   0.2      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
 Sum      2/0    2.63 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   1.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [default] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
User      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.1      0.01              0.00         1    0.015       0      0       0.0       0.0

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [default] **

** Compaction Stats [m-0] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-0] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtable_slowdown, interval 0 total count
Block cache BinnedLRUCache@0x556931a3b1f0#2 capacity: 1.12 GB usage: 2.09 KB table_size: 0 occupancy: 18446744073709551615 collections: 11 last_copies: 8 last_secs: 5.1e-05 secs_since: 0
Block cache entry stats(count,size,portion): DataBlock(3,1.42 KB,0.000120534%) FilterBlock(3,0.33 KB,2.78155e-05%) IndexBlock(3,0.34 KB,2.914e-05%) Misc(1,0.00 KB,0%)

** File Read Latency Histogram By Level [m-0] **

** Compaction Stats [m-1] **
Level    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sum      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0
 Int      0/0    0.00 KB   0.0      0.0     0.0      0.0       0.0      0.0       0.0   0.0      0.0      0.0      0.00              0.00         0    0.000       0      0       0.0       0.0

** Compaction Stats [m-1] **
Priority    Files   Size     Score Read(GB)  Rn(GB) Rnp1(GB) Write(GB) Wnew(GB) Moved(GB) W-Amp Rd(MB/s) Wr(MB/s) Comp(sec) CompMergeCPU(sec) Comp(cnt) Avg(sec) KeyIn KeyDrop Rblob(GB) Wblob(GB)
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------

Blob file count: 0, total size: 0.0 GB, garbage size: 0.0 GB, space amp: 0.0

Uptime(secs): 6000.1 total, 4800.0 interval
Flush(GB): cumulative 0.000, interval 0.000
AddFile(GB): cumulative 0.000, interval 0.000
AddFile(Total Files): cumulative 0, interval 0
AddFile(L0 Files): cumulative 0, interval 0
AddFile(Keys): cumulative 0, interval 0
Cumulative compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Interval compaction: 0.00 GB write, 0.00 MB/s write, 0.00 GB read, 0.00 MB/s read, 0.0 seconds
Stalls(count): 0 level0_slowdown, 0 level0_slowdown_with_compaction, 0 level0_numfiles, 0 level0_numfiles_with_compaction, 0 stop for pending_compaction_bytes, 0 slowdown for pending_compaction_bytes, 0 memtable_compaction, 0 memtab
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 76341248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358309888 unmapped: 76341248 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 76333056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 76333056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358318080 unmapped: 76333056 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 76324864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358326272 unmapped: 76324864 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 76316672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358334464 unmapped: 76316672 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 76308480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 76308480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358342656 unmapped: 76308480 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358350848 unmapped: 76300288 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 76292096 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 76292096 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358359040 unmapped: 76292096 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 76283904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358367232 unmapped: 76283904 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 76275712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358375424 unmapped: 76275712 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358383616 unmapped: 76267520 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358391808 unmapped: 76259328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358391808 unmapped: 76259328 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358400000 unmapped: 76251136 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 76242944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 76242944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358408192 unmapped: 76242944 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358416384 unmapped: 76234752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358424576 unmapped: 76226560 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358432768 unmapped: 76218368 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 76210176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358440960 unmapped: 76210176 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358449152 unmapped: 76201984 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 76193792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 76193792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358457344 unmapped: 76193792 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358481920 unmapped: 76169216 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 76161024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3554858 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 76161024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 310.758331299s of 310.772949219s, submitted: 14
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358490112 unmapped: 76161024 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd5000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 358498304 unmapped: 76152832 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359579648 unmapped: 75071488 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359604224 unmapped: 75046912 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359612416 unmapped: 75038720 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 75030528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 75030528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359620608 unmapped: 75030528 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 75022336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 75022336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359628800 unmapped: 75022336 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359653376 unmapped: 74997760 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 74989568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-mon[74295]: mon.compute-0@0(leader) e1 handle_command mon_command({"prefix": "mgr services", "format": "json-pretty"} v 0) v1
Oct  7 11:26:00 np0005473739 ceph-mon[74295]: log_channel(audit) log [DBG] : from='client.? 192.168.122.100:0/2165553733' entity='client.admin' cmd=[{"prefix": "mgr services", "format": "json-pretty"}]: dispatch
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 74940416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359710720 unmapped: 74940416 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 74907648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 74907648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359743488 unmapped: 74907648 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 74899456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359751680 unmapped: 74899456 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3553978 data_alloc: 218103808 data_used: 1097728
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 74866688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 heartbeat osd_stat(store_statfs(0x4e8bd6000/0x0/0x4ffc00000, data 0xed3eb2/0x1078000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359784448 unmapped: 74866688 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 97.285827637s of 97.792411804s, submitted: 90
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 306 handle_osd_map epochs [307,307], i have 306, src has [1,307]
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3558328 data_alloc: 218103808 data_used: 1105920
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 74850304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 307 ms_handle_reset con 0x5569335b2000 session 0x556933454f00
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359800832 unmapped: 74850304 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359809024 unmapped: 74842112 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 307 handle_osd_map epochs [308,308], i have 307, src has [1,308]
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 73760768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 308 ms_handle_reset con 0x556933b6f400 session 0x55693489e000
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9840000/0x0/0x4ffc00000, data 0x267631/0x40d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 73760768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3476702 data_alloc: 218103808 data_used: 1114112
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360890368 unmapped: 73760768 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 308 heartbeat osd_stat(store_statfs(0x4e9840000/0x0/0x4ffc00000, data 0x267631/0x40d000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 308 handle_osd_map epochs [309,309], i have 308, src has [1,309]
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360906752 unmapped: 73744384 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 73736192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 73736192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 309 heartbeat osd_stat(store_statfs(0x4e983c000/0x0/0x4ffc00000, data 0x2690e3/0x412000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 309 handle_osd_map epochs [309,310], i have 309, src has [1,310]
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360914944 unmapped: 73736192 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 9.069248199s of 10.059258461s, submitted: 86
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 310 ms_handle_reset con 0x556933b71c00 session 0x55693489e960
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488669 data_alloc: 218103808 data_used: 1126400
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360931328 unmapped: 73719808 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e9837000/0x0/0x4ffc00000, data 0x26ac9f/0x416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 310 heartbeat osd_stat(store_statfs(0x4e9837000/0x0/0x4ffc00000, data 0x26ac9f/0x416000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360939520 unmapped: 73711616 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3488669 data_alloc: 218103808 data_used: 1126400
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360947712 unmapped: 73703424 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 310 handle_osd_map epochs [311,311], i have 310, src has [1,311]
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360955904 unmapped: 73695232 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360964096 unmapped: 73687040 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360972288 unmapped: 73678848 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360988672 unmapped: 73662464 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360996864 unmapped: 73654272 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361005056 unmapped: 73646080 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361013248 unmapped: 73637888 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 73629696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361021440 unmapped: 73629696 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491003 data_alloc: 218103808 data_used: 1126400
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 heartbeat osd_stat(store_statfs(0x4e9834000/0x0/0x4ffc00000, data 0x26c702/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361029632 unmapped: 73621504 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 73613312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3491163 data_alloc: 218103808 data_used: 1130496
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 73613312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 50.736141205s of 51.071052551s, submitted: 14
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361037824 unmapped: 73613312 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 311 handle_osd_map epochs [312,312], i have 311, src has [1,312]
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9832000/0x0/0x4ffc00000, data 0x26e2b0/0x41b000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 312 ms_handle_reset con 0x55693288bc00 session 0x556933455680
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492334 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9833000/0x0/0x4ffc00000, data 0x26e27d/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3492334 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 312 heartbeat osd_stat(store_statfs(0x4e9833000/0x0/0x4ffc00000, data 0x26e27d/0x419000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 312 handle_osd_map epochs [313,313], i have 312, src has [1,313]
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 312 handle_osd_map epochs [313,313], i have 313, src has [1,313]
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore(/var/lib/ceph/osd/ceph-1) _kv_sync_thread utilization: idle 10.388833046s of 11.278204918s, submitted: 27
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 73580544 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 73580544 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361070592 unmapped: 73580544 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361078784 unmapped: 73572352 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361086976 unmapped: 73564160 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361095168 unmapped: 73555968 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361103360 unmapped: 73547776 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361111552 unmapped: 73539584 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361119744 unmapped: 73531392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361119744 unmapped: 73531392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361119744 unmapped: 73531392 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361127936 unmapped: 73523200 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361136128 unmapped: 73515008 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361144320 unmapped: 73506816 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361152512 unmapped: 73498624 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361160704 unmapped: 73490432 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361168896 unmapped: 73482240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361168896 unmapped: 73482240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361168896 unmapped: 73482240 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361185280 unmapped: 73465856 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361193472 unmapped: 73457664 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361201664 unmapped: 73449472 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 ms_handle_reset con 0x556932eba800 session 0x55693501a3c0
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361209856 unmapped: 73441280 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 ms_handle_reset con 0x55693560b400 session 0x556935cb3c20
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 ms_handle_reset con 0x556935c59000 session 0x556933583a40
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361218048 unmapped: 73433088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361218048 unmapped: 73433088 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361226240 unmapped: 73424896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361226240 unmapped: 73424896 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361234432 unmapped: 73416704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361234432 unmapped: 73416704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361234432 unmapped: 73416704 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'config diff' '{prefix=config diff}'
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'config diff' '{prefix=config diff}' result is 0 bytes
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'config show' '{prefix=config show}'
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'config show' '{prefix=config show}' result is 0 bytes
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 361250816 unmapped: 73400320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'counter dump' '{prefix=counter dump}'
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'counter dump' '{prefix=counter dump}' result is 0 bytes
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'counter schema' '{prefix=counter schema}'
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'counter schema' '{prefix=counter schema}' result is 0 bytes
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360464384 unmapped: 74186752 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360218624 unmapped: 74432512 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'log dump' '{prefix=log dump}'
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'log dump' '{prefix=log dump}' result is 0 bytes
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 360226816 unmapped: 74424320 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'perf dump' '{prefix=perf dump}'
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'perf dump' '{prefix=perf dump}' result is 0 bytes
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'perf histogram dump' '{prefix=perf histogram dump}'
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'perf histogram dump' '{prefix=perf histogram dump}' result is 0 bytes
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'perf schema' '{prefix=perf schema}'
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: do_command 'perf schema' '{prefix=perf schema}' result is 0 bytes
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359636992 unmapped: 75014144 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359645184 unmapped: 75005952 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 74989568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359661568 unmapped: 74989568 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359669760 unmapped: 74981376 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:26:00.126 161536 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Oct  7 11:26:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:26:00.127 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Oct  7 11:26:00 np0005473739 ovn_metadata_agent[161531]: 2025-10-07 15:26:00.127 161536 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359677952 unmapped: 74973184 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359686144 unmapped: 74964992 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359694336 unmapped: 74956800 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359702528 unmapped: 74948608 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359718912 unmapped: 74932224 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359727104 unmapped: 74924032 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359735296 unmapped: 74915840 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359759872 unmapped: 74891264 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359768064 unmapped: 74883072 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.285714
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: rocksdb: commit_cache_size High Pri Pool Ratio set to 0.0555556
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: bluestore.MempoolThread(0x556931b19b60) _resize_shards cache_size: 2845415832 kv_alloc: 1207959552 kv_used: 2144 kv_onode_alloc: 234881024 kv_onode_used: 464 meta_alloc: 1140850688 meta_used: 3495132 data_alloc: 218103808 data_used: 1134592
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: osd.1 313 heartbeat osd_stat(store_statfs(0x4e9831000/0x0/0x4ffc00000, data 0x26fce0/0x41c000, compress 0x0/0x0/0x0, omap 0x63a, meta 0x15faf9c6), peers [0,2] op hist [])
Oct  7 11:26:00 np0005473739 ceph-osd[89062]: prioritycache tune_memory target: 4294967296 mapped: 359776256 unmapped: 74874880 heap: 434651136 old mem: 2845415832 new mem: 2845415832
